
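"""
Scrapers for Spotify tracks and albums.

Despite living alongside the HTML scrapers, these classes do not parse any
pages: they call the Spotify Web API with an app-level (client-credentials)
token and map the responses onto the local Song and Album models.
"""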
import requests
import re
import time
from common.models import SourceSiteEnum
from music.models import Album, Song
from music.forms import AlbumForm, SongForm
from django.conf import settings
from common.scraper import *
from threading import Thread
from django.core.exceptions import ObjectDoesNotExist
from django.utils import timezone


# App-level OAuth token shared by both scrapers, cached together with its
# expiry time and refreshed on demand by invoke_spotify_token() below.
spotify_token = None
spotify_token_expire_time = time.time()

class SpotifyTrackScraper(AbstractScraper):
    site_name = SourceSiteEnum.SPOTIFY.value
    host = 'https://open.spotify.com/track/'
    data_class = Song
    form_class = SongForm

    regex = re.compile(r"(?<=https://open\.spotify\.com/track/)[a-zA-Z0-9]+")

    def scrape(self, url):
        """
        Request from API, not really scraping
        """
        global spotify_token, spotify_token_expire_time

        if spotify_token is None or is_spotify_token_expired():
            invoke_spotify_token()
        effective_url = self.get_effective_url(url)
        if effective_url is None:
            raise ValueError("invalid Spotify track URL")

        api_url = self.get_api_url(effective_url)
        headers = {
            'Authorization': f"Bearer {spotify_token}"
        }
        r = requests.get(api_url, headers=headers)
        res_data = r.json()

        artist = []
        for artist_dict in res_data['artists']:
            artist.append(artist_dict['name'])
        if not artist:
            artist = None

        title = res_data['name']

        release_date = parse_date(res_data['album']['release_date'])

        duration = res_data['duration_ms']

        if res_data['external_ids'].get('isrc'):
            isrc = res_data['external_ids']['isrc']
        else:
            isrc = None

        raw_img, ext = self.download_image(res_data['album']['images'][0]['url'], url)

        data = {
            'title': title,
            'artist': artist,
            'genre': None,
            'release_date': release_date,
            'duration': duration,
            'isrc': isrc,
            'album': None,
            'brief': None,
            'other_info': None,
            'source_site': self.site_name,
            'source_url': effective_url,
        }
        self.raw_data, self.raw_img, self.img_ext = data, raw_img, ext
        return data, raw_img

    @classmethod
    def get_effective_url(cls, raw_url):
        code = cls.regex.findall(raw_url)
        if code:
            return f"https://open.spotify.com/track/{code[0]}"
        else:
            return None

    @classmethod
    def get_api_url(cls, url):
        return "https://api.spotify.com/v1/tracks/" + cls.regex.findall(url)[0]

class SpotifyAlbumScraper(AbstractScraper):
    site_name = SourceSiteEnum.SPOTIFY.value
    host = 'https://open.spotify.com/album/'
    data_class = Album
    form_class = AlbumForm

    regex = re.compile(r"(?<=https://open\.spotify\.com/album/)[a-zA-Z0-9]+")

    def scrape(self, url):
        """
        Request from API, not really scraping
        """
        global spotify_token, spotify_token_expire_time

        if spotify_token is None or is_spotify_token_expired():
            invoke_spotify_token()
        effective_url = self.get_effective_url(url)
        if effective_url is None:
            raise ValueError("invalid Spotify album URL")

        api_url = self.get_api_url(effective_url)
        headers = {
            'Authorization': f"Bearer {spotify_token}"
        }
        r = requests.get(api_url, headers=headers)
        res_data = r.json()

        artist = []
        for artist_dict in res_data['artists']:
            artist.append(artist_dict['name'])

        title = res_data['name']

        genre = ', '.join(res_data['genres'])

        company = []
        for com in res_data['copyrights']:
            company.append(com['text'])

        duration = 0
        track_list = []
        track_urls = []
        tracks = res_data['tracks']['items']
        # prefix each track number with its disc number when the album spans more than one disc
        multi_disc = tracks and tracks[-1]['disc_number'] > 1
        for track in tracks:
            track_urls.append(track['external_urls']['spotify'])
            duration += track['duration_ms']
            if multi_disc:
                track_list.append(str(track['disc_number']) + '-' + str(track['track_number']) + '. ' + track['name'])
            else:
                track_list.append(str(track['track_number']) + '. ' + track['name'])
        track_list = '\n'.join(track_list)

        release_date = parse_date(res_data['release_date'])

        other_info = {}
        if res_data['external_ids'].get('upc'):
            # bar code
            other_info['UPC'] = res_data['external_ids']['upc']

        raw_img, ext = self.download_image(res_data['images'][0]['url'], url)

        data = {
            'title': title,
            'artist': artist,
            'genre': genre,
            'track_list': track_list,
            'release_date': release_date,
            'duration': duration,
            'company': company,
            'brief': None,
            'other_info': other_info,
            'source_site': self.site_name,
            'source_url': effective_url,
        }

        # remember the track URLs so the album's tracks can be added later
        self.track_urls = track_urls

        self.raw_data, self.raw_img, self.img_ext = data, raw_img, ext
        return data, raw_img

    @classmethod
    def get_effective_url(cls, raw_url):
        code = cls.regex.findall(raw_url)
        if code:
            return f"https://open.spotify.com/album/{code[0]}"
        else:
            return None

    # @classmethod
    # def save(cls, request_user):
    #     form = super().save(request_user)
    #     task = Thread(
    #         target=cls.add_tracks,
    #         args=(form.instance, request_user),
    #         daemon=True
    #     )
    #     task.start()
    #     return form

    @classmethod
    def get_api_url(cls, url):
        return "https://api.spotify.com/v1/albums/" + cls.regex.findall(url)[0]

    @classmethod
    def add_tracks(cls, album: Album, request_user):
        to_be_updated_tracks = []
        for track_url in cls.track_urls:
            track = cls.get_track_or_none(track_url)
            # It seems that firing too many requests at once gets rate-limited
            # by Spotify, so each missing track is scraped in its own thread
            # and joined immediately, i.e. one at a time.
            if track is None:
                task = Thread(
                    target=cls.scrape_and_save_track,
                    args=(track_url, album, request_user),
                    daemon=True
                )
                task.start()
                task.join()
            else:
                to_be_updated_tracks.append(track)
        cls.bulk_update_track_album(to_be_updated_tracks, album, request_user)

    @classmethod
    def get_track_or_none(cls, track_url: str):
        try:
            instance = Song.objects.get(source_url=track_url)
            return instance
        except ObjectDoesNotExist:
            return None

    @classmethod
    def scrape_and_save_track(cls, url: str, album: Album, request_user):
        # scrape() is an instance method, so a scraper instance is needed here
        scraper = SpotifyTrackScraper()
        scraper.scrape(url)
        scraper.raw_data['album'] = album
        scraper.save(request_user)

    @classmethod
    def bulk_update_track_album(cls, tracks, album, request_user):
        for track in tracks:
            track.last_editor = request_user
            track.edited_time = timezone.now()
            track.album = album
        Song.objects.bulk_update(tracks, [
            'last_editor',
            'edited_time',
            'album'
        ])


def get_spotify_token():
    if spotify_token is None or is_spotify_token_expired():
        invoke_spotify_token()
    return spotify_token


def is_spotify_token_expired():
    return spotify_token_expire_time <= time.time()


def invoke_spotify_token():
    """Fetch a new client-credentials token from the Spotify accounts service."""
    global spotify_token, spotify_token_expire_time
    r = requests.post(
        "https://accounts.spotify.com/api/token",
        data={
            "grant_type": "client_credentials"
        },
        headers={
            "Authorization": f"Basic {settings.SPOTIFY_CREDENTIAL}"
        }
    )
    if r.status_code == 401:
        # token rejected, try one more time; this may be caused by external
        # operations, e.g. debugging with an HTTP client
        r = requests.post(
            "https://accounts.spotify.com/api/token",
            data={
                "grant_type": "client_credentials"
            },
            headers={
                "Authorization": f"Basic {settings.SPOTIFY_CREDENTIAL}"
            }
        )
    elif r.status_code != 200:
        raise Exception(f"Request to Spotify API failed. Reason: {r.reason}")
    data = r.json()
    # subtract 2 seconds to allow for execution time
    spotify_token_expire_time = int(data['expires_in']) + time.time() - 2
    spotify_token = data['access_token']
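

# Minimal usage sketch (hypothetical, for illustration only; actual call sites
# live elsewhere in the project, and settings.SPOTIFY_CREDENTIAL is assumed to
# hold the base64-encoded "client_id:client_secret" pair used above):
#
#   scraper = SpotifyAlbumScraper()
#   data, raw_img = scraper.scrape("https://open.spotify.com/album/<album_id>")
#   token = get_spotify_token()  # cached app token, refreshed when expired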