
import requests
import re
from common.models import SourceSiteEnum
from movies.models import Movie
from movies.forms import MovieForm
from django.conf import settings
from common.scraper import *


class TmdbMovieScraper(AbstractScraper):
    site_name = SourceSiteEnum.TMDB.value
    host = 'https://www.themoviedb.org/'
    data_class = Movie
    form_class = MovieForm
    regex = re.compile(r"https://www\.themoviedb\.org/(movie|tv)/([a-zA-Z0-9]+)")
    # http://api.themoviedb.org/3/genre/movie/list?api_key=&language=zh
    # http://api.themoviedb.org/3/genre/tv/list?api_key=&language=zh
    genre_map = {
        'Sci-Fi & Fantasy': 'Sci-Fi',
        'War & Politics': 'War',
        '儿童': 'Kids',
        '冒险': 'Adventure',
        '剧情': 'Drama',
        '动作': 'Action',
        '动作冒险': 'Action',
        '动画': 'Animation',
        '历史': 'History',
        '喜剧': 'Comedy',
        '奇幻': 'Fantasy',
        '家庭': 'Family',
        '恐怖': 'Horror',
        '悬疑': 'Mystery',
        '惊悚': 'Thriller',
        '战争': 'War',
        '新闻': 'News',
        '爱情': 'Romance',
        '犯罪': 'Crime',
        '电视电影': 'TV Movie',
        '真人秀': 'Reality-TV',
        '科幻': 'Sci-Fi',
        '纪录': 'Documentary',
        '肥皂剧': 'Soap',
        '脱口秀': 'Talk-Show',
        '西部': 'Western',
        '音乐': 'Music',
    }

    def scrape_imdb(self, imdb_code):
        api_url = f"https://api.themoviedb.org/3/find/{imdb_code}?api_key={settings.TMDB_API3_KEY}&language=zh-CN&external_source=imdb_id"
        r = requests.get(api_url)
        res_data = r.json()
        if 'movie_results' in res_data and len(res_data['movie_results']) > 0:
            url = f"https://www.themoviedb.org/movie/{res_data['movie_results'][0]['id']}"
        elif 'tv_results' in res_data and len(res_data['tv_results']) > 0:
            url = f"https://www.themoviedb.org/tv/{res_data['tv_results'][0]['id']}"
        else:
            raise ValueError("Cannot find IMDb ID in TMDB")
        return self.scrape(url)

    def scrape(self, url):
        m = self.regex.match(url)
        if m:
            effective_url = m[0]
        else:
            raise ValueError("not a valid TMDB URL")
        is_series = m[1] == 'tv'
        id = m[2]
        if is_series:
            api_url = f"https://api.themoviedb.org/3/tv/{id}?api_key={settings.TMDB_API3_KEY}&language=zh-CN&append_to_response=external_ids,credits"
        else:
            api_url = f"https://api.themoviedb.org/3/movie/{id}?api_key={settings.TMDB_API3_KEY}&language=zh-CN&append_to_response=external_ids,credits"
        r = requests.get(api_url)
        res_data = r.json()

        if is_series:
            title = res_data['name']
            orig_title = res_data['original_name']
            year = int(res_data['first_air_date'].split('-')[0]) if res_data['first_air_date'] else None
            imdb_code = res_data['external_ids']['imdb_id']
            showtime = [{res_data['first_air_date']: "首播日期"}] if res_data['first_air_date'] else None
            duration = None
        else:
            title = res_data['title']
            orig_title = res_data['original_title']
            year = int(res_data['release_date'].split('-')[0]) if res_data['release_date'] else None
            showtime = [{res_data['release_date']: "发布日期"}] if res_data['release_date'] else None
            imdb_code = res_data['imdb_id']
            duration = res_data['runtime'] if res_data['runtime'] else None  # in minutes

        genre = list(map(lambda x: self.genre_map[x['name']] if x['name'] in self.genre_map else 'Other', res_data['genres']))
        language = list(map(lambda x: x['name'], res_data['spoken_languages']))
        brief = res_data['overview']

        if is_series:
            director = list(map(lambda x: x['name'], res_data['created_by']))
        else:
            director = list(map(lambda x: x['name'], filter(lambda c: c['job'] == 'Director', res_data['credits']['crew'])))
        playwright = list(map(lambda x: x['name'], filter(lambda c: c['job'] == 'Screenplay', res_data['credits']['crew'])))
        actor = list(map(lambda x: x['name'], res_data['credits']['cast']))
        area = []

        other_info = {}
        other_info['TMDB评分'] = res_data['vote_average']
        # other_info['分级'] = res_data['contentRating']
        # other_info['Metacritic评分'] = res_data['metacriticRating']
        # other_info['奖项'] = res_data['awards']
        other_info['TMDB_ID'] = id
        if is_series:
            other_info['Seasons'] = res_data['number_of_seasons']
            other_info['Episodes'] = res_data['number_of_episodes']

        img_url = ('https://image.tmdb.org/t/p/original/' + res_data['poster_path']) if res_data['poster_path'] is not None else None
        # TODO: use GET /configuration to get base url
        raw_img, ext = self.download_image(img_url, url)

        data = {
            'title': title,
            'orig_title': orig_title,
            'other_title': None,
            'imdb_code': imdb_code,
            'director': director,
            'playwright': playwright,
            'actor': actor,
            'genre': genre,
            'showtime': showtime,
            'site': None,
            'area': area,
            'language': language,
            'year': year,
            'duration': duration,
            'season': None,
            'episodes': None,
            'single_episode_length': None,
            'brief': brief,
            'is_series': is_series,
            'other_info': other_info,
            'source_site': self.site_name,
            'source_url': effective_url,
        }
        self.raw_data, self.raw_img, self.img_ext = data, raw_img, ext
        return data, raw_img

    @classmethod
    def get_effective_url(cls, raw_url):
        m = cls.regex.match(raw_url)
        if m:
            return m[0]
        else:
            return None
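
# Usage sketch (illustrative, not part of the scraper itself): how this class
# is typically driven, assuming settings.TMDB_API3_KEY is configured and
# AbstractScraper provides download_image() as used above. Actual call sites
# in the project may differ.
#
#   scraper = TmdbMovieScraper()
#   url = TmdbMovieScraper.get_effective_url('https://www.themoviedb.org/movie/603')
#   if url:
#       data, raw_img = scraper.scrape(url)  # field dict + raw cover image bytes
#   # or resolve an IMDb id to its TMDB entry first:
#   data, raw_img = scraper.scrape_imdb('tt0133093')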