from urllib.parse import quote_plus
from enum import Enum
import logging

import requests
from lxml import html
from django.conf import settings

from common.models import SourceSiteEnum
from common.scrapers.goodreads import GoodreadsScraper
from common.scrapers.spotify import get_spotify_token

SEARCH_PAGE_SIZE = 5  # not all APIs support page size

logger = logging.getLogger(__name__)

class Category(Enum):
    # enum values double as the human-readable category labels shown in the UI
    Book = '书籍'
    Movie = '电影'
    Music = '音乐'
    Game = '游戏'
    TV = '剧集'

class SearchResultItem:
    def __init__(self, category, source_site, source_url, title, subtitle, brief, cover_url):
        self.category = category
        self.source_site = source_site
        self.source_url = source_url
        self.title = title
        self.subtitle = subtitle
        self.brief = brief
        self.cover_url = cover_url

    @property
    def verbose_category_name(self):
        return self.category.value

    @property
    def link(self):
        return f"/search?q={quote_plus(self.source_url)}"

    @property
    def scraped(self):
        return False

class ProxiedRequest:
    @classmethod
    def get(cls, url):
        u = f'http://api.scraperapi.com?api_key={settings.SCRAPERAPI_KEY}&url={quote_plus(url)}'
        return requests.get(u, timeout=10)

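# Usage sketch for ProxiedRequest (illustrative only; assumes SCRAPERAPI_KEY is set
# in Django settings and the target URL is reachable through the proxy):
#
#   resp = ProxiedRequest.get('https://book.douban.com/subject/1000000/')
#   if resp.ok:
#       page = html.fromstring(resp.content.decode('utf-8'))
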
class Goodreads:
    @classmethod
    def search(cls, q, page=1):
        results = []
        try:
            search_url = f'https://www.goodreads.com/search?page={page}&q={quote_plus(q)}'
            r = requests.get(search_url)
            if r.url.startswith('https://www.goodreads.com/book/show/'):
                # Goodreads will 302 to the book page if only one result matches ISBN
                data, img = GoodreadsScraper.scrape(r.url, r)
                subtitle = f"{data['pub_year']} {', '.join(data['author'])} {', '.join(data['translator'] or [])}"
                results.append(SearchResultItem(Category.Book, SourceSiteEnum.GOODREADS,
                                                data['source_url'], data['title'], subtitle,
                                                data['brief'], data['cover_url']))
            else:
                h = html.fromstring(r.content.decode('utf-8'))
                for c in h.xpath('//tr[@itemtype="http://schema.org/Book"]'):
                    el_cover = c.xpath('.//img[@class="bookCover"]/@src')
                    cover = el_cover[0] if el_cover else None
                    el_title = c.xpath('.//a[@class="bookTitle"]//text()')
                    title = ''.join(el_title).strip() if el_title else None
                    el_url = c.xpath('.//a[@class="bookTitle"]/@href')
                    url = 'https://www.goodreads.com' + el_url[0] if el_url else None
                    el_authors = c.xpath('.//a[@class="authorName"]//text()')
                    subtitle = ', '.join(el_authors) if el_authors else None
                    results.append(SearchResultItem(
                        Category.Book, SourceSiteEnum.GOODREADS, url, title, subtitle, '', cover))
        except Exception as e:
            logger.error(f"Goodreads search '{q}' error: {e}")
        return results

class GoogleBooks:
    @classmethod
    def search(cls, q, page=1):
        results = []
        try:
            api_url = f'https://www.googleapis.com/books/v1/volumes?country=us&q={quote_plus(q)}&startIndex={SEARCH_PAGE_SIZE * (page - 1)}&maxResults={SEARCH_PAGE_SIZE}&maxAllowedMaturityRating=MATURE'
            j = requests.get(api_url).json()
            if 'items' in j:
                for b in j['items']:
                    if 'title' not in b['volumeInfo']:
                        continue
                    title = b['volumeInfo']['title']
                    subtitle = ''
                    if 'publishedDate' in b['volumeInfo']:
                        subtitle += b['volumeInfo']['publishedDate'] + ' '
                    if 'authors' in b['volumeInfo']:
                        subtitle += ', '.join(b['volumeInfo']['authors'])
                    if 'description' in b['volumeInfo']:
                        brief = b['volumeInfo']['description']
                    elif 'searchInfo' in b and 'textSnippet' in b['searchInfo']:
                        # the search snippet lives under searchInfo, not volumeInfo
                        brief = b['searchInfo']['textSnippet']
                    else:
                        brief = ''
                    category = Category.Book
                    # b['volumeInfo']['infoLink'].replace('http:', 'https:')
                    url = 'https://books.google.com/books?id=' + b['id']
                    cover = b['volumeInfo']['imageLinks']['thumbnail'] if 'imageLinks' in b['volumeInfo'] else None
                    results.append(SearchResultItem(
                        category, SourceSiteEnum.GOOGLEBOOKS, url, title, subtitle, brief, cover))
        except Exception as e:
            logger.error(f"GoogleBooks search '{q}' error: {e}")
        return results

class TheMovieDatabase:
    @classmethod
    def search(cls, q, page=1):
        results = []
        try:
            api_url = f'https://api.themoviedb.org/3/search/multi?query={quote_plus(q)}&page={page}&api_key={settings.TMDB_API3_KEY}&language=zh-CN&include_adult=true'
            j = requests.get(api_url).json()
            for m in j['results']:
                if m['media_type'] in ['tv', 'movie']:
                    url = f"https://www.themoviedb.org/{m['media_type']}/{m['id']}"
                    if m['media_type'] == 'tv':
                        cat = Category.TV
                        title = m['name']
                        subtitle = f"{m.get('first_air_date')} {m.get('original_name')}"
                    else:
                        cat = Category.Movie
                        title = m['title']
                        # movie results carry original_title rather than original_name
                        subtitle = f"{m.get('release_date')} {m.get('original_title')}"
                    cover = f"https://image.tmdb.org/t/p/w500/{m.get('poster_path')}" if m.get('poster_path') else None
                    results.append(SearchResultItem(
                        cat, SourceSiteEnum.TMDB, url, title, subtitle, m.get('overview'), cover))
        except Exception as e:
            logger.error(f"TMDb search '{q}' error: {e}")
        return results

class Spotify:
    @classmethod
    def search(cls, q, page=1):
        results = []
        try:
            # offset is zero-based, so page 1 starts at offset 0
            api_url = f"https://api.spotify.com/v1/search?q={quote_plus(q)}&type=album&limit={SEARCH_PAGE_SIZE}&offset={(page - 1) * SEARCH_PAGE_SIZE}"
            headers = {
                'Authorization': f"Bearer {get_spotify_token()}"
            }
            j = requests.get(api_url, headers=headers).json()
            for a in j['albums']['items']:
                title = a['name']
                subtitle = a['release_date']
                for artist in a['artists']:
                    subtitle += ' ' + artist['name']
                url = a['external_urls']['spotify']
                cover = a['images'][0]['url']
                results.append(SearchResultItem(
                    Category.Music, SourceSiteEnum.SPOTIFY, url, title, subtitle, '', cover))
        except Exception as e:
            logger.error(f"Spotify search '{q}' error: {e}")
        return results

class Bandcamp:
    @classmethod
    def search(cls, q, page=1):
        results = []
        try:
            search_url = f'https://bandcamp.com/search?from=results&item_type=a&page={page}&q={quote_plus(q)}'
            r = requests.get(search_url)
            h = html.fromstring(r.content.decode('utf-8'))
            for c in h.xpath('//li[@class="searchresult data-search"]'):
                el_cover = c.xpath('.//div[@class="art"]/img/@src')
                cover = el_cover[0] if el_cover else None
                el_title = c.xpath('.//div[@class="heading"]//text()')
                title = ''.join(el_title).strip() if el_title else None
                el_url = c.xpath('.//div[@class="itemurl"]/a/@href')
                url = el_url[0] if el_url else None
                el_authors = c.xpath('.//div[@class="subhead"]//text()')
                subtitle = ', '.join(el_authors) if el_authors else None
                results.append(SearchResultItem(
                    Category.Music, SourceSiteEnum.BANDCAMP, url, title, subtitle, '', cover))
        except Exception as e:
            logger.error(f"Bandcamp search '{q}' error: {e}")
        return results

class ExternalSources:
    @classmethod
    def search(cls, c, q, page=1):
        if not q:
            return []
        results = []
        if not c:
            c = 'all'
        if c in ('all', 'movie'):
            results.extend(TheMovieDatabase.search(q, page))
        if c in ('all', 'book'):
            results.extend(GoogleBooks.search(q, page))
            results.extend(Goodreads.search(q, page))
        if c in ('all', 'music'):
            results.extend(Spotify.search(q, page))
            results.extend(Bandcamp.search(q, page))
        return results
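
# Usage sketch (illustrative only, assuming Django settings provide the API keys
# referenced above; the module path in the import is an assumption):
#
#   from common.searchers import ExternalSources
#   for item in ExternalSources.search('book', 'Dune', page=1):
#       print(item.verbose_category_name, item.title, item.link)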