
import requests
import re
import filetype
from lxml import html
from common.models import SourceSiteEnum
from movies.models import Movie, MovieGenreEnum
from movies.forms import MovieForm
from books.models import Book
from books.forms import BookForm
from music.models import Album, Song
from music.forms import AlbumForm, SongForm
from games.models import Game
from games.forms import GameForm
from django.conf import settings
from PIL import Image
from io import BytesIO
from common.scraper import *


class GoodreadsScraper(AbstractScraper):
    site_name = SourceSiteEnum.GOODREADS.value
    host = "www.goodreads.com"
    data_class = Book
    form_class = BookForm
    regex = re.compile(r"https://www\.goodreads\.com/book/show/\d+")

    @classmethod
    def get_effective_url(cls, raw_url):
        u = re.match(r".+/book/show/(\d+)", raw_url)
        if not u:
            u = re.match(r".+book/(\d+)", raw_url)
        return "https://www.goodreads.com/book/show/" + u[1] if u else None
    def scrape(self, url, response=None):
        """
        This is the scraping portal
        """
        if response is not None:
            content = html.fromstring(response.content.decode('utf-8'))
        else:
            headers = None  # DEFAULT_REQUEST_HEADERS.copy()
            content = self.download_page(url, headers)

        try:
            title = content.xpath("//h1[@id='bookTitle']/text()")[0].strip()
        except IndexError:
            raise ValueError("given url contains no book info")

        subtitle = None

        orig_title_elem = content.xpath("//div[@id='bookDataBox']//div[text()='Original Title']/following-sibling::div/text()")
        orig_title = orig_title_elem[0].strip() if orig_title_elem else None

        language_elem = content.xpath('//div[@itemprop="inLanguage"]/text()')
        language = language_elem[0].strip() if language_elem else None

        pub_house_elem = content.xpath("//div[contains(text(), 'Published') and @class='row']/text()")
        try:
            months = ['January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October', 'November', 'December']
            r = re.compile('.*Published.*(' + '|'.join(months) + ').*(\\d\\d\\d\\d).+by\\s*(.+)\\s*', re.DOTALL)
            pub = r.match(pub_house_elem[0])
            pub_year = pub[2]
            pub_month = months.index(pub[1]) + 1
            pub_house = pub[3].strip()
        except Exception:
            pub_year = None
            pub_month = None
            pub_house = None
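        # For illustration (comment added, not in the original file): a hypothetical
        # page fragment such as "\n Published\n April 1st 2011\n by Tor Books\n"
        # would be parsed by the block above into pub_year='2011', pub_month=4,
        # pub_house='Tor Books'; anything the regex cannot match falls back to None.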
        pub_house_elem = content.xpath("//nobr[contains(text(), 'first published')]/text()")
        try:
            pub = re.match(r'.*first published\s+(.+\d\d\d\d).*', pub_house_elem[0], re.DOTALL)
            first_pub = pub[1]
        except Exception:
            first_pub = None

        binding_elem = content.xpath('//span[@itemprop="bookFormat"]/text()')
        binding = binding_elem[0].strip() if binding_elem else None

        pages_elem = content.xpath('//span[@itemprop="numberOfPages"]/text()')
        pages = pages_elem[0].strip() if pages_elem else None
        if pages is not None:
            pages = int(RE_NUMBERS.findall(pages)[0]) if RE_NUMBERS.findall(pages) else None

        isbn_elem = content.xpath('//span[@itemprop="isbn"]/text()')
        if not isbn_elem:
            isbn_elem = content.xpath('//div[@itemprop="isbn"]/text()')  # this is likely ASIN
        isbn = isbn_elem[0].strip() if isbn_elem else None

        brief_elem = content.xpath('//div[@id="description"]/span[@style="display:none"]/text()')
        if brief_elem:
            brief = '\n'.join(p.strip() for p in brief_elem)
        else:
            brief_elem = content.xpath('//div[@id="description"]/span/text()')
            brief = '\n'.join(p.strip() for p in brief_elem) if brief_elem else None

        genre = content.xpath('//div[@class="bigBoxBody"]/div/div/div/a/text()')
        genre = genre[0] if genre else None
        book_title = re.sub('\n', '', content.xpath('//h1[@id="bookTitle"]/text()')[0]).strip()
        author = content.xpath('//a[@class="authorName"]/span/text()')[0]
        contents = None

        img_url_elem = content.xpath("//img[@id='coverImage']/@src")
        img_url = img_url_elem[0].strip() if img_url_elem else None
        raw_img, ext = self.download_image(img_url, url)

        # author links that carry no role span; roles such as '(Translator)' are handled separately below
        authors_elem = content.xpath("//a[@class='authorName'][not(../span[@class='authorName greyText smallText role'])]/span/text()")
        if authors_elem:
            authors = []
            for author in authors_elem:
                authors.append(RE_WHITESPACES.sub(' ', author.strip()))
        else:
            authors = None

        translators = None
        authors_elem = content.xpath("//a[@class='authorName'][../span/text()='(Translator)']/span/text()")
        if authors_elem:
            translators = []
            for translator in authors_elem:
                translators.append(RE_WHITESPACES.sub(' ', translator.strip()))
        else:
            translators = None

        other = {}
        if first_pub:
            other['首版时间'] = first_pub  # first publication date
        if genre:
            other['分类'] = genre  # genre / category
        series_elem = content.xpath("//h2[@id='bookSeries']/a/text()")
        if series_elem:
            # series name, with the "(Series Name #n)" decoration stripped
            other['丛书'] = re.sub(r'\(\s*(.+[^\s])\s*#.*\)', '\\1', series_elem[0].strip())

        data = {
            'title': title,
            'subtitle': subtitle,
            'orig_title': orig_title,
            'author': authors,
            'translator': translators,
            'language': language,
            'pub_house': pub_house,
            'pub_year': pub_year,
            'pub_month': pub_month,
            'binding': binding,
            'pages': pages,
            'isbn': isbn,
            'brief': brief,
            'contents': contents,
            'other_info': other,
            'cover_url': img_url,
            'source_site': self.site_name,
            'source_url': self.get_effective_url(url),
        }

        self.raw_data, self.raw_img, self.img_ext = data, raw_img, ext
        return data, raw_img
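
# Usage sketch (comment added, not part of the original file). It assumes
# AbstractScraper, pulled in via common.scraper above, supplies download_page()
# and download_image() as used in scrape(), and that BookForm accepts the
# returned dict as form data (an assumption made for illustration only):
#
#   scraper = GoodreadsScraper()
#   data, raw_img = scraper.scrape("https://www.goodreads.com/book/show/77566")
#   form = BookForm(data)   # hypothetical wiring; field names must match the form
#   if form.is_valid():
#       book = form.save()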