lib.itmens/books/management/commands/fix-book-cover.py
Henri Dickson 14b003a44a add all NeoDB features to NiceDB ()
* fix scraping failure with webp image (merge upstream/fix-webp-scrape)

* add filetype to requirements

* add proxycrawl.com as fallback for douban scraper

* load 3p js/css from cdn

* add fix-cover task

* fix book/album cover tasks

* scrapestack

* bandcamp scrape and preview ;
manage.py scrape <url> ;
make ^C work when DEBUG

* use scrapestack when fixing covers

* add user agent to improve compatibility

* search BandCamp for music albums

* add missing MovieGenre

* fix search 500 when song has no parent album

* adjust timeout

* individual scrapers

* fix tmdb parser

* export marks via rq; pref to send public toot; move import to data page

* fix spotify import

* fix edge cases

* export: fix dupe tags

* use rq to manage doufen import

* add django command to manage rq jobs

* fix export edge case

* tune rq admin

* fix detail page 502 step 1: async pull mastodon follow/block/mute list

* fix detail page 502 step 2: calculate relationship by local cached data

* manual sync mastodon follow info

* domain_blocks parsing fix

* marks by people I follow

* adjust label

* use username in urls

* add page to list a user's reviews

* review widget on user home page

* fix preview 500

* fix typo

* minor fix

* fix google books parsing

* allow mark/review visible to oneself

* fix auto sync masto for new user

* fix search 500

* add command to restart a sync task

* reset visibility

* delete user data

* fix tag search result pagination

* not upgrade to django 4 yet

* basic doc

* wip: collection

* wip

* wip

* collection use htmx

* show in-collection section for entities

* fix typo

* add su for easier debug

* fix some 500s

* fix login using alternative domain

* hide data from disabled user

* add item to list from detail page

* my tags

* collection: inline comment edit

* show number of ratings

* fix collection delete

* more detail in collection view

* use item template in search result

* fix 500

* write index to meilisearch

* fix search

* reindex in batch

* fix 500

* show search result from meilisearch

* more search commands

* index less fields

* index new items only

* search highlights

* fix 500

* auto set search category

* classic search if no meili server

* fix index stats error

* support typesense backend

* workaround typesense bug

* make external search async

* fix 500, typo

* fix cover scripts

* fix minor issue in douban parser

* supports m.douban.com and customized bandcamp domain

* move account

* reword with gender-friendly and instance-neutral language

* Friendica does not have vapid_key in api response

* enable anonymous search

* tweak book result template

* API v0

* fix meilisearch reindex

* fix search by url error

* login via twitter.com

* login via pixelfed

* minor fix

* no refresh on inactive users

* support refresh access token

* get rid of /users/number-id/

* refresh twitter handler automatically

* paste image when review

* support PixelFed (very long token)

* fix django-markdownx version

* ignore single quote for meilisearch for now

* update logo

* show book review/mark from same isbn

* show movie review/mark from same imdb

* fix login with older mastodon servers

* import Goodreads book list and profile

* add timestamp to Goodreads import

* support new google books api

* import goodreads list

* minor goodreads fix

* click corner action icon to add to wishlist

* clean up duplicated code

* fix anonymous search

* fix 500

* minor fix search 500

* show rating only if votes > 5

* Entity.refresh_rating()

* preference to append text when sharing; clean up duplicated code

* fix missing data for user tagged view

* fix page link for tag view

* fix 500 when language field longer than 10

* fix 500 when sharing mark for song

* fix error when reimporting Goodreads profile

* fix minor typo

* fix a rare 500

* error log dump less

* fix tags in marks export

* fix missing param in pagination

* import douban review

* clarify text

* fix missing sheet in review import

* review: show in progress

* scrape douban: ignore unknown genre

* minor fix

* improve review import by guessing entity urls

* clear guide text for review import

* improve review import form text

* workaround some 500

* fix mark import error

* fix img in review import

* load external results earlier

* ignore search server errors

* simplify user register flow to avoid inconsistent state

* Add a learn more link on login page

* Update login.html

* show mark created timestamp as mark time

* no 500 for api error

* redirect for expired tokens

* ensure preference object created.

* mark collections

* tag list

* fix tag display

* fix sorting etc

* fix 500

* fix potential export 500; save shared links

* fix share to twitter

* fix review url

* fix 500

* fix 500

* add timeline, etc

* missing status change in timeline

* missing id in timeline

* timeline view by default

* workaround bug in markdownx...

* fix typo

* option to create a new collection when adding from detail page

* add missing announcement and tags in timeline home

* add missing announcement

* add missing announcement

* opensearch

* show fediverse shared link

* public review no longer requires login

* fix markdownx bug

* fix 500

* use cloudflare cdn

* validate jquery load and domain input

* fix 500

* tips for goodreads import

* collaborative collection

* show timeline and profile link on nav bar

* minor tweak

* share collection

* fix Goodreads search

* show wish mark in timeline

* resync failed urls with local proxy

* resync failed urls with local proxy: check proxy first

* scraper minor fix

* resync failed urls

* fix fields limit

* fix douban parsing error

* resync

* scraper minor fix

* scraper minor fix

* scraper minor fix

* local proxy

* local proxy

* sync default config from neodb

* configurable site name

* fix 500

* fix 500 for anonymous user

* add sentry

* add git version in log

* add git version in log

* no longer rely on cdnjs.cloudflare.com

* move jq/cash to _common_libs template partial

* fix rare js error

* fix 500

* avoid double submission error

* import tag in lower case

* catch some js network errors

* catch some js network errors

* support more goodreads urls

* fix unaired tv in tmdb

* support more google book urls

* fix related series

* more goodreads urls

* robust googlebooks search

* robust search

* Update settings.py

* Update scraper.py

* Update requirements.txt

* make nicedb work

* doc update

* simplify permission check

* update doc

* update doc for bug report link

* skip spotify tracks

* fix 500

* improve search api

* blind fix import compatibility

* show years for movie in timeline

* show years for movie in timeline; thinner font

* export reviews

* revert user home to use jquery https://github.com/fabiospampinato/cash/issues/246

* IGDB

* use IGDB for Steam

* use TMDB for IMDb

* steam: igdb then fallback to steam

* keep change history

* keep change history: add django settings

* Steam: keep localized title/brief while merging IGDB

* basic Docker support

* rescrape

* Create codeql-analysis.yml

* Create SECURITY.md

* Create pysa.yml

Co-authored-by: doubaniux <goodsir@vivaldi.net>
Co-authored-by: Your Name <you@example.com>
Co-authored-by: Their Name <they@example.com>
Co-authored-by: Mt. Front <mfcndw@gmail.com>
2022-11-09 19:56:50 +01:00


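"""Management command to re-fetch missing cover images for Douban-sourced books.

Books whose cover is still the default placeholder are re-scraped (directly or via
a Wayback Machine snapshot) and their cover field is updated in place.
"""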
from django.core.management.base import BaseCommand
from django.core.files.uploadedfile import SimpleUploadedFile
from django.conf import settings
from common.scraper import *
from books.models import Book
from books.forms import BookForm
import requests
import re
import filetype
from lxml import html
from PIL import Image
from io import BytesIO
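

# DoubanPatcherMixin downloads a Douban page, preferring a Wayback Machine snapshot
# (looked up through the CDX API) and falling back to a live request, optionally
# proxied through ScrapeStack when settings.SCRAPESTACK_KEY is configured.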
class DoubanPatcherMixin:
    @classmethod
    def download_page(cls, url, headers):
        url = cls.get_effective_url(url)
        r = None
        error = 'DoubanScrapper: error occured when downloading ' + url
        content = None

        def get(url, timeout):
            nonlocal r
            # print('Douban GET ' + url)
            try:
                r = requests.get(url, timeout=timeout)
            except Exception as e:
                r = requests.Response()
                r.status_code = f"Exception when GET {url} {e}" + url
            # print('Douban CODE ' + str(r.status_code))
            return r

        def check_content():
            nonlocal r, error, content
            content = None
            if r.status_code == 200:
                content = r.content.decode('utf-8')
                if content.find('关于豆瓣') == -1:
                    # with open('/tmp/temp.html', 'w', encoding='utf-8') as fp:
                    #     fp.write(content)
                    content = None
                    error = error + 'Content not authentic'  # response is garbage
                elif re.search('不存在[^<]+</title>', content, re.MULTILINE):
                    content = None
                    error = error + 'Not found or hidden by Douban'
            else:
                error = error + str(r.status_code)

        def fix_wayback_links():
            nonlocal content
            # fix links
            content = re.sub(r'href="http[^"]+http', r'href="http', content)
            # https://img9.doubanio.com/view/subject/{l|m|s}/public/s1234.jpg
            content = re.sub(r'src="[^"]+/(s\d+\.\w+)"',
                             r'src="https://img9.doubanio.com/view/subject/m/public/\1"', content)
            # https://img9.doubanio.com/view/photo/s_ratio_poster/public/p2681329386.jpg
            # https://img9.doubanio.com/view/photo/{l|m|s}/public/p1234.webp
            content = re.sub(r'src="[^"]+/(p\d+\.\w+)"',
                             r'src="https://img9.doubanio.com/view/photo/m/public/\1"', content)

        # Wayback Machine: get latest available
        def wayback():
            nonlocal r, error, content
            error = error + '\nWayback: '
            get('http://archive.org/wayback/available?url=' + url, 10)
            if r.status_code == 200:
                w = r.json()
                if w['archived_snapshots'] and w['archived_snapshots']['closest']:
                    get(w['archived_snapshots']['closest']['url'], 10)
                    check_content()
                    if content is not None:
                        fix_wayback_links()
                else:
                    error = error + 'No snapshot available'
            else:
                error = error + str(r.status_code)

        # Wayback Machine: guess via CDX API
        def wayback_cdx():
            nonlocal r, error, content
            error = error + '\nWayback: '
            get('http://web.archive.org/cdx/search/cdx?url=' + url, 10)
            if r.status_code == 200:
                dates = re.findall(r'[^\s]+\s+(\d+)\s+[^\s]+\s+[^\s]+\s+\d+\s+[^\s]+\s+\d{5,}',
                                   r.content.decode('utf-8'))
                # assume snapshots whose size >9999 contain real content, use the latest one of them
                if len(dates) > 0:
                    get('http://web.archive.org/web/' + dates[-1] + '/' + url, 10)
                    check_content()
                    if content is not None:
                        fix_wayback_links()
                else:
                    error = error + 'No snapshot available'
            else:
                error = error + str(r.status_code)

        def latest():
            nonlocal r, error, content
            if settings.SCRAPESTACK_KEY is None:
                error = error + '\nDirect: '
                get(url, 60)
            else:
                error = error + '\nScrapeStack: '
                get(f'http://api.scrapestack.com/scrape?access_key={settings.SCRAPESTACK_KEY}&url={url}', 60)
            check_content()

        wayback_cdx()
        if content is None:
            latest()

        if content is None:
            logger.error(error)
            content = '<html />'
        return html.fromstring(content)
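
    # download_image: fetch the cover (through ScrapeStack when SCRAPESTACK_KEY is
    # set), verify it decodes with Pillow, and derive the file extension from the
    # Content-Type header via filetype; one retry is attempted when ScrapeStack is
    # configured and the first attempt failed.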
    @classmethod
    def download_image(cls, url, item_url=None):
        if url is None:
            logger.error(f"Douban: no image url for {item_url}")
            return None, None
        raw_img = None
        ext = None
        dl_url = url
        if settings.SCRAPESTACK_KEY is not None:
            dl_url = f'http://api.scrapestack.com/scrape?access_key={settings.SCRAPESTACK_KEY}&url={url}'
        try:
            img_response = requests.get(dl_url, timeout=90)
            if img_response.status_code == 200:
                raw_img = img_response.content
                img = Image.open(BytesIO(raw_img))
                img.load()  # corrupted image will trigger exception
                content_type = img_response.headers.get('Content-Type')
                ext = filetype.get_type(mime=content_type.partition(';')[0].strip()).extension
            else:
                logger.error(f"Douban: download image failed {img_response.status_code} {dl_url} {item_url}")
                # raise RuntimeError(f"Douban: download image failed {img_response.status_code} {dl_url}")
        except Exception as e:
            raw_img = None
            ext = None
            logger.error(f"Douban: download image failed {e} {dl_url} {item_url}")
        if raw_img is None and settings.SCRAPESTACK_KEY is not None:
            try:
                img_response = requests.get(dl_url, timeout=90)
                if img_response.status_code == 200:
                    raw_img = img_response.content
                    img = Image.open(BytesIO(raw_img))
                    img.load()  # corrupted image will trigger exception
                    content_type = img_response.headers.get('Content-Type')
                    ext = filetype.get_type(mime=content_type.partition(';')[0].strip()).extension
                else:
                    logger.error(f"Douban: download image failed {img_response.status_code} {dl_url} {item_url}")
            except Exception as e:
                raw_img = None
                ext = None
                logger.error(f"Douban: download image failed {e} {dl_url} {item_url}")
        return raw_img, ext
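

# DoubanBookPatcher only needs the cover: it downloads the subject page with the
# mixin above and extracts the image URL from //*[@id='mainpic']/a/img/@src.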
class DoubanBookPatcher(DoubanPatcherMixin, AbstractScraper):
    site_name = SourceSiteEnum.DOUBAN.value
    host = 'book.douban.com'
    data_class = Book
    form_class = BookForm
    regex = re.compile(r"https://book\.douban\.com/subject/\d+/{0,1}")

    @classmethod
    def scrape(cls, url):
        headers = DEFAULT_REQUEST_HEADERS.copy()
        headers['Host'] = cls.host
        content = cls.download_page(url, headers)
        img_url_elem = content.xpath("//*[@id='mainpic']/a/img/@src")
        img_url = img_url_elem[0].strip() if img_url_elem else None
        raw_img, ext = cls.download_image(img_url, url)
        return raw_img, ext
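

# Work is sharded by book id: each worker handles Douban-sourced books that still
# have the default cover and whose id % 8 equals its threadId, so up to eight
# workers can run in parallel. A hypothetical invocation for one shard (assuming
# the command name mirrors this file name) could be:
#     python manage.py fix-book-cover 3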
class Command(BaseCommand):
    help = 'fix cover image'

    def add_arguments(self, parser):
        parser.add_argument('threadId', type=int,
                            help='shard number: only books with id % 8 == threadId are processed')

    def handle(self, *args, **options):
        t = int(options['threadId'])
        for m in Book.objects.filter(cover='book/default.svg', source_site='douban'):
            if m.id % 8 == t:
                self.stdout.write(f'Re-fetching {m.source_url}')
                try:
                    raw_img, img_ext = DoubanBookPatcher.scrape(m.source_url)
                    if img_ext is not None:
                        m.cover = SimpleUploadedFile('temp.' + img_ext, raw_img)
                        m.save()
                        self.stdout.write(self.style.SUCCESS(f'Saved {m.source_url}'))
                    else:
                        self.stdout.write(self.style.ERROR(f'Skipped {m.source_url}'))
                except Exception as e:
                    print(e)