
* fix scraping failure with wepb image (merge upstream/fix-webp-scrape) * add filetype to requirements * add proxycrawl.com as fallback for douban scraper * load 3p js/css from cdn * add fix-cover task * fix book/album cover tasks * scrapestack * bandcamp scrape and preview ; manage.py scrape <url> ; make ^C work when DEBUG * use scrapestack when fix cover * add user agent to improve compatibility * search BandCamp for music albums * add missing MovieGenre * fix search 500 when song has no parent album * adjust timeout * individual scrapers * fix tmdb parser * export marks via rq; pref to send public toot; move import to data page * fix spotify import * fix edge cases * export: fix dupe tags * use rq to manage doufen import * add django command to manage rq jobs * fix export edge case * tune rq admin * fix detail page 502 step 1: async pull mastodon follow/block/mute list * fix detail page 502 step 2: calculate relationship by local cached data * manual sync mastodon follow info * domain_blocks parsing fix * marks by who i follows * adjust label * use username in urls * add page to list a user\'s review * review widget on user home page * fix preview 500 * fix typo * minor fix * fix google books parsing * allow mark/review visible to oneself * fix auto sync masto for new user * fix search 500 * add command to restart a sync task * reset visibility * delete user data * fix tag search result pagination * not upgrade to django 4 yet * basic doc * wip: collection * wip * wip * collection use htmx * show in-collection section for entities * fix typo * add su for easier debug * fix some 500s * fix login using alternative domain * hide data from disabled user * add item to list from detail page * my tags * collection: inline comment edit * show number of ratings * fix collection delete * more detail in collection view * use item template in search result * fix 500 * write index to meilisearch * fix search * reindex in batch * fix 500 * show search result from meilisearch * more search commands * index less fields * index new items only * search highlights * fix 500 * auto set search category * classic search if no meili server * fix index stats error * support typesense backend * workaround typesense bug * make external search async * fix 500, typo * fix cover scripts * fix minor issue in douban parser * supports m.douban.com and customized bandcamp domain * move account * reword with gender-friendly and instance-neutral language * Friendica does not have vapid_key in api response * enable anonymous search * tweak book result template * API v0 API v0 * fix meilisearch reindex * fix search by url error * login via twitter.com * login via pixelfed * minor fix * no refresh on inactive users * support refresh access token * get rid of /users/number-id/ * refresh twitter handler automatically * paste image when review * support PixelFed (very long token) * fix django-markdownx version * ignore single quote for meilisearch for now * update logo * show book review/mark from same isbn * show movie review/mark from same imdb * fix login with older mastodon servers * import Goodreads book list and profile * add timestamp to Goodreads import * support new google books api * import goodreads list * minor goodreads fix * click corner action icon to add to wishlist * clean up duplicated code * fix anonymous search * fix 500 * minor fix search 500 * show rating only if votes > 5 * Entity.refresh_rating() * preference to append text when sharing; clean up duplicated code * fix missing data for user tagged view * fix page link for tag view * fix 500 when language field longer than 10 * fix 500 when sharing mark for song * fix error when reimport goodread profile * fix minor typo * fix a rare 500 * error log dump less * fix tags in marks export * fix missing param in pagination * import douban review * clarify text * fix missing sheet in review import * review: show in progress * scrape douban: ignore unknown genre * minor fix * improve review import by guess entity urls * clear guide text for review import * improve review import form text * workaround some 500 * fix mark import error * fix img in review import * load external results earlier * ignore search server errors * simplify user register flow to avoid inconsistent state * Add a learn more link on login page * Update login.html * show mark created timestamp as mark time * no 500 for api error * redirect for expired tokens * ensure preference object created. * mark collections * tag list * fix tag display * fix sorting etc * fix 500 * fix potential export 500; save shared links * fix share to twittwe * fix review url * fix 500 * fix 500 * add timeline, etc * missing status change in timeline * missing id in timeline * timeline view by default * workaround bug in markdownx... * fix typo * option to create new collection when add from detail page * add missing announcement and tags in timeline home * add missing announcement * add missing announcement * opensearch * show fediverse shared link * public review no longer requires login * fix markdownx bug * fix 500 * use cloudflare cdn * validate jquery load and domain input * fix 500 * tips for goodreads import * collaborative collection * show timeline and profile link on nav bar * minor tweak * share collection * fix Goodreads search * show wish mark in timeline * resync failed urls with local proxy * resync failed urls with local proxy: check proxy first * scraper minor fix * resync failed urls * fix fields limit * fix douban parsing error * resync * scraper minor fix * scraper minor fix * scraper minor fix * local proxy * local proxy * sync default config from neodb * configurable site name * fix 500 * fix 500 for anonymous user * add sentry * add git version in log * add git version in log * no longer rely on cdnjs.cloudflare.com * move jq/cash to _common_libs template partial * fix rare js error * fix 500 * avoid double submission error * import tag in lower case * catch some js network errors * catch some js network errors * support more goodread urls * fix unaired tv in tmdb * support more google book urls * fix related series * more goodreads urls * robust googlebooks search * robust search * Update settings.py * Update scraper.py * Update requirements.txt * make nicedb work * doc update * simplify permission check * update doc * update doc for bug report link * skip spotify tracks * fix 500 * improve search api * blind fix import compatibility * show years for movie in timeline * show years for movie in timeline; thinner font * export reviews * revert user home to use jquery https://github.com/fabiospampinato/cash/issues/246 * IGDB * use IGDB for Steam * use TMDB for IMDb * steam: igdb then fallback to steam * keep change history * keep change history: add django settings * Steam: keep localized title/brief while merging IGDB * basic Docker support * rescrape * Create codeql-analysis.yml * Create SECURITY.md * Create pysa.yml Co-authored-by: doubaniux <goodsir@vivaldi.net> Co-authored-by: Your Name <you@example.com> Co-authored-by: Their Name <they@example.com> Co-authored-by: Mt. Front <mfcndw@gmail.com>
270 lines
12 KiB
Python
270 lines
12 KiB
Python
import openpyxl
|
|
import requests
|
|
import re
|
|
from lxml import html
|
|
from markdownify import markdownify as md
|
|
from datetime import datetime
|
|
from common.scraper import get_scraper_by_url
|
|
import logging
|
|
import pytz
|
|
from django.conf import settings
|
|
from django.core.exceptions import ObjectDoesNotExist
|
|
from user_messages import api as msg
|
|
import django_rq
|
|
from common.utils import GenerateDateUUIDMediaFilePath
|
|
import os
|
|
from books.models import BookReview, Book, BookMark, BookTag
|
|
from movies.models import MovieReview, Movie, MovieMark, MovieTag
|
|
from music.models import AlbumReview, Album, AlbumMark, AlbumTag
|
|
from games.models import GameReview, Game, GameMark, GameTag
|
|
from common.scraper import DoubanAlbumScraper, DoubanBookScraper, DoubanGameScraper, DoubanMovieScraper
|
|
from PIL import Image
|
|
from io import BytesIO
|
|
import filetype
|
|
from common.models import MarkStatusEnum
|
|
|
|
|
|
logger = logging.getLogger(__name__)
|
|
tz_sh = pytz.timezone('Asia/Shanghai')
|
|
|
|
|
|
def fetch_remote_image(url):
|
|
try:
|
|
print(f'fetching remote image {url}')
|
|
raw_img = None
|
|
ext = None
|
|
if settings.SCRAPESTACK_KEY is not None:
|
|
dl_url = f'http://api.scrapestack.com/scrape?access_key={settings.SCRAPESTACK_KEY}&url={url}'
|
|
elif settings.SCRAPERAPI_KEY is not None:
|
|
dl_url = f'http://api.scraperapi.com?api_key={settings.SCRAPERAPI_KEY}&url={url}'
|
|
else:
|
|
dl_url = url
|
|
img_response = requests.get(dl_url, timeout=settings.SCRAPING_TIMEOUT)
|
|
raw_img = img_response.content
|
|
img = Image.open(BytesIO(raw_img))
|
|
img.load() # corrupted image will trigger exception
|
|
content_type = img_response.headers.get('Content-Type')
|
|
ext = filetype.get_type(mime=content_type.partition(';')[0].strip()).extension
|
|
f = GenerateDateUUIDMediaFilePath(None, "x." + ext, settings.MARKDOWNX_MEDIA_PATH)
|
|
file = settings.MEDIA_ROOT + f
|
|
local_url = settings.MEDIA_URL + f
|
|
os.makedirs(os.path.dirname(file), exist_ok=True)
|
|
img.save(file)
|
|
# print(f'remote image saved as {local_url}')
|
|
return local_url
|
|
except Exception:
|
|
print(f'unable to fetch remote image {url}')
|
|
return url
|
|
|
|
|
|
class DoubanImporter:
|
|
total = 0
|
|
processed = 0
|
|
skipped = 0
|
|
imported = 0
|
|
failed = []
|
|
user = None
|
|
visibility = 0
|
|
file = None
|
|
|
|
def __init__(self, user, visibility):
|
|
self.user = user
|
|
self.visibility = visibility
|
|
|
|
def update_user_import_status(self, status):
|
|
self.user.preference.import_status['douban_pending'] = status
|
|
self.user.preference.import_status['douban_file'] = self.file
|
|
self.user.preference.import_status['douban_visibility'] = self.visibility
|
|
self.user.preference.import_status['douban_total'] = self.total
|
|
self.user.preference.import_status['douban_processed'] = self.processed
|
|
self.user.preference.import_status['douban_skipped'] = self.skipped
|
|
self.user.preference.import_status['douban_imported'] = self.imported
|
|
self.user.preference.import_status['douban_failed'] = self.failed
|
|
self.user.preference.save(update_fields=['import_status'])
|
|
|
|
def import_from_file(self, uploaded_file):
|
|
try:
|
|
wb = openpyxl.open(uploaded_file, read_only=True, data_only=True, keep_links=False)
|
|
wb.close()
|
|
file = settings.MEDIA_ROOT + GenerateDateUUIDMediaFilePath(None, "x.xlsx", settings.SYNC_FILE_PATH_ROOT)
|
|
os.makedirs(os.path.dirname(file), exist_ok=True)
|
|
with open(file, 'wb') as destination:
|
|
for chunk in uploaded_file.chunks():
|
|
destination.write(chunk)
|
|
self.file = file
|
|
self.update_user_import_status(2)
|
|
jid = f'Douban_{self.user.id}_{os.path.basename(self.file)}'
|
|
django_rq.get_queue('doufen').enqueue(self.import_from_file_task, job_id=jid)
|
|
except Exception:
|
|
return False
|
|
# self.import_from_file_task(file, user, visibility)
|
|
return True
|
|
|
|
mark_sheet_config = {
|
|
'想读': [MarkStatusEnum.WISH, DoubanBookScraper, Book, BookMark, BookTag],
|
|
'在读': [MarkStatusEnum.DO, DoubanBookScraper, Book, BookMark, BookTag],
|
|
'读过': [MarkStatusEnum.COLLECT, DoubanBookScraper, Book, BookMark, BookTag],
|
|
'想看': [MarkStatusEnum.WISH, DoubanMovieScraper, Movie, MovieMark, MovieTag],
|
|
'在看': [MarkStatusEnum.DO, DoubanMovieScraper, Movie, MovieMark, MovieTag],
|
|
'想看': [MarkStatusEnum.COLLECT, DoubanMovieScraper, Movie, MovieMark, MovieTag],
|
|
'想听': [MarkStatusEnum.WISH, DoubanAlbumScraper, Album, AlbumMark, AlbumTag],
|
|
'在听': [MarkStatusEnum.DO, DoubanAlbumScraper, Album, AlbumMark, AlbumTag],
|
|
'听过': [MarkStatusEnum.COLLECT, DoubanAlbumScraper, Album, AlbumMark, AlbumTag],
|
|
'想玩': [MarkStatusEnum.WISH, DoubanGameScraper, Game, GameMark, GameTag],
|
|
'在玩': [MarkStatusEnum.DO, DoubanGameScraper, Game, GameMark, GameTag],
|
|
'玩过': [MarkStatusEnum.COLLECT, DoubanGameScraper, Game, GameMark, GameTag],
|
|
}
|
|
review_sheet_config = {
|
|
'书评': [DoubanBookScraper, Book, BookReview],
|
|
'影评': [DoubanMovieScraper, Movie, MovieReview],
|
|
'乐评': [DoubanAlbumScraper, Album, AlbumReview],
|
|
'游戏评论&攻略': [DoubanGameScraper, Game, GameReview],
|
|
}
|
|
mark_data = {}
|
|
review_data = {}
|
|
entity_lookup = {}
|
|
|
|
def load_sheets(self):
|
|
f = open(self.file, 'rb')
|
|
wb = openpyxl.load_workbook(f, read_only=True, data_only=True, keep_links=False)
|
|
for data, config in [(self.mark_data, self.mark_sheet_config), (self.review_data, self.review_sheet_config)]:
|
|
for name in config:
|
|
data[name] = []
|
|
if name in wb:
|
|
print(f'{self.user} parsing {name}')
|
|
for row in wb[name].iter_rows(min_row=2, values_only=True):
|
|
cells = [cell for cell in row]
|
|
if len(cells) > 6:
|
|
data[name].append(cells)
|
|
for sheet in self.mark_data.values():
|
|
for cells in sheet:
|
|
# entity_lookup["title|rating"] = [(url, time), ...]
|
|
k = f'{cells[0]}|{cells[5]}'
|
|
v = (cells[3], cells[4])
|
|
if k in self.entity_lookup:
|
|
self.entity_lookup[k].append(v)
|
|
else:
|
|
self.entity_lookup[k] = [v]
|
|
self.total = sum(map(lambda a: len(a), self.review_data.values()))
|
|
|
|
def guess_entity_url(self, title, rating, timestamp):
|
|
k = f'{title}|{rating}'
|
|
if k not in self.entity_lookup:
|
|
return None
|
|
v = self.entity_lookup[k]
|
|
if len(v) > 1:
|
|
v.sort(key=lambda c: abs(timestamp - (datetime.strptime(c[1], "%Y-%m-%d %H:%M:%S") if type(c[1])==str else c[1]).replace(tzinfo=tz_sh)))
|
|
return v[0][0]
|
|
# for sheet in self.mark_data.values():
|
|
# for cells in sheet:
|
|
# if cells[0] == title and cells[5] == rating:
|
|
# return cells[3]
|
|
|
|
def import_from_file_task(self):
|
|
print(f'{self.user} import start')
|
|
msg.info(self.user, f'开始导入豆瓣评论')
|
|
self.update_user_import_status(1)
|
|
self.load_sheets()
|
|
print(f'{self.user} sheet loaded, {self.total} lines total')
|
|
self.update_user_import_status(1)
|
|
for name, param in self.review_sheet_config.items():
|
|
self.import_review_sheet(self.review_data[name], param[0], param[1], param[2])
|
|
self.update_user_import_status(0)
|
|
msg.success(self.user, f'豆瓣评论导入完成,共处理{self.total}篇,已存在{self.skipped}篇,新增{self.imported}篇。')
|
|
if len(self.failed):
|
|
msg.error(self.user, f'豆瓣评论导入时未能处理以下网址:\n{" , ".join(self.failed)}')
|
|
|
|
def import_review_sheet(self, worksheet, scraper, entity_class, review_class):
|
|
prefix = f'{self.user} |'
|
|
if worksheet is None: # or worksheet.max_row < 2:
|
|
print(f'{prefix} {review_class.__name__} empty sheet')
|
|
return
|
|
for cells in worksheet:
|
|
if len(cells) < 6:
|
|
continue
|
|
title = cells[0]
|
|
entity_title = re.sub('^《', '', re.sub('》$', '', cells[1]))
|
|
review_url = cells[2]
|
|
time = cells[3]
|
|
rating = cells[4]
|
|
content = cells[6]
|
|
self.processed += 1
|
|
if time:
|
|
if type(time) == str:
|
|
time = datetime.strptime(time, "%Y-%m-%d %H:%M:%S")
|
|
time = time.replace(tzinfo=tz_sh)
|
|
else:
|
|
time = None
|
|
if not content:
|
|
content = ""
|
|
if not title:
|
|
title = ""
|
|
r = self.import_review(entity_title, rating, title, review_url, content, time, scraper, entity_class, review_class)
|
|
if r == 1:
|
|
self.imported += 1
|
|
elif r == 2:
|
|
self.skipped += 1
|
|
else:
|
|
self.failed.append(review_url)
|
|
self.update_user_import_status(1)
|
|
|
|
def import_review(self, entity_title, rating, title, review_url, content, time, scraper, entity_class, review_class):
|
|
# return 1: done / 2: skipped / None: failed
|
|
prefix = f'{self.user} |'
|
|
url = self.guess_entity_url(entity_title, rating, time)
|
|
if url is None:
|
|
print(f'{prefix} fetching {review_url}')
|
|
try:
|
|
if settings.SCRAPESTACK_KEY is not None:
|
|
_review_url = f'http://api.scrapestack.com/scrape?access_key={settings.SCRAPESTACK_KEY}&url={review_url}'
|
|
else:
|
|
_review_url = review_url
|
|
r = requests.get(_review_url, timeout=settings.SCRAPING_TIMEOUT)
|
|
if r.status_code != 200:
|
|
print(f'{prefix} fetching error {review_url} {r.status_code}')
|
|
return
|
|
h = html.fromstring(r.content.decode('utf-8'))
|
|
for u in h.xpath("//header[@class='main-hd']/a/@href"):
|
|
if '.douban.com/subject/' in u:
|
|
url = u
|
|
if not url:
|
|
print(f'{prefix} fetching error {review_url} unable to locate entity url')
|
|
return
|
|
except Exception:
|
|
print(f'{prefix} fetching exception {review_url}')
|
|
return
|
|
try:
|
|
entity = entity_class.objects.get(source_url=url)
|
|
print(f'{prefix} matched {url}')
|
|
except ObjectDoesNotExist:
|
|
try:
|
|
print(f'{prefix} scraping {url}')
|
|
scraper.scrape(url)
|
|
form = scraper.save(request_user=self.user)
|
|
entity = form.instance
|
|
except Exception as e:
|
|
print(f"{prefix} scrape failed: {url} {e}")
|
|
logger.error(f"{prefix} scrape failed: {url}", exc_info=e)
|
|
return
|
|
params = {
|
|
'owner': self.user,
|
|
entity_class.__name__.lower(): entity
|
|
}
|
|
if review_class.objects.filter(**params).exists():
|
|
return 2
|
|
content = re.sub(r'<span style="font-weight: bold;">([^<]+)</span>', r'<b>\1</b>', content)
|
|
content = re.sub(r'(<img [^>]+>)', r'\1<br>', content)
|
|
content = re.sub(r'<div class="image-caption">([^<]+)</div>', r'<br><i>\1</i><br>', content)
|
|
content = md(content)
|
|
content = re.sub(r'(?<=!\[\]\()([^)]+)(?=\))', lambda x: fetch_remote_image(x[1]), content)
|
|
params = {
|
|
'owner': self.user,
|
|
'created_time': time,
|
|
'edited_time': time,
|
|
'title': title,
|
|
'content': content,
|
|
'visibility': self.visibility,
|
|
entity_class.__name__.lower(): entity,
|
|
}
|
|
review_class.objects.create(**params)
|
|
return 1
|