add all NeoDB features to NiceDB (#115)

* fix scraping failure with webp image (merge upstream/fix-webp-scrape)

* add filetype to requirements

* add proxycrawl.com as fallback for douban scraper

* load 3p js/css from cdn

* add fix-cover task

* fix book/album cover tasks

* scrapestack

* bandcamp scrape and preview;
manage.py scrape <url>;
make ^C work when DEBUG

* use scrapestack when fix cover

* add user agent to improve compatibility

* search BandCamp for music albums

* add missing MovieGenre

* fix search 500 when song has no parent album

* adjust timeout

* individual scrapers

* fix tmdb parser

* export marks via rq; pref to send public toot; move import to data page

* fix spotify import

* fix edge cases

* export: fix dupe tags

* use rq to manage doufen import

* add django command to manage rq jobs

* fix export edge case

* tune rq admin

* fix detail page 502 step 1: async pull mastodon follow/block/mute list

* fix detail page 502 step 2: calculate relationship by local cached data

* manual sync mastodon follow info

* domain_blocks parsing fix

* marks by people I follow

* adjust label

* use username in urls

* add page to list a user's reviews

* review widget on user home page

* fix preview 500

* fix typo

* minor fix

* fix google books parsing

* allow mark/review visible to oneself

* fix auto sync masto for new user

* fix search 500

* add command to restart a sync task

* reset visibility

* delete user data

* fix tag search result pagination

* do not upgrade to Django 4 yet

* basic doc

* wip: collection

* wip

* wip

* collection use htmx

* show in-collection section for entities

* fix typo

* add su for easier debug

* fix some 500s

* fix login using alternative domain

* hide data from disabled user

* add item to list from detail page

* my tags

* collection: inline comment edit

* show number of ratings

* fix collection delete

* more detail in collection view

* use item template in search result

* fix 500

* write index to meilisearch

* fix search

* reindex in batch

* fix 500

* show search result from meilisearch

* more search commands

* index less fields

* index new items only

* search highlights

* fix 500

* auto set search category

* classic search if no meili server

* fix index stats error

* support typesense backend

* workaround typesense bug

* make external search async

* fix 500, typo

* fix cover scripts

* fix minor issue in douban parser

* supports m.douban.com and customized bandcamp domain

* move account

* reword with gender-friendly and instance-neutral language

* Friendica does not have vapid_key in api response

* enable anonymous search

* tweak book result template

* API v0

* fix meilisearch reindex

* fix search by url error

* login via twitter.com

* login via pixelfed

* minor fix

* no refresh on inactive users

* support refresh access token

* get rid of /users/number-id/

* refresh twitter handler automatically

* paste image when review

* support PixelFed (very long token)

* fix django-markdownx version

* ignore single quote for meilisearch for now

* update logo

* show book review/mark from same isbn

* show movie review/mark from same imdb

* fix login with older mastodon servers

* import Goodreads book list and profile

* add timestamp to Goodreads import

* support new google books api

* import goodreads list

* minor goodreads fix

* click corner action icon to add to wishlist

* clean up duplicated code

* fix anonymous search

* fix 500

* minor fix search 500

* show rating only if votes > 5

* Entity.refresh_rating()

* preference to append text when sharing; clean up duplicated code

* fix missing data for user tagged view

* fix page link for tag view

* fix 500 when language field longer than 10

* fix 500 when sharing mark for song

* fix error when reimporting goodreads profile

* fix minor typo

* fix a rare 500

* error log dump less

* fix tags in marks export

* fix missing param in pagination

* import douban review

* clarify text

* fix missing sheet in review import

* review: show in progress

* scrape douban: ignore unknown genre

* minor fix

* improve review import by guessing entity urls

* clear guide text for review import

* improve review import form text

* workaround some 500

* fix mark import error

* fix img in review import

* load external results earlier

* ignore search server errors

* simplify user register flow to avoid inconsistent state

* Add a learn more link on login page

* Update login.html

* show mark created timestamp as mark time

* no 500 for api error

* redirect for expired tokens

* ensure preference object created.

* mark collections

* tag list

* fix tag display

* fix sorting etc

* fix 500

* fix potential export 500; save shared links

* fix share to twitter

* fix review url

* fix 500

* fix 500

* add timeline, etc

* missing status change in timeline

* missing id in timeline

* timeline view by default

* workaround bug in markdownx...

* fix typo

* option to create new collection when add from detail page

* add missing announcement and tags in timeline home

* add missing announcement

* add missing announcement

* opensearch

* show fediverse shared link

* public review no longer requires login

* fix markdownx bug

* fix 500

* use cloudflare cdn

* validate jquery load and domain input

* fix 500

* tips for goodreads import

* collaborative collection

* show timeline and profile link on nav bar

* minor tweak

* share collection

* fix Goodreads search

* show wish mark in timeline

* resync failed urls with local proxy

* resync failed urls with local proxy: check proxy first

* scraper minor fix

* resync failed urls

* fix fields limit

* fix douban parsing error

* resync

* scraper minor fix

* scraper minor fix

* scraper minor fix

* local proxy

* local proxy

* sync default config from neodb

* configurable site name

* fix 500

* fix 500 for anonymous user

* add sentry

* add git version in log

* add git version in log

* no longer rely on cdnjs.cloudflare.com

* move jq/cash to _common_libs template partial

* fix rare js error

* fix 500

* avoid double submission error

* import tag in lower case

* catch some js network errors

* catch some js network errors

* support more goodreads urls

* fix unaired tv in tmdb

* support more google book urls

* fix related series

* more goodreads urls

* robust googlebooks search

* robust search

* Update settings.py

* Update scraper.py

* Update requirements.txt

* make nicedb work

* doc update

* simplify permission check

* update doc

* update doc for bug report link

* skip spotify tracks

* fix 500

* improve search api

* blind fix import compatibility

* show years for movie in timeline

* show years for movie in timeline; thinner font

* export reviews

* revert user home to use jquery https://github.com/fabiospampinato/cash/issues/246

* IGDB

* use IGDB for Steam

* use TMDB for IMDb

* steam: igdb then fallback to steam

* keep change history

* keep change history: add django settings

* Steam: keep localized title/brief while merging IGDB

* basic Docker support

* rescrape

* Create codeql-analysis.yml

* Create SECURITY.md

* Create pysa.yml

Co-authored-by: doubaniux <goodsir@vivaldi.net>
Co-authored-by: Your Name <you@example.com>
Co-authored-by: Their Name <they@example.com>
Co-authored-by: Mt. Front <mfcndw@gmail.com>
Authored by Henri Dickson on 2022-11-09 13:56:50 -05:00, committed by GitHub
parent 6c3e377bbe
commit 14b003a44a
228 changed files with 12218 additions and 6514 deletions

.github/workflows/codeql-analysis.yml (new file, 74 lines)

@ -0,0 +1,74 @@
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: "CodeQL"
on:
push:
branches: [ "neo" ]
pull_request:
# The branches below must be a subset of the branches above
branches: [ "neo" ]
schedule:
- cron: '35 0 * * 0'
jobs:
analyze:
name: Analyze
runs-on: ubuntu-latest
permissions:
actions: read
contents: read
security-events: write
strategy:
fail-fast: false
matrix:
language: [ 'javascript', 'python' ]
# CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python', 'ruby' ]
# Learn more about CodeQL language support at https://aka.ms/codeql-docs/language-support
steps:
- name: Checkout repository
uses: actions/checkout@v3
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v2
with:
languages: ${{ matrix.language }}
# If you wish to specify custom queries, you can do so here or in a config file.
# By default, queries listed here will override any specified in a config file.
# Prefix the list here with "+" to use these queries and those in the config file.
# Details on CodeQL's query packs refer to : https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/configuring-code-scanning#using-queries-in-ql-packs
# queries: security-extended,security-and-quality
# Autobuild attempts to build any compiled languages (C/C++, C#, or Java).
# If this step fails, then you should remove it and run the build manually (see below)
- name: Autobuild
uses: github/codeql-action/autobuild@v2
# Command-line programs to run using the OS shell.
# 📚 See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun
# If the Autobuild fails above, remove it and uncomment the following three lines.
# modify them (or add more) to build your code if your project, please refer to the EXAMPLE below for guidance.
# - run: |
# echo "Run, Build Application using script"
# ./location_of_script_within_repo/buildscript.sh
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v2
with:
category: "/language:${{matrix.language}}"

.github/workflows/pysa.yml (new file, 50 lines)

@ -0,0 +1,50 @@
# This workflow uses actions that are not certified by GitHub.
# They are provided by a third-party and are governed by
# separate terms of service, privacy policy, and support
# documentation.
# This workflow integrates Python Static Analyzer (Pysa) with
# GitHub's Code Scanning feature.
#
# Python Static Analyzer (Pysa) is a security-focused static
# analysis tool that tracks flows of data from where they
# originate to where they terminate in a dangerous location.
#
# See https://pyre-check.org/docs/pysa-basics/
name: Pysa
on:
workflow_dispatch:
push:
branches: [ "neo" ]
pull_request:
branches: [ "neo" ]
schedule:
- cron: '45 12 * * 4'
permissions:
contents: read
jobs:
pysa:
permissions:
actions: read
contents: read
security-events: write
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
submodules: true
- name: Run Pysa
uses: facebook/pysa-action@f46a63777e59268613bd6e2ff4e29f144ca9e88b
with:
# To customize these inputs:
# See https://github.com/facebook/pysa-action#inputs
repo-directory: './'
requirements-path: 'requirements.txt'
infer-types: true
include-default-sapp-filters: true

.gitignore (+3 lines)

@ -25,3 +25,6 @@ migrations/
# debug log file
/log
log
# conf folder for neodb
/neodb

Dockerfile (new file, 23 lines)

@ -0,0 +1,23 @@
# syntax=docker/dockerfile:1
FROM python:3.8-slim
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
RUN apt-get update \
&& apt-get install -y --no-install-recommends build-essential libpq-dev git \
&& rm -rf /var/lib/apt/lists/*
COPY requirements.txt /tmp/requirements.txt
RUN pip install --no-cache-dir -r /tmp/requirements.txt \
&& rm -rf /tmp/requirements.txt \
&& useradd -U app_user \
&& install -d -m 0755 -o app_user -g app_user /app/static
ENV DJANGO_SETTINGS_MODULE=neodb.dev
WORKDIR /app
USER app_user:app_user
COPY --chown=app_user:app_user . .
RUN chmod +x docker/*.sh
# Section 6- Docker Run Checks and Configurations
ENTRYPOINT [ "docker/entrypoint.sh" ]
CMD [ "docker/start.sh", "server" ]


@ -3,6 +3,13 @@ An application allows you to mark any books, movies and more things you love.
Depends on Mastodon.
## Install
Please see [doc/GUIDE.md](doc/GUIDE.md)
## Bug Report
- to file a bug for NiceDB, please create an issue [here](https://github.com/doubaniux/boofilsic/issues/new)
- to file a bug or request new features for NeoDB, please contact NeoDB on [Fediverse](https://mastodon.social/@neodb) or [Twitter](https://twitter.com/NeoDBsocial)
## Contribution
The project is based on Django. If you are familiar with this technique and willing to read through the terrible code😝, your contribution would be the most welcome!
@ -11,8 +18,6 @@ Currently looking for someone to help with:
- Explaining the structure of code
- Refactoring (this is something big)
This project is still in its early stage, so you are not encouraged to deploy it on your own. If you do want to give it a try, please check the [fork of *alphatownsman*](https://github.com/alphatownsman/boofilsic), which is more friendly.
## Sponsor
If you like this project, please consider sponsoring us on [Patreon](https://patreon.com/tertius).
If you like this project, please consider sponsoring NiceDB on [Patreon](https://patreon.com/tertius).

SECURITY.md (new file, 5 lines)

@ -0,0 +1,5 @@
# Security Policy
## Reporting a Vulnerability
Please DM [us on Fediverse](https://mastodon.social/@neodb) or send email to `dev`@`neodb.social` to report a vulnerability. Please do not post publicly or create pr/issues directly. Thank you.


@ -0,0 +1,5 @@
from django.conf import settings
def site_info(request):
return settings.SITE_INFO


@ -12,10 +12,13 @@ https://docs.djangoproject.com/en/3.0/ref/settings/
import os
import psycopg2.extensions
from git import Repo
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# https://docs.djangoproject.com/en/3.2/releases/3.2/#customizing-type-of-auto-created-primary-keys
DEFAULT_AUTO_FIELD = 'django.db.models.AutoField'
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/3.0/howto/deployment/checklist/
@ -38,6 +41,8 @@ INTERNAL_IPS = [
INSTALLED_APPS = [
'django.contrib.admin',
'hijack',
'hijack.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
@ -45,6 +50,9 @@ INSTALLED_APPS = [
'django.contrib.staticfiles',
'django.contrib.humanize',
'django.contrib.postgres',
'django_sass',
'django_rq',
'simple_history',
'markdownx',
'management.apps.ManagementConfig',
'mastodon.apps.MastodonConfig',
@ -54,7 +62,12 @@ INSTALLED_APPS = [
'movies.apps.MoviesConfig',
'music.apps.MusicConfig',
'games.apps.GamesConfig',
'sync.apps.SyncConfig',
'collection.apps.CollectionConfig',
'timeline.apps.TimelineConfig',
'easy_thumbnails',
'user_messages',
'django_slack',
]
MIDDLEWARE = [
@ -65,6 +78,8 @@ MIDDLEWARE = [
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'hijack.middleware.HijackUserMiddleware',
'simple_history.middleware.HistoryRequestMiddleware',
]
ROOT_URLCONF = 'boofilsic.urls'
@ -79,7 +94,9 @@ TEMPLATES = [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
# 'django.contrib.messages.context_processors.messages',
"user_messages.context_processors.messages",
'boofilsic.context_processors.site_info',
],
},
},
@ -95,10 +112,10 @@ if DEBUG:
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': 'test',
'USER': 'donotban',
'PASSWORD': 'donotbansilvousplait',
'HOST': '172.18.116.29',
'NAME': os.environ.get('DB_NAME', 'test'),
'USER': os.environ.get('DB_USER', 'donotban'),
'PASSWORD': os.environ.get('DB_PASSWORD', 'donotbansilvousplait'),
'HOST': os.environ.get('DB_HOST', '172.18.116.29'),
'OPTIONS': {
'client_encoding': 'UTF8',
# 'isolation_level': psycopg2.extensions.ISOLATION_LEVEL_DEFAULT,
@ -184,13 +201,29 @@ STATICFILES_STORAGE = 'django.contrib.staticfiles.storage.ManifestStaticFilesSto
AUTH_USER_MODEL = 'users.User'
SILENCED_SYSTEM_CHECKS = [
"auth.W004", # User.username is non-unique
"admin.E404" # Required by django-user-messages
]
MEDIA_URL = '/media/'
MEDIA_ROOT = os.path.join(BASE_DIR, 'media/')
PROJECT_ROOT = os.path.abspath(os.path.dirname(__name__))
SITE_INFO = {
'site_name': 'NiceDB',
'support_link': 'https://github.com/doubaniux/boofilsic/issues',
'version_hash': None,
'settings_module': os.getenv('DJANGO_SETTINGS_MODULE'),
'sentry_dsn': None,
}
# Mastodon configs
CLIENT_NAME = 'NiceDB'
APP_WEBSITE = 'https://nicedb.org'
REDIRECT_URIS = "https://nicedb.org/users/OAuth2_login/\nhttps://www.nicedb.org/users/OAuth2_login/"
CLIENT_NAME = os.environ.get('APP_NAME', 'NiceDB')
SITE_INFO['site_name'] = os.environ.get('APP_NAME', 'NiceDB')
APP_WEBSITE = os.environ.get('APP_URL', 'https://nicedb.org')
REDIRECT_URIS = APP_WEBSITE + "/users/OAuth2_login/"
# Path to save report related images, ends with slash
REPORT_MEDIA_PATH_ROOT = 'report/'
@ -205,10 +238,23 @@ ALBUM_MEDIA_PATH_ROOT = 'album/'
DEFAULT_ALBUM_IMAGE = os.path.join(ALBUM_MEDIA_PATH_ROOT, 'default.svg')
GAME_MEDIA_PATH_ROOT = 'game/'
DEFAULT_GAME_IMAGE = os.path.join(GAME_MEDIA_PATH_ROOT, 'default.svg')
COLLECTION_MEDIA_PATH_ROOT = 'collection/'
DEFAULT_COLLECTION_IMAGE = os.path.join(COLLECTION_MEDIA_PATH_ROOT, 'default.svg')
SYNC_FILE_PATH_ROOT = 'sync/'
EXPORT_FILE_PATH_ROOT = 'export/'
# Allow user to login via any Mastodon/Pleroma sites
MASTODON_ALLOW_ANY_SITE = False
# Timeout of requests to Mastodon, in seconds
MASTODON_TIMEOUT = 30
MASTODON_CLIENT_SCOPE = 'read write follow'
#use the following if it's a new site
#MASTODON_CLIENT_SCOPE = 'read:accounts read:follows read:search read:blocks read:mutes write:statuses write:media'
MASTODON_LEGACY_CLIENT_SCOPE = 'read write follow'
# Tags for toots posted from this site
MASTODON_TAGS = '#NiceDB #NiceDB%(category)s #NiceDB%(category)s%(type)s'
@ -217,7 +263,7 @@ STAR_SOLID = ':star_solid:'
STAR_HALF = ':star_half:'
STAR_EMPTY = ':star_empty:'
# Default password for each user. since assword is not used any way,
# Default password for each user. since password is not used any way,
# any string that is not empty is ok
DEFAULT_PASSWORD = 'ab7nsm8didusbaqPgq'
@ -231,8 +277,12 @@ ADMIN_URL = 'tertqX7256n7ej8nbv5cwvsegdse6w7ne5rHd'
LUMINATI_USERNAME = 'lum-customer-hl_nw4tbv78-zone-static'
LUMINATI_PASSWORD = 'nsb7te9bw0ney'
SCRAPING_TIMEOUT = 90
# ScraperAPI api key
SCRAPERAPI_KEY = 'wnb3794v675b8w475h0e8hr7tyge'
PROXYCRAWL_KEY = None
SCRAPESTACK_KEY = None
# Spotify credentials
SPOTIFY_CREDENTIAL = "NzYzNkYTE6MGQ0ODY0NTY2Y2b3n645sdfgAyY2I1ljYjg3Nzc0MjIwODQ0ZWE="
@ -240,6 +290,17 @@ SPOTIFY_CREDENTIAL = "NzYzNkYTE6MGQ0ODY0NTY2Y2b3n645sdfgAyY2I1ljYjg3Nzc0MjIwODQ0
# IMDb API service https://imdb-api.com/
IMDB_API_KEY = "k23fwewff23"
# The Movie Database (TMDB) API Keys
TMDB_API3_KEY = "deadbeef"
TMDB_API4_KEY = "deadbeef.deadbeef.deadbeef"
# Google Books API Key
GOOGLE_API_KEY = 'deadbeef-deadbeef-deadbeef'
# IGDB
IGDB_CLIENT_ID = 'deadbeef'
IGDB_ACCESS_TOKEN = 'deadbeef'
# Thumbnail setting
# It is possible to optimize the image size even more: https://easy-thumbnails.readthedocs.io/en/latest/ref/optimize/
THUMBNAIL_ALIASES = {
@ -257,3 +318,47 @@ if DEBUG:
# https://django-debug-toolbar.readthedocs.io/en/latest/
# maybe benchmarking before deployment
REDIS_HOST = os.environ.get('REDIS_HOST', '127.0.0.1')
RQ_QUEUES = {
'mastodon': {
'HOST': REDIS_HOST,
'PORT': 6379,
'DB': 0,
'DEFAULT_TIMEOUT': -1,
},
'export': {
'HOST': REDIS_HOST,
'PORT': 6379,
'DB': 0,
'DEFAULT_TIMEOUT': -1,
},
'doufen': {
'HOST': REDIS_HOST,
'PORT': 6379,
'DB': 0,
'DEFAULT_TIMEOUT': -1,
}
}
RQ_SHOW_ADMIN_LINK = True
SEARCH_INDEX_NEW_ONLY = False
SEARCH_BACKEND = None
# SEARCH_BACKEND = 'MEILISEARCH'
# MEILISEARCH_SERVER = 'http://127.0.0.1:7700'
# MEILISEARCH_KEY = 'deadbeef'
# SEARCH_BACKEND = 'TYPESENSE'
# TYPESENSE_CONNECTION = {
# 'api_key': 'deadbeef',
# 'nodes': [{
# 'host': 'localhost',
# 'port': '8108',
# 'protocol': 'http'
# }],
# 'connection_timeout_seconds': 2
# }
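
The RQ_QUEUES block above backs the commits that move heavy work onto background workers ("export marks via rq", "use rq to manage doufen import", "add django command to manage rq jobs"). The project's own task functions are not part of this excerpt, so the sketch below uses a placeholder task; it only illustrates how a job lands on one of the queues declared here via django-rq:

import django_rq

def export_marks_task(user_id):
    # Placeholder body; the real export job is defined elsewhere in the project.
    print(f"exporting marks for user {user_id}")

def enqueue_export(user_id):
    # "export" must match a queue name declared in RQ_QUEUES above.
    queue = django_rq.get_queue("export")
    return queue.enqueue(export_marks_task, user_id)

Jobs queued this way become visible in the admin through the django_rq.urls route that the urls.py hunk below mounts under ADMIN_URL + '-rq/'.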


@ -27,10 +27,16 @@ urlpatterns = [
path('movies/', include('movies.urls')),
path('music/', include('music.urls')),
path('games/', include('games.urls')),
path('collections/', include('collection.urls')),
path('timeline/', include('timeline.urls')),
path('sync/', include('sync.urls')),
path('announcement/', include('management.urls')),
path('hijack/', include('hijack.urls')),
path('', include('common.urls')),
]
urlpatterns += [
path(settings.ADMIN_URL + '-rq/', include('django_rq.urls'))
]
if settings.DEBUG:


@ -1,7 +1,8 @@
from django.contrib import admin
from .models import *
from simple_history.admin import SimpleHistoryAdmin
admin.site.register(Book)
admin.site.register(Book, SimpleHistoryAdmin)
admin.site.register(BookMark)
admin.site.register(BookReview)
admin.site.register(BookTag)


@ -3,3 +3,8 @@ from django.apps import AppConfig
class BooksConfig(AppConfig):
name = 'books'
def ready(self):
from common.index import Indexer
from .models import Book
Indexer.update_model_indexable(Book)
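
ready() registers Book with the project's Indexer from common.index, which is what the "write index to meilisearch" and "support typesense backend" commits hook into; the Indexer wrapper itself is not included in this diff. Independent of that wrapper, the raw meilisearch-python calls such an indexer typically sits on top of look roughly like this (the server address and key mirror the commented MEILISEARCH_* settings earlier, and the index name and document fields are purely illustrative):

import meilisearch

client = meilisearch.Client("http://127.0.0.1:7700", "deadbeef")
index = client.index("items")
# Documents are indexed asynchronously, so a search issued immediately
# after add_documents() may not see the new entries yet.
index.add_documents([{"id": 1, "title": "Dune", "category": "book"}])
print(index.search("dune")["hits"])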


@ -1,17 +1,12 @@
from django import forms
from django.utils.translation import gettext_lazy as _
from .models import Book, BookMark, BookReview
from .models import Book, BookMark, BookReview, BookMarkStatusTranslation
from common.models import MarkStatusEnum
from common.forms import *
def BookMarkStatusTranslator(status):
trans_dict = {
MarkStatusEnum.DO.value: _("在读"),
MarkStatusEnum.WISH.value: _("想读"),
MarkStatusEnum.COLLECT.value: _("读过")
}
return trans_dict[status]
return BookMarkStatusTranslation[status]
class BookForm(forms.ModelForm):
@ -96,11 +91,8 @@ class BookMarkForm(MarkForm):
'status',
'rating',
'text',
'is_private',
'visibility',
]
labels = {
'rating': _("评分"),
}
widgets = {
'book': forms.TextInput(attrs={"hidden": ""}),
}
@ -115,14 +107,8 @@ class BookReviewForm(ReviewForm):
'book',
'title',
'content',
'is_private'
'visibility'
]
labels = {
'book': "",
'title': _("标题"),
'content': _("正文"),
'share_to_mastodon': _("分享到长毛象")
}
widgets = {
'book': forms.TextInput(attrs={"hidden": ""}),
}


@ -0,0 +1,200 @@
from django.core.management.base import BaseCommand
from django.core.files.uploadedfile import SimpleUploadedFile
from django.conf import settings
from common.scraper import *
from books.models import Book
from books.forms import BookForm
import requests
import re
import filetype
from lxml import html
from PIL import Image
from io import BytesIO
class DoubanPatcherMixin:
@classmethod
def download_page(cls, url, headers):
url = cls.get_effective_url(url)
r = None
error = 'DoubanScrapper: error occured when downloading ' + url
content = None
def get(url, timeout):
nonlocal r
# print('Douban GET ' + url)
try:
r = requests.get(url, timeout=timeout)
except Exception as e:
r = requests.Response()
r.status_code = f"Exception when GET {url} {e}" + url
# print('Douban CODE ' + str(r.status_code))
return r
def check_content():
nonlocal r, error, content
content = None
if r.status_code == 200:
content = r.content.decode('utf-8')
if content.find('关于豆瓣') == -1:
# with open('/tmp/temp.html', 'w', encoding='utf-8') as fp:
# fp.write(content)
content = None
error = error + 'Content not authentic' # response is garbage
elif re.search('不存在[^<]+</title>', content, re.MULTILINE):
content = None
error = error + 'Not found or hidden by Douban'
else:
error = error + str(r.status_code)
def fix_wayback_links():
nonlocal content
# fix links
content = re.sub(r'href="http[^"]+http', r'href="http', content)
# https://img9.doubanio.com/view/subject/{l|m|s}/public/s1234.jpg
content = re.sub(r'src="[^"]+/(s\d+\.\w+)"',
r'src="https://img9.doubanio.com/view/subject/m/public/\1"', content)
# https://img9.doubanio.com/view/photo/s_ratio_poster/public/p2681329386.jpg
# https://img9.doubanio.com/view/photo/{l|m|s}/public/p1234.webp
content = re.sub(r'src="[^"]+/(p\d+\.\w+)"',
r'src="https://img9.doubanio.com/view/photo/m/public/\1"', content)
# Wayback Machine: get latest available
def wayback():
nonlocal r, error, content
error = error + '\nWayback: '
get('http://archive.org/wayback/available?url=' + url, 10)
if r.status_code == 200:
w = r.json()
if w['archived_snapshots'] and w['archived_snapshots']['closest']:
get(w['archived_snapshots']['closest']['url'], 10)
check_content()
if content is not None:
fix_wayback_links()
else:
error = error + 'No snapshot available'
else:
error = error + str(r.status_code)
# Wayback Machine: guess via CDX API
def wayback_cdx():
nonlocal r, error, content
error = error + '\nWayback: '
get('http://web.archive.org/cdx/search/cdx?url=' + url, 10)
if r.status_code == 200:
dates = re.findall(r'[^\s]+\s+(\d+)\s+[^\s]+\s+[^\s]+\s+\d+\s+[^\s]+\s+\d{5,}',
r.content.decode('utf-8'))
# assume snapshots whose size >9999 contain real content, use the latest one of them
if len(dates) > 0:
get('http://web.archive.org/web/' + dates[-1] + '/' + url, 10)
check_content()
if content is not None:
fix_wayback_links()
else:
error = error + 'No snapshot available'
else:
error = error + str(r.status_code)
def latest():
nonlocal r, error, content
if settings.SCRAPESTACK_KEY is None:
error = error + '\nDirect: '
get(url, 60)
else:
error = error + '\nScrapeStack: '
get(f'http://api.scrapestack.com/scrape?access_key={settings.SCRAPESTACK_KEY}&url={url}', 60)
check_content()
wayback_cdx()
if content is None:
latest()
if content is None:
logger.error(error)
content = '<html />'
return html.fromstring(content)
@classmethod
def download_image(cls, url, item_url=None):
if url is None:
logger.error(f"Douban: no image url for {item_url}")
return None, None
raw_img = None
ext = None
dl_url = url
if settings.SCRAPESTACK_KEY is not None:
dl_url = f'http://api.scrapestack.com/scrape?access_key={settings.SCRAPESTACK_KEY}&url={url}'
try:
img_response = requests.get(dl_url, timeout=90)
if img_response.status_code == 200:
raw_img = img_response.content
img = Image.open(BytesIO(raw_img))
img.load() # corrupted image will trigger exception
content_type = img_response.headers.get('Content-Type')
ext = filetype.get_type(mime=content_type.partition(';')[0].strip()).extension
else:
logger.error(f"Douban: download image failed {img_response.status_code} {dl_url} {item_url}")
# raise RuntimeError(f"Douban: download image failed {img_response.status_code} {dl_url}")
except Exception as e:
raw_img = None
ext = None
logger.error(f"Douban: download image failed {e} {dl_url} {item_url}")
if raw_img is None and settings.SCRAPESTACK_KEY is not None:
try:
img_response = requests.get(dl_url, timeout=90)
if img_response.status_code == 200:
raw_img = img_response.content
img = Image.open(BytesIO(raw_img))
img.load() # corrupted image will trigger exception
content_type = img_response.headers.get('Content-Type')
ext = filetype.get_type(mime=content_type.partition(';')[0].strip()).extension
else:
logger.error(f"Douban: download image failed {img_response.status_code} {dl_url} {item_url}")
except Exception as e:
raw_img = None
ext = None
logger.error(f"Douban: download image failed {e} {dl_url} {item_url}")
return raw_img, ext
class DoubanBookPatcher(DoubanPatcherMixin, AbstractScraper):
site_name = SourceSiteEnum.DOUBAN.value
host = 'book.douban.com'
data_class = Book
form_class = BookForm
regex = re.compile(r"https://book\.douban\.com/subject/\d+/{0,1}")
def scrape(self, url):
headers = DEFAULT_REQUEST_HEADERS.copy()
headers['Host'] = self.host
content = self.download_page(url, headers)
img_url_elem = content.xpath("//*[@id='mainpic']/a/img/@src")
img_url = img_url_elem[0].strip() if img_url_elem else None
raw_img, ext = self.download_image(img_url, url)
return raw_img, ext
class Command(BaseCommand):
help = 'fix cover image'
def add_arguments(self, parser):
parser.add_argument('threadId', type=int, help='% 8')
def handle(self, *args, **options):
t = int(options['threadId'])
for m in Book.objects.filter(cover='book/default.svg', source_site='douban'):
if m.id % 8 == t:
self.stdout.write(f'Re-fetching {m.source_url}')
try:
raw_img, img_ext = DoubanBookPatcher.scrape(m.source_url)
if img_ext is not None:
m.cover = SimpleUploadedFile('temp.' + img_ext, raw_img)
m.save()
self.stdout.write(self.style.SUCCESS(f'Saved {m.source_url}'))
else:
self.stdout.write(self.style.ERROR(f'Skipped {m.source_url}'))
except Exception as e:
print(e)


@ -1,98 +1,184 @@
import uuid
import django.contrib.postgres.fields as postgres
from django.utils.translation import ugettext_lazy as _
from django.utils.translation import gettext_lazy as _
from django.db import models
from django.core.serializers.json import DjangoJSONEncoder
from django.shortcuts import reverse
from common.models import Entity, Mark, Review, Tag
from common.models import Entity, Mark, Review, Tag, MarkStatusEnum
from common.utils import GenerateDateUUIDMediaFilePath
from boofilsic.settings import BOOK_MEDIA_PATH_ROOT, DEFAULT_BOOK_IMAGE
from django.utils import timezone
from django.conf import settings
from django.db.models import Q
from simple_history.models import HistoricalRecords
BookMarkStatusTranslation = {
MarkStatusEnum.DO.value: _("在读"),
MarkStatusEnum.WISH.value: _("想读"),
MarkStatusEnum.COLLECT.value: _("读过")
}
def book_cover_path(instance, filename):
return GenerateDateUUIDMediaFilePath(instance, filename, BOOK_MEDIA_PATH_ROOT)
return GenerateDateUUIDMediaFilePath(instance, filename, settings.BOOK_MEDIA_PATH_ROOT)
class Book(Entity):
# widely recognized name, usually in Chinese
title = models.CharField(_("title"), max_length=200)
subtitle = models.CharField(_("subtitle"), blank=True, default='', max_length=200)
title = models.CharField(_("title"), max_length=500)
subtitle = models.CharField(
_("subtitle"), blank=True, default='', max_length=500)
# original name, for books in foreign language
orig_title = models.CharField(_("original title"), blank=True, default='', max_length=200)
orig_title = models.CharField(
_("original title"), blank=True, default='', max_length=500)
author = postgres.ArrayField(
models.CharField(_("author"), blank=True, default='', max_length=100),
models.CharField(_("author"), blank=True, default='', max_length=200),
null=True,
blank=True,
default=list,
)
translator = postgres.ArrayField(
models.CharField(_("translator"), blank=True, default='', max_length=100),
models.CharField(_("translator"), blank=True,
default='', max_length=200),
null=True,
blank=True,
default=list,
)
language = models.CharField(_("language"), blank=True, default='', max_length=10)
pub_house = models.CharField(_("publishing house"), blank=True, default='', max_length=200)
language = models.CharField(
_("language"), blank=True, default='', max_length=50)
pub_house = models.CharField(
_("publishing house"), blank=True, default='', max_length=200)
pub_year = models.IntegerField(_("published year"), null=True, blank=True)
pub_month = models.IntegerField(_("published month"), null=True, blank=True)
binding = models.CharField(_("binding"), blank=True, default='', max_length=50)
pub_month = models.IntegerField(
_("published month"), null=True, blank=True)
binding = models.CharField(
_("binding"), blank=True, default='', max_length=200)
# since data origin is not formatted and might be CNY USD or other currency, use char instead
price = models.CharField(_("pricing"), blank=True, default='', max_length=50)
price = models.CharField(_("pricing"), blank=True,
default='', max_length=50)
pages = models.PositiveIntegerField(_("pages"), null=True, blank=True)
isbn = models.CharField(_("ISBN"), blank=True, null=False, max_length=20, db_index=True, default='')
isbn = models.CharField(_("ISBN"), blank=True, null=False,
max_length=20, db_index=True, default='')
# to store previously scrapped data
cover = models.ImageField(_("cover picture"), upload_to=book_cover_path, default=DEFAULT_BOOK_IMAGE, blank=True)
cover = models.ImageField(_("cover picture"), upload_to=book_cover_path,
default=settings.DEFAULT_BOOK_IMAGE, blank=True)
contents = models.TextField(blank=True, default="")
history = HistoricalRecords()
class Meta:
# more info: https://docs.djangoproject.com/en/2.2/ref/models/options/
# set managed=False if the model represents an existing table or
# a database view that has been created by some other means.
# check the link above for further info
# managed = True
# db_table = 'book'
constraints = [
models.CheckConstraint(check=models.Q(pub_year__gte=0), name='pub_year_lowerbound'),
models.CheckConstraint(check=models.Q(pub_month__lte=12), name='pub_month_upperbound'),
models.CheckConstraint(check=models.Q(pub_month__gte=1), name='pub_month_lowerbound'),
models.CheckConstraint(check=models.Q(
pub_year__gte=0), name='pub_year_lowerbound'),
models.CheckConstraint(check=models.Q(
pub_month__lte=12), name='pub_month_upperbound'),
models.CheckConstraint(check=models.Q(
pub_month__gte=1), name='pub_month_lowerbound'),
]
def __str__(self):
return self.title
def get_json(self):
r = {
'subtitle': self.subtitle,
'original_title': self.orig_title,
'author': self.author,
'translator': self.translator,
'publisher': self.pub_house,
'publish_year': self.pub_year,
'publish_month': self.pub_month,
'language': self.language,
'isbn': self.isbn,
}
r.update(super().get_json())
return r
def get_absolute_url(self):
return reverse("books:retrieve", args=[self.id])
@property
def wish_url(self):
return reverse("books:wish", args=[self.id])
def get_tags_manager(self):
return self.book_tags
def get_related_books(self):
qs = Q(orig_title=self.title)
if self.isbn:
qs = qs | Q(isbn=self.isbn)
if self.orig_title:
qs = qs | Q(title=self.orig_title)
qs = qs | Q(orig_title=self.orig_title)
qs = qs & ~Q(id=self.id)
return Book.objects.filter(qs)
def get_identicals(self):
qs = Q(orig_title=self.title)
if self.isbn:
qs = Q(isbn=self.isbn)
# qs = qs & ~Q(id=self.id)
return Book.objects.filter(qs)
else:
return [self] # Book.objects.filter(id=self.id)
@property
def verbose_category_name(self):
return _("书籍")
@property
def mark_class(self):
return BookMark
@property
def tag_class(self):
return BookTag
class BookMark(Mark):
book = models.ForeignKey(Book, on_delete=models.CASCADE, related_name='book_marks', null=True)
book = models.ForeignKey(
Book, on_delete=models.CASCADE, related_name='book_marks', null=True)
class Meta:
constraints = [
models.UniqueConstraint(fields=['owner', 'book'], name="unique_book_mark")
models.UniqueConstraint(
fields=['owner', 'book'], name="unique_book_mark")
]
@property
def translated_status(self):
return BookMarkStatusTranslation[self.status]
class BookReview(Review):
book = models.ForeignKey(Book, on_delete=models.CASCADE, related_name='book_reviews', null=True)
book = models.ForeignKey(
Book, on_delete=models.CASCADE, related_name='book_reviews', null=True)
class Meta:
constraints = [
models.UniqueConstraint(fields=['owner', 'book'], name="unique_book_review")
models.UniqueConstraint(
fields=['owner', 'book'], name="unique_book_review")
]
@property
def url(self):
return settings.APP_WEBSITE + reverse("books:retrieve_review", args=[self.id])
@property
def item(self):
return self.book
class BookTag(Tag):
book = models.ForeignKey(Book, on_delete=models.CASCADE, related_name='book_tags', null=True)
mark = models.ForeignKey(BookMark, on_delete=models.CASCADE, related_name='bookmark_tags', null=True)
book = models.ForeignKey(
Book, on_delete=models.CASCADE, related_name='book_tags', null=True)
mark = models.ForeignKey(
BookMark, on_delete=models.CASCADE, related_name='bookmark_tags', null=True)
class Meta:
constraints = [
models.UniqueConstraint(fields=['content', 'mark'], name="unique_bookmark_tag")
models.UniqueConstraint(
fields=['content', 'mark'], name="unique_bookmark_tag")
]
@property
def item(self):
return self.book
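
The HistoricalRecords() field added to Book above comes from django-simple-history (the "keep change history" commits), working together with SimpleHistoryAdmin in books/admin.py and the simple_history app and HistoryRequestMiddleware entries in the settings hunk earlier. A minimal sketch of reading the recorded versions back, assuming at least one Book has been saved:

from books.models import Book

book = Book.objects.first()
# Each save creates a historical row; the auto-added `history` manager lists
# them newest first, with the acting user filled in by HistoryRequestMiddleware.
for record in book.history.all():
    print(record.history_date, record.history_user, record.title)

# A past version can be materialized as an unsaved Book instance for comparison.
if book.history.count() > 1:
    previous_version = book.history.all()[1].instance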


@ -10,8 +10,8 @@
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>{% trans 'NiceDB - ' %}{{ title }}</title>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script>
<title>{{ site_name }} - {{ title }}</title>
<script src="https://static.neodb.social/cdnjs.cloudflare.com/ajax/libs/jquery/3.6.0/jquery.min.js"></script>
<link rel="stylesheet" href="{% static 'css/boofilsic.min.css' %}">
</head>
@ -22,8 +22,24 @@
<section id="content" class="container">
<div class="grid">
{% if is_update and form.source_site.value != 'in-site' %}
<div style="float:right;padding-left:16px">
<div class="aside-section-wrapper">
<div class="action-panel">
<div class="action-panel__label">{% trans '源网站' %}: <a href="{{ form.source_url.value }}">{{ form.source_site.value }}</a></div>
<div class="action-panel__button-group">
<form method="post" action="{% url 'books:rescrape' form.id.value %}">
{% csrf_token %}
<input class="button" type="submit" value="{% trans '从源网站重新抓取' %}">
</form>
</div>
</div>
</div>
</div>
{% endif %}
<div class="single-section-wrapper" id="main">
<a href="{% url 'books:scrape' %}" class="single-section-wrapper__link single-section-wrapper__link--secondary">{% trans '>>> 试试一键剽取~ <<<' %}</a>
{% comment %} <a href="{% url 'books:scrape' %}" class="single-section-wrapper__link single-section-wrapper__link--secondary">{% trans '>>> 试试一键剽取~ <<<' %}</a> {% endcomment %}
<form class="entity-form" action="{{ submit_url }}" method="post" enctype="multipart/form-data">
{% csrf_token %}
{{ form.media }}
@ -38,12 +54,6 @@
</div>
{% comment %}
<div id="oauth2Token" hidden="true">{% oauth_token %}</div>
<div id="mastodonURI" hidden="true">{% mastodon request.user.mastodon_site %}</div>
<!--current user mastodon id-->
<div id="userMastodonID" hidden="true">{{ user.mastodon_id }}</div>
{% endcomment %}
<script>


@ -12,8 +12,8 @@
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>{% trans 'NiceDB - ' %}{{ title }}</title>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script>
<title>{{ site_name }} - {{ title }}</title>
<script src="https://static.neodb.social/cdnjs.cloudflare.com/ajax/libs/jquery/3.6.0/jquery.min.js"></script>
<script src="{% static 'js/create_update_review.js' %}"></script>
<script src="{% static 'lib/js/rating-star.js' %}"></script>
<link rel="stylesheet" href="{% static 'lib/css/rating-star.css' %}">
@ -80,7 +80,7 @@
<div class="review-form__option">
<div class="review-form__visibility-radio">
{{ form.is_private.label }}{{ form.is_private }}
{{ form.visibility.label }}{{ form.visibility }}
</div>
<div class="review-form__share-checkbox">
{{ form.share_to_mastodon }}{{ form.share_to_mastodon.label }}
@ -100,12 +100,6 @@
{% include "partial/_footer.html" %}
</div>
{% comment %}
<div id="oauth2Token" hidden="true">{% oauth_token %}</div>
<div id="mastodonURI" hidden="true">{% mastodon request.user.mastodon_site %}</div>
<!--current user mastodon id-->
<div id="userMastodonID" hidden="true">{{ user.mastodon_id }}</div>
{% endcomment %}
<script>


@ -11,8 +11,8 @@
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>{% trans 'NiceDB - 删除图书' %}</title>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script>
<title>{{ site_name }} - {% trans '删除图书' %}</title>
<script src="https://static.neodb.social/cdnjs.cloudflare.com/ajax/libs/jquery/3.6.0/jquery.min.js"></script>
<script src="{% static 'lib/js/rating-star.js' %}"></script>
<script src="{% static 'js/rating-star-readonly.js' %}"></script>
<link rel="stylesheet" href="{% static 'lib/css/rating-star.css' %}">
@ -55,7 +55,7 @@
{% if book.last_editor %}
<div>
{% trans '最近编辑者:' %}
<a href="{% url 'users:home' book.last_editor.id %}">
<a href="{% url 'users:home' book.last_editor.mastodon_username %}">
<span>{{ book.last_editor | default:"" }}</span>
</a>
</div>
@ -89,12 +89,6 @@
</div>
{% comment %}
<div id="oauth2Token" hidden="true">{% oauth_token %}</div>
<div id="mastodonURI" hidden="true">{% mastodon request.user.mastodon_site %}</div>
<!--current user mastodon id-->
<div id="userMastodonID" hidden="true">{{ user.mastodon_id }}</div>
{% endcomment %}
<script>


@ -10,8 +10,8 @@
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>{% trans 'NiceDB - 删除评论' %}</title>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script>
<title>{{ site_name }} - {% trans '删除评论' %}</title>
<script src="https://static.neodb.social/cdnjs.cloudflare.com/ajax/libs/jquery/3.6.0/jquery.min.js"></script>
<link rel="stylesheet" href="{% static 'lib/css/rating-star.css' %}">
<link rel="stylesheet" href="{% static 'css/boofilsic.min.css' %}">
@ -35,7 +35,7 @@
<h5 class="review-head__title">
{{ review.title }}
</h5>
{% if review.is_private %}
{% if review.visibility > 0 %}
<span class="icon-lock"><svg xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 20 20"><svg xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 20 20">
@ -47,7 +47,7 @@
<div class="review-head__body">
<div class="review-head__info">
<a href="{% url 'users:home' review.owner.id %}"
<a href="{% url 'users:home' review.owner.mastodon_username %}"
class="review-head__owner-link">{{ review.owner.username }}</a>
{% if mark %}
@ -90,12 +90,6 @@
</div>
{% comment %}
<div id="oauth2Token" hidden="true">{% oauth_token %}</div>
<div id="mastodonURI" hidden="true">{% mastodon request.user.mastodon_site %}</div>
<!--current user mastodon id-->
<div id="userMastodonID" hidden="true">{{ user.mastodon_id }}</div>
{% endcomment %}
<script>


@ -1,9 +1,12 @@
{% load static %}
{% load i18n %}
{% load l10n %}
{% load humanize %}
{% load admin_url %}
{% load mastodon %}
{% load oauth_token %}
{% load truncate %}
{% load strip_scheme %}
{% load thumb %}
<!DOCTYPE html>
<html lang="en">
@ -11,11 +14,11 @@
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta property="og:title" content="NiceDB书 - {{ book.title }}">
<meta property="og:title" content="{{ site_name }}书 - {{ book.title }}">
<meta property="og:type" content="book">
<meta property="og:url" content="{{ request.build_absolute_uri }}">
<meta property="og:image" content="{{ request.scheme }}://{{ request.get_host }}{{ book.cover.url }}">
<meta property="og:site_name" content="NiceDB">
<meta property="og:site_name" content="{{ site_name }}">
<meta property="og:description" content="{{ book.brief }}">
{% if book.author %}
<meta property="og:book:author" content="{% for author in book.author %}{{ author }}{% if not forloop.last %},{% endif %}{% endfor %}">
@ -24,12 +27,12 @@
<meta property="og:book:isbn" content="{{ book.isbn }}">
{% endif %}
<title>{% trans 'NiceDB - 书籍详情' %} | {{ book.title }}</title>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script>
<title>{{ site_name }} - {% trans '书籍详情' %} | {{ book.title }}</title>
{% include "partial/_common_libs.html" with jquery=1 %}
<script src="{% static 'lib/js/rating-star.js' %}"></script>
<script src="{% static 'js/detail.js' %}"></script>
<link rel="stylesheet" href="{% static 'css/boofilsic.min.css' %}">
<link rel="stylesheet" href="{% static 'lib/css/rating-star.css' %}">
</head>
<body>
@ -57,11 +60,12 @@
<div class="entity-detail__fields">
<div class="entity-detail__rating">
{% if book.rating %}
{% if book.rating and book.rating_number >= 5 %}
<span class="entity-detail__rating-star rating-star" data-rating-score="{{ book.rating | floatformat:"0" }}"></span>
<span class="entity-detail__rating-score"> {{ book.rating }} </span>
<small>({{ book.rating_number }}人评分)</small>
{% else %}
<span> {% trans '评分:暂无评分' %}</span>
<span> {% trans '评分:评分人数不足' %}</span>
{% endif %}
</div>
<div>{% if book.isbn %}{% trans 'ISBN' %}{{ book.isbn }}{% endif %}</div>
@ -96,7 +100,7 @@
{% if book.last_editor %}
<div>{% trans '最近编辑者:' %}<a href="{% url 'users:home' book.last_editor.id %}">{{ book.last_editor | default:"" }}</a></div>
<div>{% trans '最近编辑者:' %}<a href="{% url 'users:home' book.last_editor.mastodon_username %}">{{ book.last_editor | default:"" }}</a></div>
{% endif %}
<div>
@ -148,46 +152,27 @@
<div class="entity-marks">
<h5 class="entity-marks__title">{% trans '这本书的标记' %}</h5>
{% if mark_list_more %}
<a href="{% url 'books:retrieve_mark_list' book.id %}" class="entity-marks__more-link">{% trans '更多' %}</a>
{% endif %}
{% if mark_list %}
<ul class="entity-marks__mark-list">
{% for others_mark in mark_list %}
<li class="entity-marks__mark">
<a href="{% url 'users:home' others_mark.owner.id %}" class="entity-marks__owner-link">{{ others_mark.owner.username }}</a>
<span>{{ others_mark.get_status_display }}</span>
{% if others_mark.rating %}
<span class="entity-marks__rating-star rating-star" data-rating-score="{{ others_mark.rating | floatformat:"0" }}"></span>
{% endif %}
{% if others_mark.is_private %}
<span class="icon-lock"><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 20 20"><path d="M17,8.48h-.73V6.27a6.27,6.27,0,1,0-12.53,0V8.48H3a.67.67,0,0,0-.67.67V19.33A.67.67,0,0,0,3,20H17a.67.67,0,0,0,.67-.67V9.15A.67.67,0,0,0,17,8.48ZM6.42,6.27h0a3.57,3.57,0,0,1,7.14,0h0V8.48H6.42Z"/></svg></span>
{% endif %}
<span class="entity-marks__mark-time">{{ others_mark.edited_time }}</span>
{% if others_mark.text %}
<p class="entity-marks__mark-content">{{ others_mark.text }}</p>
{% endif %}
</li>
{% endfor %}
</ul>
{% else %}
<div>{% trans '暂无标记' %}</div>
{% endif %}
<a href="{% url 'books:retrieve_mark_list' book.id %}" class="entity-marks__more-link">{% trans '全部标记' %}</a>
<a href="{% url 'books:retrieve_mark_list' book.id 1 %}" class="entity-marks__more-link">关注的人的标记</a>
{% include "partial/mark_list.html" with mark_list=mark_list current_item=book %}
</div>
<div class="entity-reviews">
<h5 class="entity-reviews__title">{% trans '这本书的评论' %}</h5>
{% if review_list_more %}
<a href="{% url 'books:retrieve_review_list' book.id %}" class="entity-reviews__more-link">{% trans '更多' %}</a>
<a href="{% url 'books:retrieve_review_list' book.id %}" class="entity-reviews__more-link">{% trans '全部评论' %}</a>
{% endif %}
{% if review_list %}
<ul class="entity-reviews__review-list">
{% for others_review in review_list %}
<li class="entity-reviews__review">
<a href="{% url 'users:home' others_review.owner.id %}" class="entity-reviews__owner-link">{{ others_review.owner.username }}</a>
{% if others_review.is_private %}
<a href="{% url 'users:home' others_review.owner.mastodon_username %}" class="entity-reviews__owner-link">{{ others_review.owner.username }}</a>
{% if others_review.visibility > 0 %}
<span class="icon-lock"><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 20 20"><path d="M17,8.48h-.73V6.27a6.27,6.27,0,1,0-12.53,0V8.48H3a.67.67,0,0,0-.67.67V19.33A.67.67,0,0,0,3,20H17a.67.67,0,0,0,.67-.67V9.15A.67.67,0,0,0,17,8.48ZM6.42,6.27h0a3.57,3.57,0,0,1,7.14,0h0V8.48H6.42Z"/></svg></span>
{% endif %}
<span class="entity-reviews__review-time">{{ others_review.edited_time }}</span>
{% if others_review.book != book %}
<span class="entity-reviews__review-time source-label"><a class="entity-reviews__review-time" href="{% url 'books:retrieve' others_review.book.id %}">{{ others_review.book.get_source_site_display }}</a></span>
{% endif %}
<span class="entity-reviews__review-title"> <a href="{% url 'books:retrieve_review' others_review.id %}">{{ others_review.title }}</a></span>
<span>{{ others_review.get_plain_content | truncate:100 }}</span>
</li>
@ -202,7 +187,6 @@
<div class="grid__aside" id="aside">
<div class="aside-section-wrapper">
{% if mark %}
<div class="mark-panel">
@ -212,7 +196,7 @@
<span class="mark-panel__rating-star rating-star" data-rating-score="{{ mark.rating | floatformat:"0" }}"></span>
{% endif %}
{% endif %}
{% if mark.is_private %}
{% if mark.visibility > 0 %}
<span class="icon-lock"><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 20 20"><path d="M17,8.48h-.73V6.27a6.27,6.27,0,1,0-12.53,0V8.48H3a.67.67,0,0,0-.67.67V19.33A.67.67,0,0,0,3,20H17a.67.67,0,0,0,.67-.67V9.15A.67.67,0,0,0,17,8.48ZM6.42,6.27h0a3.57,3.57,0,0,1,7.14,0h0V8.48H6.42Z"/></svg></span>
{% endif %}
<span class="mark-panel__actions">
@ -224,7 +208,7 @@
</span>
<div class="mark-panel__clear"></div>
<div class="mark-panel__time">{{ mark.edited_time }}</div>
<div class="mark-panel__time">{{ mark.created_time }}</div>
{% if mark.text %}
<p class="mark-panel__text">{{ mark.text }}</p>
@ -247,7 +231,6 @@
</div>
</div>
{% endif %}
</div>
<div class="aside-section-wrapper">
@ -255,7 +238,7 @@
<div class="review-panel">
<span class="review-panel__label">{% trans '我的评论' %}</span>
{% if review.is_private %}
{% if review.visibility > 0 %}
<span class="icon-lock"><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 20 20"><path d="M17,8.48h-.73V6.27a6.27,6.27,0,1,0-12.53,0V8.48H3a.67.67,0,0,0-.67.67V19.33A.67.67,0,0,0,3,20H17a.67.67,0,0,0,.67-.67V9.15A.67.67,0,0,0,17,8.48ZM6.42,6.27h0a3.57,3.57,0,0,1,7.14,0h0V8.48H6.42Z"/></svg></span>
{% endif %}
@ -285,6 +268,52 @@
{% endif %}
</div>
{% if book.get_related_books.count > 0 %}
<div class="aside-section-wrapper">
<div class="action-panel">
<div class="action-panel__label">{% trans '相关书目' %}</div>
<div >
{% for b in book.get_related_books %}
<p>
<a href="{% url 'books:retrieve' b.id %}">{{ b.title }}</a>
<small>({{ b.pub_house }} {{ b.pub_year }})</small>
<span class="source-label source-label__{{ b.source_site }}">{{ b.get_source_site_display }}</span>
</p>
{% endfor %}
</div>
</div>
</div>
{% endif %}
{% if book.isbn %}
<div class="aside-section-wrapper">
<div class="action-panel">
<div class="action-panel__label">{% trans '借阅或购买' %}</div>
<div class="action-panel__button-group">
<a class="action-panel__button" target="_blank" href="https://www.worldcat.org/isbn/{{ book.isbn }}">{% trans 'WorldCat' %}</a>
<a class="action-panel__button" target="_blank" href="https://openlibrary.org/search?isbn={{ book.isbn }}">{% trans 'Open Library' %}</a>
</div>
</div>
</div>
{% endif %}
{% if collection_list %}
<div class="aside-section-wrapper">
<div class="action-panel">
<div class="action-panel__label">{% trans '相关收藏单' %}</div>
<div >
{% for c in collection_list %}
<p>
<a href="{% url 'collection:retrieve' c.id %}">{{ c.title }}</a>
</p>
{% endfor %}
<div class="action-panel__button-group action-panel__button-group--center">
<button class="action-panel__button add-to-list" hx-get="{% url 'collection:add_to_list' 'book' book.id %}" hx-target="body" hx-swap="beforeend">{% trans '添加到收藏单' %}</button>
</div>
</div>
</div>
</div>
{% endif %}
</div>
</div>
</section>
@ -296,7 +325,6 @@
<div id="modals">
<div class="mark-modal modal">
<div class="mark-modal__head">
{% if not mark %}
<style>
.mark-modal__title::after {
@ -344,8 +372,8 @@
<div class="mark-modal__option">
<div class="mark-modal__visibility-radio">
<span>{{ mark_form.is_private.label }}:</span>
{{ mark_form.is_private }}
<span>{{ mark_form.visibility.label }}:
{{ mark_form.visibility }}</span>
</div>
<div class="mark-modal__share-checkbox">
{{ mark_form.share_to_mastodon }}{{ mark_form.share_to_mastodon.label }}


@ -12,8 +12,8 @@
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>{% trans 'NiceDB - ' %}{{ book.title }}{% trans '的标记' %}</title>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script>
<title>{{ site_name }} - {{ book.title }}{% trans '的标记' %}</title>
<script src="https://static.neodb.social/cdnjs.cloudflare.com/ajax/libs/jquery/3.6.0/jquery.min.js"></script>
<script src="{% static 'lib/js/rating-star.js' %}"></script>
<script src="{% static 'js/rating-star-readonly.js' %}"></script>
<link rel="stylesheet" href="{% static 'css/boofilsic.min.css' %}">
@ -33,38 +33,7 @@
<h5 class="entity-marks__title entity-marks__title--stand-alone">
<a href="{% url 'books:retrieve' book.id %}">{{ book.title }}</a>{% trans ' 的标记' %}
</h5>
<ul class="entity-marks__mark-list">
{% for mark in marks %}
<li class="entity-marks__mark entity-marks__mark--wider">
<a href="{% url 'users:home' mark.owner.id %}"
class="entity-marks__owner-link">{{ mark.owner.username }}</a>
<span>{{ mark.get_status_display }}</span>
{% if mark.rating %}
<span class="entity-marks__rating-star rating-star"
data-rating-score="{{ mark.rating | floatformat:"0" }}"></span>
{% endif %}
{% if mark.is_private %}
<span class="icon-lock"><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 20 20">
<path
d="M17,8.48h-.73V6.27a6.27,6.27,0,1,0-12.53,0V8.48H3a.67.67,0,0,0-.67.67V19.33A.67.67,0,0,0,3,20H17a.67.67,0,0,0,.67-.67V9.15A.67.67,0,0,0,17,8.48ZM6.42,6.27h0a3.57,3.57,0,0,1,7.14,0h0V8.48H6.42Z" />
</svg></span>
{% endif %}
<span class="entity-marks__mark-time">{{ mark.edited_time }}</span>
{% if mark.text %}
<p class="entity-marks__mark-content">{{ mark.text }}</p>
{% endif %}
</li>
{% empty %}
<div>
{% trans '无结果' %}
</div>
{% endfor %}
</ul>
{% include "partial/mark_list.html" with mark_list=marks current_item=book %}
</div>
<div class="pagination">
@ -132,12 +101,6 @@
</div>
{% comment %}
<div id="oauth2Token" hidden="true">{% oauth_token %}</div>
<div id="mastodonURI" hidden="true">{% mastodon request.user.mastodon_site %}</div>
<!--current user mastodon id-->
<div id="userMastodonID" hidden="true">{{ user.mastodon_id }}</div>
{% endcomment %}
<script>


@ -11,17 +11,18 @@
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta property="og:title" content="NiceDB书评 - {{ review.title }}">
<meta property="og:title" content="{{ site_name }}书评 - {{ review.title }}">
<meta property="og:type" content="article">
<meta property="og:article:author" content="{{ review.owner.username }}">
<meta property="og:url" content="{{ request.build_absolute_uri }}">
<meta property="og:image" content="{{ request.scheme }}://{{ request.get_host }}{% static 'img/logo_square.svg' %}">
<title>{% trans 'NiceDB - 评论详情' %}</title>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script>
<meta property="og:image" content="{{ book.cover|thumb:'normal' }}">
<title>{{ site_name }}{% trans '书评' %} - {{ review.title }}</title>
<script src="https://static.neodb.social/cdnjs.cloudflare.com/ajax/libs/jquery/3.6.0/jquery.min.js"></script>
<script src="{% static 'lib/js/rating-star.js' %}"></script>
<script src="{% static 'js/rating-star-readonly.js' %}"></script>
<link rel="stylesheet" href="{% static 'lib/css/rating-star.css' %}">
<link rel="stylesheet" href="{% static 'css/boofilsic.min.css' %}">
<link rel="stylesheet" href="{% static 'lib/css/neo.css' %}">
</head>
<body>
@ -37,7 +38,7 @@
<h5 class="review-head__title">
{{ review.title }}
</h5>
{% if review.is_private %}
{% if review.visibility > 0 %}
<span class="icon-lock"><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 20 20">
<path
d="M17,8.48h-.73V6.27a6.27,6.27,0,1,0-12.53,0V8.48H3a.67.67,0,0,0-.67.67V19.33A.67.67,0,0,0,3,20H17a.67.67,0,0,0,.67-.67V9.15A.67.67,0,0,0,17,8.48ZM6.42,6.27h0a3.57,3.57,0,0,1,7.14,0h0V8.48H6.42Z" />
@ -46,7 +47,7 @@
<div class="review-head__body">
<div class="review-head__info">
<a href="{% url 'users:home' review.owner.id %}" class="review-head__owner-link">{{ review.owner.username }}</a>
<a href="{% url 'users:home' review.owner.mastodon_username %}" class="review-head__owner-link">{{ review.owner.username }}</a>
{% if mark %}
@ -71,6 +72,7 @@
{{ form.content }}
</div>
{{ form.media }}
{% csrf_token %}
</div>
</div>
@ -112,16 +114,8 @@
</div>
{% comment %}
<div id="oauth2Token" hidden="true">{% oauth_token %}</div>
<div id="mastodonURI" hidden="true">{% mastodon request.user.mastodon_site %}</div>
<!--current user mastodon id-->
<div id="userMastodonID" hidden="true">{{ user.mastodon_id }}</div>
{% endcomment %}
<script>
$(".markdownx textarea").hide();
</script>
</body>


@ -12,8 +12,8 @@
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>{% trans 'NiceDB - ' %}{{ book.title }}{% trans '的评论' %}</title>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script>
<title>{{ site_name }} - {{ book.title }}{% trans '的评论' %}</title>
<script src="https://static.neodb.social/cdnjs.cloudflare.com/ajax/libs/jquery/3.6.0/jquery.min.js"></script>
<script src="{% static 'lib/js/rating-star.js' %}"></script>
<script src="{% static 'js/rating-star-readonly.js' %}"></script>
<link rel="stylesheet" href="{% static 'lib/css/rating-star.css' %}">
@ -39,12 +39,14 @@
<li class="entity-reviews__review entity-reviews__review--wider">
<a href="{% url 'users:home' review.owner.id %}" class="entity-reviews__owner-link">{{ review.owner.username }}</a>
{% if review.is_private %}
<a href="{% url 'users:home' review.owner.mastodon_username %}" class="entity-reviews__owner-link">{{ review.owner.username }}</a>
{% if review.visibility > 0 %}
<span class="icon-lock"><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 20 20"><path d="M17,8.48h-.73V6.27a6.27,6.27,0,1,0-12.53,0V8.48H3a.67.67,0,0,0-.67.67V19.33A.67.67,0,0,0,3,20H17a.67.67,0,0,0,.67-.67V9.15A.67.67,0,0,0,17,8.48ZM6.42,6.27h0a3.57,3.57,0,0,1,7.14,0h0V8.48H6.42Z"/></svg></span>
{% endif %}
<span class="entity-reviews__review-time">{{ review.edited_time }}</span>
{% if review.book != book %}
<span class="entity-reviews__review-time source-label"><a href="{% url 'books:retrieve' review.book.id %}" class="entity-reviews__review-time">{{ review.book.get_source_site_display }}</a></span>
{% endif %}
<span href="{% url 'books:retrieve_review' review.id %}" class="entity-reviews__review-title"><a href="{% url 'books:retrieve_review' review.id %}">{{ review.title }}</a></span>
@ -119,12 +121,6 @@
</div>
{% comment %}
<div id="oauth2Token" hidden="true">{% oauth_token %}</div>
<div id="mastodonURI" hidden="true">{% mastodon request.user.mastodon_site %}</div>
<!--current user mastodon id-->
<div id="userMastodonID" hidden="true">{{ user.mastodon_id }}</div>
{% endcomment %}
<script>


@ -10,8 +10,8 @@
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>{% trans 'NiceDB - 从豆瓣获取数据' %}</title>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script>
<title>{{ site_name }} - {% trans '从豆瓣获取数据' %}</title>
<script src="https://static.neodb.social/cdnjs.cloudflare.com/ajax/libs/jquery/3.6.0/jquery.min.js"></script>
<script src="{% static 'js/scrape.js' %}"></script>
<link rel="stylesheet" href="{% static 'css/boofilsic.min.css' %}">
</head>


@ -1,4 +1,4 @@
from django.urls import path
from django.urls import path, re_path
from .views import *
@ -8,8 +8,10 @@ urlpatterns = [
path('<int:id>/', retrieve, name='retrieve'),
path('update/<int:id>/', update, name='update'),
path('delete/<int:id>/', delete, name='delete'),
path('rescrape/<int:id>/', rescrape, name='rescrape'),
path('mark/', create_update_mark, name='create_update_mark'),
path('<int:book_id>/mark/list/', retrieve_mark_list, name='retrieve_mark_list'),
path('wish/<int:id>/', wish, name='wish'),
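# the optional trailing number (e.g. /123/mark/list/1) is captured as following_only and limits the list to marks from accounts the viewer follows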
re_path(r'(?P<book_id>[0-9]+)/mark/list/(?:(?P<following_only>\d+))?', retrieve_mark_list, name='retrieve_mark_list'),
path('mark/delete/<int:id>/', delete_mark, name='delete_mark'),
path('<int:book_id>/review/create/', create_review, name='create_review'),
path('review/update/<int:id>/', update_review, name='update_review'),


@ -2,22 +2,24 @@ import logging
from django.shortcuts import render, get_object_or_404, redirect, reverse
from django.contrib.auth.decorators import login_required, permission_required
from django.utils.translation import gettext_lazy as _
from django.http import HttpResponseBadRequest, HttpResponseServerError
from django.http import HttpResponseBadRequest, HttpResponseServerError, HttpResponse
from django.core.exceptions import ObjectDoesNotExist, PermissionDenied
from django.db import IntegrityError, transaction
from django.db.models import Count
from django.utils import timezone
from django.core.paginator import Paginator
from mastodon import mastodon_request_included
from mastodon.api import check_visibility, post_toot, TootVisibilityEnum
from mastodon.utils import rating_to_emoji
from mastodon.models import MastodonApplication
from mastodon.api import share_mark, share_review
from common.utils import PageLinksGenerator
from common.views import PAGE_LINK_NUMBER, jump_or_scrape
from common.views import PAGE_LINK_NUMBER, jump_or_scrape, go_relogin
from common.models import SourceSiteEnum
from .models import *
from .forms import *
from .forms import BookMarkStatusTranslator
from boofilsic.settings import MASTODON_TAGS
from django.conf import settings
from collection.models import CollectionItem
from common.scraper import get_scraper_by_url, get_normalized_url
logger = logging.getLogger(__name__)
@ -88,6 +90,18 @@ def create(request):
return HttpResponseBadRequest()
@login_required
def rescrape(request, id):
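# re-run the scraper for this book's original source URL and overwrite the local record with the freshly scraped data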
if request.method != 'POST':
return HttpResponseBadRequest()
item = get_object_or_404(Book, pk=id)
url = get_normalized_url(item.source_url)
scraper = get_scraper_by_url(url)
scraper.scrape(url)
form = scraper.save(request_user=request.user, instance=item)
return redirect(reverse("books:retrieve", args=[form.instance.id]))
@login_required
def update(request, id):
if request.method == 'GET':
@ -98,6 +112,7 @@ def update(request, id):
'books/create_update.html',
{
'form': form,
'is_update': True,
'title': _('修改书籍'),
'submit_url': reverse("books:update", args=[book.id]),
# provided for frontend js
@ -126,6 +141,7 @@ def update(request, id):
'books/create_update.html',
{
'form': form,
'is_update': True,
'title': _('修改书籍'),
'submit_url': reverse("books:update", args=[book.id]),
# provided for frontend js
@ -166,6 +182,7 @@ def retrieve(request, id):
else:
mark_form = BookMarkForm(initial={
'book': book,
'visibility': request.user.get_preference().default_visibility if request.user.is_authenticated else 0,
'tags': mark_tags
})
@ -184,10 +201,8 @@ def retrieve(request, id):
mark_list_more = None
review_list_more = None
else:
mark_list = BookMark.get_available(
book, request.user, request.session['oauth_token'])
review_list = BookReview.get_available(
book, request.user, request.session['oauth_token'])
mark_list = BookMark.get_available_for_identicals(book, request.user)
review_list = BookReview.get_available_for_identicals(book, request.user)
mark_list_more = True if len(mark_list) > MARK_NUMBER else False
mark_list = mark_list[:MARK_NUMBER]
for m in mark_list:
@ -195,6 +210,7 @@ def retrieve(request, id):
review_list_more = True if len(
review_list) > REVIEW_NUMBER else False
review_list = review_list[:REVIEW_NUMBER]
collection_list = filter(lambda c: c.is_visible_to(request.user), map(lambda i: i.collection, CollectionItem.objects.filter(book=book)))
# def strip_html_tags(text):
# import re
@ -219,6 +235,7 @@ def retrieve(request, id):
'review_list_more': review_list_more,
'book_tag_list': book_tag_list,
'mark_tags': mark_tags,
'collection_list': collection_list,
}
)
else:
@ -263,12 +280,19 @@ def create_update_mark(request):
pk = request.POST.get('id')
old_rating = None
old_tags = None
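# no mark id posted: fall back to the user's existing mark for this book (if any) so the form updates it instead of creating a duplicate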
if not pk:
book_id = request.POST.get('book')
mark = BookMark.objects.filter(book_id=book_id, owner=request.user).first()
if mark:
pk = mark.id
if pk:
mark = get_object_or_404(BookMark, pk=pk)
if request.user != mark.owner:
return HttpResponseBadRequest()
old_rating = mark.rating
old_tags = mark.bookmark_tags.all()
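# a status change (e.g. wish -> collect) is dated from now, so reset created_time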
if mark.status != request.POST.get('status'):
mark.created_time = timezone.now()
# update
form = BookMarkForm(request.POST, instance=mark)
else:
@ -276,7 +300,7 @@ def create_update_mark(request):
form = BookMarkForm(request.POST)
if form.is_valid():
if form.instance.status == MarkStatusEnum.WISH.value:
if form.instance.status == MarkStatusEnum.WISH.value or form.instance.rating == 0:
form.instance.rating = None
form.cleaned_data['rating'] = None
form.instance.owner = request.user
@ -304,27 +328,10 @@ def create_update_mark(request):
return HttpResponseServerError("integrity error")
if form.cleaned_data['share_to_mastodon']:
if form.cleaned_data['is_private']:
visibility = TootVisibilityEnum.PRIVATE
if not share_mark(form.instance):
return go_relogin(request)
else:
visibility = TootVisibilityEnum.UNLISTED
url = "https://" + request.get_host() + reverse("books:retrieve",
args=[book.id])
words = BookMarkStatusTranslator(form.cleaned_data['status']) +\
f"{book.title}" + \
rating_to_emoji(form.cleaned_data['rating'])
# tags = MASTODON_TAGS % {'category': '书', 'type': '标记'}
tags = ''
content = words + '\n' + url + '\n' + \
form.cleaned_data['text'] + '\n' + tags
response = post_toot(
request.user.mastodon_site, content, visibility, request.session['oauth_token'])
if response.status_code != 200:
mastodon_logger.error(f"CODE:{response.status_code} {response.text}")
return HttpResponseServerError("publishing mastodon status failed")
else:
return HttpResponseBadRequest("invalid form data")
return HttpResponseBadRequest(f"invalid form data {form.errors}")
return redirect(reverse("books:retrieve", args=[form.instance.book.id]))
else:
@ -333,11 +340,30 @@ def create_update_mark(request):
@mastodon_request_included
@login_required
def retrieve_mark_list(request, book_id):
def wish(request, id):
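# one-click 'add to wishlist' endpoint: creates a public WISH mark and silently ignores failures such as an already existing mark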
if request.method == 'POST':
book = get_object_or_404(Book, pk=id)
params = {
'owner': request.user,
'status': MarkStatusEnum.WISH,
'visibility': 0,
'book': book,
}
try:
BookMark.objects.create(**params)
except Exception:
pass
return HttpResponse("✔️")
else:
return HttpResponseBadRequest("invalid method")
@mastodon_request_included
@login_required
def retrieve_mark_list(request, book_id, following_only=False):
if request.method == 'GET':
book = get_object_or_404(Book, pk=book_id)
queryset = BookMark.get_available(
book, request.user, request.session['oauth_token'])
queryset = BookMark.get_available_for_identicals(book, request.user, following_only=following_only)
paginator = Paginator(queryset, MARK_PER_PAGE)
page_number = request.GET.get('page', default=1)
marks = paginator.get_page(page_number)
@ -398,23 +424,8 @@ def create_review(request, book_id):
form.instance.owner = request.user
form.save()
if form.cleaned_data['share_to_mastodon']:
if form.cleaned_data['is_private']:
visibility = TootVisibilityEnum.PRIVATE
else:
visibility = TootVisibilityEnum.UNLISTED
url = "https://" + request.get_host() + reverse("books:retrieve_review",
args=[form.instance.id])
words = "发布了关于" + f"{form.instance.book.title}" + "的评论"
# tags = MASTODON_TAGS % {'category': '书', 'type': '评论'}
tags = ''
content = words + '\n' + url + \
'\n' + form.cleaned_data['title'] + '\n' + tags
response = post_toot(
request.user.mastodon_site, content, visibility, request.session['oauth_token'])
if response.status_code != 200:
mastodon_logger.error(
f"CODE:{response.status_code} {response.text}")
return HttpResponseServerError("publishing mastodon status failed")
if not share_review(form.instance):
return go_relogin(request)
return redirect(reverse("books:retrieve_review", args=[form.instance.id]))
else:
return HttpResponseBadRequest()
@ -450,22 +461,8 @@ def update_review(request, id):
form.instance.edited_time = timezone.now()
form.save()
if form.cleaned_data['share_to_mastodon']:
if form.cleaned_data['is_private']:
visibility = TootVisibilityEnum.PRIVATE
else:
visibility = TootVisibilityEnum.UNLISTED
url = "https://" + request.get_host() + reverse("books:retrieve_review",
args=[form.instance.id])
words = "发布了关于" + f"{form.instance.book.title}" + "的评论"
# tags = MASTODON_TAGS % {'category': '书', 'type': '评论'}
tags = ''
content = words + '\n' + url + \
'\n' + form.cleaned_data['title'] + '\n' + tags
response = post_toot(
request.user.mastodon_site, content, visibility, request.session['oauth_token'])
if response.status_code != 200:
mastodon_logger.error(f"CODE:{response.status_code} {response.text}")
return HttpResponseServerError("publishing mastodon status failed")
if not share_review(form.instance):
return go_relogin(request)
return redirect(reverse("books:retrieve_review", args=[form.instance.id]))
else:
return HttpResponseBadRequest()
@ -500,11 +497,10 @@ def delete_review(request, id):
@mastodon_request_included
@login_required
def retrieve_review(request, id):
if request.method == 'GET':
review = get_object_or_404(BookReview, pk=id)
if not check_visibility(review, request.session['oauth_token'], request.user):
if not review.is_visible_to(request.user):
msg = _("你没有访问这个页面的权限😥")
return render(
request,
@ -539,8 +535,7 @@ def retrieve_review(request, id):
def retrieve_review_list(request, book_id):
if request.method == 'GET':
book = get_object_or_404(Book, pk=book_id)
queryset = BookReview.get_available(
book, request.user, request.session['oauth_token'])
queryset = BookReview.get_available_for_identicals(book, request.user)
paginator = Paginator(queryset, REVIEW_PER_PAGE)
page_number = request.GET.get('page', default=1)
reviews = paginator.get_page(page_number)

collection/__init__.py (new, empty file)

collection/admin.py (new file, 3 lines)

@ -0,0 +1,3 @@
from django.contrib import admin
# Register your models here.

collection/apps.py (new file, 6 lines)

@ -0,0 +1,6 @@
from django.apps import AppConfig
class CollectionConfig(AppConfig):
default_auto_field = 'django.db.models.BigAutoField'
name = 'collection'

collection/forms.py (new file, 45 lines)

@ -0,0 +1,45 @@
from django import forms
from django.utils.translation import gettext_lazy as _
from .models import Collection
from common.forms import *
COLLABORATIVE_CHOICES = [
(0, _("仅限创建者")),
(1, _("创建者及其互关用户")),
]
class CollectionForm(forms.ModelForm):
# id = forms.IntegerField(required=False, widget=forms.HiddenInput())
title = forms.CharField(label=_("标题"))
description = MarkdownxFormField(label=_("详细介绍 (Markdown)"))
# share_to_mastodon = forms.BooleanField(label=_("分享到联邦网络"), initial=True, required=False)
visibility = forms.TypedChoiceField(
label=_("可见性"),
initial=0,
coerce=int,
choices=VISIBILITY_CHOICES,
widget=forms.RadioSelect
)
collaborative = forms.TypedChoiceField(
label=_("协作整理权限"),
initial=0,
coerce=int,
choices=COLLABORATIVE_CHOICES,
widget=forms.RadioSelect
)
class Meta:
model = Collection
fields = [
'title',
'description',
'cover',
'visibility',
'collaborative',
]
widgets = {
'cover': PreviewImageInput(),
}

collection/models.py (new file, 126 lines)

@ -0,0 +1,126 @@
from django.db import models
from common.models import UserOwnedEntity
from movies.models import Movie
from books.models import Book
from music.models import Song, Album
from games.models import Game
from markdownx.models import MarkdownxField
from django.utils.translation import gettext_lazy as _
from django.conf import settings
from common.utils import ChoicesDictGenerator, GenerateDateUUIDMediaFilePath
from django.shortcuts import reverse
def collection_cover_path(instance, filename):
return GenerateDateUUIDMediaFilePath(instance, filename, settings.COLLECTION_MEDIA_PATH_ROOT)
class Collection(UserOwnedEntity):
title = models.CharField(max_length=200)
description = MarkdownxField()
cover = models.ImageField(_("封面"), upload_to=collection_cover_path, default=settings.DEFAULT_COLLECTION_IMAGE, blank=True)
collaborative = models.PositiveSmallIntegerField(default=0) # 0: Editable by owner only / 1: Editable by bi-direction followers
def __str__(self):
return f"Collection({self.id} {self.owner} {self.title})"
@property
def translated_status(self):
return '创建了收藏单'
@property
def collectionitem_list(self):
return sorted(list(self.collectionitem_set.all()), key=lambda i: i.position)
@property
def item_list(self):
return map(lambda i: i.item, self.collectionitem_list)
@property
def plain_description(self):
html = markdown(self.description)
return RE_HTML_TAG.sub(' ', html)
def has_item(self, item):
return len(list(filter(lambda i: i.item == item, self.collectionitem_list))) > 0
def append_item(self, item, comment=""):
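# append at the end of the collection: position is the last item's position + 1 (or 1 when empty); items already present are ignored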
cl = self.collectionitem_list
if item is None or self.has_item(item):
return None
else:
i = CollectionItem(collection=self, position=cl[-1].position + 1 if len(cl) else 1, comment=comment)
i.set_item(item)
i.save()
return i
@property
def item(self):
return self
@property
def mark_class(self):
return CollectionMark
@property
def url(self):
return settings.APP_WEBSITE + reverse("collection:retrieve", args=[self.id])
@property
def wish_url(self):
return reverse("collection:wish", args=[self.id])
def is_editable_by(self, viewer):
if viewer.is_staff or viewer.is_superuser or viewer == self.owner:
return True
elif self.collaborative == 1 and viewer.is_following(self.owner) and viewer.is_followed_by(self.owner):
return True
else:
return False
class CollectionItem(models.Model):
movie = models.ForeignKey(Movie, on_delete=models.CASCADE, null=True)
album = models.ForeignKey(Album, on_delete=models.CASCADE, null=True)
song = models.ForeignKey(Song, on_delete=models.CASCADE, null=True)
book = models.ForeignKey(Book, on_delete=models.CASCADE, null=True)
game = models.ForeignKey(Game, on_delete=models.CASCADE, null=True)
collection = models.ForeignKey(Collection, on_delete=models.CASCADE)
position = models.PositiveIntegerField()
comment = models.TextField(_("备注"), default='')
@property
def item(self):
items = list(filter(lambda i: i is not None, [self.movie, self.book, self.album, self.song, self.game]))
return items[0] if len(items) > 0 else None
# @item.setter
def set_item(self, new_item):
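# clear every per-type foreign key, then assign new_item to the attribute named after its class (movie/book/album/song/game)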
old_item = self.item
if old_item == new_item:
return
if old_item is not None:
self.movie = None
self.book = None
self.album = None
self.song = None
self.game = None
setattr(self, new_item.__class__.__name__.lower(), new_item)
class CollectionMark(UserOwnedEntity):
collection = models.ForeignKey(
Collection, on_delete=models.CASCADE, related_name='collection_marks', null=True)
class Meta:
constraints = [
models.UniqueConstraint(
fields=['owner', 'collection'], name="unique_collection_mark")
]
def __str__(self):
return f"CollectionMark({self.id} {self.owner} {self.collection})"
@property
def translated_status(self):
return '关注了收藏单'


@ -0,0 +1,45 @@
{% load static %}
{% load i18n %}
{% load l10n %}
{% load humanize %}
{% load admin_url %}
{% load mastodon %}
{% load oauth_token %}
{% load truncate %}
{% load highlight %}
{% load thumb %}
<div id="modal" _="on closeModal add .closing then wait for animationend then remove me">
<div class="modal-underlay" _="on click trigger closeModal"></div>
<div class="modal-content">
<div class="add-to-list-modal__head">
<span class="add-to-list-modal__title">{% trans '添加到收藏单' %}</span>
<span class="add-to-list-modal__close-button modal-close" _="on click trigger closeModal">
<span class="icon-cross">
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 20 20">
<polygon
points="20 2.61 17.39 0 10 7.39 2.61 0 0 2.61 7.39 10 0 17.39 2.61 20 10 12.61 17.39 20 20 17.39 12.61 10 20 2.61">
</polygon>
</svg>
</span>
</span>
</div>
<div class="add-to-list-modal__body">
<form action="/collections/add_to_list/{{ type }}/{{ id }}/" method="post">
{% csrf_token %}
<select name="collection_id">
{% for collection in collections %}
<option value="{{ collection.id }}">{{ collection.title }}{% if collection.visibility > 0 %}🔒{% endif %}</option>
{% endfor %}
<option value="0">新建收藏单</option>
</select>
<div>
<textarea type="text" name="comment" placeholder="条目备注"></textarea>
</div>
<div class="add-to-list-modal__confirm-button">
<input type="submit" class="button float-right" value="{% trans '提交' %}">
</div>
</form>
</div>
</div>
</div>


@ -0,0 +1,71 @@
{% load static %}
{% load i18n %}
{% load admin_url %}
{% load mastodon %}
{% load oauth_token %}
{% load truncate %}
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>{{ site_name }} - {{ title }}</title>
<script src="https://static.neodb.social/cdnjs.cloudflare.com/ajax/libs/jquery/3.6.0/jquery.min.js"></script>
<link rel="stylesheet" href="{% static 'css/boofilsic.min.css' %}">
<style type="text/css">
#id_collaborative li, #id_visibility li {display: inline-block !important;}
</style>
</head>
<body>
<div id="page-wrapper">
{% include "partial/_navbar.html" %}
<div id="content-wrapper">
<section id="content" class="container">
<div class="grid">
<div class="single-section-wrapper" id="main">
<form class="entity-form" action="{{ submit_url }}" method="post" enctype="multipart/form-data">
{% csrf_token %}
{{ form }}
<input class="button" type="submit" value="{% trans '提交' %}">
</form>
{{ form.media }}
</div>
</section>
</div>
{% include "partial/_footer.html" %}
</div>
<script>
// mark required
$("#content *[required]").each(function () {
$(this).prev().prepend("*");
});
// when source site is this site, hide url input box and populate it with fake url
// the backend would update this field
if ($("select[name='source_site']").val() == "{{ this_site_enum_value }}") {
$("input[name='source_url']").hide();
$("label[for='id_source_url']").hide();
$("input[name='source_url']").val("https://www.temp.com/" + Date.now() + Math.random());
}
$("select[name='source_site']").change(function () {
let value = $(this).val();
if (value == "{{ this_site_enum_value }}") {
$("input[name='source_url']").hide();
$("label[for='id_source_url']").hide();
$("input[name='source_url']").val("https://www.temp.com/" + Date.now() + Math.random());
} else {
$("input[name='source_url']").show();
$("label[for='id_source_url']").show();
$("input[name='source_url']").val("");
}
});
</script>
</body>
</html>


@ -0,0 +1,117 @@
{% load static %}
{% load i18n %}
{% load l10n %}
{% load humanize %}
{% load admin_url %}
{% load mastodon %}
{% load oauth_token %}
{% load truncate %}
{% load thumb %}
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta property="og:title" content="{{ site_name }} {% trans '收藏单' %} - {{ collection.title }}">
<meta property="og:description" content="{{ collection.description }}">
<meta property="og:type" content="article">
<meta property="og:article:author" content="{{ collection.owner.username }}">
<meta property="og:url" content="{{ request.build_absolute_uri }}">
<meta property="og:image" content="{{ collection.cover|thumb:'normal' }}">
<title>{{ site_name }} {% trans '收藏单' %} - {{ collection.title }}</title>
<script src="https://static.neodb.social/cdnjs.cloudflare.com/ajax/libs/jquery/3.6.0/jquery.min.js"></script>
<script src="{% static 'lib/js/rating-star.js' %}"></script>
<script src="{% static 'js/rating-star-readonly.js' %}"></script>
<link rel="stylesheet" href="{% static 'lib/css/rating-star.css' %}">
<link rel="stylesheet" href="{% static 'css/boofilsic.min.css' %}">
<script src="https://static.neodb.social/cdnjs.cloudflare.com/ajax/libs/htmx/1.8.0/htmx.min.js"></script>
</head>
<body>
<div id="page-wrapper">
<div id="content-wrapper">
{% include "partial/_navbar.html" %}
<section id="content">
<div class="grid">
<div class="grid__main" id="main">
<div class="main-section-wrapper">
<div class="review-head">
<h5 class="review-head__title">
确认删除收藏单「{{ collection.title }}」吗?
</h5>
{% if collection.visibility > 0 %}
<span class="icon-lock"><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 20 20">
<path
d="M17,8.48h-.73V6.27a6.27,6.27,0,1,0-12.53,0V8.48H3a.67.67,0,0,0-.67.67V19.33A.67.67,0,0,0,3,20H17a.67.67,0,0,0,.67-.67V9.15A.67.67,0,0,0,17,8.48ZM6.42,6.27h0a3.57,3.57,0,0,1,7.14,0h0V8.48H6.42Z" />
</svg></span>
{% endif %}
<div class="review-head__body">
<div class="review-head__info">
<a href="{% url 'users:home' collection.owner.mastodon_username %}" class="review-head__owner-link">{{ collection.owner.mastodon_username }}</a>
<span class="review-head__time">{{ collection.edited_time }}</span>
</div>
<div class="review-head__actions">
</div>
</div>
<div id="rawContent">
{{ form.description }}
</div>
{{ form.media }}
<div class="dividing-line"></div>
<div class="clearfix">
<form action="{% url 'collection:delete' collection.id %}" method="post" class="float-right">
{% csrf_token %}
<input class="button" type="submit" value="{% trans '确认' %}">
</form>
<button onclick="history.back()" class="button button-clear float-right">{% trans '返回' %}</button>
</div>
<!-- <div class="dividing-line"></div> -->
<!-- <div class="entity-card__img-wrapper" style="text-align: center;">
<img src="{{ collection.cover|thumb:'normal' }}" alt="" class="entity-card__img">
</div> -->
</div>
</div>
</div>
<div class="grid__aside" id="aside">
<div class="aside-section-wrapper">
<div class="entity-card">
<div class="entity-card__img-wrapper">
<a href="{% url 'collection:retrieve' collection.id %}">
<img src="{{ collection.cover|thumb:'normal' }}" alt="" class="entity-card__img">
</a>
</div>
<div class="entity-card__info-wrapper">
<h5 class="entity-card__title">
<a href="{% url 'collection:retrieve' collection.id %}">
{{ collection.title }}
</a>
</h5>
</div>
</div>
</div>
</div>
</div>
</section>
</div>
{% include "partial/_footer.html" %}
</div>
<script>
$(".markdownx textarea").hide();
</script>
<script>
document.body.addEventListener('htmx:configRequest', (event) => {
event.detail.headers['X-CSRFToken'] = '{{ csrf_token }}';
})
</script>
</body>
</html>


@ -0,0 +1,147 @@
{% load static %}
{% load i18n %}
{% load l10n %}
{% load humanize %}
{% load admin_url %}
{% load mastodon %}
{% load oauth_token %}
{% load truncate %}
{% load thumb %}
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta property="og:title" content="{{ site_name }} {% trans '收藏单' %} - {{ collection.title }}">
<meta property="og:description" content="{{ collection.description }}">
<meta property="og:type" content="article">
<meta property="og:article:author" content="{{ collection.owner.username }}">
<meta property="og:url" content="{{ request.build_absolute_uri }}">
<meta property="og:image" content="{{ collection.cover|thumb:'normal' }}">
<title>{{ site_name }} {% trans '收藏单' %} - {{ collection.title }}</title>
{% include "partial/_common_libs.html" with jquery=1 %}
<script src="{% static 'lib/js/rating-star.js' %}"></script>
<script src="{% static 'js/rating-star-readonly.js' %}"></script>
</head>
<body>
<div id="page-wrapper">
<div id="content-wrapper">
{% include "partial/_navbar.html" %}
<section id="content">
<div class="grid">
<div class="grid__main" id="main">
<div class="main-section-wrapper">
<div class="review-head">
<h5 class="review-head__title">
{{ collection.title }}
</h5>
{% if collection.visibility > 0 %}
<span class="icon-lock"><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 20 20">
<path
d="M17,8.48h-.73V6.27a6.27,6.27,0,1,0-12.53,0V8.48H3a.67.67,0,0,0-.67.67V19.33A.67.67,0,0,0,3,20H17a.67.67,0,0,0,.67-.67V9.15A.67.67,0,0,0,17,8.48ZM6.42,6.27h0a3.57,3.57,0,0,1,7.14,0h0V8.48H6.42Z" />
</svg></span>
{% endif %}
<div class="review-head__body">
<div class="review-head__info">
<a href="{% url 'users:home' collection.owner.mastodon_username %}" class="review-head__owner-link">{{ collection.owner.mastodon_username }}</a>
<span class="review-head__time">{{ collection.edited_time }}</span>
</div>
<div class="review-head__actions">
{% if request.user == collection.owner %}
<a class="review-head__action-link" href="{% url 'collection:update' collection.id %}">{% trans '编辑' %}</a>
<a class="review-head__action-link" href="{% url 'collection:delete' collection.id %}">{% trans '删除' %}</a>
{% elif editable %}
<span class="review-head__time">可协作整理</span>
{% endif %}
</div>
</div>
<!-- <div class="dividing-line"></div> -->
<!-- <div class="entity-card__img-wrapper" style="text-align: center;">
<img src="{{ collection.cover|thumb:'normal' }}" alt="" class="entity-card__img">
</div> -->
<div id="rawContent">
{{ form.description }}
</div>
{{ form.media }}
</div>
<div class="entity-list" hx-get="{% url 'collection:retrieve_entity_list' collection.id %}" hx-trigger="load">
</div>
</div>
</div>
<div class="grid__aside" id="aside">
<div class="aside-section-wrapper">
<div class="entity-card">
<div class="entity-card__img-wrapper">
<a href="{% url 'collection:retrieve' collection.id %}">
<img src="{{ collection.cover|thumb:'normal' }}" alt="" class="entity-card__img">
</a>
</div>
<div class="entity-card__info-wrapper">
<h5 class="entity-card__title">
<a href="{% url 'collection:retrieve' collection.id %}">
{{ collection.title }}
</a>
</h5>
</div>
</div>
</div>
{% if request.user != collection.owner %}
<div class="aside-section-wrapper">
<div class="action-panel">
<div class="action-panel__button-group action-panel__button-group--center">
{% if following %}
<form action="{% url 'collection:unfollow' collection.id %}" method="post">
{% csrf_token %}
<button class="action-panel__button">{% trans '取消关注' %}</button>
</form>
{% else %}
<form action="{% url 'collection:follow' collection.id %}" method="post">
{% csrf_token %}
<button class="action-panel__button">{% trans '关注' %}</button>
</form>
{% endif %}
</div>
</div>
</div>
{% endif %}
<div class="aside-section-wrapper">
<div class="action-panel">
<div class="action-panel__button-group action-panel__button-group--center">
<form>
<button class="action-panel__button add-to-list" hx-get="{% url 'collection:share' collection.id %}" hx-target="body" hx-swap="beforeend">{% trans '分享到联邦网络' %}</button>
</form>
</div>
</div>
</div>
</div>
</div>
</section>
</div>
{% include "partial/_footer.html" %}
</div>
<script>
$(".markdownx textarea").hide();
</script>
<script>
document.body.addEventListener('htmx:configRequest', (event) => {
event.detail.headers['X-CSRFToken'] = '{{ csrf_token }}';
})
</script>
</body>
</html>


@ -0,0 +1,5 @@
<form hx-post="{% url 'collection:update_item_comment' collection.id collectionitem.id %}">
<input name="comment" value="{{ collectionitem.comment }}">
<input type="submit" style="width:unset;" value="修改">
<button style="width:unset;" hx-get="{% url 'collection:show_item_comment' collection.id collectionitem.id %}">取消</button>
</form>


@ -0,0 +1,21 @@
{% load thumb %}
{% load i18n %}
{% load l10n %}
<ul class="entity-list__entities">
{% for collectionitem in collection.collectionitem_list %}
{% if collectionitem.item is not None %}
{% include "partial/list_item.html" with item=collectionitem.item %}
{% endif %}
{% empty %}
{% endfor %}
{% if editable %}
<li>
<form hx-target=".entity-list" hx-post="{% url 'collection:append_item' form.instance.id %}" method="POST">
{% csrf_token %}
<input type="url" name="url" placeholder="https://neodb.social/movies/1/" style="min-width:24rem" required>
<input type="text" name="comment" placeholder="{% trans '备注' %}" style="min-width:24rem">
<input class="button" type="submit" value="{% trans '添加' %}" >
</form>
</li>
{% endif %}
</ul>


@ -0,0 +1,99 @@
{% load static %}
{% load i18n %}
{% load l10n %}
{% load humanize %}
{% load admin_url %}
{% load mastodon %}
{% load oauth_token %}
{% load truncate %}
{% load highlight %}
{% load thumb %}
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>{{ site_name }} - {{ title }}</title>
<script src="https://static.neodb.social/cdnjs.cloudflare.com/ajax/libs/jquery/3.6.0/jquery.min.js"></script>
<script src="{% static 'lib/js/rating-star.js' %}"></script>
<script src="{% static 'js/rating-star-readonly.js' %}"></script>
<link rel="stylesheet" href="{% static 'lib/css/rating-star.css' %}">
<link rel="stylesheet" href="{% static 'css/boofilsic.min.css' %}">
</head>
<body>
<div id="page-wrapper">
<div id="content-wrapper">
{% include "partial/_navbar.html" %}
<section id="content">
<div class="grid">
<div class="grid__main" id="main">
<div class="main-section-wrapper">
<div class="entity-reviews">
<h5 class="entity-reviews__title entity-reviews__title--stand-alone">
{{ title }}
</h5>
<ul class="entity-reviews__review-list">
{% for collection in collections %}
<li class="entity-reviews__review entity-reviews__review--wider">
<img src="{{ collection.cover|thumb:'normal' }}" style="width:40px; float:right" class="entity-card__img">
<span class="entity-reviews__review-title"><a href="{% url 'collection:retrieve' collection.id %}">{{ collection.title }}</a></span>
<span class="entity-reviews__review-time">{{ collection.edited_time }}</span>
{% if collection.visibility > 0 %}
<span class="icon-lock"><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 20 20"><path d="M17,8.48h-.73V6.27a6.27,6.27,0,1,0-12.53,0V8.48H3a.67.67,0,0,0-.67.67V19.33A.67.67,0,0,0,3,20H17a.67.67,0,0,0,.67-.67V9.15A.67.67,0,0,0,17,8.48ZM6.42,6.27h0a3.57,3.57,0,0,1,7.14,0h0V8.48H6.42Z"/></svg></span>
{% endif %}
</li>
{% empty %}
<div>{% trans '无结果' %}</div>
{% endfor %}
</ul>
</div>
<div class="pagination">
{% if collections.pagination.has_prev %}
<a href="?page=1" class="pagination__nav-link pagination__nav-link">&laquo;</a>
<a href="?page={{ collections.previous_page_number }}"
class="pagination__nav-link pagination__nav-link--right-margin pagination__nav-link">&lsaquo;</a>
{% endif %}
{% for page in collections.pagination.page_range %}
{% if page == collections.pagination.current_page %}
<a href="?page={{ page }}" class="pagination__page-link pagination__page-link--current">{{ page }}</a>
{% else %}
<a href="?page={{ page }}" class="pagination__page-link">{{ page }}</a>
{% endif %}
{% endfor %}
{% if collections.pagination.has_next %}
<a href="?page={{ collections.next_page_number }}"
class="pagination__nav-link pagination__nav-link--left-margin">&rsaquo;</a>
<a href="?page={{ collections.pagination.last_page }}" class="pagination__nav-link">&raquo;</a>
{% endif %}
</div>
</div>
</div>
</div>
</section>
</div>
{% include "partial/_footer.html" %}
</div>
<script>
</script>
</body>
</html>


@ -0,0 +1,56 @@
{% load static %}
{% load i18n %}
{% load l10n %}
{% load humanize %}
{% load admin_url %}
{% load mastodon %}
{% load oauth_token %}
{% load truncate %}
{% load highlight %}
{% load thumb %}
<div id="modal" _="on closeModal add .closing then wait for animationend then remove me">
<div class="modal-underlay" _="on click trigger closeModal"></div>
<div class="modal-content">
<div class="add-to-list-modal__head">
<span class="add-to-list-modal__title">{% trans '分享收藏单' %}</span>
<span class="add-to-list-modal__close-button modal-close" _="on click trigger closeModal">
<span class="icon-cross">
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 20 20">
<polygon
points="20 2.61 17.39 0 10 7.39 2.61 0 0 2.61 7.39 10 0 17.39 2.61 20 10 12.61 17.39 20 20 17.39 12.61 10 20 2.61">
</polygon>
</svg>
</span>
</span>
</div>
<div class="add-to-list-modal__body">
<form action="/collections/share/{{ id }}/" method="post">
{% csrf_token %}
<div>
<label for="id_visibility_0">分享可见性(不同于收藏单本身的权限):</label>
<ul id="id_visibility">
<li><label for="id_visibility_0"><input type="radio" name="visibility" value="0" required="" id="id_visibility_0" {% if visibility == 0 %}checked{% endif %}>
公开</label>
</li>
<li><label for="id_visibility_1"><input type="radio" name="visibility" value="1" required="" id="id_visibility_1" {% if visibility == 1 %}checked{% endif %}>
仅关注者</label>
</li>
<li><label for="id_visibility_2"><input type="radio" name="visibility" value="2" required="" id="id_visibility_2" {% if visibility == 2 %}checked{% endif %}>
仅自己</label>
</li>
</ul>
</div>
<div>
<textarea type="text" name="comment" placeholder="分享附言"></textarea>
</div>
<div class="add-to-list-modal__confirm-button">
<input type="submit" class="button float-right" value="{% trans '提交' %}">
</div>
</form>
</div>
</div>
</div>


@ -0,0 +1,4 @@
{{ collectionitem.comment }}
{% if editable %}
<a class="action-icon" hx-get="{% url 'collection:update_item_comment' collection.id collectionitem.id %}"><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24"><g><path d="M19,20H5a1,1,0,0,0,0,2H19a1,1,0,0,0,0-2Z"/><path d="M5,18h.09l4.17-.38a2,2,0,0,0,1.21-.57l9-9a1.92,1.92,0,0,0-.07-2.71h0L16.66,2.6A2,2,0,0,0,14,2.53l-9,9a2,2,0,0,0-.57,1.21L4,16.91a1,1,0,0,0,.29.8A1,1,0,0,0,5,18ZM15.27,4,18,6.73,16,8.68,13.32,6Zm-8.9,8.91L12,7.32l2.7,2.7-5.6,5.6-3,.28Z"/></g></svg></a>
{% endif %}

collection/tests.py (new file, 3 lines)

@ -0,0 +1,3 @@
from django.test import TestCase
# Create your tests here.

collection/urls.py (new file, 27 lines)

@ -0,0 +1,27 @@
from django.urls import path, re_path
from .views import *
app_name = 'collection'
urlpatterns = [
path('mine/', list, name='list'),
path('create/', create, name='create'),
path('<int:id>/', retrieve, name='retrieve'),
path('<int:id>/entity_list', retrieve_entity_list, name='retrieve_entity_list'),
path('update/<int:id>/', update, name='update'),
path('delete/<int:id>/', delete, name='delete'),
path('follow/<int:id>/', follow, name='follow'),
path('unfollow/<int:id>/', unfollow, name='unfollow'),
path('<int:id>/append_item/', append_item, name='append_item'),
path('<int:id>/delete_item/<int:item_id>', delete_item, name='delete_item'),
path('<int:id>/move_up_item/<int:item_id>', move_up_item, name='move_up_item'),
path('<int:id>/move_down_item/<int:item_id>', move_down_item, name='move_down_item'),
path('<int:id>/update_item_comment/<int:item_id>', update_item_comment, name='update_item_comment'),
path('<int:id>/show_item_comment/<int:item_id>', show_item_comment, name='show_item_comment'),
path('with/<str:type>/<int:id>/', list_with, name='list_with'),
path('add_to_list/<str:type>/<int:id>/', add_to_list, name='add_to_list'),
path('share/<int:id>/', share, name='share'),
path('follow2/<int:id>/', wish, name='wish'),
# TODO: tag
]

collection/views.py (new file, 442 lines)

@ -0,0 +1,442 @@
import logging
from django.shortcuts import render, get_object_or_404, redirect, reverse
from django.contrib.auth.decorators import login_required, permission_required
from django.utils.translation import gettext_lazy as _
from django.http import HttpResponseBadRequest, HttpResponseServerError, HttpResponse
from django.core.exceptions import ObjectDoesNotExist, PermissionDenied
from django.db import IntegrityError, transaction
from django.db.models import Count
from django.utils import timezone
from django.core.paginator import Paginator
from mastodon import mastodon_request_included
from mastodon.models import MastodonApplication
from mastodon.api import post_toot, TootVisibilityEnum, share_collection
from common.utils import PageLinksGenerator
from common.views import PAGE_LINK_NUMBER, jump_or_scrape, go_relogin
from common.models import SourceSiteEnum
from .models import *
from .forms import *
from django.conf import settings
import re
from users.models import User
from django.http import HttpResponseRedirect
logger = logging.getLogger(__name__)
mastodon_logger = logging.getLogger("django.mastodon")
# how many marks showed on the detail page
MARK_NUMBER = 5
# how many marks at the mark page
MARK_PER_PAGE = 20
# how many reviews showed on the detail page
REVIEW_NUMBER = 5
# how many reviews at the review page
REVIEW_PER_PAGE = 20
# max tags on detail page
TAG_NUMBER = 10
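# browsers follow a normal 302 inside the XHR and htmx simply swaps the resulting HTML; replying 200 with an HX-Redirect header makes htmx perform a full client-side redirect instead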
class HTTPResponseHXRedirect(HttpResponseRedirect):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self['HX-Redirect'] = self['Location']
status_code = 200
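# illustrative usage: return HTTPResponseHXRedirect(redirect_to=reverse("collection:retrieve", args=[id]))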
# public data
###########################
@login_required
def create(request):
if request.method == 'GET':
form = CollectionForm()
return render(
request,
'create_update.html',
{
'form': form,
'title': _('添加收藏单'),
'submit_url': reverse("collection:create"),
# provided for frontend js
'this_site_enum_value': SourceSiteEnum.IN_SITE.value,
}
)
elif request.method == 'POST':
if request.user.is_authenticated:
# only local user can alter public data
form = CollectionForm(request.POST, request.FILES)
form.instance.owner = request.user
if form.is_valid():
form.instance.last_editor = request.user
try:
with transaction.atomic():
form.save()
except IntegrityError as e:
logger.error(e.__str__())
return HttpResponseServerError("integrity error")
return redirect(reverse("collection:retrieve", args=[form.instance.id]))
else:
return render(
request,
'create_update.html',
{
'form': form,
'title': _('添加收藏单'),
'submit_url': reverse("collection:create"),
# provided for frontend js
'this_site_enum_value': SourceSiteEnum.IN_SITE.value,
}
)
else:
return redirect(reverse("users:login"))
else:
return HttpResponseBadRequest()
@login_required
def update(request, id):
page_title = _("修改收藏单")
collection = get_object_or_404(Collection, pk=id)
if not collection.is_visible_to(request.user):
raise PermissionDenied()
if request.method == 'GET':
form = CollectionForm(instance=collection)
return render(
request,
'create_update.html',
{
'form': form,
'is_update': True,
'title': page_title,
'submit_url': reverse("collection:update", args=[collection.id]),
# provided for frontend js
'this_site_enum_value': SourceSiteEnum.IN_SITE.value,
}
)
elif request.method == 'POST':
form = CollectionForm(request.POST, request.FILES, instance=collection)
if form.is_valid():
form.instance.last_editor = request.user
form.instance.edited_time = timezone.now()
try:
with transaction.atomic():
form.save()
except IntegrityError as e:
logger.error(e.__str__())
return HttpResponseServerError("integrity error")
else:
return render(
request,
'create_update.html',
{
'form': form,
'is_update': True,
'title': page_title,
'submit_url': reverse("collection:update", args=[collection.id]),
# provided for frontend js
'this_site_enum_value': SourceSiteEnum.IN_SITE.value,
}
)
return redirect(reverse("collection:retrieve", args=[form.instance.id]))
else:
return HttpResponseBadRequest()
@mastodon_request_included
# @login_required
def retrieve(request, id):
if request.method == 'GET':
collection = get_object_or_404(Collection, pk=id)
if not collection.is_visible_to(request.user):
raise PermissionDenied()
form = CollectionForm(instance=collection)
if request.user.is_authenticated:
following = True if CollectionMark.objects.filter(owner=request.user, collection=collection).first() is not None else False
followers = []
else:
following = False
followers = []
return render(
request,
'detail.html',
{
'collection': collection,
'form': form,
'editable': request.user.is_authenticated and collection.is_editable_by(request.user),
'followers': followers,
'following': following,
}
)
else:
logger.warning('non-GET method at /collections/<id>')
return HttpResponseBadRequest()
@mastodon_request_included
# @login_required
def retrieve_entity_list(request, id):
collection = get_object_or_404(Collection, pk=id)
if not collection.is_visible_to(request.user):
raise PermissionDenied()
form = CollectionForm(instance=collection)
followers = []
if request.user.is_authenticated:
followers = []
return render(
request,
'entity_list.html',
{
'collection': collection,
'form': form,
'editable': request.user.is_authenticated and collection.is_editable_by(request.user),
'followers': followers,
}
)
@login_required
def delete(request, id):
collection = get_object_or_404(Collection, pk=id)
if request.user.is_staff or request.user == collection.owner:
if request.method == 'GET':
return render(
request,
'delete.html',
{
'collection': collection,
'form': CollectionForm(instance=collection)
}
)
elif request.method == 'POST':
collection.delete()
return redirect(reverse("common:home"))
else:
raise PermissionDenied()
@login_required
def wish(request, id):
try:
CollectionMark.objects.create(owner=request.user, collection=Collection.objects.get(id=id))
except Exception:
pass
return HttpResponse("✔️")
@login_required
def follow(request, id):
CollectionMark.objects.create(owner=request.user, collection=Collection.objects.get(id=id))
return redirect(reverse("collection:retrieve", args=[id]))
@login_required
def unfollow(request, id):
CollectionMark.objects.filter(owner=request.user, collection=Collection.objects.get(id=id)).delete()
return redirect(reverse("collection:retrieve", args=[id]))
@login_required
def list(request, user_id=None, marked=False):
if request.method == 'GET':
user = request.user if user_id is None else User.objects.get(id=user_id)
if marked:
title = user.mastodon_username + _('关注的收藏单')
queryset = Collection.objects.filter(pk__in=CollectionMark.objects.filter(owner=user).values_list('collection', flat=True))
else:
title = user.mastodon_username + _('创建的收藏单')
queryset = Collection.objects.filter(owner=user)
paginator = Paginator(queryset, REVIEW_PER_PAGE)
page_number = request.GET.get('page', default=1)
collections = paginator.get_page(page_number)
collections.pagination = PageLinksGenerator(
PAGE_LINK_NUMBER, page_number, paginator.num_pages)
return render(
request,
'list.html',
{
'collections': collections,
'title': title,
}
)
else:
return HttpResponseBadRequest()
def get_entity_by_url(url):
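# map an in-site URL such as /movies/1/ (optionally prefixed with APP_WEBSITE) to the matching Movie/Book/Game/Album/Song instance; returns None when nothing matches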
m = re.findall(r'^/?(movies|books|games|music/album|music/song)/(\d+)/?', url.strip().lower().replace(settings.APP_WEBSITE.lower(), ''))
if len(m) > 0:
mapping = {
'movies': Movie,
'books': Book,
'games': Game,
'music/album': Album,
'music/song': Song,
}
cls = mapping.get(m[0][0])
id = int(m[0][1])
if cls is not None:
return cls.objects.get(id=id)
return None
@login_required
def append_item(request, id):
collection = get_object_or_404(Collection, pk=id)
if request.method == 'POST' and collection.is_editable_by(request.user):
url = request.POST.get('url')
comment = request.POST.get('comment')
item = get_entity_by_url(url)
collection.append_item(item, comment)
collection.save()
# return redirect(reverse("collection:retrieve", args=[id]))
return retrieve_entity_list(request, id)
else:
return HttpResponseBadRequest()
@login_required
def delete_item(request, id, item_id):
collection = get_object_or_404(Collection, pk=id)
if request.method == 'POST' and collection.is_editable_by(request.user):
# item_id = int(request.POST.get('item_id'))
item = CollectionItem.objects.get(id=item_id)
if item is not None and item.collection == collection:
item.delete()
# collection.save()
# return HTTPResponseHXRedirect(redirect_to=reverse("collection:retrieve", args=[id]))
return retrieve_entity_list(request, id)
return HttpResponseBadRequest()
@login_required
def move_up_item(request, id, item_id):
collection = get_object_or_404(Collection, pk=id)
if request.method == 'POST' and collection.is_editable_by(request.user):
# item_id = int(request.POST.get('item_id'))
item = CollectionItem.objects.get(id=item_id)
if item is not None and item.collection == collection:
items = collection.collectionitem_list
idx = items.index(item)
if idx > 0:
o = items[idx - 1]
p = o.position
o.position = item.position
item.position = p
o.save()
item.save()
# collection.save()
# return HTTPResponseHXRedirect(redirect_to=reverse("collection:retrieve", args=[id]))
return retrieve_entity_list(request, id)
return HttpResponseBadRequest()
@login_required
def move_down_item(request, id, item_id):
collection = get_object_or_404(Collection, pk=id)
if request.method == 'POST' and collection.is_editable_by(request.user):
# item_id = int(request.POST.get('item_id'))
item = CollectionItem.objects.get(id=item_id)
if item is not None and item.collection == collection:
items = collection.collectionitem_list
idx = items.index(item)
if idx + 1 < len(items):
o = items[idx + 1]
p = o.position
o.position = item.position
item.position = p
o.save()
item.save()
# collection.save()
# return HTTPResponseHXRedirect(redirect_to=reverse("collection:retrieve", args=[id]))
return retrieve_entity_list(request, id)
return HttpResponseBadRequest()
def show_item_comment(request, id, item_id):
collection = get_object_or_404(Collection, pk=id)
item = CollectionItem.objects.get(id=item_id)
editable = request.user.is_authenticated and collection.is_editable_by(request.user)
return render(request, 'show_item_comment.html', {'collection': collection, 'collectionitem': item, 'editable': editable})
@login_required
def update_item_comment(request, id, item_id):
collection = get_object_or_404(Collection, pk=id)
if collection.is_editable_by(request.user):
# item_id = int(request.POST.get('item_id'))
item = CollectionItem.objects.get(id=item_id)
if item is not None and item.collection == collection:
if request.method == 'POST':
item.comment = request.POST.get('comment', default='')
item.save()
return render(request, 'show_item_comment.html', {'collection': collection, 'collectionitem': item, 'editable': True})
else:
return render(request, 'edit_item_comment.html', {'collection': collection, 'collectionitem': item})
return retrieve_entity_list(request, id)
return HttpResponseBadRequest()
@login_required
def list_with(request, type, id):
pass
def get_entity_by_type_id(type, id):
mapping = {
'movie': Movie,
'book': Book,
'game': Game,
'album': Album,
'song': Song,
}
cls = mapping.get(type)
if cls is not None:
return cls.objects.get(id=id)
return None
@login_required
def add_to_list(request, type, id):
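# GET renders the add-to-list modal with the user's own collections; POST appends the item, creating a collection named '<username>的收藏单' when '新建收藏单' (value 0) was chosen, then returns to the referring page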
item = get_entity_by_type_id(type, id)
if request.method == 'GET':
queryset = Collection.objects.filter(owner=request.user)
return render(
request,
'add_to_list.html',
{
'type': type,
'id': id,
'item': item,
'collections': queryset,
}
)
else:
cid = int(request.POST.get('collection_id', default=0))
if not cid:
cid = Collection.objects.create(owner=request.user, title=f'{request.user.username}的收藏单').id
collection = Collection.objects.filter(owner=request.user, id=cid).first()
collection.append_item(item, request.POST.get('comment'))
return HttpResponseRedirect(request.META.get('HTTP_REFERER'))
@login_required
def share(request, id):
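# GET shows the share dialog pre-filled with the user's default visibility; POST toots the collection via share_collection and redirects back, or asks the user to re-login to Mastodon if posting fails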
collection = Collection.objects.filter(id=id).first()
if not collection:
return HttpResponseBadRequest()
if request.method == 'GET':
return render(request, 'share_collection.html', {'id': id, 'visibility': request.user.get_preference().default_visibility})
else:
visibility = int(request.POST.get('visibility', default=0))
comment = request.POST.get('comment')
if share_collection(collection, comment, request.user, visibility):
return HttpResponseRedirect(request.META.get('HTTP_REFERER'))
else:
return go_relogin(request)


@ -1,4 +1,5 @@
from django import forms
from markdownx.fields import MarkdownxFormField
import django.contrib.postgres.forms as postgres
from django.utils import formats
from django.core.exceptions import ValidationError
@ -45,7 +46,7 @@ class HstoreInput(forms.Widget):
js = ('js/key_value_input.js',)
class JSONField(postgres.JSONField):
class JSONField(forms.fields.JSONField):
widget = KeyValueInput
def to_python(self, value):
if not value:
@ -88,7 +89,7 @@ class RatingValidator:
_('%(value)s is not an integer'),
params={'value': value},
)
if not str(value) in [str(i) for i in range(1, 11)]:
if not str(value) in [str(i) for i in range(0, 11)]:
raise ValidationError(
_('%(value)s is not an integer in range 0-10'),
params={'value': value},
@ -154,9 +155,9 @@ class MultiSelect(forms.SelectMultiple):
class Media:
css = {
'all': ('lib/css/multiple-select.min.css',)
'all': ('https://cdn.jsdelivr.net/npm/multiple-select@1.5.2/dist/multiple-select.min.css',)
}
js = ('lib/js/multiple-select.min.js',)
js = ('https://cdn.jsdelivr.net/npm/multiple-select@1.5.2/dist/multiple-select.min.js',)
class HstoreField(forms.CharField):
@ -223,22 +224,25 @@ class DurationField(forms.TimeField):
#############################
# Form
#############################
VISIBILITY_CHOICES = [
(0, _("公开")),
(1, _("仅关注者")),
(2, _("仅自己")),
]
class MarkForm(forms.ModelForm):
IS_PRIVATE_CHOICES = [
(True, _("仅关注者")),
(False, _("公开")),
]
id = forms.IntegerField(required=False, widget=forms.HiddenInput())
share_to_mastodon = forms.BooleanField(
label=_("分享到长毛象"), initial=True, required=False)
label=_("分享到联邦网络"), initial=True, required=False)
rating = forms.IntegerField(
validators=[RatingValidator()], widget=forms.HiddenInput(), required=False)
is_private = RadioBooleanField(
label=_("评分"), validators=[RatingValidator()], widget=forms.HiddenInput(), required=False)
visibility = forms.TypedChoiceField(
label=_("可见性"),
initial=True,
choices=IS_PRIVATE_CHOICES
initial=0,
coerce=int,
choices=VISIBILITY_CHOICES,
widget=forms.RadioSelect
)
tags = TagField(
required=False,
@ -259,15 +263,15 @@ class MarkForm(forms.ModelForm):
class ReviewForm(forms.ModelForm):
IS_PRIVATE_CHOICES = [
(True, _("仅关注者")),
(False, _("公开")),
]
title = forms.CharField(label=_("标题"))
content = MarkdownxFormField(label=_("正文 (Markdown)"))
share_to_mastodon = forms.BooleanField(
label=_("分享到长毛象"), initial=True, required=False)
label=_("分享到联邦网络"), initial=True, required=False)
id = forms.IntegerField(required=False, widget=forms.HiddenInput())
is_private = RadioBooleanField(
visibility = forms.TypedChoiceField(
label=_("可见性"),
initial=True,
choices=IS_PRIVATE_CHOICES
initial=0,
coerce=int,
choices=VISIBILITY_CHOICES,
widget=forms.RadioSelect
)

common/importers/douban.py (new file, 270 lines)

@ -0,0 +1,270 @@
import openpyxl
import requests
import re
from lxml import html
from markdownify import markdownify as md
from datetime import datetime
from common.scraper import get_scraper_by_url
import logging
import pytz
from django.conf import settings
from django.core.exceptions import ObjectDoesNotExist
from user_messages import api as msg
import django_rq
from common.utils import GenerateDateUUIDMediaFilePath
import os
from books.models import BookReview, Book, BookMark, BookTag
from movies.models import MovieReview, Movie, MovieMark, MovieTag
from music.models import AlbumReview, Album, AlbumMark, AlbumTag
from games.models import GameReview, Game, GameMark, GameTag
from common.scraper import DoubanAlbumScraper, DoubanBookScraper, DoubanGameScraper, DoubanMovieScraper
from PIL import Image
from io import BytesIO
import filetype
from common.models import MarkStatusEnum
logger = logging.getLogger(__name__)
tz_sh = pytz.timezone('Asia/Shanghai')
def fetch_remote_image(url):
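# download an image (through scrapestack/scraperapi when configured), verify it decodes, save it under the markdownx media path and return the local URL; on any failure fall back to the original remote URL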
try:
print(f'fetching remote image {url}')
raw_img = None
ext = None
if settings.SCRAPESTACK_KEY is not None:
dl_url = f'http://api.scrapestack.com/scrape?access_key={settings.SCRAPESTACK_KEY}&url={url}'
elif settings.SCRAPERAPI_KEY is not None:
dl_url = f'http://api.scraperapi.com?api_key={settings.SCRAPERAPI_KEY}&url={url}'
else:
dl_url = url
img_response = requests.get(dl_url, timeout=settings.SCRAPING_TIMEOUT)
raw_img = img_response.content
img = Image.open(BytesIO(raw_img))
img.load() # corrupted image will trigger exception
content_type = img_response.headers.get('Content-Type')
ext = filetype.get_type(mime=content_type.partition(';')[0].strip()).extension
f = GenerateDateUUIDMediaFilePath(None, "x." + ext, settings.MARKDOWNX_MEDIA_PATH)
file = settings.MEDIA_ROOT + f
local_url = settings.MEDIA_URL + f
os.makedirs(os.path.dirname(file), exist_ok=True)
img.save(file)
# print(f'remote image saved as {local_url}')
return local_url
except Exception:
print(f'unable to fetch remote image {url}')
return url
class DoubanImporter:
total = 0
processed = 0
skipped = 0
imported = 0
failed = []
user = None
visibility = 0
file = None
def __init__(self, user, visibility):
self.user = user
self.visibility = visibility
def update_user_import_status(self, status):
self.user.preference.import_status['douban_pending'] = status
self.user.preference.import_status['douban_file'] = self.file
self.user.preference.import_status['douban_visibility'] = self.visibility
self.user.preference.import_status['douban_total'] = self.total
self.user.preference.import_status['douban_processed'] = self.processed
self.user.preference.import_status['douban_skipped'] = self.skipped
self.user.preference.import_status['douban_imported'] = self.imported
self.user.preference.import_status['douban_failed'] = self.failed
self.user.preference.save(update_fields=['import_status'])
def import_from_file(self, uploaded_file):
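# validate that the upload is a readable .xlsx, save a copy under SYNC_FILE_PATH_ROOT, then hand the actual import to the 'doufen' rq queue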
try:
wb = openpyxl.open(uploaded_file, read_only=True, data_only=True, keep_links=False)
wb.close()
file = settings.MEDIA_ROOT + GenerateDateUUIDMediaFilePath(None, "x.xlsx", settings.SYNC_FILE_PATH_ROOT)
os.makedirs(os.path.dirname(file), exist_ok=True)
with open(file, 'wb') as destination:
for chunk in uploaded_file.chunks():
destination.write(chunk)
self.file = file
self.update_user_import_status(2)
jid = f'Douban_{self.user.id}_{os.path.basename(self.file)}'
django_rq.get_queue('doufen').enqueue(self.import_from_file_task, job_id=jid)
except Exception:
return False
# self.import_from_file_task(file, user, visibility)
return True
mark_sheet_config = {
'想读': [MarkStatusEnum.WISH, DoubanBookScraper, Book, BookMark, BookTag],
'在读': [MarkStatusEnum.DO, DoubanBookScraper, Book, BookMark, BookTag],
'读过': [MarkStatusEnum.COLLECT, DoubanBookScraper, Book, BookMark, BookTag],
'想看': [MarkStatusEnum.WISH, DoubanMovieScraper, Movie, MovieMark, MovieTag],
'在看': [MarkStatusEnum.DO, DoubanMovieScraper, Movie, MovieMark, MovieTag],
'看过': [MarkStatusEnum.COLLECT, DoubanMovieScraper, Movie, MovieMark, MovieTag],
'想听': [MarkStatusEnum.WISH, DoubanAlbumScraper, Album, AlbumMark, AlbumTag],
'在听': [MarkStatusEnum.DO, DoubanAlbumScraper, Album, AlbumMark, AlbumTag],
'听过': [MarkStatusEnum.COLLECT, DoubanAlbumScraper, Album, AlbumMark, AlbumTag],
'想玩': [MarkStatusEnum.WISH, DoubanGameScraper, Game, GameMark, GameTag],
'在玩': [MarkStatusEnum.DO, DoubanGameScraper, Game, GameMark, GameTag],
'玩过': [MarkStatusEnum.COLLECT, DoubanGameScraper, Game, GameMark, GameTag],
}
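# each mark_sheet_config value is [status, scraper class, entity model, mark model, tag model],
# keyed by the sheet name used in a Douban export workbook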
review_sheet_config = {
'书评': [DoubanBookScraper, Book, BookReview],
'影评': [DoubanMovieScraper, Movie, MovieReview],
'乐评': [DoubanAlbumScraper, Album, AlbumReview],
'游戏评论&攻略': [DoubanGameScraper, Game, GameReview],
}
mark_data = {}
review_data = {}
entity_lookup = {}
def load_sheets(self):
f = open(self.file, 'rb')
wb = openpyxl.load_workbook(f, read_only=True, data_only=True, keep_links=False)
for data, config in [(self.mark_data, self.mark_sheet_config), (self.review_data, self.review_sheet_config)]:
for name in config:
data[name] = []
if name in wb:
print(f'{self.user} parsing {name}')
for row in wb[name].iter_rows(min_row=2, values_only=True):
cells = [cell for cell in row]
if len(cells) > 6:
data[name].append(cells)
for sheet in self.mark_data.values():
for cells in sheet:
# entity_lookup["title|rating"] = [(url, time), ...]
k = f'{cells[0]}|{cells[5]}'
v = (cells[3], cells[4])
if k in self.entity_lookup:
self.entity_lookup[k].append(v)
else:
self.entity_lookup[k] = [v]
self.total = sum(map(lambda a: len(a), self.review_data.values()))
def guess_entity_url(self, title, rating, timestamp):
k = f'{title}|{rating}'
if k not in self.entity_lookup:
return None
v = self.entity_lookup[k]
if len(v) > 1:
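# several marks share the same title and rating; pick the candidate whose mark
# time is closest to the review timestamp (times are in Asia/Shanghai)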
v.sort(key=lambda c: abs(timestamp - (datetime.strptime(c[1], "%Y-%m-%d %H:%M:%S") if type(c[1])==str else c[1]).replace(tzinfo=tz_sh)))
return v[0][0]
# for sheet in self.mark_data.values():
# for cells in sheet:
# if cells[0] == title and cells[5] == rating:
# return cells[3]
def import_from_file_task(self):
print(f'{self.user} import start')
msg.info(self.user, f'开始导入豆瓣评论')
self.update_user_import_status(1)
self.load_sheets()
print(f'{self.user} sheet loaded, {self.total} lines total')
self.update_user_import_status(1)
for name, param in self.review_sheet_config.items():
self.import_review_sheet(self.review_data[name], param[0], param[1], param[2])
self.update_user_import_status(0)
msg.success(self.user, f'豆瓣评论导入完成,共处理{self.total}篇,已存在{self.skipped}篇,新增{self.imported}篇。')
if len(self.failed):
msg.error(self.user, f'豆瓣评论导入时未能处理以下网址:\n{" , ".join(self.failed)}')
def import_review_sheet(self, worksheet, scraper, entity_class, review_class):
prefix = f'{self.user} |'
if worksheet is None: # or worksheet.max_row < 2:
print(f'{prefix} {review_class.__name__} empty sheet')
return
for cells in worksheet:
if len(cells) < 6:
continue
title = cells[0]
entity_title = re.sub('^《', '', re.sub('》$', '', cells[1]))
review_url = cells[2]
time = cells[3]
rating = cells[4]
content = cells[6]
self.processed += 1
if time:
if type(time) == str:
time = datetime.strptime(time, "%Y-%m-%d %H:%M:%S")
time = time.replace(tzinfo=tz_sh)
else:
time = None
if not content:
content = ""
if not title:
title = ""
r = self.import_review(entity_title, rating, title, review_url, content, time, scraper, entity_class, review_class)
if r == 1:
self.imported += 1
elif r == 2:
self.skipped += 1
else:
self.failed.append(review_url)
self.update_user_import_status(1)
def import_review(self, entity_title, rating, title, review_url, content, time, scraper, entity_class, review_class):
# return 1: done / 2: skipped / None: failed
prefix = f'{self.user} |'
url = self.guess_entity_url(entity_title, rating, time)
if url is None:
print(f'{prefix} fetching {review_url}')
try:
if settings.SCRAPESTACK_KEY is not None:
_review_url = f'http://api.scrapestack.com/scrape?access_key={settings.SCRAPESTACK_KEY}&url={review_url}'
else:
_review_url = review_url
r = requests.get(_review_url, timeout=settings.SCRAPING_TIMEOUT)
if r.status_code != 200:
print(f'{prefix} fetching error {review_url} {r.status_code}')
return
h = html.fromstring(r.content.decode('utf-8'))
for u in h.xpath("//header[@class='main-hd']/a/@href"):
if '.douban.com/subject/' in u:
url = u
if not url:
print(f'{prefix} fetching error {review_url} unable to locate entity url')
return
except Exception:
print(f'{prefix} fetching exception {review_url}')
return
try:
entity = entity_class.objects.get(source_url=url)
print(f'{prefix} matched {url}')
except ObjectDoesNotExist:
try:
print(f'{prefix} scraping {url}')
scraper.scrape(url)
form = scraper.save(request_user=self.user)
entity = form.instance
except Exception as e:
print(f"{prefix} scrape failed: {url} {e}")
logger.error(f"{prefix} scrape failed: {url}", exc_info=e)
return
params = {
'owner': self.user,
entity_class.__name__.lower(): entity
}
if review_class.objects.filter(**params).exists():
return 2
content = re.sub(r'<span style="font-weight: bold;">([^<]+)</span>', r'<b>\1</b>', content)
content = re.sub(r'(<img [^>]+>)', r'\1<br>', content)
content = re.sub(r'<div class="image-caption">([^<]+)</div>', r'<br><i>\1</i><br>', content)
content = md(content)
content = re.sub(r'(?<=!\[\]\()([^)]+)(?=\))', lambda x: fetch_remote_image(x[1]), content)
params = {
'owner': self.user,
'created_time': time,
'edited_time': time,
'title': title,
'content': content,
'visibility': self.visibility,
entity_class.__name__.lower(): entity,
}
review_class.objects.create(**params)
return 1
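# Typical flow (sketch): DoubanImporter(user, visibility).import_from_file(upload)
# stores the uploaded .xlsx, then enqueues import_from_file_task on the 'doufen'
# queue; progress is surfaced through user.preference.import_status.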


@ -0,0 +1,202 @@
import re
import requests
from lxml import html
from datetime import datetime
# from common.scrapers.goodreads import GoodreadsScraper
from common.scraper import get_scraper_by_url
from books.models import Book, BookMark
from collection.models import Collection
from common.models import MarkStatusEnum
from django.conf import settings
from user_messages import api as msg
import django_rq
from django.utils.timezone import make_aware
re_list = r'^https://www.goodreads.com/list/show/\d+'
re_shelf = r'^https://www.goodreads.com/review/list/\d+[^?]*\?shelf=[^&]+'
re_profile = r'^https://www.goodreads.com/user/show/(\d+)'
gr_rating = {
'did not like it': 2,
'it was ok': 4,
'liked it': 6,
'really liked it': 8,
'it was amazing': 10
}
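# Goodreads star tooltips mapped onto the 10-point scale used here
# (1 star = 2 ... 5 stars = 10)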
class GoodreadsImporter:
@classmethod
def import_from_url(self, raw_url, user):
match_list = re.match(re_list, raw_url)
match_shelf = re.match(re_shelf, raw_url)
match_profile = re.match(re_profile, raw_url)
if match_profile or match_shelf or match_list:
django_rq.get_queue('doufen').enqueue(self.import_from_url_task, raw_url, user)
return True
else:
return False
@classmethod
def import_from_url_task(cls, url, user):
match_list = re.match(re_list, url)
match_shelf = re.match(re_shelf, url)
match_profile = re.match(re_profile, url)
total = 0
if match_list or match_shelf:
shelf = cls.parse_shelf(match_shelf[0], user) if match_shelf else cls.parse_list(match_list[0], user)
if shelf['title'] and shelf['books']:
collection = Collection.objects.create(title=shelf['title'],
description=shelf['description'] + '\n\nImported from [Goodreads](' + url + ')',
owner=user)
for book in shelf['books']:
collection.append_item(book['book'], book['review'])
total += 1
collection.save()
msg.success(user, f'成功从Goodreads导入包含{total}本书的收藏单{shelf["title"]}')
elif match_profile:
uid = match_profile[1]
shelves = {
MarkStatusEnum.WISH: f'https://www.goodreads.com/review/list/{uid}?shelf=to-read',
MarkStatusEnum.DO: f'https://www.goodreads.com/review/list/{uid}?shelf=currently-reading',
MarkStatusEnum.COLLECT: f'https://www.goodreads.com/review/list/{uid}?shelf=read',
}
for status in shelves:
shelf_url = shelves.get(status)
shelf = cls.parse_shelf(shelf_url, user)
for book in shelf['books']:
params = {
'owner': user,
'rating': book['rating'],
'text': book['review'],
'status': status,
'visibility': 0,
'book': book['book'],
}
if book['last_updated']:
params['created_time'] = book['last_updated']
params['edited_time'] = book['last_updated']
try:
mark = BookMark.objects.create(**params)
mark.book.update_rating(None, mark.rating)
except Exception:
print(f'Skip mark for {book["book"]}')
pass
total += 1
msg.success(user, f'成功从Goodreads用户主页导入{total}个标记。')
@classmethod
def parse_shelf(cls, url, user): # return {'title': 'abc', books: [{'book': obj, 'rating': 10, 'review': 'txt'}, ...]}
title = None
books = []
url_shelf = url + '&view=table'
while url_shelf:
print(f'Shelf loading {url_shelf}')
r = requests.get(url_shelf, timeout=settings.SCRAPING_TIMEOUT)
if r.status_code != 200:
print(f'Shelf loading error {url_shelf}')
break
url_shelf = None
content = html.fromstring(r.content.decode('utf-8'))
title_elem = content.xpath("//span[@class='h1Shelf']/text()")
if not title_elem:
print(f'Shelf parsing error {url_shelf}')
break
title = title_elem[0].strip()
print("Shelf title: " + title)
for cell in content.xpath("//tbody[@id='booksBody']/tr"):
url_book = 'https://www.goodreads.com' + \
cell.xpath(
".//td[@class='field title']//a/@href")[0].strip()
# has_review = cell.xpath(
# ".//td[@class='field actions']//a/text()")[0].strip() == 'view (with text)'
rating_elem = cell.xpath(
".//td[@class='field rating']//span/@title")
rating = gr_rating.get(
rating_elem[0].strip()) if rating_elem else None
url_review = 'https://www.goodreads.com' + \
cell.xpath(
".//td[@class='field actions']//a/@href")[0].strip()
review = ''
last_updated = None
try:
r2 = requests.get(
url_review, timeout=settings.SCRAPING_TIMEOUT)
if r2.status_code == 200:
c2 = html.fromstring(r2.content.decode('utf-8'))
review_elem = c2.xpath(
"//div[@itemprop='reviewBody']/text()")
review = '\n'.join(
p.strip() for p in review_elem) if review_elem else ''
date_elem = c2.xpath(
"//div[@class='readingTimeline__text']/text()")
for d in date_elem:
date_matched = re.search(r'(\w+)\s+(\d+),\s+(\d+)', d)
if date_matched:
last_updated = make_aware(datetime.strptime(date_matched[1] + ' ' + date_matched[2] + ' ' + date_matched[3], '%B %d %Y'))
else:
print(f"Error loading review{url_review}, ignored")
scraper = get_scraper_by_url(url_book)
url_book = scraper.get_effective_url(url_book)
book = Book.objects.filter(source_url=url_book).first()
if not book:
print("add new book " + url_book)
scraper.scrape(url_book)
form = scraper.save(request_user=user)
book = form.instance
books.append({
'url': url_book,
'book': book,
'rating': rating,
'review': review,
'last_updated': last_updated
})
except Exception:
print("Error adding " + url_book)
pass # likely just download error
next_elem = content.xpath("//a[@class='next_page']/@href")
url_shelf = ('https://www.goodreads.com' + next_elem[0].strip()) if next_elem else None
return {'title': title, 'description': '', 'books': books}
@classmethod
def parse_list(cls, url, user): # return {'title': 'abc', books: [{'book': obj, 'rating': 10, 'review': 'txt'}, ...]}
title = None
books = []
url_shelf = url
while url_shelf:
print(f'List loading {url_shelf}')
r = requests.get(url_shelf, timeout=settings.SCRAPING_TIMEOUT)
if r.status_code != 200:
print(f'List loading error {url_shelf}')
break
url_shelf = None
content = html.fromstring(r.content.decode('utf-8'))
title_elem = content.xpath('//h1[@class="gr-h1 gr-h1--serif"]/text()')
if not title_elem:
print(f'List parsing error {url_shelf}')
break
title = title_elem[0].strip()
description = content.xpath('//div[@class="mediumText"]/text()')[0].strip()
print("List title: " + title)
for link in content.xpath('//a[@class="bookTitle"]/@href'):
url_book = 'https://www.goodreads.com' + link
try:
scraper = get_scraper_by_url(url_book)
url_book = scraper.get_effective_url(url_book)
book = Book.objects.filter(source_url=url_book).first()
if not book:
print("add new book " + url_book)
scraper.scrape(url_book)
form = scraper.save(request_user=user)
book = form.instance
books.append({
'url': url_book,
'book': book,
'review': '',
})
except Exception:
print("Error adding " + url_book)
pass # likely just download error
next_elem = content.xpath("//a[@class='next_page']/@href")
url_shelf = ('https://www.goodreads.com' + next_elem[0].strip()) if next_elem else None
return {'title': title, 'description': description, 'books': books}

common/index.py Normal file

@ -0,0 +1,12 @@
from django.conf import settings
if settings.SEARCH_BACKEND == 'MEILISEARCH':
from .search.meilisearch import Indexer
elif settings.SEARCH_BACKEND == 'TYPESENSE':
from .search.typesense import Indexer
else:
class Indexer:
@classmethod
def update_model_indexable(cls, model):
# no search backend configured: indexing hooks become no-ops
pass
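# assumption: models register themselves via e.g. Indexer.update_model_indexable(Book);
# with no SEARCH_BACKEND configured this becomes a no-op and the classic
# database-backed search path is used instead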


@ -0,0 +1,19 @@
from django.core.management.base import BaseCommand
import pprint
from redis import Redis
from rq.job import Job
from rq import Queue
class Command(BaseCommand):
help = 'Delete a job'
def add_arguments(self, parser):
parser.add_argument('job_id', type=str, help='Job ID')
def handle(self, *args, **options):
redis = Redis()
job_id = str(options['job_id'])
job = Job.fetch(job_id, connection=redis)
job.delete()
self.stdout.write(self.style.SUCCESS(f'Deleted {job}'))
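# usage sketch (the command file name is not shown in this diff; assuming it is
# registered as delete_job):  python manage.py delete_job <job_id>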


@ -0,0 +1,40 @@
from django.core.management.base import BaseCommand
from common.index import Indexer
from django.conf import settings
from movies.models import Movie
from books.models import Book
from games.models import Game
from music.models import Album, Song
from django.core.paginator import Paginator
from tqdm import tqdm
from time import sleep
from datetime import timedelta
from django.utils import timezone
class Command(BaseCommand):
help = 'Check search index'
def handle(self, *args, **options):
print(f'Connecting to search server')
stats = Indexer.get_stats()
print(stats)
st = Indexer.instance().get_all_update_status()
cnt = {"enqueued": [0, 0], "processing": [0, 0], "processed": [0, 0], "failed": [0, 0]}
lastEnq = {"enqueuedAt": ""}
lastProc = {"enqueuedAt": ""}
for s in st:
n = s["type"].get("number")
cnt[s["status"]][0] += 1
cnt[s["status"]][1] += n if n else 0
if s["status"] == "processing":
print(s)
elif s["status"] == "enqueued":
if s["enqueuedAt"] > lastEnq["enqueuedAt"]:
lastEnq = s
elif s["status"] == "processed":
if s["enqueuedAt"] > lastProc["enqueuedAt"]:
lastProc = s
print(lastEnq)
print(lastProc)
print(cnt)


@ -0,0 +1,18 @@
from django.core.management.base import BaseCommand
from common.index import Indexer
from django.conf import settings
class Command(BaseCommand):
help = 'Initialize the search index'
def handle(self, *args, **options):
print(f'Connecting to search server')
Indexer.init()
self.stdout.write(self.style.SUCCESS('Index created.'))
# try:
# Indexer.init()
# self.stdout.write(self.style.SUCCESS('Index created.'))
# except Exception:
# Indexer.update_settings()
# self.stdout.write(self.style.SUCCESS('Index settings updated.'))


@ -0,0 +1,24 @@
from django.core.management.base import BaseCommand
import pprint
from redis import Redis
from rq.job import Job
from rq import Queue
class Command(BaseCommand):
help = 'Show jobs in queue'
def add_arguments(self, parser):
parser.add_argument('queue', type=str, help='Queue')
def handle(self, *args, **options):
redis = Redis()
queue = Queue(str(options['queue']), connection=redis)
for registry in [queue.started_job_registry, queue.deferred_job_registry, queue.finished_job_registry, queue.failed_job_registry, queue.scheduled_job_registry]:
self.stdout.write(self.style.SUCCESS(f'Registry {registry}'))
for job_id in registry.get_job_ids():
try:
job = Job.fetch(job_id, connection=redis)
pprint.pp(job)
except Exception as e:
print(f'Error fetching {job_id}: {e}')


@ -0,0 +1,40 @@
from django.core.management.base import BaseCommand
from common.index import Indexer
from django.conf import settings
from movies.models import Movie
from books.models import Book
from games.models import Game
from music.models import Album, Song
from django.core.paginator import Paginator
from tqdm import tqdm
from time import sleep
from datetime import timedelta
from django.utils import timezone
BATCH_SIZE = 1000
class Command(BaseCommand):
help = 'Regenerate the search index'
# def add_arguments(self, parser):
# parser.add_argument('hours', type=int, help='Re-index items modified in last N hours, 0 to reindex all')
def handle(self, *args, **options):
# h = int(options['hours'])
print(f'Connecting to search server')
if Indexer.busy():
print('Please wait for previous updates')
# Indexer.update_settings()
# self.stdout.write(self.style.SUCCESS('Index settings updated.'))
for c in [Book, Song, Album, Game, Movie]:
print(f'Re-indexing {c}')
qs = c.objects.all() # if h == 0 else c.objects.filter(edited_time__gt=timezone.now() - timedelta(hours=h))
pg = Paginator(qs.order_by('id'), BATCH_SIZE)
for p in tqdm(pg.page_range):
items = list(map(lambda o: Indexer.obj_to_dict(o), pg.get_page(p).object_list))
if items:
Indexer.replace_batch(items)
while Indexer.busy():
sleep(0.5)


@ -0,0 +1,28 @@
from django.core.management.base import BaseCommand
from redis import Redis
from rq.job import Job
from sync.models import SyncTask
from sync.jobs import import_doufen_task
from django.utils import timezone
import django_rq
class Command(BaseCommand):
help = 'Restart a sync task'
def add_arguments(self, parser):
parser.add_argument('synctask_id', type=int, help='Sync Task ID')
def handle(self, *args, **options):
task = SyncTask.objects.get(id=options['synctask_id'])
task.finished_items = 0
task.failed_urls = []
task.success_items = 0
task.total_items = 0
task.is_finished = False
task.is_failed = False
task.break_point = ''
task.started_time = timezone.now()
task.save()
django_rq.get_queue('doufen').enqueue(import_doufen_task, task, job_id=f'SyncTask_{task.id}')
self.stdout.write(self.style.SUCCESS(f'Queued {task}'))


@ -0,0 +1,25 @@
from django.core.management.base import BaseCommand
from common.scraper import get_scraper_by_url, get_normalized_url
import pprint
class Command(BaseCommand):
help = 'Scrape an item from URL (but not save it)'
def add_arguments(self, parser):
parser.add_argument('url', type=str, help='URL to scrape')
def handle(self, *args, **options):
url = str(options['url'])
url = get_normalized_url(url)
scraper = get_scraper_by_url(url)
if scraper is None:
self.stdout.write(self.style.ERROR(f'Unable to match a scraper for {url}'))
return
effective_url = scraper.get_effective_url(url)
self.stdout.write(f'Fetching {effective_url} via {scraper.__name__}')
data, img = scraper.scrape(effective_url)
self.stdout.write(self.style.SUCCESS(f'Done.'))
pprint.pp(data)
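# usage sketch (assuming the command file is named scrape.py):
#   python manage.py scrape https://movie.douban.com/subject/1/
# prints the parsed fields without saving anything to the database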


@ -1,29 +1,34 @@
import re
from decimal import *
from markdown import markdown
from django.utils.translation import ugettext_lazy as _
from django.utils.translation import gettext_lazy as _
from django.db import models, IntegrityError
from django.core.serializers.json import DjangoJSONEncoder
from django.db.models import Q
from django.db.models import Q, Count, Sum
from markdownx.models import MarkdownxField
from users.models import User
from mastodon.api import get_relationships, get_cross_site_id
from boofilsic.settings import CLIENT_NAME
from django.utils import timezone
from django.conf import settings
RE_HTML_TAG = re.compile(r"<[^>]*>")
MAX_TOP_TAGS = 5
# abstract base classes
###################################
class SourceSiteEnum(models.TextChoices):
IN_SITE = "in-site", CLIENT_NAME
IN_SITE = "in-site", settings.CLIENT_NAME
DOUBAN = "douban", _("豆瓣")
SPOTIFY = "spotify", _("Spotify")
IMDB = "imdb", _("IMDb")
STEAM = "steam", _("STEAM")
BANGUMI = 'bangumi', _("bangumi")
GOODREADS = "goodreads", _("goodreads")
TMDB = "tmdb", _("The Movie Database")
GOOGLEBOOKS = "googlebooks", _("Google Books")
BANDCAMP = "bandcamp", _("BandCamp")
IGDB = "igdb", _("IGDB")
class Entity(models.Model):
@ -52,10 +57,25 @@ class Entity(models.Model):
rating__lte=10), name='%(class)s_rating_upperbound'),
]
def get_absolute_url(self):
raise NotImplementedError("Subclass should implement this method")
@property
def url(self):
return settings.APP_WEBSITE + self.get_absolute_url()
def get_json(self):
return {
'title': self.title,
'brief': self.brief,
'rating': self.rating,
'url': self.url,
'cover_url': settings.APP_WEBSITE + self.cover.url,
'top_tags': self.tags[:5],
'category_name': self.verbose_category_name,
'other_info': self.other_info,
}
def save(self, *args, **kwargs):
""" update rating and strip source url scheme & querystring before save to db """
if self.rating_number and self.rating_total_score:
@ -108,6 +128,15 @@ class Entity(models.Model):
self.calculate_rating(old_rating, new_rating)
self.save()
def refresh_rating(self): # TODO: replace update_rating()
a = self.marks.filter(rating__gt=0).aggregate(Sum('rating'), Count('rating'))
if self.rating_total_score != a['rating__sum'] or self.rating_number != a['rating__count']:
self.rating_total_score = a['rating__sum']
self.rating_number = a['rating__count']
self.rating = a['rating__sum'] / a['rating__count'] if a['rating__count'] > 0 else None
self.save()
return self.rating
def get_tags_manager(self):
"""
Since relation between tag and entity is foreign key, and related name has to be unique,
@ -115,6 +144,10 @@ class Entity(models.Model):
"""
raise NotImplementedError("Subclass should implement this method.")
@property
def top_tags(self):
return self.get_tags_manager().values('content').annotate(tag_frequency=Count('content')).order_by('-tag_frequency')[:MAX_TOP_TAGS]
def get_marks_manager(self):
"""
Normally this won't be used.
@ -129,6 +162,19 @@ class Entity(models.Model):
"""
raise NotImplementedError("Subclass should implement this method.")
@property
def all_tag_list(self):
return self.get_tags_manager().values('content').annotate(frequency=Count('content')).order_by('-frequency')
@property
def tags(self):
return list(map(lambda t: t['content'], self.all_tag_list))
@property
def marks(self):
params = {self.__class__.__name__.lower() + '_id': self.id}
return self.mark_class.objects.filter(**params)
@classmethod
def get_category_mapping_dict(cls):
category_mapping_dict = {}
@ -144,74 +190,63 @@ class Entity(models.Model):
def verbose_category_name(self):
raise NotImplementedError("Subclass should implement this.")
@property
def mark_class(self):
raise NotImplementedError("Subclass should implement this.")
@property
def tag_class(self):
raise NotImplementedError("Subclass should implement this.")
class UserOwnedEntity(models.Model):
is_private = models.BooleanField()
owner = models.ForeignKey(
User, on_delete=models.CASCADE, related_name='user_%(class)ss')
is_private = models.BooleanField(default=False, null=True) # first set allow null, then migration, finally (in a few days) remove for good
visibility = models.PositiveSmallIntegerField(default=0) # 0: Public / 1: Follower only / 2: Self only
owner = models.ForeignKey(User, on_delete=models.CASCADE, related_name='user_%(class)ss')
created_time = models.DateTimeField(default=timezone.now)
edited_time = models.DateTimeField(default=timezone.now)
class Meta:
abstract = True
def is_visible_to(self, viewer):
if not viewer.is_authenticated:
return self.visibility == 0
owner = self.owner
if owner == viewer:
return True
if not owner.is_active:
return False
if self.visibility == 2:
return False
if viewer.is_blocking(owner) or owner.is_blocking(viewer) or viewer.is_muting(owner):
return False
if self.visibility == 1:
return viewer.is_following(owner)
else:
return True
def is_editable_by(self, viewer):
return True if viewer.is_staff or viewer.is_superuser or viewer == self.owner else False
@classmethod
def get_available(cls, entity, request_user, token):
# TODO add amount limit for once query
"""
Returns all available user-owned entities related to the given entity.
This method handles mute/block relationships and private/public visibilities.
"""
# the foreign key field that points to entity
# has to be named as the lower case name of that entity
def get_available(cls, entity, request_user, following_only=False):
# e.g. SongMark.get_available(song, request.user)
query_kwargs = {entity.__class__.__name__.lower(): entity}
user_owned_entities = cls.objects.filter(
**query_kwargs).order_by("-edited_time")
# every user should only be able to have one user-owned entity for each entity
# this is guaranteed by models
id_list = []
# none_index tracks those failed cross site id query
none_index = []
for (i, entity) in enumerate(user_owned_entities):
if entity.owner.mastodon_site == request_user.mastodon_site:
id_list.append(entity.owner.mastodon_id)
else:
# TODO there could be many requests therefore make the pulling asynchronized
cross_site_id = get_cross_site_id(
entity.owner, request_user.mastodon_site, token)
if not cross_site_id is None:
id_list.append(cross_site_id)
else:
none_index.append(i)
# populate those query-failed None positions
# to ensure the consistency of the orders of
# the three(id_list, user_owned_entities, relationships)
id_list.append(request_user.mastodon_id)
# Mastodon request
relationships = get_relationships(
request_user.mastodon_site, id_list, token)
mute_block_blocked_index = []
following_index = []
for i, r in enumerate(relationships):
# the order of relationships corresponds to id_list,
# and the order of id_list is the same as user_owned_entities
if r['blocking'] or r['blocked_by'] or r['muting']:
mute_block_blocked_index.append(i)
if r['following']:
following_index.append(i)
available_entities = [
e for i, e in enumerate(user_owned_entities)
if ((e.is_private == True and i in following_index) or e.is_private == False or e.owner == request_user)
and not i in mute_block_blocked_index and not i in none_index
]
return available_entities
all_entities = cls.objects.filter(**query_kwargs).order_by("-created_time") # get all marks for song
visible_entities = list(filter(lambda _entity: _entity.is_visible_to(request_user) and (_entity.owner.mastodon_username in request_user.mastodon_following if following_only else True), all_entities))
return visible_entities
@classmethod
def get_available_by_user(cls, owner, is_following):
def get_available_for_identicals(cls, entity, request_user, following_only=False):
# e.g. SongMark.get_available(song, request.user)
query_kwargs = {entity.__class__.__name__.lower() + '__in': entity.get_identicals()}
all_entities = cls.objects.filter(**query_kwargs).order_by("-created_time") # get all marks for song
visible_entities = list(filter(lambda _entity: _entity.is_visible_to(request_user) and (_entity.owner.mastodon_username in request_user.mastodon_following if following_only else True), all_entities))
return visible_entities
@classmethod
def get_available_by_user(cls, owner, is_following): # FIXME
"""
Returns all available entities owned by the given owner.
Mute/Block relation is not handled in this method.
@ -220,10 +255,17 @@ class UserOwnedEntity(models.Model):
:param is_following: if the current user is following the owner
"""
user_owned_entities = cls.objects.filter(owner=owner)
if not is_following:
user_owned_entities = user_owned_entities.exclude(is_private=True)
if is_following:
user_owned_entities = user_owned_entities.exclude(visibility=2)
else:
user_owned_entities = user_owned_entities.filter(visibility=0)
return user_owned_entities
@property
def item(self):
attr = re.findall(r'[A-Z](?:[a-z]+|[A-Z]*(?=[A-Z]|$))', self.__class__.__name__)[0].lower()
return getattr(self, attr)
# commonly used entity classes
###################################
@ -236,10 +278,20 @@ class MarkStatusEnum(models.TextChoices):
class Mark(UserOwnedEntity):
status = models.CharField(choices=MarkStatusEnum.choices, max_length=20)
rating = models.PositiveSmallIntegerField(blank=True, null=True)
text = models.CharField(max_length=500, blank=True, default='')
text = models.CharField(max_length=5000, blank=True, default='')
shared_link = models.CharField(max_length=5000, blank=True, default='')
def __str__(self):
return f"({self.id}) {self.owner} {self.status.upper()}"
return f"Mark({self.id} {self.owner} {self.status.upper()})"
@property
def translated_status(self):
raise NotImplementedError("Subclass should implement this.")
@property
def tags(self):
tags = self.item.tag_class.objects.filter(mark_id=self.id)
return tags
class Meta:
abstract = True
@ -257,6 +309,7 @@ class Mark(UserOwnedEntity):
class Review(UserOwnedEntity):
title = models.CharField(max_length=120)
content = MarkdownxField()
shared_link = models.CharField(max_length=5000, blank=True, default='')
def __str__(self):
return self.title
@ -271,6 +324,10 @@ class Review(UserOwnedEntity):
class Meta:
abstract = True
@property
def translated_status(self):
return '评论了'
class Tag(models.Model):
content = models.CharField(max_length=50)
@ -278,5 +335,28 @@ class Tag(models.Model):
def __str__(self):
return self.content
@property
def edited_time(self):
return self.mark.edited_time
@property
def created_time(self):
return self.mark.created_time
@property
def text(self):
return self.mark.text
@classmethod
def find_by_user(cls, tag, owner, viewer):
qs = cls.objects.filter(content=tag, mark__owner=owner)
if owner != viewer:
qs = qs.filter(mark__visibility__lte=owner.get_max_visibility(viewer))
return qs
@classmethod
def all_by_user(cls, owner):
return cls.objects.filter(mark__owner=owner).values('content').annotate(total=Count('content')).order_by('-total')
class Meta:
abstract = True
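# visibility semantics introduced above: 0 = public, 1 = followers only, 2 = self only;
# is_visible_to() additionally hides entries from blocked/muted viewers and
# hides everything owned by a disabled user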

File diff suppressed because it is too large


@ -0,0 +1,71 @@
import re
import dateparser
import json
from lxml import html
from common.models import SourceSiteEnum
from common.scraper import AbstractScraper
from music.models import Album
from music.forms import AlbumForm
class BandcampAlbumScraper(AbstractScraper):
site_name = SourceSiteEnum.BANDCAMP.value
# API URL
host = '.bandcamp.com/'
data_class = Album
form_class = AlbumForm
regex = re.compile(r"https://[a-zA-Z0-9\-\.]+/album/[^?#]+")
def scrape(self, url, response=None):
effective_url = self.get_effective_url(url)
if effective_url is None:
raise ValueError("not valid url")
if response is not None:
content = html.fromstring(response.content.decode('utf-8'))
else:
content = self.download_page(url, {})
try:
title = content.xpath("//h2[@class='trackTitle']/text()")[0].strip()
artist = [content.xpath("//div[@id='name-section']/h3/span/a/text()")[0].strip()]
except IndexError:
raise ValueError("given url contains no valid info")
genre = [] # TODO: parse tags
track_list = []
release_nodes = content.xpath("//div[@class='tralbumData tralbum-credits']/text()")
release_date = dateparser.parse(re.sub(r'releas\w+ ', '', release_nodes[0].strip())) if release_nodes else None
duration = None
company = None
brief_nodes = content.xpath("//div[@class='tralbumData tralbum-about']/text()")
brief = "".join(brief_nodes) if brief_nodes else None
cover_url = content.xpath("//div[@id='tralbumArt']/a/@href")[0].strip()
bandcamp_page_data = json.loads(content.xpath(
"//meta[@name='bc-page-properties']/@content")[0].strip())
other_info = {}
other_info['bandcamp_album_id'] = bandcamp_page_data['item_id']
raw_img, ext = self.download_image(cover_url, url)
data = {
'title': title,
'artist': artist,
'genre': genre,
'track_list': track_list,
'release_date': release_date,
'duration': duration,
'company': company,
'brief': brief,
'other_info': other_info,
'source_site': self.site_name,
'source_url': effective_url,
'cover_url': cover_url,
}
self.raw_data, self.raw_img, self.img_ext = data, raw_img, ext
return data, raw_img
@classmethod
def get_effective_url(cls, raw_url):
url = cls.regex.findall(raw_url)
return url[0] if len(url) > 0 else None
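# note: the regex above accepts any host, so customized Bandcamp domains are
# supported as well, e.g. https://artist.bandcamp.com/album/some-album (illustrative URL)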

common/scrapers/bangumi.py Normal file

@ -0,0 +1,199 @@
import re
from common.models import SourceSiteEnum
from movies.models import Movie, MovieGenreEnum
from movies.forms import MovieForm
from books.models import Book
from books.forms import BookForm
from music.models import Album, Song
from music.forms import AlbumForm, SongForm
from games.models import Game
from games.forms import GameForm
from common.scraper import *
from django.core.exceptions import ObjectDoesNotExist
def find_entity(source_url):
"""
for bangumi
"""
# to be added when new scrape method is implemented
result = Game.objects.filter(source_url=source_url)
if result:
return result[0]
else:
raise ObjectDoesNotExist
class BangumiScraper(AbstractScraper):
site_name = SourceSiteEnum.BANGUMI.value
host = 'bgm.tv'
# for interface coherence
data_class = type("FakeDataClass", (object,), {})()
data_class.objects = type("FakeObjectsClass", (object,), {})()
data_class.objects.get = find_entity
# should be set at scrape_* method
form_class = ''
regex = re.compile(r"https{0,1}://bgm\.tv/subject/\d+")
def scrape(self, url):
"""
This is the scraping portal
"""
headers = DEFAULT_REQUEST_HEADERS.copy()
headers['Host'] = self.host
content = self.download_page(url, headers)
# download image
img_url = 'http:' + content.xpath("//div[@class='infobox']//img[1]/@src")[0]
raw_img, ext = self.download_image(img_url, url)
# Test category
category_code = content.xpath("//div[@id='headerSearch']//option[@selected]/@value")[0]
handler_map = {
'1': self.scrape_book,
'2': self.scrape_movie,
'3': self.scrape_album,
'4': self.scrape_game
}
data = handler_map[category_code](self, content)
data['source_url'] = self.get_effective_url(url)
self.raw_data, self.raw_img, self.img_ext = data, raw_img, ext
return data, raw_img
def scrape_game(self, content):
self.data_class = Game
self.form_class = GameForm
title_elem = content.xpath("//a[@property='v:itemreviewed']/text()")
if not title_elem:
raise ValueError("no game info found on this page")
else:
title = title_elem[0].strip()
other_title_elem = content.xpath(
"//ul[@id='infobox']/li[child::span[contains(text(),'别名')]]/text()")
if not other_title_elem:
other_title_elem = content.xpath(
"//ul[@id='infobox']/li[child::span[contains(text(),'别名')]]/a/text()")
other_title = other_title_elem if other_title_elem else []
chinese_name_elem = content.xpath(
"//ul[@id='infobox']/li[child::span[contains(text(),'中文')]]/text()")
if not chinese_name_elem:
chinese_name_elem = content.xpath(
"//ul[@id='infobox']/li[child::span[contains(text(),'中文')]]/a/text()")
if chinese_name_elem:
chinese_name = chinese_name_elem[0]
# switch chinese name with original name
title, chinese_name = chinese_name, title
# the name appended here is actually the original title
other_title.append(chinese_name)
developer_elem = content.xpath(
"//ul[@id='infobox']/li[child::span[contains(text(),'开发')]]/text()")
if not developer_elem:
developer_elem = content.xpath(
"//ul[@id='infobox']/li[child::span[contains(text(),'开发')]]/a/text()")
developer = developer_elem if developer_elem else None
publisher_elem = content.xpath(
"//ul[@id='infobox']/li[child::span[contains(text(),'发行:')]]/text()")
if not publisher_elem:
publisher_elem = content.xpath(
"//ul[@id='infobox']/li[child::span[contains(text(),'发行:')]]/a/text()")
publisher = publisher_elem if publisher_elem else None
platform_elem = content.xpath(
"//ul[@id='infobox']/li[child::span[contains(text(),'平台')]]/text()")
if not platform_elem:
platform_elem = content.xpath(
"//ul[@id='infobox']/li[child::span[contains(text(),'平台')]]/a/text()")
platform = platform_elem if platform_elem else None
genre_elem = content.xpath(
"//ul[@id='infobox']/li[child::span[contains(text(),'类型')]]/text()")
if not genre_elem:
genre_elem = content.xpath(
"//ul[@id='infobox']/li[child::span[contains(text(),'类型')]]/a/text()")
genre = genre_elem if genre_elem else None
date_elem = content.xpath(
"//ul[@id='infobox']/li[child::span[contains(text(),'发行日期')]]/text()")
if not date_elem:
date_elem = content.xpath(
"//ul[@id='infobox']/li[child::span[contains(text(),'发行日期')]]/a/text()")
release_date = parse_date(date_elem[0]) if date_elem else None
brief = ''.join(content.xpath("//div[@property='v:summary']/text()"))
other_info = {}
other_elem = content.xpath(
"//ul[@id='infobox']/li[child::span[contains(text(),'人数')]]/text()")
if other_elem:
other_info['游玩人数'] = other_elem[0]
other_elem = content.xpath(
"//ul[@id='infobox']/li[child::span[contains(text(),'引擎')]]/text()")
if other_elem:
other_info['引擎'] = ' '.join(other_elem)
other_elem = content.xpath(
"//ul[@id='infobox']/li[child::span[contains(text(),'售价')]]/text()")
if other_elem:
other_info['售价'] = ' '.join(other_elem)
other_elem = content.xpath(
"//ul[@id='infobox']/li[child::span[contains(text(),'官方网站')]]/text()")
if other_elem:
other_info['网站'] = other_elem[0]
other_elem = content.xpath(
"//ul[@id='infobox']/li[child::span[contains(text(),'剧本')]]/a/text()") or content.xpath(
"//ul[@id='infobox']/li[child::span[contains(text(),'剧本')]]/text()")
if other_elem:
other_info['剧本'] = ' '.join(other_elem)
other_elem = content.xpath(
"//ul[@id='infobox']/li[child::span[contains(text(),'编剧')]]/a/text()") or content.xpath(
"//ul[@id='infobox']/li[child::span[contains(text(),'编剧')]]/text()")
if other_elem:
other_info['编剧'] = ' '.join(other_elem)
other_elem = content.xpath(
"//ul[@id='infobox']/li[child::span[contains(text(),'音乐')]]/a/text()") or content.xpath(
"//ul[@id='infobox']/li[child::span[contains(text(),'音乐')]]/text()")
if other_elem:
other_info['音乐'] = ' '.join(other_elem)
other_elem = content.xpath(
"//ul[@id='infobox']/li[child::span[contains(text(),'美术')]]/a/text()") or content.xpath(
"//ul[@id='infobox']/li[child::span[contains(text(),'美术')]]/text()")
if other_elem:
other_info['美术'] = ' '.join(other_elem)
data = {
'title': title,
'other_title': None,
'developer': developer,
'publisher': publisher,
'release_date': release_date,
'genre': genre,
'platform': platform,
'brief': brief,
'other_info': other_info,
'source_site': self.site_name,
}
return data
def scrape_movie(self, content):
self.data_class = Movie
self.form_class = MovieForm
raise NotImplementedError
def scrape_book(self, content):
self.data_class = Book
self.form_class = BookForm
raise NotImplementedError
def scrape_album(self, content):
self.data_class = Album
self.form_class = AlbumForm
raise NotImplementedError

common/scrapers/douban.py Normal file

@ -0,0 +1,714 @@
import requests
import re
import filetype
from lxml import html
from common.models import SourceSiteEnum
from movies.models import Movie, MovieGenreEnum
from movies.forms import MovieForm
from books.models import Book
from books.forms import BookForm
from music.models import Album
from music.forms import AlbumForm
from games.models import Game
from games.forms import GameForm
from django.core.validators import URLValidator
from django.conf import settings
from PIL import Image
from io import BytesIO
from common.scraper import *
class DoubanScrapperMixin:
@classmethod
def download_page(cls, url, headers):
url = cls.get_effective_url(url)
r = None
error = 'DoubanScrapper: error occurred when downloading ' + url
content = None
last_error = None
def get(url):
nonlocal r
# print('Douban GET ' + url)
try:
r = requests.get(url, timeout=settings.SCRAPING_TIMEOUT)
except Exception as e:
r = requests.Response()
r.status_code = f"Exception when GET {url} {e}" + url
# print('Douban CODE ' + str(r.status_code))
return r
def check_content():
nonlocal r, error, content, last_error
content = None
last_error = None
if r.status_code == 200:
content = r.content.decode('utf-8')
if content.find('关于豆瓣') == -1:
if content.find('你的 IP 发出') == -1:
error = error + 'Content not authentic' # response is garbage
else:
error = error + 'IP banned'
content = None
last_error = 'network'
elif content.find('<title>页面不存在</title>') != -1 or content.find('呃... 你想访问的条目豆瓣不收录。') != -1: # re.search('不存在[^<]+</title>', content, re.MULTILINE):
content = None
last_error = 'censorship'
error = error + 'Not found or hidden by Douban'
elif r.status_code == 204:
content = None
last_error = 'censorship'
error = error + 'Not found or hidden by Douban'
else:
content = None
last_error = 'network'
error = error + str(r.status_code)
def fix_wayback_links():
nonlocal content
# fix links
content = re.sub(r'href="http[^"]+http', r'href="http', content)
# https://img9.doubanio.com/view/subject/{l|m|s}/public/s1234.jpg
content = re.sub(r'src="[^"]+/(s\d+\.\w+)"',
r'src="https://img9.doubanio.com/view/subject/m/public/\1"', content)
# https://img9.doubanio.com/view/photo/s_ratio_poster/public/p2681329386.jpg
# https://img9.doubanio.com/view/photo/{l|m|s}/public/p1234.webp
content = re.sub(r'src="[^"]+/(p\d+\.\w+)"',
r'src="https://img9.doubanio.com/view/photo/m/public/\1"', content)
# Wayback Machine: get latest available
def wayback():
nonlocal r, error, content
error = error + '\nWayback: '
get('http://archive.org/wayback/available?url=' + url)
if r.status_code == 200:
w = r.json()
if w['archived_snapshots'] and w['archived_snapshots']['closest']:
get(w['archived_snapshots']['closest']['url'])
check_content()
if content is not None:
fix_wayback_links()
else:
error = error + 'No snapshot available'
else:
error = error + str(r.status_code)
# Wayback Machine: guess via CDX API
def wayback_cdx():
nonlocal r, error, content
error = error + '\nWayback: '
get('http://web.archive.org/cdx/search/cdx?url=' + url)
if r.status_code == 200:
dates = re.findall(r'[^\s]+\s+(\d+)\s+[^\s]+\s+[^\s]+\s+\d+\s+[^\s]+\s+\d{5,}',
r.content.decode('utf-8'))
# assume snapshots whose size >9999 contain real content, use the latest one of them
if len(dates) > 0:
get('http://web.archive.org/web/' + dates[-1] + '/' + url)
check_content()
if content is not None:
fix_wayback_links()
else:
error = error + 'No snapshot available'
else:
error = error + str(r.status_code)
def latest():
nonlocal r, error, content
if settings.SCRAPESTACK_KEY is not None:
error = error + '\nScrapeStack: '
get(f'http://api.scrapestack.com/scrape?access_key={settings.SCRAPESTACK_KEY}&url={url}')
elif settings.SCRAPERAPI_KEY is not None:
error = error + '\nScraperAPI: '
get(f'http://api.scraperapi.com?api_key={settings.SCRAPERAPI_KEY}&url={url}')
else:
error = error + '\nDirect: '
get(url)
check_content()
if last_error == 'network' and settings.PROXYCRAWL_KEY is not None:
error = error + '\nProxyCrawl: '
get(f'https://api.proxycrawl.com/?token={settings.PROXYCRAWL_KEY}&url={url}')
check_content()
if last_error == 'censorship' and settings.LOCAL_PROXY is not None:
error = error + '\nLocal: '
get(f'{settings.LOCAL_PROXY}?url={url}')
check_content()
latest()
if content is None:
wayback_cdx()
if content is None:
raise RuntimeError(error)
# with open('/tmp/temp.html', 'w', encoding='utf-8') as fp:
# fp.write(content)
return html.fromstring(content)
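# retry ladder above: direct fetch (or ScrapeStack/ScraperAPI when configured),
# then ProxyCrawl on network errors, a local proxy when the item is hidden by
# Douban, and finally the Wayback Machine CDX snapshots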
@classmethod
def download_image(cls, url, item_url=None):
raw_img = None
ext = None
if settings.SCRAPESTACK_KEY is not None:
dl_url = f'http://api.scrapestack.com/scrape?access_key={settings.SCRAPESTACK_KEY}&url={url}'
elif settings.SCRAPERAPI_KEY is not None:
dl_url = f'http://api.scraperapi.com?api_key={settings.SCRAPERAPI_KEY}&url={url}'
else:
dl_url = url
try:
img_response = requests.get(dl_url, timeout=settings.SCRAPING_TIMEOUT)
if img_response.status_code == 200:
raw_img = img_response.content
img = Image.open(BytesIO(raw_img))
img.load() # corrupted image will trigger exception
content_type = img_response.headers.get('Content-Type')
ext = filetype.get_type(mime=content_type.partition(';')[0].strip()).extension
else:
logger.error(f"Douban: download image failed {img_response.status_code} {dl_url} {item_url}")
# raise RuntimeError(f"Douban: download image failed {img_response.status_code} {dl_url}")
except Exception as e:
raw_img = None
ext = None
logger.error(f"Douban: download image failed {e} {dl_url} {item_url}")
if raw_img is None and settings.PROXYCRAWL_KEY is not None:
try:
dl_url = f'https://api.proxycrawl.com/?token={settings.PROXYCRAWL_KEY}&url={url}'
img_response = requests.get(dl_url, timeout=settings.SCRAPING_TIMEOUT)
if img_response.status_code == 200:
raw_img = img_response.content
img = Image.open(BytesIO(raw_img))
img.load() # corrupted image will trigger exception
content_type = img_response.headers.get('Content-Type')
ext = filetype.get_type(mime=content_type.partition(';')[0].strip()).extension
else:
logger.error(f"Douban: download image failed {img_response.status_code} {dl_url} {item_url}")
except Exception as e:
raw_img = None
ext = None
logger.error(f"Douban: download image failed {e} {dl_url} {item_url}")
return raw_img, ext
class DoubanBookScraper(DoubanScrapperMixin, AbstractScraper):
site_name = SourceSiteEnum.DOUBAN.value
host = "book.douban.com"
data_class = Book
form_class = BookForm
regex = re.compile(r"https://book\.douban\.com/subject/\d+/{0,1}")
def scrape(self, url):
headers = DEFAULT_REQUEST_HEADERS.copy()
headers['Host'] = self.host
content = self.download_page(url, headers)
isbn_elem = content.xpath("//div[@id='info']//span[text()='ISBN:']/following::text()")
isbn = isbn_elem[0].strip() if isbn_elem else None
title_elem = content.xpath("/html/body//h1/span/text()")
title = title_elem[0].strip() if title_elem else None
if not title:
if isbn:
title = 'isbn: ' + isbn
else:
raise ValueError("given url contains no book title or isbn")
subtitle_elem = content.xpath(
"//div[@id='info']//span[text()='副标题:']/following::text()")
subtitle = subtitle_elem[0].strip()[:500] if subtitle_elem else None
orig_title_elem = content.xpath(
"//div[@id='info']//span[text()='原作名:']/following::text()")
orig_title = orig_title_elem[0].strip()[:500] if orig_title_elem else None
language_elem = content.xpath(
"//div[@id='info']//span[text()='语言:']/following::text()")
language = language_elem[0].strip() if language_elem else None
pub_house_elem = content.xpath(
"//div[@id='info']//span[text()='出版社:']/following::text()")
pub_house = pub_house_elem[0].strip() if pub_house_elem else None
pub_date_elem = content.xpath(
"//div[@id='info']//span[text()='出版年:']/following::text()")
pub_date = pub_date_elem[0].strip() if pub_date_elem else ''
year_month_day = RE_NUMBERS.findall(pub_date)
if len(year_month_day) in (2, 3):
pub_year = int(year_month_day[0])
pub_month = int(year_month_day[1])
elif len(year_month_day) == 1:
pub_year = int(year_month_day[0])
pub_month = None
else:
pub_year = None
pub_month = None
if pub_year and pub_month and pub_year < pub_month:
pub_year, pub_month = pub_month, pub_year
pub_year = None if pub_year is not None and pub_year not in range(
0, 3000) else pub_year
pub_month = None if pub_month is not None and pub_month not in range(1, 13) else pub_month
binding_elem = content.xpath(
"//div[@id='info']//span[text()='装帧:']/following::text()")
binding = binding_elem[0].strip() if binding_elem else None
price_elem = content.xpath(
"//div[@id='info']//span[text()='定价:']/following::text()")
price = price_elem[0].strip() if price_elem else None
pages_elem = content.xpath(
"//div[@id='info']//span[text()='页数:']/following::text()")
pages = pages_elem[0].strip() if pages_elem else None
if pages is not None:
pages = int(RE_NUMBERS.findall(pages)[
0]) if RE_NUMBERS.findall(pages) else None
if pages and (pages > 999999 or pages < 1):
pages = None
brief_elem = content.xpath(
"//h2/span[text()='内容简介']/../following-sibling::div[1]//div[@class='intro'][not(ancestor::span[@class='short'])]/p/text()")
brief = '\n'.join(p.strip()
for p in brief_elem) if brief_elem else None
contents = None
try:
contents_elem = content.xpath(
"//h2/span[text()='目录']/../following-sibling::div[1]")[0]
# if next the id of next sibling contains `dir`, that would be the full contents
if "dir" in contents_elem.getnext().xpath("@id")[0]:
contents_elem = contents_elem.getnext()
contents = '\n'.join(p.strip() for p in contents_elem.xpath(
"text()")[:-2]) if contents_elem else None
else:
contents = '\n'.join(p.strip() for p in contents_elem.xpath(
"text()")) if contents_elem else None
except Exception:
pass
img_url_elem = content.xpath("//*[@id='mainpic']/a/img/@src")
img_url = img_url_elem[0].strip() if img_url_elem else None
raw_img, ext = self.download_image(img_url, url)
# there are two html formats for authors and translators
authors_elem = content.xpath("""//div[@id='info']//span[text()='作者:']/following-sibling::br[1]/
preceding-sibling::a[preceding-sibling::span[text()='作者:']]/text()""")
if not authors_elem:
authors_elem = content.xpath(
"""//div[@id='info']//span[text()=' 作者']/following-sibling::a/text()""")
if authors_elem:
authors = []
for author in authors_elem:
authors.append(RE_WHITESPACES.sub(' ', author.strip())[:200])
else:
authors = None
translators_elem = content.xpath("""//div[@id='info']//span[text()='译者:']/following-sibling::br[1]/
preceding-sibling::a[preceding-sibling::span[text()='译者:']]/text()""")
if not translators_elem:
translators_elem = content.xpath(
"""//div[@id='info']//span[text()=' 译者']/following-sibling::a/text()""")
if translators_elem:
translators = []
for translator in translators_elem:
translators.append(RE_WHITESPACES.sub(' ', translator.strip()))
else:
translators = None
other = {}
cncode_elem = content.xpath(
"//div[@id='info']//span[text()='统一书号:']/following::text()")
if cncode_elem:
other['统一书号'] = cncode_elem[0].strip()
series_elem = content.xpath(
"//div[@id='info']//span[text()='丛书:']/following-sibling::a[1]/text()")
if series_elem:
other['丛书'] = series_elem[0].strip()
imprint_elem = content.xpath(
"//div[@id='info']//span[text()='出品方:']/following-sibling::a[1]/text()")
if imprint_elem:
other['出品方'] = imprint_elem[0].strip()
data = {
'title': title,
'subtitle': subtitle,
'orig_title': orig_title,
'author': authors,
'translator': translators,
'language': language,
'pub_house': pub_house,
'pub_year': pub_year,
'pub_month': pub_month,
'binding': binding,
'price': price,
'pages': pages,
'isbn': isbn,
'brief': brief,
'contents': contents,
'other_info': other,
'source_site': self.site_name,
'source_url': self.get_effective_url(url),
}
self.raw_data, self.raw_img, self.img_ext = data, raw_img, ext
return data, raw_img
class DoubanMovieScraper(DoubanScrapperMixin, AbstractScraper):
site_name = SourceSiteEnum.DOUBAN.value
host = 'movie.douban.com'
data_class = Movie
form_class = MovieForm
regex = re.compile(r"https://movie\.douban\.com/subject/\d+/{0,1}")
def scrape(self, url):
headers = DEFAULT_REQUEST_HEADERS.copy()
headers['Host'] = self.host
content = self.download_page(url, headers)
# parsing starts here
try:
raw_title = content.xpath(
"//span[@property='v:itemreviewed']/text()")[0].strip()
except IndexError:
raise ValueError("given url contains no movie info")
orig_title = content.xpath(
"//img[@rel='v:image']/@alt")[0].strip()
title = raw_title.split(orig_title)[0].strip()
# if there is no Chinese title
if title == '':
title = orig_title
if title == orig_title:
orig_title = None
# there are two html formats for authors and translators
other_title_elem = content.xpath(
"//div[@id='info']//span[text()='又名:']/following-sibling::text()[1]")
other_title = other_title_elem[0].strip().split(
' / ') if other_title_elem else None
imdb_elem = content.xpath(
"//div[@id='info']//span[text()='IMDb链接:']/following-sibling::a[1]/text()")
if not imdb_elem:
imdb_elem = content.xpath(
"//div[@id='info']//span[text()='IMDb:']/following-sibling::text()[1]")
imdb_code = imdb_elem[0].strip() if imdb_elem else None
director_elem = content.xpath(
"//div[@id='info']//span[text()='导演']/following-sibling::span[1]/a/text()")
director = director_elem if director_elem else None
playwright_elem = content.xpath(
"//div[@id='info']//span[text()='编剧']/following-sibling::span[1]/a/text()")
playwright = list(map(lambda a: a[:200], playwright_elem)) if playwright_elem else None
actor_elem = content.xpath(
"//div[@id='info']//span[text()='主演']/following-sibling::span[1]/a/text()")
actor = list(map(lambda a: a[:200], actor_elem)) if actor_elem else None
# construct genre translator
genre_translator = {}
attrs = [attr for attr in dir(MovieGenreEnum) if '__' not in attr]
for attr in attrs:
genre_translator[getattr(MovieGenreEnum, attr).label] = getattr(
MovieGenreEnum, attr).value
genre_elem = content.xpath("//span[@property='v:genre']/text()")
if genre_elem:
genre = []
for g in genre_elem:
g = g.split(' ')[0]
if g == '紀錄片': # likely some original data on douban was corrupted
g = '纪录片'
elif g == '鬼怪':
g = '惊悚'
if g in genre_translator:
genre.append(genre_translator[g])
elif g in genre_translator.values():
genre.append(g)
else:
logger.error(f'unable to map genre {g}')
else:
genre = None
showtime_elem = content.xpath(
"//span[@property='v:initialReleaseDate']/text()")
if showtime_elem:
showtime = []
for st in showtime_elem:
parts = st.split('(')
time = parts[0]
region = parts[1][0:-1] if len(parts) > 1 else ''
showtime.append({time: region})
else:
showtime = None
site_elem = content.xpath(
"//div[@id='info']//span[text()='官方网站:']/following-sibling::a[1]/@href")
site = site_elem[0].strip()[:200] if site_elem else None
try:
validator = URLValidator()
validator(site)
except ValidationError:
site = None
area_elem = content.xpath(
"//div[@id='info']//span[text()='制片国家/地区:']/following-sibling::text()[1]")
if area_elem:
area = [a.strip()[:100] for a in area_elem[0].split('/')]
else:
area = None
language_elem = content.xpath(
"//div[@id='info']//span[text()='语言:']/following-sibling::text()[1]")
if language_elem:
language = [a.strip() for a in language_elem[0].split(' / ')]
else:
language = None
year_elem = content.xpath("//span[@class='year']/text()")
year = int(re.search(r'\d+', year_elem[0])[0]) if year_elem and re.search(r'\d+', year_elem[0]) else None
duration_elem = content.xpath("//span[@property='v:runtime']/text()")
other_duration_elem = content.xpath(
"//span[@property='v:runtime']/following-sibling::text()[1]")
if duration_elem:
duration = duration_elem[0].strip()
if other_duration_elem:
duration += other_duration_elem[0].rstrip()
duration = duration.split('/')[0].strip()
else:
duration = None
season_elem = content.xpath(
"//*[@id='season']/option[@selected='selected']/text()")
if not season_elem:
season_elem = content.xpath(
"//div[@id='info']//span[text()='季数:']/following-sibling::text()[1]")
season = int(season_elem[0].strip()) if season_elem else None
else:
season = int(season_elem[0].strip())
episodes_elem = content.xpath(
"//div[@id='info']//span[text()='集数:']/following-sibling::text()[1]")
episodes = int(episodes_elem[0].strip()) if episodes_elem and episodes_elem[0].isdigit() else None
single_episode_length_elem = content.xpath(
"//div[@id='info']//span[text()='单集片长:']/following-sibling::text()[1]")
single_episode_length = single_episode_length_elem[0].strip(
)[:100] if single_episode_length_elem else None
# if the `episodes` field is set, this must be a series
is_series = True if episodes else False
brief_elem = content.xpath("//span[@class='all hidden']")
if not brief_elem:
brief_elem = content.xpath("//span[@property='v:summary']")
brief = '\n'.join([e.strip() for e in brief_elem[0].xpath(
'./text()')]) if brief_elem else None
img_url_elem = content.xpath("//img[@rel='v:image']/@src")
img_url = img_url_elem[0].strip() if img_url_elem else None
raw_img, ext = self.download_image(img_url, url)
data = {
'title': title,
'orig_title': orig_title,
'other_title': other_title,
'imdb_code': imdb_code,
'director': director,
'playwright': playwright,
'actor': actor,
'genre': genre,
'showtime': showtime,
'site': site,
'area': area,
'language': language,
'year': year,
'duration': duration,
'season': season,
'episodes': episodes,
'single_episode_length': single_episode_length,
'brief': brief,
'is_series': is_series,
'source_site': self.site_name,
'source_url': self.get_effective_url(url),
}
self.raw_data, self.raw_img, self.img_ext = data, raw_img, ext
return data, raw_img
class DoubanAlbumScraper(DoubanScrapperMixin, AbstractScraper):
site_name = SourceSiteEnum.DOUBAN.value
host = 'music.douban.com'
data_class = Album
form_class = AlbumForm
regex = re.compile(r"https://music\.douban\.com/subject/\d+/{0,1}")
def scrape(self, url):
headers = DEFAULT_REQUEST_HEADERS.copy()
headers['Host'] = self.host
content = self.download_page(url, headers)
# parsing starts here
try:
title = content.xpath("//h1/span/text()")[0].strip()
except IndexError:
raise ValueError("given url contains no album info")
if not title:
raise ValueError("given url contains no album info")
artists_elem = content.xpath("//div[@id='info']/span/span[@class='pl']/a/text()")
artist = None if not artists_elem else list(map(lambda a: a[:200], artists_elem))
genre_elem = content.xpath(
"//div[@id='info']//span[text()='流派:']/following::text()[1]")
genre = genre_elem[0].strip() if genre_elem else None
date_elem = content.xpath(
"//div[@id='info']//span[text()='发行时间:']/following::text()[1]")
release_date = parse_date(date_elem[0].strip()) if date_elem else None
company_elem = content.xpath(
"//div[@id='info']//span[text()='出版者:']/following::text()[1]")
company = company_elem[0].strip() if company_elem else None
track_list_elem = content.xpath(
"//div[@class='track-list']/div[@class='indent']/div/text()"
)
if track_list_elem:
track_list = '\n'.join([track.strip() for track in track_list_elem])
else:
track_list = None
brief_elem = content.xpath("//span[@class='all hidden']")
if not brief_elem:
brief_elem = content.xpath("//span[@property='v:summary']")
brief = '\n'.join([e.strip() for e in brief_elem[0].xpath(
'./text()')]) if brief_elem else None
other_info = {}
other_elem = content.xpath(
"//div[@id='info']//span[text()='又名:']/following-sibling::text()[1]")
if other_elem:
other_info['又名'] = other_elem[0].strip()
other_elem = content.xpath(
"//div[@id='info']//span[text()='专辑类型:']/following-sibling::text()[1]")
if other_elem:
other_info['专辑类型'] = other_elem[0].strip()
other_elem = content.xpath(
"//div[@id='info']//span[text()='介质:']/following-sibling::text()[1]")
if other_elem:
other_info['介质'] = other_elem[0].strip()
other_elem = content.xpath(
"//div[@id='info']//span[text()='ISRC:']/following-sibling::text()[1]")
if other_elem:
other_info['ISRC'] = other_elem[0].strip()
other_elem = content.xpath(
"//div[@id='info']//span[text()='条形码:']/following-sibling::text()[1]")
if other_elem:
other_info['条形码'] = other_elem[0].strip()
other_elem = content.xpath(
"//div[@id='info']//span[text()='碟片数:']/following-sibling::text()[1]")
if other_elem:
other_info['碟片数'] = other_elem[0].strip()
img_url_elem = content.xpath("//div[@id='mainpic']//img/@src")
img_url = img_url_elem[0].strip() if img_url_elem else None
raw_img, ext = self.download_image(img_url, url)
data = {
'title': title,
'artist': artist,
'genre': genre,
'release_date': release_date,
'duration': None,
'company': company,
'track_list': track_list,
'brief': brief,
'other_info': other_info,
'source_site': self.site_name,
'source_url': self.get_effective_url(url),
}
self.raw_data, self.raw_img, self.img_ext = data, raw_img, ext
return data, raw_img
class DoubanGameScraper(DoubanScrapperMixin, AbstractScraper):
site_name = SourceSiteEnum.DOUBAN.value
host = 'www.douban.com/game/'
data_class = Game
form_class = GameForm
regex = re.compile(r"https://www\.douban\.com/game/\d+/{0,1}")
def scrape(self, url):
headers = DEFAULT_REQUEST_HEADERS.copy()
headers['Host'] = 'www.douban.com'
content = self.download_page(url, headers)
try:
raw_title = content.xpath(
"//div[@id='content']/h1/text()")[0].strip()
except IndexError:
raise ValueError("given url contains no game info")
title = raw_title
other_title_elem = content.xpath(
"//dl[@class='game-attr']//dt[text()='别名:']/following-sibling::dd[1]/text()")
other_title = other_title_elem[0].strip().split(' / ') if other_title_elem else None
developer_elem = content.xpath(
"//dl[@class='game-attr']//dt[text()='开发商:']/following-sibling::dd[1]/text()")
developer = developer_elem[0].strip().split(' / ') if developer_elem else None
publisher_elem = content.xpath(
"//dl[@class='game-attr']//dt[text()='发行商:']/following-sibling::dd[1]/text()")
publisher = publisher_elem[0].strip().split(' / ') if publisher_elem else None
platform_elem = content.xpath(
"//dl[@class='game-attr']//dt[text()='平台:']/following-sibling::dd[1]/a/text()")
platform = platform_elem if platform_elem else None
genre_elem = content.xpath(
"//dl[@class='game-attr']//dt[text()='类型:']/following-sibling::dd[1]/a/text()")
genre = None
if genre_elem:
genre = [g for g in genre_elem if g != '游戏']
date_elem = content.xpath(
"//dl[@class='game-attr']//dt[text()='发行日期:']/following-sibling::dd[1]/text()")
release_date = parse_date(date_elem[0].strip()) if date_elem else None
brief_elem = content.xpath("//div[@class='mod item-desc']/p/text()")
brief = '\n'.join(brief_elem) if brief_elem else None
img_url_elem = content.xpath(
"//div[@class='item-subject-info']/div[@class='pic']//img/@src")
img_url = img_url_elem[0].strip() if img_url_elem else None
raw_img, ext = self.download_image(img_url, url)
data = {
'title': title,
'other_title': other_title,
'developer': developer,
'publisher': publisher,
'release_date': release_date,
'genre': genre,
'platform': platform,
'brief': brief,
'other_info': None,
'source_site': self.site_name,
'source_url': self.get_effective_url(url),
}
self.raw_data, self.raw_img, self.img_ext = data, raw_img, ext
return data, raw_img

common/scrapers/goodreads.py Normal file

@ -0,0 +1,157 @@
import requests
import re
import filetype
from lxml import html
from common.models import SourceSiteEnum
from movies.models import Movie, MovieGenreEnum
from movies.forms import MovieForm
from books.models import Book
from books.forms import BookForm
from music.models import Album, Song
from music.forms import AlbumForm, SongForm
from games.models import Game
from games.forms import GameForm
from django.conf import settings
from PIL import Image
from io import BytesIO
from common.scraper import *
class GoodreadsScraper(AbstractScraper):
site_name = SourceSiteEnum.GOODREADS.value
host = "www.goodreads.com"
data_class = Book
form_class = BookForm
regex = re.compile(r"https://www\.goodreads\.com/book/show/\d+")
@classmethod
def get_effective_url(cls, raw_url):
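# normalize any Goodreads book URL (including /book/<id> style links) to the canonical /book/show/<id> form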
u = re.match(r".+/book/show/(\d+)", raw_url)
if not u:
u = re.match(r".+book/(\d+)", raw_url)
return "https://www.goodreads.com/book/show/" + u[1] if u else None
def scrape(self, url, response=None):
"""
This is the scraping portal
"""
if response is not None:
content = html.fromstring(response.content.decode('utf-8'))
else:
headers = None # DEFAULT_REQUEST_HEADERS.copy()
content = self.download_page(url, headers)
try:
title = content.xpath("//h1[@id='bookTitle']/text()")[0].strip()
except IndexError:
raise ValueError("given url contains no book info")
subtitle = None
orig_title_elem = content.xpath("//div[@id='bookDataBox']//div[text()='Original Title']/following-sibling::div/text()")
orig_title = orig_title_elem[0].strip() if orig_title_elem else None
language_elem = content.xpath('//div[@itemprop="inLanguage"]/text()')
language = language_elem[0].strip() if language_elem else None
pub_house_elem = content.xpath("//div[contains(text(), 'Published') and @class='row']/text()")
try:
months = ['January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October', 'November', 'December']
r = re.compile('.*Published.*(' + '|'.join(months) + ').*(\\d\\d\\d\\d).+by\\s*(.+)\\s*', re.DOTALL)
pub = r.match(pub_house_elem[0])
pub_year = pub[2]
pub_month = months.index(pub[1]) + 1
pub_house = pub[3].strip()
except Exception:
pub_year = None
pub_month = None
pub_house = None
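# the 'first published ...' note, when present, gives the original first-publication date (saved into other_info below)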
pub_house_elem = content.xpath("//nobr[contains(text(), 'first published')]/text()")
try:
pub = re.match(r'.*first published\s+(.+\d\d\d\d).*', pub_house_elem[0], re.DOTALL)
first_pub = pub[1]
except Exception:
first_pub = None
binding_elem = content.xpath('//span[@itemprop="bookFormat"]/text()')
binding = binding_elem[0].strip() if binding_elem else None
pages_elem = content.xpath('//span[@itemprop="numberOfPages"]/text()')
pages = pages_elem[0].strip() if pages_elem else None
if pages is not None:
pages = int(RE_NUMBERS.findall(pages)[
0]) if RE_NUMBERS.findall(pages) else None
isbn_elem = content.xpath('//span[@itemprop="isbn"]/text()')
if not isbn_elem:
isbn_elem = content.xpath('//div[@itemprop="isbn"]/text()') # this is likely ASIN
isbn = isbn_elem[0].strip() if isbn_elem else None
brief_elem = content.xpath('//div[@id="description"]/span[@style="display:none"]/text()')
if brief_elem:
brief = '\n'.join(p.strip() for p in brief_elem)
else:
brief_elem = content.xpath('//div[@id="description"]/span/text()')
brief = '\n'.join(p.strip() for p in brief_elem) if brief_elem else None
genre = content.xpath('//div[@class="bigBoxBody"]/div/div/div/a/text()')
genre = genre[0] if genre else None
book_title = re.sub('\n', '', content.xpath('//h1[@id="bookTitle"]/text()')[0]).strip()
author = content.xpath('//a[@class="authorName"]/span/text()')[0]
contents = None
img_url_elem = content.xpath("//img[@id='coverImage']/@src")
img_url = img_url_elem[0].strip() if img_url_elem else None
raw_img, ext = self.download_image(img_url, url)
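# author links without a role annotation are treated as authors; role-annotated contributors (e.g. Translator) are collected separately below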
authors_elem = content.xpath("//a[@class='authorName'][not(../span[@class='authorName greyText smallText role'])]/span/text()")
if authors_elem:
authors = []
for author in authors_elem:
authors.append(RE_WHITESPACES.sub(' ', author.strip()))
else:
authors = None
translators = None
authors_elem = content.xpath("//a[@class='authorName'][../span/text()='(Translator)']/span/text()")
if authors_elem:
translators = []
for translator in authors_elem:
translators.append(RE_WHITESPACES.sub(' ', translator.strip()))
else:
translators = None
other = {}
if first_pub:
other['首版时间'] = first_pub
if genre:
other['分类'] = genre
series_elem = content.xpath("//h2[@id='bookSeries']/a/text()")
if series_elem:
other['丛书'] = re.sub(r'\(\s*(.+[^\s])\s*#.*\)', '\\1', series_elem[0].strip())
data = {
'title': title,
'subtitle': subtitle,
'orig_title': orig_title,
'author': authors,
'translator': translators,
'language': language,
'pub_house': pub_house,
'pub_year': pub_year,
'pub_month': pub_month,
'binding': binding,
'pages': pages,
'isbn': isbn,
'brief': brief,
'contents': contents,
'other_info': other,
'cover_url': img_url,
'source_site': self.site_name,
'source_url': self.get_effective_url(url),
}
data['source_url'] = self.get_effective_url(url)
self.raw_data, self.raw_img, self.img_ext = data, raw_img, ext
return data, raw_img

common/scrapers/google.py Normal file

@ -0,0 +1,102 @@
import requests
import re
import filetype
from lxml import html
from common.models import SourceSiteEnum
from movies.models import Movie, MovieGenreEnum
from movies.forms import MovieForm
from books.models import Book
from books.forms import BookForm
from music.models import Album, Song
from music.forms import AlbumForm, SongForm
from games.models import Game
from games.forms import GameForm
from django.conf import settings
from PIL import Image
from io import BytesIO
from common.scraper import *
# https://developers.google.com/youtube/v3/docs/?apix=true
# https://developers.google.com/books/docs/v1/using
class GoogleBooksScraper(AbstractScraper):
site_name = SourceSiteEnum.GOOGLEBOOKS.value
host = ["books.google.com", "www.google.com/books"]
data_class = Book
form_class = BookForm
regex = re.compile(r"https://books\.google\.com/books\?id=([^&#]+)")
@classmethod
def get_effective_url(cls, raw_url):
# https://books.google.com/books?id=wUHxzgEACAAJ
# https://books.google.com/books/about/%E7%8F%BE%E5%A0%B4%E6%AD%B7%E5%8F%B2.html?id=nvNoAAAAIAAJ
# https://www.google.com/books/edition/_/nvNoAAAAIAAJ?hl=en&gbpv=1
u = re.match(r"https://books\.google\.com/books.*id=([^&#]+)", raw_url)
if not u:
u = re.match(r"https://www\.google\.com/books/edition/[^/]+/([^&#?]+)", raw_url)
return 'https://books.google.com/books?id=' + u[1] if u else None
def scrape(self, url, response=None):
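# metadata comes from the Google Books API volume endpoint rather than the HTML page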
url = self.get_effective_url(url)
m = self.regex.match(url)
if m:
api_url = f'https://www.googleapis.com/books/v1/volumes/{m[1]}'
else:
raise ValueError("not valid url")
b = requests.get(api_url).json()
other = {}
title = b['volumeInfo']['title']
subtitle = b['volumeInfo']['subtitle'] if 'subtitle' in b['volumeInfo'] else None
pub_year = None
pub_month = None
if 'publishedDate' in b['volumeInfo']:
pub_date = b['volumeInfo']['publishedDate'].split('-')
pub_year = pub_date[0]
pub_month = pub_date[1] if len(pub_date) > 1 else None
pub_house = b['volumeInfo']['publisher'] if 'publisher' in b['volumeInfo'] else None
language = b['volumeInfo']['language'] if 'language' in b['volumeInfo'] else None
pages = b['volumeInfo']['pageCount'] if 'pageCount' in b['volumeInfo'] else None
if 'mainCategory' in b['volumeInfo']:
other['分类'] = b['volumeInfo']['mainCategory']
authors = b['volumeInfo']['authors'] if 'authors' in b['volumeInfo'] else None
if 'description' in b['volumeInfo']:
brief = b['volumeInfo']['description']
elif 'textSnippet' in b['volumeInfo']:
brief = b["volumeInfo"]["textSnippet"]["searchInfo"]
else:
brief = ''
brief = re.sub(r'<.*?>', '', brief.replace('<br', '\n<br'))
img_url = b['volumeInfo']['imageLinks']['thumbnail'] if 'imageLinks' in b['volumeInfo'] else None
isbn10 = None
isbn13 = None
for iid in b['volumeInfo']['industryIdentifiers'] if 'industryIdentifiers' in b['volumeInfo'] else []:
if iid['type'] == 'ISBN_10':
isbn10 = iid['identifier']
if iid['type'] == 'ISBN_13':
isbn13 = iid['identifier']
isbn = isbn13 if isbn13 is not None else isbn10
data = {
'title': title,
'subtitle': subtitle,
'orig_title': None,
'author': authors,
'translator': None,
'language': language,
'pub_house': pub_house,
'pub_year': pub_year,
'pub_month': pub_month,
'binding': None,
'pages': pages,
'isbn': isbn,
'brief': brief,
'contents': None,
'other_info': other,
'cover_url': img_url,
'source_site': self.site_name,
'source_url': self.get_effective_url(url),
}
raw_img, ext = self.download_image(img_url, url)
self.raw_data, self.raw_img, self.img_ext = data, raw_img, ext
return data, raw_img

common/scrapers/igdb.py Normal file

@ -0,0 +1,88 @@
import requests
import re
from common.models import SourceSiteEnum
from games.models import Game
from games.forms import GameForm
from django.conf import settings
from common.scraper import *
from igdb.wrapper import IGDBWrapper
import json
import datetime
wrapper = IGDBWrapper(settings.IGDB_CLIENT_ID, settings.IGDB_ACCESS_TOKEN)
class IgdbGameScraper(AbstractScraper):
site_name = SourceSiteEnum.IGDB.value
host = 'https://www.igdb.com/'
data_class = Game
form_class = GameForm
regex = re.compile(r"https://www\.igdb\.com/games/([a-zA-Z0-9\-_]+)")
def scrape_steam(self, steam_url):
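# look up the IGDB 'websites' records matching this Steam store URL and scrape the linked game; if several games share the URL, the one with the smallest id wins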
r = json.loads(wrapper.api_request('websites', f'fields *, game.*; where url = "{steam_url}";'))
if not r:
raise ValueError("Cannot find steam url in IGDB")
r = sorted(r, key=lambda w: w['game']['id'])
return self.scrape(r[0]['game']['url'])
def scrape(self, url):
m = self.regex.match(url)
if m:
effective_url = m[0]
else:
raise ValueError("not valid url")
effective_url = m[0]
slug = m[1]
fields = '*, cover.url, genres.name, platforms.name, involved_companies.*, involved_companies.company.name'
r = json.loads(wrapper.api_request('games', f'fields {fields}; where url = "{effective_url}";'))[0]
brief = r['summary'] if 'summary' in r else ''
brief += "\n\n" + r['storyline'] if 'storyline' in r else ''
developer = None
publisher = None
release_date = None
genre = None
platform = None
if 'involved_companies' in r:
developer = next(iter([c['company']['name'] for c in r['involved_companies'] if c['developer'] == True]), None)
publisher = next(iter([c['company']['name'] for c in r['involved_companies'] if c['publisher'] == True]), None)
if 'platforms' in r:
ps = sorted(r['platforms'], key=lambda p: p['id'])
platform = [(p['name'] if p['id'] != 6 else 'Windows') for p in ps]
if 'first_release_date' in r:
release_date = datetime.datetime.fromtimestamp(r['first_release_date'], datetime.timezone.utc)
if 'genres' in r:
genre = [g['name'] for g in r['genres']]
other_info = {'igdb_id': r['id']}
websites = json.loads(wrapper.api_request('websites', f'fields *; where game.url = "{effective_url}";'))
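# IGDB website categories: 1 = official site, 13 = Steam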
for website in websites:
if website['category'] == 1:
other_info['official_site'] = website['url']
elif website['category'] == 13:
other_info['steam_url'] = website['url']
data = {
'title': r['name'],
'other_title': None,
'developer': developer,
'publisher': publisher,
'release_date': release_date,
'genre': genre,
'platform': platform,
'brief': brief,
'other_info': other_info,
'source_site': self.site_name,
'source_url': self.get_effective_url(url),
}
raw_img, ext = self.download_image('https:' + r['cover']['url'].replace('t_thumb', 't_cover_big'), url)
self.raw_data, self.raw_img, self.img_ext = data, raw_img, ext
return data, raw_img
@classmethod
def get_effective_url(cls, raw_url):
m = cls.regex.match(raw_url)
if m:
return m[0]
else:
return None

common/scrapers/imdb.py Normal file

@ -0,0 +1,116 @@
import requests
import re
from common.models import SourceSiteEnum
from movies.forms import MovieForm
from movies.models import Movie
from django.conf import settings
from common.scraper import *
class ImdbMovieScraper(AbstractScraper):
site_name = SourceSiteEnum.IMDB.value
host = 'https://www.imdb.com/title/'
data_class = Movie
form_class = MovieForm
regex = re.compile(r"(?<=https://www\.imdb\.com/title/)[a-zA-Z0-9]+")
def scrape(self, url):
effective_url = self.get_effective_url(url)
if effective_url is None:
raise ValueError("not valid url")
code = self.regex.findall(effective_url)[0]
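# delegate to the TMDB scraper, which looks up the IMDb code via TMDB's /find endpoint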
s = TmdbMovieScraper()
s.scrape_imdb(code)
self.raw_data = s.raw_data
self.raw_img = s.raw_img
self.img_ext = s.img_ext
self.raw_data['source_site'] = self.site_name
self.raw_data['source_url'] = effective_url
return self.raw_data, self.raw_img
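# NOTE: the code below is unreachable after the return above; it is the earlier imdb-api.com based implementation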
api_url = self.get_api_url(effective_url)
r = requests.get(api_url)
res_data = r.json()
if not res_data['type'] in ['Movie', 'TVSeries']:
raise ValueError("not movie/series item")
if res_data['type'] == 'Movie':
is_series = False
elif res_data['type'] == 'TVSeries':
is_series = True
title = res_data['title']
orig_title = res_data['originalTitle']
imdb_code = self.regex.findall(effective_url)[0]
director = []
for direct_dict in res_data['directorList']:
director.append(direct_dict['name'])
playwright = []
for writer_dict in res_data['writerList']:
playwright.append(writer_dict['name'])
actor = []
for actor_dict in res_data['actorList']:
actor.append(actor_dict['name'])
genre = res_data['genres'].split(', ')
area = res_data['countries'].split(', ')
language = res_data['languages'].split(', ')
year = int(res_data['year'])
duration = res_data['runtimeStr']
brief = res_data['plotLocal'] if res_data['plotLocal'] else res_data['plot']
if res_data['releaseDate']:
showtime = [{res_data['releaseDate']: "发布日期"}]
else:
showtime = None
other_info = {}
if res_data['contentRating']:
other_info['分级'] = res_data['contentRating']
if res_data['imDbRating']:
other_info['IMDb评分'] = res_data['imDbRating']
if res_data['metacriticRating']:
other_info['Metacritic评分'] = res_data['metacriticRating']
if res_data['awards']:
other_info['奖项'] = res_data['awards']
raw_img, ext = self.download_image(res_data['image'], url)
data = {
'title': title,
'orig_title': orig_title,
'other_title': None,
'imdb_code': imdb_code,
'director': director,
'playwright': playwright,
'actor': actor,
'genre': genre,
'showtime': showtime,
'site': None,
'area': area,
'language': language,
'year': year,
'duration': duration,
'season': None,
'episodes': None,
'single_episode_length': None,
'brief': brief,
'is_series': is_series,
'other_info': other_info,
'source_site': self.site_name,
'source_url': effective_url,
}
self.raw_data, self.raw_img, self.img_ext = data, raw_img, ext
return data, raw_img
@classmethod
def get_effective_url(cls, raw_url):
code = cls.regex.findall(raw_url)
if code:
return f"https://www.imdb.com/title/{code[0]}/"
else:
return None
@classmethod
def get_api_url(cls, url):
return f"https://imdb-api.com/zh/API/Title/{settings.IMDB_API_KEY}/{cls.regex.findall(url)[0]}/FullActor,"

common/scrapers/spotify.py Normal file

@ -0,0 +1,287 @@
import requests
import re
import time
from common.models import SourceSiteEnum
from music.models import Album, Song
from music.forms import AlbumForm, SongForm
from django.conf import settings
from common.scraper import *
from threading import Thread
from django.core.exceptions import ObjectDoesNotExist
from django.utils import timezone
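# module-level cache for the Spotify client-credentials token and its expiry time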
spotify_token = None
spotify_token_expire_time = time.time()
class SpotifyTrackScraper(AbstractScraper):
site_name = SourceSiteEnum.SPOTIFY.value
host = 'https://open.spotify.com/track/'
data_class = Song
form_class = SongForm
regex = re.compile(r"(?<=https://open\.spotify\.com/track/)[a-zA-Z0-9]+")
def scrape(self, url):
"""
Request from API, not really scraping
"""
global spotify_token, spotify_token_expire_time
if spotify_token is None or is_spotify_token_expired():
invoke_spotify_token()
effective_url = self.get_effective_url(url)
if effective_url is None:
raise ValueError("not valid url")
api_url = self.get_api_url(effective_url)
headers = {
'Authorization': f"Bearer {spotify_token}"
}
r = requests.get(api_url, headers=headers)
res_data = r.json()
artist = []
for artist_dict in res_data['artists']:
artist.append(artist_dict['name'])
if not artist:
artist = None
title = res_data['name']
release_date = parse_date(res_data['album']['release_date'])
duration = res_data['duration_ms']
if res_data['external_ids'].get('isrc'):
isrc = res_data['external_ids']['isrc']
else:
isrc = None
raw_img, ext = self.download_image(res_data['album']['images'][0]['url'], url)
data = {
'title': title,
'artist': artist,
'genre': None,
'release_date': release_date,
'duration': duration,
'isrc': isrc,
'album': None,
'brief': None,
'other_info': None,
'source_site': self.site_name,
'source_url': effective_url,
}
self.raw_data, self.raw_img, self.img_ext = data, raw_img, ext
return data, raw_img
@classmethod
def get_effective_url(cls, raw_url):
code = cls.regex.findall(raw_url)
if code:
return f"https://open.spotify.com/track/{code[0]}"
else:
return None
@classmethod
def get_api_url(cls, url):
return "https://api.spotify.com/v1/tracks/" + cls.regex.findall(url)[0]
class SpotifyAlbumScraper(AbstractScraper):
site_name = SourceSiteEnum.SPOTIFY.value
# API URL
host = 'https://open.spotify.com/album/'
data_class = Album
form_class = AlbumForm
regex = re.compile(r"(?<=https://open\.spotify\.com/album/)[a-zA-Z0-9]+")
def scrape(self, url):
"""
Request from API, not really scraping
"""
global spotify_token, spotify_token_expire_time
if spotify_token is None or is_spotify_token_expired():
invoke_spotify_token()
effective_url = self.get_effective_url(url)
if effective_url is None:
raise ValueError("not valid url")
api_url = self.get_api_url(effective_url)
headers = {
'Authorization': f"Bearer {spotify_token}"
}
r = requests.get(api_url, headers=headers)
res_data = r.json()
artist = []
for artist_dict in res_data['artists']:
artist.append(artist_dict['name'])
title = res_data['name']
genre = ', '.join(res_data['genres'])
company = []
for com in res_data['copyrights']:
company.append(com['text'])
duration = 0
track_list = []
track_urls = []
for track in res_data['tracks']['items']:
track_urls.append(track['external_urls']['spotify'])
duration += track['duration_ms']
if res_data['tracks']['items'][-1]['disc_number'] > 1:
# more than one disc
track_list.append(str(
track['disc_number']) + '-' + str(track['track_number']) + '. ' + track['name'])
else:
track_list.append(str(track['track_number']) + '. ' + track['name'])
track_list = '\n'.join(track_list)
release_date = parse_date(res_data['release_date'])
other_info = {}
if res_data['external_ids'].get('upc'):
# bar code
other_info['UPC'] = res_data['external_ids']['upc']
raw_img, ext = self.download_image(res_data['images'][0]['url'], url)
data = {
'title': title,
'artist': artist,
'genre': genre,
'track_list': track_list,
'release_date': release_date,
'duration': duration,
'company': company,
'brief': None,
'other_info': other_info,
'source_site': self.site_name,
'source_url': effective_url,
}
# save track_urls so tracks can be added to the album later
self.track_urls = track_urls
self.raw_data, self.raw_img, self.img_ext = data, raw_img, ext
return data, raw_img
@classmethod
def get_effective_url(cls, raw_url):
code = cls.regex.findall(raw_url)
if code:
return f"https://open.spotify.com/album/{code[0]}"
else:
return None
# @classmethod
# def save(cls, request_user):
# form = super().save(request_user)
# task = Thread(
# target=cls.add_tracks,
# args=(form.instance, request_user),
# daemon=True
# )
# task.start()
# return form
@classmethod
def get_api_url(cls, url):
return "https://api.spotify.com/v1/albums/" + cls.regex.findall(url)[0]
@classmethod
def add_tracks(cls, album: Album, request_user):
to_be_updated_tracks = []
for track_url in cls.track_urls:
track = cls.get_track_or_none(track_url)
# Spotify seems to rate-limit access if too many requests are fired at the same time
if track is None:
task = Thread(
target=cls.scrape_and_save_track,
args=(track_url, album, request_user),
daemon=True
)
task.start()
task.join()
else:
to_be_updated_tracks.append(track)
cls.bulk_update_track_album(to_be_updated_tracks, album, request_user)
@classmethod
def get_track_or_none(cls, track_url: str):
try:
instance = Song.objects.get(source_url=track_url)
return instance
except ObjectDoesNotExist:
return None
@classmethod
def scrape_and_save_track(cls, url: str, album: Album, request_user):
data, img = SpotifyTrackScraper.scrape(url)
SpotifyTrackScraper.raw_data['album'] = album
SpotifyTrackScraper.save(request_user)
@classmethod
def bulk_update_track_album(cls, tracks, album, request_user):
for track in tracks:
track.last_editor = request_user
track.edited_time = timezone.now()
track.album = album
Song.objects.bulk_update(tracks, [
'last_editor',
'edited_time',
'album'
])
def get_spotify_token():
global spotify_token, spotify_token_expire_time
if spotify_token is None or is_spotify_token_expired():
invoke_spotify_token()
return spotify_token
def is_spotify_token_expired():
global spotify_token_expire_time
return True if spotify_token_expire_time <= time.time() else False
def invoke_spotify_token():
global spotify_token, spotify_token_expire_time
r = requests.post(
"https://accounts.spotify.com/api/token",
data={
"grant_type": "client_credentials"
},
headers={
"Authorization": f"Basic {settings.SPOTIFY_CREDENTIAL}"
}
)
data = r.json()
if r.status_code == 401:
# token expired, try one more time
# this may be caused by external operations,
# for example debugging with an HTTP client
r = requests.post(
"https://accounts.spotify.com/api/token",
data={
"grant_type": "client_credentials"
},
headers={
"Authorization": f"Basic {settings.SPOTIFY_CREDENTIAL}"
}
)
data = r.json()
elif r.status_code != 200:
raise Exception(f"Request to spotify API fails. Reason: {r.reason}")
# subtract 2 seconds to allow for execution time
spotify_token_expire_time = int(data['expires_in']) + time.time() - 2
spotify_token = data['access_token']

common/scrapers/steam.py Normal file

@ -0,0 +1,92 @@
import re
from common.models import SourceSiteEnum
from games.models import Game
from games.forms import GameForm
from common.scraper import *
from common.scrapers.igdb import IgdbGameScraper
class SteamGameScraper(AbstractScraper):
site_name = SourceSiteEnum.STEAM.value
host = 'store.steampowered.com'
data_class = Game
form_class = GameForm
regex = re.compile(r"https://store\.steampowered\.com/app/\d+")
def scrape(self, url):
m = self.regex.match(url)
if m:
effective_url = m[0]
else:
raise ValueError("not valid url")
try:
s = IgdbGameScraper()
s.scrape_steam(effective_url)
self.raw_data = s.raw_data
self.raw_img = s.raw_img
self.img_ext = s.img_ext
self.raw_data['source_site'] = self.site_name
self.raw_data['source_url'] = effective_url
# return self.raw_data, self.raw_img
except Exception:
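# IGDB lookup failed (no matching record or API error); fall back to scraping the Steam store page directly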
self.raw_img = None
self.raw_data = {}
headers = DEFAULT_REQUEST_HEADERS.copy()
headers['Host'] = self.host
headers['Cookie'] = "wants_mature_content=1; birthtime=754700401;"
content = self.download_page(url, headers)
title = content.xpath("//div[@class='apphub_AppName']/text()")[0]
developer = content.xpath("//div[@id='developers_list']/a/text()")
publisher = content.xpath("//div[@class='glance_ctn']//div[@class='dev_row'][2]//a/text()")
release_date = parse_date(
content.xpath(
"//div[@class='release_date']/div[@class='date']/text()")[0]
)
genre = content.xpath(
"//div[@class='details_block']/b[2]/following-sibling::a/text()")
platform = ['PC']
brief = content.xpath(
"//div[@class='game_description_snippet']/text()")[0].strip()
img_url = content.xpath(
"//img[@class='game_header_image_full']/@src"
)[0].replace("header.jpg", "library_600x900.jpg")
raw_img, img_ext = self.download_image(img_url, url)
# no 600x900 picture
if raw_img is None:
img_url = content.xpath("//img[@class='game_header_image_full']/@src")[0]
raw_img, img_ext = self.download_image(img_url, url)
if raw_img is not None:
self.raw_img = raw_img
self.img_ext = img_ext
data = {
'title': title if title else self.raw_data['title'],
'other_title': None,
'developer': developer if 'developer' not in self.raw_data else self.raw_data['developer'],
'publisher': publisher if 'publisher' not in self.raw_data else self.raw_data['publisher'],
'release_date': release_date if 'release_date' not in self.raw_data else self.raw_data['release_date'],
'genre': genre if 'genre' not in self.raw_data else self.raw_data['genre'],
'platform': platform if 'platform' not in self.raw_data else self.raw_data['platform'],
'brief': brief if brief else self.raw_data['brief'],
'other_info': None if 'other_info' not in self.raw_data else self.raw_data['other_info'],
'source_site': self.site_name,
'source_url': effective_url
}
self.raw_data = data
return self.raw_data, self.raw_img
@classmethod
def get_effective_url(cls, raw_url):
m = cls.regex.match(raw_url)
if m:
return m[0]
else:
return None

common/scrapers/tmdb.py Normal file

@ -0,0 +1,150 @@
import requests
import re
from common.models import SourceSiteEnum
from movies.models import Movie
from movies.forms import MovieForm
from django.conf import settings
from common.scraper import *
class TmdbMovieScraper(AbstractScraper):
site_name = SourceSiteEnum.TMDB.value
host = 'https://www.themoviedb.org/'
data_class = Movie
form_class = MovieForm
regex = re.compile(r"https://www\.themoviedb\.org/(movie|tv)/([a-zA-Z0-9]+)")
# http://api.themoviedb.org/3/genre/movie/list?api_key=&language=zh
# http://api.themoviedb.org/3/genre/tv/list?api_key=&language=zh
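# map TMDB genre names (including the zh-CN localized ones) onto the local genre vocabulary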
genre_map = {
'Sci-Fi & Fantasy': 'Sci-Fi',
'War & Politics': 'War',
'儿童': 'Kids',
'冒险': 'Adventure',
'剧情': 'Drama',
'动作': 'Action',
'动作冒险': 'Action',
'动画': 'Animation',
'历史': 'History',
'喜剧': 'Comedy',
'奇幻': 'Fantasy',
'家庭': 'Family',
'恐怖': 'Horror',
'悬疑': 'Mystery',
'惊悚': 'Thriller',
'战争': 'War',
'新闻': 'News',
'爱情': 'Romance',
'犯罪': 'Crime',
'电视电影': 'TV Movie',
'真人秀': 'Reality-TV',
'科幻': 'Sci-Fi',
'纪录': 'Documentary',
'肥皂剧': 'Soap',
'脱口秀': 'Talk-Show',
'西部': 'Western',
'音乐': 'Music',
}
def scrape_imdb(self, imdb_code):
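# resolve an IMDb id to a TMDB movie or tv entry via the /find endpoint, then scrape that entry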
api_url = f"https://api.themoviedb.org/3/find/{imdb_code}?api_key={settings.TMDB_API3_KEY}&language=zh-CN&external_source=imdb_id"
r = requests.get(api_url)
res_data = r.json()
if 'movie_results' in res_data and len(res_data['movie_results']) > 0:
url = f"https://www.themoviedb.org/movie/{res_data['movie_results'][0]['id']}"
elif 'tv_results' in res_data and len(res_data['tv_results']) > 0:
url = f"https://www.themoviedb.org/tv/{res_data['tv_results'][0]['id']}"
else:
raise ValueError("Cannot find IMDb ID in TMDB")
return self.scrape(url)
def scrape(self, url):
m = self.regex.match(url)
if m:
effective_url = m[0]
else:
raise ValueError("not valid url")
effective_url = m[0]
is_series = m[1] == 'tv'
id = m[2]
if is_series:
api_url = f"https://api.themoviedb.org/3/tv/{id}?api_key={settings.TMDB_API3_KEY}&language=zh-CN&append_to_response=external_ids,credits"
else:
api_url = f"https://api.themoviedb.org/3/movie/{id}?api_key={settings.TMDB_API3_KEY}&language=zh-CN&append_to_response=external_ids,credits"
r = requests.get(api_url)
res_data = r.json()
if is_series:
title = res_data['name']
orig_title = res_data['original_name']
year = int(res_data['first_air_date'].split('-')[0]) if res_data['first_air_date'] else None
imdb_code = res_data['external_ids']['imdb_id']
showtime = [{res_data['first_air_date']: "首播日期"}] if res_data['first_air_date'] else None
duration = None
else:
title = res_data['title']
orig_title = res_data['original_title']
year = int(res_data['release_date'].split('-')[0]) if res_data['release_date'] else None
showtime = [{res_data['release_date']: "发布日期"}] if res_data['release_date'] else None
imdb_code = res_data['imdb_id']
duration = res_data['runtime'] if res_data['runtime'] else None # in minutes
genre = list(map(lambda x: self.genre_map[x['name']] if x['name'] in self.genre_map else 'Other', res_data['genres']))
language = list(map(lambda x: x['name'], res_data['spoken_languages']))
brief = res_data['overview']
if is_series:
director = list(map(lambda x: x['name'], res_data['created_by']))
else:
director = list(map(lambda x: x['name'], filter(lambda c: c['job'] == 'Director', res_data['credits']['crew'])))
playwright = list(map(lambda x: x['name'], filter(lambda c: c['job'] == 'Screenplay', res_data['credits']['crew'])))
actor = list(map(lambda x: x['name'], res_data['credits']['cast']))
area = []
other_info = {}
other_info['TMDB评分'] = res_data['vote_average']
# other_info['分级'] = res_data['contentRating']
# other_info['Metacritic评分'] = res_data['metacriticRating']
# other_info['奖项'] = res_data['awards']
other_info['TMDB_ID'] = id
if is_series:
other_info['Seasons'] = res_data['number_of_seasons']
other_info['Episodes'] = res_data['number_of_episodes']
img_url = ('https://image.tmdb.org/t/p/original/' + res_data['poster_path']) if res_data['poster_path'] is not None else None
# TODO: use GET /configuration to get base url
raw_img, ext = self.download_image(img_url, url)
data = {
'title': title,
'orig_title': orig_title,
'other_title': None,
'imdb_code': imdb_code,
'director': director,
'playwright': playwright,
'actor': actor,
'genre': genre,
'showtime': showtime,
'site': None,
'area': area,
'language': language,
'year': year,
'duration': duration,
'season': None,
'episodes': None,
'single_episode_length': None,
'brief': brief,
'is_series': is_series,
'other_info': other_info,
'source_site': self.site_name,
'source_url': effective_url,
}
self.raw_data, self.raw_img, self.img_ext = data, raw_img, ext
return data, raw_img
@classmethod
def get_effective_url(cls, raw_url):
m = cls.regex.match(raw_url)
if m:
return m[0]
else:
return None


@ -0,0 +1,183 @@
import logging
import meilisearch
from django.conf import settings
from django.db.models.signals import post_save, post_delete
import types
INDEX_NAME = 'items'
SEARCHABLE_ATTRIBUTES = ['title', 'orig_title', 'other_title', 'subtitle', 'artist', 'author', 'translator', 'developer', 'director', 'actor', 'playwright', 'pub_house', 'company', 'publisher', 'isbn', 'imdb_code']
INDEXABLE_DIRECT_TYPES = ['BigAutoField', 'BooleanField', 'CharField', 'PositiveIntegerField', 'PositiveSmallIntegerField', 'TextField', 'ArrayField']
INDEXABLE_TIME_TYPES = ['DateTimeField']
INDEXABLE_DICT_TYPES = ['JSONField']
INDEXABLE_FLOAT_TYPES = ['DecimalField']
# NONINDEXABLE_TYPES = ['ForeignKey', 'FileField',]
SEARCH_PAGE_SIZE = 20
logger = logging.getLogger(__name__)
def item_post_save_handler(sender, instance, created, **kwargs):
if not created and settings.SEARCH_INDEX_NEW_ONLY:
return
Indexer.replace_item(instance)
def item_post_delete_handler(sender, instance, **kwargs):
Indexer.delete_item(instance)
def tag_post_save_handler(sender, instance, **kwargs):
pass
def tag_post_delete_handler(sender, instance, **kwargs):
pass
class Indexer:
class_map = {}
_instance = None
@classmethod
def instance(self):
if self._instance is None:
self._instance = meilisearch.Client(settings.MEILISEARCH_SERVER, settings.MEILISEARCH_KEY).index(INDEX_NAME)
return self._instance
@classmethod
def init(self):
meilisearch.Client(settings.MEILISEARCH_SERVER, settings.MEILISEARCH_KEY).create_index(INDEX_NAME, {'primaryKey': '_id'})
self.update_settings()
@classmethod
def update_settings(self):
self.instance().update_searchable_attributes(SEARCHABLE_ATTRIBUTES)
self.instance().update_filterable_attributes(['_class', 'tags', 'source_site'])
self.instance().update_settings({'displayedAttributes': ['_id', '_class', 'id', 'title', 'tags']})
@classmethod
def get_stats(self):
return self.instance().get_stats()
@classmethod
def busy(self):
return self.instance().get_stats()['isIndexing']
@classmethod
def update_model_indexable(self, model):
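# register the model's indexable fields by field type and connect post_save/post_delete handlers to keep the index in sync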
if settings.SEARCH_BACKEND is None:
return
self.class_map[model.__name__] = model
model.indexable_fields = ['tags']
model.indexable_fields_time = []
model.indexable_fields_dict = []
model.indexable_fields_float = []
for field in model._meta.get_fields():
type = field.get_internal_type()
if type in INDEXABLE_DIRECT_TYPES:
model.indexable_fields.append(field.name)
elif type in INDEXABLE_TIME_TYPES:
model.indexable_fields_time.append(field.name)
elif type in INDEXABLE_DICT_TYPES:
model.indexable_fields_dict.append(field.name)
elif type in INDEXABLE_FLOAT_TYPES:
model.indexable_fields_float.append(field.name)
post_save.connect(item_post_save_handler, sender=model)
post_delete.connect(item_post_delete_handler, sender=model)
@classmethod
def obj_to_dict(self, obj):
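# flatten a model instance into a search document keyed '<Class>-<id>'; JSONField contents are merged into the top level and empty values are dropped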
pk = f'{obj.__class__.__name__}-{obj.id}'
item = {
'_id': pk,
'_class': obj.__class__.__name__,
# 'id': obj.id
}
for field in obj.__class__.indexable_fields:
item[field] = getattr(obj, field)
for field in obj.__class__.indexable_fields_time:
item[field] = getattr(obj, field).timestamp()
for field in obj.__class__.indexable_fields_float:
item[field] = float(getattr(obj, field)) if getattr(obj, field) else None
for field in obj.__class__.indexable_fields_dict:
d = getattr(obj, field)
if d.__class__ is dict:
item.update(d)
item = {k: v for k, v in item.items() if v}
return item
@classmethod
def replace_item(self, obj):
try:
self.instance().add_documents([self.obj_to_dict(obj)])
except Exception as e:
logger.error(f"replace item error: \n{e}")
@classmethod
def replace_batch(self, objects):
try:
self.instance().update_documents(documents=objects)
except Exception as e:
logger.error(f"replace batch error: \n{e}")
@classmethod
def delete_item(self, obj):
pk = f'{obj.__class__.__name__}-{obj.id}'
try:
self.instance().delete_document(pk)
except Exception as e:
logger.error(f"delete item error: \n{e}")
@classmethod
def patch_item(self, obj, fields):
pk = f'{obj.__class__.__name__}-{obj.id}'
data = {}
for f in fields:
data[f] = getattr(obj, f)
try:
self.instance().update_documents(documents=[data], primary_key=[pk])
except Exception as e:
logger.error(f"patch item error: \n{e}")
@classmethod
def search(self, q, page=1, category=None, tag=None, sort=None):
if category or tag:
f = []
if category == 'music':
f.append("(_class = 'Album' OR _class = 'Song')")
elif category:
f.append(f"_class = '{category}'")
if tag:
t = tag.replace("'", "\\'")  # escape single quotes for the filter expression
f.append(f"tags = '{t}'")
filter = ' AND '.join(f)
else:
filter = None
options = {
'offset': (page - 1) * SEARCH_PAGE_SIZE,
'limit': SEARCH_PAGE_SIZE,
'filter': filter,
'facetsDistribution': ['_class'],
'sort': None
}
try:
r = self.instance().search(q, options)
except Exception as e:
logger.error(f"MeiliSearch error: \n{e}")
r = {'nbHits': 0, 'hits': []}
# print(r)
results = types.SimpleNamespace()
results.items = list([x for x in map(lambda i: self.item_to_obj(i), r['hits']) if x is not None])
results.num_pages = (r['nbHits'] + SEARCH_PAGE_SIZE - 1) // SEARCH_PAGE_SIZE
# print(results)
return results
@classmethod
def item_to_obj(self, item):
try:
return self.class_map[item['_class']].objects.get(id=item['id'])
except Exception as e:
logger.error(f"unable to load search result item from db:\n{item}")
return None

common/search/typesense.py Normal file

@ -0,0 +1,215 @@
import logging
import typesense
from django.conf import settings
from django.db.models.signals import post_save, post_delete
INDEX_NAME = 'items'
SEARCHABLE_ATTRIBUTES = ['title', 'orig_title', 'other_title', 'subtitle', 'artist', 'author', 'translator',
'developer', 'director', 'actor', 'playwright', 'pub_house', 'company', 'publisher', 'isbn', 'imdb_code']
FILTERABLE_ATTRIBUTES = ['_class', 'tags', 'source_site']
INDEXABLE_DIRECT_TYPES = ['BigAutoField', 'BooleanField', 'CharField',
'PositiveIntegerField', 'PositiveSmallIntegerField', 'TextField', 'ArrayField']
INDEXABLE_TIME_TYPES = ['DateTimeField']
INDEXABLE_DICT_TYPES = ['JSONField']
INDEXABLE_FLOAT_TYPES = ['DecimalField']
SORTING_ATTRIBUTE = None
# NONINDEXABLE_TYPES = ['ForeignKey', 'FileField',]
SEARCH_PAGE_SIZE = 20
logger = logging.getLogger(__name__)
def item_post_save_handler(sender, instance, created, **kwargs):
if not created and settings.SEARCH_INDEX_NEW_ONLY:
return
Indexer.replace_item(instance)
def item_post_delete_handler(sender, instance, **kwargs):
Indexer.delete_item(instance)
def tag_post_save_handler(sender, instance, **kwargs):
pass
def tag_post_delete_handler(sender, instance, **kwargs):
pass
class Indexer:
class_map = {}
_instance = None
@classmethod
def instance(self):
if self._instance is None:
self._instance = typesense.Client(settings.TYPESENSE_CONNECTION)
return self._instance
@classmethod
def init(self):
# self.instance().collections[INDEX_NAME].delete()
# fields = [
# {"name": "_class", "type": "string", "facet": True},
# {"name": "source_site", "type": "string", "facet": True},
# {"name": ".*", "type": "auto", "locale": "zh"},
# ]
# use the explicit (dumb) schema below until typesense fixes a bug with the auto schema
fields = [
{'name': 'id', 'type': 'string'},
{'name': '_id', 'type': 'int64'},
{'name': '_class', 'type': 'string', "facet": True},
{'name': 'source_site', 'type': 'string', "facet": True},
{'name': 'isbn', 'optional': True, 'type': 'string'},
{'name': 'imdb_code', 'optional': True, 'type': 'string'},
{'name': 'author', 'optional': True, 'locale': 'zh', 'type': 'string[]'},
{'name': 'orig_title', 'optional': True, 'locale': 'zh', 'type': 'string'},
{'name': 'pub_house', 'optional': True, 'locale': 'zh', 'type': 'string'},
{'name': 'title', 'optional': True, 'locale': 'zh', 'type': 'string'},
{'name': 'translator', 'optional': True, 'locale': 'zh', 'type': 'string[]'},
{'name': 'subtitle', 'optional': True, 'locale': 'zh', 'type': 'string'},
{'name': 'artist', 'optional': True, 'locale': 'zh', 'type': 'string[]'},
{'name': 'company', 'optional': True, 'locale': 'zh', 'type': 'string[]'},
{'name': 'developer', 'optional': True, 'locale': 'zh', 'type': 'string[]'},
{'name': 'other_title', 'optional': True, 'locale': 'zh', 'type': 'string[]'},
{'name': 'publisher', 'optional': True, 'locale': 'zh', 'type': 'string[]'},
{'name': 'actor', 'optional': True, 'locale': 'zh', 'type': 'string[]'},
{'name': 'director', 'optional': True, 'locale': 'zh', 'type': 'string[]'},
{'name': 'playwright', 'optional': True, 'locale': 'zh', 'type': 'string[]'},
{'name': 'tags', 'optional': True, 'locale': 'zh', 'type': 'string[]'},
{'name': '.*', 'optional': True, 'locale': 'zh', 'type': 'auto'},
]
self.instance().collections.create({
"name": INDEX_NAME,
"fields": fields
})
@classmethod
def update_settings(self):
# https://github.com/typesense/typesense/issues/96
print('not supported by typesense yet')
pass
@classmethod
def get_stats(self):
return self.instance().collections[INDEX_NAME].retrieve()
@classmethod
def busy(self):
return False
@classmethod
def update_model_indexable(self, model):
if settings.SEARCH_BACKEND is None:
return
self.class_map[model.__name__] = model
model.indexable_fields = ['tags']
model.indexable_fields_time = []
model.indexable_fields_dict = []
model.indexable_fields_float = []
for field in model._meta.get_fields():
type = field.get_internal_type()
if type in INDEXABLE_DIRECT_TYPES:
model.indexable_fields.append(field.name)
elif type in INDEXABLE_TIME_TYPES:
model.indexable_fields_time.append(field.name)
elif type in INDEXABLE_DICT_TYPES:
model.indexable_fields_dict.append(field.name)
elif type in INDEXABLE_FLOAT_TYPES:
model.indexable_fields_float.append(field.name)
post_save.connect(item_post_save_handler, sender=model)
post_delete.connect(item_post_delete_handler, sender=model)
@classmethod
def obj_to_dict(self, obj):
pk = f'{obj.__class__.__name__}-{obj.id}'
item = {
'_class': obj.__class__.__name__,
}
for field in obj.__class__.indexable_fields:
item[field] = getattr(obj, field)
for field in obj.__class__.indexable_fields_time:
item[field] = getattr(obj, field).timestamp()
for field in obj.__class__.indexable_fields_float:
item[field] = float(getattr(obj, field)) if getattr(
obj, field) else None
for field in obj.__class__.indexable_fields_dict:
d = getattr(obj, field)
if d.__class__ is dict:
item.update(d)
item = {k: v for k, v in item.items() if v and (
k in SEARCHABLE_ATTRIBUTES or k in FILTERABLE_ATTRIBUTES or k == 'id')}
item['_id'] = item['id']
# typesense requires primary key to be named 'id', type string
item['id'] = pk
return item
@classmethod
def replace_item(self, obj):
try:
self.instance().collections[INDEX_NAME].documents.upsert(self.obj_to_dict(obj), {
'dirty_values': 'coerce_or_drop'
})
except Exception as e:
logger.error(f"replace item error: \n{e}")
@classmethod
def replace_batch(self, objects):
try:
self.instance().collections[INDEX_NAME].documents.import_(
objects, {'action': 'upsert'})
except Exception as e:
logger.error(f"replace batch error: \n{e}")
@classmethod
def delete_item(self, obj):
pk = f'{obj.__class__.__name__}-{obj.id}'
try:
self.instance().collections[INDEX_NAME].documents[pk].delete()
except Exception as e:
logger.error(f"delete item error: \n{e}")
@classmethod
def search(self, q, page=1, category=None, tag=None, sort=None):
f = []
if category == 'music':
f.append('_class:= [Album, Song]')
elif category:
f.append('_class:= ' + category)
else:
f.append('')
if tag:
f.append(f"tags:= '{tag}'")
filter = ' && '.join(f)
options = {
'q': q,
'page': page,
'per_page': SEARCH_PAGE_SIZE,
'query_by': ','.join(SEARCHABLE_ATTRIBUTES),
'filter_by': filter,
# 'facetsDistribution': ['_class'],
# 'sort_by': None,
}
# print(q)
r = self.instance().collections[INDEX_NAME].documents.search(options)
# print(r)
import types
results = types.SimpleNamespace()
results.items = list([x for x in map(lambda i: self.item_to_obj(
i['document']), r['hits']) if x is not None])
results.num_pages = (
r['found'] + SEARCH_PAGE_SIZE - 1) // SEARCH_PAGE_SIZE
# print(results)
return results
@classmethod
def item_to_obj(self, item):
try:
return self.class_map[item['_class']].objects.get(id=item['_id'])
except Exception as e:
logger.error(f"unable to load search result item from db:\n{item}")
return None

common/searcher.py Normal file

@ -0,0 +1,209 @@
from urllib.parse import quote_plus
from enum import Enum
from common.models import SourceSiteEnum
from django.conf import settings
from common.scrapers.goodreads import GoodreadsScraper
from common.scrapers.spotify import get_spotify_token
import requests
from lxml import html
import logging
SEARCH_PAGE_SIZE = 5 # not all apis support page size
logger = logging.getLogger(__name__)
class Category(Enum):
Book = '书籍'
Movie = '电影'
Music = '音乐'
Game = '游戏'
TV = '剧集'
class SearchResultItem:
def __init__(self, category, source_site, source_url, title, subtitle, brief, cover_url):
self.category = category
self.source_site = source_site
self.source_url = source_url
self.title = title
self.subtitle = subtitle
self.brief = brief
self.cover_url = cover_url
@property
def verbose_category_name(self):
return self.category.value
@property
def link(self):
return f"/search?q={quote_plus(self.source_url)}"
@property
def scraped(self):
return False
class ProxiedRequest:
@classmethod
def get(cls, url):
u = f'http://api.scraperapi.com?api_key={settings.SCRAPERAPI_KEY}&url={quote_plus(url)}'
return requests.get(u, timeout=10)
class Goodreads:
@classmethod
def search(self, q, page=1):
results = []
try:
search_url = f'https://www.goodreads.com/search?page={page}&q={quote_plus(q)}'
r = requests.get(search_url)
if r.url.startswith('https://www.goodreads.com/book/show/'):
# Goodreads will 302 if only one result matches ISBN
data, img = GoodreadsScraper.scrape(r.url, r)
subtitle = f"{data['pub_year']} {', '.join(data['author'])} {', '.join(data['translator'] if data['translator'] else [])}"
results.append(SearchResultItem(Category.Book, SourceSiteEnum.GOODREADS,
data['source_url'], data['title'], subtitle,
data['brief'], data['cover_url']))
else:
h = html.fromstring(r.content.decode('utf-8'))
for c in h.xpath('//tr[@itemtype="http://schema.org/Book"]'):
el_cover = c.xpath('.//img[@class="bookCover"]/@src')
cover = el_cover[0] if el_cover else None
el_title = c.xpath('.//a[@class="bookTitle"]//text()')
title = ''.join(el_title).strip() if el_title else None
el_url = c.xpath('.//a[@class="bookTitle"]/@href')
url = 'https://www.goodreads.com' + \
el_url[0] if el_url else None
el_authors = c.xpath('.//a[@class="authorName"]//text()')
subtitle = ', '.join(el_authors) if el_authors else None
results.append(SearchResultItem(
Category.Book, SourceSiteEnum.GOODREADS, url, title, subtitle, '', cover))
except Exception as e:
logger.error(f"Goodreads search '{q}' error: {e}")
return results
class GoogleBooks:
@classmethod
def search(self, q, page=1):
results = []
try:
api_url = f'https://www.googleapis.com/books/v1/volumes?country=us&q={quote_plus(q)}&startIndex={SEARCH_PAGE_SIZE*(page-1)}&maxResults={SEARCH_PAGE_SIZE}&maxAllowedMaturityRating=MATURE'
j = requests.get(api_url).json()
if 'items' in j:
for b in j['items']:
if 'title' not in b['volumeInfo']:
continue
title = b['volumeInfo']['title']
subtitle = ''
if 'publishedDate' in b['volumeInfo']:
subtitle += b['volumeInfo']['publishedDate'] + ' '
if 'authors' in b['volumeInfo']:
subtitle += ', '.join(b['volumeInfo']['authors'])
if 'description' in b['volumeInfo']:
brief = b['volumeInfo']['description']
elif 'textSnippet' in b['volumeInfo']:
brief = b["volumeInfo"]["textSnippet"]["searchInfo"]
else:
brief = ''
category = Category.Book
# b['volumeInfo']['infoLink'].replace('http:', 'https:')
url = 'https://books.google.com/books?id=' + b['id']
cover = b['volumeInfo']['imageLinks']['thumbnail'] if 'imageLinks' in b['volumeInfo'] else None
results.append(SearchResultItem(
category, SourceSiteEnum.GOOGLEBOOKS, url, title, subtitle, brief, cover))
except Exception as e:
logger.error(f"GoogleBooks search '{q}' error: {e}")
return results
class TheMovieDatabase:
@classmethod
def search(self, q, page=1):
results = []
try:
api_url = f'https://api.themoviedb.org/3/search/multi?query={quote_plus(q)}&page={page}&api_key={settings.TMDB_API3_KEY}&language=zh-CN&include_adult=true'
j = requests.get(api_url).json()
for m in j['results']:
if m['media_type'] in ['tv', 'movie']:
url = f"https://www.themoviedb.org/{m['media_type']}/{m['id']}"
if m['media_type'] == 'tv':
cat = Category.TV
title = m['name']
subtitle = f"{m.get('first_air_date')} {m.get('original_name')}"
else:
cat = Category.Movie
title = m['title']
subtitle = f"{m.get('release_date')} {m.get('original_name')}"
cover = f"https://image.tmdb.org/t/p/w500/{m.get('poster_path')}"
results.append(SearchResultItem(
cat, SourceSiteEnum.TMDB, url, title, subtitle, m.get('overview'), cover))
except Exception as e:
logger.error(f"TMDb search '{q}' error: {e}")
return results
class Spotify:
@classmethod
def search(self, q, page=1):
results = []
try:
api_url = f"https://api.spotify.com/v1/search?q={q}&type=album&limit={SEARCH_PAGE_SIZE}&offset={page*SEARCH_PAGE_SIZE}"
headers = {
'Authorization': f"Bearer {get_spotify_token()}"
}
j = requests.get(api_url, headers=headers).json()
for a in j['albums']['items']:
title = a['name']
subtitle = a['release_date']
for artist in a['artists']:
subtitle += ' ' + artist['name']
url = a['external_urls']['spotify']
cover = a['images'][0]['url']
results.append(SearchResultItem(
Category.Music, SourceSiteEnum.SPOTIFY, url, title, subtitle, '', cover))
except Exception as e:
logger.error(f"Spotify search '{q}' error: {e}")
return results
class Bandcamp:
@classmethod
def search(self, q, page=1):
results = []
try:
search_url = f'https://bandcamp.com/search?from=results&item_type=a&page={page}&q={quote_plus(q)}'
r = requests.get(search_url)
h = html.fromstring(r.content.decode('utf-8'))
for c in h.xpath('//li[@class="searchresult data-search"]'):
el_cover = c.xpath('.//div[@class="art"]/img/@src')
cover = el_cover[0] if el_cover else None
el_title = c.xpath('.//div[@class="heading"]//text()')
title = ''.join(el_title).strip() if el_title else None
el_url = c.xpath('..//div[@class="itemurl"]/a/@href')
url = el_url[0] if el_url else None
el_authors = c.xpath('.//div[@class="subhead"]//text()')
subtitle = ', '.join(el_authors) if el_authors else None
results.append(SearchResultItem(Category.Music, SourceSiteEnum.BANDCAMP, url, title, subtitle, '', cover))
except Exception as e:
logger.error(f"Goodreads search '{q}' error: {e}")
return results
class ExternalSources:
@classmethod
def search(self, c, q, page=1):
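# fan the query out to every external source matching the requested category ('all' or empty queries them all)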
if not q:
return []
results = []
if c == '' or c is None:
c = 'all'
if c == 'all' or c == 'movie':
results.extend(TheMovieDatabase.search(q, page))
if c == 'all' or c == 'book':
results.extend(GoogleBooks.search(q, page))
results.extend(Goodreads.search(q, page))
if c == 'all' or c == 'music':
results.extend(Spotify.search(q, page))
results.extend(Bandcamp.search(q, page))
return results


@ -270,15 +270,12 @@ h6 {
img {
max-width: 100%;
-o-object-fit: contain;
object-fit: contain;
}
img.emoji {
height: 14px;
-webkit-box-sizing: border-box;
box-sizing: border-box;
-o-object-fit: contain;
object-fit: contain;
position: relative;
top: 3px;
@ -315,12 +312,10 @@ img.emoji--large {
*,
*:after,
*:before {
-webkit-box-sizing: inherit;
box-sizing: inherit;
}
html {
-webkit-box-sizing: border-box;
box-sizing: border-box;
height: 100%;
}
@ -379,15 +374,11 @@ input[type='time'],
input[type='color'],
textarea,
select {
-webkit-appearance: none;
-moz-appearance: none;
appearance: none;
background-color: transparent;
border: 0.1rem solid #ccc;
border-radius: .4rem;
-webkit-box-shadow: none;
box-shadow: none;
-webkit-box-sizing: inherit;
box-sizing: inherit;
padding: .6rem 1.0rem;
}
@ -408,51 +399,6 @@ select:focus {
outline: 0;
}
input[type='email']::-webkit-input-placeholder,
input[type='number']::-webkit-input-placeholder,
input[type='password']::-webkit-input-placeholder,
input[type='search']::-webkit-input-placeholder,
input[type='tel']::-webkit-input-placeholder,
input[type='text']::-webkit-input-placeholder,
input[type='url']::-webkit-input-placeholder,
input[type='date']::-webkit-input-placeholder,
input[type='time']::-webkit-input-placeholder,
input[type='color']::-webkit-input-placeholder,
textarea::-webkit-input-placeholder,
select::-webkit-input-placeholder {
color: #ccc;
}
input[type='email']:-ms-input-placeholder,
input[type='number']:-ms-input-placeholder,
input[type='password']:-ms-input-placeholder,
input[type='search']:-ms-input-placeholder,
input[type='tel']:-ms-input-placeholder,
input[type='text']:-ms-input-placeholder,
input[type='url']:-ms-input-placeholder,
input[type='date']:-ms-input-placeholder,
input[type='time']:-ms-input-placeholder,
input[type='color']:-ms-input-placeholder,
textarea:-ms-input-placeholder,
select:-ms-input-placeholder {
color: #ccc;
}
input[type='email']::-ms-input-placeholder,
input[type='number']::-ms-input-placeholder,
input[type='password']::-ms-input-placeholder,
input[type='search']::-ms-input-placeholder,
input[type='tel']::-ms-input-placeholder,
input[type='text']::-ms-input-placeholder,
input[type='url']::-ms-input-placeholder,
input[type='date']::-ms-input-placeholder,
input[type='time']::-ms-input-placeholder,
input[type='color']::-ms-input-placeholder,
textarea::-ms-input-placeholder,
select::-ms-input-placeholder {
color: #ccc;
}
input[type='email']::placeholder,
input[type='number']::placeholder,
input[type='password']::placeholder,
@ -468,11 +414,6 @@ select::placeholder {
color: #ccc;
}
::-moz-selection {
color: white;
background-color: #00a1cc;
}
::selection {
color: white;
background-color: #00a1cc;
@ -480,7 +421,6 @@ select::placeholder {
.navbar {
background-color: #f7f7f7;
-webkit-box-sizing: border-box;
box-sizing: border-box;
padding: 10px 0;
margin-bottom: 50px;
@ -488,20 +428,13 @@ select::placeholder {
}
.navbar .navbar__wrapper {
display: -webkit-box;
display: -ms-flexbox;
display: flex;
-webkit-box-pack: justify;
-ms-flex-pack: justify;
justify-content: space-between;
-webkit-box-align: center;
-ms-flex-align: center;
align-items: center;
position: relative;
}
.navbar .navbar__logo {
-ms-flex-preferred-size: 100px;
flex-basis: 100px;
}
@ -511,10 +444,7 @@ select::placeholder {
.navbar .navbar__link-list {
margin: 0;
display: -webkit-box;
display: -ms-flexbox;
display: flex;
-ms-flex-pack: distribute;
justify-content: space-around;
}
@ -533,11 +463,7 @@ select::placeholder {
.navbar .navbar__search-box {
margin: 0 12% 0 15px;
display: -webkit-inline-box;
display: -ms-inline-flexbox;
display: inline-flex;
-webkit-box-flex: 1;
-ms-flex: 1;
flex: 1;
}
@ -556,8 +482,6 @@ select::placeholder {
padding: 0;
padding-left: 10px;
color: #606c76;
-webkit-appearance: auto;
-moz-appearance: auto;
appearance: auto;
background-color: white;
height: 32px;
@ -596,7 +520,6 @@ select::placeholder {
.navbar .navbar__link-list {
margin-top: 7px;
max-height: 0;
-webkit-transition: max-height 0.6s ease-out;
transition: max-height 0.6s ease-out;
overflow: hidden;
}
@ -605,12 +528,10 @@ select::placeholder {
position: absolute;
right: 5px;
top: 3px;
-webkit-transform: scale(0.7);
transform: scale(0.7);
}
.navbar .navbar__dropdown-btn:hover + .navbar__link-list {
max-height: 500px;
-webkit-transition: max-height 0.6s ease-in;
transition: max-height 0.6s ease-in;
}
.navbar .navbar__search-box {
@ -654,14 +575,8 @@ select::placeholder {
width: 26%;
float: right;
position: relative;
display: -webkit-box;
display: -ms-flexbox;
display: flex;
-webkit-box-orient: vertical;
-webkit-box-direction: normal;
-ms-flex-direction: column;
flex-direction: column;
-ms-flex-pack: distribute;
justify-content: space-around;
}
@ -673,9 +588,6 @@ select::placeholder {
@media (max-width: 575.98px) {
.grid .grid__aside {
-webkit-box-orient: vertical !important;
-webkit-box-direction: normal !important;
-ms-flex-direction: column !important;
flex-direction: column !important;
}
}
@ -688,27 +600,18 @@ select::placeholder {
.grid .grid__aside {
width: 100%;
float: none;
-webkit-box-orient: horizontal;
-webkit-box-direction: normal;
-ms-flex-direction: row;
flex-direction: row;
}
.grid .grid__aside--tablet-column {
-webkit-box-orient: vertical;
-webkit-box-direction: normal;
-ms-flex-direction: column;
flex-direction: column;
}
.grid--reverse-order {
-webkit-transform: scaleY(-1);
transform: scaleY(-1);
}
.grid .grid__main--reverse-order {
-webkit-transform: scaleY(-1);
transform: scaleY(-1);
}
.grid .grid__aside--reverse-order {
-webkit-transform: scaleY(-1);
transform: scaleY(-1);
}
}
@ -778,7 +681,6 @@ select::placeholder {
margin-bottom: 4px !important;
position: absolute !important;
left: 50%;
-webkit-transform: translateX(-50%);
transform: translateX(-50%);
bottom: 0;
width: 100%;
@ -839,16 +741,13 @@ select::placeholder {
display: inline-block;
position: relative;
left: 50%;
-webkit-transform: translateX(-50%) scale(0.4);
transform: translateX(-50%) scale(0.4);
width: 80px;
height: 80px;
}
.spinner div {
-webkit-transform-origin: 40px 40px;
transform-origin: 40px 40px;
-webkit-animation: spinner 1.2s linear infinite;
animation: spinner 1.2s linear infinite;
}
@ -865,98 +764,65 @@ select::placeholder {
}
.spinner div:nth-child(1) {
-webkit-transform: rotate(0deg);
transform: rotate(0deg);
-webkit-animation-delay: -1.1s;
animation-delay: -1.1s;
}
.spinner div:nth-child(2) {
-webkit-transform: rotate(30deg);
transform: rotate(30deg);
-webkit-animation-delay: -1s;
animation-delay: -1s;
}
.spinner div:nth-child(3) {
-webkit-transform: rotate(60deg);
transform: rotate(60deg);
-webkit-animation-delay: -0.9s;
animation-delay: -0.9s;
}
.spinner div:nth-child(4) {
-webkit-transform: rotate(90deg);
transform: rotate(90deg);
-webkit-animation-delay: -0.8s;
animation-delay: -0.8s;
}
.spinner div:nth-child(5) {
-webkit-transform: rotate(120deg);
transform: rotate(120deg);
-webkit-animation-delay: -0.7s;
animation-delay: -0.7s;
}
.spinner div:nth-child(6) {
-webkit-transform: rotate(150deg);
transform: rotate(150deg);
-webkit-animation-delay: -0.6s;
animation-delay: -0.6s;
}
.spinner div:nth-child(7) {
-webkit-transform: rotate(180deg);
transform: rotate(180deg);
-webkit-animation-delay: -0.5s;
animation-delay: -0.5s;
}
.spinner div:nth-child(8) {
-webkit-transform: rotate(210deg);
transform: rotate(210deg);
-webkit-animation-delay: -0.4s;
animation-delay: -0.4s;
}
.spinner div:nth-child(9) {
-webkit-transform: rotate(240deg);
transform: rotate(240deg);
-webkit-animation-delay: -0.3s;
animation-delay: -0.3s;
}
.spinner div:nth-child(10) {
-webkit-transform: rotate(270deg);
transform: rotate(270deg);
-webkit-animation-delay: -0.2s;
animation-delay: -0.2s;
}
.spinner div:nth-child(11) {
-webkit-transform: rotate(300deg);
transform: rotate(300deg);
-webkit-animation-delay: -0.1s;
animation-delay: -0.1s;
}
.spinner div:nth-child(12) {
-webkit-transform: rotate(330deg);
transform: rotate(330deg);
-webkit-animation-delay: 0s;
animation-delay: 0s;
}
@-webkit-keyframes spinner {
0% {
opacity: 1;
}
100% {
opacity: 0;
}
}
@keyframes spinner {
0% {
opacity: 1;
@ -969,7 +835,6 @@ select::placeholder {
.bg-mask {
background-color: black;
z-index: 1;
-webkit-filter: opacity(20%);
filter: opacity(20%);
position: fixed;
width: 100%;
@ -986,7 +851,6 @@ select::placeholder {
width: 500px;
top: 50%;
left: 50%;
-webkit-transform: translate(-50%, -50%);
transform: translate(-50%, -50%);
background-color: #f7f7f7;
padding: 20px 20px 10px 20px;
@ -1107,7 +971,6 @@ select::placeholder {
width: 500px;
top: 50%;
left: 50%;
-webkit-transform: translate(-50%, -50%);
transform: translate(-50%, -50%);
background-color: #f7f7f7;
padding: 20px 20px 10px 20px;
@ -1146,7 +1009,6 @@ select::placeholder {
width: 500px;
top: 50%;
left: 50%;
-webkit-transform: translate(-50%, -50%);
transform: translate(-50%, -50%);
background-color: #f7f7f7;
padding: 20px 20px 10px 20px;
@ -1196,8 +1058,46 @@ select::placeholder {
word-break: break-all;
}
.add-to-list-modal {
z-index: 2;
display: none;
position: fixed;
width: 500px;
top: 50%;
left: 50%;
transform: translate(-50%, -50%);
background-color: #f7f7f7;
padding: 20px 20px 10px 20px;
color: #606c76;
}
.add-to-list-modal .add-to-list-modal__head {
margin-bottom: 20px;
}
.add-to-list-modal .add-to-list-modal__head::after {
content: ' ';
clear: both;
display: table;
}
.add-to-list-modal .add-to-list-modal__title {
font-weight: bold;
font-size: 1.2em;
float: left;
}
.add-to-list-modal .add-to-list-modal__close-button {
float: right;
cursor: pointer;
}
.add-to-list-modal .add-to-list-modal__confirm-button {
float: right;
}
@media (max-width: 575.98px) {
.mark-modal, .confirm-modal, .announcement-modal {
.mark-modal, .confirm-modal, .announcement-modal, .add-to-list-modal {
width: 100%;
}
}
@ -1246,6 +1146,13 @@ select::placeholder {
font-weight: bold;
}
.source-label.source-label__igdb {
background-color: #323A44;
color: #DFE1E2;
border: none;
font-weight: bold;
}
.source-label.source-label__steam {
background: linear-gradient(30deg, #1387b8, #111d2e);
color: white;
@ -1261,6 +1168,37 @@ select::placeholder {
font-weight: 600;
}
.source-label.source-label__goodreads {
background: #F4F1EA;
color: #372213;
font-weight: lighter;
}
.source-label.source-label__tmdb {
background: linear-gradient(90deg, #91CCA3, #1FB4E2);
color: white;
border: none;
font-weight: lighter;
padding-top: 2px;
}
.source-label.source-label__googlebooks {
color: white;
background-color: #4285F4;
border-color: #4285F4;
}
.source-label.source-label__bandcamp {
color: white;
background-color: #28A0C1;
display: inline-block;
}
.source-label.source-label__bandcamp span {
display: inline-block;
margin: 0 4px;
}
.main-section-wrapper {
padding: 32px 48px 32px 36px;
background-color: #f7f7f7;
@ -1276,8 +1214,6 @@ select::placeholder {
}
.entity-list .entity-list__entity {
display: -webkit-box;
display: -ms-flexbox;
display: flex;
margin-bottom: 36px;
}
@ -1289,7 +1225,6 @@ select::placeholder {
}
.entity-list .entity-list__entity-img {
-o-object-fit: contain;
object-fit: contain;
min-width: 130px;
max-width: 130px;
@ -1368,15 +1303,12 @@ select::placeholder {
.entity-detail .entity-detail__img {
height: 210px;
float: left;
-o-object-fit: contain;
object-fit: contain;
max-width: 150px;
-o-object-position: top;
object-position: top;
}
.entity-detail .entity-detail__img-origin {
cursor: -webkit-zoom-in;
cursor: zoom-in;
}
@ -1451,13 +1383,9 @@ select::placeholder {
}
.entity-desc .entity-desc__unfold-button {
display: -webkit-box;
display: -ms-flexbox;
display: flex;
color: #00a1cc;
background-color: transparent;
-webkit-box-pack: center;
-ms-flex-pack: center;
justify-content: center;
text-align: center;
}
@ -1597,19 +1525,13 @@ select::placeholder {
}
.entity-sort .entity-sort__entity-list {
display: -webkit-box;
display: -ms-flexbox;
display: flex;
-webkit-box-pack: start;
-ms-flex-pack: start;
justify-content: flex-start;
-ms-flex-wrap: wrap;
flex-wrap: wrap;
}
.entity-sort .entity-sort__entity {
padding: 0 10px;
-ms-flex-preferred-size: 20%;
flex-basis: 20%;
text-align: center;
display: inline-block;
@ -1658,11 +1580,7 @@ select::placeholder {
}
.entity-sort-control {
display: -webkit-box;
display: -ms-flexbox;
display: flex;
-webkit-box-pack: end;
-ms-flex-pack: end;
justify-content: flex-end;
}
@ -1693,11 +1611,7 @@ select::placeholder {
}
.related-user-list .related-user-list__user {
display: -webkit-box;
display: -ms-flexbox;
display: flex;
-webkit-box-pack: start;
-ms-flex-pack: start;
justify-content: flex-start;
margin-bottom: 20px;
}
@ -1791,11 +1705,8 @@ select::placeholder {
overflow: auto;
scroll-behavior: smooth;
scrollbar-width: none;
display: -webkit-box;
display: -ms-flexbox;
display: flex;
margin: auto;
-webkit-box-sizing: border-box;
box-sizing: border-box;
padding-bottom: 10px;
}
@ -1820,7 +1731,6 @@ select::placeholder {
}
.track-carousel__track img {
-o-object-fit: contain;
object-fit: contain;
}
@ -1829,13 +1739,8 @@ select::placeholder {
}
.track-carousel__button {
display: -webkit-box;
display: -ms-flexbox;
display: flex;
-webkit-box-pack: center;
-ms-flex-pack: center;
justify-content: center;
-ms-flex-line-pack: center;
align-content: center;
background: white;
border: none;
@ -1849,21 +1754,16 @@ select::placeholder {
.track-carousel__button--prev {
left: 0;
-webkit-transform: translate(50%, -50%);
transform: translate(50%, -50%);
}
.track-carousel__button--next {
right: 0;
-webkit-transform: translate(-50%, -50%);
transform: translate(-50%, -50%);
}
@media (max-width: 575.98px) {
.entity-list .entity-list__entity {
-webkit-box-orient: vertical;
-webkit-box-direction: normal;
-ms-flex-direction: column;
flex-direction: column;
margin-bottom: 30px;
}
@ -1883,9 +1783,6 @@ select::placeholder {
-webkit-line-clamp: 5;
}
.entity-detail {
-webkit-box-orient: vertical;
-webkit-box-direction: normal;
-ms-flex-direction: column;
flex-direction: column;
}
.entity-detail .entity-detail__title {
@ -1894,12 +1791,7 @@ select::placeholder {
.entity-detail .entity-detail__info {
margin-left: 0;
float: none;
display: -webkit-box;
display: -ms-flexbox;
display: flex;
-webkit-box-orient: vertical;
-webkit-box-direction: normal;
-ms-flex-direction: column;
flex-direction: column;
width: 100%;
}
@ -1920,7 +1812,6 @@ select::placeholder {
margin-top: 24px;
}
.entity-sort .entity-sort__entity {
-ms-flex-preferred-size: 50%;
flex-basis: 50%;
}
.entity-sort .entity-sort__entity-img {
@ -1947,22 +1838,13 @@ select::placeholder {
padding: 32px 28px 28px 28px;
}
.entity-detail {
display: -webkit-box;
display: -ms-flexbox;
display: flex;
}
}
.aside-section-wrapper {
display: -webkit-box;
display: -ms-flexbox;
display: flex;
-webkit-box-flex: 1;
-ms-flex: 1;
flex: 1;
-webkit-box-orient: vertical;
-webkit-box-direction: normal;
-ms-flex-direction: column;
flex-direction: column;
width: 100%;
padding: 28px 25px 12px 25px;
@ -2005,17 +1887,11 @@ select::placeholder {
}
.action-panel .action-panel__button-group {
display: -webkit-box;
display: -ms-flexbox;
display: flex;
-webkit-box-pack: justify;
-ms-flex-pack: justify;
justify-content: space-between;
}
.action-panel .action-panel__button-group--center {
-webkit-box-pack: center;
-ms-flex-pack: center;
justify-content: center;
}
@ -2084,11 +1960,7 @@ select::placeholder {
}
.user-profile .user-profile__header {
display: -webkit-box;
display: -ms-flexbox;
display: flex;
-webkit-box-align: start;
-ms-flex-align: start;
align-items: flex-start;
margin-bottom: 15px;
}
@ -2118,11 +1990,7 @@ select::placeholder {
}
.user-relation .user-relation__related-user-list {
display: -webkit-box;
display: -ms-flexbox;
display: flex;
-webkit-box-pack: start;
-ms-flex-pack: start;
justify-content: flex-start;
}
@ -2131,7 +1999,6 @@ select::placeholder {
}
.user-relation .user-relation__related-user {
-ms-flex-preferred-size: 25%;
flex-basis: 25%;
padding: 0px 3px;
text-align: center;
@ -2268,7 +2135,7 @@ select::placeholder {
background-color: #d5d5d5;
border-radius: 0;
height: 10px;
width: 65%;
width: 54%;
}
.import-panel .import-panel__progress progress::-webkit-progress-bar {
@ -2310,20 +2177,12 @@ select::placeholder {
}
.entity-card {
display: -webkit-box;
display: -ms-flexbox;
display: flex;
margin-bottom: 10px;
-webkit-box-orient: vertical;
-webkit-box-direction: normal;
-ms-flex-direction: column;
flex-direction: column;
}
.entity-card--horizontal {
-webkit-box-orient: horizontal;
-webkit-box-direction: normal;
-ms-flex-direction: row;
flex-direction: row;
}
@ -2353,7 +2212,6 @@ select::placeholder {
}
.entity-card .entity-card__img-wrapper {
-ms-flex-preferred-size: 100px;
flex-basis: 100px;
}
@ -2373,15 +2231,9 @@ select::placeholder {
margin-bottom: 20px !important;
}
.action-panel {
-webkit-box-orient: vertical !important;
-webkit-box-direction: normal !important;
-ms-flex-direction: column !important;
flex-direction: column !important;
}
.entity-card--horizontal {
-webkit-box-orient: vertical !important;
-webkit-box-direction: normal !important;
-ms-flex-direction: column !important;
flex-direction: column !important;
}
.entity-card .entity-card__info-wrapper {
@ -2394,10 +2246,7 @@ select::placeholder {
@media (max-width: 991.98px) {
.add-entity-entries {
display: -webkit-box;
display: -ms-flexbox;
display: flex;
-ms-flex-pack: distribute;
justify-content: space-around;
}
.aside-section-wrapper {
@ -2419,14 +2268,9 @@ select::placeholder {
margin: 0;
}
.action-panel {
-webkit-box-orient: horizontal;
-webkit-box-direction: normal;
-ms-flex-direction: row;
flex-direction: row;
}
.action-panel .action-panel__button-group {
-webkit-box-pack: space-evenly;
-ms-flex-pack: space-evenly;
justify-content: space-evenly;
}
.relation-dropdown {
@ -2436,53 +2280,35 @@ select::placeholder {
padding-bottom: 10px;
background-color: #f7f7f7;
width: 100%;
display: -webkit-box;
display: -ms-flexbox;
display: flex;
-webkit-box-pack: center;
-ms-flex-pack: center;
justify-content: center;
-webkit-box-align: center;
-ms-flex-align: center;
align-items: center;
cursor: pointer;
-webkit-transition: -webkit-transform 0.3s;
transition: -webkit-transform 0.3s;
transition: transform 0.3s;
transition: transform 0.3s, -webkit-transform 0.3s;
}
.relation-dropdown .relation-dropdown__button:focus {
background-color: red;
}
.relation-dropdown .relation-dropdown__button > .icon-arrow {
-webkit-transition: -webkit-transform 0.3s;
transition: -webkit-transform 0.3s;
transition: transform 0.3s;
transition: transform 0.3s, -webkit-transform 0.3s;
}
.relation-dropdown .relation-dropdown__button:hover > .icon-arrow > svg {
fill: #00a1cc;
}
.relation-dropdown .relation-dropdown__button > .icon-arrow--expand {
-webkit-transform: rotate(-180deg);
transform: rotate(-180deg);
}
.relation-dropdown .relation-dropdown__button + .relation-dropdown__body--expand {
max-height: 2000px;
-webkit-transition: max-height 1s ease-in;
transition: max-height 1s ease-in;
}
.relation-dropdown .relation-dropdown__body {
background-color: #f7f7f7;
max-height: 0;
-webkit-transition: max-height 1s ease-out;
transition: max-height 1s ease-out;
overflow: hidden;
}
.entity-card {
-webkit-box-orient: horizontal;
-webkit-box-direction: normal;
-ms-flex-direction: row;
flex-direction: row;
}
.entity-card .entity-card__info-wrapper {
@ -2510,21 +2336,7 @@ select::placeholder {
overflow: auto;
}
.entity-form > input[type='email'],
.entity-form > input[type='number'],
.entity-form > input[type='password'],
.entity-form > input[type='search'],
.entity-form > input[type='tel'],
.entity-form > input[type='text'],
.entity-form > input[type='url'],
.entity-form textarea, .review-form > input[type='email'],
.review-form > input[type='number'],
.review-form > input[type='password'],
.review-form > input[type='search'],
.review-form > input[type='tel'],
.review-form > input[type='text'],
.review-form > input[type='url'],
.review-form textarea {
.entity-form > input[type='email'], .entity-form > input[type='number'], .entity-form > input[type='password'], .entity-form > input[type='search'], .entity-form > input[type='tel'], .entity-form > input[type='text'], .entity-form > input[type='url'], .entity-form textarea, .review-form > input[type='email'], .review-form > input[type='number'], .review-form > input[type='password'], .review-form > input[type='search'], .review-form > input[type='tel'], .review-form > input[type='text'], .review-form > input[type='url'], .review-form textarea {
width: 100%;
}
@ -2614,15 +2426,11 @@ select::placeholder {
.ms-parent > .ms-choice {
margin-bottom: 1.5rem;
-webkit-appearance: none;
-moz-appearance: none;
appearance: none;
background-color: transparent;
border: 0.1rem solid #ccc;
border-radius: .4rem;
-webkit-box-shadow: none;
box-shadow: none;
-webkit-box-sizing: inherit;
box-sizing: inherit;
padding: .6rem 1.0rem;
width: 100%;
@ -2664,7 +2472,9 @@ select::placeholder {
}
.tag-input input {
-webkit-box-flex: 1;
-ms-flex-positive: 1;
flex-grow: 1;
}
.tools-section-wrapper input, .tools-section-wrapper select {
width: unset;
}

File diff suppressed because one or more lines are too long


@ -0,0 +1,5 @@
<svg width="850" height="850" xmlns="http://www.w3.org/2000/svg" version="1.1">
<g>
<path d="m464.16327,0q-32,0 -55,22t-25,55t20.5,58t56,27t58.5,-20.5t27,-56t-20.5,-59t-56.5,-26.5l-5,0zm-87,95l-232,118q20,20 25,48l231,-118q-19,-20 -24,-48zm167,27q-13,25 -38,38l183,184q13,-25 39,-38l-184,-184zm-142,22l-135,265l40,40l143,-280q-28,-5 -48,-25zm104,16q-22,11 -46,10l-8,-1l21,132l56,9l-23,-150zm-426,34q-32,0 -55,22.5t-25,55t20.5,58t56.5,27t59,-21t26.5,-56t-21,-58.5t-55.5,-27l-6,0zm90,68q1,9 1,18q-1,19 -10,35l132,21l26,-50l-149,-24zm225,36l-26,51l311,49q-1,-8 -1,-17q1,-19 10,-36l-294,-47zm372,6q-32,1 -55,23t-24.5,55t21,58t56,27t58.5,-20.5t27,-56.5t-20.5,-59t-56.5,-27l-6,0zm-606,13q-13,25 -39,38l210,210l51,-25l-222,-223zm-40,38q-21,11 -44,10l-9,-1l40,256q21,-10 45,-9l8,1l-40,-257zm364,22l48,311q21,-10 44,-9l10,1l-46,-294l-56,-9zm195,23l-118,60l8,56l135,-68q-20,-20 -25,-48zm26,49l-119,231q28,5 48,25l119,-231q-28,-5 -48,-25zm-475,29l-68,134q28,5 48,25l60,-119l-40,-40zm262,17l-281,143q19,20 24,48l265,-135l-8,-56zm-55,100l-51,25l106,107q13,-25 39,-38l-94,-94zm-291,24q-32,0 -55.5,22.5t-25,55t21,57.5t56,27t58.5,-20.5t27,-56t-20.5,-58.5t-56.5,-27l-5,0zm89,68q2,9 1,18q-1,19 -9,35l256,41q-1,-9 -1,-18q1,-18 10,-35l-257,-41zm335,0q-32,0 -55,22.5t-24.5,55t20.5,58t56,27t59,-21t27,-56t-20.5,-58.5t-56.5,-27l-6,0z"/>
</g>
</svg>


File diff suppressed because one or more lines are too long


Binary file not shown.


File diff suppressed because one or more lines are too long



@ -1,9 +1,9 @@
$(document).ready( function() {
$(".markdownx-preview").hide();
$(".markdownx textarea").attr("placeholder", "拖拽图片至编辑框即可插入哦~");
$(".markdownx textarea").attr("placeholder", "从剪贴板粘贴或者拖拽文件至编辑框即可插入图片");
$(".review-form__preview-button").click(function() {
$(".review-form__preview-button").on('click', function() {
if ($(".markdownx-preview").is(":visible")) {
$(".review-form__preview-button").text("预览");
$(".markdownx-preview").hide();


@ -7,7 +7,7 @@ $(document).ready( function() {
// pop up new rating modal
$("#addMarkPanel button").each(function() {
$(this).click(function(e) {
$(this).on('click', function(e) {
e.preventDefault();
let title = $(this).text().trim();
$(".mark-modal__title").text(title);
@ -29,7 +29,7 @@ $(document).ready( function() {
})
// pop up modify mark modal
$(".mark-panel a.edit").click(function(e) {
$(".mark-panel a.edit").on('click', function(e) {
e.preventDefault();
let title = $(".mark-panel__status").text().trim();
$(".mark-modal__title").text(title);
@ -79,7 +79,7 @@ $(document).ready( function() {
if ($("#statusSelection input[type='radio']:checked").val() == WISH_CODE) {
$(".mark-modal .rating-star-edit").hide();
}
$("#statusSelection input[type='radio']").click(function() {
$("#statusSelection input[type='radio']").on('click', function() {
if ($(this).val() == WISH_CODE) {
$(".mark-modal .rating-star-edit").hide();
} else {
@ -89,14 +89,14 @@ $(document).ready( function() {
});
// show confirm modal
$(".mark-panel a.delete").click(function(e) {
$(".mark-panel a.delete").on('click', function(e) {
e.preventDefault();
$(".confirm-modal").show();
$(".bg-mask").show();
});
// confirm modal
$(".confirm-modal input[type='submit']").click(function(e) {
$(".confirm-modal input[type='submit']").on('click', function(e) {
e.preventDefault();
$(".mark-panel form").submit();
});
@ -116,20 +116,20 @@ $(document).ready( function() {
});
// expand hidden long text
$(".entity-desc__unfold-button a").click(function() {
$(".entity-desc__unfold-button a").on('click', function() {
$(this).parent().siblings(".entity-desc__content").removeClass('entity-desc__content--folded');
$(this).parent(".entity-desc__unfold-button").remove();
});
// disable delete mark button after click
const confirmDeleteMarkButton = $('.confirm-modal__confirm-button > input');
confirmDeleteMarkButton.click(function() {
confirmDeleteMarkButton.on('click', function() {
confirmDeleteMarkButton.prop("disabled", true);
});
// disable submit button after click
const confirmSumbitMarkButton = $('.mark-modal__confirm-button > input');
confirmSumbitMarkButton.click(function() {
confirmSumbitMarkButton.on('click', function() {
confirmSumbitMarkButton.prop("disabled", true);
confirmSumbitMarkButton.closest('form')[0].submit();
});


@ -1,39 +1,22 @@
$(document).ready( function() {
$("#userInfoCard .mast-brief").text($("<div>"+$("#userInfoCard .mast-brief").text().replace(/\<br/g,'\n<br').replace(/\<p/g,'\n<p')+"</div>").text());
$("#userInfoCard .mast-brief").html($("#userInfoCard .mast-brief").html().replace(/\n/g,'<br/>'));
let token = $("#oauth2Token").text();
let mast_uri = $("#mastodonURI").text();
let mast_domain = new URL(mast_uri);
mast_domain = mast_domain.hostname;
let mast_domain = $("#mastodonURI").text();
let mast_uri = 'https://' + mast_domain
let id = $("#userMastodonID").text();
let userInfoSpinner = $("#spinner").clone().removeAttr("hidden");
if (id && id != 'None' && mast_domain != 'twitter.com') {
// let userInfoSpinner = $("#spinner").clone().removeAttr("hidden");
let followersSpinner = $("#spinner").clone().removeAttr("hidden");
let followingSpinner = $("#spinner").clone().removeAttr("hidden");
$("#userInfoCard").append(userInfoSpinner);
// $("#userInfoCard").append(userInfoSpinner);
$("#followings h5").after(followingSpinner);
$("#followers h5").after(followersSpinner);
$(".mast-following-more").hide();
$(".mast-followers-more").hide();
getUserInfo(
id,
mast_uri,
token,
function(userData) {
let userName;
if (userData.display_name) {
userName = translateEmojis(userData.display_name, userData.emojis, true);
} else {
userName = userData.username;
}
$("#userInfoCard .mast-avatar").attr("src", userData.avatar);
$("#userInfoCard .mast-displayname").html(userName);
$("#userInfoCard .mast-brief").text($(userData.note).text());
$(userInfoSpinner).remove();
}
);
getFollowers(
id,
mast_uri,
@ -109,6 +92,7 @@ $(document).ready( function() {
}
);
}
// mobile dropdown
$(".relation-dropdown__button").data("collapse", true);
@ -118,7 +102,7 @@ $(document).ready( function() {
button.children('.icon-arrow').toggleClass("icon-arrow--expand");
button.siblings('.relation-dropdown__body').toggleClass("relation-dropdown__body--expand");
}
$(".relation-dropdown__button").click(onClickDropdownButton)
$(".relation-dropdown__button").on('click', onClickDropdownButton);
// close when click outside
window.onclick = evt => {
@ -129,7 +113,7 @@ $(document).ready( function() {
};
// import panel
$("#uploadBtn").click(e => {
$("#uploadBtn").on('click', e => {
const btn = $("#uploadBtn")
const form = $(".import-panel__body form")
@ -201,7 +185,8 @@ $(document).ready( function() {
if (!data.total_items == 0) {
progress.attr("max", data.total_items);
progress.attr("value", data.finished_items);
percent.text(Math.floor(100 * data.finished_items / data.total_items) + '%');
progress.attr("value", data.finished_items);
percent.text("" + data.finished_items + "/" + data.total_items);
}
setTimeout(() => {
poll();


@ -54,38 +54,50 @@ const NUMBER_PER_REQUEST = 20
// "fields": []
// }
// ]
function getFollowers(id, mastodonURI, token, callback) {
let url = mastodonURI + API_FOLLOWERS.replace(":id", id);
$.ajax({
url: url,
method: 'GET',
headers: {
'Authorization': 'Bearer ' + token,
},
data: {
'limit': NUMBER_PER_REQUEST
},
success: function(data, status, request){
callback(data, request);
},
async function getFollowers(id, mastodonURI, token, callback) {
const url = mastodonURI + API_FOLLOWERS.replace(":id", id);
var response;
try {
response = await fetch(url+'?limit='+NUMBER_PER_REQUEST, {headers: {'Authorization': 'Bearer ' + token}});
} catch (e) {
console.error('loading followers failed.');
return;
}
const json = await response.json();
let nextUrl = null;
let links = response.headers.get('link');
if (links) {
links.split(',').forEach(link => {
if (link.includes('next')) {
let regex = /<(.*?)>/;
nextUrl = link.match(regex)[1];
}
});
}
callback(json, nextUrl);
}
function getFollowing(id, mastodonURI, token, callback) {
let url = mastodonURI + API_FOLLOWING.replace(":id", id);
$.ajax({
url: url,
method: 'GET',
headers: {
'Authorization': 'Bearer ' + token,
},
data: {
'limit': NUMBER_PER_REQUEST
},
success: function(data, status, request){
callback(data, request);
},
async function getFollowing(id, mastodonURI, token, callback) {
const url = mastodonURI + API_FOLLOWING.replace(":id", id);
var response;
try {
response = await fetch(url+'?limit='+NUMBER_PER_REQUEST, {headers: {'Authorization': 'Bearer ' + token}});
} catch (e) {
console.error('loading following failed.');
return;
}
const json = await response.json();
let nextUrl = null;
let links = response.headers.get('link');
if (links) {
links.split(',').forEach(link => {
if (link.includes('next')) {
let regex = /<(.*?)>/;
nextUrl = link.match(regex)[1];
}
});
}
callback(json, nextUrl);
}
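(Editor's note: a minimal usage sketch, not part of the patch, showing how the Link-header pagination above could be chained to collect every page; fetchAllFollowers is a hypothetical helper that reuses the API_FOLLOWERS and NUMBER_PER_REQUEST constants from this file.)
async function fetchAllFollowers(id, mastodonURI, token) {
    const all = [];
    let url = mastodonURI + API_FOLLOWERS.replace(":id", id) + '?limit=' + NUMBER_PER_REQUEST;
    while (url) {
        const response = await fetch(url, {headers: {'Authorization': 'Bearer ' + token}});
        const page = await response.json();
        all.push(...page);
        // the Mastodon API advertises the next page in the Link response header
        const links = response.headers.get('link');
        const next = links ? links.split(',').find(l => l.includes('next')) : null;
        url = next ? next.match(/<(.*?)>/)[1] : null;
    }
    return all;
}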
// {


@ -1,5 +1,5 @@
$(document).ready( function() {
let render = function() {
let ratingLabels = $(".rating-star");
$(ratingLabels).each( function(index, value) {
let ratingScore = $(this).data("rating-score") / 2;
@ -8,5 +8,9 @@ $(document).ready( function() {
readOnly: true
});
});
};
document.body.addEventListener('htmx:load', function(evt) {
render();
});
render();
});


@ -1,6 +1,6 @@
$(document).ready( function() {
$(".submit").click(function(e) {
$(".submit").on('click', function(e) {
e.preventDefault();
let form = $("#scrapeForm form");
if (form.data('submitted') === true) {


@ -8,7 +8,7 @@ $(() => {
$(e).data("visibility", true);
}
let btn = $("#toggleDisplayButtonTemplate").clone().removeAttr("id");
btn.click(e => {
btn.on('click', e => {
if ($(e.currentTarget).parent().data('visibility') === true) {
// flip text
$(e.currentTarget).children("span.showText").show();
@ -72,7 +72,7 @@ $(() => {
});
// activate sorting
$("#sortEditButton").click(evt => {
$("#sortEditButton").on('click', evt => {
// test if edit mode is activated
isActivated = $("#sortSaveIcon").is(":visible");
@ -134,7 +134,7 @@ $(() => {
});
// exit edit mode
$("#sortExitButton").click(evt => {
$("#sortExitButton").on('click', evt => {
initialLayoutData.forEach(elem => {
// set visibility
$('#' + elem.id).data('visibility', elem.visibility);


@ -1,605 +0,0 @@
/*!
* Milligram v1.3.0
* https://milligram.github.io
*
* Copyright (c) 2017 CJ Patoilo
* Licensed under the MIT license
*/
*,
*:after,
*:before {
box-sizing: inherit;
}
html {
box-sizing: border-box;
font-size: 62.5%;
}
body {
color: #606c76;
font-family: 'Roboto', 'Helvetica Neue', 'Helvetica', 'Arial', sans-serif;
font-size: 1.6em;
font-weight: 300;
letter-spacing: .01em;
line-height: 1.6;
}
textarea {
font-family: 'Roboto', 'Helvetica Neue', 'Helvetica', 'Arial', sans-serif;
}
blockquote {
border-left: 0.3rem solid #d1d1d1;
margin-left: 0;
margin-right: 0;
padding: 1rem 1.5rem;
}
blockquote *:last-child {
margin-bottom: 0;
}
.button,
button,
input[type='button'],
input[type='reset'],
input[type='submit'] {
background-color: #00a1cc;
border: 0.1rem solid #00a1cc;
border-radius: .4rem;
color: #fff;
cursor: pointer;
display: inline-block;
font-size: 1.1rem;
font-weight: 700;
height: 3.8rem;
letter-spacing: .1rem;
line-height: 3.8rem;
padding: 0 3.0rem;
text-align: center;
text-decoration: none;
text-transform: uppercase;
white-space: nowrap;
}
.button:focus, .button:hover,
button:focus,
button:hover,
input[type='button']:focus,
input[type='button']:hover,
input[type='reset']:focus,
input[type='reset']:hover,
input[type='submit']:focus,
input[type='submit']:hover {
background-color: #606c76;
border-color: #606c76;
color: #fff;
outline: 0;
}
.button[disabled],
button[disabled],
input[type='button'][disabled],
input[type='reset'][disabled],
input[type='submit'][disabled] {
cursor: default;
opacity: .5;
}
.button[disabled]:focus, .button[disabled]:hover,
button[disabled]:focus,
button[disabled]:hover,
input[type='button'][disabled]:focus,
input[type='button'][disabled]:hover,
input[type='reset'][disabled]:focus,
input[type='reset'][disabled]:hover,
input[type='submit'][disabled]:focus,
input[type='submit'][disabled]:hover {
background-color: #00a1cc;
border-color: #00a1cc;
}
.button.button-outline,
button.button-outline,
input[type='button'].button-outline,
input[type='reset'].button-outline,
input[type='submit'].button-outline {
background-color: transparent;
color: #00a1cc;
}
.button.button-outline:focus, .button.button-outline:hover,
button.button-outline:focus,
button.button-outline:hover,
input[type='button'].button-outline:focus,
input[type='button'].button-outline:hover,
input[type='reset'].button-outline:focus,
input[type='reset'].button-outline:hover,
input[type='submit'].button-outline:focus,
input[type='submit'].button-outline:hover {
background-color: transparent;
border-color: #606c76;
color: #606c76;
}
.button.button-outline[disabled]:focus, .button.button-outline[disabled]:hover,
button.button-outline[disabled]:focus,
button.button-outline[disabled]:hover,
input[type='button'].button-outline[disabled]:focus,
input[type='button'].button-outline[disabled]:hover,
input[type='reset'].button-outline[disabled]:focus,
input[type='reset'].button-outline[disabled]:hover,
input[type='submit'].button-outline[disabled]:focus,
input[type='submit'].button-outline[disabled]:hover {
border-color: inherit;
color: #00a1cc;
}
.button.button-clear,
button.button-clear,
input[type='button'].button-clear,
input[type='reset'].button-clear,
input[type='submit'].button-clear {
background-color: transparent;
border-color: transparent;
color: #00a1cc;
}
.button.button-clear:focus, .button.button-clear:hover,
button.button-clear:focus,
button.button-clear:hover,
input[type='button'].button-clear:focus,
input[type='button'].button-clear:hover,
input[type='reset'].button-clear:focus,
input[type='reset'].button-clear:hover,
input[type='submit'].button-clear:focus,
input[type='submit'].button-clear:hover {
background-color: transparent;
border-color: transparent;
color: #606c76;
}
.button.button-clear[disabled]:focus, .button.button-clear[disabled]:hover,
button.button-clear[disabled]:focus,
button.button-clear[disabled]:hover,
input[type='button'].button-clear[disabled]:focus,
input[type='button'].button-clear[disabled]:hover,
input[type='reset'].button-clear[disabled]:focus,
input[type='reset'].button-clear[disabled]:hover,
input[type='submit'].button-clear[disabled]:focus,
input[type='submit'].button-clear[disabled]:hover {
color: #00a1cc;
}
code {
background: #f4f5f6;
border-radius: .4rem;
font-size: 86%;
margin: 0 .2rem;
padding: .2rem .5rem;
white-space: nowrap;
}
pre {
background: #f4f5f6;
border-left: 0.3rem solid #00a1cc;
overflow-y: hidden;
}
pre > code {
border-radius: 0;
display: block;
padding: 1rem 1.5rem;
white-space: pre;
}
hr {
border: 0;
border-top: 0.1rem solid #f4f5f6;
margin: 3.0rem 0;
}
input[type='email'],
input[type='number'],
input[type='password'],
input[type='search'],
input[type='tel'],
input[type='text'],
input[type='url'],
textarea,
select {
-webkit-appearance: none;
-moz-appearance: none;
appearance: none;
background-color: transparent;
border: 0.1rem solid #d1d1d1;
border-radius: .4rem;
box-shadow: none;
box-sizing: inherit;
height: 3.8rem;
padding: .6rem 1.0rem;
width: 100%;
}
input[type='email']:focus,
input[type='number']:focus,
input[type='password']:focus,
input[type='search']:focus,
input[type='tel']:focus,
input[type='text']:focus,
input[type='url']:focus,
textarea:focus,
select:focus {
border-color: #00a1cc;
outline: 0;
}
select {
background: url('data:image/svg+xml;utf8,<svg xmlns="http://www.w3.org/2000/svg" height="14" viewBox="0 0 29 14" width="29"><path fill="#d1d1d1" d="M9.37727 3.625l5.08154 6.93523L19.54036 3.625"/></svg>') center right no-repeat;
padding-right: 3.0rem;
}
select:focus {
background-image: url('data:image/svg+xml;utf8,<svg xmlns="http://www.w3.org/2000/svg" height="14" viewBox="0 0 29 14" width="29"><path fill="#00a1cc" d="M9.37727 3.625l5.08154 6.93523L19.54036 3.625"/></svg>');
}
textarea {
min-height: 6.5rem;
}
label,
legend {
display: block;
font-size: 1.6rem;
font-weight: 700;
margin-bottom: .5rem;
}
fieldset {
border-width: 0;
padding: 0;
}
input[type='checkbox'],
input[type='radio'] {
display: inline;
}
.label-inline {
display: inline-block;
font-weight: normal;
margin-left: .5rem;
}
.container {
margin: 0 auto;
max-width: 112.0rem;
padding: 0 2.0rem;
position: relative;
width: 100%;
}
.row {
display: flex;
flex-direction: column;
padding: 0;
width: 100%;
}
.row.row-no-padding {
padding: 0;
}
.row.row-no-padding > .column {
padding: 0;
}
.row.row-wrap {
flex-wrap: wrap;
}
.row.row-top {
align-items: flex-start;
}
.row.row-bottom {
align-items: flex-end;
}
.row.row-center {
align-items: center;
}
.row.row-stretch {
align-items: stretch;
}
.row.row-baseline {
align-items: baseline;
}
.row .column {
display: block;
flex: 1 1 auto;
margin-left: 0;
max-width: 100%;
width: 100%;
}
.row .column.column-offset-10 {
margin-left: 10%;
}
.row .column.column-offset-20 {
margin-left: 20%;
}
.row .column.column-offset-25 {
margin-left: 25%;
}
.row .column.column-offset-33, .row .column.column-offset-34 {
margin-left: 33.3333%;
}
.row .column.column-offset-50 {
margin-left: 50%;
}
.row .column.column-offset-66, .row .column.column-offset-67 {
margin-left: 66.6666%;
}
.row .column.column-offset-75 {
margin-left: 75%;
}
.row .column.column-offset-80 {
margin-left: 80%;
}
.row .column.column-offset-90 {
margin-left: 90%;
}
.row .column.column-10 {
flex: 0 0 10%;
max-width: 10%;
}
.row .column.column-20 {
flex: 0 0 20%;
max-width: 20%;
}
.row .column.column-25 {
flex: 0 0 25%;
max-width: 25%;
}
.row .column.column-33, .row .column.column-34 {
flex: 0 0 33.3333%;
max-width: 33.3333%;
}
.row .column.column-40 {
flex: 0 0 40%;
max-width: 40%;
}
.row .column.column-50 {
flex: 0 0 50%;
max-width: 50%;
}
.row .column.column-60 {
flex: 0 0 60%;
max-width: 60%;
}
.row .column.column-66, .row .column.column-67 {
flex: 0 0 66.6666%;
max-width: 66.6666%;
}
.row .column.column-75 {
flex: 0 0 75%;
max-width: 75%;
}
.row .column.column-80 {
flex: 0 0 80%;
max-width: 80%;
}
.row .column.column-90 {
flex: 0 0 90%;
max-width: 90%;
}
.row .column .column-top {
align-self: flex-start;
}
.row .column .column-bottom {
align-self: flex-end;
}
.row .column .column-center {
-ms-grid-row-align: center;
align-self: center;
}
@media (min-width: 40rem) {
.row {
flex-direction: row;
margin-left: -1.0rem;
width: calc(100% + 2.0rem);
}
.row .column {
margin-bottom: inherit;
padding: 0 1.0rem;
}
}
a {
color: #00a1cc;
text-decoration: none;
}
a:focus, a:hover {
color: #606c76;
}
dl,
ol,
ul {
list-style: none;
margin-top: 0;
padding-left: 0;
}
dl dl,
dl ol,
dl ul,
ol dl,
ol ol,
ol ul,
ul dl,
ul ol,
ul ul {
font-size: 90%;
margin: 1.5rem 0 1.5rem 3.0rem;
}
ol {
list-style: decimal inside;
}
ul {
list-style: circle inside;
}
.button,
button,
dd,
dt,
li {
margin-bottom: 1.0rem;
}
fieldset,
input,
select,
textarea {
margin-bottom: 1.5rem;
}
blockquote,
dl,
figure,
form,
ol,
p,
pre,
table,
ul {
margin-bottom: 2.5rem;
}
table {
border-spacing: 0;
width: 100%;
}
td,
th {
border-bottom: 0.1rem solid #e1e1e1;
padding: 1.2rem 1.5rem;
text-align: left;
}
td:first-child,
th:first-child {
padding-left: 0;
}
td:last-child,
th:last-child {
padding-right: 0;
}
b,
strong {
font-weight: bold;
}
p {
margin-top: 0;
}
h1,
h2,
h3,
h4,
h5,
h6 {
font-weight: 300;
letter-spacing: -.1rem;
margin-bottom: 2.0rem;
margin-top: 0;
}
h1 {
font-size: 4.6rem;
line-height: 1.2;
}
h2 {
font-size: 3.6rem;
line-height: 1.25;
}
h3 {
font-size: 2.8rem;
line-height: 1.3;
}
h4 {
font-size: 2.2rem;
letter-spacing: -.08rem;
line-height: 1.35;
}
h5 {
font-size: 1.8rem;
letter-spacing: -.05rem;
line-height: 1.5;
}
h6 {
font-size: 1.6rem;
letter-spacing: 0;
line-height: 1.4;
}
img {
max-width: 100%;
}
.clearfix:after {
clear: both;
content: ' ';
display: table;
}
.float-left {
float: left;
}
.float-right {
float: right;
}


@ -1,10 +0,0 @@
/**
* multiple-select - Multiple select is a jQuery plugin to select multiple elements with checkboxes :).
*
* @version v1.5.2
* @homepage http://multiple-select.wenzhixin.net.cn
* @author wenzhixin <wenzhixin2010@gmail.com> (http://wenzhixin.net.cn/)
* @license MIT
*/
@charset "UTF-8";.ms-offscreen{clip:rect(0 0 0 0)!important;width:1px!important;height:1px!important;border:0!important;margin:0!important;padding:0!important;overflow:hidden!important;position:absolute!important;outline:0!important;left:auto!important;top:auto!important}.ms-parent{display:inline-block;position:relative;vertical-align:middle}.ms-choice{display:block;width:100%;height:26px;padding:0;overflow:hidden;cursor:pointer;border:1px solid #aaa;text-align:left;white-space:nowrap;line-height:26px;color:#444;text-decoration:none;border-radius:4px;background-color:#fff}.ms-choice.disabled{background-color:#f4f4f4;background-image:none;border:1px solid #ddd;cursor:default}.ms-choice>span{position:absolute;top:0;left:0;right:20px;white-space:nowrap;overflow:hidden;text-overflow:ellipsis;display:block;padding-left:8px}.ms-choice>span.placeholder{color:#999}.ms-choice>div.icon-close{position:absolute;top:0;right:16px;height:100%;width:16px}.ms-choice>div.icon-close:before{content:'×';color:#888;font-weight:bold;position:absolute;top:50%;margin-top:-14px}.ms-choice>div.icon-close:hover:before{color:#333}.ms-choice>div.icon-caret{position:absolute;width:0;height:0;top:50%;right:8px;margin-top:-2px;border-color:#888 transparent transparent transparent;border-style:solid;border-width:5px 4px 0 4px}.ms-choice>div.icon-caret.open{border-color:transparent transparent #888 transparent;border-width:0 4px 5px 4px}.ms-drop{width:auto;min-width:100%;overflow:hidden;display:none;margin-top:-1px;padding:0;position:absolute;z-index:1000;background:#fff;color:#000;border:1px solid #aaa;border-radius:4px}.ms-drop.bottom{top:100%;box-shadow:0 4px 5px rgba(0,0,0,0.15)}.ms-drop.top{bottom:100%;box-shadow:0 -4px 5px rgba(0,0,0,0.15)}.ms-search{display:inline-block;margin:0;min-height:26px;padding:2px;position:relative;white-space:nowrap;width:100%;z-index:10000;box-sizing:border-box}.ms-search input{width:100%;height:auto!important;min-height:24px;padding:0 5px;margin:0;outline:0;font-family:sans-serif;border:1px solid #aaa;border-radius:5px;box-shadow:none}.ms-drop ul{overflow:auto;margin:0;padding:0}.ms-drop ul>li{list-style:none;display:list-item;background-image:none;position:static;padding:.25rem 8px}.ms-drop ul>li .disabled{font-weight:normal!important;opacity:.35;filter:Alpha(Opacity=35);cursor:default}.ms-drop ul>li.multiple{display:block;float:left}.ms-drop ul>li.group{clear:both}.ms-drop ul>li.multiple label{width:100%;display:block;white-space:nowrap;overflow:hidden;text-overflow:ellipsis}.ms-drop ul>li label{position:relative;padding-left:1.25rem;margin-bottom:0;font-weight:normal;display:block;white-space:nowrap;cursor:pointer}.ms-drop ul>li label.optgroup{font-weight:bold}.ms-drop ul>li.hide-radio{padding:0}.ms-drop ul>li.hide-radio:focus,.ms-drop ul>li.hide-radio:hover{background-color:#f8f9fa}.ms-drop ul>li.hide-radio.selected{color:#fff;background-color:#007bff}.ms-drop ul>li.hide-radio label{margin-bottom:0;padding:5px 8px}.ms-drop ul>li.hide-radio input{display:none}.ms-drop ul>li.option-level-1 label{padding-left:28px}.ms-drop input[type="radio"],.ms-drop input[type="checkbox"]{position:absolute;margin-top:.3rem;margin-left:-1.25rem}.ms-drop .ms-no-results{display:none}


@ -0,0 +1,166 @@
.markdownx-preview h1 {
font-size: 2.5em;
}
.markdownx-preview h2 {
font-size: 2.0em;
}
.markdownx-preview h3 {
font-size: 1.6em;
}
.markdownx-preview blockquote {
border-left: lightgray solid 0.4em;
padding-left: 0.4em;
}
.collection-item-position-edit {
float: right;
}
.collection-item-position-edit a {
cursor: pointer;
color: #ccc;
}
.action-icon svg {
cursor: pointer;
fill: #ccc;
height: 12px;
vertical-align: text-bottom;
}
.entity-list__entity-img-wrapper {
position: relative;
}
.entity-list__entity-action-icon {
position: absolute;
top:0;
right:0;
mix-blend-mode: hard-light;
text-stroke: 1px black;
background-color: lightgray;
border-radius: 0 0 0 8px;
padding: 0 4px;
cursor: pointer;
}
/***** MODAL DIALOG ****/
#modal {
/* Underlay covers entire screen. */
position: fixed;
top:0px;
bottom: 0px;
left:0px;
right:0px;
background-color:rgba(0,0,0,0.5);
z-index:1000;
/* Flexbox centers the .modal-content vertically and horizontally */
display:flex;
flex-direction:column;
align-items:center;
/* Animate when opening */
animation-name: fadeIn;
animation-duration:150ms;
animation-timing-function: ease;
}
#modal > .modal-underlay {
/* underlay takes up the entire viewport. This is only
required if you want to click to dismiss the popup */
position: absolute;
z-index: -1;
top: 0px;
bottom:0px;
left: 0px;
right: 0px;
}
#modal > .modal-content {
/* Position visible dialog near the top of the window */
margin-top:10vh;
/* Sizing for visible dialog */
width:80%;
max-width:600px;
/* Display properties for visible dialog*/
background-color: #f7f7f7;
padding: 20px 20px 10px 20px;
color: #606c76;
/* Animate when opening */
animation-name:zoomIn;
animation-duration:150ms;
animation-timing-function: ease;
}
#modal.closing {
/* Animate when closing */
animation-name: fadeOut;
animation-duration:150ms;
animation-timing-function: ease;
}
#modal.closing > .modal-content {
/* Animate when closing */
animation-name: zoomOut;
animation-duration:150ms;
animation-timing-function: ease;
}
@keyframes fadeIn {
0% {opacity: 0;}
100% {opacity: 1;}
}
@keyframes fadeOut {
0% {opacity: 1;}
100% {opacity: 0;}
}
@keyframes zoomIn {
0% {transform: scale(0.9);}
100% {transform: scale(1);}
}
@keyframes zoomOut {
0% {transform: scale(1);}
100% {transform: scale(0.9);}
}
#modal .add-to-list-modal__head {
margin-bottom: 20px;
}
#modal .add-to-list-modal__head::after {
content: ' ';
clear: both;
display: table;
}
#modal .add-to-list-modal__title {
font-weight: bold;
font-size: 1.2em;
float: left;
}
#modal .add-to-list-modal__close-button {
float: right;
cursor: pointer;
}
#modal .add-to-list-modal__confirm-button {
float: right;
}
#modal li, #modal ul, #modal label {
display: inline;
}
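(Editor's note: a hypothetical companion sketch, not part of the patch — one way the closing animations defined above could be driven from script; closeModal and the way the element is removed are assumptions, only the .closing class and the fadeOut/zoomOut keyframes come from this stylesheet.)
function closeModal() {
    const modal = document.getElementById('modal');
    if (!modal) return;
    // adding .closing plays the fadeOut/zoomOut keyframes declared above
    modal.classList.add('closing');
    // drop the element once its fade-out animation has finished
    modal.addEventListener('animationend', () => modal.remove(), { once: true });
}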

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@ -0,0 +1,8 @@
<?xml version="1.0" encoding="UTF-8"?>
<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
<ShortName>NeoDB</ShortName>
<Description>输入关键字或站外条目链接搜索NeoDB书影音游戏</Description>
<InputEncoding>UTF-8</InputEncoding>
<Image type="image/jpeg" width="64" height="64">https://neodb.social/static/img/logo-square.jpg</Image>
<Url type="text/html" template="https://neodb.social/search/?q={searchTerms}"/>
</OpenSearchDescription>


@ -236,7 +236,7 @@ $panel-padding : 0
background-color: $color-quaternary
border-radius: 0
height: 10px
width: 65%
width: 54%
progress::-webkit-progress-bar
background-color: $color-quaternary


@ -7,10 +7,18 @@ $spotify-color-primary: #1ed760
$spotify-color-secondary: black
$imdb-color-primary: #F5C518
$imdb-color-secondary: #121212
$igdb-color-primary: #323A44
$igdb-color-secondary: #DFE1E2
$steam-color-primary: #1387b8
$steam-color-secondary: #111d2e
$bangumi-color-primary: #F09199
$bangumi-color-secondary: #FCFCFC
$goodreads-color-primary: #372213
$goodreads-color-secondary: #F4F1EA
$tmdb-color-primary: #91CCA3
$tmdb-color-secondary: #1FB4E2
$bandcamp-color-primary: #28A0C1
$bandcamp-color-secondary: white
.source-label
display: inline
@ -50,6 +58,11 @@ $bangumi-color-secondary: #FCFCFC
color: $imdb-color-secondary
border: none
font-weight: bold
&.source-label__igdb
background-color: $igdb-color-primary
color: $igdb-color-secondary
border: none
font-weight: bold
&.source-label__steam
background: linear-gradient(30deg, $steam-color-primary, $steam-color-secondary)
color: white
@ -61,3 +74,26 @@ $bangumi-color-secondary: #FCFCFC
color: $bangumi-color-primary
font-style: italic
font-weight: 600
&.source-label__goodreads
background: $goodreads-color-secondary
color: $goodreads-color-primary
font-weight: lighter
&.source-label__tmdb
background: linear-gradient(90deg, $tmdb-color-primary, $tmdb-color-secondary)
color: white
border: none
font-weight: lighter
padding-top: 2px
&.source-label__googlebooks
color: white
background-color: #4285F4
border-color: #4285F4
&.source-label__bandcamp
color: $bandcamp-color-secondary
background-color: $bandcamp-color-primary
// transform: skewX(-30deg)
display: inline-block
&.source-label__bandcamp span
// transform: skewX(30deg)
display: inline-block
margin: 0 4px


@ -115,10 +115,12 @@
&__content
word-break: break-all
.add-to-list-modal
@include modal
// Small devices (landscape phones, 576px and up)
@media (max-width: $small-devices)
.mark-modal, .confirm-modal, .announcement-modal
.mark-modal, .confirm-modal, .announcement-modal, .add-to-list-modal
width: 100%
// Medium devices (tablets, 768px and up)
@media (max-width: $medium-devices)


@ -51,3 +51,6 @@
.tag-input input
flex-grow: 1
.tools-section-wrapper input, .tools-section-wrapper select
width: unset


@ -5,9 +5,8 @@
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta http-equiv="refresh" content="3;url={% url 'common:home' %}">
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/skeleton-css@2.0.4/css/normalize.css">
<link rel="stylesheet" href="{% static 'lib/css/milligram.css' %}">
<meta http-equiv="refresh" content="5;url={% if url %}{{url}}{% else %}{% url 'common:home' %}{% endif %}">
<link rel="stylesheet" href="https://static.neodb.social/cdnjs.cloudflare.com/ajax/libs/milligram/1.4.1/milligram.min.css">
<link rel="stylesheet" href="{% static 'css/boofilsic_edit.css' %}">
<link rel="stylesheet" href="{% static 'css/boofilsic_box.css' %}">
<title>{% trans '错误' %}</title>


@ -0,0 +1,48 @@
{% load static %}
{% load i18n %}
{% load l10n %}
{% load humanize %}
{% load admin_url %}
{% load mastodon %}
{% load oauth_token %}
{% load truncate %}
{% load highlight %}
{% load thumb %}
{% for item in external_items %}
<li class="entity-list__entity">
<div class="entity-list__entity-img-wrapper">
<a href="{{ item.link }}">
<img src="{{ item.cover_url }}" alt="" class="entity-list__entity-img">
</a>
</div>
<div class="entity-list__entity-text">
<div class="entity-list__entity-title" style="font-style:italic;">
<a href="{{ item.link }}" class="entity-list__entity-link">
{% if request.GET.q %}
{{ item.title | highlight:request.GET.q }}
{% else %}
{{ item.title }}
{% endif %}
</a>
{% if not request.GET.c or not request.GET.c in categories %}
<span class="entity-list__entity-category">[{{item.verbose_category_name}}]</span>
{% endif %}
<a href="{{ item.source_url }}">
<span class="source-label source-label__{{ item.source_site }}">{{ item.source_site.label }}</span>
</a>
</div>
<span class="entity-list__entity-info entity-list__entity-info--full-length">
{{item.subtitle}}
</span>
<p class="entity-list__entity-brief">
{{ item.brief }}
</p>
<div class="tag-collection">
</div>
</div>
</li>
{% endfor %}


@ -14,12 +14,14 @@
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>{% trans 'NiceDB - 搜索结果' %}</title>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script>
<title>{{ site_name }} - {% trans '搜索结果' %}</title>
<script src="https://static.neodb.social/cdnjs.cloudflare.com/ajax/libs/jquery/3.6.0/jquery.min.js"></script>
<script src="https://static.neodb.social/cdnjs.cloudflare.com/ajax/libs/htmx/1.8.0/htmx.min.js"></script>
<script src="{% static 'lib/js/rating-star.js' %}"></script>
<script src="{% static 'js/rating-star-readonly.js' %}"></script>
<link rel="stylesheet" href="{% static 'css/boofilsic.min.css' %}">
<link rel="stylesheet" href="{% static 'lib/css/rating-star.css' %}">
<link rel="stylesheet" href="{% static 'lib/css/neo.css' %}">
</head>
<body>
@ -43,399 +45,39 @@
<ul class="entity-list__entities">
{% for item in items %}
{% if item.category_name|lower == 'book' %}
{% with book=item %}
<li class="entity-list__entity">
<div class="entity-list__entity-img-wrapper">
<a href="{% url 'books:retrieve' book.id %}">
<img src="{{ book.cover|thumb:'normal' }}" alt="" class="entity-list__entity-img">
</a>
</div>
<div class="entity-list__entity-text">
<div class="entity-list__entity-title">
<a href="{% url 'books:retrieve' book.id %}" class="entity-list__entity-link">
{% if request.GET.q %}
{{ book.title | highlight:request.GET.q }}
{% else %}
{{ book.title }}
{% endif %}
</a>
{% if not request.GET.c or not request.GET.c in categories %}
<span class="entity-list__entity-category">[{{item.verbose_category_name}}]</span>
{% endif %}
<a href="{{ book.source_url }}">
<span class="source-label source-label__{{ book.source_site }}">{{ book.get_source_site_display }}</span>
</a>
</div>
{% if book.rating %}
<div class="rating-star entity-list__rating-star" data-rating-score="{{ book.rating | floatformat:"0" }}"></div>
<span class="entity-list__rating-score rating-score">{{ book.rating }}</span>
{% else %}
<div class="entity-list__rating entity-list__rating--empty"> {% trans '暂无评分' %}</div>
{% endif %}
<span class="entity-list__entity-info">
{% if book.pub_year %}
{{ book.pub_year }}{% trans '年' %}
{% if book.pub_month %}
{{book.pub_month }}{% trans '月' %} /
{% endif %}
{% endif %}
{% if book.author %}
{% trans '作者' %}
{% for author in book.author %}
{{ author }}{% if not forloop.last %},{% endif %}
{% endfor %}/
{% endif %}
{% if book.translator %}
{% trans '译者' %}
{% for translator in book.translator %}
{{ translator }}{% if not forloop.last %},{% endif %}
{% endfor %}/
{% endif %}
{% if book.orig_title %}
&nbsp;{% trans '原名' %}
{{ book.orig_title }}
{% endif %}
</span>
<p class="entity-list__entity-brief">
{{ book.brief }}
</p>
<div class="tag-collection">
{% for tag_dict in book.tag_list %}
{% for k, v in tag_dict.items %}
{% if k == 'content' %}
<span class="tag-collection__tag">
<a href="{% url 'common:search' %}?tag={{ v }}">{{ v }}</a>
</span>
{% endif %}
{% endfor %}
{% endfor %}
</div>
</div>
</li>
{% endwith %}
{% elif item.category_name|lower == 'movie' %}
{% with movie=item %}
<li class="entity-list__entity">
<div class="entity-list__entity-img-wrapper">
<a href="{% url 'movies:retrieve' movie.id %}">
<img src="{{ movie.cover|thumb:'normal' }}" alt="" class="entity-list__entity-img">
</a>
</div>
<div class="entity-list__entity-text">
<div class="entity-list__entity-title">
<a href="{% url 'movies:retrieve' movie.id %}" class="entity-list__entity-link">
{% if movie.season %}
{% if request.GET.q %}
{{ movie.title | highlight:request.GET.q }} {% trans '第' %}{{ movie.season|apnumber }}{% trans '季' %}
{{ movie.orig_title | highlight:request.GET.q }} Season {{ movie.season }}
{% if movie.year %}({{ movie.year }}){% endif %}
{% else %}
{{ movie.title }} {% trans '第' %}{{ movie.season|apnumber }}{% trans '季' %}
{{ movie.orig_title }} Season {{ movie.season }}
{% if movie.year %}({{ movie.year }}){% endif %}
{% endif %}
{% else %}
{% if request.GET.q %}
{{ movie.title | highlight:request.GET.q }} {{ movie.orig_title | highlight:request.GET.q }}
{% if movie.year %}({{ movie.year }}){% endif %}
{% else %}
{{ movie.title }} {{ movie.orig_title }}
{% if movie.year %}({{ movie.year }}){% endif %}
{% endif %}
{% endif %}
</a>
{% if not request.GET.c or not request.GET.c in categories %}
<span class="entity-list__entity-category">[{{item.verbose_category_name}}]</span>
{% endif %}
<a href="{{ movie.source_url }}">
<span class="source-label source-label__{{ movie.source_site }}">{{ movie.get_source_site_display }}</span>
</a>
</div>
{% if movie.rating %}
<div class="rating-star entity-list__rating-star" data-rating-score="{{ movie.rating | floatformat:"0" }}"></div>
<span class="entity-list__rating-score rating-score">{{ movie.rating }}</span>
{% else %}
<div class="entity-list__rating entity-list__rating--empty"> {% trans '暂无评分' %}</div>
{% endif %}
<span class="entity-list__entity-info ">
{% if movie.director %}{% trans '导演' %}
{% for director in movie.director %}
{{ director }}{% if not forloop.last %} {% endif %}
{% endfor %}/
{% endif %}
{% if movie.genre %}{% trans '类型' %}
{% for genre in movie.get_genre_display %}
{{ genre }}{% if not forloop.last %} {% endif %}
{% endfor %}/
{% endif %}
</span>
<span class="entity-list__entity-info entity-list__entity-info--full-length">
{% if movie.actor %}{% trans '主演' %}
{% for actor in movie.actor %}
<span {% if forloop.counter > 5 %}style="display: none;" {% endif %}>{{ actor }}</span>
{% if forloop.counter <= 5 %}
{% if not forloop.counter == 5 and not forloop.last %} {% endif %}
{% endif %}
{% endfor %}
{% endif %}
</span>
<p class="entity-list__entity-brief">
{{ movie.brief }}
</p>
<div class="tag-collection">
{% for tag_dict in movie.tag_list %}
{% for k, v in tag_dict.items %}
{% if k == 'content' %}
<span class="tag-collection__tag">
<a href="{% url 'common:search' %}?tag={{ v }}">{{ v }}</a>
</span>
{% endif %}
{% endfor %}
{% endfor %}
</div>
</div>
</li>
{% endwith %}
{% elif item.category_name|lower == 'game' %}
{% with game=item %}
<li class="entity-list__entity">
<div class="entity-list__entity-img-wrapper">
<a href="{% url 'games:retrieve' game.id %}">
<img src="{{ game.cover|thumb:'normal' }}" alt="" class="entity-list__entity-img">
</a>
</div>
<div class="entity-list__entity-text">
<div class="entity-list__entity-title">
<a href="{% url 'games:retrieve' game.id %}" class="entity-list__entity-link">
{% if request.GET.q %}
{{ game.title | highlight:request.GET.q }}
{% else %}
{{ game.title }}
{% endif %}
</a>
{% if not request.GET.c or not request.GET.c in categories %}
<span class="entity-list__entity-category">[{{item.verbose_category_name}}]</span>
{% endif %}
<a href="{{ game.source_url }}">
<span class="source-label source-label__{{ game.source_site }}">{{ game.get_source_site_display }}</span>
</a>
</div>
{% if game.rating %}
<div class="rating-star entity-list__rating-star" data-rating-score="{{ game.rating | floatformat:"0" }}"></div>
<span class="entity-list__rating-score rating-score">{{ game.rating }}</span>
{% else %}
<div class="entity-list__rating entity-list__rating--empty"> {% trans '暂无评分' %}</div>
{% endif %}
<span class="entity-list__entity-info entity-list__entity-info--full-length">
{% if game.other_title %}{% trans '别名' %}
{% for other_title in game.other_title %}
{{ other_title }}{% if not forloop.last %} {% endif %}
{% endfor %}/
{% endif %}
{% if game.developer %}{% trans '开发商' %}
{% for developer in game.developer %}
{{ developer }}{% if not forloop.last %} {% endif %}
{% endfor %}/
{% endif %}
{% if game.genre %}{% trans '类型' %}
{% for genre in game.genre %}
{{ genre }}{% if not forloop.last %} {% endif %}
{% endfor %}/
{% endif %}
{% if game.platform %}{% trans '平台' %}
{% for platform in game.platform %}
{{ platform }}{% if not forloop.last %} {% endif %}
{% endfor %}/
{% endif %}
</span>
<p class="entity-list__entity-brief">
{{ game.brief }}
</p>
<div class="tag-collection">
{% for tag_dict in game.tag_list %}
{% for k, v in tag_dict.items %}
{% if k == 'content' %}
<span class="tag-collection__tag">
<a href="{% url 'common:search' %}?tag={{ v }}">{{ v }}</a>
</span>
{% endif %}
{% endfor %}
{% endfor %}
</div>
</div>
</li>
{% endwith %}
{% elif item.category_name|lower == 'album' or item.category_name|lower == 'song' %}
{% with music=item %}
<li class="entity-list__entity">
<div class="entity-list__entity-img-wrapper">
{% if item.category_name|lower == 'album' %}
<a href="{% url 'music:retrieve_album' music.id %}">
<img src="{{ music.cover|thumb:'normal' }}" alt="" class="entity-list__entity-img">
</a>
{% elif item.category_name|lower == 'song' %}
<a href="{% url 'music:retrieve_song' music.id %}">
<img src="{{ music.cover|thumb:'normal' }}" alt="" class="entity-list__entity-img">
</a>
{% endif %}
</div>
<div class="entity-list__entity-text">
<div class="entity-list__entity-title">
{% if item.category_name|lower == 'album' %}
<a href="{% url 'music:retrieve_album' music.id %}" class="entity-list__entity-link">
{% if request.GET.q %}
{{ music.title | highlight:request.GET.q }}
{% else %}
{{ music.title }}
{% endif %}
</a>
{% elif item.category_name|lower == 'song' %}
<a href="{% url 'music:retrieve_song' music.id %}" class="entity-list__entity-link">
{% if request.GET.q %}
{{ music.title | highlight:request.GET.q }}
{% else %}
{{ music.title }}
{% endif %}
</a>
{% endif %}
{% if not request.GET.c or not request.GET.c in categories %}
<span class="entity-list__entity-category">[{{item.verbose_category_name}}]</span>
{% endif %}
<a href="{{ music.source_url }}">
<span class="source-label source-label__{{ music.source_site }}">{{ music.get_source_site_display }}</span>
</a>
</div>
{% if music.rating %}
<div class="rating-star entity-list__rating-star" data-rating-score="{{ music.rating | floatformat:"0" }}"></div>
<span class="entity-list__rating-score rating-score">{{ music.rating }}</span>
{% else %}
<div class="entity-list__rating entity-list__rating--empty"> {% trans '暂无评分' %}</div>
{% endif %}
<span class="entity-list__entity-info ">
{% if music.artist %}{% trans '艺术家' %}
{% for artist in music.artist %}
<span>{{ artist }}</span>
{% if not forloop.last %} {% endif %}
{% endfor %}
{% endif %}
{% if music.genre %}/ {% trans '流派' %}
{{ music.genre }}
{% endif %}
{% if music.release_date %}/ {% trans '发行日期' %}
{{ music.release_date }}
{% endif %}
</span>
<span class="entity-list__entity-info entity-list__entity-info--full-length">
</span>
{% if music.brief %}
<p class="entity-list__entity-brief">
{{ music.brief }}
</p>
{% elif music.category_name|lower == 'album' %}
<p class="entity-list__entity-brief">
{% trans '曲目:' %}{{ music.track_list }}
</p>
{% else %}
<!-- song -->
<p class="entity-list__entity-brief">
{% trans '所属专辑:' %}{{ music.album }}
</p>
{% endif %}
<div class="tag-collection">
{% for tag_dict in music.tag_list %}
{% for k, v in tag_dict.items %}
{% if k == 'content' %}
<span class="tag-collection__tag">
<a href="{% url 'common:search' %}?tag={{ v }}">{{ v }}</a>
</span>
{% endif %}
{% endfor %}
{% endfor %}
</div>
</div>
</li>
{% endwith %}
{% endif %}
{% include "partial/list_item.html" %}
{% empty %}
<li class="entity-list__entity">
{% trans '无站内条目匹配' %}
</li>
{% endfor %}
{% if request.GET.q and user.is_authenticated %}
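<!-- htmx swaps this placeholder on page load with results fetched asynchronously from external sites, so slow third-party lookups don't block the initial response -->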
<li class="entity-list__entity" hx-get="{% url 'common:external_search' %}?q={{ request.GET.q }}&c={{ request.GET.c }}&page={% if pagination.current_page %}{{ pagination.current_page }}{% else %}1{% endif %}" hx-trigger="load" hx-swap="outerHTML">
{% trans '正在实时搜索站外条目' %}
</li>
{% endif %}
</ul>
</div>
<div class="pagination" >
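<!-- the pagination context object exposes has_prev/has_next, previous_page/next_page, current_page, last_page and page_range -->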
{% if pagination.has_prev %}
<a href="?page=1&{% if request.GET.q %}q={{ request.GET.q }}{% elif request.GET.tag %}tag={{ request.GET.tag }}{% endif %}{% if request.GET.c %}&c={{ request.GET.c }}{% endif %}" class="pagination__nav-link pagination__nav-link">&laquo;</a>
<a href="?page={{ pagination.previous_page }}&{% if request.GET.q %}q={{ request.GET.q }}{% elif request.GET.tag %}tag={{ request.GET.tag }}{% endif %}{% if request.GET.c %}&c={{ request.GET.c }}{% endif %}" class="pagination__nav-link pagination__nav-link--right-margin pagination__nav-link">&lsaquo;</a>
{% endif %}
{% for page in pagination.page_range %}
{% if page == pagination.current_page %}
<a href="?page={{ page }}&{% if request.GET.q %}q={{ request.GET.q }}{% elif request.GET.tag %}tag={{ request.GET.tag }}{% endif %}{% if request.GET.c %}&c={{ request.GET.c }}{% endif %}" class="pagination__page-link pagination__page-link--current">{{ page }}</a>
{% else %}
<a href="?page={{ page }}&{% if request.GET.q %}q={{ request.GET.q }}{% elif request.GET.tag %}tag={{ request.GET.tag }}{% endif %}{% if request.GET.c %}&c={{ request.GET.c }}{% endif %}" class="pagination__page-link">{{ page }}</a>
{% endif %}
{% endfor %}
{% if pagination.has_next %}
<a href="?page={{ pagination.next_page }}&{% if request.GET.q %}q={{ request.GET.q }}{% elif request.GET.tag %}tag={{ request.GET.tag }}{% endif %}{% if request.GET.c %}&c={{ request.GET.c }}{% endif %}" class="pagination__nav-link pagination__nav-link--left-margin">&rsaquo;</a>
<a href="?page={{ pagination.last_page }}&{% if request.GET.q %}q={{ request.GET.q }}{% elif request.GET.tag %}tag={{ request.GET.tag }}{% endif %}{% if request.GET.c %}&c={{ request.GET.c }}{% endif %}" class="pagination__nav-link">&raquo;</a>
{% endif %}
</div>
@@ -500,7 +142,7 @@
</a>
{% endif %}
</div>
<!-- div class="add-entity-entries__entry">
{% if request.GET.c and request.GET.c in categories %}
{% if request.GET.c|lower == 'book' %}
@@ -560,7 +202,7 @@
</a>
{% endif %}
</div -->
</div>
@@ -573,15 +215,11 @@
</div>
{% comment %}
<div id="oauth2Token" hidden="true">{% oauth_token %}</div>
<div id="mastodonURI" hidden="true">{% mastodon request.user.mastodon_site %}</div>
<!--current user mastodon id-->
<div id="userMastodonID" hidden="true">{{ user.mastodon_id }}</div>
{% endcomment %}
<script>
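// attach Django's CSRF token to every request htmx issues, so POST calls pass CSRF validation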
document.body.addEventListener('htmx:configRequest', (event) => {
event.detail.headers['X-CSRFToken'] = '{{ csrf_token }}';
})
</script>
</body>
@@ -0,0 +1,61 @@
{% load static %}
{% load i18n %}
{% load admin_url %}
{% load mastodon %}
{% load oauth_token %}
{% load truncate %}
{% load thumb %}
<div id="modals">
<style>
.bottom-link {
margin-top: 30px; text-align: center; margin-bottom: 5px;
}
.bottom-link a {
color: #ccc;
}
</style>
<div class="announcement-modal modal">
<div class="announcement-modal__head">
<h4 class="announcement-modal__title">{% trans '公告' %}</h4>
<span class="announcement-modal__close-button modal-close">
<span class="icon-cross">
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 20 20">
<polygon
points="20 2.61 17.39 0 10 7.39 2.61 0 0 2.61 7.39 10 0 17.39 2.61 20 10 12.61 17.39 20 20 17.39 12.61 10 20 2.61">
</polygon>
</svg>
</span>
</span>
</div>
<div class="announcement-modal__body">
<ul>
{% for ann in unread_announcements %}
<li class="announcement">
<a href="{% url 'management:retrieve' ann.pk %}">
<h5 class="announcement__title">{{ ann.title }}</h5>
</a>
<span class="announcement__datetime">{{ ann.created_time }}</span>
<p class="announcement__content">{{ ann.get_plain_content | truncate:200 }}</p>
</li>
{% if not forloop.last %}
<div class="dividing-line" style="border-top-style: dashed;"></div>
{% endif %}
{% endfor %}
</ul>
<div class="bottom-link">
<a href="{% url 'management:list' %}">{% trans '查看全部公告' %}</a>
</div>
</div>
</div>
</div>
<div class="bg-mask"></div>
<script>
// because the modal and mask elements only exist when there are new announcements
$(".announcement-modal").show();
$(".bg-mask").show();
$(".modal-close").on('click', function () {
$(this).parents(".modal").hide();
$(".bg-mask").hide();
});
</script>
@@ -0,0 +1,23 @@
{% load static %}
{% if sentry_dsn %}
<script src="https://static.neodb.social/browser.sentry-cdn.com/7.7.0/bundle.min.js"></script>
<script>
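// guard on window.Sentry so a blocked or failed CDN load doesn't throw; release and environment come from server-side context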
if (window.Sentry) Sentry.init({
dsn: "{{ sentry_dsn }}",
release: "NeoDB@{{ version_hash }}",
environment: "{{ settings_module }}",
tracesSampleRate: 1.0,
});
</script>
{% endif %}
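<!-- pages that pass a truthy jquery flag in the context load full jQuery; everything else gets the lighter cash.js drop-in -->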
{% if jquery %}
<script src="https://static.neodb.social/cdnjs.cloudflare.com/ajax/libs/jquery/3.6.0/jquery.min.js"></script>
{% else %}
<script src="https://static.neodb.social/cdnjs.cloudflare.com/ajax/libs/cash/8.1.1/cash.min.js"></script>
{% endif %}
<script src="https://static.neodb.social/cdnjs.cloudflare.com/ajax/libs/htmx/1.8.0/htmx.min.js"></script>
<script src="https://static.neodb.social/unpkg.com/hyperscript.org@0.9.7.js"></script>
<link rel="stylesheet" href="{% static 'css/boofilsic.css' %}">
<link rel="stylesheet" href="{% static 'lib/css/rating-star.css' %}">
<link rel="stylesheet" href="{% static 'lib/css/neo.css' %}">
<link rel="search" type="application/opensearchdescription+xml" title="{{ site_name }}" href="{% static 'opensearch.xml' %}">
@@ -1,13 +1,12 @@
<footer class="footer">
<div class="grid">
<div class="footer__border">
<a class="footer__link" target="_blank" href="https://donotban.com/@whitiewhite">作者</a>
<a class="footer__link" target="_blank" href="{{ support_link }}">报告错误</a>
<a class="footer__link" target="_blank" href="https://github.com/doubaniux/boofilsic" id="githubLink">Github</a>
<a class="footer__link" target="_blank" href="https://patreon.com/tertius" id="sponsor">捐助上游项目</a>
<a class="footer__link" target="_blank" href="/announcement/supported-sites/" id="supported-sites">支持的网站</a>
<a class="footer__link" target="_blank" href="/announcement/" id="supported-sites">公告栏</a>
<a class="footer__link" href="javascript:void(0);" id="version">V0.4.4</a>
</div>
</div>
</footer>
@@ -1,24 +1,24 @@
{% load static %}
{% load i18n %}
{% load admin_url %}
<form method="get" action="{% url 'common:search' %}">
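<!-- the whole navbar lives inside a GET form, so pressing Enter submits q and the selected category c to common:search without custom JS -->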
<section id="navbar">
<nav class="navbar">
<div class="grid">
<div class="navbar__wrapper">
<a href="{% url 'common:home' %}" class="navbar__logo">
<img src="{% static 'img/logo.svg' %}" alt="" class="navbar__logo-img">
</a>
<div class="navbar__search-box">
<!-- <input type="search" class="" name="q" id="searchInput" required="true" value="{% for v in request.GET.values %}{{ v }}{% endfor %}" -->
<input type="search" class="" name="q" id="searchInput" required="true" value="{% if request.GET.q %}{{ request.GET.q }}{% endif %}"
placeholder="搜索书影音游戏,或输入站外条目链接如 https://movie.douban.com/subject/1297880/ 支持站点列表见页底公告栏">
<select class="navbar__search-dropdown" id="searchCategory" name="c">
<option value="all" {% if request.GET.c and request.GET.c == 'all' or not request.GET.c %}selected{% endif %}>{% trans '任意' %}</option>
<option value="book" {% if request.GET.c and request.GET.c == 'book' or '/books/' in request.path %}selected{% endif %}>{% trans '书籍' %}</option>
<option value="movie" {% if request.GET.c and request.GET.c == 'movie' or '/movies/' in request.path %}selected{% endif %}>{% trans '电影' %}</option>
<option value="music" {% if request.GET.c and request.GET.c == 'music' or '/music/' in request.path %}selected{% endif %}>{% trans '音乐' %}</option>
<option value="game" {% if request.GET.c and request.GET.c == 'game' or '/games/' in request.path %}selected{% endif %}>{% trans '游戏' %}</option>
</select>
</div>
<button class="navbar__dropdown-btn">• • •</button>
@@ -26,8 +26,11 @@
{% if request.user.is_authenticated %}
<a class="navbar__link" href="{% url 'users:home' request.user.mastodon_username %}">{% trans '主页' %}</a>
<a class="navbar__link" href="{% url 'timeline:timeline' %}">{% trans '动态' %}</a>
<a class="navbar__link" id="logoutLink" href="{% url 'users:data' %}">{% trans '数据' %}</a>
<a class="navbar__link" id="logoutLink" href="{% url 'users:preferences' %}">{% trans '设置' %}</a>
<a class="navbar__link" id="logoutLink" href="{% url 'users:logout' %}">{% trans '登出' %}</a>
{% if request.user.is_staff %}
<a class="navbar__link" href="{% admin_url %}">{% trans '后台' %}</a>
{% endif %}
@@ -36,23 +39,9 @@
<a class="navbar__link" href="{% url 'users:login' %}?next={{ request.path }}">{% trans '登录' %}</a>
{% endif %}
</ul>
</div>
</div>
</nav>
</section>
</form>
@@ -0,0 +1,186 @@
{% load static %}
{% load i18n %}
{% load admin_url %}
{% load mastodon %}
{% load oauth_token %}
{% load truncate %}
{% load thumb %}
{% load neo %}
<div class="grid__aside grid__aside--reverse-order grid__aside--tablet-column">
<div class="aside-section-wrapper aside-section-wrapper--no-margin">
<div class="user-profile" id="userInfoCard">
<div class="user-profile__header">
<!-- <img src="" class="user-profile__avatar mast-avatar" alt="{{ user.username }}"> -->
<img src="{{ user.mastodon_account.avatar }}" class="user-profile__avatar mast-avatar">
<a href="{% url 'users:home' user.mastodon_username %}">
<h5 class="user-profile__username mast-displayname">{{ user.mastodon_account.display_name }}</h5>
</a>
</div>
<p><a class="user-profile__link mast-acct" target="_blank" href="{{ user.mastodon_account.url }}">@{{ user.username }}@{{ user.mastodon_site }}</a>
{% current_user_relationship user as relationship %}
{% if relationship %}
<a class="user-profile__report-link">
{{ relationship }}
</a>
{% endif %}
</p>
<p class="user-profile__bio mast-brief">{{ user.mastodon_account.note }}</p>
{% if request.user != user %}
<a href="{% url 'users:report' %}?user_id={{ user.id }}"
class="user-profile__report-link">{% trans '投诉用户' %}</a>
{% endif %}
</div>
</div>
<div class="relation-dropdown">
<div class="relation-dropdown__button">
<span class="icon-arrow">
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 10 10">
<path d="M8.12,3.29,5,6.42,1.86,3.29H.45L5,7.84,9.55,3.29Z" />
</svg>
</span>
</div>
{% if user == request.user %}
<div class="relation-dropdown__body">
<div
class="aside-section-wrapper aside-section-wrapper--transparent aside-section-wrapper--collapse">
<div class="user-relation" id="followings">
<h5 class="user-relation__label">
{% trans '关注的人' %}
</h5>
<a href="{% url 'users:following' user.mastodon_username %}"
class="user-relation__more-link mast-following-more">{% trans '更多' %}</a>
<ul class="user-relation__related-user-list mast-following">
<li class="user-relation__related-user">
<a>
<img src="" alt="" class="user-relation__related-user-avatar">
<div class="user-relation__related-user-name mast-displayname">
</div>
</a>
</li>
</ul>
</div>
<div class="user-relation" id="followers">
<h5 class="user-relation__label">
{% trans '被他们关注' %}
</h5>
<a href="{% url 'users:followers' user.mastodon_username %}"
class="user-relation__more-link mast-followers-more">{% trans '更多' %}</a>
<ul class="user-relation__related-user-list mast-followers">
<li class="user-relation__related-user">
<a>
<img src="" alt="" class="user-relation__related-user-avatar">
<div class="user-relation__related-user-name mast-displayname">
</div>
</a>
</li>
</ul>
</div>
<div class="user-relation">
<h5 class="user-relation__label">
{% trans '常用标签' %}
</h5>
<a href="{% url 'users:tag_list' user.mastodon_username %}">{% trans '更多' %}</a>
<div class="tag-collection" style="margin-left: 0;">
{% if book_tags %}
<div>{% trans '书籍' %}</div>
{% for v in book_tags %}
<span class="tag-collection__tag">
<a href="{% url 'users:book_list' user.mastodon_username 'tagged' %}?t={{ v.content }}">{{ v.content }}</a>
</span>
{% endfor %}
<div class="clearfix"></div>
{% endif %}
{% if movie_tags %}
<div>{% trans '电影和剧集' %}</div>
{% for v in movie_tags %}
<span class="tag-collection__tag">
<a href="{% url 'users:movie_list' user.mastodon_username 'tagged' %}?t={{ v.content }}">{{ v.content }}</a>
</span>
{% endfor %}
<div class="clearfix"></div>
{% endif %}
{% if music_tags %}
<div>{% trans '音乐' %}</div>
{% for v in music_tags %}
<span class="tag-collection__tag">
<a href="{% url 'users:music_list' user.mastodon_username 'tagged' %}?t={{ v.content }}">{{ v.content }}</a>
</span>
{% endfor %}
<div class="clearfix"></div>
{% endif %}
{% if game_tags %}
<div>{% trans '游戏' %}</div>
{% for v in game_tags %}
<span class="tag-collection__tag">
<a href="{% url 'users:game_list' user.mastodon_username 'tagged' %}?t={{ v.content }}">{{ v.content }}</a>
</span>
{% endfor %}
<div class="clearfix"></div>
{% endif %}
</div>
</div>
</div>
<div
class="aside-section-wrapper aside-section-wrapper--transparent aside-section-wrapper--collapse">
{% if request.user.is_staff and request.user == user %}
<div class="report-panel">
<h5 class="report-panel__label">{% trans '投诉信息' %}</h5>
<a class="report-panel__all-link"
href="{% url 'users:manage_report' %}">全部投诉</a>
<div class="report-panel__body">
<ul class="report-panel__report-list">
{% for report in reports %}
<li class="report-panel__report">
<a href="{% url 'users:home' report.submit_user.mastodon_username %}"
class="report-panel__user-link">{{ report.submit_user }}</a>{% trans '已投诉' %}<a
href="{% url 'users:home' report.reported_user.mastodon_username %}"
class="report-panel__user-link">{{ report.reported_user }}</a>
</li>
{% empty %}
<div>{% trans '暂无新投诉' %}</div>
{% endfor %}
</ul>
</div>
</div>
{% endif %}
</div>
</div>
{% endif %}
</div>
</div>
{% if user == request.user %}
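<!-- hidden fields below are read by the site's client-side scripts, which query the user's Mastodon instance directly for following/follower data -->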
<div id="oauth2Token" hidden="true">{{ request.user.mastodon_token }}</div>
<div id="mastodonURI" hidden="true">{{ request.user.mastodon_site }}</div>
<div id="userMastodonID" hidden="true">{{ user.mastodon_id }}</div>
<div id="userPageURL" hidden="true">{% url 'users:home' 0 %}</div>
<div id="spinner" hidden>
<div class="spinner">
<div></div>
<div></div>
<div></div>
<div></div>
<div></div>
<div></div>
<div></div>
<div></div>
<div></div>
<div></div>
<div></div>
<div></div>
</div>
</div>
{% endif %}
@@ -0,0 +1,9 @@
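<!-- dispatch a search/list item to the partial that matches its category -->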
{% if item.category_name|lower == 'book' %}
{% include "partial/list_item_book.html" with book=item %}
{% elif item.category_name|lower == 'movie' %}
{% include "partial/list_item_movie.html" with movie=item %}
{% elif item.category_name|lower == 'game' %}
{% include "partial/list_item_game.html" with game=item %}
{% elif item.category_name|lower == 'album' or item.category_name|lower == 'song' %}
{% include "partial/list_item_music.html" with music=item %}
{% endif %}
