lib.itmens/common/search/meilisearch.py

import logging
import types

import meilisearch
from django.conf import settings
from django.db.models.signals import post_delete, post_save

INDEX_NAME = 'items'
SEARCHABLE_ATTRIBUTES = [
    'title', 'orig_title', 'other_title', 'subtitle', 'artist', 'author',
    'translator', 'developer', 'director', 'actor', 'playwright',
    'pub_house', 'company', 'publisher', 'isbn', 'imdb_code',
]
INDEXABLE_DIRECT_TYPES = [
    'BigAutoField', 'BooleanField', 'CharField', 'PositiveIntegerField',
    'PositiveSmallIntegerField', 'TextField', 'ArrayField',
]
INDEXABLE_TIME_TYPES = ['DateTimeField']
INDEXABLE_DICT_TYPES = ['JSONField']
INDEXABLE_FLOAT_TYPES = ['DecimalField']
# NONINDEXABLE_TYPES = ['ForeignKey', 'FileField']
SEARCH_PAGE_SIZE = 20

logger = logging.getLogger(__name__)


def item_post_save_handler(sender, instance, created, **kwargs):
    # when SEARCH_INDEX_NEW_ONLY is set, only newly created items are indexed
    if not created and settings.SEARCH_INDEX_NEW_ONLY:
        return
    Indexer.replace_item(instance)


def item_post_delete_handler(sender, instance, **kwargs):
    Indexer.delete_item(instance)


def tag_post_save_handler(sender, instance, **kwargs):
    pass  # placeholder: tag changes are not propagated to the index yet


def tag_post_delete_handler(sender, instance, **kwargs):
    pass


class Indexer:
    class_map = {}
    _instance = None

    @classmethod
    def instance(cls):
        # lazily create a single client/index handle shared by all calls
        if cls._instance is None:
            cls._instance = meilisearch.Client(
                settings.MEILISEARCH_SERVER, settings.MEILISEARCH_KEY
            ).index(INDEX_NAME)
        return cls._instance

    @classmethod
    def init(cls):
        meilisearch.Client(
            settings.MEILISEARCH_SERVER, settings.MEILISEARCH_KEY
        ).create_index(INDEX_NAME, {'primaryKey': '_id'})
        cls.update_settings()

    @classmethod
    def update_settings(cls):
        cls.instance().update_searchable_attributes(SEARCHABLE_ATTRIBUTES)
        cls.instance().update_filterable_attributes(['_class', 'tags', 'source_site'])
        cls.instance().update_settings({'displayedAttributes': ['_id', '_class', 'id', 'title', 'tags']})
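
    # After update_settings(), the index settings would look roughly like this
    # (a sketch, not fetched from a live server):
    #   {
    #     "searchableAttributes": ["title", "orig_title", ..., "imdb_code"],
    #     "filterableAttributes": ["_class", "tags", "source_site"],
    #     "displayedAttributes": ["_id", "_class", "id", "title", "tags"]
    #   }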

    @classmethod
    def get_stats(cls):
        return cls.instance().get_stats()

    @classmethod
    def busy(cls):
        return cls.instance().get_stats()['isIndexing']

    @classmethod
    def update_model_indexable(cls, model):
        if settings.SEARCH_BACKEND is None:
            return
        cls.class_map[model.__name__] = model
        # classify the model's fields by how they should be indexed
        model.indexable_fields = ['tags']
        model.indexable_fields_time = []
        model.indexable_fields_dict = []
        model.indexable_fields_float = []
        for field in model._meta.get_fields():
            ftype = field.get_internal_type()  # avoid shadowing the builtin type()
            if ftype in INDEXABLE_DIRECT_TYPES:
                model.indexable_fields.append(field.name)
            elif ftype in INDEXABLE_TIME_TYPES:
                model.indexable_fields_time.append(field.name)
            elif ftype in INDEXABLE_DICT_TYPES:
                model.indexable_fields_dict.append(field.name)
            elif ftype in INDEXABLE_FLOAT_TYPES:
                model.indexable_fields_float.append(field.name)
        post_save.connect(item_post_save_handler, sender=model)
        post_delete.connect(item_post_delete_handler, sender=model)
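
    # Illustrative classification for a hypothetical Book model (field names
    # are assumptions, not part of this module):
    #   title (CharField)         -> indexable_fields
    #   pub_time (DateTimeField)  -> indexable_fields_time (stored as epoch seconds)
    #   rating (DecimalField)     -> indexable_fields_float
    #   other_info (JSONField)    -> indexable_fields_dict (merged into the document)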

    @classmethod
    def obj_to_dict(cls, obj):
        pk = f'{obj.__class__.__name__}-{obj.id}'
        item = {
            '_id': pk,
            '_class': obj.__class__.__name__,
            # 'id' is included via indexable_fields, since BigAutoField is a direct type
        }
        for field in obj.__class__.indexable_fields:
            item[field] = getattr(obj, field)
        for field in obj.__class__.indexable_fields_time:
            v = getattr(obj, field)
            item[field] = v.timestamp() if v else None
        for field in obj.__class__.indexable_fields_float:
            v = getattr(obj, field)
            item[field] = float(v) if v else None
        for field in obj.__class__.indexable_fields_dict:
            d = getattr(obj, field)
            if d.__class__ is dict:
                item.update(d)
        item = {k: v for k, v in item.items() if v}  # drop empty values
        return item
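
    # Example document for a hypothetical Book instance (values assumed):
    #   obj_to_dict(book) == {
    #       '_id': 'Book-42', '_class': 'Book', 'id': 42,
    #       'title': 'Dune', 'isbn': '9780441172719',
    #       'rating': 4.3, 'tags': ['sci-fi'],
    #   }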

    @classmethod
    def replace_item(cls, obj):
        try:
            cls.instance().add_documents([cls.obj_to_dict(obj)])
        except Exception as e:
            logger.error(f"replace item error: \n{e}")

    @classmethod
    def replace_batch(cls, objects):
        # objects are expected to be document dicts, e.g. from obj_to_dict()
        try:
            cls.instance().update_documents(documents=objects)
        except Exception as e:
            logger.error(f"replace batch error: \n{e}")

    @classmethod
    def delete_item(cls, obj):
        pk = f'{obj.__class__.__name__}-{obj.id}'
        try:
            cls.instance().delete_document(pk)
        except Exception as e:
            logger.error(f"delete item error: \n{e}")

    @classmethod
    def patch_item(cls, obj, fields):
        pk = f'{obj.__class__.__name__}-{obj.id}'
        # include the primary key in the document itself; update_documents()
        # takes an attribute name as primary_key, not a key value, so passing
        # [pk] there would not target the right document
        data = {'_id': pk}
        for f in fields:
            data[f] = getattr(obj, f)
        try:
            cls.instance().update_documents(documents=[data])
        except Exception as e:
            logger.error(f"patch item error: \n{e}")

    @classmethod
    def search(cls, q, page=1, category=None, tag=None, sort=None):
        if category or tag:
            f = []
            if category == 'music':
                f.append("(_class = 'Album' OR _class = 'Song')")
            elif category:
                f.append(f"_class = '{category}'")
            if tag:
                t = tag.replace("'", "\\'")  # escape single quotes in the filter expression
                f.append(f"tags = '{t}'")
            filter_expr = ' AND '.join(f)
        else:
            filter_expr = None
        options = {
            'offset': (page - 1) * SEARCH_PAGE_SIZE,
            'limit': SEARCH_PAGE_SIZE,
            'filter': filter_expr,
            'facetsDistribution': ['_class'],  # pre-v0.28 Meilisearch search API
            'sort': None,
        }
        try:
            r = cls.instance().search(q, options)
        except Exception as e:
            logger.error(f"MeiliSearch error: \n{e}")
            r = {'nbHits': 0, 'hits': []}
        results = types.SimpleNamespace()
        results.items = [x for x in (cls.item_to_obj(i) for i in r['hits']) if x is not None]
        results.num_pages = (r['nbHits'] + SEARCH_PAGE_SIZE - 1) // SEARCH_PAGE_SIZE
        return results
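
    # Example (illustrative; actual category values depend on the callers):
    #   search('dune', page=2, category='Movie', tag='sci-fi') queries with
    #   offset=20 and filter "_class = 'Movie' AND tags = 'sci-fi'";
    #   category='music' expands to "(_class = 'Album' OR _class = 'Song')".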

    @classmethod
    def item_to_obj(cls, item):
        try:
            return cls.class_map[item['_class']].objects.get(id=item['id'])
        except Exception:
            logger.error(f"unable to load search result item from db:\n{item}")
            return None
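

# Typical wiring (a sketch; the Book model and call sites are assumptions,
# not part of this module):
#
#   from books.models import Book
#   Indexer.update_model_indexable(Book)  # classify fields, connect signals
#   Indexer.init()                        # one-time index creation + settings
#   Indexer.replace_batch([Indexer.obj_to_dict(b) for b in Book.objects.all()])
#   results = Indexer.search('dune', page=1)
#   results.items, results.num_pages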