NeoDB is a Django project that runs side by side with a modified version of [Takahe](https://github.com/jointakahe/takahe) (a separate Django project, included as the `neodb_takahe` submodule). The two communicate mainly through the database and the task queue; the diagram in [Docker Installation](install-docker.md) shows a typical architecture. They are currently loosely coupled, so either one can be taken offline without immediate impact on the other, which makes maintenance and troubleshooting much easier to conduct separately. They may be combined in the future, but that has not been decided and will not be decided any time soon.
Before writing code
-------------------
A working local NeoDB instance has to be established first; see the [install guide](install.md). If you plan to test anything related to ActivityPub, a setup with an externally reachable, real domain name may be required, as using `localhost` can cause quite a few issues here and there.
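For instance, a minimal `.env` for federation testing might look like the sketch below; the variable names follow the install guide, and the domain shown is only a placeholder for your own externally reachable one.

```bash
# illustrative .env excerpt — replace the domain with one that actually
# resolves to this machine; localhost will not federate reliably
NEODB_SITE_NAME="NeoDB Dev"
NEODB_SITE_DOMAIN="neodb-dev.example.org"
```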
To debug source code with `docker compose`, add `NEODB_DEBUG=True` to `.env`, and use `--profile dev` instead of `--profile production` in commands (see the command sketch after this list). The `dev` profile differs from `production` in the following ways:
- code in `NEODB_SRC` (default: `.`) and `TAKAHE_SRC` (default: `./neodb-takahe`) will be mounted into the containers and used instead of the code in the image
- `runserver` with autoreload will be used instead of `gunicorn` for both the neodb and takahe web servers
- `/static/` and `/s/` URLs are not mapped to pre-generated static file paths
- one `rqworker` container will be started, instead of two
- use `dev-shell` and `dev-root` to invoke shells, instead of `shell` and `root`
- there's no automatic `migration` container, but it can be triggered manually via `docker compose run dev-shell neodb-init`
- the Python virtual environments inside the docker image, `/neodb-venv` and `/takahe-venv`, will be used by default; they can be pointed to different locations with `NEODB_VENV` and `TAKAHE_VENV` if needed, usually when the local code requires a package not present in the docker venv
- some packages inside the Python virtual environments are platform dependent, so mounting a venv created on macOS into the Linux container will likely not work
- Python servers are launched as the `app` user, which has no write access anywhere except `/tmp` and the media path; this is by design
- the database and redis used in the container cluster are not accessible from outside, which is by design. To query them, either `apt update` / `apt install` the client packages inside the `dev-root` or `root` container, or modify `docker-compose.yml` to uncomment the `ports` sections
Pointing the venv variables at local environments requires `${NEODB_SRC}/.venv` and `${TAKAHE_SRC}/.venv` to both be ready with all requirements installed, and their python binary pointing to `/usr/local/bin/python` (because that's where python is in the docker base image).
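As a quick reference, a typical dev-profile workflow might look like the sketch below; the commands mirror the ones mentioned above, and flags like `--rm` are optional conveniences rather than requirements.

```bash
# enable debug mode once in .env
echo "NEODB_DEBUG=True" >> .env

# bring the stack up with the dev profile (mounted source trees, autoreload)
docker compose --profile dev up -d

# run migrations / initialization manually — the dev profile has no automatic migration container
docker compose run --rm dev-shell neodb-init

# open application or root shells for debugging
docker compose run --rm dev-shell
docker compose run --rm dev-root
```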
- `mastodon` leverages the [Mastodon API](https://docs.joinmastodon.org/client/intro/) and [Twitter API](https://developer.twitter.com/en/docs/twitter-api) for user login and data sync
- `catalog` manages the different types of items users may review, and the scrapers that fetch them from external resources; see [catalog.md](catalog.md) for more details
- `journal` manages user-created content (reviews/ratings) and lists (collections/shelves/tags); see [journal.md](journal.md) for more details
- `legacy` is only used by instances upgraded from 0.4.x or earlier, to provide a mapping from old URLs to new ones. If your journey starts with 0.5 or later, feel free to ignore it.