Run `poetry install` first if you have not already done so. The default settings in `server.utils.Settings` should be ok for native use. Override any you want in a `server/.env` file, or by setting them as environment variables in the shells where the app and workers are running.
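For example, an override in `server/.env` might look like this (a placeholder sketch; `SOME_SETTING` is not a real field, use the actual field names defined in `server.utils.Settings`):

```sh
# Placeholder only - substitute a real field name from server.utils.Settings.
echo "SOME_SETTING=some_value" >> server/.env

# Or set it in each shell where the app / worker runs:
export SOME_SETTING=some_value
```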
```sh
cd ..
docker-compose -f docker-compose.native.yml up
```

```sh
poetry run uvicorn app:app --host 0.0.0.0 --port 30001
```

```sh
cd tasks/
poetry run celery -A stream_inference worker --loglevel=info
```

Once running, see http://localhost:30001/docs for API documentation.
Edit the env variables in `docker-compose.yml` if needed - the defaults should be ok. Note that these differ in places from the defaults in `server.utils.Settings`, which are intended for native use.
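For example, to bump the log level for both services (a sketch; mirror the structure of the real `docker-compose.yml`, which defines the `app` and `worker` services and their defaults):

```yaml
# Sketch only - match the layout of the actual docker-compose.yml.
services:
  app:
    environment:
      - LOG_LEVEL=DEBUG
  worker:
    environment:
      - LOG_LEVEL=DEBUG
```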
```sh
# from repo root
docker-compose up
```

```sh
cd ui/
yarn start
```
Settings are configured here: https://github.com/HumanSignal/infra/blob/master/base/charts/humansignal/adala/values.yaml. The only significant difference from local as of now is that Kafka topic autocreation is enabled as a backup.
Server pytests in `test/test_server.py` rely on a running server (kafka, redis, and celery, but not app) behind the `use_server` mark, and on a real OpenAI/Azure key available in the `OPENAI_API_KEY`/`AZURE_API_KEY` env var behind the `use_openai`/`use_azure` mark. These tests are not run by default. To run them, use:
```sh
poetry run pytest -m "use_server or use_openai or use_azure"
```
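For reference, a test behind these marks might look roughly like this (names are illustrative, not the actual tests in `test/test_server.py`):

```python
import os

import pytest


@pytest.mark.use_server  # needs kafka, redis, and celery running
@pytest.mark.use_openai  # needs a real key in OPENAI_API_KEY
def test_streaming_job_example():
    # Hypothetical sketch: the real tests exercise the running services.
    assert os.environ.get("OPENAI_API_KEY"), "set OPENAI_API_KEY to run this test"
```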
```sh
# from repo root
docker-compose up --build

# if dependencies have changed, or to get to a clean state:
docker-compose build --no-cache
docker-compose up
```
```sh
./api_codegen.sh
export SERVER=$(pwd)
cd ui/
yarn codegen
```

Now edit `src/adala.tsx` manually, given the autogenerated changes in `src/_api/`.
When doing native dev, you can set the `LOG_LEVEL` env var to "INFO", "DEBUG", etc. For docker, change the value of the `LOG_LEVEL` env var in `docker-compose.yml` (for both `app` and `worker`). If not set, the logging level will default to "INFO".
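For example, in native dev (using the uvicorn command from above; repeat the export in the worker's shell):

```sh
export LOG_LEVEL=DEBUG
poetry run uvicorn app:app --host 0.0.0.0 --port 30001
```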