The Opentrons AI application's server.
- This folder is not plugged into the global Make ecosystem. This is intentional: it is a serverless application not tied to the Robot Stack dependencies.
- Clone the repository: `gh repo clone Opentrons/opentrons`
- `cd opentrons/opentrons-ai-server`
- Have pyenv installed per DEV_SETUP.md
- Use pyenv to install python: `pyenv install 3.12.4` (or the latest 3.12.*)
- Have nodejs and yarn installed per DEV_SETUP.md
  - This allows formatting of `.md` and `.json` files
- Select the python version: `pyenv local 3.12.4`
  - This will create a `.python-version` file in this directory (a quick interpreter check is sketched after this list)
- Select the node version with `nvs` or `nvm` (currently 18.19*)
- Install pipenv and python dependencies: `make setup`
- Have AWS credentials and config in place
- Have docker installed
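
Once the setup list is done, a quick way to confirm the pyenv-selected interpreter is the one actually running (a throwaway sanity check, not part of the repo):

```python
# Confirm the active interpreter is the pyenv-selected 3.12.x
import sys

assert sys.version_info[:2] == (3, 12), f"expected Python 3.12.x, got {sys.version}"
print(sys.executable)  # should point into your pyenv versions directory
```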
- Install a dev dependency: `python -m pipenv install pytest==8.2.0 --dev`
- Install a production dependency: `python -m pipenv install openai==1.25.1`
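
Since `openai` is installed as a production dependency, here is a minimal sketch of the call shape the pinned 1.x client provides (the model name, prompt, and `generate_protocol` function are illustrative assumptions, not the service's actual code):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def generate_protocol(prompt: str) -> str:
    """Illustrative openai>=1.x call shape; not the service's real logic."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content or ""
```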
- handler
  - the router and request/response handling (see the layering sketch after this list)
- domain
  - business logic
- integration
  - integration with other services
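
To make the separation of concerns concrete, here is a minimal sketch of a handler route delegating to a domain function (route path, names, and request/response shapes are illustrative assumptions, not the actual code):

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


# domain: business logic, with no knowledge of HTTP
def create_protocol_text(prompt: str) -> str:
    """Placeholder domain function; real logic would live in the domain layer."""
    return f"# protocol generated for: {prompt}"


# handler: routing and request/response shapes only
class GenerateRequest(BaseModel):
    prompt: str


class GenerateResponse(BaseModel):
    protocol: str


@app.post("/generate", response_model=GenerateResponse)
def generate(request: GenerateRequest) -> GenerateResponse:
    return GenerateResponse(protocol=create_protocol_text(request.prompt))
```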
- Make your changes
- Fix what can be fixed automatically, then lint and unit test like CI will: `make pre-commit`
- Ensure `make pre-commit` passes
- Run locally: `make run`
  - this runs the FastAPI server directly at localhost:8000 (a rough equivalent is sketched after this list)
  - this watches for changes and restarts the server
- Test locally: `make live-test` (ENV=local is the default in the Makefile)
- Use the live client: `make live-client`
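
For reference, `make run` is roughly equivalent to the snippet below (the `api.handler.fast:app` import string is an assumption; the Makefile is the source of truth):

```python
# dev_run.py: an illustrative equivalent of `make run`, not the Makefile target itself
import uvicorn

if __name__ == "__main__":
    uvicorn.run(
        "api.handler.fast:app",  # an import string is required for reload to work
        host="localhost",
        port=8000,
        reload=True,  # watch for changes and restart the server
    )
```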
- Our first version of this service is a long-running POST that may take 1-3 minutes to complete (a client sketch follows this list)
- This forces us to use CloudFront (180 second maximum) + Load Balancer + ECS Fargate FastAPI container
- An AWS service ticket is needed to increase the max CloudFront response time from 60 to 180 seconds
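
Since a response can take up to three minutes, any caller needs a timeout matching the 180-second CloudFront ceiling. A minimal client sketch using `requests` (the URL, endpoint path, and payload are illustrative assumptions):

```python
import requests

# URL and payload are placeholders; the 180 s timeout mirrors the
# CloudFront maximum described above.
response = requests.post(
    "http://localhost:8000/generate",
    json={"prompt": "Transfer 50 uL from plate A to plate B"},
    timeout=180,
)
response.raise_for_status()
print(response.json())
```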