- Just want to run the project;
- The app in action;
- Design Document;
- Jupyter Notebook of model;
- App local development (Python);
- App local development (React);
- Saving a built and trained model;
- Exposing the model on TensorFlow Serving;
- Next steps;
- Contribution;
docker run -p 5000:5000 -e MODEL_PATH="./models/1703825980" buarki/snapmath-app
Once it's running, go to http://localhost:5000. You can find some sample images to use in the ML directory.
To develop the app, follow these steps:
- Enter the app directory:
cd app
- Initialize virtual env:
python3 -m venv snapmath
- Activate the virtual env:
source snapmath/bin/activate
- Install dependencies:
pip3 install -r requirements.txt
- Run the app:
MODEL_PATH="../models/1703825980" python3 -m flask run --host=0.0.0.0
Here, MODEL_PATH points to the saved model 1703825980.
Once it's running, go to http://localhost:5000. You can find some sample images to use in the ML directory.
To develop the app, follow these steps:
- Enter the app directory:
cd app-js
- Set the Node version to 19.0.0:
nvm use 19.0.0
- Install dependencies:
npm i
- Run the dev server:
npm run dev
Once it's running, go to http://localhost:3000. You can find some sample images to use in the ML directory.
For more info, see the full project here.
You can collect and save your built model by using this script. Just follow the steps below:
- Get the path of the latest model inside the Jupyter notebook container. You can do so by running:
docker-compose exec jupyter-notebook sh -c "ls -td snapmath-model/* | head -n 1"
- Get the base path of the jupyter container:
docker-compose exec jupyter-notebook pwd
- Get the container id of jupyter notebook:
docker ps | grep "jupyter-notebook"
If you are using docker-compose, you can use the container name instead of the id;
- Now call the collect-model.sh script, providing these arguments:
./collect-model.sh jupyter-notebook CONTAINER_BASE_PATH/MODEL_PATH
A concrete example is the one below:
./collect-model.sh 2c40e8ce7197 /home/jovyan/snapmath-model/1703825980
where:
- 2c40e8ce7197 is the Jupyter notebook container id
- /home/jovyan/snapmath-model/1703825980 is the path of the model to be collected and saved
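The collection steps above can be sketched as a small Python helper that composes the commands involved. This is a dry-run sketch, not part of the project: it only builds and prints the command strings (actually executing them requires Docker), and the container name `jupyter-notebook` follows the examples above.

```python
import shlex


def build_collect_commands(container: str = "jupyter-notebook") -> list[str]:
    """Compose the three lookup commands used before collecting a model."""
    return [
        # 1. Path of the most recent model inside the container
        f'docker-compose exec {container} sh -c "ls -td snapmath-model/* | head -n 1"',
        # 2. Base path of the container
        f"docker-compose exec {container} pwd",
        # 3. Container id (with docker-compose, the container name also works)
        f'docker ps | grep "{container}"',
    ]


def build_collect_invocation(container_id: str, base_path: str, model_path: str) -> str:
    """Final collect-model.sh call: CONTAINER_BASE_PATH/MODEL_PATH, as above."""
    return f"./collect-model.sh {shlex.quote(container_id)} {base_path}/{model_path}"


if __name__ == "__main__":
    for cmd in build_collect_commands():
        print(cmd)
    print(build_collect_invocation(
        "2c40e8ce7197", "/home/jovyan", "snapmath-model/1703825980"
    ))
```

Running it prints the exact commands from the steps above, which you can then paste into a shell.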
To see a trained model performing inferences, we can leverage TensorFlow Serving, a ready-to-use Docker image that loads the model and makes it available via REST or gRPC. The Makefile already has a command to expose a collected model. First, we need to build the image based on the provided Dockerfile, adding the collected models inside of it:
make build-tf-serving
Then we can run:
make run-tf-service
The REST API will be available at localhost:8501, and you can easily call it using the provided Jupyter Notebook client.
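As a sketch of what such a REST call looks like: TensorFlow Serving's predict endpoint accepts a POST with a JSON body containing an `instances` field. Note that the model name `snapmath-model` and the dummy input below are assumptions for illustration; use the name and input shape your saved model actually expects.

```python
import json
import urllib.request


def build_predict_request(instances, host="localhost", port=8501,
                          model_name="snapmath-model"):
    """Build a request for TensorFlow Serving's REST predict endpoint.

    NOTE: model_name is an assumption; check the name used when the
    serving image was built.
    """
    url = f"http://{host}:{port}/v1/models/{model_name}:predict"
    body = json.dumps({"instances": instances}).encode("utf-8")
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )


if __name__ == "__main__":
    # Dummy input; a real call must match the model's expected input shape.
    req = build_predict_request([[0.0]])
    print(req.full_url)
    # Sending the request requires the serving container to be up:
    # with urllib.request.urlopen(req) as resp:
    #     print(json.load(resp)["predictions"])
```

The actual network call is left commented out so the sketch runs without the serving container; uncomment it once `make run-tf-service` is up.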
For now I see some interesting points to inspect:
- Check whether it is possible to run this model using Go;
Contributions are welcome! If you find something interesting to improve just open a PR :)