Note: All Docker Hub images are available for both x86_64 and arm64 architectures.
Warning: If you choose to build the images yourself, you must train the model again, as the model.pt file is not included in the repository.
This is a school project intended to demonstrate the use of Kubernetes to build a microservices-based application with at least 2 microservices and a non-scalable database.
The project built here is a simple machine learning application in which you draw your own handwritten characters and a small convolutional neural network (CNN) with 422K parameters recognizes what you drew.
There are three microservices in this project: a frontend (React), an inference service (FastAPI), and an analytics service (FastAPI). The project also includes a non-scalable PostgreSQL 16 database.
- Kubernetes
- Docker (optional)
- PostgreSQL 16 (optional)
- Python 3.13 or higher (optional)
- NodeJS 22 or higher (optional)
- Ubuntu 24.04.3 (optional)
- MicroK8s (optional but highly recommended)
I use MicroK8s for development and for hosting the cluster. As this is a school project, there is no need for a production-ready setup, and since the cluster runs on a single node, the ha-cluster addon is disabled. The cluster uses the NGINX Ingress Controller for ingress, meaning the dns and ingress addons are enabled. For persistent volumes, the hostpath-storage addon is enabled.
This is the configuration I use:
```
ha-cluster: disabled
dns: enabled
ingress: enabled
hostpath-storage: enabled
```
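If you are starting from a fresh MicroK8s installation, this configuration can be reproduced with the standard addon commands (on recent MicroK8s releases ha-cluster is enabled by default, so it usually needs to be disabled explicitly):

```sh
# Disable high availability (single-node setup)
microk8s disable ha-cluster

# Enable the addons used by this project
microk8s enable dns ingress hostpath-storage

# Wait until the cluster reports ready
microk8s status --wait-ready
```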
The database runs behind a ClusterIP service and can therefore not be connected to directly using psql. To access the database for administration purposes, you need to use port forwarding. Run the commands below in two separate terminals, since kubectl port-forward keeps running in the foreground and blocks its terminal.
Terminal 1:
```sh
microk8s kubectl port-forward service/bs-postgres-service 5432:5432
```

Terminal 2:

```sh
PGPASSWORD=postgres psql -h 127.0.0.1 -p 5432 -U postgres -d default
```

Read the frontend documentation.
Read the inference documentation.
Read the model documentation.
Read the analytics documentation.
On my machine, I have configured the local domain bs-app.local to point to the ingress controller. This allows me to access the services behind my ingress controller using the domain name instead of the IP address of the cluster. You can do this by adding the following line to your /etc/hosts file:
```
127.0.0.1 bs-app.local
```

After you modify the file, you can access the cluster using the URL http://bs-app.local.
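To confirm that the entry works and that the ingress controller answers for that host, a quick check (assuming the ingress is deployed and listening on port 80) is:

```sh
# Should return an HTTP response from the ingress controller rather than a DNS resolution error
curl -i http://bs-app.local/
```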
Kubernetes automatically pulls the images from Docker Hub and caches them on your machine, depending on how your cluster and its image pull policy are configured.

- marcusfrdk/bs-frontend: ~20MB (x86_64, arm64)
- marcusfrdk/bs-inference: ~3.9GB (x86_64, arm64)
- marcusfrdk/bs-analytics: ~160MB (x86_64, arm64)
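If you want to fetch the images ahead of time or inspect them locally, you can also pull them manually with Docker (the build commands below tag them as latest, so that tag is assumed here):

```sh
docker pull marcusfrdk/bs-frontend:latest
docker pull marcusfrdk/bs-inference:latest
docker pull marcusfrdk/bs-analytics:latest
```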
Deploy the full application using the provided script:

```sh
./deploy.sh
```

Tear everything down again:

```sh
./down.sh
```

Alternatively, apply the manifests manually:

```sh
microk8s kubectl apply -f db.yml
microk8s kubectl apply -f ingress.yml
microk8s kubectl apply -f analytics/k8s.yml
microk8s kubectl apply -f inference/k8s.yml
microk8s kubectl apply -f frontend/k8s.yml
```

And delete them manually:

```sh
microk8s kubectl delete -f db.yml
microk8s kubectl delete -f ingress.yml
microk8s kubectl delete -f analytics/k8s.yml
microk8s kubectl delete -f inference/k8s.yml
microk8s kubectl delete -f frontend/k8s.yml
```
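Whichever approach you use, you can verify after deploying that everything came up correctly, for example:

```sh
# List the project's pods, services, and ingress resources
microk8s kubectl get pods,services,ingress

# Follow the logs of a specific pod if something is not starting
microk8s kubectl logs -f <pod-name>
```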
Start by setting up the development environment. This installs the NodeJS environment for the frontend microservice as well as the Python environment for the backend microservices.

```sh
chmod +x dev.sh
./dev.sh
source .venv/bin/activate
```

Once the environment is set up, open three separate terminals and run the following commands:
Terminal 1:
```sh
cd frontend
npm run dev
```

Terminal 2:

```sh
cd inference
fastapi dev server.py
```

Terminal 3:

```sh
cd analytics
fastapi dev server.py --port 8001
```

Note: For each microservice, make sure to update the hosts and ports accordingly.
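With the dev servers running, fastapi dev listens on port 8000 by default (8001 for analytics as started above), so you can sanity-check the backend services through FastAPI's auto-generated documentation, assuming the defaults have not been disabled:

```sh
# Interactive API docs served by FastAPI at /docs by default
curl -i http://localhost:8000/docs   # inference
curl -i http://localhost:8001/docs   # analytics
```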
Build an image for your current architecture:

```sh
docker build -t marcusfrdk/bs-<service>:latest .
```

Build and push multi-architecture images:

```sh
cd <service>
docker buildx build \
    --platform linux/amd64,linux/arm64 \
    -t marcusfrdk/bs-<service>:latest \
    --push .
```

Note: This is the build process I use for my repository. If you want to push the images to your own repository, make sure to update the image name accordingly.
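Multi-platform builds with --push also require a buildx builder that supports them; if your Docker installation only has the default builder, you may need to create one first (the builder name below is arbitrary):

```sh
# Create and select a builder capable of multi-platform builds
docker buildx create --name multiarch --use
docker buildx inspect --bootstrap
```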
This project is licensed under the MIT license. Read the LICENSE file for more details.

