This project demonstrates a core MLOps workflow: taking a trained Scikit-learn Machine Learning model, wrapping it in a FastAPI web service, and packaging the entire application into a portable Docker container.
This setup ensures that the ML model can be consistently deployed and run on any environment that supports Docker, eliminating "it works on my machine" problems.
- Model: Scikit-learn (Logistic Regression), packaged with joblib
- Serving API: FastAPI (Python web framework)
- Web Server: Uvicorn (ASGI server for production)
- Containerization: Docker
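Based on this stack, app/requirements.txt plausibly contains something like the following sketch (the exact contents and any version pins are assumptions, not the project's actual file):

```text
# app/requirements.txt -- illustrative; the real file may pin specific versions
fastapi
uvicorn
scikit-learn
joblib
numpy
```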
This structure assumes the Dockerfile has been moved to the project root, which is where the build command below expects to find it.
```text
├── app/
│   ├── main.py            # FastAPI service (loads model, defines endpoints)
│   └── requirements.txt   # Python dependencies for the container
├── model/
│   └── trained_model.pkl  # The trained Logistic Regression model artifact
├── .dockerignore          # Files to exclude from the Docker build context
├── Dockerfile             # Instructions for building the Docker image
└── README.md              # This documentation
```
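For orientation, app/main.py can be sketched as below. This is a plausible reconstruction rather than the project's exact code: the response fields mirror the expected output shown later in this README, and the feature order passed to the model is an assumption about how it was trained.

```python
# app/main.py -- a plausible sketch of the service, not the project's exact code
import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Load the serialized Logistic Regression model once, at import time
model = joblib.load("model/trained_model.pkl")

class IrisFeatures(BaseModel):
    sepal_length: float
    sepal_width: float
    petal_length: float
    petal_width: float

@app.post("/predict")
def predict(features: IrisFeatures):
    # Feature order is an assumption about how the model was trained
    X = np.array([[features.sepal_length, features.sepal_width,
                   features.petal_length, features.petal_width]])
    pred = int(model.predict(X)[0])
    confidence = float(model.predict_proba(X)[0].max())
    return {
        "prediction_class": pred,
        "confidence": round(confidence, 4),
        "input_data": features.dict(),  # .dict() assumes Pydantic v1; use .model_dump() on v2
    }
```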
- Docker Desktop (or Docker Engine) must be installed and running on your host machine.
- Your terminal must be open in the root directory of the project.
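For reference, a minimal Dockerfile consistent with this layout might look like the sketch below; the base image tag, working directory, and COPY paths are assumptions, not the project's actual file.

```dockerfile
# Minimal sketch -- base image tag and layout details are assumptions
FROM python:3.11-slim

WORKDIR /code

# Install dependencies first so this layer is cached across code changes
COPY app/requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the service code and the model artifact
COPY app/ ./app
COPY model/ ./model

# Serve on port 80, matching the -p 8888:80 mapping used below
EXPOSE 80
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "80"]
```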
The following command reads the Dockerfile in the project root and builds the image:
```bash
docker build -t iris-ml-service:v1 .
```

The next command starts a new container instance in the background (-d), mapping the container's internal port 80 to port 8888 on your host machine:
```bash
docker run -d -p 8888:80 --name iris-predictor-container iris-ml-service:v1
```
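Before testing, you can check that the container came up cleanly with two standard Docker commands (this verification step is an addition to the original walkthrough):

```bash
# List running containers; iris-predictor-container should show status "Up"
docker ps

# Inspect the Uvicorn startup logs for errors
docker logs iris-predictor-container
```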
Confirm the service is running and predicting by sending a sample POST request.

PowerShell command:
```powershell
# 1. Define the JSON body
$body = @{
    sepal_length = 5.1;
    sepal_width = 3.5;
    petal_length = 1.4;
    petal_width = 0.2
} | ConvertTo-Json

# 2. Send the request
Invoke-RestMethod -Uri http://localhost:8888/predict -Method POST -Headers @{"Content-Type" = "application/json"} -Body $body
```
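On macOS or Linux, an equivalent request with curl (assuming curl is available on your host) uses the same endpoint and payload:

```bash
curl -X POST http://localhost:8888/predict \
  -H "Content-Type: application/json" \
  -d '{"sepal_length": 5.1, "sepal_width": 3.5, "petal_length": 1.4, "petal_width": 0.2}'
```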
Expected output:

```text
prediction_class confidence input_data
---------------- ---------- ----------
               0     0.9817 @{sepal_length=5.1; sepal_width=3.5; petal_length=1.4; petal_width=0.2}
```

Stop and remove the running container to free up system resources after testing.
1. Stop the running container:

```bash
docker stop iris-predictor-container
```

2. Remove the stopped container instance:

```bash
docker rm iris-predictor-container
```

3. Remove the image (optional, if no longer needed):

```bash
docker rmi iris-ml-service:v1
```
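As a shortcut, steps 1 and 2 can be combined: docker rm -f stops and removes the container in one command.

```bash
# --force (-f) kills a running container before removing it
docker rm -f iris-predictor-container
```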