# ml_deploy_lite

ml_deploy_lite is a Python library designed to simplify the deployment of machine learning models. It was created to address the common challenges faced during the deployment process, such as the complexity of setting up REST APIs or gRPC services, the need for Docker and Kubernetes integration, and the lack of built-in monitoring and logging for performance and error tracking.
## Table of Contents

- Challenges I Have Faced in Deployments
- Why ml_deploy_lite?
- Features
- Installation
- Usage
- Creating a Sample Model
- Docker Integration
- Kubernetes Integration
- Testing
- Contributing
- License
## Challenges I Have Faced in Deployments

- **Complexity:** Setting up REST APIs or gRPC services for machine learning models can be complex and time-consuming.
- **Docker and Kubernetes Integration:** Integrating machine learning models with Docker and Kubernetes can be challenging, especially for developers new to these technologies.
- **Monitoring and Logging:** Without built-in support for monitoring and logging, it can be difficult to track the performance of deployed models and identify errors.
## Why ml_deploy_lite?

ml_deploy_lite was created to simplify the deployment process for machine learning models. It provides a user-friendly interface for quickly converting models into REST APIs or gRPC services, automatically generates Dockerfiles for containerization, and simplifies the creation of Kubernetes deployment configurations. Built-in monitoring and logging make it easier for developers to track the performance of their deployed models and identify errors.
## Features

- **Easy Deployment:** Quickly convert machine learning models into REST APIs.
- **Docker Integration:** Automatically generate a Dockerfile for containerization.
- **Kubernetes Support:** Generate Kubernetes deployment configurations easily.
- **Monitoring and Logging:** Built-in support for monitoring performance and logging errors.
- **User-Friendly:** Designed to be easy to use for developers of all skill levels.
## Installation

To install ml_deploy_lite, you can use pip. Make sure you have Python 3.6 or higher installed.

```bash
pip install ml_deploy_lite
```
## Usage

Here’s a simple example of how to use ml_deploy_lite to deploy a machine learning model.

- Import the library and create an instance:

```python
from ml_deploy_lite import MLDeployLite

deployer = MLDeployLite('path/to/your/model.pkl')
```

- Run the API:

```python
deployer.run()
```

- Make predictions. Open a terminal and run the following command:

```bash
curl -X POST http://127.0.0.1:5000/predict \
  -H "Content-Type: application/json" \
  -d '{"features": [5.1, 3.5, 1.4, 0.2]}'
```

You should see a JSON response like:

```json
{
  "prediction": 0
}
```
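The response body is plain JSON, so any client can consume it with a standard JSON parser. For example, with Python's standard library (the response text below is the sample shown above, not a live server call):

```python
import json

# Sample response body from the /predict endpoint, as shown above
response_text = '{"prediction": 0}'

# Parse the JSON and extract the predicted class index
result = json.loads(response_text)
print(result["prediction"])  # 0
```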
Alternatively, you can make predictions by sending a POST request to the /predict endpoint with the following JSON body:

```json
{
  "features": [5.1, 3.5, 1.4, 0.2]
}
```

## Creating a Sample Model

To create a sample machine learning model, you can use the following script:
```python
import joblib
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Load dataset
iris = load_iris()
X, y = iris.data, iris.target

# Train a model
model = RandomForestClassifier()
model.fit(X, y)

# Save the model
joblib.dump(model, 'ml_deploy_lite/model/sample_model.pkl')
```

Run this script to generate a sample model that you can use with ml_deploy_lite.
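As a quick sanity check (not part of the library), you can reload the saved model with joblib and predict on the sample features from the Usage section. The sketch below saves to the working directory instead of the nested path above, so it runs anywhere:

```python
import joblib
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Train and save a model as in the script above (flat path for a self-contained check)
iris = load_iris()
model = RandomForestClassifier(random_state=0).fit(iris.data, iris.target)
joblib.dump(model, 'sample_model.pkl')

# Reload the model and predict on the sample features used in the curl example
loaded = joblib.load('sample_model.pkl')
prediction = loaded.predict([[5.1, 3.5, 1.4, 0.2]])
print(prediction[0])  # class index for this iris sample
```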
## Docker Integration

To create a Docker image for your application, you can use the provided create_dockerfile function in ml_deploy_lite/docker.py. This will generate a Dockerfile in the root directory of your project.

- Generate the Dockerfile:

```python
from ml_deploy_lite.docker import create_dockerfile

create_dockerfile()
```

- Build the Docker image. Run the following command in the terminal:

```bash
docker build -t your_docker_image:latest .
```

- Run the Docker container. After building the image, you can run the container with:

```bash
docker run -p 5000:5000 your_docker_image:latest
```

## Kubernetes Integration

To create a Kubernetes deployment configuration, you can use the create_k8s_deployment function in ml_deploy_lite/k8s.py. This will generate a k8s_deployment.yaml file that you can apply to your Kubernetes cluster.
- Generate the Kubernetes deployment file:

```python
from ml_deploy_lite.k8s import create_k8s_deployment

create_k8s_deployment()
```

- Apply the configuration. Run the following command to deploy to your Kubernetes cluster:

```bash
kubectl apply -f k8s_deployment.yaml
```

## Testing

To run the tests for the API, you can use the following command:

```bash
python -m unittest discover -s tests
```

Make sure you have the necessary test data and models in place before running the tests.
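If you want to add your own tests, the sketch below shows a minimal model-level test case built around the sample iris model from earlier. The class and assertions are illustrative, not part of the ml_deploy_lite test suite:

```python
import unittest

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier


class TestSampleModel(unittest.TestCase):
    """Illustrative test case; adapt it to your own model and data."""

    def setUp(self):
        # Train the sample model in-memory so the test is self-contained
        iris = load_iris()
        self.X, self.y = iris.data, iris.target
        self.model = RandomForestClassifier(random_state=0).fit(self.X, self.y)

    def test_prediction_count(self):
        # The model should return one prediction per input row
        predictions = self.model.predict(self.X[:5])
        self.assertEqual(len(predictions), 5)

    def test_known_sample(self):
        # The first iris sample [5.1, 3.5, 1.4, 0.2] belongs to class 0
        self.assertEqual(self.model.predict([[5.1, 3.5, 1.4, 0.2]])[0], 0)
```

Save a file like this under tests/ and the discover command above will pick it up.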
## Contributing

Contributions are welcome! If you have suggestions for improvements or new features, please open an issue or submit a pull request.

- Fork the repository.
- Create a new branch (`git checkout -b feature-branch`).
- Make your changes and commit them (`git commit -m 'Add new feature'`).
- Push to the branch (`git push origin feature-branch`).
- Open a pull request.
## License

This project is licensed under the MIT License.

For more information, please refer to the documentation for Flask and the Docker SDK for Python.