
ML-Deploy-Lite

ml_deploy_lite is a Python library designed to simplify the deployment of machine learning models. It was created to address the common challenges faced during the deployment process, such as the complexity of setting up REST APIs or gRPC services, the need for Docker and Kubernetes integration, and the lack of built-in monitoring and logging for performance and error tracking.

Challenges I Have Faced in Deployments

  • Complexity: Setting up REST APIs or gRPC services for machine learning models can be complex and time-consuming.

  • Docker and Kubernetes Integration: Integrating machine learning models with Docker and Kubernetes can be challenging, especially for developers new to these technologies.

  • Monitoring and Logging: Without built-in support for monitoring and logging, it can be difficult to track the performance of the deployed models and identify errors.

Why ml_deploy_lite?

ml_deploy_lite was created to simplify the deployment process for machine learning models. It provides a user-friendly interface for quickly converting models into REST APIs or gRPC services, automatically generates a Dockerfile for containerization, and simplifies the generation of Kubernetes deployment configurations. Built-in support for performance monitoring and error logging also makes it easier to track deployed models and diagnose problems.

Features

  • Easy Deployment: Quickly convert machine learning models into REST APIs.

  • Docker Integration: Automatically generate a Dockerfile for containerization.

  • Kubernetes Support: Generate Kubernetes deployment configurations easily.

  • Monitoring and Logging: Built-in support for monitoring performance and logging errors.

  • User-Friendly: Designed to be easy to use for developers of all skill levels.

Installation

To install ml_deploy_lite, you can use pip. Make sure you have Python 3.6 or higher installed.

pip install ml_deploy_lite

Usage

Here’s a simple example of how to use ml_deploy_lite to deploy a machine learning model.

  1. Import the Library:
from ml_deploy_lite import MLDeployLite
  2. Create an Instance:
deployer = MLDeployLite('path/to/your/model.pkl')
  3. Run the API:
deployer.run()
  4. Make Predictions:

Test with curl:

Open a terminal and run the following command:

curl -X POST http://127.0.0.1:5000/predict \
-H "Content-Type: application/json" \
-d '{"features": [5.1, 3.5, 1.4, 0.2]}'

You should see a JSON response like:

{
    "prediction": 0
}

Alternatively, you can make predictions from any HTTP client by sending a POST request to the /predict endpoint with the following JSON body:

{
    "features": [5.1, 3.5, 1.4, 0.2]
}
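
If you prefer to call the endpoint from Python rather than curl, a minimal client using the requests library might look like this (a sketch, assuming the API is running locally on the default port 5000):

# Minimal Python client for the /predict endpoint, equivalent to the curl
# command above. Assumes the server is running at http://127.0.0.1:5000.
import requests

response = requests.post(
    "http://127.0.0.1:5000/predict",
    json={"features": [5.1, 3.5, 1.4, 0.2]},
)
print(response.json())  # e.g. {"prediction": 0}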

Creating a Sample Model

To create a sample machine learning model, you can use the following script:

import os

import joblib
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Load the iris dataset
iris = load_iris()
X, y = iris.data, iris.target

# Train a model
model = RandomForestClassifier()
model.fit(X, y)

# Ensure the target directory exists, then save the model
os.makedirs('ml_deploy_lite/model', exist_ok=True)
joblib.dump(model, 'ml_deploy_lite/model/sample_model.pkl')

Run this script to generate a sample model that you can use with ml_deploy_lite.
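
As a quick sanity check, you can reload the saved model and run a prediction before deploying it. The features below are the first row of the iris dataset, which the model should classify as class 0:

# Reload the saved model and verify it predicts as expected.
import joblib

model = joblib.load('ml_deploy_lite/model/sample_model.pkl')
print(model.predict([[5.1, 3.5, 1.4, 0.2]]))  # expected output: [0]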

Docker Integration

To create a Docker image for your application, you can use the provided create_dockerfile function in ml_deploy_lite/docker.py. This will generate a Dockerfile in the root directory of your project.

  1. Generate the Dockerfile:
from ml_deploy_lite.docker import create_dockerfile

create_dockerfile()
  2. Build the Docker Image:

Run the following command in the terminal:

docker build -t your_docker_image:latest .
  3. Run the Docker Container:

After building the image, you can run the container with:

docker run -p 5000:5000 your_docker_image:latest
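
If you would rather drive these steps from Python, the Docker SDK for Python (referenced at the end of this README) can build and run the image programmatically. This is a sketch using the docker-py API, not part of ml_deploy_lite itself, and the image tag is just a placeholder:

# Build and run the image with the Docker SDK for Python (pip install docker).
import docker

client = docker.from_env()

# Build the image from the Dockerfile in the current directory.
image, build_logs = client.images.build(path=".", tag="your_docker_image:latest")

# Run the container, mapping container port 5000 to host port 5000.
container = client.containers.run(
    "your_docker_image:latest",
    ports={"5000/tcp": 5000},
    detach=True,
)
print(container.short_id)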

Kubernetes Integration

To create a Kubernetes deployment configuration, you can use the create_k8s_deployment function in ml_deploy_lite/k8s.py. This will generate a k8s_deployment.yaml file that you can apply to your Kubernetes cluster.

  1. Generate the Kubernetes Deployment File:
from ml_deploy_lite.k8s import create_k8s_deployment

create_k8s_deployment()
  2. Apply the Configuration:

Run the following command to deploy to your Kubernetes cluster:

kubectl apply -f k8s_deployment.yaml
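
The same file can also be applied from Python with the official Kubernetes client (pip install kubernetes). A sketch, assuming your kubeconfig already points at the target cluster:

# Apply the generated deployment file programmatically, equivalent to the
# kubectl apply command above.
from kubernetes import client, config, utils

config.load_kube_config()  # reads cluster credentials from ~/.kube/config
k8s_client = client.ApiClient()
utils.create_from_yaml(k8s_client, "k8s_deployment.yaml")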

Testing

To run the tests for the API, you can use the following command:

python -m unittest discover -s tests

Make sure you have the necessary test data and models in place before running the tests.
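
As an illustration, a minimal smoke test for the prediction endpoint might look like the following. This sketch assumes the API is already running locally on port 5000; the actual tests in the tests/ directory may be structured differently, and the file name is hypothetical:

# tests/test_predict.py (hypothetical) -- smoke test for the /predict endpoint.
# Assumes the API is already running locally on port 5000.
import unittest

import requests


class TestPredictEndpoint(unittest.TestCase):
    def test_predict_returns_a_prediction(self):
        response = requests.post(
            "http://127.0.0.1:5000/predict",
            json={"features": [5.1, 3.5, 1.4, 0.2]},
        )
        self.assertEqual(response.status_code, 200)
        self.assertIn("prediction", response.json())


if __name__ == "__main__":
    unittest.main()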

Contributing

Contributions are welcome! If you have suggestions for improvements or new features, please open an issue or submit a pull request.

  1. Fork the repository.
  2. Create a new branch (git checkout -b feature-branch).
  3. Make your changes and commit them (git commit -m 'Add new feature').
  4. Push to the branch (git push origin feature-branch).
  5. Open a pull request.

License

This project is licensed under the MIT License.


For more information, please refer to the documentation for Flask and the Docker SDK for Python.


Conclusion

This README covers installation, basic usage, sample model creation, Docker and Kubernetes integration, and testing for ml_deploy_lite. If you run into problems or have ideas for improvements, see the Contributing section above.
