submodel | Python Library

Welcome to the official Python library for the submodel API & SDK.

💻 | Installation

# Install the latest release version
pip install submodel

# or

# Install the latest development version (main branch)
pip install git+https://github.com/submodel/submodel-python.git

Python 3.8 or higher is required to use the latest version of this package.
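If you want to fail fast on unsupported interpreters, a minimal sanity check based on the requirement above:

```python
import sys

# Fail fast if the interpreter is older than the library supports.
assert sys.version_info >= (3, 8), "submodel requires Python 3.8 or higher"
```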

| Serverless Worker (SDK)

This Python package can also be used to create a serverless worker that can be deployed to submodel as a custom API endpoint.

Quick Start

Create a Python script in your project that contains your model definition and the submodel worker start code, and run it as your container's default start command:

# my_worker.py

import submodel

def is_even(job):
    # Each job arrives as a dict; the payload lives under the "input" key.
    job_input = job["input"]
    the_number = job_input["number"]

    # Return an error payload for non-integer input.
    if not isinstance(the_number, int):
        return {"error": "Silly human, you need to pass an integer."}

    # The handler's return value becomes the job's output.
    return the_number % 2 == 0

submodel.serverless.start({"handler": is_even})

Make sure that this file is run when your container starts. You can do this by calling it in the Docker command when you set up a template at submodel.io/console/serverless/user/templates, or by setting it as the default command in your Dockerfile.
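For the Dockerfile route, a minimal sketch might look like this (the base image, paths, and file name are illustrative assumptions, not a prescribed layout):

```dockerfile
FROM python:3.11-slim

WORKDIR /app
RUN pip install submodel
COPY my_worker.py .

# Run the worker as the container's default command; -u keeps logs unbuffered.
CMD ["python", "-u", "my_worker.py"]
```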

See our blog post on creating a basic Serverless API, or view the detailed docs for more information.

Local Test Worker

You can also test your worker locally before deploying it to submodel. This is useful for debugging.

python my_worker.py --sm_serve_api
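Before starting the local API server, you can also exercise the handler as a plain function with no server involved. A minimal sketch that reproduces the `is_even` handler from the Quick Start:

```python
# Reproduction of the Quick Start handler, callable without any server.
def is_even(job):
    job_input = job["input"]
    the_number = job_input["number"]
    if not isinstance(the_number, int):
        return {"error": "Silly human, you need to pass an integer."}
    return the_number % 2 == 0

# Drive it with job-shaped dicts, exactly as the worker would receive them.
print(is_even({"input": {"number": 4}}))    # True
print(is_even({"input": {"number": 7}}))    # False
print(is_even({"input": {"number": "4"}}))  # error payload
```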

📚 | API Language Library (API Wrapper)

You can use this library to make requests to the submodel API.

import submodel

submodel.api_key = "your_submodel_api_key_found_under_settings"
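Hard-coding the key works for a quick test, but you may prefer reading it from the environment (plain Python; the variable name `SUBMODEL_API_KEY` is an assumed convention, not required by the library):

```python
import os

# Read the API key from the environment instead of committing it to source.
# SUBMODEL_API_KEY is an assumed variable name, not mandated by the library.
api_key = os.environ.get("SUBMODEL_API_KEY", "")
```

You would then assign it as before: `submodel.api_key = api_key`.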

Endpoints

You can interact with submodel endpoints via a run or run_sync method.

endpoint = submodel.Endpoint("ENDPOINT_ID")

run_request = endpoint.run(
    {"your_model_input_key": "your_model_input_value"}
)

# Check the status of the endpoint run request
print(run_request.status())

# Get the output of the endpoint run request, blocking until the run is complete.
print(run_request.output())

endpoint = submodel.Endpoint("ENDPOINT_ID")

run_request = endpoint.run_sync(
    {"your_model_input_key": "your_model_input_value"}
)

# Returns the job output if completed within 90 seconds; otherwise, returns the job status.
print(run_request)

GPU Cloud (Pods)

import submodel

submodel.api_key = "your_submodel_api_key_found_under_settings"

# Get all my pods
pods = submodel.get_pods()

# Get a specific pod by its ID
pod = submodel.get_pod("POD_ID")

# Create a pod with GPU
pod = submodel.create_pod("test", "submodel/stack", "NVIDIA GeForce RTX 3070")

# Create a pod with CPU
pod = submodel.create_pod("test", "submodel/stack", instance_id="cpu3c-2-4")

# Stop the pod
submodel.stop_pod(pod.id)

# Resume the pod
submodel.resume_pod(pod.id)

# Terminate the pod
submodel.terminate_pod(pod.id)

📁 | Directory

.
├── docs               # Documentation
├── examples           # Examples
├── submodel           # Package source code
│   ├── api_wrapper    # Language library - API
│   ├── cli            # Command Line Interface Functions
│   ├── endpoint       # Language library - Endpoints
│   └── serverless     # SDK - Serverless Worker
└── tests              # Package tests

🤝 | Community and Contributing

We welcome both pull requests and issues on GitHub. Bug fixes and new features are encouraged, but please read our contributing guide first.
