Managed LLM with Docker Model Provider

1-click-deploy

This sample application demonstrates using Managed LLMs with a Docker Model Provider, deployed with Defang.

Note: This version uses a Docker Model Provider for managing LLMs. For the version with Defang's OpenAI Access Gateway, please see our Managed LLM Sample instead.

The Docker Model Provider lets you run LLMs locally with Docker Compose: it is declared as a service with a provider: block in the compose.yaml file. During deployment, Defang transparently fixes up your project to use AWS Bedrock or Google Cloud Vertex AI models.

You can configure LLM_MODEL and LLM_URL separately for local development and for production.

  • LLM_MODEL is the model ID of the LLM you are using.
  • LLM_URL is set by Docker locally; during deployment, Defang sets it to provide authenticated access to the LLM model in the cloud.
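For example, the application service's environment might be wired up like this (a sketch only; the service name, default URL, and model ID are illustrative, not the sample's actual values):

```yaml
services:
  app:
    build: .
    environment:
      # Model ID; use a local model for development and a
      # Bedrock / Vertex AI model ID in production.
      LLM_MODEL: ai/llama3.2
      # Injected by Docker locally and by Defang in the cloud;
      # the fallback here is only an illustrative default.
      LLM_URL: ${LLM_URL:-http://localhost:12434/engines/v1}
```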

Ensure you have enabled model access for the model you intend to use. To do this, you can check your AWS Bedrock model access or GCP Vertex AI model access.

To learn about available LLM models in Defang, please see our Model Mapping documentation.

For more about Managed LLMs in Defang, please see our Managed LLMs documentation.

Docker Model Provider

In the compose.yaml file, the llm service will route requests to the LLM API model using a Docker Model Provider.

The x-defang-llm property on the llm service must be set to true in order to use the Docker Model Provider when deploying with Defang.
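Put together, the llm service might look like the following sketch (the provider options follow Docker's model provider syntax; the model ID is illustrative):

```yaml
services:
  llm:
    # Required for Defang to swap this provider for Bedrock / Vertex AI.
    x-defang-llm: true
    provider:
      type: model
      options:
        model: ai/llama3.2   # illustrative; pick a model you have access to
```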

Prerequisites

  1. Download Defang CLI
  2. (Optional) If you are using Defang BYOC, authenticate with your cloud provider account
  3. (Optional, for local development) Docker CLI

Development

To run the application locally, you can use the following command:

docker compose -f compose.local.yaml up --build
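Inside the application, the two environment variables above are all the code needs to reach the model. A minimal Python sketch, assuming an OpenAI-compatible endpoint (the default URL, default model ID, and the `chat` helper are all illustrative, not the sample's actual code):

```python
import json
import os
import urllib.request


def llm_config():
    """Resolve the LLM endpoint and model ID from the environment.

    LLM_URL is set by Docker locally and by Defang in the cloud; the
    fallbacks below are illustrative defaults, not the sample's values.
    """
    return {
        "base_url": os.environ.get("LLM_URL", "http://localhost:12434/engines/v1"),
        "model": os.environ.get("LLM_MODEL", "ai/llama3.2"),
    }


def chat(prompt: str) -> str:
    """Hypothetical helper: send one chat-completion request to the model."""
    cfg = llm_config()
    body = json.dumps({
        "model": cfg["model"],
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    req = urllib.request.Request(
        cfg["base_url"].rstrip("/") + "/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because both Docker and Defang expose the same OpenAI-style API surface, the same code runs unchanged locally and in the cloud.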

Deployment

Note: Make sure you have downloaded the Defang CLI before deploying.

Defang Playground

Deploy your application to the Defang Playground by opening up your terminal and typing:

defang compose up

BYOC

If you want to deploy to your own cloud account, you can use Defang BYOC.


Title: Managed LLM with Docker Model Provider

Short Description: An app using Managed LLMs with a Docker Model Provider, deployed with Defang.

Tags: LLM, Python, Bedrock, Vertex, Docker Model Provider

Languages: Python
