OpenLIT is an OpenTelemetry-native GenAI and LLM Application Observability tool. It's designed to make the integration process of observability into GenAI projects as easy as pie – literally, with just a single line of code. Whether you're working with popular LLM Libraries such as OpenAI and HuggingFace or leveraging vector databases like ChromaDB, OpenLIT ensures your applications are monitored seamlessly, providing critical insights to improve performance and reliability.
This project proudly follows the Semantic Conventions of the OpenTelemetry community, consistently updating to align with the latest standards in observability.
To install the OpenLIT chart with the release name `openlit`:

```shell
helm repo add openlit https://openlit.github.io/helm/
helm repo update
helm install openlit openlit/openlit
```
🔧 Note: If the `openlit` StatefulSet pod appears in an error state after installing the OpenLIT Helm chart, it may be because ClickHouse has not finished starting up. Allow time for ClickHouse to fully initialize, which should let the `openlit` pod become healthy. If the issue persists, restarting the `openlit` pod should resolve it.
After the OpenLIT chart is successfully deployed to your Kubernetes cluster, you'll need to generate an API key that the OpenLIT SDKs can use to authenticate requests to the OpenLIT platform.
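If you'd rather not hard-code the generated key into your application, one common option (not part of the chart itself; the secret name, key name, and environment variable below are illustrative) is to store it in a Kubernetes secret and inject it into your workload:

```shell
# Store the generated API key in a Kubernetes secret
# (secret name and key name are illustrative, not chart defaults)
kubectl create secret generic openlit-api-key \
  --from-literal=api-key='YOUR_OPENLIT_API_KEY'

# Then reference it from your application's pod spec, e.g.:
#   env:
#     - name: OPENLIT_API_KEY   # variable name is an assumption; use whatever your app reads
#       valueFrom:
#         secretKeyRef:
#           name: openlit-api-key
#           key: api-key
```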
Select the SDK that matches your application's programming language and integrate LLM monitoring with just a single line of code.
Install the `openlit` Python SDK using pip:

```shell
pip install openlit
```
Add the following two lines to your application code:

```python
import openlit

openlit.init()
```
```python
from openai import OpenAI
import openlit

openlit.init()

client = OpenAI(
    api_key="YOUR_OPENAI_KEY"
)

chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "What is LLM Observability",
        }
    ],
    model="gpt-3.5-turbo",
)
```
Refer to the `openlit` Python SDK repository for more advanced configurations and use cases.
Once you have the OpenLIT SDK set up, you can instantly get insights into how your LLM applications are performing. Just head over to the OpenLIT UI at `127.0.0.1:3000` in your browser to start exploring.
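If the UI is not already reachable from your machine, a common way to expose it locally is `kubectl port-forward`. The service name `openlit` below assumes the default release name used in this guide; check `kubectl get svc` if yours differs:

```shell
# Forward local port 3000 to the OpenLIT service in the cluster
kubectl port-forward svc/openlit 3000:3000
# Then open http://127.0.0.1:3000 in your browser
```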
With OpenLIT, you get a simple, powerful view into important info like how much you’re spending on LLMs, which parts of your app are using them the most, and how well they’re performing. Find out which LLM models are favorites among your applications, and dive deep into performance details to make smart decisions. This setup is perfect for optimizing your app performance and keeping an eye on costs.
You can adjust the OpenLIT configuration by specifying each parameter using the `--set key=value` argument to `helm install`. For example:

```shell
helm install openlit \
  --set openlit.service.type=NodePort \
  --set openlit.service.port=3000 \
  openlit/openlit
```
Alternatively, you can provide a YAML file that specifies the values for the required parameters while installing the chart. For example:

```shell
helm install openlit -f values.yaml openlit/openlit
```
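As a sketch, a `values.yaml` equivalent to the `--set` flags shown earlier might look like this (the nesting mirrors the `openlit.service.*` parameter names):

```yaml
openlit:
  service:
    type: NodePort
    port: 3000
```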
To upgrade the `openlit` deployment:

```shell
helm upgrade openlit openlit/openlit
```
To uninstall/delete the `openlit` deployment:

```shell
helm delete openlit
```
If you've made any changes to `values.yaml`, remember to use the `-f` flag to provide the updated file.
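For example, applying an updated `values.yaml` during an upgrade:

```shell
helm upgrade openlit -f values.yaml openlit/openlit
```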
We welcome contributions to the OpenLIT project. Please refer to CONTRIBUTING for detailed guidelines on how you can participate.
OpenLIT is available under the Apache-2.0 license.
For support, issues, or feature requests, submit an issue through GitHub Issues on the OpenLIT repository and add the `Helm` label.