
feat: Integrate Prompt Hub (#19)
improve: tracking for invoke API
breaking: Breaking change on the way connection setting happens
breaking: update package name and repo name
SHARANTANGEDA committed Dec 25, 2023
1 parent bcb8aa1 commit e6eda88
Showing 21 changed files with 325 additions and 69 deletions.
2 changes: 0 additions & 2 deletions CONTRIBUTING.md
@@ -5,5 +5,3 @@
We are super excited to see that you'd like to contribute to our platform, thanks for your interest.

We value any feedback, bug reports, or feature requests raised by our community and are determined to solve them.

[TODO] Publish roadmap for planned/current items
34 changes: 21 additions & 13 deletions README.md
@@ -1,12 +1,12 @@
# Welcome to xpuls.ai 👋
# Welcome to xpuls.ai 👋

## MLMonitor - Automatic Instrumentation for ML Frameworks
## xpuls-ml-sdk
[![Twitter Follow](https://img.shields.io/twitter/follow/xpulsai?style=social)](https://x.com/xpulsai) [![Discord](https://img.shields.io/badge/Discord-Join-1147943825592045689?style=social)](https://social.xpuls.ai/join/discord)



<div align="center">
<a href="https://xpuls.ai">Website</a> | <a href="https://xpuls.ai">Docs</a> | <a href="https://xpuls.ai">Blog</a> | <a href="https://x.com/xpulsai">Twitter</a> | <a href="https://social.xpuls.ai/join/discord">Community</a>
<a href="https://xpuls.ai">Website</a> | <a href="https://xpuls.ai/docs">Docs</a> | <a href="https://xpuls.ai/articles">Articles</a> | <a href="https://x.com/xpulsai">Twitter</a> | <a href="https://social.xpuls.ai/join/discord">Community</a>
</div>

[![PyPI version](https://badge.fury.io/py/xpuls-mlmonitor.svg)](https://badge.fury.io/py/xpuls-mlmonitor)
@@ -31,35 +31,42 @@

1. Install from PyPI
```shell
pip install xpuls-mlmonitor
pip install xpuls-ml-sdk
```

## 🧩 Usage Example
```python
import os

import xpuls
from xpuls.mlmonitor.langchain.decorators.map_xpuls_project import MapXpulsProject
from xpuls.mlmonitor.langchain.decorators.telemetry_override_labels import TelemetryOverrideLabels
from xpuls.mlmonitor.langchain.instrument import LangchainTelemetry
from xpuls.prompt_hub import PromptClient

# Enable this for advanced tracking with our xpuls-ml platform
os.environ["XPULSAI_TRACING_ENABLED"] = "true"

xpuls.host_url = "https://api.xpuls.ai"
xpuls.api_key = "********************" # Get from https://platform.xpuls.ai
xpuls.adv_tracing_enabled = "true" # Enable this for automated insights and log tracing via xpulsAI platform
# Add default labels that will be added to all captured metrics
default_labels = {"service": "ml-project-service", "k8s_cluster": "app0", "namespace": "dev", "agent_name": "fallback_value"}

# Enable the auto-telemetry
LangchainTelemetry(
    default_labels=default_labels,
    xpuls_host_url="http://app.xpuls.ai"  # Optional param, required when XPULSAI_TRACING is enabled
).auto_instrument()
LangchainTelemetry(default_labels=default_labels).auto_instrument()
prompt_client = PromptClient(
    prompt_id="clrfm4v70jnlb1kph240",  # Get prompt_id from the platform
    environment_name="dev"  # Deployed environment name
)

## [Optional] Override labels for the scope of the decorator (useful if you have multiple scopes that need to override the default label values)
@TelemetryOverrideLabels(agent_name="chat_agent_alpha")
@MapXpulsProject(project_slug="defaultoPIt9USSR") # Get Project Slug from platform
def get_response_using_agent_alpha(prompt, query):
    agent = initialize_agent(llm=chat_model,
                             verbose=True,
                             agent=CONVERSATIONAL_REACT_DESCRIPTION,
                             memory=memory)

    data = prompt_client.get_prompt({"variable-1": "I'm the first variable"})  # Substitute any variables in prompt

    res = agent.run(f"{prompt}. \n Query: {query}")
    res = agent.run(data)  # Pass the entire `XPPrompt` object to run or invoke method
```

## ℹ️ Complete Usage Guides
@@ -73,14 +80,15 @@ This project is licensed under the Apache License 2.0. See the LICENSE file for

## 📢 Contributing

We welcome contributions to MLMonitor! If you're interested in contributing.
We welcome contributions to xpuls-ml-sdk! If you're interested in contributing, see CONTRIBUTING.md to get started.

If you encounter any issues or have feature requests, please file an issue on our GitHub repository.



## 💬 Get in touch

👉 [Join our Discord community!](https://social.xpuls.ai/join/discord)

🐦 Follow the latest from xpuls.ai team on Twitter [@xpulsai](https://twitter.com/xpulsai)

📮 Write to us at [hello\@xpuls.ai](mailto:hello@xpuls.ai)
2 changes: 1 addition & 1 deletion SECURITY.md
@@ -1,6 +1,6 @@
# Security Policy

xpuls.ai is looking forward to working with security researchers across the world to keep xpuls-mlmonitor-python along with other products and our users safe. If you have found an issue in our systems/applications, please reach out to us.
xpuls.ai is looking forward to working with security researchers across the world to keep xpuls-ml-sdk along with other products and our users safe. If you have found an issue in our systems/applications, please reach out to us.

## Supported Versions
We always recommend using the latest version of xpuls.ai to ensure you get all security updates.
7 changes: 6 additions & 1 deletion demo.py
@@ -1,4 +1,9 @@
from demo.openai_langchain import run_openai_agent
# from demo.openai_langchain import run_openai_agent
#
# res = run_openai_agent()
# print(str(res))

from demo.mockgpt_runnable_langchain import run_openai_agent

res = run_openai_agent()
print(str(res))
22 changes: 15 additions & 7 deletions demo/mockgpt_runnable_langchain.py
@@ -3,11 +3,13 @@

import openai
from langchain.chat_models import AzureChatOpenAI
from langchain.prompts import ChatPromptTemplate

import xpuls
from xpuls.mlmonitor.langchain.decorators.map_xpuls_project import MapXpulsProject
from xpuls.mlmonitor.langchain.decorators.telemetry_override_labels import TelemetryOverrideLabels
from xpuls.mlmonitor.langchain.instrument import LangchainTelemetry
from xpuls.mlmonitor.langchain.patches.xp_prompt_template import XPChatPromptTemplate
from xpuls.prompt_hub import PromptClient

logger = logging.getLogger(__name__)

@@ -19,14 +21,15 @@
openai.api_version = "2023-03-15-preview"

# Set this to enable Advanced prompt tracing with server
# os.environ["XPULSAI_TRACING_ENABLED"] = "false"
os.environ["XPULSAI_TRACING_ENABLED"] = "false"


default_labels = {"system": "openai-ln-test", "agent_name": "fallback_value"}
xpuls.host_url = "https://test-api.xpuls.ai"
xpuls.api_key = "****************************************"
xpuls.adv_tracing_enabled = "true"

LangchainTelemetry(
    default_labels=default_labels,
    xpuls_host_url="http://localhost:8000"
).auto_instrument()

chat_model = AzureChatOpenAI(
@@ -35,11 +38,16 @@
    temperature=0
)


prompt_client = PromptClient(
    prompt_id="clrfm4v70jnlb1kph240",
    environment_name="dev"
)

@TelemetryOverrideLabels(agent_name="chat_agent_alpha")
@MapXpulsProject(project_id="default") # Get Project ID from console
@MapXpulsProject(project_id="defaultoPIt9USSR") # Get Project ID from console
def run_openai_agent():
    # prompt = ChatPromptTemplate.from_template("tell me a joke about {foo}")
    data = prompt_client.get_prompt({"variable-1": "I'm the first variable"})
    prompt = XPChatPromptTemplate.from_template(data)
    chain = prompt | chat_model
    try:
        res = chain.invoke({"foo": "bears"})
25 changes: 13 additions & 12 deletions demo/openai_langchain.py
@@ -9,6 +9,8 @@
from xpuls.mlmonitor.langchain.decorators.map_xpuls_project import MapXpulsProject
from xpuls.mlmonitor.langchain.decorators.telemetry_override_labels import TelemetryOverrideLabels
from xpuls.mlmonitor.langchain.instrument import LangchainTelemetry
import xpuls
from xpuls.prompt_hub import PromptClient

logger = logging.getLogger(__name__)

@@ -20,26 +22,27 @@
openai.api_version = "2023-03-15-preview"

# Set this to enable Advanced prompt tracing with server
# os.environ["XPULSAI_TRACING_ENABLED"] = "false"
os.environ["XPULSAI_TRACING_ENABLED"] = "false"

default_labels = {"system": "openai-ln-test", "agent_name": "fallback_value"}

LangchainTelemetry(
    default_labels=default_labels,
    xpuls_host_url="http://localhost:8000"
).auto_instrument()
xpuls.host_url = "https://test-api.xpuls.ai"
xpuls.api_key = "****************************************"
xpuls.adv_tracing_enabled = "true"
LangchainTelemetry(default_labels=default_labels).auto_instrument()

memory = ConversationBufferMemory(memory_key="chat_history")
chat_model = AzureChatOpenAI(
    deployment_name="gpt35turbo",
    model_name="gpt-35-turbo",
    temperature=0
)
prompt = PromptClient(
    prompt_id="clrfm4v70jnlb1kph240",
    environment_name="dev"
)


@TelemetryOverrideLabels(agent_name="chat_agent_alpha")
@MapXpulsProject(project_id="default") # Get Project ID from console
@MapXpulsProject(project_slug="defaultoPIt9USSR")  # Get Project Slug from console
def run_openai_agent():
    agent = initialize_agent(llm=chat_model,
                             verbose=True,
@@ -51,10 +54,8 @@ def run_openai_agent():
                             agent_executor_kwargs={"extra_prompt_messages": "test"})

    try:
        res = agent.run("You are to behave as a think tank to answer the asked question in most creative way,"
                        " ensure to NOT be abusive or racist, you should validate your response w.r.t to validity "
                        "in practical world before giving final answer" +
                        f"\nQuestion: How does nature work?, is balance of life true? \n")
        data = prompt.get_prompt({"variable-1": "I'm the first variable"})
        res = agent.run(data.prompt)
    except ValueError as e:
        res = str(e)
        if not res.startswith("Could not parse LLM output: `"):
6 changes: 3 additions & 3 deletions setup.py
@@ -21,13 +21,13 @@ def read_requirements(file_name):
long_description = fh.read()

setup(
    name='xpuls-mlmonitor',
    version='0.2.0',
    name='xpuls-ml-sdk',
    version='0.3.0',
    author='Sai Sharan Tangeda',
    author_email='saisarantangeda@gmail.com',
    description='Automated telemetry and monitoring for ML & LLM Frameworks',
    license='Apache License 2.0',
    url='https://github.com/xpuls-labs/xpuls-mlmonitor-python',
    url='https://github.com/xpuls-labs/xpuls-ml-sdk',
    packages=find_packages(),
    install_requires=read_requirements('requirements.txt'),
    extras_require={
6 changes: 6 additions & 0 deletions xpuls/__init__.py
@@ -0,0 +1,6 @@
import os


api_key = os.environ.get("XPULSAI_API_KEY")
host_url = os.environ.get("XPULSAI_HOST_URL", "https://api.xpuls.ai")
adv_tracing_enabled = os.environ.get("XPULSAI_TRACING_ENABLED", "false")
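The new module-level settings default to environment variables read at import time, with the README showing that they can also be overridden as attributes afterwards. A minimal sketch of the lookup order, mirroring the three lines above outside the package (the `XPULSAI_HOST_URL` value here is made up for illustration):

```python
import os

# Simulate configuring via environment before the module is imported.
os.environ["XPULSAI_HOST_URL"] = "https://test-api.xpuls.ai"

# These lookups mirror the module body of xpuls/__init__.py:
api_key = os.environ.get("XPULSAI_API_KEY")  # None when unset
host_url = os.environ.get("XPULSAI_HOST_URL", "https://api.xpuls.ai")
adv_tracing_enabled = os.environ.get("XPULSAI_TRACING_ENABLED", "false")

print(host_url)  # https://test-api.xpuls.ai
```

Because the environment is read once at import, assignments like `xpuls.host_url = "..."` after import take precedence over the environment values.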
1 change: 1 addition & 0 deletions xpuls/client/__init__.py
@@ -0,0 +1 @@
from .xpuls_client import XpulsAIClient
3 changes: 3 additions & 0 deletions xpuls/client/constants.py
@@ -0,0 +1,3 @@
XPULSAI_API_KEY = "XPULSAI_API_KEY"
XPULSAI_HOST_URL = "XPULSAI_HOST_URL"
XPULSAI_TRACING_ENABLED = "XPULSAI_TRACING_ENABLED"
24 changes: 24 additions & 0 deletions xpuls/client/models.py
@@ -0,0 +1,24 @@
from typing import List

from pydantic import BaseModel


class PromptVariable(BaseModel):
    variable: str
    default: str


class PromptResponseData(BaseModel):
    prompt_version_id: str
    prompt_id: str
    prompt_external_id: str
    prompt: str
    prompt_variables: List[PromptVariable]


class XPPrompt(BaseModel):
    prompt_version_id: str
    prompt_id: str
    prompt_external_id: str
    prompt: str

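These pydantic models validate the Prompt Hub response on construction: `get_live_prompt` below unpacks the JSON payload straight into `PromptResponseData(**data)`, and nested dicts are coerced into `PromptVariable` instances. A quick sketch with a made-up payload (field values are illustrative, not real API output):

```python
from typing import List

from pydantic import BaseModel


class PromptVariable(BaseModel):
    variable: str
    default: str


class PromptResponseData(BaseModel):
    prompt_version_id: str
    prompt_id: str
    prompt_external_id: str
    prompt: str
    prompt_variables: List[PromptVariable]


# A hypothetical API payload; unpacking it validates every field.
payload = {
    "prompt_version_id": "ver-1",
    "prompt_id": "clrfm4v70jnlb1kph240",
    "prompt_external_id": "ext-1",
    "prompt": "tell me a joke about {variable-1}",
    "prompt_variables": [{"variable": "variable-1", "default": "bears"}],
}
data = PromptResponseData(**payload)
print(data.prompt_variables[0].variable)  # variable-1
```

A payload missing a required field (or with a wrong type) raises a `ValidationError` instead of silently producing a half-formed object, which is the point of routing API responses through these models.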
57 changes: 57 additions & 0 deletions xpuls/client/xpuls_client.py
@@ -0,0 +1,57 @@
import os
from typing import Optional

import requests
import time
import logging

import xpuls
from xpuls.client import constants
from xpuls.client.models import PromptResponseData


class XpulsAIClient:
    def __init__(self):
        self._host_url = xpuls.host_url
        self._api_key = xpuls.api_key

        self._headers = {"XP-API-Key": self._api_key}

    def _make_request_with_retries(self, endpoint, method='GET', data=None, retries=3, backoff_factor=2):
        """
        Make an API request with auto-retries and exponential backoff.
        Supports GET, POST, PUT, and DELETE requests.
        """
        if endpoint.startswith("/"):
            url = f"{self._host_url}/{endpoint[1:]}"
        else:
            url = f"{self._host_url}/{endpoint}"
        for attempt in range(retries):
            try:
                if method == 'GET':
                    response = requests.get(url, headers=self._headers)
                elif method == 'POST':
                    response = requests.post(url, headers=self._headers, json=data)
                elif method == 'PUT':
                    response = requests.put(url, headers=self._headers, json=data)
                elif method == 'DELETE':
                    response = requests.delete(url, headers=self._headers)
                else:
                    raise ValueError("Unsupported HTTP method")

                response.raise_for_status()
                return response.json()
            except requests.RequestException as e:
                logging.warning(f"Request failed with error {e}, attempt {attempt + 1} of {retries}")
                time.sleep(backoff_factor ** attempt)

        raise Exception("Max retries exceeded")

    def get_live_prompt(self, prompt_id: str, env_name: str) -> PromptResponseData:
        data = self._make_request_with_retries(
            endpoint=f"/v1/prompt/{prompt_id}/env/{env_name}",
            method="GET",
        )

        return PromptResponseData(**data)
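The retry loop in `_make_request_with_retries` sleeps `backoff_factor ** attempt` seconds between failed attempts, so waits grow exponentially (1s, 2s, 4s with the defaults). The pattern, isolated from the HTTP specifics — `with_retries` and `flaky` here are illustrative stand-ins, with the callable replacing the `requests` call:

```python
import logging
import time


def with_retries(fn, retries=3, backoff_factor=2):
    """Call fn(); on failure wait backoff_factor ** attempt seconds and retry."""
    for attempt in range(retries):
        try:
            return fn()
        except Exception as e:
            logging.warning("attempt %d of %d failed: %s", attempt + 1, retries, e)
            time.sleep(backoff_factor ** attempt)
    raise Exception("Max retries exceeded")


calls = {"n": 0}


def flaky():
    # Fails on the first attempt, succeeds on the second.
    calls["n"] += 1
    if calls["n"] < 2:
        raise RuntimeError("transient error")
    return "ok"


print(with_retries(flaky))  # ok
```

Note that, like the client code above, this sketch sleeps even after the final failed attempt before raising; a production version might skip that last sleep and preserve the original exception via `raise ... from e`.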
7 changes: 2 additions & 5 deletions xpuls/mlmonitor/langchain/instrument.py
@@ -1,4 +1,4 @@
from typing import Dict, Any
from typing import Dict, Any, Optional

from xpuls.mlmonitor.langchain.patches import patch_run
from xpuls.mlmonitor.langchain.patches.patch_invoke import patch_invoke
@@ -8,13 +8,10 @@

class LangchainTelemetry:
    def __init__(self, default_labels: Dict[str, Any],
                 xpuls_host_url: str = "http://localhost:8000",
                 enable_prometheus: bool = True,):
        self.ln_metrics = LangchainPrometheusMetrics(default_labels)

        self.xpuls_client = XpulsAILangChainClient(
            api_url=xpuls_host_url
        )
        self.xpuls_client = XpulsAILangChainClient()

        self.default_labels = default_labels
        self.enable_prometheus = enable_prometheus
