| Framework | Status |
|---|---|
| Langchain | ✅ |
| LlamaIndex | Planned |
| PyTorch | Planned |
| SKLearn | Planned |
| Transformers | Planned |
| Stable Diffusion | Next |
💡 If support for any of these frameworks or features would be useful to you, please feel free to reach out to us via Discord or GitHub Discussions.
- Install from PyPI

```shell
pip install vishwa-ml-sdk
```
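Optionally, as a quick sanity check (a sketch, not part of the documented workflow), a successful import confirms the package installed correctly:

```python
# Optional sanity check: the package exposes the `vishwa` top-level module
# used throughout the examples below.
import vishwa

print("vishwa-ml-sdk imported:", vishwa.__name__)
```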
```python
from vishwa.mlmonitor.langchain.instrument import LangchainTelemetry
import os
import vishwa
from vishwa.prompt_hub import PromptClient

# Enable this for advanced tracking with our vishwa-ai platform
vishwa.host_url = "https://api.vishwa.ai"
vishwa.api_key = "********************"  # Get from https://platform.vishwa.ai
vishwa.adv_tracing_enabled = "true"  # Enable this for automated insights and log tracing via the xpulsAI platform

# Add default labels that will be attached to all captured metrics
default_labels = {
    "service": "ml-project-service",
    "k8s_cluster": "app0",
    "namespace": "dev",
    "agent_name": "fallback_value",
}

# Enable the auto-telemetry
LangchainTelemetry(default_labels=default_labels).auto_instrument()

prompt_client = PromptClient(
    prompt_id="clrfm4v70jnlb1kph240",  # Get prompt_id from the platform
    environment_name="dev",            # Deployed environment name
)
```
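The prompt client configured above is used inside the agent example in the next section; as a standalone sketch (the variable name mirrors that example), fetching a prompt looks like this:

```python
# Substitute the prompt template's variables and inspect the resolved prompt.
# The returned object (referred to as `XPPrompt` in the example below) is what
# gets passed to the agent's run/invoke method.
data = prompt_client.get_prompt({"variable-1": "I'm the first variable"})
print(data)
```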
## [Optional] Override labels for the scope of a decorator (useful when you have multiple scopes that need to override the default label values)
```python
# `TelemetryOverrideLabels` and `TagToProject` are decorators provided by the SDK;
# `chat_model` and `memory` are assumed to be your LangChain chat model and memory.
@TelemetryOverrideLabels(agent_name="chat_agent_alpha")
@TagToProject(project_slug="defaultoPIt9USSR")  # Get the project slug from the platform
def get_response_using_agent_alpha(prompt, query):
    agent = initialize_agent(llm=chat_model,
                             verbose=True,
                             agent=CONVERSATIONAL_REACT_DESCRIPTION,
                             memory=memory)

    data = prompt_client.get_prompt({"variable-1": "I'm the first variable"})  # Substitute any variables in the prompt
    res = agent.run(data)  # Pass the entire `XPPrompt` object to the run or invoke method
    return res
```
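As a usage sketch (the argument values are illustrative, since the function above resolves its prompt via the prompt hub rather than from its parameters), calling the decorated function works like any other Python call, with the overridden labels applied only for its scope:

```python
# Hypothetical invocation of the decorated function defined above.
answer = get_response_using_agent_alpha(
    prompt="placeholder prompt",
    query="What's the weather like today?",
)
print(answer)
```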
This project is licensed under the Apache License 2.0. See the LICENSE file for more details.
We welcome contributions to xpuls-ml-sdk! If you're interested in contributing, please open a pull request on our GitHub repository.
If you encounter any issues or have feature requests, please file an issue on our GitHub repository.
🐦 Follow the latest from the vishwa.ai team on Twitter: @vishwa_ai
📮 Write to us at hello@vishwa.ai