80 changes: 80 additions & 0 deletions docs/how-to/build-a-tool-agent.mdx
This guide outlines how to build a tool-calling agent using LangChain + LLMstudio.

## 1. Set up your tools
Start by defining the tools your agent will have access to.
```python
from langchain.tools import tool

@tool
def buy_ticket(destination: str):
"""Use this to buy a ticket"""
return "Bought ticket number 270924"

@tool
def get_departure(ticket_number: str):
"""Use this to fetch the departure time of a train"""
return "8:25 AM"
```
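
Then gather the tools in a list so they can be handed to the agent later (this list is used in step 4):
```python
# Tools the agent can call; passed to the agent in step 4.
tools = [buy_ticket, get_departure]
```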

## 2. Set up your .env
Create a `.env` file in the root of your project with the credentials for the providers you want to use.

<Tabs>
<Tab title="OpenAI">
```
OPENAI_API_KEY="YOUR_API_KEY"
```
</Tab>
<Tab title="VertexAI">
```
GOOGLE_API_KEY="YOUR_API_KEY"
```
</Tab>
<Tab title="Azure">
```
AZURE_BASE_URL="YOUR_MODEL_ENDPOINT"
AZURE_API_KEY="YOUR_API_KEY"
```
</Tab>
</Tabs>
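
LLMstudio and LangChain typically pick these credentials up from the process environment. If your runtime does not load `.env` files automatically, a minimal loading sketch using `python-dotenv` (an assumed extra dependency) looks like this:
```python
# Load the .env file into the process environment (assumes python-dotenv is installed).
from dotenv import load_dotenv

load_dotenv()  # reads .env from the project root
```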

## 3. Set up your model using LLMstudio
Use LLMstudio's `ChatLLMstudio` to choose the provider and model you want to use (see the import note after these tabs).
<Tabs>
<Tab title="OpenAI">
```python
model = ChatLLMstudio(model_id='openai/gpt-4o')
```
</Tab>
<Tab title="VertexAI">
```python
model = ChatLLMstudio(model_id='vertexai/gemini-1.5-flash')
```
</Tab>
<Tab title="Azure">
```python
model = ChatLLMstudio(model_id='azure/Meta-Llama-3.1-70B-Instruct')
```
</Tab>
</Tabs>
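
The snippets above assume `ChatLLMstudio` is already imported from LLMstudio's LangChain integration. The sketch below shows the usual import path — treat the module path as an assumption and verify it against your installed LLMstudio version:
```python
# Assumed import path for the LangChain integration; verify against your LLMstudio version.
from llmstudio.langchain import ChatLLMstudio

model = ChatLLMstudio(model_id='openai/gpt-4o')
```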

## 4. Build the agent
Set up your agent and agent executor using LangChain.

```python
from langchain import hub
from langchain.agents import AgentExecutor, create_openai_tools_agent

prompt = hub.pull("hwchase17/openai-tools-agent")
agent = create_openai_tools_agent(model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)

# Run the agent
response = agent_executor.invoke(
    {"input": "Can you buy me a ticket to Madrid?"}
)
print(response["output"])
```
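
You can then ask a follow-up question that exercises the second tool. `AgentExecutor.invoke` returns a dictionary whose `output` key holds the agent's final answer (the question text is illustrative):
```python
# Ask about the ticket bought above; the agent should call get_departure.
result = agent_executor.invoke(
    {"input": "What time does ticket number 270924 depart?"}
)
print(result["output"])
```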
156 changes: 156 additions & 0 deletions docs/how-to/deploy-on-gke/deploy-on-google-kubernetes-engine.mdx
Learn how to deploy LLMstudio as a containerized application on Google Kubernetes Engine and make calls from a local repository.


## Prerequisites
To follow this guide, you need the following set up:

- A **project** on Google Cloud Platform.
- The **Kubernetes Engine** API enabled on your project.
- The **Kubernetes Engine Admin** role for the user following the guide.

## Deploy LLMstudio

This example demonstrates a public deployment. For a private service accessible only within your enterprise infrastructure, deploy it within your own Virtual Private Cloud (VPC).
<Steps>
<Step title="Navigate to Kubernetes Engine">
Begin by navigating to the Kubernetes Engine page.
</Step>
<Step title="Select Deploy">
Go to **Workloads** and **Create a new Deployment**.
<Frame>
<img src="how-to/deploy-on-gke/images/step-2.png" />
</Frame>
</Step>
<Step title="Name Your Deployment">
Give your deployment a name. In this guide, we will call it **llmstudio-on-gcp**.
<Frame>
<img src="how-to/deploy-on-gke/images/step-3.png" />
</Frame>
</Step>
<Step title="Select Your Cluster">
Choose between **creating a new cluster** or **using an existing cluster**.
For this guide, we will create a new cluster and use the default region.
<Frame>
<img src="how-to/deploy-on-gke/images/step-4.png" />
</Frame>
</Step>
<Step title="Proceed to Container Details">
Once you are done with the **Deployment configuration**, proceed to **Container details**.
</Step>
<Step title="Set Image Path">
In the new container section, select **Existing container image**.


Copy the path to LLMstudio's image available on Docker Hub.
```bash Image Path
tensoropsai/llmstudio:latest
```
Set it as the **Image path** to your container.
<Frame>
<img src="how-to/deploy-on-gke/images/step-6.png" />
</Frame>
</Step>
<Step title="Set Environment Variables">
Configure the following mandatory environment variables:
| Environment Variable | Value |
|----------------------------|-----------|
| `LLMSTUDIO_ENGINE_HOST` | 0.0.0.0 |
| `LLMSTUDIO_ENGINE_PORT` | 8001 |
| `LLMSTUDIO_TRACKING_HOST` | 0.0.0.0 |
| `LLMSTUDIO_TRACKING_PORT` | 8002 |

Additionally, set the `GOOGLE_API_KEY` environment variable to enable calls to Google's Gemini models.
<Tip>Refer to **SDK/LLM/Providers** for instructions on setting up other providers.</Tip>

<Frame>
<img src="how-to/deploy-on-gke/images/step-7.png" />
</Frame>

</Step>
<Step title="Proceed to Expose (Optional)">
After configuring your container, proceed to **Expose (Optional)**.
</Step>
<Step title="Expose Ports">
Select **Expose deployment as a new service** and leave the first item as is.

<Frame>
<img src="how-to/deploy-on-gke/images/step-9-1.png" />
</Frame>

Add two more items, exposing the ports defined in the **Set Environment Variables** step (8001 and 8002).

<Frame>
<img src="how-to/deploy-on-gke/images/step-9-2.png" />
</Frame>
</Step>
<Step title="Deploy">
After setting up and exposing the ports, press **Deploy**.
<Check>You have successfully deployed **LLMstudio on Google Cloud Platform**!</Check>
</Step>

</Steps>
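
With the workload deployed and exposed, you can optionally confirm that the two ports are reachable from your machine before wiring up a client. This is a minimal TCP connectivity sketch (replace `YOUR_HOST` with the external IP shown under **Exposing services**); it only checks reachability, not LLMstudio itself:
```python
import socket

host = "YOUR_HOST"  # external IP from the workload's Exposing services section
for port in (8001, 8002):  # engine and tracking ports exposed above
    try:
        with socket.create_connection((host, port), timeout=5):
            print(f"Port {port} is reachable")
    except OSError as err:
        print(f"Port {port} is not reachable: {err}")
```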

## Make a Call
Now let's make a call to our LLMstudio instance on GCP!



<Steps>
<Step title="Set Up Project">
Set up a simple project with these two files:
1. `calls.ipynb`
2. `.env`
</Step>

<Step title="Set Up Files">
<Tabs>
<Tab title=".env">

Go to your newly deployed **Workload**, scroll to the **Exposing services** section, and take note of your endpoint's host.
<Frame>
<img src="how-to/deploy-on-gke/images/step-env.png" />
</Frame>

Create your `.env` file with the following:

```env .env
LLMSTUDIO_ENGINE_HOST="YOUR_HOST"
LLMSTUDIO_ENGINE_PORT="8001"
LLMSTUDIO_TRACKING_HOST="YOUR_HOST"
LLMSTUDIO_TRACKING_PORT="8002"
```

<Check>You are done setting up your **.env** file!</Check>

</Tab>
<Tab title="calls.ipynb">
Start by importing `LLM` from llmstudio:
```python 1st cell
from llmstudio import LLM
```

Set up your LLM. We will be using `gemini-1.5-flash` for this guide.
```python 2nd cell
llm = LLM('vertexai/gemini-1.5-flash')
```

Chat with your model.
```python 3rd cell
response = llm.chat('Hello!')
print(response.chat_output)
```

<Frame>
<img src="how-to/deploy-on-gke/images/step-llmstudio-call.png" />
</Frame>
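
Optionally, you can pass generation parameters when constructing the `LLM`, following the same pattern used for the other providers (parameter support varies by provider and model; the values below are illustrative):
```python 4th cell (optional)
# Illustrative parameter values; support varies by provider and model.
llm = LLM('vertexai/gemini-1.5-flash', temperature=0.5, max_tokens=256)
response = llm.chat('Write a one-line greeting.')
print(response.chat_output)
```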


<Check>You are done calling LLMstudio on GCP!</Check>

</Tab>

</Tabs>
</Step>


</Steps>
Binary file added docs/how-to/deploy-on-gke/images/step-2.png
Binary file added docs/how-to/deploy-on-gke/images/step-3.png
Binary file added docs/how-to/deploy-on-gke/images/step-4.png
Binary file added docs/how-to/deploy-on-gke/images/step-6.png
Binary file added docs/how-to/deploy-on-gke/images/step-7-1.png
Binary file added docs/how-to/deploy-on-gke/images/step-7-2.png
Binary file added docs/how-to/deploy-on-gke/images/step-9-1.png
Binary file added docs/how-to/deploy-on-gke/images/step-9-2.png
Binary file added docs/how-to/deploy-on-gke/images/step-env.png
101 changes: 101 additions & 0 deletions docs/llm/anthropic.mdx
Interact with your Anthropic models using LLMstudio's `LLM`.

## Supported models
1. `claude-3-opus-20240229`
2. `claude-3-sonnet-20240229`
3. `claude-3-haiku-20240307`
4. `claude-2.1`
5. `claude-2`
6. `claude-instant-1.2`

## Parameters
An Anthropic LLM interface can have the following parameters:
| Parameter | Type | Description |
|-------------------|--------|-----------------------------------------------------------------------------|
| `api_key` | str | The API key for authentication. |
| `temperature` | float | The temperature parameter for the model. |
| `top_p` | float | The top-p parameter for the model. |
| `max_tokens` | int | The maximum number of tokens for the model's output. |
| `top_k` | int | The top-k parameter for the model. |


## Usage
Here is how you set up an interface to interact with your Anthropic models.

<Tabs>
<Tab title="w/ .env">
<Steps>
<Step>
Create a `.env` file with your `ANTHROPIC_API_KEY`.

<Warning>Make sure you name your environment variable `ANTHROPIC_API_KEY`</Warning>
```bash
ANTHROPIC_API_KEY="YOUR-KEY"
```
</Step>
<Step >
In your Python code, import `LLM` from llmstudio.
```python
from llmstudio import LLM
```
</Step>
<Step>
Create your **llm** instance.
```python
llm = LLM('anthropic/{model}')
```
</Step>
<Step>
**Optional:** You can add your parameters as follows:
```python
llm = LLM('anthropic/{model}',
          temperature= ...,
          max_tokens= ...,
          top_p= ...,
          top_k= ...)
```
<Check>You are done setting up your **Anthropic LLM**!</Check>
</Step>
</Steps>
</Tab>
<Tab title="w/o .env">
<Steps>
<Step >
In your Python code, import `LLM` from llmstudio.
```python
from llmstudio import LLM
```
</Step>
<Step>
Create your **llm** instance.
```python
llm = LLM('anthropic/{model}', api_key="YOUR_API_KEY")
```
</Step>
<Step>
**Optional:** You can add your parameters as follows:
```python
llm = LLM('anthropic/{model}',
          temperature= ...,
          max_tokens= ...,
          top_p= ...,
          top_k= ...)
```
<Check>You are done setting up your **Anthropic LLM**!</Check>
</Step>
</Steps>
</Tab>
</Tabs>
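
Putting it together, a minimal end-to-end sketch (model choice, parameter values, and prompt are illustrative):
```python
from llmstudio import LLM

# Illustrative model and parameters; any of the supported Claude models above works.
llm = LLM('anthropic/claude-3-haiku-20240307',
          temperature=0.2,
          max_tokens=256)

response = llm.chat('Write a haiku about trains.')
print(response.chat_output)
```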


## What's next?
<CardGroup cols={2}>
<Card title="LLM.chat()" icon="link" href="../chat">
Learn how to send messages and receive responses next!
</Card>
<Card title="Tool calling Agent" icon="link" href="../../../how-to/build-a-tool-agent">
Learn how to build a tool-calling agent using LLMstudio.
</Card>
</CardGroup>