Rebase the 9th june
pprados committed Jun 9, 2023
1 parent cb81337 commit ac8f85e
Showing 17 changed files with 646 additions and 335 deletions.
25 changes: 25 additions & 0 deletions docs/ecosystem/baseten.md
@@ -0,0 +1,25 @@
# Baseten

Learn how to use LangChain with models deployed on Baseten.

## Installation and setup

- Create a [Baseten](https://baseten.co) account and [API key](https://docs.baseten.co/settings/api-keys).
- Install the Baseten Python client with `pip install baseten`
- Use your API key to authenticate with `baseten login` (or from Python, as sketched below)
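
A minimal Python sketch of the same authentication step, mirroring the `baseten.login` call from the notebook added later in this commit:

```python
import baseten

# Equivalent to running `baseten login` in a terminal:
# authenticate the client with your Baseten API key.
baseten.login("YOUR_API_KEY")
```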

## Invoking a model

Baseten integrates with LangChain through the LLM module, which provides a standardized and interoperable interface for models that are deployed on your Baseten workspace.

You can deploy foundation models like WizardLM and Alpaca with one click from the [Baseten model library](https://app.baseten.co/explore/), or, if you have your own model, [deploy it with this tutorial](https://docs.baseten.co/deploying-models/deploy).

In this example, we'll work with WizardLM. [Deploy WizardLM here](https://app.baseten.co/explore/wizardlm) and take note of the deployed [model's version ID](https://docs.baseten.co/managing-models/manage).

```python
from langchain.llms import Baseten

wizardlm = Baseten(model="MODEL_VERSION_ID", verbose=True)

wizardlm("What is the difference between a Wizard and a Sorcerer?")
```
10 changes: 6 additions & 4 deletions docs/modules/indexes/text_splitters/getting_started.ipynb
@@ -12,7 +12,8 @@
"\n",
"- `length_function`: how the length of chunks is calculated. Defaults to just counting number of characters, but it's pretty common to pass a token counter here.\n",
"- `chunk_size`: the maximum size of your chunks (as measured by the length function).\n",
"- `chunk_overlap`: the maximum overlap between chunks. It can be nice to have some overlap to maintain some continuity between chunks (eg do a sliding window)."
"- `chunk_overlap`: the maximum overlap between chunks. It can be nice to have some overlap to maintain some continuity between chunks (eg do a sliding window).\n",
"- `add_start_index` : wether to include the starting position of each chunk within the original document in the metadata. "
]
},
{
@@ -49,6 +50,7 @@
" chunk_size = 100,\n",
" chunk_overlap = 20,\n",
" length_function = len,\n",
" add_start_index = True,\n",
")"
]
},
@@ -62,8 +64,8 @@
"name": "stdout",
"output_type": "stream",
"text": [
"page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and' lookup_str='' metadata={} lookup_index=0\n",
"page_content='of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.' lookup_str='' metadata={} lookup_index=0\n"
"page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and' metadata={'start_index': 0}\n",
"page_content='of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.' metadata={'start_index': 82}\n"
]
}
],
@@ -90,7 +92,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.9.16"
},
"vscode": {
"interpreter": {
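
Outside the diff itself, a minimal end-to-end sketch of how the new `add_start_index` option surfaces in chunk metadata. It assumes the notebook's `state_of_the_union.txt` sample file and the `RecursiveCharacterTextSplitter` class (the class name is not visible in this excerpt, so treat it as an assumption):

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Sample document used by the getting-started notebook.
with open("state_of_the_union.txt") as f:
    state_of_the_union = f.read()

text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=100,        # maximum chunk size, as measured by length_function
    chunk_overlap=20,      # sliding-window overlap between adjacent chunks
    length_function=len,   # measure length by counting characters
    add_start_index=True,  # record each chunk's offset into the source text
)

texts = text_splitter.create_documents([state_of_the_union])
print(texts[0].metadata)  # e.g. {'start_index': 0}
print(texts[1].metadata)  # e.g. {'start_index': 82}
```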
23 changes: 23 additions & 0 deletions docs/modules/memory/examples/dynamodb_chat_message_history.ipynb
@@ -118,6 +118,29 @@
]
},
{
"cell_type": "markdown",
"id": "955f1b15",
"metadata": {},
"source": [
"## DynamoDBChatMessageHistory with Custom Endpoint URL\n",
"\n",
"Sometimes it is useful to specify the URL to the AWS endpoint to connect to. For instance, when you are running locally against [Localstack](https://localstack.cloud/). For those cases you can specify the URL via the `endpoint_url` parameter in the constructor."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "225713c8",
"metadata": {},
"outputs": [],
"source": [
"from langchain.memory.chat_message_histories import DynamoDBChatMessageHistory\n",
"\n",
"history = DynamoDBChatMessageHistory(table_name=\"SessionTable\", session_id=\"0\", endpoint_url=\"http://localhost.localstack.cloud:4566\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "3b33c988",
"metadata": {},
2 changes: 1 addition & 1 deletion docs/modules/models/llms/examples/llm_caching.ipynb
@@ -631,7 +631,7 @@
"id": "56ea6a08",
"metadata": {},
"source": [
"You'll need to get a Momemto auth token to use this class. This can either be passed in to a momento.CacheClient if you'd like to instantiate that directly, as a named parameter `auth_token` to `MomentoChatMessageHistory.from_client_params`, or can just be set as an environment variable `MOMENTO_AUTH_TOKEN`."
"You'll need to get a Momento auth token to use this class. This can either be passed in to a momento.CacheClient if you'd like to instantiate that directly, as a named parameter `auth_token` to `MomentoChatMessageHistory.from_client_params`, or can just be set as an environment variable `MOMENTO_AUTH_TOKEN`."
]
},
{
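
For reference, a hedged sketch of two of the token-passing options described in the Momento note above. The positional arguments to `from_client_params` (session id, cache name, TTL) are assumptions for illustration, not confirmed by this diff:

```python
import os
from datetime import timedelta

from langchain.memory.chat_message_histories import MomentoChatMessageHistory

# Option 1: let the client pick up the MOMENTO_AUTH_TOKEN environment variable.
os.environ["MOMENTO_AUTH_TOKEN"] = "<your Momento auth token>"

# Option 2: pass the token explicitly via the named `auth_token` parameter.
history = MomentoChatMessageHistory.from_client_params(
    "my-session",       # hypothetical session id
    "langchain",        # hypothetical cache name
    timedelta(days=1),  # hypothetical time-to-live for stored messages
    auth_token="<your Momento auth token>",
)
```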
196 changes: 196 additions & 0 deletions docs/modules/models/llms/integrations/baseten.ipynb
@@ -0,0 +1,196 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Baseten\n",
"\n",
"[Baseten](https://baseten.co) provides all the infrastructure you need to deploy and serve ML models performantly, scalably, and cost-efficiently.\n",
"\n",
"This example demonstrates using Langchain with models deployed on Baseten."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Setup\n",
"\n",
"To run this notebook, you'll need a [Baseten account](https://baseten.co) and an [API key](https://docs.baseten.co/settings/api-keys).\n",
"\n",
"You'll also need to install the Baseten Python package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install baseten"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import baseten\n",
"\n",
"baseten.login(\"YOUR_API_KEY\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Single model call\n",
"\n",
"First, you'll need to deploy a model to Baseten.\n",
"\n",
"You can deploy foundation models like WizardLM and Alpaca with one click from the [Baseten model library](https://app.baseten.co/explore/) or if you have your own model, [deploy it with this tutorial](https://docs.baseten.co/deploying-models/deploy).\n",
"\n",
"In this example, we'll work with WizardLM. [Deploy WizardLM here](https://app.baseten.co/explore/llama) and follow along with the deployed [model's version ID](https://docs.baseten.co/managing-models/manage)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.llms import Baseten"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Load the model\n",
"wizardlm = Baseten(model=\"MODEL_VERSION_ID\", verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Prompt the model\n",
"\n",
"wizardlm(\"What is the difference between a Wizard and a Sorcerer?\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Chained model calls\n",
"\n",
"We can chain together multiple calls to one or multiple models, which is the whole point of Langchain!\n",
"\n",
"This example uses WizardLM to plan a meal with an entree, three sides, and an alcoholic and non-alcoholic beverage pairing."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import SimpleSequentialChain\n",
"from langchain import PromptTemplate, LLMChain"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Build the first link in the chain\n",
"\n",
"prompt = PromptTemplate(\n",
" input_variables=[\"cuisine\"],\n",
" template=\"Name a complex entree for a {cuisine} dinner. Respond with just the name of a single dish.\",\n",
")\n",
"\n",
"link_one = LLMChain(llm=wizardlm, prompt=prompt)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Build the second link in the chain\n",
"\n",
"prompt = PromptTemplate(\n",
" input_variables=[\"entree\"],\n",
" template=\"What are three sides that would go with {entree}. Respond with only a list of the sides.\",\n",
")\n",
"\n",
"link_two = LLMChain(llm=wizardlm, prompt=prompt)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Build the third link in the chain\n",
"\n",
"prompt = PromptTemplate(\n",
" input_variables=[\"sides\"],\n",
" template=\"What is one alcoholic and one non-alcoholic beverage that would go well with this list of sides: {sides}. Respond with only the names of the beverages.\",\n",
")\n",
"\n",
"link_three = LLMChain(llm=wizardlm, prompt=prompt)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Run the full chain!\n",
"\n",
"menu_maker = SimpleSequentialChain(chains=[link_one, link_two, link_three], verbose=True)\n",
"menu_maker.run(\"South Indian\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}
