Callbacks Refactor [base] #3256

Merged 45 commits into master from ankush/callbacks-refactor on Apr 30, 2023

Changes from all commits

Commits (45)
3cc2ce6
callbacks changes
agola11 Apr 20, 2023
55c7964
Merge branch 'master' into ankush/callbacks-refactor
agola11 Apr 21, 2023
fa4a4f2
cr
agola11 Apr 21, 2023
675e27c
Callbacks Refactor [2/n]: refactor `CallbackManager` code to own file…
agola11 Apr 23, 2023
90cef7b
cr
agola11 Apr 23, 2023
4cdd19b
Callbacks Refactor [2/n] update tracer to work with new callbacks mec…
agola11 Apr 26, 2023
7bcdc66
fix notebook and warnings
agola11 Apr 26, 2023
6fec15b
write to different session
agola11 Apr 26, 2023
5066869
fix execution order issue
agola11 Apr 27, 2023
e953d2c
mypy
agola11 Apr 27, 2023
6cd653d
cr
agola11 Apr 27, 2023
8ae809a
mypy
agola11 Apr 28, 2023
1fc3941
mypy
agola11 Apr 28, 2023
15c0fa5
cr
agola11 Apr 28, 2023
5dcb44e
fix llm chain
agola11 Apr 28, 2023
da27d87
fix most tests
agola11 Apr 28, 2023
2ed4649
fix baby agi
agola11 Apr 28, 2023
0e81e83
Nc/callbacks docs (#3717)
nfcampos Apr 28, 2023
eb9de30
merge
agola11 Apr 28, 2023
1b48ea8
cr
agola11 Apr 28, 2023
18138c6
cr
agola11 Apr 28, 2023
50f6895
Chains callbacks refactor (#3683)
dev2049 Apr 28, 2023
eeb18c4
Merge branch 'master' of github.com:hwchase17/langchain into ankush/c…
agola11 Apr 28, 2023
40f3f6e
Merge branch 'ankush/callbacks-refactor' of github.com:hwchase17/lang…
agola11 Apr 28, 2023
83cda5e
lint
agola11 Apr 28, 2023
9c876bd
update chain notebooks (#3740)
dev2049 Apr 28, 2023
43410e4
fix test
agola11 Apr 28, 2023
145e1af
Merge branch 'ankush/callbacks-refactor' of github.com:hwchase17/lang…
agola11 Apr 28, 2023
56f16cd
Merge branch 'master' into ankush/callbacks-refactor
agola11 Apr 28, 2023
9c988ae
cr
agola11 Apr 28, 2023
bd9ac67
nb nit (#3744)
dev2049 Apr 28, 2023
e60489e
fix lint
agola11 Apr 28, 2023
3c5f983
Merge branch 'ankush/callbacks-refactor' of github.com:hwchase17/lang…
agola11 Apr 28, 2023
9dad051
fix test warnings (#3753)
dev2049 Apr 29, 2023
5f78219
fix some docs, add session variable
agola11 Apr 29, 2023
290fe75
Add RunManager to Tools Arguments (#3746)
vowelparrot Apr 29, 2023
20ba888
Call Manager for New Tools (#3755)
vowelparrot Apr 29, 2023
a038f37
Resolve merge conflicts
vowelparrot Apr 29, 2023
9192abc
Notebook Nits
vowelparrot Apr 29, 2023
35cc38f
merge
agola11 Apr 29, 2023
ebc6242
fix docs
agola11 Apr 29, 2023
737467a
use UUID
agola11 Apr 29, 2023
3839703
bw compat environ variable
agola11 Apr 29, 2023
fa1742c
fix openai callback
agola11 Apr 29, 2023
fb78f69
cr
hwchase17 Apr 30, 2023
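The file diffs below all apply the same migration: handlers are no longer wrapped in a `CallbackManager` and passed as `callback_manager`; they are passed directly as a list through a `callbacks` argument. A minimal before/after sketch of the pattern (an OpenAI API key is assumed to be configured; the handler choice is illustrative):

```python
from langchain.callbacks import StdOutCallbackHandler
from langchain.llms import OpenAI

# Before this PR: handlers had to be wrapped in a CallbackManager.
# from langchain.callbacks.base import CallbackManager
# manager = CallbackManager([StdOutCallbackHandler()])
# llm = OpenAI(temperature=0, callback_manager=manager, verbose=True)

# After this PR: handlers are passed directly as a list.
callbacks = [StdOutCallbackHandler()]
llm = OpenAI(temperature=0, callbacks=callbacks)
print(llm("Tell me a joke"))
```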
12 changes: 5 additions & 7 deletions docs/ecosystem/aim_tracking.ipynb
@@ -61,7 +61,6 @@
"from datetime import datetime\n",
"\n",
"from langchain.llms import OpenAI\n",
"from langchain.callbacks.base import CallbackManager\n",
"from langchain.callbacks import AimCallbackHandler, StdOutCallbackHandler"
]
},
@@ -109,8 +108,8 @@
" experiment_name=\"scenario 1: OpenAI LLM\",\n",
")\n",
"\n",
"manager = CallbackManager([StdOutCallbackHandler(), aim_callback])\n",
"llm = OpenAI(temperature=0, callback_manager=manager, verbose=True)"
"callbacks = [StdOutCallbackHandler(), aim_callback]\n",
"llm = OpenAI(temperature=0, callbacks=callbacks)"
]
},
{
@@ -177,7 +176,7 @@
"Title: {title}\n",
"Playwright: This is a synopsis for the above play:\"\"\"\n",
"prompt_template = PromptTemplate(input_variables=[\"title\"], template=template)\n",
"synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, callback_manager=manager)\n",
"synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, callbacks=callbacks)\n",
"\n",
"test_prompts = [\n",
" {\"title\": \"documentary about good video games that push the boundary of game design\"},\n",
@@ -249,13 +248,12 @@
],
"source": [
"# scenario 3 - Agent with Tools\n",
"tools = load_tools([\"serpapi\", \"llm-math\"], llm=llm, callback_manager=manager)\n",
"tools = load_tools([\"serpapi\", \"llm-math\"], llm=llm, callbacks=callbacks)\n",
"agent = initialize_agent(\n",
" tools,\n",
" llm,\n",
" agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,\n",
" callback_manager=manager,\n",
" verbose=True,\n",
" callbacks=callbacks,\n",
")\n",
"agent.run(\n",
" \"Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?\"\n",
10 changes: 4 additions & 6 deletions docs/ecosystem/clearml_tracking.ipynb
@@ -79,7 +79,6 @@
"source": [
"from datetime import datetime\n",
"from langchain.callbacks import ClearMLCallbackHandler, StdOutCallbackHandler\n",
"from langchain.callbacks.base import CallbackManager\n",
"from langchain.llms import OpenAI\n",
"\n",
"# Setup and use the ClearML Callback\n",
@@ -93,9 +92,9 @@
" complexity_metrics=True,\n",
" stream_logs=True\n",
")\n",
"manager = CallbackManager([StdOutCallbackHandler(), clearml_callback])\n",
"callbacks = [StdOutCallbackHandler(), clearml_callback]\n",
"# Get the OpenAI model ready to go\n",
"llm = OpenAI(temperature=0, callback_manager=manager, verbose=True)"
"llm = OpenAI(temperature=0, callbacks=callbacks)"
]
},
{
@@ -523,13 +522,12 @@
"from langchain.agents import AgentType\n",
"\n",
"# SCENARIO 2 - Agent with Tools\n",
"tools = load_tools([\"serpapi\", \"llm-math\"], llm=llm, callback_manager=manager)\n",
"tools = load_tools([\"serpapi\", \"llm-math\"], llm=llm, callbacks=callbacks)\n",
"agent = initialize_agent(\n",
" tools,\n",
" llm,\n",
" agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,\n",
" callback_manager=manager,\n",
" verbose=True,\n",
" callbacks=callbacks,\n",
")\n",
"agent.run(\n",
" \"Who is the wife of the person who sang summer of 69?\"\n",
31 changes: 13 additions & 18 deletions docs/ecosystem/comet_tracking.ipynb
@@ -121,7 +121,6 @@
"from datetime import datetime\n",
"\n",
"from langchain.callbacks import CometCallbackHandler, StdOutCallbackHandler\n",
"from langchain.callbacks.base import CallbackManager\n",
"from langchain.llms import OpenAI\n",
"\n",
"comet_callback = CometCallbackHandler(\n",
@@ -131,8 +130,8 @@
" tags=[\"llm\"],\n",
" visualizations=[\"dep\"],\n",
")\n",
"manager = CallbackManager([StdOutCallbackHandler(), comet_callback])\n",
"llm = OpenAI(temperature=0.9, callback_manager=manager, verbose=True)\n",
"callbacks = [StdOutCallbackHandler(), comet_callback]\n",
"llm = OpenAI(temperature=0.9, callbacks=callbacks, verbose=True)\n",
"\n",
"llm_result = llm.generate([\"Tell me a joke\", \"Tell me a poem\", \"Tell me a fact\"] * 3)\n",
"print(\"LLM result\", llm_result)\n",
@@ -153,7 +152,6 @@
"outputs": [],
"source": [
"from langchain.callbacks import CometCallbackHandler, StdOutCallbackHandler\n",
"from langchain.callbacks.base import CallbackManager\n",
"from langchain.chains import LLMChain\n",
"from langchain.llms import OpenAI\n",
"from langchain.prompts import PromptTemplate\n",
@@ -164,15 +162,14 @@
" stream_logs=True,\n",
" tags=[\"synopsis-chain\"],\n",
")\n",
"manager = CallbackManager([StdOutCallbackHandler(), comet_callback])\n",
"\n",
"llm = OpenAI(temperature=0.9, callback_manager=manager, verbose=True)\n",
"callbacks = [StdOutCallbackHandler(), comet_callback]\n",
"llm = OpenAI(temperature=0.9, callbacks=callbacks)\n",
"\n",
"template = \"\"\"You are a playwright. Given the title of play, it is your job to write a synopsis for that title.\n",
"Title: {title}\n",
"Playwright: This is a synopsis for the above play:\"\"\"\n",
"prompt_template = PromptTemplate(input_variables=[\"title\"], template=template)\n",
"synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, callback_manager=manager)\n",
"synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, callbacks=callbacks)\n",
"\n",
"test_prompts = [{\"title\": \"Documentary about Bigfoot in Paris\"}]\n",
"print(synopsis_chain.apply(test_prompts))\n",
@@ -194,7 +191,6 @@
"source": [
"from langchain.agents import initialize_agent, load_tools\n",
"from langchain.callbacks import CometCallbackHandler, StdOutCallbackHandler\n",
"from langchain.callbacks.base import CallbackManager\n",
"from langchain.llms import OpenAI\n",
"\n",
"comet_callback = CometCallbackHandler(\n",
@@ -203,15 +199,15 @@
" stream_logs=True,\n",
" tags=[\"agent\"],\n",
")\n",
"manager = CallbackManager([StdOutCallbackHandler(), comet_callback])\n",
"llm = OpenAI(temperature=0.9, callback_manager=manager, verbose=True)\n",
"callbacks = [StdOutCallbackHandler(), comet_callback]\n",
"llm = OpenAI(temperature=0.9, callbacks=callbacks)\n",
"\n",
"tools = load_tools([\"serpapi\", \"llm-math\"], llm=llm, callback_manager=manager)\n",
"tools = load_tools([\"serpapi\", \"llm-math\"], llm=llm, callbacks=callbacks)\n",
"agent = initialize_agent(\n",
" tools,\n",
" llm,\n",
" agent=\"zero-shot-react-description\",\n",
" callback_manager=manager,\n",
" callbacks=callbacks,\n",
" verbose=True,\n",
")\n",
"agent.run(\n",
@@ -255,7 +251,6 @@
"from rouge_score import rouge_scorer\n",
"\n",
"from langchain.callbacks import CometCallbackHandler, StdOutCallbackHandler\n",
"from langchain.callbacks.base import CallbackManager\n",
"from langchain.chains import LLMChain\n",
"from langchain.llms import OpenAI\n",
"from langchain.prompts import PromptTemplate\n",
@@ -298,10 +293,10 @@
" tags=[\"custom_metrics\"],\n",
" custom_metrics=rouge_score.compute_metric,\n",
")\n",
"manager = CallbackManager([StdOutCallbackHandler(), comet_callback])\n",
"llm = OpenAI(temperature=0.9, callback_manager=manager, verbose=True)\n",
"callbacks = [StdOutCallbackHandler(), comet_callback]\n",
"llm = OpenAI(temperature=0.9)\n",
"\n",
"synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, callback_manager=manager)\n",
"synopsis_chain = LLMChain(llm=llm, prompt=prompt_template)\n",
"\n",
"test_prompts = [\n",
" {\n",
@@ -323,7 +318,7 @@
" \"\"\"\n",
" }\n",
"]\n",
"print(synopsis_chain.apply(test_prompts))\n",
"print(synopsis_chain.apply(test_prompts, callbacks=callbacks))\n",
"comet_callback.flush_tracker(synopsis_chain, finish=True)"
]
}
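The Comet hunks above also show the two scopes the refactor supports: callbacks attached at construction time apply to every run of the object, while callbacks passed at call time apply only to that invocation. A short sketch of both styles, assuming the post-refactor `callbacks` parameter shown in the diff (the prompt and titles are illustrative):

```python
from langchain.callbacks import StdOutCallbackHandler
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

callbacks = [StdOutCallbackHandler()]
prompt = PromptTemplate(input_variables=["title"], template="Write a synopsis for: {title}")

# Constructor-scoped: these handlers fire for every run of this chain.
chain_with_callbacks = LLMChain(llm=OpenAI(temperature=0.9), prompt=prompt, callbacks=callbacks)
chain_with_callbacks.apply([{"title": "Documentary about Bigfoot in Paris"}])

# Call-scoped: these handlers fire only for this particular call.
plain_chain = LLMChain(llm=OpenAI(temperature=0.9), prompt=prompt)
plain_chain.apply([{"title": "Documentary about Bigfoot in Paris"}], callbacks=callbacks)
```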
9 changes: 5 additions & 4 deletions docs/ecosystem/gpt4all.md
@@ -3,6 +3,7 @@
This page covers how to use the `GPT4All` wrapper within LangChain. The tutorial is divided into two parts: installation and setup, followed by usage with an example.

## Installation and Setup

- Install the Python package with `pip install pyllamacpp`
- Download a [GPT4All model](https://github.com/nomic-ai/pyllamacpp#supported-model) and place it in your desired directory

@@ -28,16 +29,16 @@ To stream the model's predictions, add in a CallbackManager.

```python
from langchain.llms import GPT4All
from langchain.callbacks.base import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

# There are many CallbackHandlers supported, such as
# from langchain.callbacks.streamlit import StreamlitCallbackHandler

callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
model = GPT4All(model="./models/gpt4all-model.bin", n_ctx=512, n_threads=8, callback_handler=callback_handler, verbose=True)
callbacks = [StreamingStdOutCallbackHandler()]
model = GPT4All(model="./models/gpt4all-model.bin", n_ctx=512, n_threads=8)

# Generate text. Tokens are streamed through the callback manager.
model("Once upon a time, ")
model("Once upon a time, ", callbacks=callbacks)
```

## Model File
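The GPT4All page above streams tokens through `StreamingStdOutCallbackHandler`; any handler that implements the token hook can be passed the same way. A hedged sketch of a custom streaming handler, assuming the `on_llm_new_token` hook on the base handler class and that the GPT4All wrapper forwards tokens to it as in the page's example (the model path is a placeholder):

```python
from langchain.callbacks.base import BaseCallbackHandler
from langchain.llms import GPT4All


class TokenPrinter(BaseCallbackHandler):
    """Minimal handler: print each streamed token as it arrives."""

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        print(token, end="", flush=True)


model = GPT4All(model="./models/gpt4all-model.bin", n_ctx=512, n_threads=8)
model("Once upon a time, ", callbacks=[TokenPrinter()])
```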
14 changes: 6 additions & 8 deletions docs/ecosystem/wandb_tracking.ipynb
@@ -50,7 +50,6 @@
"source": [
"from datetime import datetime\n",
"from langchain.callbacks import WandbCallbackHandler, StdOutCallbackHandler\n",
"from langchain.callbacks.base import CallbackManager\n",
"from langchain.llms import OpenAI"
]
},
@@ -196,8 +195,8 @@
" name=\"llm\",\n",
" tags=[\"test\"],\n",
")\n",
"manager = CallbackManager([StdOutCallbackHandler(), wandb_callback])\n",
"llm = OpenAI(temperature=0, callback_manager=manager, verbose=True)"
"callbacks = [StdOutCallbackHandler(), wandb_callback]\n",
"llm = OpenAI(temperature=0, callbacks=callbacks)"
]
},
{
@@ -484,7 +483,7 @@
"Title: {title}\n",
"Playwright: This is a synopsis for the above play:\"\"\"\n",
"prompt_template = PromptTemplate(input_variables=[\"title\"], template=template)\n",
"synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, callback_manager=manager)\n",
"synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, callbacks=callbacks)\n",
"\n",
"test_prompts = [\n",
" {\n",
@@ -577,16 +576,15 @@
],
"source": [
"# SCENARIO 3 - Agent with Tools\n",
"tools = load_tools([\"serpapi\", \"llm-math\"], llm=llm, callback_manager=manager)\n",
"tools = load_tools([\"serpapi\", \"llm-math\"], llm=llm)\n",
"agent = initialize_agent(\n",
" tools,\n",
" llm,\n",
" agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,\n",
" callback_manager=manager,\n",
" verbose=True,\n",
")\n",
"agent.run(\n",
" \"Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?\"\n",
" \"Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?\",\n",
" callbacks=callbacks,\n",
")\n",
"wandb_callback.flush_tracker(agent, reset=False, finish=True)"
]
3 changes: 3 additions & 0 deletions docs/index.rst
@@ -44,6 +44,8 @@ These modules are, in increasing order of complexity:

- `Agents <./modules/agents.html>`_: Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end to end agents.

- `Callbacks <./modules/callbacks/getting_started.html>`_: It can be difficult to track all that occurs inside a chain or agent - callbacks help add a level of observability and introspection.


.. toctree::
:maxdepth: 1
@@ -57,6 +59,7 @@
./modules/memory.md
./modules/chains.md
./modules/agents.md
./modules/callbacks/getting_started.ipynb

Use Cases
----------
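The new Callbacks bullet added to docs/index.rst above describes callbacks as the observability layer for chains and agents. As an illustration of that idea rather than code from this PR, here is a minimal handler sketch that logs chain boundaries, assuming the standard `on_chain_start`/`on_chain_end` hooks on the base handler:

```python
from typing import Any, Dict

from langchain.callbacks.base import BaseCallbackHandler
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate


class ChainLogger(BaseCallbackHandler):
    """Log when any chain starts and finishes, with its inputs and outputs."""

    def on_chain_start(self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) -> None:
        print(f"chain start: inputs={inputs}")

    def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:
        print(f"chain end: outputs={outputs}")


prompt = PromptTemplate(input_variables=["title"], template="Write a synopsis for: {title}")
chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt, callbacks=[ChainLogger()])
chain.run(title="Documentary about Bigfoot in Paris")
```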