Commit 43fdf25

docs: improve assistants notebook (langfuse#653)

1 parent 5eb93c3 · commit 43fdf25

File tree: 4 files changed (+82 −37 lines changed)

cookbook/integration_openai_assistants.ipynb

Lines changed: 32 additions & 13 deletions

@@ -27,6 +27,10 @@
     "\n",
     "The Assistants API from OpenAI allows developers to build AI assistants that can utilize multiple tools and data sources in parallel, such as code interpreters, file search, and custom tools created by calling functions. These assistants can access OpenAI's language models like GPT-4 with specific prompts, maintain persistent conversation histories, and process various file formats like text, images, and spreadsheets. Developers can fine-tune the language models on their own data and control aspects like output randomness. The API provides a framework for creating AI applications that combine language understanding with external tools and data.\n",
     "\n",
+    "## Example Trace Output\n",
+    "\n",
+    "![OpenAI Assistants Trace in Langfuse](https://langfuse.com/images/docs/openai-assistants-trace.png)\n",
+    "\n",
     "## Setup\n",
     "\n",
     "Install the required packages:"
@@ -189,7 +193,7 @@
     "import json\n",
     "from langfuse.decorators import langfuse_context\n",
     "\n",
-    "@observe(as_type=\"generation\")\n",
+    "@observe()\n",
     "def get_response(thread_id, run_id):\n",
     "    client = OpenAI()\n",
     "    \n",
@@ -206,8 +210,14 @@
     "        thread_id=thread_id,\n",
     "    )\n",
     "    input_messages = [{\"role\": message.role, \"content\": message.content[0].text.value} for message in message_log.data[::-1][:-1]]\n",
-    "    \n",
-    "    langfuse_context.update_current_observation(\n",
+    "\n",
+    "    # log internal generation within the openai assistant as a separate child generation to langfuse\n",
+    "    # get langfuse client used by the decorator, uses the low-level Python SDK\n",
+    "    langfuse_client = langfuse_context._get_langfuse()\n",
+    "    # pass trace_id and current observation ids to the newly created child generation\n",
+    "    langfuse_client.generation(\n",
+    "        trace_id=langfuse_context.get_current_trace_id(),\n",
+    "        parent_observation_id=langfuse_context.get_current_observation_id(),\n",
     "        model=run.model,\n",
     "        usage=run.usage,\n",
     "        input=input_messages,\n",
@@ -216,21 +226,15 @@
     "    \n",
     "    return assistant_response, run\n",
     "\n",
-    "# wrapper function as we want get_response to be a generation to track tokens\n",
-    "# -> generations need to have a parent trace\n",
-    "@observe()\n",
-    "def get_response_trace(thread_id, run_id):\n",
-    "    return get_response(thread_id, run_id)\n",
-    "\n",
-    "response = get_response_trace(thread.id, run.id)\n",
+    "response = get_response(thread.id, run.id)\n",
     "print(f\"Assistant response: {response[0]}\")"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "**[Public link of example trace](https://cloud.langfuse.com/project/cloramnkj0002jz088vzn1ja4/traces/3020450b-e9b7-4c12-b4fe-7288b6324118?observation=a083878e-73dd-4c47-867e-db4e23050fac) of fetching the response**"
+    "**[Public link of example trace](https://cloud.langfuse.com/project/cloramnkj0002jz088vzn1ja4/traces/e0933ea5-6806-4eb7-aed8-a42d23c57096?observation=401fb816-22e5-45ac-a4c9-e437b120f2e7) of fetching the response**"
    ]
   },
   {
@@ -246,10 +250,15 @@
    "metadata": {},
    "outputs": [],
    "source": [
+    "import time\n",
+    "\n",
     "@observe()\n",
     "def run_math_tutor(user_input):\n",
     "    assistant = create_assistant()\n",
     "    run, thread = run_assistant(assistant.id, user_input)\n",
+    "\n",
+    "    time.sleep(5) # notebook only, wait for the assistant to finish\n",
+    "\n",
     "    response = get_response(thread.id, run.id)\n",
     "    \n",
     "    return response[0]\n",
@@ -265,8 +274,18 @@
    "source": [
     "The Langfuse trace shows the flow of creating the assistant, running it on a thread with user input, and retrieving the response, along with the captured input/output data.\n",
     "\n",
-    "**[Public link of example trace](https://cloud.langfuse.com/project/cloramnkj0002jz088vzn1ja4/traces/1b2d53ad-f5d2-4f1e-9121-628b5ca1b5b2)**\n",
-    "\n"
+    "**[Public link of example trace](https://cloud.langfuse.com/project/cloramnkj0002jz088vzn1ja4/traces/b3b7b128-5664-4f42-9fab-31999da9e2f1)**\n",
+    "\n",
+    "![OpenAI Assistants Trace in Langfuse](https://langfuse.com/images/docs/openai-assistants-trace.png)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Learn more\n",
+    "\n",
+    "If you use non-Assistants API endpoints, you can use the OpenAI SDK wrapper for tracing. Check out the [Langfuse documentation](https://langfuse.com/docs/integrations/openai/python/get-started) for more details."
    ]
   }
  ],

pages/docs/integrations/openai/python/assistants-api.md

Lines changed: 25 additions & 12 deletions

@@ -14,6 +14,10 @@ Note: The native [OpenAI SDK wrapper](https://langfuse.com/docs/integrations/ope
 
 The Assistants API from OpenAI allows developers to build AI assistants that can utilize multiple tools and data sources in parallel, such as code interpreters, file search, and custom tools created by calling functions. These assistants can access OpenAI's language models like GPT-4 with specific prompts, maintain persistent conversation histories, and process various file formats like text, images, and spreadsheets. Developers can fine-tune the language models on their own data and control aspects like output randomness. The API provides a framework for creating AI applications that combine language understanding with external tools and data.
 
+## Example Trace Output
+
+![OpenAI Assistants Trace in Langfuse](https://langfuse.com/images/docs/openai-assistants-trace.png)
+
 ## Setup
 
 Install the required packages:
@@ -111,7 +115,7 @@ Retrieve the assistant's response from the thread:
 import json
 from langfuse.decorators import langfuse_context
 
-@observe(as_type="generation")
+@observe()
 def get_response(thread_id, run_id):
     client = OpenAI()
 
@@ -128,8 +132,14 @@ def get_response(thread_id, run_id):
         thread_id=thread_id,
     )
     input_messages = [{"role": message.role, "content": message.content[0].text.value} for message in message_log.data[::-1][:-1]]
-
-    langfuse_context.update_current_observation(
+
+    # log internal generation within the openai assistant as a separate child generation to langfuse
+    # get langfuse client used by the decorator, uses the low-level Python SDK
+    langfuse_client = langfuse_context._get_langfuse()
+    # pass trace_id and current observation ids to the newly created child generation
+    langfuse_client.generation(
+        trace_id=langfuse_context.get_current_trace_id(),
+        parent_observation_id=langfuse_context.get_current_observation_id(),
         model=run.model,
         usage=run.usage,
         input=input_messages,
@@ -138,26 +148,25 @@ def get_response(thread_id, run_id):
 
     return assistant_response, run
 
-# wrapper function as we want get_response to be a generation to track tokens
-# -> generations need to have a parent trace
-@observe()
-def get_response_trace(thread_id, run_id):
-    return get_response(thread_id, run_id)
-
-response = get_response_trace(thread.id, run.id)
+response = get_response(thread.id, run.id)
 print(f"Assistant response: {response[0]}")
 ```
 
-**[Public link of example trace](https://cloud.langfuse.com/project/cloramnkj0002jz088vzn1ja4/traces/3020450b-e9b7-4c12-b4fe-7288b6324118?observation=a083878e-73dd-4c47-867e-db4e23050fac) of fetching the response**
+**[Public link of example trace](https://cloud.langfuse.com/project/cloramnkj0002jz088vzn1ja4/traces/e0933ea5-6806-4eb7-aed8-a42d23c57096?observation=401fb816-22e5-45ac-a4c9-e437b120f2e7) of fetching the response**
 
 ## All in one trace
 
 
 ```python
+import time
+
 @observe()
 def run_math_tutor(user_input):
     assistant = create_assistant()
     run, thread = run_assistant(assistant.id, user_input)
+
+    time.sleep(5) # notebook only, wait for the assistant to finish
+
     response = get_response(thread.id, run.id)
 
     return response[0]
@@ -169,6 +178,10 @@ print(f"Assistant response: {response}")
 
 The Langfuse trace shows the flow of creating the assistant, running it on a thread with user input, and retrieving the response, along with the captured input/output data.
 
-**[Public link of example trace](https://cloud.langfuse.com/project/cloramnkj0002jz088vzn1ja4/traces/1b2d53ad-f5d2-4f1e-9121-628b5ca1b5b2)**
+**[Public link of example trace](https://cloud.langfuse.com/project/cloramnkj0002jz088vzn1ja4/traces/b3b7b128-5664-4f42-9fab-31999da9e2f1)**
+
+![OpenAI Assistants Trace in Langfuse](https://langfuse.com/images/docs/openai-assistants-trace.png)
 
+## Learn more
 
+If you use non-Assistants API endpoints, you can use the OpenAI SDK wrapper for tracing. Check out the [Langfuse documentation](https://langfuse.com/docs/integrations/openai/python/get-started) for more details.
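The page mirrors the notebook's `time.sleep(5)` wait, which the inline comment marks as a notebook-only shortcut: the run executes asynchronously, so the code simply waits before reading the result. Outside a notebook, polling the run's status is more robust. A sketch of that alternative, not part of this commit; the helper name, interval, and timeout are invented for illustration:

```python
import time

from openai import OpenAI

def wait_for_run(thread_id: str, run_id: str,
                 poll_interval: float = 1.0, timeout: float = 60.0):
    """Hypothetical helper: poll until the run leaves its transient states."""
    client = OpenAI()
    deadline = time.time() + timeout
    while time.time() < deadline:
        run = client.beta.threads.runs.retrieve(run_id=run_id, thread_id=thread_id)
        if run.status not in ("queued", "in_progress", "cancelling"):
            return run  # e.g. completed, failed, expired, or requires_action
        time.sleep(poll_interval)
    raise TimeoutError(f"Run {run_id} did not finish within {timeout}s")
```

With such a helper, the `time.sleep(5)` in `run_math_tutor` would become `wait_for_run(thread.id, run.id)`.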

pages/guides/cookbook/integration_openai_assistants.md

Lines changed: 25 additions & 12 deletions

(Identical diff, hunk for hunk, to pages/docs/integrations/openai/python/assistants-api.md above; the same changes are applied to the cookbook copy of the page.)
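The new "Learn more" section in both markdown pages points to the OpenAI SDK wrapper for non-Assistants endpoints. For reference, that integration is a drop-in import swap; a minimal sketch, with the model name as an arbitrary example:

```python
# drop-in replacement for `import openai`; completions are then traced
# automatically as Langfuse generations
from langfuse.openai import openai

completion = openai.chat.completions.create(
    model="gpt-3.5-turbo",  # arbitrary example model
    messages=[{"role": "user", "content": "What is 2 + 2?"}],
)
print(completion.choices[0].message.content)
```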
Binary file (83.5 KB) not shown; presumably the trace screenshot referenced above.

0 commit comments