Merge pull request langchain-ai#13 from langchain-ai/hotfix_callbacks_docs

Update callback docs to reflect proper JS behavior
hinthornw committed Aug 28, 2023
2 parents 0f92913 + 8478a74 commit 8660eec
Showing 1 changed file with 13 additions and 5 deletions.
18 changes: 13 additions & 5 deletions docs/tracing/tracing-faq.mdx
@@ -198,7 +198,7 @@ within a defined context beneath a virtual parent run.
<CodeTabs
tabs={[
PythonBlock(`from langchain.callbacks.manager import (
trace_as_chain_group,
trace_as_chain_group,
atrace_as_chain_group,
)\n
with trace_as_chain_group("my_group_name") as group_manager:
@@ -462,9 +462,11 @@ Check out the [exporting runs](use-cases/export-runs) directory for more examples.

### How do I ensure logging is completed before exiting my application?

In LangChain, LangSmith's tracing is asynchronous (or done in a background thread in Python) to avoid obstructing your production application. This means that your process may end before all traces are successfully posted to LangSmith. This is especially prevalent in a serverless environment, where your VM may be terminated immediately once your chain or agent completes.
In LangChain.py, LangSmith's tracing is done in a background thread to avoid obstructing your production application. This means that your process may end before all traces are successfully posted to LangSmith. This is especially prevalent in a serverless environment, where your VM may be terminated immediately once your chain or agent completes.

LangChain exposes methods to wait for traces to be submitted before exiting your application.
In LangChain.js, the default is to block for a short period of time for the trace to finish due to the greater popularity of serverless environments. You can make callbacks asynchronous by setting the `LANGCHAIN_CALLBACKS_BACKGROUND` environment variable to `"true"`.

For both languages, LangChain exposes methods to wait for traces to be submitted before exiting your application.
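The background-thread behavior described above can be illustrated with a small self-contained sketch. This is not LangSmith's actual client: `BackgroundLogger` and its `shutdown()` method are hypothetical stand-ins showing why a flush step is needed before the process exits.

```python
import queue
import threading

class BackgroundLogger:
    """Hypothetical sketch of background-thread trace submission:
    events are queued and handled by a worker thread, and shutdown()
    blocks until everything queued has been processed."""

    def __init__(self):
        self._queue = queue.Queue()
        self._sent = []  # stands in for traces posted over the network
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def _run(self):
        while True:
            item = self._queue.get()
            if item is None:  # shutdown sentinel
                self._queue.task_done()
                break
            self._sent.append(item)  # e.g. POST the trace to LangSmith
            self._queue.task_done()

    def log(self, event):
        # Returns immediately; the worker thread does the slow part.
        self._queue.put(event)

    def shutdown(self):
        self._queue.put(None)
        self._queue.join()   # wait until every queued event is processed
        self._worker.join()

logger = BackgroundLogger()
for i in range(5):
    logger.log(f"trace-{i}")
logger.shutdown()  # without this, the process could exit with traces unsent
print(len(logger._sent))  # → 5
```

Because the worker is a daemon thread, a process that returns without calling `shutdown()` can terminate with events still sitting in the queue; the wait-for-traces helpers described below play the same role as `shutdown()` here.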

Below is an example:

@@ -481,8 +483,14 @@ finally:
`),
TypeScriptBlock(`import { ChatOpenAI } from "langchain/chat_models/openai";
import { awaitAllCallbacks } from "langchain/callbacks";\n
const llm = new ChatOpenAI();
llm.invoke("Hello, World!").finally(() => awaitAllCallbacks());
try {
const llm = new ChatOpenAI();
const response = await llm.invoke("Hello, World!");
} catch (e) {
// handle error
} finally {
await awaitAllCallbacks();
}
`),
]}
groupId="client-language"
