Added generic event handler for both tokens and function calls #9263
Conversation
…ral-event-handler # Conflicts: # libs/langchain/poetry.lock
How about changing
To
There's room for a generic one.
…ral-event-handler
@hinthornw implemented, please check 🙏
I think we also need to update the BaseTracer:
```python
def on_llm_new_token(
self,
token: str,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
**kwargs: Any,
) -> None:
"""Run on new LLM token. Only available when streaming is enabled."""
if not run_id:
raise TracerException("No run_id provided for on_llm_new_token callback.")
run_id_ = str(run_id)
llm_run = self.run_map.get(run_id_)
if llm_run is None or llm_run.run_type != "llm":
raise TracerException(f"No LLM Run found to be traced for {run_id}")
llm_run.events.append(
{
"name": "new_token",
"time": datetime.utcnow(),
"kwargs": {"token": token},
},
)
```
and add the chunk to the events dict
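The suggested change could be sketched like this: a minimal, standalone toy tracer whose event dict carries the chunk alongside the token. Everything here (`MiniTracer`, `start_run`, the dict-of-lists run map) is invented for illustration; the real `BaseTracer` tracks full `Run` objects and raises `TracerException`:

```python
from datetime import datetime, timezone
from typing import Any, Dict, List, Optional
from uuid import UUID, uuid4


class MiniTracer:
    """Toy tracer mirroring the suggestion: record each new-token event
    together with its chunk (a sketch, not the actual BaseTracer)."""

    def __init__(self) -> None:
        # run_id (as str) -> list of event dicts for that run
        self.run_map: Dict[str, List[Dict[str, Any]]] = {}

    def start_run(self, run_id: UUID) -> None:
        self.run_map[str(run_id)] = []

    def on_llm_new_token(
        self,
        token: str,
        *,
        run_id: UUID,
        chunk: Optional[Any] = None,  # suggested addition: pass the chunk through
        **kwargs: Any,
    ) -> None:
        events = self.run_map.get(str(run_id))
        if events is None:
            raise ValueError(f"No LLM run found to be traced for {run_id}")
        events.append(
            {
                "name": "new_token",
                "time": datetime.now(timezone.utc),
                # the chunk rides along with the token in the event dict
                "kwargs": {"token": token, "chunk": chunk},
            }
        )


run_id = uuid4()
tracer = MiniTracer()
tracer.start_run(run_id)
tracer.on_llm_new_token("Hello", run_id=run_id, chunk={"function_call": None})
event = tracer.run_map[str(run_id)][0]
```

With the chunk stored in the event, downstream consumers can distinguish plain tokens from function-call deltas without a second callback.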
added, please review @hinthornw
LGTM pending lint passing. Thank you! 🎉
Description
The main motivation for this PR is to sync with JS LangChain: langchain-ai/langchainjs#2025
Added an `on_event` callback that works for both token and OpenAI function calls in streaming mode.

Twitter: @ShelfDev