Added generic event handler for both tokens and function calls #9263

Merged: 23 commits into langchain-ai:wfh/chunky on Aug 25, 2023

Conversation

@andrewBatutin (Contributor) commented on Aug 15, 2023

Description

The main motivation for this PR is to sync with the JS LangChain implementation: langchain-ai/langchainjs#2025

Adds an on_event callback that works for both tokens and OpenAI function calls in streaming mode.

Twitter: @ShelfDev
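
For context, a minimal sketch of the gap this addresses, assuming the mid-2023 LangChain API (`ChatOpenAI`, `BaseCallbackHandler`) and an `OPENAI_API_KEY` in the environment; the function schema and prompt are made up for illustration. When the model streams a function call, the arguments arrive outside of `token`, so a token-only callback only ever sees empty strings:

```python
from langchain.callbacks.base import BaseCallbackHandler
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage


class TokenOnlyHandler(BaseCallbackHandler):
    """Token-only streaming callback: blind to streamed function-call arguments."""

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        # For a function-call response this prints a series of empty strings,
        # because the arguments are streamed in additional_kwargs, not in `token`.
        print(repr(token))


llm = ChatOpenAI(streaming=True, callbacks=[TokenOnlyHandler()])
llm.predict_messages(
    [HumanMessage(content="What's the weather in Kyiv?")],
    functions=[
        {
            "name": "get_weather",  # hypothetical function, for illustration only
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }
    ],
)
```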

@hinthornw (Collaborator) commented on Aug 22, 2023

How about changing

```python
def on_llm_new_token(
        self,
        token: str,
        **kwargs: Any,
    ) -> None:
```

To

```python
def on_llm_new_token(
        self,
        token: str,
        chunk: Optional[Union[GenerationChunk, ChatGenerationChunk]] = None,
        **kwargs: Any,
    ) -> None:
        """Run when LLM generates a new token.

        Args:
            token (str): The new token.
            chunk (GenerationChunk | ChatGenerationChunk): The new generated chunk,
                containing content and other information.
        """
```

There's room for a generic on_event() callback, but it wouldn't be scoped to the LLM run manager, and the input arg would have to be typed as `Any`.
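
A hedged sketch of how a handler could consume the proposed `chunk` argument; the class name is made up, and the import path and the `additional_kwargs["function_call"]` field assume the OpenAI chat-model layout at the time of this PR:

```python
from typing import Any, List, Optional, Union

from langchain.callbacks.base import BaseCallbackHandler
from langchain.schema.output import ChatGenerationChunk, GenerationChunk


class FunctionCallStreamHandler(BaseCallbackHandler):
    """Collects plain text tokens and streamed function-call argument fragments."""

    def __init__(self) -> None:
        self.text_tokens: List[str] = []
        self.function_args: List[str] = []

    def on_llm_new_token(
        self,
        token: str,
        chunk: Optional[Union[GenerationChunk, ChatGenerationChunk]] = None,
        **kwargs: Any,
    ) -> None:
        if token:
            self.text_tokens.append(token)
        # For chat models the chunk wraps an AIMessageChunk whose
        # additional_kwargs carries the streamed function_call delta.
        if isinstance(chunk, ChatGenerationChunk):
            fn_call = chunk.message.additional_kwargs.get("function_call") or {}
            if "arguments" in fn_call:
                self.function_args.append(fn_call["arguments"])
```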

@andrewBatutin (Contributor Author) commented:
@hinthornw implemented, please check 🙏

@hinthornw (Collaborator) left a comment:

I think we also need to update the BaseTracer:

```python
def on_llm_new_token(
        self,
        token: str,
        *,
        run_id: UUID,
        parent_run_id: Optional[UUID] = None,
        **kwargs: Any,
    ) -> None:
        """Run on new LLM token. Only available when streaming is enabled."""
        if not run_id:
            raise TracerException("No run_id provided for on_llm_new_token callback.")

        run_id_ = str(run_id)
        llm_run = self.run_map.get(run_id_)
        if llm_run is None or llm_run.run_type != "llm":
            raise TracerException(f"No LLM Run found to be traced for {run_id}")
        llm_run.events.append(
            {
                "name": "new_token",
                "time": datetime.utcnow(),
                "kwargs": {"token": token},
            },
        )
```

and add the chunk to the events dict.
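
A hedged sketch of what that BaseTracer change could look like, i.e. the same method with the optional `chunk` threaded through and recorded next to the token; exactly how the chunk is stored in the events dict is an assumption here, not the merged code:

```python
def on_llm_new_token(
        self,
        token: str,
        chunk: Optional[Union[GenerationChunk, ChatGenerationChunk]] = None,
        *,
        run_id: UUID,
        parent_run_id: Optional[UUID] = None,
        **kwargs: Any,
    ) -> None:
        """Run on new LLM token. Only available when streaming is enabled."""
        if not run_id:
            raise TracerException("No run_id provided for on_llm_new_token callback.")

        llm_run = self.run_map.get(str(run_id))
        if llm_run is None or llm_run.run_type != "llm":
            raise TracerException(f"No LLM Run found to be traced for {run_id}")
        llm_run.events.append(
            {
                "name": "new_token",
                "time": datetime.utcnow(),
                # Record the chunk alongside the token so tracers can see
                # streamed function-call deltas, not just text content.
                "kwargs": {"token": token, "chunk": chunk},
            },
        )
```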

@andrewBatutin (Contributor Author) commented:
Added, please review @hinthornw.

@hinthornw (Collaborator) left a comment:


LGTM pending lint passing. Thank you! 🎉

@hinthornw hinthornw changed the base branch from master to wfh/chunky August 25, 2023 04:18
@hinthornw hinthornw merged commit f771d85 into langchain-ai:wfh/chunky Aug 25, 2023
25 of 27 checks passed
Labels: 🤖:improvement (Medium size change to existing code to handle new use-cases), Ɑ: models (Related to LLMs or chat model modules)