
Add simple fix to update container status after llm complete #8311

Conversation

KedoKudo
Contributor

Describe your changes

This PR fixes an issue when using LangChain with StreamlitCallbackHandler.
Currently, the container status is stuck at "running" after displaying all content.

[screenshot: container status stuck at "running"]

This PR adds one more step that updates the container status after all content has been displayed:

[screenshot: container status showing "complete" after the fix]

GitHub Issue Link (if applicable)

This issue is reported in langchain-ai/langchain#11398

Testing Plan

  • Explanation of why no additional tests are needed: NA
  • Unit Tests (JS and/or Python): NA
  • E2E Tests: NA
  • Any manual testing needed?: Yes, please build a small chatbot with the example code (see the sketch below), using LangChain and the callback, to confirm that the container status is updated after all tokens have been received from the LLM.
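For reference, a minimal test app might look like the following. This is only a sketch, not part of the PR: it assumes an OPENAI_API_KEY is set, the duckduckgo-search package is installed, and LangChain-era import paths (which vary by version).

    import streamlit as st
    from langchain.agents import AgentType, initialize_agent, load_tools
    from langchain_community.callbacks import StreamlitCallbackHandler
    from langchain_openai import OpenAI

    llm = OpenAI(temperature=0, streaming=True)
    tools = load_tools(["ddg-search"])
    agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)

    if prompt := st.chat_input():
        st.chat_message("user").write(prompt)
        with st.chat_message("assistant"):
            # The callback streams the agent's thoughts into a status container;
            # with this fix the container flips to "complete" once the LLM is done.
            st_callback = StreamlitCallbackHandler(st.container())
            response = agent.run(prompt, callbacks=[st_callback])
            st.write(response)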

Contribution License Agreement

By submitting this pull request you agree that all contributions to this project are made under the Apache 2.0 license.

@LukasMasuch
Collaborator

LukasMasuch commented Mar 18, 2024

@KedoKudo Thanks for the contribution. I believe the complete method was supposed to take care of this. It is called in on_agent_finish, but there are probably situations where this isn't called:

    def complete(self, final_label: str | None = None) -> None:
        """Finish the thought."""
        if final_label is None and self._state == LLMThoughtState.RUNNING_TOOL:
            assert (
                self._last_tool is not None
            ), "_last_tool should never be null when _state == RUNNING_TOOL"
            final_label = self._labeler.get_tool_label(
                self._last_tool, is_complete=True
            )

        if self._last_tool and self._last_tool.name == "_Exception":
            self._state = LLMThoughtState.ERROR
        elif self._state != LLMThoughtState.ERROR:
            self._state = LLMThoughtState.COMPLETE

        if self._collapse_on_complete:
            # Add a quick delay to show the user the final output before we collapse
            time.sleep(0.25)

        self._container.update(
            label=final_label,
            expanded=False if self._collapse_on_complete else None,
            state="error" if self._state == LLMThoughtState.ERROR else "complete",
        )

I'm wondering if we should just call the following here:

complete(self._labeler.get_final_agent_thought_label())
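For illustration, a rough sketch of how that call could be wired into the handler's on_llm_end. The placement and the attribute names _current_thought and _thought_labeler are assumptions here, not the actual diff; the exact shape of the fix is in the PR itself:

    def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
        # Existing end-of-stream handling would stay here; the added step below
        # marks the status container "complete" instead of leaving it "running".
        if self._current_thought is not None:  # assumed attribute name
            self._current_thought.complete(
                self._thought_labeler.get_final_agent_thought_label()
            )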

@LukasMasuch LukasMasuch added the type:bug, security-assessment-completed, change:bugfix, and impact:users labels and removed the type:bug label Mar 18, 2024
@LukasMasuch LukasMasuch self-requested a review March 18, 2024 20:04
@KedoKudo
Contributor Author

@KedoKudo Thanks for the contribution. I believe the complete method was supposed to take care of this, but there are probably situations where this isn't called: [quotes the complete method above]

I'm wondering if we should just call the following here:

complete(self._labeler.get_final_agent_thought_label())

Using the self.complete method also works, but I needed to modify the behavior of complete slightly to make sure the container stays properly expanded when requested.
The changes are now in the PR.

@LukasMasuch LukasMasuch changed the title add simple fix to update container status after llm complete Add simple fix to update container status after llm complete Mar 24, 2024
@LukasMasuch
Collaborator

Another aspect that also doesn't seem to work well right now is when the LLM runs into an error. It would be awesome if you could change the on_llm_error function to:

    def on_llm_error(self, error: BaseException, *args: Any, **kwargs: Any) -> None:
        self._container.exception(error)
        self._state = LLMThoughtState.ERROR
        self.complete("LLM encountered an error...")

@KedoKudo
Contributor Author

Another aspect that also doesn't seem to work well right now is when the LLM runs into an error. It would be awesome if you could change the on_llm_error function to: [quotes the on_llm_error suggestion above]

The suggested change has been added to this PR.
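For anyone manually exercising the error path, one option is to drive the handler directly with a synthetic failure. This is a hypothetical snippet, not part of the PR, and it uses Streamlit's internal streamlit.external.langchain import path rather than the LangChain wrapper:

    import streamlit as st
    from streamlit.external.langchain import StreamlitCallbackHandler

    handler = StreamlitCallbackHandler(st.container())
    # Start a thought so there is a status container to update...
    handler.on_llm_start({}, ["test prompt"])
    # ...then feed it a synthetic error. The container should render the
    # exception and flip to the error state with the
    # "LLM encountered an error..." label.
    handler.on_llm_error(RuntimeError("synthetic failure"))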


@LukasMasuch LukasMasuch left a comment


LGTM 👍

@LukasMasuch LukasMasuch enabled auto-merge (squash) March 25, 2024 19:44
@LukasMasuch LukasMasuch merged commit 430ee43 into streamlit:develop Mar 25, 2024
34 of 35 checks passed
@KedoKudo KedoKudo deleted the fix_streamlit_langchain_container_status branch April 19, 2024 20:56