
fast_llm: Fix UnboundLocalError, add docstring #1592

Open · wants to merge 1 commit into main

Conversation


@endolith commented on Feb 12, 2025

Previously, `fast_llm` was trying to access `response` in the `finally` block, which would fail with an `UnboundLocalError` if the chat call raised an exception (because of a missing API key, etc.):

`UnboundLocalError: cannot access local variable 'response' where it is not associated with a value`

Moved the `return` statement into the `try` block while keeping state restoration in `finally`, so errors propagate properly while conversation state is still restored.

This way, if there's an API key issue or any other problem, errors from `llm.interpreter.chat()` propagate up and can be handled at the appropriate level in the call stack.

Added a detailed docstring to clarify the purpose, behavior, and usage of the `fast_llm` function.
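
For illustration, here is a minimal sketch of the before/after shape of the fix. The attribute names (`llm.interpreter.messages`, `llm.interpreter.system_message`), the parameter names, and the return expression are assumptions based on the description above, not necessarily the exact code in this PR:

```python
# Sketch only; names are assumed from the PR description, not copied
# verbatim from the open-interpreter source.

def fast_llm(llm, system_message: str, prompt: str) -> str:
    """Run a one-off chat with a temporary system message.

    Saves the interpreter's conversation state, swaps in `system_message`,
    sends `prompt`, and restores the original state afterwards. Exceptions
    from `llm.interpreter.chat()` propagate to the caller.
    """
    old_messages = llm.interpreter.messages
    old_system_message = llm.interpreter.system_message

    # Buggy shape (before): accessing `response` inside `finally`
    # raises UnboundLocalError when chat() fails, masking the real error:
    #
    #     finally:
    #         llm.interpreter.messages = old_messages
    #         llm.interpreter.system_message = old_system_message
    #         return response[-1]["content"]  # `response` may be unbound

    try:
        llm.interpreter.system_message = system_message
        llm.interpreter.messages = []
        response = llm.interpreter.chat(prompt)
        # Returning inside `try` means that if chat() raised, `response`
        # is never touched and the original exception propagates.
        return response[-1]["content"]
    finally:
        # Runs on both success and failure, so conversation state is
        # always restored before the return value (or exception) leaves.
        llm.interpreter.messages = old_messages
        llm.interpreter.system_message = old_system_message
```

Note that the return expression in `try` is evaluated before the `finally` block runs, so the restored state never affects the returned value.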

