SDK: Document error handling & typed exceptions #91
Merged

6 commits
- d490eb5 SDK: Document error handling & typed exceptions (enyst)
- 373f6b2 SDK: Add Responses API example to error handling guide (enyst)
- 420032d SDK: Rename error-handling guide to llm-error-handling for consistency (enyst)
- e5dbf09 Docs: Make error handling guide conversation-first; keep llm- prefix (enyst)
- e61c923 Docs: Remove 'Notes for advanced users' section from error handling g… (enyst)
- f42790c Update sdk/guides/llm-error-handling.mdx (enyst)
sdk/guides/llm-error-handling.mdx
@@ -0,0 +1,182 @@
---
title: Exception Handling
description: Provider‑agnostic exceptions raised by the SDK and recommended patterns for handling them.
---

The SDK normalizes common provider errors into typed, provider‑agnostic exceptions so your application can handle them consistently across OpenAI, Anthropic, Groq, Google, and others.

This guide explains when these errors occur and shows recommended handling patterns for both direct LLM usage and higher‑level agent/conversation flows.

## Why typed exceptions?

LLM providers format errors differently (status codes, messages, exception classes). The SDK maps those into stable types so client apps don’t depend on provider‑specific details. Typical benefits:

- One code path to handle auth, rate limits, timeouts, service issues, and bad requests
- Clear behavior when conversation history exceeds the context window
- Backward compatibility when you switch providers or SDK versions
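To illustrate the first and last points above, here is a minimal sketch. The model identifiers are placeholders (substitute whatever your providers require); the point is that swapping the provider changes only the `LLM(...)` construction, while the `except` clauses stay the same.

```python icon="python"
from pydantic import SecretStr

from openhands.sdk import LLM
from openhands.sdk.llm import Message, TextContent
from openhands.sdk.llm.exceptions import LLMAuthenticationError, LLMError, LLMRateLimitError

# Placeholder model identifiers: the handlers below do not change between providers.
for model in ("claude-sonnet-4-20250514", "gpt-4o"):
    llm = LLM(model=model, api_key=SecretStr("your-key"))
    try:
        llm.completion([Message.user([TextContent(text="ping")])])
    except LLMAuthenticationError:
        print(f"{model}: invalid or missing credentials")
    except LLMRateLimitError:
        print(f"{model}: rate limited, retry later")
    except LLMError as err:
        print(f"{model}: {err}")
```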
## Quick start: Using agents and conversations

Agent-driven conversations are the common entry point. Exceptions from the underlying LLM calls bubble up from `conversation.run()` and `conversation.send_message(...)` when a condenser is not configured.
```python icon="python"
from pydantic import SecretStr
from openhands.sdk import Agent, Conversation, LLM
from openhands.sdk.llm.exceptions import (
    LLMError,
    LLMAuthenticationError,
    LLMRateLimitError,
    LLMTimeoutError,
    LLMServiceUnavailableError,
    LLMBadRequestError,
    LLMContextWindowExceedError,
)

llm = LLM(model="claude-sonnet-4-20250514", api_key=SecretStr("your-key"))
agent = Agent(llm=llm, tools=[])
conversation = Conversation(agent=agent, persistence_dir="./.conversations", workspace=".")

try:
    conversation.send_message("Continue the long analysis we started earlier…")
    conversation.run()

except LLMContextWindowExceedError:
    # Conversation is longer than the model’s context window
    # Options:
    # 1) Enable a condenser (recommended for long sessions)
    # 2) Shorten inputs or reset conversation
    print("Hit the context limit. Consider enabling a condenser.")

except LLMAuthenticationError:
    print("Invalid or missing API credentials. Check your API key or auth setup.")

except LLMRateLimitError:
    print("Rate limit exceeded. Back off and retry later.")

except LLMTimeoutError:
    print("Request timed out. Consider increasing timeout or retrying.")

except LLMServiceUnavailableError:
    print("Service unavailable or connectivity issue. Retry with backoff.")

except LLMBadRequestError:
    print("Bad request to provider. Validate inputs and arguments.")

except LLMError as e:
    # Fallback for other SDK LLM errors (parsing/validation, etc.)
    print(f"Unhandled LLM error: {e}")
```
### Avoiding context‑window errors with a condenser

If a condenser is configured, the SDK emits a condensation request event instead of raising `LLMContextWindowExceedError`. The agent will summarize older history and continue.

```python icon="python" highlight={5-10}
from openhands.sdk.context.condenser import LLMSummarizingCondenser

condenser = LLMSummarizingCondenser(
    llm=llm.model_copy(update={"usage_id": "condenser"}),
    max_size=10,
    keep_first=2,
)

agent = Agent(llm=llm, tools=[], condenser=condenser)
conversation = Conversation(agent=agent, persistence_dir="./.conversations", workspace=".")
```

See the dedicated guide: [Context Condenser](/sdk/guides/context-condenser).

## Handling errors with direct LLM calls

The same exceptions are raised from both `LLM.completion()` and `LLM.responses()` paths, so you can share handlers.
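One way to share them is to factor the `except` logic into a small helper and reuse it around both call styles. The `describe_llm_error` function below is an illustrative sketch, not part of the SDK; it relies only on the exception types documented on this page.

```python icon="python"
from openhands.sdk.llm.exceptions import (
    LLMError,
    LLMAuthenticationError,
    LLMBadRequestError,
    LLMContextWindowExceedError,
    LLMRateLimitError,
    LLMServiceUnavailableError,
    LLMTimeoutError,
)


def describe_llm_error(err: LLMError) -> str:
    """Map a typed SDK exception to a short, user-facing message (hypothetical helper)."""
    if isinstance(err, LLMContextWindowExceedError):
        return "Context window exceeded. Consider enabling a condenser."
    if isinstance(err, LLMAuthenticationError):
        return "Invalid or missing API credentials."
    if isinstance(err, LLMRateLimitError):
        return "Rate limit exceeded. Back off and retry later."
    if isinstance(err, LLMTimeoutError):
        return "Request timed out. Consider increasing timeout or retrying."
    if isinstance(err, LLMServiceUnavailableError):
        return "Service unavailable or connectivity issue. Retry with backoff."
    if isinstance(err, LLMBadRequestError):
        return "Bad request to provider. Validate inputs and arguments."
    return f"Unhandled LLM error: {err}"


# Usage sketch: the same handler works around completion() and responses() calls.
# try:
#     response = llm.completion([...])
# except LLMError as err:
#     print(describe_llm_error(err))
```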
### Example: Using completion()

```python icon="python"
from pydantic import SecretStr
from openhands.sdk import LLM
from openhands.sdk.llm import Message, TextContent
from openhands.sdk.llm.exceptions import (
    LLMError,
    LLMAuthenticationError,
    LLMRateLimitError,
    LLMTimeoutError,
    LLMServiceUnavailableError,
    LLMBadRequestError,
    LLMContextWindowExceedError,
)

llm = LLM(model="claude-sonnet-4-20250514", api_key=SecretStr("your-key"))

try:
    response = llm.completion([
        Message.user([TextContent(text="Summarize our design doc")])
    ])
    print(response.message)

except LLMContextWindowExceedError:
    print("Context window exceeded. Consider enabling a condenser.")
except LLMAuthenticationError:
    print("Invalid or missing API credentials.")
except LLMRateLimitError:
    print("Rate limit exceeded. Back off and retry later.")
except LLMTimeoutError:
    print("Request timed out. Consider increasing timeout or retrying.")
except LLMServiceUnavailableError:
    print("Service unavailable or connectivity issue. Retry with backoff.")
except LLMBadRequestError:
    print("Bad request to provider. Validate inputs and arguments.")
except LLMError as e:
    print(f"Unhandled LLM error: {e}")
```
### Example: Using responses()

```python icon="python"
from pydantic import SecretStr
from openhands.sdk import LLM
from openhands.sdk.llm import Message, TextContent
from openhands.sdk.llm.exceptions import LLMError, LLMContextWindowExceedError

llm = LLM(model="claude-sonnet-4-20250514", api_key=SecretStr("your-key"))

try:
    resp = llm.responses([
        Message.user([TextContent(text="Write a one-line haiku about code.")])
    ])
    print(resp.message)
except LLMContextWindowExceedError:
    print("Context window exceeded. Consider enabling a condenser.")
except LLMError as e:
    print(f"LLM error: {e}")
```
## Exception reference

All exceptions live under `openhands.sdk.llm.exceptions` unless noted.

- Provider/transport mapping (provider‑agnostic):
  - `LLMContextWindowExceedError` — Conversation exceeds the model’s context window. Without a condenser, raised for both Chat and Responses paths.
  - `LLMAuthenticationError` — Invalid or missing credentials (401/403 patterns).
  - `LLMRateLimitError` — Provider rate limit exceeded.
  - `LLMTimeoutError` — SDK/lower‑level timeout while waiting for the provider.
  - `LLMServiceUnavailableError` — Temporary connectivity/service outage (e.g., 5xx, connection issues).
  - `LLMBadRequestError` — Client‑side request issues (invalid params, malformed input).

- Response parsing/validation:
  - `LLMMalformedActionError` — Model returned a malformed action.
  - `LLMNoActionError` — Model did not return an action when one was expected.
  - `LLMResponseError` — Could not extract an action from the response.
  - `FunctionCallConversionError` — Failed converting tool/function call payloads.
  - `FunctionCallValidationError` — Tool/function call arguments failed validation.
  - `FunctionCallNotExistsError` — Model referenced an unknown tool/function.
  - `LLMNoResponseError` — Provider returned an empty/invalid response (seen rarely, e.g., with some Gemini models).

- Cancellation:
  - `UserCancelledError` — A user aborted the operation.
  - `OperationCancelled` — A running operation was cancelled programmatically.

All of the above (except the explicit cancellation types) inherit from `LLMError`, so you can implement a catch‑all for unexpected SDK LLM errors while still keeping fine‑grained handlers for the most common cases.