From d490eb5c2526fb396316faf69c07389ad437736c Mon Sep 17 00:00:00 2001
From: enyst
Date: Fri, 7 Nov 2025 15:57:27 +0000
Subject: [PATCH 1/6] SDK: Document error handling & typed exceptions

- New guide: sdk/guides/error-handling.mdx
- Navigation: add under SDK > Guides > LLM Features

Refs OpenHands/software-agent-sdk#980

Co-authored-by: openhands
---
 docs.json                     |   3 +-
 sdk/guides/error-handling.mdx | 140 ++++++++++++++++++++++++++++++++++
 2 files changed, 142 insertions(+), 1 deletion(-)
 create mode 100644 sdk/guides/error-handling.mdx

diff --git a/docs.json b/docs.json
index b5543491..a256b389 100644
--- a/docs.json
+++ b/docs.json
@@ -194,7 +194,8 @@
         "sdk/guides/llm-registry",
         "sdk/guides/llm-routing",
         "sdk/guides/llm-reasoning",
-        "sdk/guides/llm-image-input"
+        "sdk/guides/llm-image-input",
+        "sdk/guides/error-handling"
       ]
     },
     {
diff --git a/sdk/guides/error-handling.mdx b/sdk/guides/error-handling.mdx
new file mode 100644
index 00000000..733ce7a9
--- /dev/null
+++ b/sdk/guides/error-handling.mdx
@@ -0,0 +1,140 @@
+---
+title: Error Handling & SDK Exceptions
+description: Provider‑agnostic exceptions raised by the SDK and recommended patterns for handling them.
+---
+
+The SDK normalizes common provider errors into typed, provider‑agnostic exceptions so your application can handle them consistently across OpenAI, Anthropic, Groq, Google, and others.
+
+This guide explains when these errors occur and shows recommended handling patterns for both direct LLM usage and higher‑level agent/conversation flows.
+
+## Why typed exceptions?
+
+LLM providers format errors differently (status codes, messages, exception classes). The SDK maps those into stable types so client apps don’t depend on provider‑specific details. Typical benefits:
+
+- One code path to handle auth, rate limits, timeouts, service issues, and bad requests
+- Clear behavior when conversation history exceeds the context window
+- Backward compatibility when you switch providers or SDK versions
+
+## Quick start: handle errors around LLM calls
+
+```python icon="python"
+from pydantic import SecretStr
+from openhands.sdk import LLM
+from openhands.sdk.llm import Message, TextContent
+from openhands.sdk.llm.exceptions import (
+    LLMError,
+    LLMAuthenticationError,
+    LLMRateLimitError,
+    LLMTimeoutError,
+    LLMServiceUnavailableError,
+    LLMBadRequestError,
+    LLMContextWindowExceedError,
+)
+
+llm = LLM(model="claude-sonnet-4-20250514", api_key=SecretStr("your-key"))
+
+try:
+    response = llm.completion([
+        Message.user([TextContent(text="Summarize our design doc")])
+    ])
+    print(response.message)
+
+except LLMContextWindowExceedError:
+    # Conversation is longer than the model’s context window
+    # Options:
+    # 1) Enable a condenser (recommended for long sessions)
+    # 2) Shorten inputs or reset conversation
+    print("Context window exceeded. Consider enabling a condenser.")
+
+except LLMAuthenticationError:
+    print("Invalid or missing API credentials. Check your API key or auth setup.")
+
+except LLMRateLimitError:
+    print("Rate limit exceeded. Back off and retry later.")
+
+except LLMTimeoutError:
+    print("Request timed out. Consider increasing timeout or retrying.")
+
+except LLMServiceUnavailableError:
+    print("Service unavailable or connectivity issue. Retry with backoff.")
+
+except LLMBadRequestError:
+    print("Bad request to provider. Validate inputs and arguments.")
+
+except LLMError as e:
+    # Fallback for other SDK LLM errors (parsing/validation, etc.)
+    print(f"Unhandled LLM error: {e}")
+```
+
+The same exceptions are raised from both `LLM.completion()` and `LLM.responses()` paths.
+
+## Using agents and conversations
+
+When you use `Agent` and `Conversation`, LLM exceptions propagate out of `conversation.run()` and `conversation.send_message(...)` if a condenser is not present.
+
+```python icon="python"
+from pydantic import SecretStr
+from openhands.sdk import Agent, Conversation, LLM
+from openhands.sdk.llm.exceptions import LLMContextWindowExceedError
+
+llm = LLM(model="claude-sonnet-4-20250514", api_key=SecretStr("your-key"))
+agent = Agent(llm=llm, tools=[])
+conv = Conversation(agent=agent, persistence_dir="./.conversations", workspace=".")
+
+try:
+    conv.send_message("Continue the long analysis we started earlier…")
+    conv.run()
+except LLMContextWindowExceedError:
+    print("Hit the context limit. Add a condenser to avoid this in long sessions.")
+```
+
+### Avoiding context‑window errors with a condenser
+
+If a condenser is configured, the SDK emits a condensation request event instead of raising `LLMContextWindowExceedError`. The agent will summarize older history and continue.
+
+```python icon="python" highlight={5-10}
+from openhands.sdk.context.condenser import LLMSummarizingCondenser
+
+condenser = LLMSummarizingCondenser(
+    llm=llm.model_copy(update={"usage_id": "condenser"}),
+    max_size=10,
+    keep_first=2,
+)
+
+agent = Agent(llm=llm, tools=[], condenser=condenser)
+conversation = Conversation(agent=agent, persistence_dir="./.conversations", workspace=".")
+```
+
+See the dedicated guide: [Context Condenser](/sdk/guides/context-condenser).
+
+## Exception reference
+
+All exceptions live under `openhands.sdk.llm.exceptions` unless noted.
+
+- Provider/transport mapping (provider‑agnostic):
+  - `LLMContextWindowExceedError` — Conversation exceeds the model’s context window. Without a condenser, raised for both Chat and Responses paths.
+  - `LLMAuthenticationError` — Invalid or missing credentials (401/403 patterns).
+  - `LLMRateLimitError` — Provider rate limit exceeded.
+  - `LLMTimeoutError` — SDK/lower‑level timeout while waiting for the provider.
+  - `LLMServiceUnavailableError` — Temporary connectivity/service outage (e.g., 5xx, connection issues).
+  - `LLMBadRequestError` — Client‑side request issues (invalid params, malformed input).
+
+- Response parsing/validation:
+  - `LLMMalformedActionError` — Model returned a malformed action.
+  - `LLMNoActionError` — Model did not return an action when one was expected.
+  - `LLMResponseError` — Could not extract an action from the response.
+  - `FunctionCallConversionError` — Failed converting tool/function call payloads.
+  - `FunctionCallValidationError` — Tool/function call arguments failed validation.
+  - `FunctionCallNotExistsError` — Model referenced an unknown tool/function.
+  - `LLMNoResponseError` — Provider returned an empty/invalid response (seen rarely, e.g., some Gemini models).
+
+- Cancellation:
+  - `UserCancelledError` — A user aborted the operation.
+  - `OperationCancelled` — A running operation was cancelled programmatically.
+
+All of the above (except the explicit cancellation types) inherit from `LLMError`, so you can implement a catch‑all for unexpected SDK LLM errors while still keeping fine‑grained handlers for the most common cases.
+
+## Notes for advanced users
+
+- The SDK performs centralized exception mapping, translating provider/LiteLLM exceptions into the types above. This keeps your app free from provider‑specific exception imports.
+- For long‑running sessions, we strongly recommend configuring a condenser to avoid context‑window interruptions. See the [Context Condenser](/sdk/guides/context-condenser) guide for details and examples.
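The guide added in PATCH 1/6 advises backing off and retrying on `LLMRateLimitError` and `LLMServiceUnavailableError`. That advice can be sketched as a small retry helper; the exception classes below are self-contained stand-ins for the real `openhands.sdk.llm.exceptions` types, and the helper itself is an illustrative pattern, not SDK code:

```python
import random
import time


class LLMRateLimitError(Exception):
    """Stand-in for the SDK's rate-limit error."""


class LLMServiceUnavailableError(Exception):
    """Stand-in for the SDK's service-unavailable error."""


RETRYABLE = (LLMRateLimitError, LLMServiceUnavailableError)


def call_with_backoff(fn, max_attempts=4, base_delay=0.01):
    """Call fn(), retrying transient errors with exponential backoff and jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except RETRYABLE:
            if attempt == max_attempts:
                raise  # out of retries: surface the typed error to the caller
            # Wait base_delay * 2^(attempt-1), plus a little jitter.
            time.sleep(base_delay * 2 ** (attempt - 1) + random.uniform(0, base_delay))


attempts = {"n": 0}


def flaky_call():
    """Simulated LLM call that is rate-limited twice, then succeeds."""
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise LLMRateLimitError("429 from provider")
    return "ok"


print(call_with_backoff(flaky_call))  # succeeds on the third attempt
```

Only the two transient error types are retried here; auth and bad-request errors fail fast, which matches the guide's handler-by-handler advice.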
From 373f6b23e246a339a0035e262b211a36550f4c24 Mon Sep 17 00:00:00 2001
From: enyst
Date: Fri, 7 Nov 2025 16:06:47 +0000
Subject: [PATCH 2/6] SDK: Add Responses API example to error handling guide

- Show try/except around LLM.responses()

Refs OpenHands/software-agent-sdk#980

Co-authored-by: openhands
---
 sdk/guides/error-handling.mdx | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/sdk/guides/error-handling.mdx b/sdk/guides/error-handling.mdx
index 733ce7a9..b25c0a45 100644
--- a/sdk/guides/error-handling.mdx
+++ b/sdk/guides/error-handling.mdx
@@ -68,6 +68,27 @@ except LLMError as e:
 
 The same exceptions are raised from both `LLM.completion()` and `LLM.responses()` paths.
 
+### Example: Using the Responses API
+
+```python icon="python"
+from pydantic import SecretStr
+from openhands.sdk import LLM
+from openhands.sdk.llm import Message, TextContent
+from openhands.sdk.llm.exceptions import LLMError, LLMContextWindowExceedError
+
+llm = LLM(model="claude-sonnet-4-20250514", api_key=SecretStr("your-key"))
+
+try:
+    resp = llm.responses([
+        Message.user([TextContent(text="Write a one-line haiku about code.")])
+    ])
+    print(resp.message)
+except LLMContextWindowExceedError:
+    print("Context window exceeded. Consider enabling a condenser.")
+except LLMError as e:
+    print(f"LLM error: {e}")
+```
+
 ## Using agents and conversations
 
 When you use `Agent` and `Conversation`, LLM exceptions propagate out of `conversation.run()` and `conversation.send_message(...)` if a condenser is not present.
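The guide describes the SDK as mapping provider-specific errors into stable, provider-agnostic types. Assuming the mapping keys off HTTP status codes, its general shape looks roughly like this — the class names mirror the guide, but the mapping function is a hypothetical sketch, not the SDK's actual implementation:

```python
class LLMError(Exception):
    """Base class: the catch-all for SDK-style LLM errors."""

class LLMAuthenticationError(LLMError): pass       # 401/403 patterns
class LLMRateLimitError(LLMError): pass            # 429
class LLMServiceUnavailableError(LLMError): pass   # 5xx, connectivity
class LLMBadRequestError(LLMError): pass           # other 4xx


def map_provider_error(status_code: int, detail: str = "") -> LLMError:
    """Translate a provider HTTP status into a provider-agnostic exception.

    Specific codes are checked before the generic 4xx bucket, so 429
    becomes a rate-limit error rather than a bad request.
    """
    if status_code in (401, 403):
        return LLMAuthenticationError(detail)
    if status_code == 429:
        return LLMRateLimitError(detail)
    if 500 <= status_code < 600:
        return LLMServiceUnavailableError(detail)
    if 400 <= status_code < 500:
        return LLMBadRequestError(detail)
    return LLMError(detail)
```

Because every returned type inherits from `LLMError`, client code can keep one fallback handler while adding fine-grained ones only where it cares.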
From 420032dbba17afd1ef7f650435023f9c4bb939ee Mon Sep 17 00:00:00 2001
From: enyst
Date: Fri, 7 Nov 2025 16:25:05 +0000
Subject: [PATCH 3/6] SDK: Rename error-handling guide to llm-error-handling
 for consistency

- Rename file to align with other LLM features
- Update navigation to point at new path

Refs OpenHands/software-agent-sdk#980

Co-authored-by: openhands
---
 docs.json                                                 | 2 +-
 sdk/guides/{error-handling.mdx => llm-error-handling.mdx} | 0
 2 files changed, 1 insertion(+), 1 deletion(-)
 rename sdk/guides/{error-handling.mdx => llm-error-handling.mdx} (100%)

diff --git a/docs.json b/docs.json
index a256b389..c2b50150 100644
--- a/docs.json
+++ b/docs.json
@@ -195,7 +195,7 @@
         "sdk/guides/llm-routing",
         "sdk/guides/llm-reasoning",
         "sdk/guides/llm-image-input",
-        "sdk/guides/error-handling"
+        "sdk/guides/llm-error-handling"
       ]
     },
     {
diff --git a/sdk/guides/error-handling.mdx b/sdk/guides/llm-error-handling.mdx
similarity index 100%
rename from sdk/guides/error-handling.mdx
rename to sdk/guides/llm-error-handling.mdx

From e5dbf09dde4fbd4e98d2a23d9b4e65c4f9245063 Mon Sep 17 00:00:00 2001
From: enyst
Date: Fri, 7 Nov 2025 17:28:35 +0000
Subject: [PATCH 4/6] Docs: Make error handling guide conversation-first; keep
 llm- prefix

- Lead with Agent/Conversation examples and bubbling behavior
- Move LLM examples (completion/responses) into secondary section

Refs OpenHands/software-agent-sdk#980

Co-authored-by: openhands
---
 sdk/guides/llm-error-handling.mdx | 113 ++++++++++++++++++------------
 1 file changed, 69 insertions(+), 44 deletions(-)

diff --git a/sdk/guides/llm-error-handling.mdx b/sdk/guides/llm-error-handling.mdx
index b25c0a45..ea378451 100644
--- a/sdk/guides/llm-error-handling.mdx
+++ b/sdk/guides/llm-error-handling.mdx
@@ -15,12 +15,13 @@ LLM providers format errors differently (status codes, messages, exception class
 - Clear behavior when conversation history exceeds the context window
 - Backward compatibility when you switch providers or SDK versions
 
-## Quick start: handle errors around LLM calls
+## Quick start: Using agents and conversations
+
+Agent-driven conversations are the common entry point. Exceptions from the underlying LLM calls bubble up from `conversation.run()` and `conversation.send_message(...)` when a condenser is not configured.
 
 ```python icon="python"
 from pydantic import SecretStr
-from openhands.sdk import LLM
-from openhands.sdk.llm import Message, TextContent
+from openhands.sdk import Agent, Conversation, LLM
 from openhands.sdk.llm.exceptions import (
     LLMError,
     LLMAuthenticationError,
@@ -32,19 +33,19 @@ from openhands.sdk.llm.exceptions import (
 )
 
 llm = LLM(model="claude-sonnet-4-20250514", api_key=SecretStr("your-key"))
+agent = Agent(llm=llm, tools=[])
+conversation = Conversation(agent=agent, persistence_dir="./.conversations", workspace=".")
 
 try:
-    response = llm.completion([
-        Message.user([TextContent(text="Summarize our design doc")])
-    ])
-    print(response.message)
+    conversation.send_message("Continue the long analysis we started earlier…")
+    conversation.run()
 
 except LLMContextWindowExceedError:
     # Conversation is longer than the model’s context window
     # Options:
     # 1) Enable a condenser (recommended for long sessions)
     # 2) Shorten inputs or reset conversation
-    print("Context window exceeded. Consider enabling a condenser.")
+    print("Hit the context limit. Consider enabling a condenser.")
 
 except LLMAuthenticationError:
     print("Invalid or missing API credentials. Check your API key or auth setup.")
@@ -66,68 +67,92 @@ except LLMError as e:
     print(f"Unhandled LLM error: {e}")
 ```
 
-The same exceptions are raised from both `LLM.completion()` and `LLM.responses()` paths.
-
-### Example: Using the Responses API
+### Avoiding context‑window errors with a condenser
+
+If a condenser is configured, the SDK emits a condensation request event instead of raising `LLMContextWindowExceedError`. The agent will summarize older history and continue.
+
+```python icon="python" highlight={5-10}
+from openhands.sdk.context.condenser import LLMSummarizingCondenser
+
+condenser = LLMSummarizingCondenser(
+    llm=llm.model_copy(update={"usage_id": "condenser"}),
+    max_size=10,
+    keep_first=2,
+)
+
+agent = Agent(llm=llm, tools=[], condenser=condenser)
+conversation = Conversation(agent=agent, persistence_dir="./.conversations", workspace=".")
+```
+
+See the dedicated guide: [Context Condenser](/sdk/guides/context-condenser).
+
+## Handling errors with direct LLM calls
+
+The same exceptions are raised from both `LLM.completion()` and `LLM.responses()` paths, so you can share handlers.
+
+### Example: Using completion()
 
 ```python icon="python"
 from pydantic import SecretStr
 from openhands.sdk import LLM
 from openhands.sdk.llm import Message, TextContent
-from openhands.sdk.llm.exceptions import LLMError, LLMContextWindowExceedError
+from openhands.sdk.llm.exceptions import (
+    LLMError,
+    LLMAuthenticationError,
+    LLMRateLimitError,
+    LLMTimeoutError,
+    LLMServiceUnavailableError,
+    LLMBadRequestError,
+    LLMContextWindowExceedError,
+)
 
 llm = LLM(model="claude-sonnet-4-20250514", api_key=SecretStr("your-key"))
 
 try:
-    resp = llm.responses([
-        Message.user([TextContent(text="Write a one-line haiku about code.")])
+    response = llm.completion([
+        Message.user([TextContent(text="Summarize our design doc")])
     ])
-    print(resp.message)
+    print(response.message)
+
 except LLMContextWindowExceedError:
     print("Context window exceeded. Consider enabling a condenser.")
+except LLMAuthenticationError:
+    print("Invalid or missing API credentials.")
+except LLMRateLimitError:
+    print("Rate limit exceeded. Back off and retry later.")
+except LLMTimeoutError:
+    print("Request timed out. Consider increasing timeout or retrying.")
+except LLMServiceUnavailableError:
+    print("Service unavailable or connectivity issue. Retry with backoff.")
+except LLMBadRequestError:
+    print("Bad request to provider. Validate inputs and arguments.")
 except LLMError as e:
-    print(f"LLM error: {e}")
+    print(f"Unhandled LLM error: {e}")
 ```
 
-## Using agents and conversations
-
-When you use `Agent` and `Conversation`, LLM exceptions propagate out of `conversation.run()` and `conversation.send_message(...)` if a condenser is not present.
+### Example: Using responses()
 
 ```python icon="python"
 from pydantic import SecretStr
-from openhands.sdk import Agent, Conversation, LLM
-from openhands.sdk.llm.exceptions import LLMContextWindowExceedError
+from openhands.sdk import LLM
+from openhands.sdk.llm import Message, TextContent
+from openhands.sdk.llm.exceptions import LLMError, LLMContextWindowExceedError
 
 llm = LLM(model="claude-sonnet-4-20250514", api_key=SecretStr("your-key"))
-agent = Agent(llm=llm, tools=[])
-conv = Conversation(agent=agent, persistence_dir="./.conversations", workspace=".")
 
 try:
-    conv.send_message("Continue the long analysis we started earlier…")
-    conv.run()
+    resp = llm.responses([
+        Message.user([TextContent(text="Write a one-line haiku about code.")])
+    ])
+    print(resp.message)
 except LLMContextWindowExceedError:
-    print("Hit the context limit. Add a condenser to avoid this in long sessions.")
-```
-
-### Avoiding context‑window errors with a condenser
-
-If a condenser is configured, the SDK emits a condensation request event instead of raising `LLMContextWindowExceedError`. The agent will summarize older history and continue.
-
-```python icon="python" highlight={5-10}
-from openhands.sdk.context.condenser import LLMSummarizingCondenser
-
-condenser = LLMSummarizingCondenser(
-    llm=llm.model_copy(update={"usage_id": "condenser"}),
-    max_size=10,
-    keep_first=2,
-)
-
-agent = Agent(llm=llm, tools=[], condenser=condenser)
-conversation = Conversation(agent=agent, persistence_dir="./.conversations", workspace=".")
+    print("Context window exceeded. Consider enabling a condenser.")
+except LLMError as e:
+    print(f"LLM error: {e}")
 ```
 
-See the dedicated guide: [Context Condenser](/sdk/guides/context-condenser).
-
 ## Exception reference
 
 All exceptions live under `openhands.sdk.llm.exceptions` unless noted.

From e61c923aa298f53658ad80f24a9f0178348dec03 Mon Sep 17 00:00:00 2001
From: enyst
Date: Fri, 7 Nov 2025 18:25:13 +0000
Subject: [PATCH 5/6] Docs: Remove 'Notes for advanced users' section from
 error handling guide

- Content is redundant and covered elsewhere

Refs OpenHands/software-agent-sdk#980

Co-authored-by: openhands
---
 sdk/guides/llm-error-handling.mdx | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/sdk/guides/llm-error-handling.mdx b/sdk/guides/llm-error-handling.mdx
index ea378451..6680b122 100644
--- a/sdk/guides/llm-error-handling.mdx
+++ b/sdk/guides/llm-error-handling.mdx
@@ -180,7 +180,3 @@ All exceptions live under `openhands.sdk.llm.exceptions` unless noted.
 
 All of the above (except the explicit cancellation types) inherit from `LLMError`, so you can implement a catch‑all for unexpected SDK LLM errors while still keeping fine‑grained handlers for the most common cases.
 
-## Notes for advanced users
-
-- The SDK performs centralized exception mapping, translating provider/LiteLLM exceptions into the types above. This keeps your app free from provider‑specific exception imports.
-- For long‑running sessions, we strongly recommend configuring a condenser to avoid context‑window interruptions. See the [Context Condenser](/sdk/guides/context-condenser) guide for details and examples.
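PATCH 4/6 above promotes the condenser section, where `LLMSummarizingCondenser` is configured with `max_size=10` and `keep_first=2`. What those two knobs mean for a history list can be shown with a toy, self-contained sketch — this is an illustration of the shape of summarizing condensation, not the SDK's actual logic:

```python
def condense(events, max_size=10, keep_first=2):
    """Toy sketch of summarizing condensation.

    Keep the first `keep_first` events, replace the middle with a single
    summary marker, and keep the most recent events so the total length
    stays within `max_size`.
    """
    if len(events) <= max_size:
        return events
    tail_len = max_size - keep_first - 1  # reserve one slot for the summary
    summarized = len(events) - keep_first - tail_len
    summary = f"[condensed summary of {summarized} earlier events]"
    return events[:keep_first] + [summary] + events[-tail_len:]


history = [f"event-{i}" for i in range(20)]
print(len(condense(history)))  # 10: head of 2, one summary, tail of 7
```

In the real SDK the summary slot is produced by the condenser's own LLM call; the point here is only that the head and the most recent tail survive verbatim while the middle is compressed.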
From f42790cd6b45db10a64057ef71e921c42603b0e8 Mon Sep 17 00:00:00 2001
From: Engel Nyst
Date: Fri, 7 Nov 2025 20:15:18 +0100
Subject: [PATCH 6/6] Update sdk/guides/llm-error-handling.mdx

---
 sdk/guides/llm-error-handling.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/sdk/guides/llm-error-handling.mdx b/sdk/guides/llm-error-handling.mdx
index 6680b122..1ce4a2bd 100644
--- a/sdk/guides/llm-error-handling.mdx
+++ b/sdk/guides/llm-error-handling.mdx
@@ -1,5 +1,5 @@
 ---
-title: Error Handling & SDK Exceptions
+title: Exception Handling
 description: Provider‑agnostic exceptions raised by the SDK and recommended patterns for handling them.
 ---
 
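One detail implied by the finished guide's catch-all advice is handler ordering: Python matches `except` clauses top to bottom, so the fine-grained subclasses must appear before the `LLMError` fallback, or the fallback will shadow them. A self-contained illustration with stand-in classes (not the SDK's real hierarchy):

```python
class LLMError(Exception):
    """Stand-in base class, mirroring the guide's catch-all type."""

class LLMTimeoutError(LLMError):
    pass

class LLMContextWindowExceedError(LLMError):
    pass


def handle(exc: LLMError) -> str:
    """Dispatch on exception type: subclasses first, base class last."""
    try:
        raise exc
    except LLMContextWindowExceedError:
        return "condense-or-truncate"
    except LLMTimeoutError:
        return "retry"
    except LLMError:
        # Reached only when no earlier, more specific clause matched.
        return "report"


print(handle(LLMContextWindowExceedError()))  # prints "condense-or-truncate"
```

If the `except LLMError` clause were listed first, every subclass instance would fall into it, and the specific recovery strategies from the guide would never run.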