fix(provider): force Gemini chat client to use managed httpx client#8112
Conversation
Hey - I've found 2 issues, and left some high level feedback:
- Consider initializing `self._http_client` to `None` in the class constructor so that you don't need `hasattr` checks and static type checkers / linters can reliably see the attribute.
- If `_init_client` can be called multiple times during the lifetime of the provider, you may want to either reuse or explicitly close any existing `self._http_client` before creating a new `httpx.AsyncClient` to avoid leaking connections.
## Individual Comments
### Comment 1
<location path="astrbot/core/provider/sources/gemini_source.py" line_range="91-95" />
<code_context>
+ async_client_kwargs["proxy"] = proxy
+ logger.info("[Gemini] 使用代理")
+
+ self._http_client = httpx.AsyncClient(**async_client_kwargs)
+ http_options.httpx_async_client = self._http_client
+
</code_context>
<issue_to_address>
**suggestion (bug_risk):** Align httpx.AsyncClient config (e.g. timeout) with http_options to avoid inconsistent behavior
`AsyncClient` is currently only configured with `trust_env` and `proxy`, while `http_options` carries `base_url` and a timeout (`self.timeout * 1000`). Depending on how `http_options.httpx_async_client` is used, this can lead to different timeouts/base URLs between the client and the rest of the HTTP stack. Please pass the same timeout (and, if applicable, `base_url`) into `AsyncClient` to keep behavior consistent.
```suggestion
# 强制使用 httpx 作为异步 HTTP 后端,避免 aiohttp 响应类型兼容问题 (#7564)
# 与 http_options 中的 base_url / timeout 保持一致,避免行为不一致
async_client_kwargs = {
"trust_env": True,
"base_url": self.api_base,
"timeout": self.timeout * 1000, # 毫秒(与 http_options 一致)
}
if proxy:
async_client_kwargs["proxy"] = proxy
logger.info("[Gemini] 使用代理")
```
</issue_to_address>
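One caveat when adapting this suggestion (an editorial note, not part of the review): `httpx` expresses timeouts in seconds, while `google-genai`'s `HttpOptions.timeout` is in milliseconds (hence the `self.timeout * 1000` in the diff), so the same numeric value cannot be passed to both unchanged. A minimal sketch of the conversion, with a hypothetical `timeout_s`:

```python
# Hypothetical provider-level timeout, in seconds.
timeout_s = 120

# google-genai HttpOptions.timeout expects milliseconds.
http_options_timeout_ms = timeout_s * 1000

# httpx.AsyncClient(timeout=...) expects seconds, so convert back
# rather than reusing the millisecond value directly.
httpx_timeout_s = http_options_timeout_ms / 1000

print(http_options_timeout_ms, httpx_timeout_s)  # → 120000 120.0
```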
### Comment 2
<location path="astrbot/core/provider/sources/gemini_source.py" line_range="92-94" />
<code_context>
)
+
+ # 强制使用 httpx 作为异步 HTTP 后端,避免 aiohttp 响应类型兼容问题 (#7564)
+ async_client_kwargs = {"trust_env": True}
if proxy:
- http_options.async_client_args = {"proxy": proxy}
- logger.info(f"[Gemini] 使用代理: {proxy}")
+ async_client_kwargs["proxy"] = proxy
+ logger.info("[Gemini] 使用代理")
+
</code_context>
<issue_to_address>
**question (bug_risk):** Re-evaluate `trust_env=True` if the intent is to only use the explicitly configured proxy
`trust_env=True` means httpx will also apply proxy and other settings from environment variables. If you want proxy behavior to be controlled only by the `proxy` argument and not by the environment, consider leaving `trust_env` as default or setting it to `False` when a custom proxy is used.
</issue_to_address>
Code Review
This pull request transitions the Gemini provider to use httpx.AsyncClient as the asynchronous HTTP backend to resolve compatibility issues with aiohttp. Feedback indicates a potential resource leak in the _init_client method, where new client instances are created during API key rotation without closing previous ones; a suggestion was provided to reuse the existing client.
```diff
+ async_client_kwargs = {"trust_env": True}
  if proxy:
-     http_options.async_client_args = {"proxy": proxy}
-     logger.info(f"[Gemini] 使用代理: {proxy}")
+     async_client_kwargs["proxy"] = proxy
+     logger.info("[Gemini] 使用代理")
+
+ self._http_client = httpx.AsyncClient(**async_client_kwargs)
+ http_options.httpx_async_client = self._http_client
```
The current implementation creates a new httpx.AsyncClient every time _init_client is called. Since _init_client is invoked by set_key during API key rotation (which happens automatically on certain errors), this will lead to a resource leak as the previous client instances are never closed. Since the proxy configuration is static for the provider instance, the httpx client should be initialized once and reused.
```diff
+ async_client_kwargs = {"trust_env": True}
  if proxy:
-     http_options.async_client_args = {"proxy": proxy}
-     logger.info(f"[Gemini] 使用代理: {proxy}")
+     async_client_kwargs["proxy"] = proxy
+     logger.info("[Gemini] 使用代理")
+ self._http_client = httpx.AsyncClient(**async_client_kwargs)
+ http_options.httpx_async_client = self._http_client
```
```suggestion
if not getattr(self, "_http_client", None) or self._http_client.is_closed:
    async_client_kwargs = {"trust_env": True}
    if proxy:
        async_client_kwargs["proxy"] = proxy
        logger.info("[Gemini] 使用代理")
    self._http_client = httpx.AsyncClient(**async_client_kwargs)
    http_options.httpx_async_client = self._http_client
```
References
- Synchronous code blocks without 'await' are executed atomically in a single-threaded asyncio event loop, ensuring the check-and-set logic for the HTTP client is safe from race conditions.
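The claim above can be illustrated with a self-contained sketch (a plain `object()` stands in for the real `httpx.AsyncClient`): because the check and the assignment run with no `await` between them, concurrent tasks on a single event loop cannot both create a client.

```python
import asyncio

class Provider:
    def __init__(self):
        self._http_client = None

    def _init_client(self):
        # Synchronous check-and-set: no await point between the check and
        # the assignment, so it executes atomically on a single-threaded loop.
        if self._http_client is None:
            self._http_client = object()  # stand-in for httpx.AsyncClient
        return self._http_client

async def main():
    p = Provider()

    async def worker():
        return p._init_client()

    clients = await asyncio.gather(*(worker() for _ in range(100)))
    # Every task observes the same client instance.
    print(len({id(c) for c in clients}))  # → 1

asyncio.run(main())
```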
Force-pushed 9303b36 to d117686 · Compare
@sourcery-ai review

Thanks for the review. All comments have been addressed in the amended commit:
Hey - I've left some high level feedback:
- The use of `asyncio.get_event_loop().create_task(self._http_client.aclose())` inside `_init_client` can be fragile (especially under `asyncio.run` or in environments without a running loop); consider making `_init_client` async and explicitly awaiting the old client close, or otherwise closing it from a known-running event loop instead of fire-and-forget.
- Since `terminate()` now closes both `self.client` and `self._http_client`, double-check whether `genai.Client.aclose()` might already close a provided `httpx_async_client`; if so, it may be cleaner to rely on one ownership path to avoid accidentally double-closing or obscuring errors from one of the closes.
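A sketch of the suggested shape (a hypothetical `FakeClient` stands in for `httpx.AsyncClient`, though the `aclose()`/`is_closed` surface matches httpx's real API): making `_init_client` async lets the old client's close be awaited from the running loop rather than fired-and-forgotten.

```python
import asyncio

class FakeClient:
    """Stand-in for httpx.AsyncClient with the same close surface."""
    def __init__(self):
        self.is_closed = False

    async def aclose(self):
        self.is_closed = True

class Provider:
    def __init__(self):
        self._http_client = None

    async def _init_client(self):
        # Explicitly await the old client's close instead of relying on
        # asyncio.get_event_loop().create_task(...) fire-and-forget.
        if self._http_client is not None and not self._http_client.is_closed:
            await self._http_client.aclose()
        self._http_client = FakeClient()

async def main():
    p = Provider()
    await p._init_client()
    first = p._http_client
    await p._init_client()  # rotate: old client closed before replacement
    print(first.is_closed, p._http_client is first)  # → True False

asyncio.run(main())
```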
Force-pushed d117686 to 62cecb1 · Compare
Thanks for the review. Changes applied:

@sourcery-ai review
Hey - I've left some high level feedback:
- If `_init_client` can be called more than once during the lifetime of the provider, consider closing any existing `_http_client` before overwriting it to avoid leaking `httpx.AsyncClient` instances.
- In `terminate`, you may want to mirror the `_http_client` handling for `self.client` by setting `self.client = None` after `aclose()` to make the lifecycle management explicit and avoid accidental reuse.
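A sketch of that `terminate` shape (fake clients stand in for the real `genai.Client` and `httpx.AsyncClient`): close each client, then clear the attribute, so repeated termination is a no-op and accidental reuse of a closed client is impossible.

```python
import asyncio

class FakeClient:
    def __init__(self):
        self.is_closed = False

    async def aclose(self):
        self.is_closed = True

class Provider:
    def __init__(self):
        self.client = FakeClient()        # stand-in for genai.Client
        self._http_client = FakeClient()  # stand-in for httpx.AsyncClient

    async def terminate(self):
        # Mirror the handling for both clients: close, then clear.
        if self.client is not None:
            await self.client.aclose()
            self.client = None
        if self._http_client is not None and not self._http_client.is_closed:
            await self._http_client.aclose()
        self._http_client = None

async def main():
    p = Provider()
    await p.terminate()
    await p.terminate()  # idempotent: nothing left to close
    print(p.client, p._http_client)  # → None None

asyncio.run(main())
```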
Force-pushed 62cecb1 to e86253c · Compare
Thanks for the review. Changes applied:

@sourcery-ai review
Hey - I've left some high level feedback:
- The client shutdown logic in `terminate()` is a bit spread across `self.client`, `_stale_http_clients`, and `_http_client`; consider centralizing the httpx client lifecycle in a small helper (and making `terminate()` idempotent with per-client exception handling) to keep cleanup simpler and more robust.
- If `_init_client()` can be called frequently (e.g., via `set_key`), you might want to cap or immediately close the previous `_http_client` instead of accumulating them in `_stale_http_clients` until `terminate()` to avoid holding onto many idle clients for a long-lived process.
Force-pushed e86253c to cca5bde · Compare
Thanks for the review. Changes applied:

@sourcery-ai review
Hey - I've left some high level feedback:
- The broad exception swallowing in `_close_httpx_client` and `terminate` makes debugging connection shutdown issues difficult; consider at least logging at debug level when a close fails so unexpected errors don't get silently hidden.
- Instead of accumulating `_stale_http_clients` until `terminate`, consider closing the previous `_http_client` immediately when reinitializing (or bounding the list size) to avoid unbounded growth if `_init_client` is called repeatedly without a corresponding shutdown.
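A sketch of such a helper (hypothetical names; the stdlib `logging` module stands in for the project's `logger`): keep the broad `except` so shutdown never raises, but log the failure at debug level instead of hiding it.

```python
import asyncio
import logging

logger = logging.getLogger("gemini")

class FlakyClient:
    """Stand-in client whose close always raises, to exercise the log path."""
    is_closed = False

    async def aclose(self):
        raise RuntimeError("connection pool already torn down")

async def _close_httpx_client(client):
    # Centralized close: skip no-ops, log failures instead of swallowing them.
    if client is None or client.is_closed:
        return
    try:
        await client.aclose()
    except Exception:
        logger.debug("[Gemini] failed to close httpx client", exc_info=True)

# The failure is logged at debug level; no exception propagates.
asyncio.run(_close_httpx_client(FlakyClient()))
```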
When both aiohttp and httpx are installed, google-genai prefers aiohttp as the async HTTP backend. In error response paths, the aiohttp backend returns raw aiohttp.ClientResponse objects that google-genai cannot handle, masking real API errors with:

Unsupported response type: <class 'aiohttp.client_reqrep.ClientResponse'>

This fix explicitly creates an httpx.AsyncClient and passes it via HttpOptions.httpx_async_client, ensuring the chat provider always uses the httpx backend. The managed client is closed in terminate().

- Preserve HTTP_PROXY/HTTPS_PROXY support via trust_env=True.
- Preserve provider-level proxy via httpx.AsyncClient(proxy=...).
- Avoid logging full proxy URLs for security.

Fixes AstrBotDevs#7564
Force-pushed cca5bde to 33061c4 · Compare
Thanks for the review. Changes applied:

@sourcery-ai review
Summary
Fixes #7564.
When both `aiohttp` and `httpx` are installed, `google-genai` prefers `aiohttp` as the async HTTP backend. In error response paths, the `aiohttp` backend returns raw `aiohttp.ClientResponse` objects that `google-genai` cannot handle, masking real Gemini API errors with:

Unsupported response type: <class 'aiohttp.client_reqrep.ClientResponse'>

Changes
- Explicitly create an `httpx.AsyncClient` and pass it via `HttpOptions.httpx_async_client`, ensuring the chat provider always uses the `httpx` backend.
- Close the managed `httpx` client in `terminate()`.
- Preserve default `HTTP_PROXY`/`HTTPS_PROXY` behavior via `trust_env=True` when no provider proxy is set.
- Pass the provider-level `proxy` via `httpx.AsyncClient(proxy=...)`.

Testing
- `gemini_source.py` initializes without errors.
- The `aiohttp` backend path is no longer taken because `httpx_async_client` is explicitly provided.

Summary by Sourcery
Ensure the Gemini provider always uses a managed httpx async client and properly cleans it up on termination to avoid backend incompatibilities and resource leaks.
Bug Fixes:
Enhancements: