Close LLM client httpx pool before asyncio.run exits #70
Conversation
Adds aclose() to the LLMClient protocol and implements it on the Anthropic and OpenAI-compatible backends (delegating to the SDK's async close()). Each CLI flow that runs the client under asyncio.run (`ask`, `generate`, `tidy review`) now awaits aclose() in a finally block. Without this, the SDK's underlying httpx.AsyncClient is finalized after the loop has already shut down and trips a noisy "RuntimeError: Event loop is closed" traceback — particularly visible in `datasight ask --files`, which spawns one event loop per question. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Pull request overview
This PR adds an explicit async teardown hook to the LLM client abstraction and ensures CLI code paths close the underlying SDK/httpx connection pool before asyncio.run() shuts down the event loop, preventing the noisy "Task exception was never retrieved" tracebacks that `RuntimeError: Event loop is closed` produces during garbage collection.
Changes:
- Extend the `LLMClient` protocol with `aclose()` and implement it for Anthropic and OpenAI-compatible backends.
- Wrap key CLI async entrypoints (`ask`, `generate`, `tidy review`) in `try/finally` to `await llm_client.aclose()` within the same event loop.
- Update/add tests and test stubs to satisfy the new protocol and assert the SDK `close()` method is awaited.
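The protocol extension and delegation described above can be sketched in a few lines. This is a minimal illustration under assumptions, not the PR's actual code: the stub SDK object stands in for the real Anthropic/OpenAI SDK clients, which wrap an `httpx.AsyncClient`.

```python
import asyncio
from typing import Protocol


class LLMClient(Protocol):
    """LLM backend interface; aclose() is the teardown hook this PR adds."""

    async def aclose(self) -> None: ...


class _StubSDK:
    """Stand-in for an SDK client wrapping an httpx.AsyncClient (assumption)."""

    def __init__(self) -> None:
        self.closed = False

    async def close(self) -> None:
        # The real SDK drains its httpx.AsyncClient connection pool here.
        self.closed = True


class AnthropicLLMClient:
    """Backend sketch: aclose() simply delegates to the SDK's async close()."""

    def __init__(self, sdk: _StubSDK) -> None:
        self._sdk = sdk

    async def aclose(self) -> None:
        await self._sdk.close()


sdk = _StubSDK()
asyncio.run(AnthropicLLMClient(sdk).aclose())
print(sdk.closed)  # True: the pool was drained inside the running loop
```

Because `aclose()` is awaited inside the loop that `asyncio.run` manages, the pool shutdown never has to run during garbage collection after the loop is gone.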
Reviewed changes
Copilot reviewed 11 out of 11 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| src/datasight/llm.py | Add LLMClient.aclose() and implement pool shutdown for Anthropic and OpenAI-compatible clients. |
| src/datasight/cli.py | Ensure run_ask_pipeline closes the LLM client before loop teardown via try/finally. |
| src/datasight/cli_commands/generate.py | Ensure the generate command closes the LLM client within the _run() event loop. |
| src/datasight/cli_commands/tidy.py | Ensure the tidy review LLM call closes the LLM client within the _call() event loop. |
| tests/test_llm.py | Add async tests asserting aclose() awaits the underlying SDK close(). |
| tests/test_cli_tools.py | Add/update LLM stubs to provide aclose() for new teardown logic in pipelines. |
| tests/test_cli_commands.py | Update CLI test LLM stub to implement aclose(). |
| tests/test_verify.py | Update fake LLM clients to implement aclose() for protocol compatibility. |
| tests/test_tidy_review.py | Update fake LLM client to implement aclose(). |
| tests/test_generate_persist_db.py | Update stub LLM client to implement aclose(). |
| tests/test_agent_extra.py | Update fake LLM client to implement aclose(). |
```python
# Close the SDK's httpx pool before the event loop shuts down.
# Without this, asyncio.run closes the loop, the GC later finalizes
# the httpx client, and its scheduled aclose() trips
# "RuntimeError: Event loop is closed" — noisy, especially in
# `datasight ask --files` where this runs once per question.
try:
```
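In context, that comment guards a teardown roughly like the following sketch; the fake client and its `complete()` method are assumptions standing in for the real SDK-backed client, not repository code.

```python
import asyncio


class _FakeLLMClient:
    """Test double for the real client (assumption, not repository code)."""

    def __init__(self) -> None:
        self.closed = False

    async def complete(self, question: str) -> str:
        return f"answer to {question!r}"

    async def aclose(self) -> None:
        self.closed = True


async def run_pipeline(questions: list[str], client: _FakeLLMClient) -> list[str]:
    # Close the client in a finally block so the httpx pool is drained
    # while the event loop is still running, not during later GC.
    try:
        return [await client.complete(q) for q in questions]
    finally:
        await client.aclose()


client = _FakeLLMClient()
answers = asyncio.run(run_pipeline(["why?"], client))
print(client.closed)  # True
```

The `finally` clause runs before `asyncio.run` returns, so the close happens on a live loop even when the pipeline raises.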
Codecov Report

❌ Your patch check has failed because the patch coverage (76.52%) is below the target coverage (80.00%). You can increase the patch coverage or adjust the target coverage.

Additional details and impacted files

@@ Coverage Diff @@
## main #70 +/- ##
==========================================
- Coverage 86.87% 86.86% -0.01%
==========================================
Files 61 61
Lines 11539 11552 +13
==========================================
+ Hits 10024 10035 +11
- Misses 1515 1517 +2

☔ View full report in Codecov by Sentry.
Copilot caught a missed call site: `datasight verify` also creates an LLM client inside `asyncio.run` and never closes it, so it could still trip the same "Event loop is closed" traceback on GC. Wrap the client in the same try/finally pattern as the other CLI flows. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* Close LLM client httpx pool before asyncio.run exits
Summary

- Adds `aclose()` to the `LLMClient` protocol and implements it on `AnthropicLLMClient` and `_OpenAICompatibleClient` (delegates to the SDK's async `close()`, which drains the underlying `httpx.AsyncClient` connection pool).
- Wraps `cli.run_ask_pipeline`, `generate._run`, and `tidy._call` in `try/finally: await llm_client.aclose()` so the pool is closed inside the same event loop that uses it, before `asyncio.run` shuts the loop down.

Without this, the SDK's httpx client gets garbage-collected after the loop is already closed, and its scheduled `aclose()` trips `RuntimeError: Event loop is closed`; asyncio prints that as a "Task exception was never retrieved" traceback. Particularly noisy with `datasight ask --files`, which spawns one event loop per question: every row in a benchmark batch produced the traceback.

The fix also covers `datasight generate` and `datasight tidy review`, which had the same teardown pattern.

Test plan

- `pytest -m "not integration"`: 1492 passed.
- `prek run --all-files`: ruff / ruff-format / ty all clean.
- New `tests/test_llm.py` cases assert that `AnthropicLLMClient.aclose()` and `OllamaLLMClient.aclose()` actually invoke the SDK's `close()`.
- Manual check: run `datasight ask --files=questions.txt` and confirm no "Event loop is closed" traceback at the end of the run.
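Since `ask --files` creates one event loop per question, each run must close its own client before its loop is torn down. A sketch of that per-question pattern follows; all names here are assumptions for illustration, not the repository's actual code.

```python
import asyncio


class _FakeLLMClient:
    """Stand-in for the SDK-backed client (assumption)."""

    def __init__(self) -> None:
        self.closed = False

    async def ask(self, question: str) -> str:
        return question.upper()

    async def aclose(self) -> None:
        self.closed = True


def answer_one(question: str) -> tuple[str, _FakeLLMClient]:
    """Runs one question under its own asyncio.run, mirroring `ask --files`."""

    async def _run() -> tuple[str, _FakeLLMClient]:
        client = _FakeLLMClient()
        try:
            return await client.ask(question), client
        finally:
            # Must happen here: once asyncio.run returns, the loop is closed,
            # and a GC-scheduled aclose() would raise "Event loop is closed".
            await client.aclose()

    return asyncio.run(_run())


results = [answer_one(q) for q in ["first", "second"]]
print(all(client.closed for _, client in results))  # True
```

Every iteration pays the pool-shutdown cost, but each traceback-free loop exit is exactly what the fix buys for batch runs.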
pytest -m "not integration"— 1492 passed.prek run --all-files— ruff / ruff-format / ty all clean.tests/test_llm.pycases assert thatAnthropicLLMClient.aclose()andOllamaLLMClient.aclose()actually invoke the SDK'sclose().datasight ask --files=questions.txtand confirm no "Event loop is closed" traceback at the end of the run.🤖 Generated with Claude Code