
fix(llm): silence litellm 'Provider List' stderr banner #142

Merged
rolandpg merged 1 commit into master from fix/litellm-suppress-debug-noise
Apr 28, 2026

Conversation

@rolandpg (Owner)

Summary

LiteLLM's get_llm_provider helper writes a coloured Provider List: https://docs.litellm.ai/docs/providers banner to stderr via raw print() whenever it cannot resolve a provider from a model name (litellm/litellm_core_utils/get_llm_provider_logic.py:466). The banner bypasses Python logging entirely, so it leaks past ZettelForge's structlog setup.

This affects every operator running recall() — the background LLM-NER enrichment fires litellm via entity_indexer.extract_llm even when the user is only doing pip-only remember() / recall() without a working LLM. The first user trying the README's 30-second hello world (R1, just merged) sees ~40 of these banners during a single recall — exactly the wrong first impression now that the no-Ollama path is being marketed.

Reproduction (before this PR)

$ python examples/quickstart.py 2>&1 | head -20
Stored: note_20260427_222417_0d23 (created)
Entities: ['cve-2024-3094', 'apt28', 'cobalt-strike', 't1021']
Provider List: https://docs.litellm.ai/docs/providers
Provider List: https://docs.litellm.ai/docs/providers
... (~40 more) ...
Found: APT28 (Fancy Bear) targets NATO defense contractors with spear-phishing...

Fix

Route the lazy litellm import through a new _get_litellm() helper that sets litellm.suppress_debug_info = True after import. suppress_debug_info is the documented escape hatch for exactly this banner — it gates only the print statement, not exception raising, so:

  • Real errors continue to propagate via litellm.exceptions.*.
  • The existing llm_call_exception structured-log event still fires with the same fields.
  • No behaviour change for callers with valid model names + API keys.

After this PR:

$ python examples/quickstart.py 2>&1 | grep -c "Provider List"
0

Verified end-to-end with an fd-2-capturing test harness — zero "Provider List" bytes leak through.

Tests

  • 12 LiteLLMProvider tests pass (the 11 existing tests plus the new test_generate_silences_litellm_debug_banner).
  • The new test mocks the litellm module, asserts the suppress flag flips True after a generate() call, and would catch any future regression that drops the helper or its side effect.

Out of scope

  • The env-dependent test_import_error_raised_when_sdk_missing already failed in litellm-installed environments before this change. CI runs without litellm, so it passes there. Out of scope for this PR.
  • Other litellm chatter (DEBUG-level LiteLLM Logging Details, etc.) goes through Python logging and is already managed by structlog level config — not in scope.

🤖 Generated with Claude Code

Copilot AI review requested due to automatic review settings April 28, 2026 03:46

Copilot AI left a comment


Pull request overview

This PR reduces operator-facing stderr noise by silencing LiteLLM’s raw print() “Provider List” banner when provider resolution fails, ensuring it doesn’t leak past ZettelForge’s structlog setup during recall().

Changes:

  • Added a _get_litellm() import helper that sets litellm.suppress_debug_info = True before calls to litellm.completion().
  • Updated LiteLLMProvider.generate() to use _get_litellm() instead of importing LiteLLM inline.
  • Added a regression test asserting the suppress flag is flipped on generate().

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.

Files changed:

  • src/zettelforge/llm_providers/litellm_provider.py — Routes the LiteLLM import through _get_litellm() and sets suppress_debug_info to silence the stderr banner.
  • tests/test_llm_providers.py — Adds regression coverage to ensure generate() enables litellm.suppress_debug_info.


Commit message:
LiteLLM's get_llm_provider helper writes a coloured "Provider List: ..."
banner to stderr via raw print() whenever it cannot resolve a provider
from a model name (litellm/litellm_core_utils/get_llm_provider_logic.py
line 466). The banner bypasses Python logging entirely, so it leaks past
ZettelForge's structlog setup and pollutes stderr for every operator
running recall() — the background LLM-NER enrichment fires litellm via
entity_indexer.extract_llm even when the user is just doing pip-only
remember()/recall() without a working LLM. The first user trying the
README's 30-second hello world sees ~40 of these banners during a
single recall.

Fix: route the lazy litellm import through a small _get_litellm()
helper that sets litellm.suppress_debug_info = True after import.
suppress_debug_info is the documented escape hatch for exactly this
banner — it gates only the print, not exception raising, so real
errors continue to propagate via litellm's exception types and our
own structured logger reports them through llm_call_exception as
before.

Idempotent (safe to set on every generate() call). Module-level
side effect on litellm is bounded to suppressing one print path that
no caller relies on for control flow.

Tests: 12 LiteLLMProvider tests pass (was 11 + new
test_generate_silences_litellm_debug_banner). The test mocks the
litellm module, asserts the suppress flag flips True after a
generate() call, and would catch any future regression that drops
the helper or its side effect.

Out of scope: the env-dependent
test_import_error_raised_when_sdk_missing already failed in
litellm-installed environments before this change. CI runs without
litellm, so it passes there.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@rolandpg rolandpg force-pushed the fix/litellm-suppress-debug-noise branch from f056e8d to 862f4f8 Compare April 28, 2026 03:54
@rolandpg rolandpg merged commit 575108b into master Apr 28, 2026
13 checks passed
@rolandpg rolandpg deleted the fix/litellm-suppress-debug-noise branch April 28, 2026 03:59

@chatgpt-codex-connector (Bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 862f4f8846


Comment on lines +130 to +131
prev_suppress = getattr(litellm, "suppress_debug_info", False)
litellm.suppress_debug_info = True

P1: Guard suppress_debug_info mutation with a lock

litellm.suppress_debug_info is a process-global module flag, but this code snapshots and restores it without synchronization; with concurrent generate() calls, one thread can restore an outdated value while another call is still in litellm.completion(). In this repo that is realistic because providers are shared singletons (src/zettelforge/llm_providers/registry.py notes singleton instances) and LLM work runs on a background enrichment thread (src/zettelforge/memory_manager.py), so this race can both re-enable the stderr banner mid-call and leave the global flag in the wrong final state after both calls complete.

