Revert "Add performance indexes to LiteLLM_SpendLogs for analytics queries" #11683
Merged: krrishdholakia merged 1 commit into main from revert-11675-performance/add-spendlogs-indexes on Jun 13, 2025
Conversation
Revert "feat(schema): add additional indexes to LiteLLM_SpendLogs for improve…" This reverts commit 2a7f113.
krrishdholakia added a commit that referenced this pull request on Jun 13, 2025:

…-> chat completion OR chat completion <-> responses api) (#11687)

* feat(anthropic/passthrough): pass dynamic api key/api base params to litellm.completion. Allows calls to work with config.yaml.
* fix(responses_api/transformation): fix passing dynamic params to responses api from .completion(). Allows responses api to work with config.yaml.
* fix(langfuse.py): fix responses api usage logging to langfuse.
* refactor(litellm_logging.py): add a more generic solution for responses api usage logging. Ensures it works across all logging integrations.
* fix(litellm_logging.py): patch for anthropic messages not returning a pydantic object. It should ideally return a pydantic object, which would simplify checks and reduce errors.
* fix(handler.py): correctly bubble up empty-choices errors to litellm.completion. These cause downstream errors, as it is expected that at least one choice is set.
* feat(litellm_logging.py): prevent double logging of litellm responses. Ensures accurate spend tracking for calls when bridges are used.
* fix(litellm_logging.py): ensure logging is consistently enforced across all call types.
* fix: patch - set calltype before entering bridge api. Ensures the logging object applies the correct logic on the event hooks.
* fix(types/router.py): loosen type hint for mock response.
* change space_key header to space_id for Arize (#11595)
* feat(schema): add additional indexes to LiteLLM_SpendLogs for improved query performance (#11675)
* Revert "feat(schema): add additional indexes to LiteLLM_SpendLogs for improve…" (#11683). This reverts commit 2a7f113.
* [Feat] Use dedicated REST endpoints for listing and calling MCP tools (#11684)
* fix: use specific REST endpoints for MCP
* ui - use REST MCP endpoints
* fix imports
* docs: DISABLE_AIOHTTP_TRUST_ENV
* docs(caching.md): remove batch redis get recommendation. Old code path, no longer necessary.
* fix(vertex_and_google_ai_studio_gemini.py): handle gemini not passing audio token usage data.
* Chat Completions <-> Responses API Bridge Improvements (#11685)
* feat(anthropic/passthrough): pass dynamic api key/api base params to litellm.completion. Allows calls to work with config.yaml.
* fix(responses_api/transformation): fix passing dynamic params to responses api from .completion(). Allows responses api to work with config.yaml.
* fix(langfuse.py): fix responses api usage logging to langfuse.
* refactor(litellm_logging.py): add a more generic solution for responses api usage logging. Ensures it works across all logging integrations.
* fix(litellm_logging.py): patch for anthropic messages not returning a pydantic object. It should ideally return a pydantic object, which would simplify checks and reduce errors.
* fix(handler.py): correctly bubble up empty-choices errors to litellm.completion. These cause downstream errors, as it is expected that at least one choice is set.
* fix(response_metadata.py): allow model_info to be none.
* fix(litellm_logging.py): copy object before mutating.
* fix: fix lint check.
* fix: fix linting error.
* fix: fix linting error.

Co-authored-by: vanities <mischkeaa@gmail.com>
Co-authored-by: Cole McIntosh <82463175+colesmcintosh@users.noreply.github.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
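For context, the "pass dynamic api key/api base params to litellm.completion" bullet refers to calls of roughly this shape: a minimal sketch, in which the model name, key, and endpoint are placeholders rather than values taken from this PR.

```python
# Minimal sketch: supplying api_key/api_base per call (e.g. values resolved
# from config.yaml at request time) instead of relying on environment variables.
# Model name, key, and endpoint below are placeholders.
import litellm

response = litellm.completion(
    model="anthropic/claude-3-5-sonnet-20240620",
    messages=[{"role": "user", "content": "Hello!"}],
    api_key="sk-...",                      # hypothetical dynamically resolved key
    api_base="https://my-proxy.example",   # hypothetical custom endpoint
)
print(response.choices[0].message.content)
```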
X4tar pushed a commit to X4tar/litellm that referenced this pull request on Jun 17, 2025:

Revert "feat(schema): add additional indexes to LiteLLM_SpendLogs for improve…" (BerriAI#11683) This reverts commit 2a7f113.
X4tar pushed a commit to X4tar/litellm that referenced this pull request on Jun 17, 2025:

…-> chat completion OR chat completion <-> responses api) (BerriAI#11687)

* feat(anthropic/passthrough): pass dynamic api key/api base params to litellm.completion. Allows calls to work with config.yaml.
* fix(responses_api/transformation): fix passing dynamic params to responses api from .completion(). Allows responses api to work with config.yaml.
* fix(langfuse.py): fix responses api usage logging to langfuse.
* refactor(litellm_logging.py): add a more generic solution for responses api usage logging. Ensures it works across all logging integrations.
* fix(litellm_logging.py): patch for anthropic messages not returning a pydantic object. It should ideally return a pydantic object, which would simplify checks and reduce errors.
* fix(handler.py): correctly bubble up empty-choices errors to litellm.completion. These cause downstream errors, as it is expected that at least one choice is set.
* feat(litellm_logging.py): prevent double logging of litellm responses. Ensures accurate spend tracking for calls when bridges are used.
* fix(litellm_logging.py): ensure logging is consistently enforced across all call types.
* fix: patch - set calltype before entering bridge api. Ensures the logging object applies the correct logic on the event hooks.
* fix(types/router.py): loosen type hint for mock response.
* change space_key header to space_id for Arize (BerriAI#11595)
* feat(schema): add additional indexes to LiteLLM_SpendLogs for improved query performance (BerriAI#11675)
* Revert "feat(schema): add additional indexes to LiteLLM_SpendLogs for improve…" (BerriAI#11683). This reverts commit 2a7f113.
* [Feat] Use dedicated REST endpoints for listing and calling MCP tools (BerriAI#11684)
* fix: use specific REST endpoints for MCP
* ui - use REST MCP endpoints
* fix imports
* docs: DISABLE_AIOHTTP_TRUST_ENV
* docs(caching.md): remove batch redis get recommendation. Old code path, no longer necessary.
* fix(vertex_and_google_ai_studio_gemini.py): handle gemini not passing audio token usage data.
* Chat Completions <-> Responses API Bridge Improvements (BerriAI#11685)
* feat(anthropic/passthrough): pass dynamic api key/api base params to litellm.completion. Allows calls to work with config.yaml.
* fix(responses_api/transformation): fix passing dynamic params to responses api from .completion(). Allows responses api to work with config.yaml.
* fix(langfuse.py): fix responses api usage logging to langfuse.
* refactor(litellm_logging.py): add a more generic solution for responses api usage logging. Ensures it works across all logging integrations.
* fix(litellm_logging.py): patch for anthropic messages not returning a pydantic object. It should ideally return a pydantic object, which would simplify checks and reduce errors.
* fix(handler.py): correctly bubble up empty-choices errors to litellm.completion. These cause downstream errors, as it is expected that at least one choice is set.
* fix(response_metadata.py): allow model_info to be none.
* fix(litellm_logging.py): copy object before mutating.
* fix: fix lint check.
* fix: fix linting error.
* fix: fix linting error.

Co-authored-by: vanities <mischkeaa@gmail.com>
Co-authored-by: Cole McIntosh <82463175+colesmcintosh@users.noreply.github.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Reverts #11675
This revert was made because creating indexes on large spend-logs tables can lead to high CPU usage, causing databases to become overwhelmed.
We should investigate other, safer approaches to querying model analytics instead.
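One commonly cited mitigation for this class of problem on Postgres (not necessarily the approach this project will adopt) is to build indexes with CREATE INDEX CONCURRENTLY, which avoids holding a write-blocking lock during the build; it is slower and still consumes CPU and I/O, so it reduces disruption rather than eliminating cost. A minimal sketch, assuming a Postgres backend and psycopg2; the DSN and index name are placeholders, while the table and column mirror the LiteLLM_SpendLogs schema touched by the reverted change:

```python
# Hypothetical sketch: building an index without blocking writes in Postgres.
# CREATE INDEX CONCURRENTLY cannot run inside a transaction block, so
# autocommit must be enabled first. DSN and index name are placeholders.
import psycopg2

conn = psycopg2.connect("postgresql://user:pass@localhost/litellm")
conn.autocommit = True  # required for CONCURRENTLY
with conn.cursor() as cur:
    cur.execute(
        'CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_spendlogs_start_time '
        'ON "LiteLLM_SpendLogs" ("startTime")'
    )
conn.close()
```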