Revert "Add performance indexes to LiteLLM_SpendLogs for analytics queries" #11683


Merged
1 commit merged into main on Jun 13, 2025

Conversation

krrishdholakia (Contributor)

Reverts #11675

This revert is needed because creating indexes on large spend logs tables can cause high CPU usage, overwhelming the database.

We should investigate other, safer approaches to querying model analytics instead.
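One commonly suggested direction, sketched below under the assumption of a Postgres backend, is to build such indexes with `CREATE INDEX CONCURRENTLY` outside the migration path: it avoids taking a write lock on the table, though it still costs CPU and I/O and is best scheduled off-peak. The connection string and the index/column names here are illustrative, not taken from this PR.

```python
# Minimal sketch (not this project's migration tooling): build an index on
# LiteLLM_SpendLogs without blocking writes, using CREATE INDEX CONCURRENTLY.
import psycopg2

conn = psycopg2.connect("postgresql://user:pass@localhost:5432/litellm")  # illustrative DSN
# CONCURRENTLY cannot run inside a transaction block, so enable autocommit.
conn.autocommit = True

with conn.cursor() as cur:
    # IF NOT EXISTS makes the statement safe to re-run.
    cur.execute(
        'CREATE INDEX CONCURRENTLY IF NOT EXISTS "LiteLLM_SpendLogs_model_idx" '
        'ON "LiteLLM_SpendLogs" ("model")'
    )

conn.close()
```

The trade-off is that a concurrent build takes longer and can fail partway (leaving an invalid index to drop and retry), but it keeps the table available to traffic while it runs.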


krrishdholakia merged commit eb0d0c2 into main on Jun 13, 2025. 9 checks passed.
krrishdholakia added a commit that referenced this pull request on Jun 13, 2025:
…-> chat completion OR chat completion <-> responses api) (#11687)

* feat(anthropic/passthrough): pass dynamic api key/api base params to litellm.completion

allows calls to work with config.yaml

* fix(responses_api/transformation): fix passing dynamic params to responses api from .completion()

Allows responses api to work with config.yaml

* fix(langfuse.py): fix responses api usage logging to langfuse

* refactor(litellm_logging.py): add more generic solution for responses api usage logging

ensures it works across all logging integrations

* fix(litellm_logging.py): patch for anthropic messages not returning a pydantic object

it should ideally return a pydantic object, which would simplify checks and reduce errors

* fix(handler.py): correctly bubble up empty choices errors to litellm.completion

causes downstream errors as it is expected there is at least one choice set

* feat(litellm_logging.py): prevent double logging litellm responses

ensures accurate spend tracking for calls when bridges are used

* fix(litellm_logging.py): ensure logging is consistently enforced across all call types

* fix: patch - set calltype before entering bridge api

ensures logging object is applying the correct logic on the event hooks

* fix(types/router.py): loosen type hint for mock response

* change space_key header to space_id for Arize (#11595)

* feat(schema): add additional indexes to LiteLLM_SpendLogs for improved query performance (#11675)

* Revert "feat(schema): add additional indexes to LiteLLM_SpendLogs for improve…" (#11683)

This reverts commit 2a7f113.

* [Feat] Use dedicated Rest endpoints for list, calling MCP tools  (#11684)

* fix: (fix) use specific rest endpoints for MCP

* ui - use rest mcp endpoints

* fix imports

* docs DISABLE_AIOHTTP_TRUST_ENV

* docs(caching.md): remove batch redis get recommendation - old code path, no longer necessary

* fix(vertex_and_google_ai_studio_gemini.py): handle gemini not passing audio token usage data

* Chat Completions <-> Responses API Bridge Improvements (#11685)

* feat(anthropic/passthrough): pass dynamic api key/api base params to litellm.completion

allows calls to work with config.yaml

* fix(responses_api/transformation): fix passing dynamic params to responses api from .completion()

Allows responses api to work with config.yaml

* fix(langfuse.py): fix responses api usage logging to langfuse

* refactor(litellm_logging.py): add more generic solution for responses api usage logging

ensures it works across all logging integrations

* fix(litellm_logging.py): patch for anthropic messages not returning a pydantic object

it should ideally return a pydantic object, which would simplify checks and reduce errors

* fix(handler.py): correctly bubble up empty choices errors to litellm.completion

causes downstream errors as it is expected there is at least one choice set

* fix(response_metadata.py): allow model_info to be none

* fix(litellm_logging.py): copy object before mutating

* fix: fix lint check

* fix: fix linting error

* fix: fix linting error

---------

Co-authored-by: vanities <mischkeaa@gmail.com>
Co-authored-by: Cole McIntosh <82463175+colesmcintosh@users.noreply.github.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
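The "bubble up empty choices errors" fix in the commit message above is easiest to picture with a small sketch. The names below are hypothetical, not litellm's actual internals: the idea is to fail loudly at the handler boundary rather than let downstream code index into an empty choices list.

```python
# Hypothetical sketch (not litellm's real handler code): validate that a
# provider response carries at least one choice before returning it, since
# downstream code assumes choices[0] exists.
def validate_choices(response: dict) -> dict:
    choices = response.get("choices") or []
    if len(choices) == 0:
        # Raising here surfaces the provider problem at the call site
        # instead of as an IndexError deep in downstream code.
        raise ValueError(f"Provider returned no choices: {response!r}")
    return response
```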
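Likewise, "prevent double logging litellm responses" describes a guard that matters when a bridge routes one call type through another, since both legs can end up logging the same response. A hypothetical illustration of the pattern (names invented for this sketch, not litellm's real API):

```python
# Hypothetical sketch of the double-logging guard: when a bridged call's
# inner leg has already logged the response, the outer leg skips logging so
# spend is tracked exactly once.
class BridgeLogger:
    def __init__(self) -> None:
        self.already_logged = False

    def log_success(self, response: dict) -> None:
        if self.already_logged:
            return  # inner bridge leg already recorded this response
        self.already_logged = True
        print(f"recording spend for response id={response.get('id')}")
```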
X4tar pushed a commit to X4tar/litellm that referenced this pull request on Jun 17, 2025, with the same commit message as above (issue references prefixed with BerriAI#).