Q&A generation in Testbed fails with OCI models #266

@corradodebari

Description

Checklist

  • I have searched the existing issues for similar issues.
  • I added a very descriptive title to this issue.
  • I have provided sufficient information below to help reproduce this issue.

Summary

In Testbed, for a Q&A Test Set generation, using:

  • Q&A Language Model: meta.llama-4-maverick-17b-128e-instruct-fp8
  • Q&A Embedding Model: ollama/all-minilm

I hit the following error, which blocks the process:

```
Error Generating TestSet: litellm.APIConnectionError: 2 validation errors for OCICompletionResponse
chatResponse.usage.completionTokensDetails
  Field required [type=missing, input_value={'completionTokens': 7, '...01, 'totalTokens': 1308}, input_type=dict]
    For further information visit https://errors.pydantic.dev/2.11/v/missing
chatResponse.usage.promptTokensDetails
  Field required [type=missing, input_value={'completionTokens': 7, '...01, 'totalTokens': 1308}, input_type=dict]
    For further information visit https://errors.pydantic.dev/2.11/v/missing

Traceback (most recent call last):
  File "/Users/cdebari/Documents/GitHub/ai-optimizer-248-mcp-export/src/.venv/lib/python3.11/site-packages/litellm/main.py", line 2489, in completion
    response = base_llm_http_handler.completion(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/cdebari/Documents/GitHub/ai-optimizer-248-mcp-export/src/.venv/lib/python3.11/site-packages/litellm/llms/custom_httpx/llm_http_handler.py", line 482, in completion
    return provider_config.transform_response(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/cdebari/Documents/GitHub/ai-optimizer-248-mcp-export/src/.venv/lib/python3.11/site-packages/litellm/llms/oci/chat/transformation.py", line 499, in transform_response
    completion_response = OCICompletionResponse(**json)
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/cdebari/Documents/GitHub/ai-optimizer-248-mcp-export/src/.venv/lib/python3.11/site-packages/pydantic/main.py", line 253, in __init__
    validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pydantic_core._pydantic_core.ValidationError: 2 validation errors for OCICompletionResponse
chatResponse.usage.completionTokensDetails
  Field required [type=missing, input_value={'completionTokens': 7, '...01, 'totalTokens': 1308}, input_type=dict]
    For further information visit https://errors.pydantic.dev/2.11/v/missing
chatResponse.usage.promptTokensDetails
  Field required [type=missing, input_value={'completionTokens': 7, '...01, 'totalTokens': 1308}, input_type=dict]
    For further information visit https://errors.pydantic.dev/2.11/v/missing
```

This specific model works with Vector Search, even though an exception is raised during execution, as described in issue #264.
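The failure mode can be reproduced in isolation with pydantic: the OCI chat response's `usage` object lacks `completionTokensDetails` and `promptTokensDetails`, while litellm's response schema declares them as required. The sketch below is illustrative only (the field values and the two toy models are my assumptions, not litellm's actual `OCICompletionResponse` internals); it shows that declaring the detail fields as `Optional` with a `None` default would accept such a payload:

```python
from typing import Optional
from pydantic import BaseModel, ValidationError

# Usage payload shaped like the one in the error message
# (promptTokens value is an assumption; the report truncates it).
usage_payload = {"completionTokens": 7, "promptTokens": 1301, "totalTokens": 1308}

class StrictUsage(BaseModel):
    # Mirrors the failing schema: the *Details fields are required.
    completionTokens: int
    promptTokens: int
    totalTokens: int
    completionTokensDetails: dict
    promptTokensDetails: dict

class LenientUsage(BaseModel):
    # Possible fix: detail fields optional with a None default,
    # so OCI responses that omit them still validate.
    completionTokens: int
    promptTokens: int
    totalTokens: int
    completionTokensDetails: Optional[dict] = None
    promptTokensDetails: Optional[dict] = None

try:
    StrictUsage(**usage_payload)
except ValidationError as e:
    # Two "Field required" errors, as in the report.
    print(len(e.errors()))  # 2

usage = LenientUsage(**usage_payload)
print(usage.totalTokens)  # 1308
```

If that matches how `OCICompletionResponse` is defined, a small upstream change in `litellm/llms/oci/chat/transformation.py` making those two fields optional should unblock Testbed generation.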

Steps To Reproduce

No response

Expected Behavior

No response

Current Behavior

No response

Is this a regression?

  • Yes, this used to work in a previous version.

Debug info

  • Version:
  • Python version:
  • Operating System:
  • Browser:

Additional Information

No response
