Conversation
…S#344) Gemma models served through LM Studio (and similar local inference servers) reject response_format={"type": "json_object"}, returning a 400 error: "'response_format.type' must be 'json_schema' or 'text'". Add supports_response_format: False to the existing "gemma" MODEL_OVERRIDES entry so these models are excluded from the json_object path. The existing extract_json_object utilities in the visualize and math-animator agents already parse JSON from plain text responses, so all callers continue to work without further changes.
This problem persists with qwen-like architectures.
Collaborator
Thanks for your contribution!
Fixes #344
Problem
When using LM Studio with `gemma` models (e.g. `gemma-4-e2b`), the visualization and other JSON-structured features fail with a 400 error: `'response_format.type' must be 'json_schema' or 'text'`.

The root cause: the `lm_studio` binding has `supports_response_format: True` in `PROVIDER_CAPABILITIES`, so the code sends `response_format={"type": "json_object"}`. However, newer Gemma models only accept `json_schema` or `text`; they do not support the legacy `json_object` type.

Solution
Add `"supports_response_format": False` to the existing `"gemma"` entry in `MODEL_OVERRIDES` inside `deeptutor/services/llm/capabilities.py`.

When `supports_response_format` returns `False`, the code omits the `response_format` parameter entirely. The existing `extract_json_object` utilities in the `visualize` and `math-animator` agents already parse structured JSON from plain-text responses, so all callers work correctly without `response_format` being set. This is the same pattern already used for DeepSeek and Anthropic models.
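To make the mechanism concrete, here is a minimal sketch of the capability lookup this PR modifies. The real structures live in `deeptutor/services/llm/capabilities.py` and may differ in shape; `request_kwargs` is a hypothetical helper added purely for illustration of how the parameter gets omitted.

```python
# Sketch only -- the actual dict shapes and matching rules in
# deeptutor/services/llm/capabilities.py may differ.

PROVIDER_CAPABILITIES = {
    # LM Studio advertises json_object support at the provider level.
    "lm_studio": {"supports_response_format": True},
}

# Substring-matched, model-level overrides take priority over provider defaults.
MODEL_OVERRIDES = {
    "deepseek": {"supports_response_format": False},
    "gemma": {"supports_response_format": False},  # added by this PR
}

def supports_response_format(provider: str, model: str) -> bool:
    for pattern, overrides in MODEL_OVERRIDES.items():
        if pattern in model.lower() and "supports_response_format" in overrides:
            return overrides["supports_response_format"]
    return PROVIDER_CAPABILITIES.get(provider, {}).get(
        "supports_response_format", False
    )

def request_kwargs(provider: str, model: str) -> dict:
    # When the capability is False, response_format is omitted entirely;
    # callers then parse JSON out of the plain-text completion instead.
    if supports_response_format(provider, model):
        return {"response_format": {"type": "json_object"}}
    return {}
```

With this in place, `request_kwargs("lm_studio", "gemma-4-e2b")` yields an empty dict, so the request never carries the `json_object` type that Gemma rejects, while non-Gemma LM Studio models keep the structured path.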
Testing
- `gemma-4-e2b`, `gemma-3-4b`, `gemma-2-9b` all return `False`
- Other models (e.g. `mistral-7b`, `llama-3`) continue to return `True`
- Added `test_gemma_response_format_disabled` to `tests/services/llm/test_capabilities.py`
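The reason no caller changes are needed is the plain-text fallback: agents already recover JSON from free-form completions. A minimal sketch of such an extraction helper follows; this is an assumption about the technique, not the actual `extract_json_object` implementation in the `visualize` and `math-animator` agents, which may differ.

```python
import json

def extract_json_object(text: str) -> dict:
    """Parse the first JSON object found in a plain-text model response.

    Illustrative sketch: scans for a balanced {...} span, so it tolerates
    surrounding prose and ``` fences. Braces inside string literals would
    confuse the depth counter; a production version should handle that.
    """
    start = text.find("{")
    while start != -1:
        depth = 0
        for i, ch in enumerate(text[start:], start):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    try:
                        return json.loads(text[start : i + 1])
                    except json.JSONDecodeError:
                        break  # malformed candidate; try the next "{"
        start = text.find("{", start + 1)
    raise ValueError("no JSON object found in response")
```

Because a helper like this accepts both fenced and unfenced output, dropping `response_format` for Gemma only changes how the model is asked, not how its answer is consumed.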