Conversation
…oning_content
xAI (and other providers like DeepSeek) can return reasoning_content alongside text content, producing 2+ content items. The test now accepts >= 1 items and verifies at least one is Text, instead of requiring exactly 1 item.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: clayarnoldg2m <carnold@g2m.ai>
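A minimal sketch of the relaxed assertion described above; the type and helper names here are illustrative, not the project's actual test code:

```rust
// Illustrative model of a provider response's content items. Providers
// such as xAI or DeepSeek may emit a reasoning item alongside the text
// item, so a strict "exactly 1 item" check is too brittle.
#[derive(Debug)]
enum ContentItem {
    Text(String),
    Reasoning(String),
}

// Relaxed check: at least one item, and at least one of them is Text.
fn passes_relaxed_check(items: &[ContentItem]) -> bool {
    !items.is_empty() && items.iter().any(|i| matches!(i, ContentItem::Text(_)))
}

fn main() {
    // Plain text only: passes.
    assert!(passes_relaxed_check(&[ContentItem::Text("answer".into())]));
    // Reasoning + text (e.g. xAI, DeepSeek): also passes.
    assert!(passes_relaxed_check(&[
        ContentItem::Reasoning("chain of thought".into()),
        ContentItem::Text("answer".into()),
    ]));
    // No Text item at all: still fails.
    assert!(!passes_relaxed_check(&[ContentItem::Reasoning("thoughts".into())]));
}
```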
The submit filter in ProviderConfiguationModal only included fields where the user had typed a new value (entry.value), causing untouched fields like API Host to be silently dropped. Now the filter also includes fields with existing non-masked server values (entry.serverValue), preventing config reversion when only some fields are edited.
Fixes #7245
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: clayarnoldg2m <carnold@g2m.ai>
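The actual modal is frontend code, but the predicate change it describes can be sketched in Rust; the field names (value, serverValue, masked) are taken from the description above, and the rest is an assumption:

```rust
// Sketch of the submit-filter change. The real code lives in a frontend
// modal; this models the before/after predicate described in the commit.
struct Entry {
    value: Option<String>,        // newly typed by the user this session
    server_value: Option<String>, // existing value echoed by the server
    masked: bool,                 // true for secrets rendered as "****"
}

// Old filter: only fields the user touched survived submit, so untouched
// fields like API Host were silently dropped and the config reverted.
fn old_filter(e: &Entry) -> bool {
    e.value.is_some()
}

// New filter: also keep fields with an existing, non-masked server value,
// so editing one field no longer drops the others.
fn new_filter(e: &Entry) -> bool {
    e.value.is_some() || (e.server_value.is_some() && !e.masked)
}

fn main() {
    let untouched_host = Entry {
        value: None,
        server_value: Some("https://api.example.com".into()),
        masked: false,
    };
    assert!(!old_filter(&untouched_host)); // dropped before the fix
    assert!(new_filter(&untouched_host));  // kept after the fix

    // Masked secrets with no new input are still excluded from submit.
    let masked_key = Entry { value: None, server_value: Some("****".into()), masked: true };
    assert!(!new_filter(&masked_key));
}
```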
Replace provider.complete() with provider.complete_with_model() using an explicit max_tokens of 16384 in both generate_new_app_content() and generate_updated_app_content(). This prevents app HTML from being truncated when the provider's default max_tokens is too low.
Fixes #7239
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: clayarnoldg2m <carnold@g2m.ai>
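Why the explicit limit matters can be shown with a small sketch; the request type and helper below are illustrative, not the project's actual provider API:

```rust
// Sketch: with no explicit limit, the provider's (possibly low) default
// applies, and long generated app HTML gets cut off mid-document.
struct CompletionRequest {
    max_tokens: Option<u32>,
}

// The limit the provider actually enforces for a given request.
fn effective_limit(req: &CompletionRequest, provider_default: u32) -> u32 {
    req.max_tokens.unwrap_or(provider_default)
}

fn main() {
    // Before: complete() left max_tokens unset, so a low default won.
    let implicit = CompletionRequest { max_tokens: None };
    assert_eq!(effective_limit(&implicit, 1024), 1024);

    // After: complete_with_model() passes an explicit 16384, so output
    // is no longer capped by the provider's default.
    let explicit = CompletionRequest { max_tokens: Some(16384) };
    assert_eq!(effective_limit(&explicit, 1024), 16384);
}
```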
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: clayarnoldg2m <carnold@g2m.ai>
Add a Some("thinking") arm to parse_stream_json_response() that
extracts thinking content from Gemini CLI stream events and creates
MessageContent::Thinking entries. Without this, thinking blocks were
silently dropped, causing truncated responses.
Includes tests for thinking block parsing and no-thinking fallback.
Fixes #7203
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: clayarnoldg2m <carnold@g2m.ai>
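The shape of the new match arm can be sketched as follows; the real parser works over Gemini CLI stream JSON, and the event and type names here are simplified assumptions:

```rust
// Hypothetical sketch of parse_stream_json_response()'s new arm. Event
// kinds and payload handling are simplified for illustration.
#[derive(Debug, PartialEq)]
enum MessageContent {
    Text(String),
    Thinking(String),
}

fn parse_stream_event(kind: Option<&str>, payload: &str) -> Option<MessageContent> {
    match kind {
        Some("text") => Some(MessageContent::Text(payload.to_string())),
        // New arm: previously "thinking" events fell through to the
        // catch-all below, so thinking content was silently dropped.
        Some("thinking") => Some(MessageContent::Thinking(payload.to_string())),
        _ => None,
    }
}

fn main() {
    // Thinking events now produce Thinking entries.
    assert_eq!(
        parse_stream_event(Some("thinking"), "step 1..."),
        Some(MessageContent::Thinking("step 1...".into()))
    );
    // No-thinking fallback: plain text events still parse as Text.
    assert_eq!(
        parse_stream_event(Some("text"), "hello"),
        Some(MessageContent::Text("hello".into()))
    );
    // Unknown events are still ignored.
    assert_eq!(parse_stream_event(Some("unknown"), "x"), None);
}
```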
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: clayarnoldg2m <carnold@g2m.ai>
Three fixes for reasoning_content handling:
1. agent.rs: Preserve reasoning_content when splitting parallel tool calls.
Providers like Kimi require reasoning_content on all assistant messages
with tool_calls when thinking mode is enabled.
2. openai.rs format_messages: Omit reasoning_content field entirely when
empty instead of sending empty string. Kimi rejects empty
reasoning_content ("").
3. openai.rs streaming: Properly accumulate reasoning_content chunks
across streaming deltas and emit as MessageContent::reasoning().
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: clayarnoldg2m <carnold@g2m.ai>
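Points 2 and 3 above can be sketched together as one helper; this is a minimal model, not the project's streaming code, and the function name is an assumption:

```rust
// Accumulate reasoning_content across streaming deltas, and return None
// (omit the field) rather than an empty string, since Kimi rejects
// reasoning_content: "".
fn accumulate_reasoning(deltas: &[Option<&str>]) -> Option<String> {
    // Deltas without a reasoning_content chunk are simply skipped.
    let joined: String = deltas.iter().flatten().copied().collect();
    if joined.is_empty() {
        None // serialize nothing instead of an empty string
    } else {
        Some(joined)
    }
}

fn main() {
    // Chunks arriving across several deltas are concatenated in order.
    assert_eq!(
        accumulate_reasoning(&[Some("Let me "), None, Some("think.")]),
        Some("Let me think.".to_string())
    );
    // No reasoning chunks at all: the field is omitted, not sent as "".
    assert_eq!(accumulate_reasoning(&[None, None]), None);
}
```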
Testing fork PR #7252 to run full CI. See #7252