
Testing PR #7252 (fork CI)#7262

Closed
michaelneale wants to merge 7 commits into main from micn/testing-7252

Conversation

@michaelneale
Collaborator

Testing fork PR #7252 to run full CI. See #7252

clayarnoldg2m and others added 7 commits February 16, 2026 22:39
…oning_content

xAI (and other providers like DeepSeek) can return reasoning_content alongside
text content, producing two or more content items. The test now accepts one or
more items and verifies that at least one is Text, instead of requiring exactly
one item.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: clayarnoldg2m <carnold@g2m.ai>
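The relaxed assertion above can be sketched as follows. This is a minimal stand-in, not goose's actual test: `MessageContent` and `response_is_valid` are simplified hypothetical names illustrating the "at least one item, at least one Text" condition.

```rust
// Simplified stand-in for goose's message content enum; real providers
// like xAI and DeepSeek can add a reasoning item alongside the text.
enum MessageContent {
    Text(String),
    Reasoning(String),
}

// Relaxed check: non-empty, and at least one item is Text --
// instead of requiring exactly one Text item.
fn response_is_valid(content: &[MessageContent]) -> bool {
    !content.is_empty()
        && content.iter().any(|c| matches!(c, MessageContent::Text(_)))
}

fn main() {
    // A provider returning reasoning_content plus text: 2 items, still valid.
    let with_reasoning = vec![
        MessageContent::Reasoning("chain of thought".into()),
        MessageContent::Text("final answer".into()),
    ];
    assert!(response_is_valid(&with_reasoning));
    println!("ok");
}
```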
The submit filter in ProviderConfiguationModal only included fields where
the user had typed a new value (entry.value), causing untouched fields like
API Host to be silently dropped. Now the filter also includes fields with
existing non-masked server values (entry.serverValue), preventing config
reversion when only some fields are edited.

Fixes #7245

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: clayarnoldg2m <carnold@g2m.ai>
Replace provider.complete() with provider.complete_with_model() using
explicit max_tokens of 16384 in both generate_new_app_content() and
generate_updated_app_content(). This prevents app HTML from being
truncated when the provider's default max_tokens is too low.

Fixes #7239

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: clayarnoldg2m <carnold@g2m.ai>
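The effect of passing an explicit limit can be sketched like this. The `Provider` struct and `effective_max_tokens` helper are hypothetical illustrations, not goose's real provider API; the point is only that app generation now supplies 16384 instead of inheriting a possibly low provider default.

```rust
// Assumed constant mirroring the explicit limit described in the commit.
const APP_GENERATION_MAX_TOKENS: u32 = 16_384;

// Hypothetical stand-in for a provider with a configured default.
struct Provider {
    default_max_tokens: u32,
}

impl Provider {
    // Before the fix, completion used the provider default, which could
    // truncate generated app HTML; an explicit value now overrides it.
    fn effective_max_tokens(&self, explicit: Option<u32>) -> u32 {
        explicit.unwrap_or(self.default_max_tokens)
    }
}

fn main() {
    let provider = Provider { default_max_tokens: 4_096 };
    // After the fix, app generation passes the explicit limit.
    assert_eq!(
        provider.effective_max_tokens(Some(APP_GENERATION_MAX_TOKENS)),
        16_384
    );
    println!("ok");
}
```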
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: clayarnoldg2m <carnold@g2m.ai>
Add a Some("thinking") arm to parse_stream_json_response() that
extracts thinking content from Gemini CLI stream events and creates
MessageContent::Thinking entries. Without this, thinking blocks were
silently dropped, causing truncated responses.

Includes tests for thinking block parsing and no-thinking fallback.

Fixes #7203

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: clayarnoldg2m <carnold@g2m.ai>
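The new match arm can be sketched as below. The real parse_stream_json_response() consumes JSON stream events; this stand-in keys on the event's type string and uses simplified hypothetical names to show the added `Some("thinking")` arm and the no-thinking fallback.

```rust
// Simplified stand-in for goose's message content enum.
#[derive(Debug, PartialEq)]
enum MessageContent {
    Text(String),
    Thinking(String),
}

fn parse_stream_event(event_type: Option<&str>, content: &str) -> Option<MessageContent> {
    match event_type {
        // New arm: thinking blocks become MessageContent::Thinking
        // instead of being silently dropped.
        Some("thinking") => Some(MessageContent::Thinking(content.to_string())),
        Some("text") => Some(MessageContent::Text(content.to_string())),
        _ => None,
    }
}

fn main() {
    assert_eq!(
        parse_stream_event(Some("thinking"), "planning the answer"),
        Some(MessageContent::Thinking("planning the answer".to_string()))
    );
    // No-thinking fallback: plain text events still parse as Text.
    assert_eq!(
        parse_stream_event(Some("text"), "hello"),
        Some(MessageContent::Text("hello".to_string()))
    );
    println!("ok");
}
```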
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: clayarnoldg2m <carnold@g2m.ai>
Three fixes for reasoning_content handling:

1. agent.rs: Preserve reasoning_content when splitting parallel tool calls.
   Providers like Kimi require reasoning_content on all assistant messages
   with tool_calls when thinking mode is enabled.

2. openai.rs format_messages: Omit reasoning_content field entirely when
   empty instead of sending empty string. Kimi rejects empty
   reasoning_content ("").

3. openai.rs streaming: Properly accumulate reasoning_content chunks
   across streaming deltas and emit as MessageContent::reasoning().

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: clayarnoldg2m <carnold@g2m.ai>
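Fixes 2 and 3 can be sketched together. `ReasoningAccumulator` is a hypothetical stand-in for the streaming state in openai.rs: chunks are accumulated across deltas, and an empty accumulator yields `None` (field omitted) rather than an empty string, which Kimi rejects.

```rust
// Hypothetical accumulator for reasoning_content streaming deltas.
#[derive(Default)]
struct ReasoningAccumulator {
    buf: String,
}

impl ReasoningAccumulator {
    // Called once per streaming delta carrying a reasoning_content chunk.
    fn push_chunk(&mut self, chunk: &str) {
        self.buf.push_str(chunk);
    }

    // None means "omit the field" when formatting the message;
    // an empty string "" is never emitted.
    fn finish(self) -> Option<String> {
        if self.buf.is_empty() { None } else { Some(self.buf) }
    }
}

fn main() {
    let mut acc = ReasoningAccumulator::default();
    acc.push_chunk("step 1; ");
    acc.push_chunk("step 2");
    assert_eq!(acc.finish(), Some("step 1; step 2".to_string()));

    // No reasoning chunks at all -> field omitted, not "".
    assert_eq!(ReasoningAccumulator::default().finish(), None);
    println!("ok");
}
```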