Problem
T3 Code currently shows the selected provider/model in the composer footer, but that only represents the current input configuration. After a chat has multiple turns, especially if the user switches providers or models mid-thread, there is no obvious indication in the transcript of which provider/model generated each previous assistant message.
Example workflow:
- Start a T3 Code chat using OpenCode with MiniMax.
- Send one or more messages.
- Switch the composer to GitHub Copilot with GPT-5.5.
- Look back at earlier assistant messages.
The composer now shows the current GitHub Copilot / GPT-5.5 selection, but earlier assistant messages do not clearly show that they were generated by OpenCode / MiniMax. That makes it harder to audit the conversation, compare model behavior, understand why a previous answer behaved differently, or reconstruct what happened later.
Inspiration / comparison
OpenCode makes this clearer by showing model metadata directly on each assistant message. In the OpenCode transcript, each response includes a small per-message footer such as:
Build · MiniMax-M2.7 · 3.9s
The important part is that the metadata belongs to the message itself, not just the current input state. The composer can still show the current model, but the transcript should also preserve what was used for historical turns.
Expected behavior
Each assistant message in T3 Code should show the provider/model snapshot used to generate that specific message.
Suggested metadata to show, where available:
- provider, e.g. OpenCode, GitHub Copilot, Codex, Claude
- model, e.g. MiniMax-M2.7, GPT-5.5
- mode/agent, e.g. Build, Plan, or the OpenCode agent name
- reasoning/effort/variant, e.g. High, xhigh, if applicable
- elapsed time / duration, if already available
A compact footer under each assistant message would work well, similar to OpenCode. For example:
OpenCode · Build · MiniMax-M2.7 · High · 39s
GitHub Copilot · Build · GPT-5.5 · High · 3m 24s
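A minimal sketch of how such a footer could be rendered, assuming a TypeScript UI layer; the `ModelSnapshot` shape and the `formatFooter`/`formatElapsed` helpers are hypothetical names, not existing T3 Code APIs. All fields are optional so that messages lacking some metadata degrade to a shorter footer rather than breaking:

```typescript
// Hypothetical per-message metadata snapshot; every field is optional so
// older persisted messages can omit any of them.
interface ModelSnapshot {
  provider?: string;  // e.g. "OpenCode", "GitHub Copilot"
  model?: string;     // e.g. "MiniMax-M2.7", "GPT-5.5"
  mode?: string;      // e.g. "Build", "Plan", or an agent name
  effort?: string;    // e.g. "High", "xhigh"
  elapsedMs?: number; // duration, if already tracked
}

// Render a duration as "39s" or "3m 24s".
function formatElapsed(ms: number): string {
  const s = Math.round(ms / 1000);
  if (s < 60) return `${s}s`;
  return `${Math.floor(s / 60)}m ${s % 60}s`;
}

// Join only the fields that are present, separated by " · ".
function formatFooter(snap: ModelSnapshot): string {
  const parts: (string | undefined)[] = [snap.provider, snap.mode, snap.model, snap.effort];
  if (snap.elapsedMs !== undefined) parts.push(formatElapsed(snap.elapsedMs));
  // Dropping missing fields lets legacy messages show a partial footer,
  // or nothing at all, instead of placeholders.
  return parts.filter((p): p is string => !!p).join(" · ");
}
```

For example, `formatFooter({ provider: "OpenCode", mode: "Build", model: "MiniMax-M2.7", effort: "High", elapsedMs: 39000 })` yields the first footer shown above, while an empty snapshot yields an empty string that the UI can treat as "hide the footer".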
Acceptance criteria
- Assistant messages display the provider/model used for that exact turn.
- The displayed metadata does not change when the user later switches the composer to a different provider or model.
- The current composer footer remains focused on the next message configuration.
- Historical messages remain readable after mixed-provider threads, e.g. OpenCode/MiniMax followed by GitHub Copilot/GPT-5.5.
- If older persisted messages lack this metadata, the UI handles them gracefully, e.g. hides the footer or shows only available fields.
- Tool-call-heavy assistant messages also show the same per-message model metadata, since those are often the turns where provenance matters most.
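One way to satisfy the "metadata does not change later" criterion is to copy the composer's selection onto the message at creation time. The sketch below assumes TypeScript and invents the `ComposerState`/`AssistantMessage` shapes and the `createAssistantMessage` helper purely for illustration:

```typescript
// Hypothetical composer state (the "next message" configuration).
interface ComposerState {
  provider: string;
  model: string;
  mode: string;
}

interface AssistantMessage {
  id: string;
  content: string;
  // Frozen snapshot of what generated this turn; undefined for legacy
  // messages persisted before this metadata existed.
  snapshot?: Readonly<ComposerState>;
}

function createAssistantMessage(
  id: string,
  content: string,
  composer: ComposerState
): AssistantMessage {
  // Spread-copy and freeze: the message must not share the live composer
  // object, so later provider/model switches cannot mutate history.
  return { id, content, snapshot: Object.freeze({ ...composer }) };
}
```

With this shape, switching the composer to a different provider after the fact leaves every earlier message's `snapshot` untouched, and rendering code can skip the footer entirely when `snapshot` is undefined.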
Why this matters
Mixed-provider workflows are common in T3 Code. A user may begin a thread with one provider/model for planning, switch to another for implementation, and switch again for debugging or verification. Without per-message model provenance, the transcript can become ambiguous because the only visible model state is the current composer selection.
This feature would make conversations easier to audit, compare, reproduce, and trust. It would also make model switching feel less lossy: the current selection controls the next turn, while prior turns remain self-describing.