What version of Codex CLI is running?
0.119.0
What subscription do you have?
azure
Which model were you using?
gpt-5.4-mini, gpt-5.4
What platform is your computer?
linux x64
What terminal emulator and version are you using (if applicable)?
No response
What issue are you seeing?
When switching models mid-conversation (e.g. via `turn/start` with a different `model` in app-server mode), the Responses API returns `invalid_encrypted_content` because `encrypted_content` from the previous model's `Reasoning` and `Compaction` items is sent verbatim to the new model, which cannot decrypt it.
What steps can reproduce the bug?
- Start a conversation using Model A (e.g. an Azure-hosted model that supports reasoning)
- Have a multi-turn conversation so that the history contains `Reasoning` items with `encrypted_content`
- Switch to Model B (e.g. `gpt-5.4` on a different provider/deployment) mid-conversation via `turn/start` with the new model name
- Send a new user message in the same thread
What is the expected behavior?
The conversation should continue normally with Model B. Previous reasoning context should be handled gracefully (e.g. `encrypted_content` stripped, or summarized).
Additional information
Actual Behavior
The API returns an error:
```
litellm.ContentPolicyViolationError: AzureException - {
  "error": {
    "message": "The encrypted content gAAA...S6Q= could not be verified.
                Reason: Encrypted content could not be decrypted or parsed.",
    "type": "invalid_request_error",
    "param": null,
    "code": "invalid_encrypted_content"
  }
}
```
Root Cause Analysis
The `encrypted_content` field in `ResponseItem::Reasoning` and `ResponseItem::Compaction` is provider-specific encrypted data. It is designed so that the same model/provider can "recall" its reasoning without re-computing it. However, a different model/provider does not have the keys to decrypt it.
The issue is that the conversation history is sent to the new model without sanitizing these fields:
- History construction (`core/src/codex.rs:6257-6261`): `sess.clone_history().await.for_prompt(...)` returns all items, including `Reasoning` with `encrypted_content`.
- `for_prompt()` / `normalize_history()` (`core/src/context_manager/history.rs:120-125`): only strips images for text-only models and removes orphaned outputs; does not strip `encrypted_content`.
- Request construction (`core/src/client.rs:829-868`): `get_formatted_input()` passes all history items (with `encrypted_content`) into `ResponsesApiRequest.input`.
- The new model receives `encrypted_content` it cannot decrypt and returns an error.
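The flow above can be illustrated with a minimal, hypothetical sketch. The names here are simplified stand-ins: the real `ResponseItem` enum in `codex-rs/protocol/src/models.rs` has more variants and fields, and `build_request_input` is a stand-in for the `for_prompt()` → `get_formatted_input()` path, which forwards history items unchanged:

```rust
// Hypothetical, simplified shapes -- the real types live in
// codex-rs/protocol/src/models.rs and carry more fields.
#[derive(Debug, Clone, PartialEq)]
enum ResponseItem {
    Message { text: String },
    // `encrypted_content` is an opaque blob encrypted by the provider.
    Reasoning { summary: String, encrypted_content: Option<String> },
}

// Stand-in for the current history -> request path: items are copied
// into the request body as-is, with no sanitization of encrypted_content.
fn build_request_input(history: &[ResponseItem]) -> Vec<ResponseItem> {
    history.to_vec()
}

fn main() {
    let history = vec![
        ResponseItem::Message { text: "hi".into() },
        ResponseItem::Reasoning {
            summary: "planned the edit".into(),
            encrypted_content: Some("gAAA...S6Q=".into()), // Model A's blob
        },
    ];
    let input = build_request_input(&history);
    // Model A's blob reaches Model B's provider verbatim, which then
    // fails with invalid_encrypted_content because it cannot decrypt it.
    assert!(matches!(
        &input[1],
        ResponseItem::Reasoning { encrypted_content: Some(_), .. }
    ));
}
```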
Existing precedent for content stripping on model switch
The codebase already handles a similar incompatibility when switching from an image-capable model to a text-only model — images are stripped and replaced with placeholder text (tested in `model_change_from_image_to_text_strips_prior_image_content` in `core/tests/suite/model_switching.rs`). The same pattern should apply to `encrypted_content`.
Affected Code Locations
| File | Location | Description |
| --- | --- | --- |
| `codex-rs/protocol/src/models.rs:206-216` | `ResponseItem::Reasoning` | `encrypted_content: Option<String>` — provider-specific |
| `codex-rs/protocol/src/models.rs:336-338` | `ResponseItem::Compaction` | `encrypted_content: String` — also provider-specific |
| `codex-rs/core/src/context_manager/history.rs:120-125` | `for_prompt()` | Returns items without filtering `encrypted_content` |
| `codex-rs/core/src/context_manager/history.rs:364-373` | `normalize_history()` | Does not handle `encrypted_content` |
| `codex-rs/core/src/client.rs:829-868` | `build_responses_request()` | Includes all history items in `input` and adds `include: ["reasoning.encrypted_content"]` |
Suggested Fix
When a model switch is detected (the new model slug differs from the model that produced the history items), strip `encrypted_content` from prior `Reasoning` and `Compaction` items before sending them to the new model. This could be done in one of:
- `normalize_history()` in `history.rs` — similar to how images are stripped for text-only models
- `for_prompt()` in `history.rs` — as an additional filtering step
- `build_responses_request()` in `client.rs` — as a last-resort sanitization before the API call
The `Reasoning` item's `summary` field should be preserved so that the new model still has context about what the previous model was thinking.
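A minimal sketch of the proposed sanitization, using hypothetical simplified types (`sanitize_for_model` is not an existing function, and the real `ResponseItem` enum in `codex-rs/protocol/src/models.rs` is richer than shown here):

```rust
// Hypothetical, simplified stand-ins for the real protocol types.
#[derive(Debug, Clone, PartialEq)]
enum ResponseItem {
    Message { text: String },
    Reasoning { summary: String, encrypted_content: Option<String> },
    Compaction { summary: String, encrypted_content: String },
}

/// Drop provider-specific encrypted blobs when the target model differs
/// from the one that produced the history, keeping the readable summaries.
fn sanitize_for_model(
    history: Vec<ResponseItem>,
    history_model: &str,
    target_model: &str,
) -> Vec<ResponseItem> {
    if history_model == target_model {
        return history; // same model/provider can still decrypt its own blobs
    }
    history
        .into_iter()
        .map(|item| match item {
            ResponseItem::Reasoning { summary, .. } => ResponseItem::Reasoning {
                summary,
                encrypted_content: None, // unreadable by the new provider
            },
            // Compaction's field is non-optional here; the real fix might
            // instead drop the item or convert it to a plain message.
            ResponseItem::Compaction { summary, .. } => ResponseItem::Compaction {
                summary,
                encrypted_content: String::new(),
            },
            other => other,
        })
        .collect()
}

fn main() {
    let history = vec![ResponseItem::Reasoning {
        summary: "explored history.rs".into(),
        encrypted_content: Some("opaque-blob".into()),
    }];
    let cleaned = sanitize_for_model(history, "model-a", "model-b");
    assert_eq!(
        cleaned[0],
        ResponseItem::Reasoning {
            summary: "explored history.rs".into(),
            encrypted_content: None,
        }
    );
}
```

The same-model early return matters: stripping unconditionally would lose the cache benefit the blobs exist for, so only a detected switch should trigger the sanitization.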
Workaround
Create a new thread (`thread/create`) when switching models instead of switching within the same thread via `turn/start`. This avoids carrying over `encrypted_content` from the previous model.
Environment
- Using Codex in app-server mode
- Model switching via `turn/start` with a different `model` parameter
- Backend: litellm proxy routing to Azure OpenAI deployments