Model switch mid-conversation fails: "encrypted content could not be decrypted" #17541

@caibirdme

Description

What version of Codex CLI is running?

0.119.0

What subscription do you have?

azure

Which model were you using?

gpt-5.4-mini, gpt-5.4

What platform is your computer?

linux x64

What terminal emulator and version are you using (if applicable)?

No response

What issue are you seeing?

When switching models mid-conversation (e.g. via turn/start with a different model in app-server mode), the Responses API returns an invalid_encrypted_content error: the encrypted_content from the previous model's Reasoning and Compaction items is sent verbatim to the new model, which cannot decrypt it.

What steps can reproduce the bug?

  1. Start a conversation using Model A (e.g. an Azure-hosted model that supports reasoning)
  2. Have a multi-turn conversation so that the history contains Reasoning items with encrypted_content
  3. Switch to Model B (e.g. gpt-5.4 on a different provider/deployment) mid-conversation via turn/start with the new model name
  4. Send a new user message in the same thread

What is the expected behavior?

The conversation should continue normally with Model B. Previous reasoning context should be handled gracefully (e.g. encrypted_content stripped, or summarized).

Additional information

Actual Behavior

The API returns an error:

litellm.ContentPolicyViolationError: AzureException - {
  "error": {
    "message": "The encrypted content gAAA...S6Q= could not be verified.
                Reason: Encrypted content could not be decrypted or parsed.",
    "type": "invalid_request_error",
    "param": null,
    "code": "invalid_encrypted_content"
  }
}

Root Cause Analysis

The encrypted_content field in ResponseItem::Reasoning and ResponseItem::Compaction is provider-specific encrypted data. It is designed so that the same model/provider can "recall" its reasoning without re-computing it. However, a different model/provider does not have the keys to decrypt it.
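As a rough illustration of the data shapes involved (a simplified sketch, not the actual definitions in codex-rs/protocol/src/models.rs — field types here are assumptions based on the descriptions in this report):

```rust
// Simplified sketch of the two relevant variants; the real ResponseItem
// enum in codex-rs/protocol/src/models.rs has more variants and richer
// field types. Shapes here are assumptions for illustration only.
#[derive(Debug, Clone)]
enum ResponseItem {
    Reasoning {
        // Plain-text summary of the reasoning; portable across providers.
        summary: Vec<String>,
        // Opaque blob encrypted by the originating provider; no other
        // provider holds the keys to decrypt it.
        encrypted_content: Option<String>,
    },
    Compaction {
        // Here the encrypted payload is the item's entire content.
        encrypted_content: String,
    },
}

fn main() {
    // A Reasoning item as it would appear in history after a turn.
    let item = ResponseItem::Reasoning {
        summary: vec!["Compared two approaches to the fix".to_string()],
        encrypted_content: Some("gAAA...".to_string()),
    };
    println!("{item:?}");
}
```

Only the summary field is meaningful to a different provider; the encrypted_content blobs are what trigger the error when replayed.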

The issue is that the conversation history is sent to the new model without sanitizing these fields:

  1. History construction (core/src/codex.rs:6257-6261): sess.clone_history().await.for_prompt(...) returns all items including Reasoning with encrypted_content.

  2. for_prompt() / normalize_history() (core/src/context_manager/history.rs:120-125): Only strips images for text-only models and removes orphaned outputs. Does not strip encrypted_content.

  3. Request construction (core/src/client.rs:829-868): get_formatted_input() passes all history items (with encrypted_content) into the ResponsesApiRequest.input.

  4. The new model receives encrypted_content it cannot decrypt and returns an error.

Existing precedent for content stripping on model switch

The codebase already handles similar incompatibility when switching from an image-capable model to a text-only model — images are stripped and replaced with placeholder text (tested in model_change_from_image_to_text_strips_prior_image_content in core/tests/suite/model_switching.rs). The same pattern should apply to encrypted_content.

Affected Code Locations

| File | Location | Description |
| --- | --- | --- |
| codex-rs/protocol/src/models.rs:206-216 | ResponseItem::Reasoning | encrypted_content: Option<String>, provider-specific |
| codex-rs/protocol/src/models.rs:336-338 | ResponseItem::Compaction | encrypted_content: String, also provider-specific |
| codex-rs/core/src/context_manager/history.rs:120-125 | for_prompt() | Returns items without filtering encrypted_content |
| codex-rs/core/src/context_manager/history.rs:364-373 | normalize_history() | Does not handle encrypted_content |
| codex-rs/core/src/client.rs:829-868 | build_responses_request() | Includes all history items in input and adds include: ["reasoning.encrypted_content"] |

Suggested Fix

When a model switch is detected (the new model slug differs from the model that produced the history items), strip encrypted_content from prior Reasoning and Compaction items before sending them to the new model. This could be done in any of the following places:

  • normalize_history() in history.rs — similar to how images are stripped for text-only models
  • for_prompt() in history.rs — as an additional filtering step
  • build_responses_request() in client.rs — as a last-resort sanitization before the API call

The Reasoning item's summary field should be preserved so that the new model still has context about what the previous model was thinking.
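A minimal sketch of such a sanitization pass, using simplified stand-in types (the real ResponseItem lives in codex-rs/protocol/src/models.rs, and the real hook would go in one of the three locations listed above). Dropping Compaction items entirely is one possible policy, chosen here only because their sole payload is the encrypted blob:

```rust
// Sketch of a sanitization pass for a model switch. Type names are
// simplified stand-ins for the real definitions in
// codex-rs/protocol/src/models.rs.
#[derive(Debug, Clone)]
enum ResponseItem {
    Reasoning {
        summary: Vec<String>,
        encrypted_content: Option<String>,
    },
    Compaction {
        encrypted_content: String,
    },
    Message {
        role: String,
        text: String,
    },
}

/// Strip provider-specific encrypted payloads from history items before
/// they are sent to a different model. Reasoning summaries are kept so
/// the new model retains context; Compaction items, whose only payload
/// is the encrypted blob, are dropped entirely (one possible policy).
fn strip_encrypted_content(history: Vec<ResponseItem>) -> Vec<ResponseItem> {
    history
        .into_iter()
        .filter_map(|item| match item {
            ResponseItem::Reasoning { summary, .. } => Some(ResponseItem::Reasoning {
                summary,
                encrypted_content: None,
            }),
            ResponseItem::Compaction { .. } => None,
            other => Some(other),
        })
        .collect()
}

fn main() {
    let history = vec![
        ResponseItem::Message { role: "user".into(), text: "hi".into() },
        ResponseItem::Reasoning {
            summary: vec!["weighed two options".into()],
            encrypted_content: Some("gAAA...".into()),
        },
        ResponseItem::Compaction { encrypted_content: "gAAA...".into() },
    ];
    let sanitized = strip_encrypted_content(history);
    // The Reasoning summary survives; the encrypted blobs do not.
    println!("{sanitized:?}");
}
```

This mirrors the existing image-stripping precedent: incompatible content is removed while human-readable context is preserved.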

Workaround

Create a new thread (thread/create) when switching models instead of switching within the same thread via turn/start. This avoids carrying over encrypted_content from the previous model.

Environment

  • Using Codex in app-server mode
  • Model switching via turn/start with a different model parameter
  • Backend: litellm proxy routing to Azure OpenAI deployments

Metadata

Labels

azure (Issues related to the Azure-hosted OpenAI models), bug (Something isn't working)