Update OpenAI Responses API compaction and telemetry #308436
Merged
dileepyavan merged 2 commits into main on Apr 8, 2026
Conversation
Contributor
Pull request overview
This PR refines the OpenAI Responses API “context management / compaction” integration so that compaction-specific behavior (request shaping, stream processing, and telemetry) only occurs when compaction is actually enabled, and adds regression tests around the enabled/disabled flows.
Changes:
- Centralizes compaction threshold computation and uses it to gate `context_management` request fields and compaction outcome telemetry.
- Improves stateful-marker + compaction round-tripping so the latest compaction item is preserved even when it predates the stateful marker.
- Adds unit tests covering request-body shaping, compaction reconciliation, and telemetry emission behavior.
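The gating described in the first bullet can be sketched as follows. This is a hypothetical simplification, not the repository's actual code: `computeCompactionThreshold`, `shapeRequestBody`, and the config/request shapes are illustrative names assumed for this example.

```typescript
// Illustrative sketch: compute the compaction threshold once, and only
// attach compaction-specific request fields when a threshold exists.

interface CompactionConfig {
  enabled: boolean;
  /** Fraction of the context window at which compaction triggers, e.g. 0.8. */
  triggerRatio: number;
}

interface RequestBody {
  model: string;
  context_management?: { compaction_token_threshold: number };
}

/** Returns a token threshold, or undefined when compaction is disabled. */
function computeCompactionThreshold(
  config: CompactionConfig,
  contextWindowTokens: number,
): number | undefined {
  if (!config.enabled) {
    return undefined;
  }
  return Math.floor(contextWindowTokens * config.triggerRatio);
}

/** Attaches context_management only when a threshold was computed. */
function shapeRequestBody(
  base: RequestBody,
  threshold: number | undefined,
): RequestBody {
  if (threshold === undefined) {
    return base; // compaction disabled: request body is untouched
  }
  return {
    ...base,
    context_management: { compaction_token_threshold: threshold },
  };
}
```

Centralizing the threshold in one function means the same `undefined` check can gate both request shaping and telemetry, rather than re-deriving "is compaction enabled" at each call site.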
Summary per file:
| File | Description |
|---|---|
| `extensions/copilot/src/platform/networking/node/chatStream.ts` | Adds a dedicated `responsesApi.compactionOutcome` telemetry helper + GDPR annotation. |
| `extensions/copilot/src/platform/endpoint/node/responsesApi.ts` | Centralizes threshold logic, updates request shaping, reconciles the latest compaction item in the stream processor, and emits compaction outcome telemetry when enabled. |
| `extensions/copilot/src/platform/endpoint/node/chatEndpoint.ts` | Threads the computed compaction threshold into `processResponseFromChatEndpoint`. |
| `extensions/copilot/src/extension/prompt/node/chatMLFetcher.ts` | Passes the compaction threshold (derived from the request body) into the Responses stream processor for WebSocket flows. |
| `extensions/copilot/src/extension/externalAgents/node/oaiLanguageModelServer.ts` | Same as above for pass-through external agent streaming. |
| `extensions/copilot/src/platform/endpoint/node/test/responsesApi.spec.ts` | Adds regression coverage for compaction enablement, stateful-marker slicing, compaction round-trip, and telemetry emission. |
Copilot's findings
- Files reviewed: 6/6 changed files
- Comments generated: 2
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
alexdima approved these changes on Apr 8, 2026
Summary
This PR refines how the OpenAI Responses API context-management compaction feature is enabled and observed, ensuring compaction-specific behavior only runs when compaction is actually enabled and adding outcome telemetry plus regression coverage.
Details
- `responseId` is passed when `ignoreStatefulMarker` is false.
- Compaction enablement gates the `context_management` request fields and WebSocket stateful-marker slicing.
- `responsesApi.compactionOutcome` telemetry is emitted only when compaction is enabled.
Testing
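The stateful-marker reconciliation the new regression tests cover (preserving the latest compaction item even when it predates the marker) can be sketched as follows. The item shape and function name are hypothetical simplifications of the stream-processor types.

```typescript
// Illustrative sketch: slice items after the stateful marker, but keep the
// most recent compaction item if slicing would otherwise discard it.

interface ResponseItem {
  id: string;
  type: "message" | "compaction";
}

function sliceAfterMarker(
  items: ResponseItem[],
  markerId: string,
): ResponseItem[] {
  const markerIndex = items.findIndex((i) => i.id === markerId);
  if (markerIndex === -1) {
    return items; // no marker: nothing to slice
  }
  const kept = items.slice(markerIndex + 1);
  if (!kept.some((i) => i.type === "compaction")) {
    // Walk backwards for the latest compaction item before the marker
    // so the compaction state round-trips across the slice.
    for (let i = markerIndex; i >= 0; i--) {
      if (items[i].type === "compaction") {
        return [items[i], ...kept];
      }
    }
  }
  return kept;
}
```

A regression test would assert both directions: a compaction item after the marker is kept as-is, while one before the marker is re-prepended rather than dropped.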