Part of #1. Depends on #3.
Goal
When the upstream returns reasoning items with encrypted_content, copilot-api must preserve that field byte-exact across turns. Stripping or re-serializing it produces upstream errors like "encrypted content could not be verified" or silent reasoning-quality regression on subsequent turns.
Why this matters
OpenAI's Responses API reasoning items contain an opaque encrypted_content blob (returned only when include: ["reasoning.encrypted_content"] and store: false). The next turn must echo every reasoning item back verbatim in input so the model can resume its chain of thought. Otherwise:
- Server may reject with verification error, OR
- Model silently re-thinks from scratch — repeated reasoning-token cost, degraded answers, longer latency.
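The echo-back requirement can be sketched as follows. This is a minimal illustration, not copilot-api's actual code: the trimmed-down item types and the buildNextInput helper are invented for the example, and real Responses API items carry more fields.

```typescript
// Trimmed-down shapes for illustration only.
type ReasoningItem = {
  type: "reasoning";
  id: string;
  summary: unknown[];
  encrypted_content?: string; // opaque blob: echo back byte-exact, never re-derive
};

type OutputItem = ReasoningItem | { type: "message"; role: string; content: unknown };

// Turn N+1 input: every reasoning item from turn N's output, verbatim,
// followed by the new user message.
function buildNextInput(prevOutput: OutputItem[], userText: string): unknown[] {
  const reasoning = prevOutput.filter((item) => item.type === "reasoning");
  return [...reasoning, { role: "user", content: userText }];
}
```

The key property is that the reasoning item object is passed through by reference, not rebuilt field by field, so there is no opportunity to drop or mutate encrypted_content.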
This is exactly the bug litellm fixed in PR #17130 and refined in PR #22370: the parent OpenAIResponsesAPIConfig._handle_reasoning_item was stripping encrypted_content, so they overrode it for Copilot.
Tasks
- In the ResponseReasoningItem type (Add upstream Responses API service client #3), declare encrypted_content?: string.
- When parsing upstream reasoning items, preserve encrypted_content — round-trip it as-is.
- If empty fields are stripped before forwarding (e.g. status: null), do it surgically — only the empty fields, not encrypted_content.
- Accept reasoning items in input and forward them upstream untouched.
Acceptance criteria
- Two-turn manual test against gpt-5.3-codex: turn 1 returns reasoning with encrypted_content. Turn 2 includes that reasoning item in input. Upstream does not return verification errors. Reasoning cost on turn 2 is materially lower than turn 1 (cache hit).
- Unit test: parsing an upstream response with encrypted_content and re-serializing produces a byte-identical reasoning item.
File pointers
- litellm/llms/github_copilot/responses/transformation.py — the _handle_reasoning_item override
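One way the "strip only the empty fields, never encrypted_content" task could look, as a hypothetical sketch (stripEmptyFields is an invented name, not copilot-api's implementation):

```typescript
// Drop null/undefined fields before forwarding upstream, but always keep
// encrypted_content: omitting or re-deriving it breaks upstream verification.
function stripEmptyFields(
  item: Record<string, unknown>,
): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(item)) {
    if (key === "encrypted_content" || (value !== null && value !== undefined)) {
      out[key] = value;
    }
  }
  return out;
}
```

A unit test for the byte-identical acceptance criterion would then compare the serialized output of a parse/re-serialize round trip against the original upstream bytes, rather than comparing parsed objects.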