Part of #1. Depends on #3.
## Goal
Make sure `reasoning` items in upstream Responses output keep their `encrypted_content` field intact when round-tripping. Without this, multi-turn calls fail with `"encrypted content could not be verified"` upstream and chain-of-thought continuity is lost (worse answers, repeated reasoning-token spend).
## Background
OpenAI's Responses API, when called with `include: ["reasoning.encrypted_content"]` and `store: false` (the typical ZDR configuration), returns reasoning items shaped like:
```json
{
  "type": "reasoning",
  "id": "rs_…",
  "summary": [{ "type": "summary_text", "text": "…" }],
  "encrypted_content": "…",
  "status": "completed"
}
```
On the next turn, clients must echo every reasoning item verbatim into `input` so the model can resume its CoT. The blob is opaque and cannot be regenerated.
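A minimal sketch of that echo step (item shapes simplified; `buildNextTurnInput` is a hypothetical helper, not an existing API):

```typescript
// Simplified stand-in for a Responses output/input item.
type ResponseItem = { type: string; [key: string]: unknown };

// Build the next turn's `input` by echoing every prior output item
// verbatim and appending the new user message. Reasoning items keep
// their opaque `encrypted_content` blob byte-for-byte.
function buildNextTurnInput(
  previousOutput: ResponseItem[],
  userMessage: string,
): ResponseItem[] {
  return [
    ...previousOutput,
    { type: "message", role: "user", content: userMessage },
  ];
}
```

The key property is that nothing between upstream output and next-turn input re-shapes the reasoning item.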
litellm hit this exact bug; see litellm PR #17130, which overrides `_handle_reasoning_item` specifically for Copilot.
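The translation-side fix could look like the sketch below (hypothetical `translateReasoningItem`, assuming the translator rebuilds items field-by-field rather than passing them through):

```typescript
interface SummaryPart {
  type: "summary_text";
  text: string;
}

interface ReasoningItem {
  type: "reasoning";
  id: string;
  summary: SummaryPart[];
  encrypted_content?: string;
  status?: string;
}

// A translator that copies only fields it knows about silently drops
// `encrypted_content`; the fix is the explicit carry-through below.
function translateReasoningItem(item: ReasoningItem): ReasoningItem {
  return {
    type: "reasoning",
    id: item.id,
    summary: item.summary,
    // Preserve the opaque blob verbatim when present.
    ...(item.encrypted_content !== undefined && {
      encrypted_content: item.encrypted_content,
    }),
    ...(item.status !== undefined && { status: item.status }),
  };
}
```

If the translator can instead pass reasoning items through untouched, that is simpler and equally correct, since the blob must not be modified anyway.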
## Tasks
## Acceptance criteria
- Two-turn agent loop with `gpt-5` and `reasoning.encrypted_content` produces a successful second-turn response (no "encrypted content could not be verified")
- Reasoning token usage on turn 2 is not inflated by re-thinking from scratch
- Test fixture in `tests/` covers the multi-turn echo path
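One possible shape for the fixture (names and values are placeholders; it mirrors the item from the Background example and asserts the blob survives a serialize/deserialize round trip):

```typescript
// Fixture: upstream Responses output containing a reasoning item.
const upstreamOutput = [
  {
    type: "reasoning",
    id: "rs_fixture",
    summary: [{ type: "summary_text", text: "planned the answer" }],
    encrypted_content: "opaque-blob",
    status: "completed",
  },
];

// The multi-turn echo path serializes output and rebuilds input; the
// blob must come out byte-identical.
const echoed = JSON.parse(JSON.stringify(upstreamOutput));
```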
## File pointers
- New: `src/routes/responses/translation.ts` (likely)
- Reference: litellm `_handle_reasoning_item` override in `litellm/llms/github_copilot/responses/transformation.py`