
Preserve encrypted_content for multi-turn reasoning #6

@HXYerror

Description


Part of #1. Depends on #3.

Goal

Make sure `reasoning` items in upstream Responses output keep their `encrypted_content` field intact when round-tripping. Without this, multi-turn calls fail with `"encrypted content could not be verified"` upstream and chain-of-thought continuity is lost (worse answers, repeated reasoning-token spend).

Background

OpenAI's Responses API, when called with `include: ["reasoning.encrypted_content"]` and `store: false` (the typical ZDR configuration), returns reasoning items shaped like:

```
{
  "type": "reasoning",
  "id": "rs_…",
  "summary": [{ "type": "summary_text", "text": "…" }],
  "encrypted_content": "…",
  "status": "completed"
}
```

On the next turn, clients must echo every reasoning item verbatim into `input` so the model can resume its CoT. The blob is opaque and cannot be regenerated.
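The echo pattern above can be sketched as follows. This is a hypothetical illustration, not code from this repo: the helper name is made up, and the blob value is a placeholder; the request shape follows the Responses API description in this issue.

```typescript
type InputItem = { type?: string; role?: string; [k: string]: unknown };

// Echo every turn-1 output item verbatim, then append the new user message.
// The encrypted blob is opaque, so nothing is filtered, re-encoded, or re-ID'd.
function buildTurn2Input(turn1Output: InputItem[], followUp: InputItem): InputItem[] {
  return [...turn1Output, followUp];
}

// Example turn-1 output (placeholder values):
const turn1Output: InputItem[] = [
  { type: "reasoning", id: "rs_1", summary: [], encrypted_content: "opaque-blob" },
  { type: "message", role: "assistant", content: [{ type: "output_text", text: "…" }] },
];
const turn2Input = buildTurn2Input(turn1Output, { role: "user", content: "follow-up" });
```

Dropping or rewriting any reasoning item in `turn1Output` is exactly the failure mode this issue targets.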

litellm hit this exact bug; see PR #17130, which overrides `_handle_reasoning_item` specifically for Copilot.

Tasks

  • In the response-translation layer (`src/routes/responses/`), do not strip `encrypted_content` from reasoning items
  • Strip `status: null` fields (litellm PR #22370; Copilot upstream rejects null status)
  • When the request comes in with prior `input` containing reasoning items, pass them through verbatim (do not regenerate IDs, do not re-encode)
  • If we add the Anthropic→Responses adapter (separate issue), preserve the encrypted blob inside Anthropic's `thinking` block (e.g., as `signature` / `data` fields per Anthropic's extended thinking spec) so a Claude Code multi-turn round-trip survives the translation
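A minimal sketch of the first two tasks, assuming a per-item translation step; the function name and exact type are illustrative, not the repo's actual API:

```typescript
interface ReasoningItem {
  type: "reasoning";
  id: string;
  summary: { type: "summary_text"; text: string }[];
  encrypted_content?: string | null;
  status?: string | null;
}

// Pass the reasoning item through verbatim (encrypted_content included),
// but drop a null status, which Copilot upstream rejects.
function translateReasoningItem(item: ReasoningItem): ReasoningItem {
  const out: ReasoningItem = { ...item };
  if (out.status === null) delete out.status;
  return out;
}
```

The key property is that `encrypted_content` survives byte-for-byte; only the `status: null` field is removed.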

Acceptance criteria

  • Two-turn agent loop with `gpt-5` and `reasoning.encrypted_content` produces a successful second-turn response (no "encrypted content could not be verified")
  • Reasoning token usage on turn 2 is not inflated by re-thinking from scratch
  • Test fixture in `tests/` covers the multi-turn echo path
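One way the third criterion could be checked offline, as a hypothetical fixture helper (the name and shape are illustrative): compare reasoning items before and after whatever translation the echo path applies.

```typescript
interface EchoItem {
  type: string;
  encrypted_content?: string;
  [k: string]: unknown;
}

// Throws if any reasoning item's encrypted blob was mutated in the round-trip.
function assertBlobsSurvive(before: EchoItem[], after: EchoItem[]): void {
  for (let i = 0; i < before.length; i++) {
    if (
      before[i].type === "reasoning" &&
      before[i].encrypted_content !== after[i].encrypted_content
    ) {
      throw new Error(`reasoning item ${i}: encrypted_content changed`);
    }
  }
}
```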

File pointers

  • New: `src/routes/responses/translation.ts` (likely)
  • Reference: litellm `_handle_reasoning_item` override in `litellm/llms/github_copilot/responses/transformation.py`

Metadata


Labels

  • reasoning: Reasoning / thinking / encrypted_content
  • responses-api: OpenAI /v1/responses API support
