🔬 Research Finding (live-tested with copilot-models-litellm, enterprise endpoint, 2026-05-10)
GitHub Copilot upstream (`api.enterprise.githubcopilot.com`) natively supports the Anthropic Messages API at `/v1/messages` for all Claude models; no translation needed.
Model × Endpoint matrix (10 Claude models tested)
| Model | `/v1/messages` (Anthropic native) | `/chat/completions` (OpenAI) |
|---|---|---|
| claude-sonnet-4.5 | ✅ | ✅ |
| claude-sonnet-4.6 | ✅ | ✅ |
| claude-opus-4.5 | ✅ | ✅ |
| claude-opus-4.6 | ✅ | ✅ |
| claude-opus-4.6-1m | ✅ | ✅ |
| claude-opus-4.7 | ✅ | ✅ |
| claude-opus-4.7-high | ✅ | ✅ |
| claude-opus-4.7-xhigh | ✅ | ✅ |
| claude-opus-4.7-1m-internal | ✅ | ✅ |
| claude-haiku-4.5 | ✅ | ✅ |
Feature matrix (all tested on the `/v1/messages` native path)
| Feature | Status | Notes |
|---|---|---|
| Basic chat | ✅ | Native `{type:"message", content:[...]}` response |
| Streaming SSE | ✅ | `message_start` / `content_block_*` / `message_delta` / `message_stop` |
| thinking blocks (4.5/4.6) | ✅ | Full thinking content + `signature` returned |
| thinking blocks (4.7+) | ⚠️ | New API: `"type": "adaptive"` + `output_config.effort`; `"enabled"` rejected with 400 |
| tools / tool_use | ✅ | `stop_reason: "tool_use"`, correct `input` |
| thinking + tools together | ✅ | Both blocks in response simultaneously |
| system prompt | ✅ | String and array form |
| cache_control (ephemeral) | ✅ | `cache_creation_input_tokens` in usage |
| top_k | ✅ | Accepted without error |
| URL images | ❌ | "external image URLs are not supported" |
| Base64 images | ✅ | (verified in prior tests) |
| Multi-turn | ✅ | History forwarded correctly |
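To tie the tested features together, a request body along these lines exercises thinking, a system prompt with `cache_control`, and a tool in a single call. This is an illustrative sketch, not a captured test payload: the tool name, schema, and budget value are made up.

```json
{
  "model": "claude-sonnet-4.6",
  "max_tokens": 1024,
  "thinking": {"type": "enabled", "budget_tokens": 512},
  "system": [
    {
      "type": "text",
      "text": "You are a weather assistant.",
      "cache_control": {"type": "ephemeral"}
    }
  ],
  "tools": [
    {
      "name": "get_weather",
      "description": "Look up current weather for a city",
      "input_schema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"]
      }
    }
  ],
  "messages": [{"role": "user", "content": "Weather in Oslo?"}]
}
```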
New API format for claude-opus-4.7+ thinking
```json
{
  "thinking": {"type": "adaptive"},
  "output_config": {"effort": "medium"}
}
```
Old `"type": "enabled"` format rejected with:

```
"thinking.type.enabled" is not supported for this model. Use "thinking.type.adaptive" and "output_config.effort"
```
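A proxy that serves both generations could pick the thinking payload shape per model. A minimal sketch; the helper names, the prefix-based version check, and the default effort/budget values are hypothetical, not copilot-api code:

```typescript
// Hypothetical helper: choose the thinking payload shape by model generation.
// claude-opus-4.7 and later require the adaptive form with output_config.effort;
// earlier Claude models use the classic "enabled" form with a token budget.

type ThinkingParams =
  | { thinking: { type: "adaptive" }; output_config: { effort: "low" | "medium" | "high" } }
  | { thinking: { type: "enabled"; budget_tokens: number } };

// Naive version check based on the "claude-opus-<major>.<minor>" naming seen
// in the model matrix above; real routing should use upstream model metadata.
function usesAdaptiveThinking(model: string): boolean {
  const m = model.match(/^claude-opus-(\d+)\.(\d+)/);
  if (!m) return false;
  const major = Number(m[1]);
  const minor = Number(m[2]);
  return major > 4 || (major === 4 && minor >= 7);
}

export function buildThinkingParams(model: string): ThinkingParams {
  return usesAdaptiveThinking(model)
    ? { thinking: { type: "adaptive" }, output_config: { effort: "medium" } }
    : { thinking: { type: "enabled", budget_tokens: 1024 } };
}
```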
Native usage fields (richer than OpenAI translation path)
```json
{
  "usage": {
    "input_tokens": 47,
    "output_tokens": 136,
    "cache_creation_input_tokens": 0,
    "cache_read_input_tokens": 0,
    "cache_creation": {
      "ephemeral_1h_input_tokens": 0,
      "ephemeral_5m_input_tokens": 0
    }
  }
}
```
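The richer native usage object can be folded into a single input-side total when reporting. A small sketch: the field names come from the response above, but the interface and helper name are hypothetical:

```typescript
// Shape of the native usage block shown above; the cache fields are the ones
// the OpenAI translation path currently drops.
interface NativeUsage {
  input_tokens: number;
  output_tokens: number;
  cache_creation_input_tokens: number;
  cache_read_input_tokens: number;
  cache_creation?: {
    ephemeral_1h_input_tokens: number;
    ephemeral_5m_input_tokens: number;
  };
}

// Total tokens consumed on the input side: fresh tokens plus tokens written
// to or served from the prompt cache.
export function totalInputTokens(u: NativeUsage): number {
  return u.input_tokens + u.cache_creation_input_tokens + u.cache_read_input_tokens;
}
```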
Problem: Current copilot-api translates everything through OpenAI
All /v1/messages requests go through:
`non-stream-translation.ts` → Copilot `/chat/completions` → `stream-translation.ts`
This loses:
- `thinking` blocks entirely (dropped on input, never returned on output)
- `top_k` (no OpenAI equivalent; dropped)
- `cache_control`, `cache_creation_input_tokens`
- `signature` on thinking blocks (required for multi-turn reasoning)
- New `claude-opus-4.7`+ thinking API (`adaptive` + `output_config.effort`)
Additionally, `non-stream-translation.ts:305` carries a stale comment ("GitHub Copilot doesn't generate thinking blocks") that is factually wrong.
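To make the loss concrete, here is a hedged sketch of detecting which top-level Anthropic request fields a Chat Completions translation cannot carry. The list mirrors the bullets above; the function itself is illustrative and not copilot-api code (note that `cache_control` actually lives on content blocks, which this top-level check deliberately simplifies):

```typescript
// Anthropic request fields with no OpenAI Chat Completions equivalent.
// A translation layer must either drop these or reject the request.
const UNTRANSLATABLE_FIELDS = ["thinking", "top_k", "cache_control", "output_config"] as const;

// Report which fields of an incoming Anthropic request would be lost if it
// were forwarded through the OpenAI translation path.
export function lostInTranslation(request: Record<string, unknown>): string[] {
  return UNTRANSLATABLE_FIELDS.filter((field) => field in request);
}
```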
Proposed fix: Direct Anthropic pass-through for all Claude models
When the model is Claude (vendor=Anthropic), forward the request directly to ${copilotBaseUrl}/v1/messages and pass the response back unchanged.
```
Client (Claude Code / Anthropic SDK)
  POST /v1/messages {model: "claude-sonnet-4.6", thinking: ...}
        ↓
route.ts → detect Claude model
        ↓
create-messages-native.ts
  → POST api.enterprise.githubcopilot.com/v1/messages (native Anthropic format)
  ← response passed through unchanged (thinking blocks, signature, rich usage all intact)
        ↓
Client receives native Anthropic response
```
Non-Claude models (gpt-4o etc.) keep the existing translation path.
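The routing decision above could look roughly like this. The function names and the prefix heuristic are a sketch; the real implementation would key off the vendor field from the `/models` endpoint rather than the model name:

```typescript
// Sketch of the proposed routing: Claude models bypass translation and hit
// the upstream Anthropic-native endpoint; everything else keeps the existing
// OpenAI translation path.

function isClaudeModel(model: string): boolean {
  // A vendor=Anthropic check against /models metadata would be more robust;
  // the prefix heuristic matches every model in the matrix above.
  return model.startsWith("claude-");
}

export function upstreamUrl(copilotBaseUrl: string, model: string): string {
  return isClaudeModel(model)
    ? `${copilotBaseUrl}/v1/messages`       // native Anthropic pass-through
    : `${copilotBaseUrl}/chat/completions`; // existing translation path
}
```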
Sub-issues
- `create-messages-native.ts`: Anthropic pass-through service client
- `signature` field types + multi-turn test
- `claude-opus-4.7`+ new thinking API (`adaptive` + `output_config.effort`)
- `/models` endpoint (vendor=Anthropic)
- `anthropic-types.ts` with `signature`, richer usage fields