Comparing changes
base repository: ericc-ch/copilot-api
base: master
head repository: EnTaroYan/copilot-api
compare: master
- 18 commits
- 32 files changed
- 3 contributors
Commits on Mar 30, 2026
feat: add OpenAI Responses API endpoint support
Add /v1/responses and /responses endpoints that forward requests directly to GitHub Copilot's responses API. This enables support for responses-only models like gpt-5.4 and gpt-5.3-codex.
Commit ee3fcd3
Commits on Apr 22, 2026
local: /models/all debug route, idleTimeout, responses input guards
- Add GET /models/all route exposing the raw upstream model list
- Raise Bun idleTimeout to 255s so long /v1/responses calls don't close
- Guard hasVisionContent/hasAgentMessages against non-array input

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
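The non-array guard from the last bullet might look like the sketch below. The helper names come from the commit message, but the signatures, the `Message` shape, and the `image_url` part type are assumptions for illustration, not taken from the repo:

```typescript
// Sketch: inspect a chat payload's messages for vision (image) content,
// tolerating non-array input instead of throwing.
interface Message {
  role: string
  content: unknown
}

function hasVisionContent(messages: unknown): boolean {
  // Guard: clients sometimes send a string or object here, not an array.
  if (!Array.isArray(messages)) return false
  return messages.some(
    (m: Message) =>
      Array.isArray(m.content)
      && m.content.some((part) => part?.type === "image_url"),
  )
}
```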
Commit ce0327b
Commits on Apr 23, 2026
feat: strip unsupported image_generation tool from responses payload
Codex and similar clients require an image_generation tool on startup. Copilot upstream does not support it and returns 400. Silently drop entries whose type is image_generation before forwarding, so client startup validation passes while the rest of the payload still works.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
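A minimal sketch of that sanitization, assuming the tool entries follow the OpenAI Responses `tools` shape (`{ type: string, ... }`); the real function in the repo may differ:

```typescript
// Drop tool entries the Copilot upstream rejects before forwarding.
interface ResponsesPayload {
  tools?: Array<{ type: string; [key: string]: unknown }>
  [key: string]: unknown
}

function stripUnsupportedTools(payload: ResponsesPayload): ResponsesPayload {
  if (!Array.isArray(payload.tools)) return payload
  return {
    ...payload,
    tools: payload.tools.filter((tool) => tool.type !== "image_generation"),
  }
}
```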
Commit 4e4a77c
refactor: apply model name normalization across all routes
Extract translateModelName into a shared helper in src/lib/translate-model.ts and apply it in the /v1/chat/completions and /v1/responses handlers in addition to the existing /v1/messages path. This lets clients that request subagent-specific model names (e.g. claude-opus-4-6, claude-sonnet-4-5) reach Copilot successfully on every route, since the upstream only recognizes the base family names (claude-opus-4, claude-sonnet-4).

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Commit aad8ad5
feat: preserve specific model version when normalizing claude ids
Claude Code subagents / agent teams send model ids using dash-separated version numbers (e.g. claude-opus-4-6, claude-sonnet-4-5, claude-haiku-4-5). Copilot advertises those models with a dot in the version (claude-opus-4.6, ...), so the dash form was rejected with model_not_supported. The previous normalizer also collapsed everything to claude-opus-4, which is no longer an advertised id and produced 400s in /v1/messages traffic.

The new translateModelName consults state.models and:

1. returns the name unchanged if it is already advertised,
2. tries converting a trailing -N-M[-suffix] to .N-M[-suffix] and uses that if advertised (covers claude-opus-4-6-1m too),
3. falls back to the base claude-<family>-N id if advertised,
4. otherwise returns the original so upstream errors remain visible.

If the model list has not been fetched yet, fall back to the legacy collapse behaviour so behaviour is no worse than before.

Add unit tests covering opus/sonnet/haiku, the -1m suffix, fallback to the base family id, and the pre-model-list legacy path.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
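The four steps above can be sketched as a pure function. The real helper in src/lib/translate-model.ts reads state.models; here the advertised ids are passed in explicitly, and the regexes are illustrative rather than copied from the repo:

```typescript
// Normalize a requested Claude model id against the advertised model list.
function translateModelName(
  requested: string,
  advertised: Set<string> | undefined,
): string {
  // Before /models has been fetched: legacy collapse to the base family id.
  if (!advertised || advertised.size === 0) {
    return requested.replace(/^(claude-(?:opus|sonnet|haiku)-\d+).*$/, "$1")
  }
  // 1. Already advertised: pass through unchanged.
  if (advertised.has(requested)) return requested
  // 2. Dash-to-dot version: claude-opus-4-6[-1m] -> claude-opus-4.6[-1m].
  const dotted = requested.replace(/-(\d+)-(\d)/, "-$1.$2")
  if (advertised.has(dotted)) return dotted
  // 3. Base family id: claude-opus-4-6 -> claude-opus-4.
  const base = requested.match(/^(claude-(?:opus|sonnet|haiku)-\d+)/)?.[1]
  if (base && advertised.has(base)) return base
  // 4. Unknown: return as-is so the upstream error stays visible.
  return requested
}
```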
Commit 81b4b20
Commits on Apr 27, 2026
feat: support GPT-5.5/responses-only models for Claude Code
GitHub Copilot upstream gates some models (gpt-5.5, gpt-5-pro) to the /responses endpoint and rejects /chat/completions with unsupported_api_for_model. Claude Code (which goes through /v1/messages -> Anthropic-to-ChatCompletions translation) and OpenAI SDKs hitting /chat/completions therefore couldn't use these models at all. This commit fixes three concrete upstream rejections:

1. unsupported_api_for_model on /chat/completions for gpt-5.5: a new responses-bridge.ts transparently translates ChatCompletions payloads <-> Responses API in createChatCompletions(). Detection is a static set ({gpt-5.5, gpt-5-pro}) plus a runtime cache populated by adaptive fallback when upstream returns the error. Streaming tool_calls are tracked by upstream item_id -> dense index to preserve ordering across parallel calls. Unsupported CC fields (n>1, logit_bias, response_format, seed, logprobs, etc.) raise an explicit error rather than being silently dropped.

2. "Unsupported parameter: 'max_tokens' ... use 'max_completion_tokens'": the GPT-5 family and o1/o3/o4 reasoning models reject the legacy field. adaptPayloadForModel() in create-chat-completions.ts renames it on the way out, only for those model families.

3. "Invalid 'user': string too long (max 64)": Claude Code's metadata.user_id is ~150 chars. A new clampUserField() in api-config.ts truncates it to 64. Applied at all three upstream call sites: /chat/completions, /responses bridge, /responses direct.

Tests: +12 covering bridge detection, system+developer instructions folding, tools/tool_choice mapping, tool result round-trip, streaming text deltas, parallel tool_calls index isolation, length finish_reason mapping, unsupported field rejection, max_tokens renaming for the relevant model families, and user clamping.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
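Fixes 2 and 3 are small payload transforms that can be sketched directly. The function names match the commit message, but the bodies below are illustrative: the model-family regex and the exact clamping rule are assumptions, not the repo's code:

```typescript
// Fix 3: upstream caps the `user` field at 64 characters; Claude Code's
// metadata.user_id is ~150 chars, so truncate before forwarding.
function clampUserField(user: string | undefined): string | undefined {
  return user !== undefined && user.length > 64 ? user.slice(0, 64) : user
}

// Fix 2: GPT-5-family and o1/o3/o4 reasoning models reject the legacy
// `max_tokens` field; rename it to `max_completion_tokens` for them only.
function adaptPayloadForModel(
  payload: Record<string, unknown>,
): Record<string, unknown> {
  const model = String(payload.model ?? "")
  const needsRename = /^(gpt-5|o[134])/.test(model)
  if (!needsRename || payload.max_tokens === undefined) return payload
  const { max_tokens, ...rest } = payload
  return { ...rest, max_completion_tokens: max_tokens }
}
```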
Commit dc6d654
chore: ignore local pid and start/stop scripts
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Commit 57f527d
feat: detect responses-only models from upstream supported_endpoints
GitHub Copilot's /models endpoint exposes a supported_endpoints array on most models (e.g. ["/chat/completions"], ["/responses"], or both). This is the authoritative signal for which upstream API a model accepts.

isResponsesOnlyModel() now consults state.models first: if the model has a non-empty supported_endpoints, the decision is taken from there (supports /responses but not /chat/completions => responses-only). The hardcoded {gpt-5.5, gpt-5-pro} set and the runtime-learned cache stay as fallbacks for models whose entry omits supported_endpoints, or for the brief window before /models is fetched on startup.

Also extends the Model type with the optional supported_endpoints and model_picker_category fields actually returned by upstream. Test added to verify the upstream signal overrides the static fallback in both directions.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
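The decision order described above, sketched under the assumption that state.models is passed in as a plain array; the model-entry shape is trimmed to the fields the check needs:

```typescript
interface ModelEntry {
  id: string
  supported_endpoints?: Array<string>
}

// Fallbacks for models whose entry omits supported_endpoints, or before
// the /models fetch completes on startup.
const STATIC_RESPONSES_ONLY = new Set(["gpt-5.5", "gpt-5-pro"])
const learnedResponsesOnly = new Set<string>()

function isResponsesOnlyModel(id: string, models: Array<ModelEntry>): boolean {
  const endpoints = models.find((m) => m.id === id)?.supported_endpoints
  if (endpoints && endpoints.length > 0) {
    // Authoritative upstream signal: overrides the fallbacks either way.
    return (
      endpoints.includes("/responses")
      && !endpoints.includes("/chat/completions")
    )
  }
  return STATIC_RESPONSES_ONLY.has(id) || learnedResponsesOnly.has(id)
}
```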
Commit ec8fe30
chore: bump VS Code fallback version to 1.117.0
The runtime version is fetched from the AUR PKGBUILD (currently 1.117.0) on startup. This bumps the offline fallback used when that fetch fails, so the editor-version header sent upstream stays close to current.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Commit f9821b7
feat: dynamically fetch latest copilot-chat version from VS Marketplace
Mirrors the existing VS Code version fetch pattern. Queries the public extensionquery API for GitHub.copilot-chat at startup and uses the returned stable version for the editor-plugin-version and user-agent headers. Falls back to 0.45.1 (the latest stable at the time of writing) on network failure.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Commit 160bb9e
chore: bump X-GitHub-Api-Version to 2025-10-01
The 2025-10-01 schema is the latest accepted by api.githubcopilot.com and is a strict superset of 2025-04-01: model objects now include `billing.{is_premium,multiplier,restricted_to}`, `is_chat_default`, `is_chat_fallback`, and `info_messages.{code,message}`.

Empirically verified: only 2025-04-01, 2025-05-01, and 2025-10-01 are accepted by the server; any other value yields `bad request: error: invalid apiVersion`. The upstream copilot-chat client still uses 2025-05-01 on main, but the server already serves the richer 2025-10-01 schema.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Commit 1100f05
feat: surface premium-request multiplier on /models/all and startup log
- Extend Model type with billing.{is_premium, multiplier, restricted_to} so the field is type-safe everywhere.
- /models/all already returns the raw upstream payload, so once the X-GitHub-Api-Version bump landed it transparently exposes billing to clients without any handler change.
- Startup 'Available models' log now appends the multiplier:
  - claude-opus-4.7 (7.5x)
  - claude-sonnet-4.5 (1x)
  - gpt-4o (free)
  - gpt-4.1 ← no annotation when upstream omits billing
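The annotation format shown in the log excerpt can be sketched as below. One assumption to flag: this treats a multiplier of 0 as "free", which matches the gpt-4o example but is inferred, not stated in the commit:

```typescript
interface Billing {
  is_premium?: boolean
  multiplier?: number
}

// Format one line of the startup 'Available models' log.
function formatModelLine(id: string, billing?: Billing): string {
  // No annotation when upstream omits billing info entirely.
  if (!billing || billing.multiplier === undefined) return id
  // Assumption: multiplier 0 means the model costs no premium requests.
  return billing.multiplier === 0
    ? `${id} (free)`
    : `${id} (${billing.multiplier}x)`
}
```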
Commit f6cc055
feat: include multiplier and is_premium in /models response
The slim OpenAI-compatible model listing (GET /models, GET /v1/models) now also surfaces billing.multiplier and billing.is_premium per model, so clients that don't need the full upstream payload from /models/all can still see the premium-request cost. Both fields are passed through verbatim from upstream and will be `undefined` (omitted from JSON) for models where upstream does not report billing info.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
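A sketch of that passthrough, relying on JSON.stringify dropping keys whose value is `undefined` (which is why absent billing simply vanishes from the response); the entry shape is illustrative, not the repo's exact type:

```typescript
interface UpstreamModel {
  id: string
  billing?: { is_premium?: boolean; multiplier?: number }
}

// Build one entry of the slim OpenAI-compatible listing, passing billing
// fields through verbatim; undefined fields are omitted when serialized.
function toSlimEntry(model: UpstreamModel) {
  return {
    id: model.id,
    object: "model" as const,
    multiplier: model.billing?.multiplier,
    is_premium: model.billing?.is_premium,
  }
}
```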
Commit 1cf2297
Commits on May 9, 2026
fix(responses): strip unsupported service_tier field before forwarding
Some clients (e.g. Codex CLI, OpenAI SDKs) include a top-level `service_tier` field per the OpenAI Responses spec. GitHub Copilot's upstream /responses endpoint rejects this with:

{ code: 'unsupported_value', param: 'service_tier', type: 'invalid_request_error', message: 'service_tier is not supported' }

Drop the field silently before forwarding, mirroring the existing stripUnsupportedTools sanitization, so the request succeeds with the upstream default tier instead of bubbling a 400 to the caller.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
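A minimal sketch of the strip, with an illustrative function name (the commit doesn't say where the drop happens in the repo):

```typescript
// Remove the top-level `service_tier` field before forwarding, so the
// upstream default tier applies instead of a 400 reaching the caller.
function stripServiceTier(
  payload: Record<string, unknown>,
): Record<string, unknown> {
  if (!("service_tier" in payload)) return payload
  const { service_tier: _dropped, ...rest } = payload
  return rest
}
```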
Commit fc79360
feat(auth): support GHEC tenants via configurable GitHub base URL + home path
Adds CLI flags and matching env vars to point copilot-api at a GitHub Enterprise Cloud (data residency) tenant and to isolate per-instance on-disk state when running an account pool.

- New `--github-base-url` (env: COPILOT_API_GITHUB_BASE_URL) on `start` and `auth`. Defaults to https://github.com. The REST API base URL is derived by prefixing the host with `api.`, so https://acme.ghe.com becomes https://api.acme.ghe.com (works for both github.com and *.ghe.com tenants).
- New `--home` (env: COPILOT_API_HOME) overrides the directory used to derive APP_DIR (<home>/.local/share/copilot-api). Multiple instances with different --home values keep their GitHub tokens fully isolated.
- Hardcoded GITHUB_CLIENT_ID switched to 01ab8ac9400c4e429b23 (the client_id observed in current Copilot device-flow traffic, which works for both github.com and ghe.com tenants).
- New per-home instance lockfile (<APP_DIR>/instance.lock with PID). `start` refuses to launch if another live instance already holds the same --home, with a clear error pointing at the conflict. Stale locks (PID no longer alive) are silently overwritten. Pass `--force` to bypass. Exit/SIGINT/SIGTERM handlers clean up best-effort.
- Internal: GitHub URL constants moved to a new lib/runtime-config module with getter functions; consumers in services/github/* updated.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
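The base-URL derivation from the first bullet is a one-liner worth pinning down; this sketch assumes a well-formed URL input and an illustrative function name:

```typescript
// Derive the REST API base URL by prefixing the host with `api.`,
// e.g. https://acme.ghe.com -> https://api.acme.ghe.com.
function deriveApiBaseUrl(githubBaseUrl: string): string {
  const url = new URL(githubBaseUrl)
  return `${url.protocol}//api.${url.host}`
}
```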
Commit 8b3b32f
Use Copilot API endpoint from token response
GHEC tokens carry a region 'stamp' (e.g. prod-wus3-01) in the JWT, and the public api.githubcopilot.com host returns 400 'unknown stamp' when called with such tokens. Read endpoints.api from the token response and prefer it over the account-type-derived URL, matching what VS Code does. Fall back to the previous derivation when the field is absent (public github.com tokens).

Also surface the upstream URL + status + body when get-models or get-copilot-token fail at startup; the bare HTTPError thrown from cacheModels was previously swallowed before forwardError could log it, making first-run diagnostics on GHEC very hard.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
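The preference order can be sketched as below; the token-response shape is reduced to the one field the commit mentions (endpoints.api), and the helper name is illustrative:

```typescript
interface CopilotTokenResponse {
  token: string
  endpoints?: { api?: string }
}

// Prefer the endpoint embedded in the token response (GHEC tokens point
// at their regional stamp host); fall back to the account-type-derived
// URL for public github.com tokens, where the field is absent.
function resolveCopilotBaseUrl(
  tokenResponse: CopilotTokenResponse,
  derivedUrl: string,
): string {
  return tokenResponse.endpoints?.api ?? derivedUrl
}
```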
Commit 0247ca4
docs: document --home / --github-base-url for multi-account pools and GHEC

- Add --home, --github-base-url, --force to the start options table.
- New 'Multi-Account Pool / GitHub Enterprise (GHEC)' section explaining per-instance token directories, the instance.lock, and that the Copilot upstream endpoint is auto-discovered from the token response (no extra flag for GHEC region stamps).
- Note that --account-type is usually unnecessary now that we honor endpoints.api from the token response.
- gitignore the local pool helpers (start_pool.sh, stop_pool.sh, pool.manifest) alongside the existing single-instance scripts.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Commit d9806a6
Commits on May 15, 2026
chore: drop dead .gitignore entries for relocated scripts
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Commit ffcedc3