Convert GitHub Copilot into an OpenAI-compatible API with one click. Works with Continue and other plugins.
- One-Click Auth - Automatic GitHub OAuth device-flow on first run, no manual token copying
- Auto Refresh - Background automatic Copilot Token refresh, zero intervention needed
- Remember Login - Token saved locally, auto-restored on next launch
- OpenAI Compatible - Standard OpenAI API format, works with any compatible client
- Bypass Education/Free plan restrictions on models like Claude
GitHub account with one of the following enabled:
- Copilot Free
- Copilot Pro
- Copilot Education
Python 3.7+
pip install flask requests
python copilot_proxy.py
The script will automatically open your browser. Follow the prompts:
[1/3] Requesting device verification code...
[2/3] Please complete authorization in your browser:
┌─────────────────────────────┐
│ │
│ Code: ABCD-1234 │
│ │
└─────────────────────────────┘
Open: https://github.com/login/device
(Browser opened automatically)
[3/3] Waiting for authorization...
[✓] Authorization successful!
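The authorization steps above follow GitHub's standard OAuth device flow. The sketch below shows roughly what the script does behind the scenes; the `CLIENT_ID` is a placeholder (the real script ships its own), while the two GitHub endpoints are the documented device-flow URLs:

```python
# Hedged sketch of the OAuth device flow automated in step [1/3]-[3/3].
import time
import requests

GITHUB = "https://github.com"
CLIENT_ID = "<your-oauth-app-client-id>"  # placeholder, not the script's real ID

def request_device_code(client_id=CLIENT_ID):
    """Step [1/3]: ask GitHub for a device/user code pair."""
    r = requests.post(f"{GITHUB}/login/device/code",
                      data={"client_id": client_id, "scope": "read:user"},
                      headers={"Accept": "application/json"}, timeout=10)
    r.raise_for_status()
    return r.json()  # device_code, user_code, verification_uri, interval

def format_code_box(user_code):
    """Draw the boxed code shown in step [2/3]."""
    inner = f" Code: {user_code} "
    width = max(len(inner) + 2, 29)
    top = "┌" + "─" * width + "┐"
    blank = "│" + " " * width + "│"
    mid = "│" + inner.center(width) + "│"
    bot = "└" + "─" * width + "┘"
    return "\n".join([top, blank, mid, blank, bot])

def poll_for_token(device_code, interval, client_id=CLIENT_ID):
    """Step [3/3]: poll until the user authorizes in the browser."""
    while True:
        time.sleep(interval)
        r = requests.post(f"{GITHUB}/login/oauth/access_token",
                          data={"client_id": client_id,
                                "device_code": device_code,
                                "grant_type": "urn:ietf:params:oauth:grant-type:device_code"},
                          headers={"Accept": "application/json"}, timeout=10)
        body = r.json()
        if "access_token" in body:
            return body["access_token"]   # the long-lived ghu_ token
        if body.get("error") == "slow_down":
            interval += 5                 # GitHub asked us to back off
        elif body.get("error") != "authorization_pending":
            raise RuntimeError(body)
```

If the user never completes the browser step, GitHub eventually returns `expired_token` and the poll loop raises.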
Open Continue's config.yaml and add model configurations:
models:
  # Claude Sonnet
  - name: Claude Sonnet 4.6 (Copilot_proxy)
    provider: openai
    model: claude-sonnet-4.6
    apiBase: http://localhost:{PROXY_PORT}
    apiKey: "dummy"
    roles:
      - chat
      - edit
  # Claude Opus
  - name: Claude Opus 4.6 (Copilot_proxy)
    provider: openai
    model: claude-opus-4.6
    apiBase: http://localhost:{PROXY_PORT}
    apiKey: "dummy"
    roles:
      - chat
  # GPT-5.4
  - name: GPT-5.4 (Copilot_proxy)
    provider: openai
    model: gpt-5.4
    apiBase: http://localhost:{PROXY_PORT}
    apiKey: "dummy"
    roles:
      - chat
  # Gemini 3.1 Pro preview
  - name: Gemini 3.1 Pro Preview (Copilot_proxy)
    provider: openai
    model: gemini-3.1-pro-preview
    apiBase: http://localhost:{PROXY_PORT}
    apiKey: "dummy"
    roles:
      - chat
  # Code Completion (Tab)
  - name: Codex Mini (Copilot)
    provider: openai
    model: gpt-5.1-codex-mini
    apiBase: http://localhost:{PROXY_PORT}
    apiKey: "dummy"
    roles:
      - autocomplete
Open the Continue sidebar in VS Code, select a model, and start chatting.
Edit the top of copilot_proxy.py:
PROXY_PORT = 15432  # Change to your desired port
Delete the token file and re-run:
rm .copilot_token.json
python copilot_proxy.py
Any tool that supports the OpenAI API format can connect:
API Base: http://localhost:15432
API Key: any value (e.g. "dummy")
Model: claude-sonnet-4.6
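A quick way to verify the proxy is up is a plain `/chat/completions` request using the connection details above. This is a minimal sketch assuming the default port 15432; the `ask` helper name is illustrative:

```python
# Smoke-test the running proxy with an OpenAI-format chat request.
import requests

def build_chat_payload(model, prompt):
    """Assemble a minimal OpenAI-format chat completion request body."""
    return {"model": model,
            "messages": [{"role": "user", "content": prompt}]}

def ask(prompt, model="claude-sonnet-4.6", base="http://localhost:15432"):
    # The proxy ignores the key, so any value works (e.g. "dummy").
    r = requests.post(f"{base}/chat/completions",
                      json=build_chat_payload(model, prompt),
                      headers={"Authorization": "Bearer dummy"},
                      timeout=60)
    r.raise_for_status()
    return r.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("Say hello in one word."))
```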
Continue / Other Clients
│
│ POST /chat/completions (OpenAI format)
▼
┌─────────────────────┐
│ copilot_proxy.py │ localhost:15432
│ │
│ • OAuth device flow │
│ • Auto token refresh│
│ • Request forwarding│
└─────────────────────┘
│
│ Bearer <copilot_token>
▼
GitHub Copilot API
│
▼
Claude / GPT / Gemini
- OAuth Authorization - Obtain a long-lived ghu_token via GitHub device flow
- Token Exchange - Exchange the ghu_token for a short-lived Copilot token (~30 min validity)
- Auto Refresh - Background refresh of the Copilot token every 25 minutes
- Request Forwarding - Forward OpenAI-format requests to the Copilot API with valid credentials
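The exchange-and-refresh steps above can be sketched as follows. The exchange endpoint here is an assumption made for illustration; the 30-minute validity and 25-minute refresh cadence come from the description above:

```python
# Sketch of the token exchange and background refresh loop.
import threading
import time
import requests

REFRESH_EVERY = 25 * 60   # refresh cadence, per the description above
TOKEN_TTL = 30 * 60       # Copilot tokens are valid for ~30 minutes

def needs_refresh(issued_at, now, ttl=TOKEN_TTL, margin=5 * 60):
    """True once the token is within `margin` seconds of expiring."""
    return now - issued_at >= ttl - margin

def exchange_token(ghu_token):
    # Assumed endpoint, shown only to illustrate the exchange step.
    r = requests.get("https://api.github.com/copilot_internal/v2/token",
                     headers={"Authorization": f"token {ghu_token}"},
                     timeout=10)
    r.raise_for_status()
    return r.json()["token"]  # the short-lived Copilot token

def start_refresh_loop(ghu_token, store):
    """Keep `store['copilot_token']` fresh from a daemon thread."""
    def loop():
        while True:
            store["copilot_token"] = exchange_token(ghu_token)
            store["issued_at"] = time.time()
            time.sleep(REFRESH_EVERY)
    threading.Thread(target=loop, daemon=True).start()
```

Refreshing every 25 minutes leaves a ~5-minute safety margin before the 30-minute token actually expires.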
Make sure your GitHub account has Copilot enabled: https://github.com/settings/copilot
The token may have expired. Restart the script to auto-refresh.
Different Copilot subscription tiers have access to different models. The free plan may not support all models.
Each computer needs to run the script and authorize separately. Do not copy .copilot_token.json to other machines.
- Tokens are only stored locally in your .copilot_token.json file
- All requests communicate directly with the official GitHub API
- No third-party servers involved
MIT