
Copilot API Proxy

Fork Notice:
The original project by ericc-ch is no longer actively maintained. This fork by billxc updates the project for personal use, adding native passthrough support for the Responses API and Claude API.

Warning

This is a reverse-engineered proxy for the GitHub Copilot API. It is not supported by GitHub and may break unexpectedly. Use at your own risk.

Warning

GitHub Security Notice:
Excessive automated or scripted use of Copilot (including rapid or bulk requests, such as via automated tools) may trigger GitHub's abuse-detection systems.
You may receive a warning from GitHub Security, and further anomalous activity could result in temporary suspension of your Copilot access.

GitHub prohibits use of their servers for excessive automated bulk activity or any activity that places undue burden on their infrastructure.

Please review GitHub's terms of service and acceptable use policies, and use this proxy responsibly to avoid account restrictions.

爱发电 (Afdian)


Note: If you are using opencode, you do not need this project. Opencode supports the GitHub Copilot provider out of the box.


Project Overview

A reverse-engineered proxy for the GitHub Copilot API that exposes it as an OpenAI- and Anthropic-compatible service. This allows you to use GitHub Copilot with any tool that supports the OpenAI Chat Completions API or the Anthropic Messages API, including Claude Code.

Features

  • OpenAI & Anthropic Compatibility: Exposes GitHub Copilot as an OpenAI-compatible (/v1/chat/completions, /v1/models, /v1/embeddings) and Anthropic-compatible (/v1/messages) API.
  • Claude Code Integration: Easily configure and launch Claude Code to use Copilot as its backend with a simple command-line flag (--claude-code).
  • Usage Dashboard: A web-based dashboard to monitor your Copilot API usage, view quotas, and see detailed statistics.
  • Rate Limit Control: Manage API usage with rate-limiting options (--rate-limit) and a waiting mechanism (--wait) to prevent errors from rapid requests.
  • Manual Request Approval: Manually approve or deny each API request for fine-grained control over usage (--manual).
  • Token Visibility: Option to display GitHub and Copilot tokens during authentication and refresh for debugging (--show-token).
  • Flexible Authentication: Authenticate interactively or provide a GitHub token directly, suitable for CI/CD environments.
  • Support for Different Account Types: Works with individual, business, and enterprise GitHub Copilot plans.

Demo

copilot-api-demo.mp4

Prerequisites

  • Bun (>= 1.2.x)
  • GitHub account with Copilot subscription (individual, business, or enterprise)

Installation

To install dependencies, run:

bun install

Using with Docker

Using Pre-built Image from GitHub Container Registry

You can pull and run the pre-built image directly from GitHub Container Registry:

# Pull the latest image
docker pull ghcr.io/billxc/copilot-api:latest

# Create a directory on your host to persist the GitHub token and related data
mkdir -p ./copilot-data

# Run the container with a bind mount to persist the token
docker run -p 4141:4141 -v $(pwd)/copilot-data:/root/.local/share/copilot-api ghcr.io/billxc/copilot-api:latest

Available tags:

  • latest - Latest build from the master branch
  • v*.*.* - Specific version tags (e.g., v1.0.0)
  • master - Latest master branch build
  • <sha> - Specific commit builds

Building Locally

Build image from source:

docker build -t copilot-api .

Run the locally built container:

# Create a directory on your host to persist the GitHub token and related data
mkdir -p ./copilot-data

# Run the container with a bind mount to persist the token
# This ensures your authentication survives container restarts

docker run -p 4141:4141 -v $(pwd)/copilot-data:/root/.local/share/copilot-api copilot-api

Note: The GitHub token and related data will be stored in copilot-data on your host. This is mapped to /root/.local/share/copilot-api inside the container, ensuring persistence across restarts.

Docker with Environment Variables

You can pass the GitHub token directly to the container using environment variables:

# Build with GitHub token
docker build --build-arg GH_TOKEN=your_github_token_here -t copilot-api .

# Run with GitHub token
docker run -p 4141:4141 -e GH_TOKEN=your_github_token_here copilot-api

# Run with additional options
docker run -p 4141:4141 -e GH_TOKEN=your_token copilot-api start --verbose --port 4141

Docker Compose Example

Using pre-built image:

version: "3.8"
services:
  copilot-api:
    image: ghcr.io/billxc/copilot-api:latest
    ports:
      - "4141:4141"
    environment:
      - GH_TOKEN=your_github_token_here
    volumes:
      - ./copilot-data:/root/.local/share/copilot-api
    restart: unless-stopped

Or building from source:

version: "3.8"
services:
  copilot-api:
    build: .
    ports:
      - "4141:4141"
    environment:
      - GH_TOKEN=your_github_token_here
    volumes:
      - ./copilot-data:/root/.local/share/copilot-api
    restart: unless-stopped

The Docker image includes:

  • Multi-stage build for optimized image size
  • Non-root user for enhanced security
  • Health check for container monitoring
  • Pinned base image version for reproducible builds

Using with npx

You can run the project directly using npx:

npx xc-copilot-api@latest start

With options:

npx xc-copilot-api@latest start --port 8080

For authentication only:

npx xc-copilot-api@latest auth

Command Structure

Copilot API now uses a subcommand structure with these main commands:

  • start: Start the Copilot API server. This command will also handle authentication if needed.
  • auth: Run GitHub authentication flow without starting the server. This is typically used if you need to generate a token for use with the --github-token option, especially in non-interactive environments.
  • check-usage: Show your current GitHub Copilot usage and quota information directly in the terminal (no server required).
  • debug: Display diagnostic information including version, runtime details, file paths, and authentication status. Useful for troubleshooting and support.

A separate xc-copilot-api-daemon CLI is available for managing the server as a background daemon (install, status, restart, stop, uninstall, logs) on macOS, Linux, and Windows.

Command Line Options

Start Command Options

The following command line options are available for the start command:

  • --port (-p): Port to listen on. Default: 4141
  • --verbose (-v): Enable verbose logging. Default: false
  • --account-type (-a): Account type to use (individual, business, enterprise). Default: individual
  • --manual: Enable manual request approval. Default: false
  • --rate-limit (-r): Rate limit in seconds between requests. Default: none
  • --wait (-w): Wait instead of error when rate limit is hit. Default: false
  • --github-token (-g): Provide GitHub token directly (must be generated using the auth subcommand). Default: none
  • --claude-code (-c): Generate a command to launch Claude Code with Copilot API config. Default: false
  • --show-token: Show GitHub and Copilot tokens on fetch and refresh. Default: false
  • --proxy-env: Initialize proxy from environment variables. Default: false

Auth Command Options

  • --verbose (-v): Enable verbose logging. Default: false
  • --show-token: Show GitHub token on auth. Default: false

Debug Command Options

  • --json: Output debug info as JSON. Default: false

API Endpoints

The server exposes several endpoints to interact with the Copilot API. It provides OpenAI-compatible endpoints and now also includes support for Anthropic-compatible endpoints, allowing for greater flexibility with different tools and services.

OpenAI Compatible Endpoints

These endpoints mimic the OpenAI API structure.

  • POST /v1/chat/completions: Creates a model response for the given chat conversation.
  • GET /v1/models: Lists the currently available models.
  • POST /v1/embeddings: Creates an embedding vector representing the input text.
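For example, a minimal TypeScript call against the chat completions endpoint with the built-in fetch API might look like the sketch below. The model name gpt-4.1 is only illustrative; query /v1/models to see which models your plan actually exposes, and adjust the port if you changed it.

// Minimal sketch: send a chat completion request to the local proxy (default port 4141).
const response = await fetch("http://localhost:4141/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "gpt-4.1", // illustrative model name
    messages: [{ role: "user", content: "Write a haiku about proxies." }],
  }),
});

const completion = await response.json();
console.log(completion.choices[0].message.content);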

Anthropic Compatible Endpoints

These endpoints are designed to be compatible with the Anthropic Messages API.

  • POST /v1/messages: Creates a model response for a given conversation.
  • POST /v1/messages/count_tokens: Calculates the number of tokens for a given set of messages.
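The Anthropic-compatible endpoint can be exercised the same way. The sketch below assumes the standard Anthropic Messages request and response shapes; the model name claude-sonnet-4 is only illustrative.

// Minimal sketch: send an Anthropic Messages request to the local proxy.
const response = await fetch("http://localhost:4141/v1/messages", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "anthropic-version": "2023-06-01",
  },
  body: JSON.stringify({
    model: "claude-sonnet-4", // illustrative model name
    max_tokens: 256,
    messages: [{ role: "user", content: "Summarize this repository in one sentence." }],
  }),
});

const message = await response.json();
// Anthropic Messages responses return an array of content blocks.
console.log(message.content[0].text);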

Usage Monitoring Endpoints

New endpoints for monitoring your Copilot usage and quotas.

  • GET /usage: Get detailed Copilot usage statistics and quota information.
  • GET /token: Get the current Copilot token being used by the API.
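Both are plain GET endpoints, so a short snippet (or the dashboard described below) is enough to read them. The payload shape depends on what GitHub returns, so this sketch simply prints the JSON verbatim.

// Minimal sketch: fetch the usage and quota data that the dashboard also consumes.
const usage = await fetch("http://localhost:4141/usage").then((r) => r.json());
console.log(JSON.stringify(usage, null, 2));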

Example Usage

Using with npx:

# Basic usage with start command
npx xc-copilot-api@latest start

# Run on custom port with verbose logging
npx xc-copilot-api@latest start --port 8080 --verbose

# Use with a business plan GitHub account
npx xc-copilot-api@latest start --account-type business

# Use with an enterprise plan GitHub account
npx xc-copilot-api@latest start --account-type enterprise

# Enable manual approval for each request
npx xc-copilot-api@latest start --manual

# Set rate limit to 30 seconds between requests
npx xc-copilot-api@latest start --rate-limit 30

# Wait instead of error when rate limit is hit
npx xc-copilot-api@latest start --rate-limit 30 --wait

# Provide GitHub token directly
npx xc-copilot-api@latest start --github-token ghp_YOUR_TOKEN_HERE

# Run only the auth flow
npx xc-copilot-api@latest auth

# Run auth flow with verbose logging
npx xc-copilot-api@latest auth --verbose

# Show your Copilot usage/quota in the terminal (no server needed)
npx xc-copilot-api@latest check-usage

# Display debug information for troubleshooting
npx xc-copilot-api@latest debug

# Display debug information in JSON format
npx xc-copilot-api@latest debug --json

# Initialize proxy from environment variables (HTTP_PROXY, HTTPS_PROXY, etc.)
npx xc-copilot-api@latest start --proxy-env

Using the Usage Viewer

After starting the server, a URL to the Copilot Usage Dashboard will be displayed in your console. This dashboard is a web interface for monitoring your API usage.

  1. Start the server. For example, using npx:

npx xc-copilot-api@latest start

  2. The server will output a URL to the usage viewer. Copy and paste this URL into your browser. It will look something like this: https://billxc.github.io/copilot-api?endpoint=http://localhost:4141/usage
    • If you use the start.bat script on Windows, this page will open automatically.

The dashboard provides a user-friendly interface to view your Copilot usage data:

  • API Endpoint URL: The dashboard is pre-configured to fetch data from your local server endpoint via the URL query parameter. You can change this URL to point to any other compatible API endpoint.
  • Fetch Data: Click the "Fetch" button to load or refresh the usage data. The dashboard will automatically fetch data on load.
  • Usage Quotas: View a summary of your usage quotas for different services like Chat and Completions, displayed with progress bars for a quick overview.
  • Detailed Information: See the full JSON response from the API for a detailed breakdown of all available usage statistics.
  • URL-based Configuration: You can also specify the API endpoint directly in the URL using a query parameter. This is useful for bookmarks or sharing links. For example: https://billxc.github.io/copilot-api?endpoint=http://your-api-server/usage

Using with Claude Code

This proxy can be used to power Claude Code, Anthropic's conversational AI coding assistant for developers.

There are two ways to configure Claude Code to use this proxy:

Interactive Setup with --claude-code flag

To get started, run the start command with the --claude-code flag:

npx xc-copilot-api@latest start --claude-code

You will be prompted to select a primary model and a "small, fast" model for background tasks. After selecting the models, a command will be copied to your clipboard. This command sets the necessary environment variables for Claude Code to use the proxy.

Paste and run this command in a new terminal to launch Claude Code.

Manual Configuration with settings.json

Alternatively, you can configure Claude Code by creating a .claude/settings.json file in your project's root directory. This file should contain the environment variables Claude Code needs, so you don't have to run the interactive setup every time.

Here is an example .claude/settings.json file:

{
  "env": {
    "ANTHROPIC_BASE_URL": "http://localhost:4141",
    "ANTHROPIC_AUTH_TOKEN": "dummy",
    "ANTHROPIC_MODEL": "gpt-4.1",
    "ANTHROPIC_DEFAULT_SONNET_MODEL": "gpt-4.1",
    "ANTHROPIC_SMALL_FAST_MODEL": "gpt-4.1",
    "ANTHROPIC_DEFAULT_HAIKU_MODEL": "gpt-4.1",
    "DISABLE_NON_ESSENTIAL_MODEL_CALLS": "1",
    "CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC": "1"
  },
  "permissions": {
    "deny": [
      "WebSearch"
    ]
  }
}

You can find more options here: Claude Code settings

You can also read more about IDE integration here: Add Claude Code to your IDE

Running from Source

The project can be run from source in several ways:

Development Mode

bun run dev

Production Mode

bun run start

Usage Tips

  • To avoid hitting GitHub Copilot's rate limits, you can use the following flags:
    • --manual: Enables manual approval for each request, giving you full control over when requests are sent.
    • --rate-limit <seconds>: Enforces a minimum time interval between requests. For example, xc-copilot-api start --rate-limit 30 will ensure there's at least a 30-second gap between requests.
    • --wait: Use this with --rate-limit. It makes the server wait for the cooldown period to end instead of rejecting the request with an error. This is useful for clients that don't automatically retry on rate limit errors.
  • If you have a GitHub business or enterprise plan account with Copilot, use the --account-type flag (e.g., --account-type business). See the official documentation for more details.

Update Log

Native Passthrough for Responses API and Claude API

The proxy now supports native passthrough for both the OpenAI Responses API (/responses) and the Anthropic Messages API (/v1/messages, /v1/messages/count_tokens) when targeting supported models.

What changed

  • Claude API passthrough: When the request model starts with claude, the entire Anthropic Messages request is forwarded directly to the upstream Copilot endpoint without any local translation. The request body, headers (anthropic-version, anthropic-beta, etc.), and response are passed through as-is.

  • Responses API passthrough: When the request model starts with gpt, the OpenAI Responses API request is forwarded directly to the upstream /responses endpoint without transformation.

  • Non-matching models fall back to local translation: For non-Claude models hitting /v1/messages, the existing Anthropic-to-OpenAI translation layer is still used, converting Anthropic Messages format to OpenAI Chat Completions and translating responses back.
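The passthrough decision described above boils down to a model-prefix check. The sketch below is illustrative only, not the project's actual code; it shows the rule for /v1/messages, and the /responses endpoint applies the same idea with a gpt prefix.

// Illustrative sketch of the passthrough decision for /v1/messages.
function routeAnthropicMessages(model: string): "passthrough" | "translate" {
  // Claude models: body and headers (anthropic-version, anthropic-beta, ...) are forwarded as-is.
  if (model.startsWith("claude")) return "passthrough";
  // Other models: translate Anthropic Messages to OpenAI Chat Completions and translate responses back.
  return "translate";
}

console.log(routeAnthropicMessages("claude-sonnet-4")); // "passthrough"
console.log(routeAnthropicMessages("gpt-4.1"));         // "translate"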

Why this matters

  • Full protocol fidelity: Native passthrough preserves all upstream features (tool use, streaming, thinking blocks, vision, etc.) without any translation loss.
  • Simpler architecture: The passthrough path is a thin wrapper — no body parsing, no model normalization, no feature detection. What goes in is what goes out.
  • Better compatibility: Claude Code and other Anthropic-native clients get the exact upstream behavior they expect, including proper tool_use/tool_result round trips, count_tokens support, and streaming.
