# GitHub Copilot SDK for .NET

SDK for programmatic control of the GitHub Copilot CLI.

> **Note:** This SDK is in technical preview and may change in breaking ways.
## Installation

```shell
dotnet add package GitHub.Copilot.SDK
```

## Quick Start

```csharp
using GitHub.Copilot.SDK;

// Create and start the client
await using var client = new CopilotClient();
await client.StartAsync();

// Create a session
await using var session = await client.CreateSessionAsync(new SessionConfig
{
    Model = "gpt-5"
});

// Wait for the response using the session idle event
var done = new TaskCompletionSource();
session.On(evt =>
{
    if (evt is AssistantMessageEvent msg)
    {
        Console.WriteLine(msg.Data.Content);
    }
    else if (evt is SessionIdleEvent)
    {
        done.SetResult();
    }
});

// Send a message and wait for completion
await session.SendAsync(new MessageOptions { Prompt = "What is 2+2?" });
await done.Task;
```

## CopilotClient

```csharp
new CopilotClient(CopilotClientOptions? options = null)
```

Options:

- `CliPath` - Path to the CLI executable (default: `"copilot"` from PATH)
- `CliArgs` - Extra arguments prepended before SDK-managed flags
- `CliUrl` - URL of an existing CLI server to connect to (e.g., `"localhost:8080"`). When provided, the client will not spawn a CLI process.
- `Port` - Server port (default: 0 for random)
- `UseStdio` - Use stdio transport instead of TCP (default: true)
- `LogLevel` - Log level (default: `"info"`)
- `AutoStart` - Auto-start the server (default: true)
- `AutoRestart` - Auto-restart on crash (default: true)
- `Cwd` - Working directory for the CLI process
- `Environment` - Environment variables to pass to the CLI process
- `Logger` - `ILogger` instance for SDK logging
### StartAsync()

Start the CLI server and establish a connection.

### StopAsync()

Stop the server and close all sessions. Throws if errors are encountered during cleanup.

### Force stop

Force-stop the CLI server without graceful cleanup. Use when `StopAsync()` takes too long.

### CreateSessionAsync(config)

Create a new conversation session.
Config:

- `SessionId` - Custom session ID
- `Model` - Model to use (`"gpt-5"`, `"claude-sonnet-4.5"`, etc.)
- `Tools` - Custom tools exposed to the CLI
- `SystemMessage` - System message customization
- `AvailableTools` - List of tool names to allow
- `ExcludedTools` - List of tool names to disable
- `Provider` - Custom API provider configuration (BYOK)
- `Streaming` - Enable streaming of response chunks (default: false)
Additional client methods:

- Resume an existing session.
- Ping the server to check connectivity.
- Get the current connection state.
- List all available sessions.
- Delete a session and its data from disk.
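As a rough sketch of how these calls might look in practice — note that the method names below (`ResumeSessionAsync`, `PingAsync`, `ListSessionsAsync`, `DeleteSessionAsync`) are assumptions based on the descriptions above, not confirmed API; check the package's actual surface for the real signatures:

```csharp
// Hypothetical method names - verify against the actual SDK surface.
var resumed = await client.ResumeSessionAsync("my-session-id"); // resume by ID
await client.PingAsync();                                      // connectivity check
var sessions = await client.ListSessionsAsync();               // enumerate sessions
await client.DeleteSessionAsync("old-session-id");             // remove session data from disk
```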
## Session

Represents a single conversation session.

Properties:

- `SessionId` - The unique identifier for this session
### SendAsync(options)

Send a message to the session.

Options:

- `Prompt` - The message/prompt to send
- `Attachments` - File attachments
- `Mode` - Delivery mode (`"enqueue"` or `"immediate"`)

Returns the message ID.
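For example, the returned message ID can be captured when queueing a prompt; this sketch uses one of the documented `Mode` string values:

```csharp
// Capture the returned message ID; "enqueue" is one of the documented modes.
var messageId = await session.SendAsync(new MessageOptions
{
    Prompt = "Summarize the last diff",
    Mode = "enqueue"
});
Console.WriteLine($"Queued message: {messageId}");
```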
### On(handler)

Subscribe to session events. Returns a disposable to unsubscribe.

```csharp
var subscription = session.On(evt =>
{
    Console.WriteLine($"Event: {evt.Type}");
});

// Later...
subscription.Dispose();
```

Other session members:

- Abort the currently processing message in this session.
- Get all events/messages from this session.
- Dispose the session and free resources.
## Events

Sessions emit various events during processing. Each event type is a class that inherits from `SessionEvent`:

- `UserMessageEvent` - User message added
- `AssistantMessageEvent` - Assistant response
- `ToolExecutionStartEvent` - Tool execution started
- `ToolExecutionCompleteEvent` - Tool execution completed
- `SessionStartEvent` - Session started
- `SessionIdleEvent` - Session is idle
- `SessionErrorEvent` - Session error occurred
- And more...
Use pattern matching to handle specific event types:

```csharp
session.On(evt =>
{
    switch (evt)
    {
        case AssistantMessageEvent msg:
            Console.WriteLine(msg.Data.Content);
            break;
        case SessionErrorEvent err:
            Console.WriteLine($"Error: {err.Data.Message}");
            break;
    }
});
```

## Streaming

Enable streaming to receive assistant response chunks as they're generated:
```csharp
var session = await client.CreateSessionAsync(new SessionConfig
{
    Model = "gpt-5",
    Streaming = true
});

// Use a TaskCompletionSource to wait for completion
var done = new TaskCompletionSource();
session.On(evt =>
{
    switch (evt)
    {
        case AssistantMessageDeltaEvent delta:
            // Streaming message chunk - print incrementally
            Console.Write(delta.Data.DeltaContent);
            break;
        case AssistantReasoningDeltaEvent reasoningDelta:
            // Streaming reasoning chunk (if the model supports reasoning)
            Console.Write(reasoningDelta.Data.DeltaContent);
            break;
        case AssistantMessageEvent msg:
            // Final message - complete content
            Console.WriteLine("\n--- Final message ---");
            Console.WriteLine(msg.Data.Content);
            break;
        case AssistantReasoningEvent reasoningEvt:
            // Final reasoning content (if the model supports reasoning)
            Console.WriteLine("--- Reasoning ---");
            Console.WriteLine(reasoningEvt.Data.Content);
            break;
        case SessionIdleEvent:
            // Session finished processing
            done.SetResult();
            break;
    }
});

await session.SendAsync(new MessageOptions { Prompt = "Tell me a short story" });
await done.Task; // Wait for streaming to complete
```

When `Streaming = true`:
- `AssistantMessageDeltaEvent` events are sent with `DeltaContent` containing incremental text
- `AssistantReasoningDeltaEvent` events are sent with `DeltaContent` for reasoning/chain-of-thought (model-dependent)
- Accumulate `DeltaContent` values to build the full response progressively
- The final `AssistantMessageEvent` and `AssistantReasoningEvent` events contain the complete content

Note: `AssistantMessageEvent` and `AssistantReasoningEvent` (final events) are always sent regardless of the streaming setting.
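The accumulation pattern described above can be sketched with a `StringBuilder`, appending each delta and then comparing against the final event (event and property names as documented in this section):

```csharp
using System.Text;

var buffer = new StringBuilder();
session.On(evt =>
{
    switch (evt)
    {
        case AssistantMessageDeltaEvent delta:
            // Append each incremental chunk as it arrives
            buffer.Append(delta.Data.DeltaContent);
            break;
        case AssistantMessageEvent msg:
            // The final event carries the complete content; the accumulated
            // buffer should match msg.Data.Content
            Console.WriteLine(buffer.ToString() == msg.Data.Content
                ? "Accumulated content matches final message."
                : "Mismatch - check for missed delta events.");
            break;
    }
});
```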
## Manual Lifecycle

```csharp
var client = new CopilotClient(new CopilotClientOptions { AutoStart = false });

// Start manually
await client.StartAsync();

// Use client...

// Stop manually
await client.StopAsync();
```

## Custom Tools

You can let the CLI call back into your process when the model needs capabilities you own. Use `AIFunctionFactory.Create` from `Microsoft.Extensions.AI` for type-safe tool definitions:
```csharp
using Microsoft.Extensions.AI;
using System.ComponentModel;

var session = await client.CreateSessionAsync(new SessionConfig
{
    Model = "gpt-5",
    Tools = [
        AIFunctionFactory.Create(
            async ([Description("Issue identifier")] string id) => {
                var issue = await FetchIssueAsync(id);
                return issue;
            },
            "lookup_issue",
            "Fetch issue details from our tracker"),
    ]
});
```

When Copilot invokes `lookup_issue`, the client automatically runs your handler and responds to the CLI. Handlers can return any JSON-serializable value (automatically wrapped), or a `ToolResultAIContent` wrapping a `ToolResultObject` for full control over result metadata.
## System Message

Control the system prompt using `SystemMessage` in the session config:

```csharp
var session = await client.CreateSessionAsync(new SessionConfig
{
    Model = "gpt-5",
    SystemMessage = new SystemMessageConfig
    {
        Mode = SystemMessageMode.Append,
        Content = @"
<workflow_rules>
- Always check for security vulnerabilities
- Suggest performance improvements when applicable
</workflow_rules>
"
    }
});
```

For full control (removes all guardrails), use `Mode = SystemMessageMode.Replace`:
```csharp
var session = await client.CreateSessionAsync(new SessionConfig
{
    Model = "gpt-5",
    SystemMessage = new SystemMessageConfig
    {
        Mode = SystemMessageMode.Replace,
        Content = "You are a helpful assistant."
    }
});
```

## Multiple Sessions

```csharp
var session1 = await client.CreateSessionAsync(new SessionConfig { Model = "gpt-5" });
var session2 = await client.CreateSessionAsync(new SessionConfig { Model = "claude-sonnet-4.5" });

// Both sessions are independent
await session1.SendAsync(new MessageOptions { Prompt = "Hello from session 1" });
await session2.SendAsync(new MessageOptions { Prompt = "Hello from session 2" });
```

## File Attachments

```csharp
await session.SendAsync(new MessageOptions
{
    Prompt = "Analyze this file",
    Attachments = new List<UserMessageDataAttachmentsItem>
    {
        new UserMessageDataAttachmentsItem
        {
            Type = UserMessageDataAttachmentsItemType.File,
            Path = "/path/to/file.cs",
            DisplayName = "My File"
        }
    }
});
```

## Custom Providers (BYOK)

Use a custom API provider:
```csharp
var session = await client.CreateSessionAsync(new SessionConfig
{
    Provider = new ProviderConfig
    {
        Type = "openai",
        BaseUrl = "https://api.openai.com/v1",
        ApiKey = "your-api-key"
    }
});
```

## Error Handling

```csharp
try
{
    var session = await client.CreateSessionAsync();
    await session.SendAsync(new MessageOptions { Prompt = "Hello" });
}
catch (StreamJsonRpc.RemoteInvocationException ex)
{
    Console.Error.WriteLine($"JSON-RPC Error: {ex.Message}");
}
catch (Exception ex)
{
    Console.Error.WriteLine($"Error: {ex.Message}");
}
```

## Requirements

- .NET 8.0 or later
- GitHub Copilot CLI installed and in PATH (or provide a custom `CliPath`)

## License

MIT