Add Claude + Gemini provider support, rename to LLMock #9

Merged
jpr5 merged 2 commits into main from feat/multi-provider-llmock on Mar 3, 2026

Conversation

Contributor

@jpr5 jpr5 commented Mar 3, 2026

Summary

  • Add Anthropic Claude Messages API support (POST /v1/messages) — streaming and non-streaming, tool use with input_json_delta, full message lifecycle events
  • Add Google Gemini GenerateContent API support (POST /v1beta/models/{model}:generateContent and :streamGenerateContent) — data-only SSE streaming, functionCall/functionResponse round-trips, FUNCTION_CALL finish reason
  • Rename MockOpenAI → LLMock — class, package (@copilotkit/llmock), CLI binary, all imports and tests. Clean break, no backward-compat alias.

Both new providers follow the established pattern from responses.ts: convert inbound request → ChatCompletionRequest → match fixtures → convert response → provider-specific output. Same fixtures work across all 4 endpoints (completions, responses, messages, gemini).
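The shared pattern can be sketched as follows. This is an illustrative reconstruction, not the PR's actual code: the interface shapes, `fromClaude`, and `matchFixture` names are assumptions; only `ChatCompletionRequest` and the convert → match → convert flow come from the description above.

```typescript
// Sketch of the shared adapter pattern: each provider handler converts its
// inbound request into a common ChatCompletionRequest, matches a fixture,
// then converts the fixture response back to provider-specific output.

interface ChatCompletionRequest {
  model: string;
  messages: { role: string; content: string }[];
}

interface Fixture {
  match: (req: ChatCompletionRequest) => boolean;
  response: string;
}

// Simplified Anthropic-style request body (illustrative)
interface ClaudeRequest {
  model: string;
  messages: { role: "user" | "assistant"; content: string }[];
}

// Inbound conversion: provider request -> common shape
function fromClaude(req: ClaudeRequest): ChatCompletionRequest {
  return { model: req.model, messages: req.messages };
}

// Fixture matching against the common shape
function matchFixture(fixtures: Fixture[], req: ChatCompletionRequest): string {
  const hit = fixtures.find((f) => f.match(req));
  return hit ? hit.response : "default response";
}

const fixtures: Fixture[] = [
  {
    match: (r) => r.messages.some((m) => m.content.includes("hello")),
    response: "Hi there!",
  },
];

const claudeReq: ClaudeRequest = {
  model: "claude-3",
  messages: [{ role: "user", content: "hello" }],
};

console.log(matchFixture(fixtures, fromClaude(claudeReq))); // "Hi there!"
```

Because matching happens on the common shape, the same fixture list serves every endpoint; only the inbound/outbound converters differ per provider.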

326 tests pass, lint clean, prettier clean.

Test plan

  • pnpm run test — 326 tests pass (12 test files)
  • pnpm run lint — clean
  • pnpm run format:check — clean
  • pnpm run build — clean
  • Cross-provider fixture sharing test (same fixture → all 4 endpoints return 200)
  • No MockOpenAI or mock-openai references remain in src/
  • CLI --help shows Usage: llmock

jpr5 added 2 commits March 3, 2026 11:32

Add handler modules for two new LLM provider APIs, both following the
established pattern from responses.ts: convert inbound request to
ChatCompletionRequest, match fixtures, convert response back to
provider-specific format.

Claude Messages API (/v1/messages):
- Streaming via event: type / data: json SSE format
- Non-streaming JSON responses
- Full message lifecycle: message_start through message_stop
- Tool use with input_json_delta streaming
- msg_ and toolu_ ID prefixes
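The event/data SSE lifecycle above can be illustrated with a small sketch. The `sseEvent` helper and the exact payload fields are assumptions for illustration; the event names and the `msg_` prefix follow the Anthropic Messages streaming format described in the commit message.

```typescript
// Illustrative sketch of the Anthropic Messages streaming wire format:
// each frame is an "event: <type>" line plus a "data: <json>" line,
// running from message_start through message_stop.

function sseEvent(type: string, data: object): string {
  return `event: ${type}\ndata: ${JSON.stringify(data)}\n\n`;
}

const messageId = "msg_" + "0".repeat(24); // msg_ ID prefix

const stream =
  sseEvent("message_start", {
    type: "message_start",
    message: { id: messageId, role: "assistant" },
  }) +
  sseEvent("content_block_start", {
    type: "content_block_start",
    index: 0,
    content_block: { type: "text", text: "" },
  }) +
  sseEvent("content_block_delta", {
    type: "content_block_delta",
    index: 0,
    delta: { type: "text_delta", text: "Hello" },
  }) +
  sseEvent("content_block_stop", { type: "content_block_stop", index: 0 }) +
  sseEvent("message_delta", {
    type: "message_delta",
    delta: { stop_reason: "end_turn" },
  }) +
  sseEvent("message_stop", { type: "message_stop" });

console.log(stream.startsWith("event: message_start")); // true
```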

Google Gemini GenerateContent API:
- /v1beta/models/{model}:generateContent (non-streaming)
- /v1beta/models/{model}:streamGenerateContent (streaming)
- data-only SSE format (no event prefix, no [DONE])
- functionCall/functionResponse round-trips with synthetic IDs
- FUNCTION_CALL finishReason for tool call responses
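Gemini's data-only framing contrasts with Claude's event/data pairs. A minimal sketch, with a hypothetical `geminiChunk` helper (the candidate payload shape is simplified):

```typescript
// Sketch of Gemini's data-only SSE: each chunk is a bare "data: <json>"
// line with no "event:" prefix and no trailing [DONE] sentinel.

function geminiChunk(text: string): string {
  const payload = {
    candidates: [{ content: { role: "model", parts: [{ text }] } }],
  };
  return `data: ${JSON.stringify(payload)}\n\n`;
}

const chunks = ["Hel", "lo"].map(geminiChunk).join("");

console.log(chunks.includes("event:")); // false — data-only SSE
console.log(chunks.includes("[DONE]")); // false — no terminator sentinel
```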

Also adds generateMessageId() and generateToolUseId() helpers,
server routes for both providers, and comprehensive tests.
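The ID helpers might look like the following. This is a hypothetical sketch: only the function names and the `msg_`/`toolu_` prefixes come from the commit message; the random-suffix scheme is an assumption.

```typescript
// Illustrative ID helpers: Anthropic-style IDs use msg_ / toolu_ prefixes
// followed by an opaque suffix. The suffix scheme here is made up.
import { randomBytes } from "node:crypto";

function randomSuffix(len = 24): string {
  // url-safe base64 suffix trimmed to a fixed length (illustrative)
  return randomBytes(len).toString("base64url").slice(0, len);
}

const generateMessageId = (): string => "msg_" + randomSuffix();
const generateToolUseId = (): string => "toolu_" + randomSuffix();

console.log(generateMessageId().startsWith("msg_")); // true
console.log(generateToolUseId().startsWith("toolu_")); // true
```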

Rename the project from @copilotkit/mock-openai to @copilotkit/llmock
to reflect multi-provider scope (OpenAI, Anthropic, Google Gemini).

- Class: MockOpenAI → LLMock
- Files: mock-openai.ts → llmock.ts, mock-openai.test.ts → llmock.test.ts
- Package: @copilotkit/mock-openai → @copilotkit/llmock
- CLI: "Usage: mock-openai" → "Usage: llmock"
- Binary: mock-openai → llmock
- All imports, tests, and docs updated
- Clean break — no backward-compat alias

pkg-pr-new bot commented Mar 3, 2026

Open in StackBlitz

npm i https://pkg.pr.new/CopilotKit/mock-openai/@copilotkit/llmock@9

commit: 7c78a44

@jpr5 jpr5 merged commit 81ca5c9 into main Mar 3, 2026
8 checks passed
@jpr5 jpr5 mentioned this pull request Mar 19, 2026
4 tasks
jpr5 added a commit that referenced this pull request Apr 3, 2026
Add Claude + Gemini provider support, rename to LLMock
