- Purpose: Persist user memories locally, score them for relevance, and deliver context back to LLM agents via CLI or MCP.
- Runtime: TypeScript (ESM, async/await) executed with `tsx` in dev or Node on the built JS.
- Key Entry Points: CLI REPL (src/cli.ts) and MCP stdio server (src/mcp-server.ts).
- Single Source of Truth: All storage/search/compression logic lives in src/memoryStore.ts.
```mermaid
flowchart TD
    subgraph Runtime Modes
        CLI[CLI REPL<br/>npm run dev]
        MCP[MCP Server<br/>npm run mcp]
    end
    CLI -->|commands| MemoryStore
    MCP -->|tools & prompts| MemoryStore
    subgraph Core Engine
        MemoryStore[memoryStore.ts<br/>load/search/compress]
        DeepSeek[deepseek.ts<br/>optional LLM compress]
    end
    MemoryStore -->|atomic read/write| File[(.copilot-memory.json)]
    DeepSeek -->|fetch| API[(DeepSeek API)]
```
- src/memoryStore.ts handles:
- File locking (lock file per store) to prevent concurrent writes.
- Keyword extraction + relevance scoring used by both CLI and MCP.
- Deterministic compression that emits Markdown within a character budget.
- src/deepseek.ts provides the optional LLM compression step; it only runs when `--llm` is requested and `DEEPSEEK_API_KEY` is present.
- src/cli.ts wraps the core APIs in REPL commands; it reloads the store before every command to avoid stale reads.
- src/mcp-server.ts exposes tools/resources/prompts for GitHub Copilot Agent mode using `@modelcontextprotocol/sdk`.
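The keyword extraction and relevance scoring that src/memoryStore.ts shares between CLI and MCP might look roughly like this. This is an illustrative sketch: the function names match the docs, but the stopword list, weights, and recency window are assumptions, not the actual implementation.

```typescript
// Hypothetical sketch of the shared keyword/scoring pipeline.
// Weights and the 30-day recency window are illustrative assumptions.

const STOPWORDS = new Set(["the", "a", "an", "and", "or", "to", "of", "in"]);

/** Lowercase, split on non-word characters, drop stopwords and short tokens. */
export function extractKeywords(text: string): string[] {
  return [...new Set(
    text.toLowerCase().split(/\W+/).filter(t => t.length > 2 && !STOPWORDS.has(t))
  )];
}

/** Token-based relevance: text hits + tag bonus + keyword bonus + recency. */
export function scoreRecord(
  query: string,
  record: { text: string; tags: string[]; keywords: string[]; updatedAt: string }
): number {
  const terms = extractKeywords(query);
  const lowerText = record.text.toLowerCase();
  let score = 0;
  for (const term of terms) {
    if (lowerText.includes(term)) score += 2; // direct text hit
    if (record.tags.includes(term)) score += 3; // tag bonus
    if (record.keywords.includes(term)) score += 1; // keyword bonus
  }
  // Mild recency boost: records updated in the last ~30 days score higher.
  const ageDays = (Date.now() - Date.parse(record.updatedAt)) / 86_400_000;
  return score + Math.max(0, 1 - ageDays / 30);
}
```

Keeping both halves in memoryStore.ts is what lets the CLI and the MCP server return identical rankings for the same query.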
```mermaid
sequenceDiagram
    participant User
    participant CLI as CLI Command
    participant Store as memoryStore.ts
    participant File as .copilot-memory.json
    User->>CLI: add --tags pref "Prefer TypeScript"
    CLI->>Store: addMemory(text, tags)
    Store->>Store: acquire lock / extract keywords
    Store->>File: atomic write
    Store->>CLI: record id
    CLI->>User: ✅ Added m_...
    User->>CLI: search typescript --limit 5
    CLI->>Store: loadStore() / search()
    Store->>File: read JSON
    Store->>CLI: hits[]
    CLI->>User: Markdown results
```
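The "acquire lock / atomic write" step in the sequence above typically combines an exclusive lock file with a write-temp-then-rename pattern. A minimal sketch of that idea, assuming a per-store `.lock` sibling file (the real helpers live in src/memoryStore.ts and may differ):

```typescript
// Sketch of the lock + atomic-write pattern; names are illustrative.
import { writeFileSync, renameSync, openSync, closeSync, unlinkSync } from "node:fs";

/** Hold an exclusive lock file for the duration of fn(). */
function withLock<T>(lockPath: string, fn: () => T): T {
  // The "wx" flag fails if the file already exists, so only one writer proceeds.
  const fd = openSync(lockPath, "wx");
  try {
    return fn();
  } finally {
    closeSync(fd);
    unlinkSync(lockPath);
  }
}

/** Write JSON to a temp file, then rename over the target in one step. */
export function atomicWrite(path: string, data: unknown): void {
  withLock(`${path}.lock`, () => {
    const tmp = `${path}.tmp`;
    writeFileSync(tmp, JSON.stringify(data, null, 2));
    renameSync(tmp, path); // rename is atomic on the same filesystem
  });
}
```

Readers never observe a half-written store file: they see either the old JSON or the new one.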
- `npm run mcp` uses `tsx` to run src/mcp-server.ts.
- Server registers tools (`memory_write`, `memory_search`, etc.), resources (`memory://stats`, `memory://recent`), and prompts.
- Each tool:
  - Validates input with `zod` schemas.
  - Reads or mutates the store via functions in src/memoryStore.ts.
  - Returns structured text payloads; all logging goes to stderr via `log()`.
- Compression tool optionally calls src/deepseek.ts when `llm=true`.
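Every tool handler follows the same contract: validate, call into the store, return a structured text payload, and log only to stderr. A dependency-free sketch of that shape for a `memory_search`-style tool; the real server wires these up through `@modelcontextprotocol/sdk` and validates with `zod` rather than the hand-rolled check below:

```typescript
// Illustrative tool-handler shape; not the actual registration code.
type ToolResult = { content: Array<{ type: "text"; text: string }> };

/** Stdout carries the MCP stdio protocol, so all logging goes to stderr. */
function log(msg: string): void {
  process.stderr.write(`[mcp] ${msg}\n`);
}

/** Hand-rolled stand-in for the zod schema on a search tool. */
export function parseSearchArgs(args: unknown): { query: string; limit: number } {
  const a = args as Record<string, unknown>;
  if (typeof a?.query !== "string" || a.query.length === 0) {
    throw new Error("query must be a non-empty string");
  }
  const limit = typeof a.limit === "number" ? a.limit : 5;
  return { query: a.query, limit };
}

export async function memorySearchTool(args: unknown): Promise<ToolResult> {
  const { query, limit } = parseSearchArgs(args);
  log(`memory_search: "${query}" (limit ${limit})`);
  // In the real server this delegates to search() in src/memoryStore.ts.
  const hits: string[] = []; // placeholder for search(query, limit) results
  return { content: [{ type: "text", text: hits.join("\n") || "No matches." }] };
}
```

Validating before touching the store means a malformed tool call from the agent fails fast with a clear error instead of corrupting state.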
Data Model @ src/memoryStore.ts

- `MemoryRecord`: `{ id, text, tags, keywords, createdAt, updatedAt, deletedAt }`.
- `loadStore()` resolves the `MEMORY_PATH` env override and reads JSON (default `.copilot-memory.json`).
- `addMemory()` normalizes tags, extracts keywords, writes atomically, and returns the record.
- `search()` filters out tombstoned records and applies token-based scoring (text hits + tag bonus + keyword bonus + recency).
- `compressDeterministic()` formats hits into Markdown and truncates within the caller's `budget`.
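The record shape and the two behaviors above that are easiest to get wrong (path resolution and tombstone filtering) can be sketched in types. The `MemoryStore` wrapper shape is an assumption; the field semantics follow the docs:

```typescript
// Reconstructed from the docs; the wrapper type is an assumed shape.
export interface MemoryRecord {
  id: string;
  text: string;
  tags: string[];
  keywords: string[];
  createdAt: string;  // ISO timestamp
  updatedAt: string;
  deletedAt?: string; // set instead of hard-deleting: a "tombstone"
}

export interface MemoryStore {
  records: MemoryRecord[];
}

/** loadStore() is documented to prefer the MEMORY_PATH env override. */
export function resolveStorePath(): string {
  return process.env.MEMORY_PATH ?? ".copilot-memory.json";
}

/** search() skips tombstoned records before any scoring happens. */
export function liveRecords(store: MemoryStore): MemoryRecord[] {
  return store.records.filter(r => !r.deletedAt);
}
```

Tombstoning (setting `deletedAt` rather than removing the record) keeps deletions reversible and makes the atomic-write history easier to reason about.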
- Install: `npm install` (installs `@modelcontextprotocol/sdk`, `tsx`, TypeScript).
- CLI REPL: `npm run dev` → watch output in the Git Bash integrated terminal.
- Build: `npm run build` (emits ESM to `dist/`).
- MCP Server: `npm run mcp` (development) or `npm run mcp:dist` after building.
- Inspector: `npm run inspect` (against compiled JS) or `npm run inspect:dev` (live TypeScript) to explore tools/resources visually.
- VS Code Debugging: choose a launch preset (CLI or MCP) in .vscode/launch.json.
- Copy `.env.example` to `.env` before running; key knobs:
  - `MEMORY_PATH` to point at an alternate JSON file (e.g., `project-memory.json` used in samples).
  - `MEMORY_LOCK_PATH` if you need lock files elsewhere.
  - `DEEPSEEK_*` settings to enable the LLM compression path.
- `.copilot-memory.json` is git-ignored; each learner gets their own memory store.
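An illustrative `.env` using the knobs above; the variable names come from the docs, but every value here is an example, and other `DEEPSEEK_*` settings may exist beyond the API key:

```ini
# Example .env — values are placeholders, not defaults
MEMORY_PATH=./project-memory.json
MEMORY_LOCK_PATH=/tmp/copilot-memory.lock
DEEPSEEK_API_KEY=sk-...
```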
- Emphasize that all behaviors (CLI, MCP, future UIs) should call into src/memoryStore.ts rather than reimplementing file I/O.
- Show how adding tags improves `search()` scoring, then demonstrate `compressDeterministic()` to illustrate context budgeting.
- Have students compare deterministic vs. DeepSeek compression by toggling `--llm`.
- Encourage using MCP prompts (e.g., `summarize-memories`) to automate context workflows from Copilot Chat.
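For the context-budgeting demo, it helps to show students what "truncates within the caller's budget" means concretely. A hypothetical sketch of a deterministic compressor; the real `compressDeterministic()` in src/memoryStore.ts may format its Markdown differently:

```typescript
// Illustrative budget-bounded compression; output format is an assumption.
export function compressDeterministic(
  hits: Array<{ text: string; tags: string[] }>,
  budget: number
): string {
  const lines: string[] = [];
  let used = 0;
  for (const hit of hits) {
    const tagSuffix = hit.tags.length ? ` _(${hit.tags.join(", ")})_` : "";
    const line = `- ${hit.text}${tagSuffix}`;
    if (used + line.length + 1 > budget) break; // stop before exceeding the budget
    lines.push(line);
    used += line.length + 1; // +1 for the joining newline
  }
  return lines.join("\n");
}
```

Because the cut-off is deterministic, the same hits and budget always yield the same context block, which makes it a good baseline to compare against the `--llm` DeepSeek path.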