Commit 88853e4

chore(doc): auto generate docs
1 parent c98301c commit 88853e4

1 file changed

Lines changed: 48 additions & 4 deletions

File tree

doc/CopilotChat.txt

@@ -137,7 +137,7 @@ COMMANDS ~
 - `:CopilotChatLoad <name>?` - Load chat history from file
 - `:CopilotChatDebugInfo` - Show debug information
 - `:CopilotChatModels` - View and select available models. This is reset when a new instance is made. Please set your model in `init.lua` for persistence.
-- `:CopilotChatModel` - View the currently selected model.
+- `:CopilotChatAgents` - View and select available agents. This is reset when a new instance is made. Please set your agent in `init.lua` for persistence.
 
 
 COMMANDS COMING FROM DEFAULT PROMPTS
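The "set your model in `init.lua` for persistence" advice above (and, with this change, the same advice for agents) can be sketched as follows. This is an illustrative snippet, not the plugin's canonical setup; only the `model` and `agent` option names come from the config diff in this commit:

```lua
-- Persist the model and agent across instances by setting them in init.lua,
-- instead of re-selecting them each session via
-- :CopilotChatModels / :CopilotChatAgents.
require("CopilotChat").setup({
  model = "gpt-4o",  -- default model, see :CopilotChatModels
  agent = "copilot", -- default agent, see :CopilotChatAgents
})
```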
@@ -151,6 +151,49 @@ COMMANDS COMING FROM DEFAULT PROMPTS
 - `:CopilotChatCommit` - Write commit message for the change with commitizen convention
 
 
+MODELS, AGENTS AND CONTEXTS ~
+
+
+MODELS
+
+You can list the available models with the `:CopilotChatModels` command. The
+model determines which AI model is used for the chat. The default models are:
+
+- `gpt-4o` - This is the default Copilot Chat model. It is a versatile, multimodal model that excels in both text and image processing and is designed to provide fast, reliable responses. It also has superior performance in non-English languages. GPT-4o is hosted on Azure.
+- `claude-3.5-sonnet` - This model excels at coding tasks across the entire software development lifecycle, from initial design to bug fixes, and from maintenance to optimizations. GitHub Copilot uses Claude 3.5 Sonnet hosted on Amazon Web Services.
+- `o1-preview` - This model is focused on advanced reasoning and solving complex problems, particularly in math and science. It responds more slowly than the gpt-4o model. You can make 10 requests to this model per day. o1-preview is hosted on Azure.
+- `o1-mini` - This is the faster version of the o1-preview model, balancing the use of complex reasoning with the need for faster responses. It is best suited for code generation and small-context operations. You can make 50 requests to this model per day. o1-mini is hosted on Azure.
+
+For more information about models, see
+<https://docs.github.com/en/copilot/using-github-copilot/asking-github-copilot-questions-in-your-ide#ai-models-for-copilot-chat>
+You can use more models from <https://github.com/marketplace/models> by using
+the `@models` agent from <https://github.com/marketplace/models-github>
+(example: `@models Using Mistral-small, what is 1 + 11`).
+
+
+AGENTS
+
+Agents determine the AI agent used for the chat. You can list the available
+agents with the `:CopilotChatAgents` command and set the agent in the prompt
+by using `@` followed by the agent name. The default ("noop") agent is
+`copilot`.
+
+For more information about extension agents, see
+<https://docs.github.com/en/copilot/using-github-copilot/using-extensions-to-integrate-external-tools-with-copilot-chat>
+You can install more agents from
+<https://github.com/marketplace?type=apps&copilot_app=true>
+
+
+CONTEXTS
+
+Contexts determine what is included in the chat context. You can set the
+context in the prompt by using `#` followed by the context name. The supported
+contexts are:
+
+- `buffers` - Includes all open buffers in the chat context
+- `buffer` - Includes only the current buffer in the chat context
+
 API ~
 
 >lua
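The `@` (agent) and `#` (context) prefixes described in the new section are typed directly into the chat prompt. The `@models` line below is the documented example; the other prompts are illustrative and assume the `#buffer`/`#buffers` contexts and `@copilot` agent named in this commit:

```text
@models Using Mistral-small, what is 1 + 11
#buffer Explain what this file does
#buffers @copilot Find logic duplicated across my open buffers
```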
@@ -233,8 +276,10 @@ Also see here </lua/CopilotChat/config.lua>:
 allow_insecure = false, -- Allow insecure server connections
 
 system_prompt = prompts.COPILOT_INSTRUCTIONS, -- System prompt to use
-model = 'gpt-4o', -- GPT model to use, see ':CopilotChatModels' for available models
-temperature = 0.1, -- GPT temperature
+model = 'gpt-4o', -- Default model to use, see ':CopilotChatModels' for available models
+agent = 'copilot', -- Default agent to use, see ':CopilotChatAgents' for available agents (can be specified manually in prompt via @).
+context = nil, -- Default context to use, 'buffers', 'buffer' or none (can be specified manually in prompt via #).
+temperature = 0.1, -- GPT result temperature
 
 question_header = '## User ', -- Header to use for user questions
 answer_header = '## Copilot ', -- Header to use for AI answers
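The new `context` default interacts with the `#` prompt syntax: the config value applies to every prompt, while a `#buffer` or `#buffers` prefix sets the context for a single message. A sketch under that assumption (the option name and values come from the diff; the choice of `'buffers'` is just an example):

```lua
-- Include all open buffers by default; a prompt starting with
-- "#buffer ..." can still narrow a single message to the current buffer.
require("CopilotChat").setup({
  context = "buffers", -- 'buffers', 'buffer' or nil (no extra context)
})
```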
@@ -249,7 +294,6 @@ Also see here </lua/CopilotChat/config.lua>:
 clear_chat_on_new_prompt = false, -- Clears chat on every new prompt
 highlight_selection = true, -- Highlight selection in the source buffer when in the chat window
 
-context = nil, -- Default context to use, 'buffers', 'buffer' or none (can be specified manually in prompt via @).
 history_path = vim.fn.stdpath('data') .. '/copilotchat_history', -- Default path to stored history
 callback = nil, -- Callback to use when ask response is received
 