# Custom
Custom prompts defined by the user.
A prompt represents a reusable prompt template that can be retrieved and used
by MCP clients.
A Prompt is essentially a template for a message or a series of messages that can be sent to a Large Language Model (LLM). The Toolbox server implements the `prompts/list` and `prompts/get` methods from the Model Context Protocol (MCP) specification, allowing clients to discover and retrieve these prompts.
## Example

```yaml
prompts:
  code_review:
    description: "Asks the LLM to analyze code quality and suggest improvements."
    messages:
      - content: "Please review the following code for quality, correctness, and potential improvements: \n\n{{.code}}"
    arguments:
      - name: "code"
        description: "The code to review"
```
## Reference

### Prompt

| field | type | required | description |
|---|---|---|---|
| description | string | No | A brief explanation of what the prompt does. |
| kind | string | No | The kind of prompt. Defaults to "custom". |
| messages | []Message | Yes | A list of one or more message objects that make up the prompt’s content. |
| arguments | []Argument | No | A list of arguments that can be interpolated into the prompt’s content. |
### Message

| field | type | required | description |
|---|---|---|---|
| role | string | No | The role of the sender. Can be "user" or "assistant". Defaults to "user". |
| content | string | Yes | The text of the message. You can include placeholders for arguments using `{{.argument_name}}` syntax. |
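Because `messages` is a list and `role` defaults to "user", a prompt can stage a short multi-turn exchange. The following sketch uses a hypothetical `sql_helper` prompt built only from the fields documented above:

```yaml
prompts:
  sql_helper:        # hypothetical prompt name
    kind: custom     # optional; "custom" is the default
    description: "Demonstrates multiple messages with explicit roles."
    messages:
      - role: user
        content: "Write a SQL query that lists all tables."
      - role: assistant
        content: "SELECT table_name FROM information_schema.tables;"
      - role: user
        content: "Now adapt it for the {{.dialect}} dialect."
    arguments:
      - name: "dialect"
        description: "The SQL dialect to target."
```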
### Argument

An argument can be any Parameter type. If the `type` field is not specified, it defaults to `string`.
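As an illustration, the sketch below uses a hypothetical `table_summary` prompt to mix an untyped (string) argument with an explicitly typed one; it assumes `integer` is among the supported Parameter types:

```yaml
prompts:
  table_summary:      # hypothetical prompt name
    description: "Summarizes the first few rows of a table."
    messages:
      - content: "Summarize the first {{.limit}} rows of the {{.table}} table."
    arguments:
      - name: "table"
        description: "The table to summarize."   # no type given, defaults to string
      - name: "limit"
        type: integer                            # assumes integer is a supported Parameter type
        description: "How many rows to include."
```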
## Usage with Gemini CLI

Prompts defined in your `tools.yaml` can be seamlessly integrated with the Gemini CLI to create custom slash commands. The workflow is as follows:

1. **Discovery:** When the Gemini CLI connects to your Toolbox server (see the configuration sketch after this list), it automatically calls `prompts/list` to discover all available prompts.
2. **Conversion:** Each discovered prompt is converted into a corresponding slash command. For example, a prompt named `code_review` becomes the `/code_review` command in the CLI.
3. **Execution:** You can execute the command as follows:

   ```
   /code_review --code="def hello():\n print('world')"
   ```

4. **Interpolation:** Once all arguments are collected, the CLI calls `prompts/get` with your provided values to retrieve the final, interpolated prompt, e.g.:

   ```
   Please review the following code for quality, correctness, and potential improvements: \n\ndef hello():\n print('world')
   ```

5. **Response:** This completed prompt is then sent to the Gemini model, and the model's response is displayed back to you in the CLI.
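As referenced in the Discovery step, the Gemini CLI needs an MCP server entry pointing at Toolbox before any of this happens. Below is a minimal sketch for the CLI's `settings.json`, assuming Toolbox is serving MCP over HTTP at its default local address and `/mcp` endpoint; verify both against your deployment:

```json
{
  "mcpServers": {
    "toolbox": {
      "httpUrl": "http://127.0.0.1:5000/mcp"
    }
  }
}
```

With this entry in place, restarting the CLI should trigger the `prompts/list` discovery described above.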