Prompts

Prompts allow servers to provide structured messages and instructions for interacting with language models.

A prompt is a reusable template that can be retrieved and used by MCP clients.

A prompt is essentially a template for a message, or a series of messages, that can be sent to a Large Language Model (LLM). The Toolbox server implements the prompts/list and prompts/get methods from the Model Context Protocol (MCP) specification, allowing clients to discover and retrieve these prompts. For example, the following tools.yaml snippet defines a single code_review prompt:

prompts:
  code_review:
    description: "Asks the LLM to analyze code quality and suggest improvements."
    messages:
      - content: "Please review the following code for quality, correctness, and potential improvements: \n\n{{.code}}"
    arguments:
      - name: "code"
        description: "The code to review"

Prompt Schema

| field | type | required | description |
|-------|------|----------|-------------|
| description | string | No | A brief explanation of what the prompt does. |
| kind | string | No | The kind of prompt. Defaults to "custom". |
| messages | []Message | Yes | A list of one or more message objects that make up the prompt’s content. |
| arguments | []Argument | No | A list of arguments that can be interpolated into the prompt’s content. |
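
Putting the schema together, a prompt that sets every top-level field explicitly might look like the following sketch (the release_notes prompt and its wording are illustrative, not a built-in example):

prompts:
  release_notes:
    description: "Drafts release notes from a changelog excerpt."
    kind: custom        # optional; "custom" is already the default
    messages:
      - role: user      # optional; "user" is already the default
        content: "Summarize this changelog into user-facing release notes:\n\n{{.changelog}}"
    arguments:
      - name: "changelog"
        description: "The changelog excerpt to summarize"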

Message Schema

| field | type | required | description |
|-------|------|----------|-------------|
| role | string | No | The role of the sender. Can be "user" or "assistant". Defaults to "user". |
| content | string | Yes | The text of the message. You can include placeholders for arguments using {{.argument_name}} syntax. |
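
Because role defaults to "user", it only needs to be set when you mix speakers, for example to seed a few-shot exchange. A hypothetical translate_fr prompt as a sketch:

prompts:
  translate_fr:
    description: "Translates text to French."
    messages:
      - content: "Translate to French: Hello"   # role omitted, defaults to "user"
      - role: assistant
        content: "Bonjour"
      - role: user
        content: "Translate to French: {{.text}}"
    arguments:
      - name: "text"
        description: "The text to translate"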

Argument Schema

An argument can be any Parameter type. If the type field is not specified, it will default to string.
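
For example, assuming the same type names used by Parameter definitions elsewhere in tools.yaml (such as string and integer), a mixed argument list might look like this (max_findings is a hypothetical argument added for illustration):

arguments:
  - name: "code"
    description: "The code to review"   # type omitted, defaults to string
  - name: "max_findings"
    type: integer
    description: "Maximum number of issues to report"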

Usage with Gemini CLI

Prompts defined in your tools.yaml can be seamlessly integrated with the Gemini CLI to create custom slash commands. The workflow is as follows:

  1. Discovery: When the Gemini CLI connects to your Toolbox server, it automatically calls prompts/list to discover all available prompts.

  2. Conversion: Each discovered prompt is converted into a corresponding slash command. For example, a prompt named code_review becomes the /code_review command in the CLI.

  3. Execution: You can execute the command as follows:

    /code_review --code="def hello():\n    print('world')"
    
  4. Interpolation: Once all arguments are collected, the CLI calls prompts/get with your provided values to retrieve the final, interpolated prompt (see the protocol sketch after this list). For example:

    Please review the following code for quality, correctness, and potential improvements: \n\ndef hello():\n    print('world')
    
  5. Response: This completed prompt is then sent to the Gemini model, and the model’s response is displayed back to you in the CLI.
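
Steps 1 and 4 correspond directly to the MCP methods described above. As a sketch of the prompts/get exchange behind the /code_review invocation (field names follow the MCP specification; the exact payload may vary by Toolbox version), the request:

{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "prompts/get",
  "params": {
    "name": "code_review",
    "arguments": {
      "code": "def hello():\n    print('world')"
    }
  }
}

would yield a response like:

{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "description": "Asks the LLM to analyze code quality and suggest improvements.",
    "messages": [
      {
        "role": "user",
        "content": {
          "type": "text",
          "text": "Please review the following code for quality, correctness, and potential improvements: \n\ndef hello():\n    print('world')"
        }
      }
    ]
  }
}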

Kinds of Prompts

Custom

Prompts defined by the user, such as the examples above. This is the default kind.
