# Getting Started
How to get started with Toolbox.
# Introduction
An introduction to MCP Toolbox for Databases.
MCP Toolbox for Databases is an open source MCP server for databases. It enables
you to develop tools more easily, quickly, and securely by handling complexities
such as connection pooling, authentication, and more.
{{< notice note >}}
This solution was originally named “Gen AI Toolbox for
Databases” as its initial development predated MCP, but was renamed to align
with recently added MCP compatibility.
{{< /notice >}}
## Why Toolbox?
Toolbox helps you build Gen AI tools that let your agents access data in your
database. Toolbox provides:
- **Simplified development**: Integrate tools into your agent in less than 10
lines of code, reuse tools between multiple agents or frameworks, and deploy
new versions of tools more easily.
- **Better performance**: Best practices such as connection pooling,
authentication, and more.
- **Enhanced security**: Integrated auth for more secure access to your data.
- **End-to-end observability**: Out of the box metrics and tracing with built-in
support for OpenTelemetry.
**⚡ Supercharge Your Workflow with an AI Database Assistant ⚡**
Stop context-switching and let your AI assistant become a true co-developer. By
[connecting your IDE to your databases with MCP Toolbox][connect-ide], you can
delegate complex and time-consuming database tasks, allowing you to build faster
and focus on what matters. This isn't just about code completion; it's about
giving your AI the context it needs to handle the entire development lifecycle.
Here’s how it will save you time:
- **Query in Plain English**: Interact with your data using natural language
right from your IDE. Ask complex questions like, *"How many orders were
delivered in 2024, and what items were in them?"* without writing any SQL.
- **Automate Database Management**: Simply describe your data needs, and let the
AI assistant manage your database for you. It can handle generating queries,
creating tables, adding indexes, and more.
- **Generate Context-Aware Code**: Empower your AI assistant to generate
application code and tests with a deep understanding of your real-time
database schema. This accelerates the development cycle by ensuring the
generated code is directly usable.
- **Slash Development Overhead**: Radically reduce the time spent on manual
setup and boilerplate. MCP Toolbox helps streamline lengthy database
configurations, repetitive code, and error-prone schema migrations.
Learn [how to connect your AI tools (IDEs) to Toolbox using MCP][connect-ide].
[connect-ide]: ../../how-to/connect-ide/
## General Architecture
Toolbox sits between your application's orchestration framework and your
database, providing a control plane that is used to modify, distribute, or
invoke tools. It simplifies the management of your tools by providing you with a
centralized location to store and update tools, allowing you to share tools
between agents and applications and update those tools without necessarily
redeploying your application.
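For example, once the server is running, an application can fetch a centrally
managed tool and invoke it without knowing anything about the underlying
database connection. A minimal sketch using the Python Core SDK (introduced
below); the tool name `search-hotels-by-name` is illustrative:
```python
import asyncio

from toolbox_core import ToolboxClient

async def main():
    # connect to the Toolbox server, which acts as the control plane
    async with ToolboxClient("http://127.0.0.1:5000") as client:
        # fetch a tool that is defined and versioned centrally in tools.yaml
        tool = await client.load_tool("search-hotels-by-name")
        # invoke it; the server executes the statement against the source
        print(await tool(name="Hilton"))

asyncio.run(main())
```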

## Getting Started
### Installing the server
For the latest version, check the [releases page][releases] and use the
following instructions for your OS and CPU architecture.
[releases]: https://github.com/googleapis/genai-toolbox/releases
{{< tabpane text=true >}}
{{% tab header="Binary" lang="en" %}}
{{< tabpane text=true >}}
{{% tab header="Linux (AMD64)" lang="en" %}}
To install Toolbox as a binary on Linux (AMD64):
```sh
# see releases page for other versions
export VERSION=0.21.0
curl -L -o toolbox https://storage.googleapis.com/genai-toolbox/v$VERSION/linux/amd64/toolbox
chmod +x toolbox
```
{{% /tab %}}
{{% tab header="macOS (Apple Silicon)" lang="en" %}}
To install Toolbox as a binary on macOS (Apple Silicon):
```sh
# see releases page for other versions
export VERSION=0.21.0
curl -L -o toolbox https://storage.googleapis.com/genai-toolbox/v$VERSION/darwin/arm64/toolbox
chmod +x toolbox
```
{{% /tab %}}
{{% tab header="macOS (Intel)" lang="en" %}}
To install Toolbox as a binary on macOS (Intel):
```sh
# see releases page for other versions
export VERSION=0.21.0
curl -L -o toolbox https://storage.googleapis.com/genai-toolbox/v$VERSION/darwin/amd64/toolbox
chmod +x toolbox
```
{{% /tab %}}
{{% tab header="Windows (AMD64)" lang="en" %}}
To install Toolbox as a binary on Windows (AMD64):
```powershell
# see releases page for other versions
$VERSION = "0.21.0"
curl.exe -o toolbox.exe "https://storage.googleapis.com/genai-toolbox/v$VERSION/windows/amd64/toolbox.exe"
```
{{% /tab %}}
{{< /tabpane >}}
{{% /tab %}}
{{% tab header="Container image" lang="en" %}}
You can also install Toolbox as a container:
```sh
# see releases page for other versions
export VERSION=0.21.0
docker pull us-central1-docker.pkg.dev/database-toolbox/toolbox/toolbox:$VERSION
```
{{% /tab %}}
{{% tab header="Homebrew" lang="en" %}}
To install Toolbox using Homebrew on macOS or Linux:
```sh
brew install mcp-toolbox
```
{{% /tab %}}
{{% tab header="Compile from source" lang="en" %}}
To install from source, ensure you have the latest version of
[Go installed](https://go.dev/doc/install), and then run the following command:
```sh
go install github.com/googleapis/genai-toolbox@v0.21.0
```
{{% /tab %}}
{{< /tabpane >}}
### Running the server
[Configure](../configure.md) a `tools.yaml` to define your tools, and then
execute `toolbox` to start the server:
```sh
./toolbox --tools-file "tools.yaml"
```
{{< notice note >}}
Toolbox enables dynamic reloading by default. To disable, use the
`--disable-reload` flag.
{{< /notice >}}
#### Launching Toolbox UI
To launch Toolbox's interactive UI, use the `--ui` flag. This allows you to test
tools and toolsets with features such as authorized parameters. To learn more,
visit [Toolbox UI](../../how-to/toolbox-ui/index.md).
```sh
./toolbox --ui
```
#### Homebrew Users
If you installed Toolbox using Homebrew, the `toolbox` binary is available in
your system path. You can start the server with the same command:
```sh
toolbox --tools-file "tools.yaml"
```
You can use `toolbox help` for a full list of flags! To stop the server, send a
terminate signal (`ctrl+c` on most platforms).
For more detailed documentation on deploying to different environments, check
out the resources in the [How-to section](../../how-to/)
### Integrating your application
Once your server is up and running, you can load the tools into your
application. See the list of client SDKs for various frameworks below:
#### Python
{{< tabpane text=true persist=header >}}
{{% tab header="Core" lang="en" %}}
Once you've installed the [Toolbox Core
SDK](https://pypi.org/project/toolbox-core/), you can load
tools:
{{< highlight python >}}
from toolbox_core import ToolboxClient
# update the url to point to your server
async with ToolboxClient("http://127.0.0.1:5000") as client:
    # these tools can be passed to your application!
    tools = await client.load_toolset("toolset_name")
{{< /highlight >}}
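Loaded tools are async callables that accept the parameters declared in your
`tools.yaml`. A short sketch, assuming a tool named `search-hotels-by-name`
with a `name` parameter:
{{< highlight python >}}
# inside the same `async with` block: load a single tool and invoke it
tool = await client.load_tool("search-hotels-by-name")
results = await tool(name="Hilton")
{{< /highlight >}}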
For more detailed instructions on using the Toolbox Core SDK, see the
[project's
README](https://github.com/googleapis/mcp-toolbox-sdk-python/blob/main/packages/toolbox-core/README.md).
{{% /tab %}}
{{% tab header="LangChain" lang="en" %}}
Once you've installed the [Toolbox LangChain
SDK](https://pypi.org/project/toolbox-langchain/), you can load
tools:
{{< highlight python >}}
from toolbox_langchain import ToolboxClient
# update the url to point to your server
async with ToolboxClient("http://127.0.0.1:5000") as client:
    # these tools can be passed to your application!
    tools = client.load_toolset()
{{< /highlight >}}
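The loaded tools can be handed straight to a LangGraph agent. A sketch,
assuming a Vertex AI chat model (any LangChain chat model works):
{{< highlight python >}}
from langchain_google_vertexai import ChatVertexAI
from langgraph.prebuilt import create_react_agent

model = ChatVertexAI(model_name="gemini-2.0-flash")
# hand the Toolbox tools to a prebuilt ReAct agent
agent = create_react_agent(model, tools)
{{< /highlight >}}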
For more detailed instructions on using the Toolbox LangChain SDK, see the
[project's
README](https://github.com/googleapis/mcp-toolbox-sdk-python/blob/main/packages/toolbox-langchain/README.md).
{{% /tab %}}
{{% tab header="Llamaindex" lang="en" %}}
Once you've installed the [Toolbox Llamaindex
SDK](https://github.com/googleapis/genai-toolbox-llamaindex-python), you can load
tools:
{{< highlight python >}}
from toolbox_llamaindex import ToolboxClient
# update the url to point to your server
async with ToolboxClient("http://127.0.0.1:5000") as client:
    # these tools can be passed to your application
    tools = client.load_toolset()
{{< /highlight >}}
For more detailed instructions on using the Toolbox LlamaIndex SDK, see the
[project's
README](https://github.com/googleapis/genai-toolbox-llamaindex-python/blob/main/README.md).
{{% /tab %}}
{{< /tabpane >}}
#### JavaScript/TypeScript
Once you've installed the [Toolbox Core
SDK](https://www.npmjs.com/package/@toolbox-sdk/core), you can load
tools:
{{< tabpane text=true persist=header >}}
{{% tab header="Core" lang="en" %}}
{{< highlight javascript >}}
import { ToolboxClient } from '@toolbox-sdk/core';
// update the url to point to your server
const URL = 'http://127.0.0.1:5000';
let client = new ToolboxClient(URL);
// these tools can be passed to your application!
const toolboxTools = await client.loadToolset('toolsetName');
{{< /highlight >}}
{{% /tab %}}
{{% tab header="LangChain/Langraph" lang="en" %}}
{{< highlight javascript >}}
import { ToolboxClient } from '@toolbox-sdk/core';
import { tool } from '@langchain/core/tools';
// update the url to point to your server
const URL = 'http://127.0.0.1:5000';
let client = new ToolboxClient(URL);
// these tools can be passed to your application!
const toolboxTools = await client.loadToolset('toolsetName');
// Define the basics of the tool: name, description, schema and core logic
const getTool = (toolboxTool) => tool(toolboxTool, {
    name: toolboxTool.getName(),
    description: toolboxTool.getDescription(),
    schema: toolboxTool.getParamSchema()
});
// Use these tools in your LangChain/LangGraph applications
const tools = toolboxTools.map(getTool);
{{< /highlight >}}
{{% /tab %}}
{{% tab header="Genkit" lang="en" %}}
{{< highlight javascript >}}
import { ToolboxClient } from '@toolbox-sdk/core';
import { genkit } from 'genkit';
import { googleAI } from '@genkit-ai/googleai';
// Initialise genkit
const ai = genkit({
    plugins: [
        googleAI({
            apiKey: process.env.GEMINI_API_KEY || process.env.GOOGLE_API_KEY
        })
    ],
    model: googleAI.model('gemini-2.0-flash'),
});
// update the url to point to your server
const URL = 'http://127.0.0.1:5000';
let client = new ToolboxClient(URL);
// these tools can be passed to your application!
const toolboxTools = await client.loadToolset('toolsetName');
// Define the basics of the tool: name, description, schema and core logic
const getTool = (toolboxTool) => ai.defineTool({
    name: toolboxTool.getName(),
    description: toolboxTool.getDescription(),
    inputSchema: toolboxTool.getParamSchema()
}, toolboxTool);
// Use these tools in your Genkit applications
const tools = toolboxTools.map(getTool);
{{< /highlight >}}
{{% /tab %}}
{{% tab header="LlamaIndex" lang="en" %}}
{{< highlight javascript >}}
import { ToolboxClient } from '@toolbox-sdk/core';
import { tool } from "llamaindex";
// update the url to point to your server
const URL = 'http://127.0.0.1:5000';
let client = new ToolboxClient(URL);
// these tools can be passed to your application!
const toolboxTools = await client.loadToolset('toolsetName');
// Define the basics of the tool: name, description, schema and core logic
const getTool = (toolboxTool) => tool({
    name: toolboxTool.getName(),
    description: toolboxTool.getDescription(),
    parameters: toolboxTool.getParamSchema(),
    execute: toolboxTool
});
// Use these tools in your LlamaIndex applications
const tools = toolboxTools.map(getTool);
{{< /highlight >}}
{{% /tab %}}
{{< /tabpane >}}
For more detailed instructions on using the Toolbox Core SDK, see the
[project's
README](https://github.com/googleapis/mcp-toolbox-sdk-js/blob/main/packages/toolbox-core/README.md).
#### Go
Once you've installed the [Toolbox Go
SDK](https://pkg.go.dev/github.com/googleapis/mcp-toolbox-sdk-go/core), you can load
tools:
{{< tabpane text=true persist=header >}}
{{% tab header="Core" lang="en" %}}
{{< highlight go >}}
package main

import (
  "context"
  "log"

  "github.com/googleapis/mcp-toolbox-sdk-go/core"
)

func main() {
  // update the url to point to your server
  URL := "http://127.0.0.1:5000"
  ctx := context.Background()

  client, err := core.NewToolboxClient(URL)
  if err != nil {
    log.Fatalf("Failed to create Toolbox client: %v", err)
  }

  // Framework agnostic tools
  tools, err := client.LoadToolset("toolsetName", ctx)
  if err != nil {
    log.Fatalf("Failed to load tools: %v", err)
  }
  _ = tools // pass these tools to your application
}
{{< /highlight >}}
{{% /tab %}}
{{% tab header="LangChain Go" lang="en" %}}
{{< highlight go >}}
package main

import (
  "context"
  "encoding/json"
  "log"

  "github.com/googleapis/mcp-toolbox-sdk-go/core"
  "github.com/tmc/langchaingo/llms"
)

func main() {
  // Make sure to add the error checks
  // update the url to point to your server
  URL := "http://127.0.0.1:5000"
  ctx := context.Background()

  client, err := core.NewToolboxClient(URL)
  if err != nil {
    log.Fatalf("Failed to create Toolbox client: %v", err)
  }

  // Framework agnostic tool
  tool, err := client.LoadTool("toolName", ctx)
  if err != nil {
    log.Fatalf("Failed to load tool: %v", err)
  }

  // Fetch the tool's input schema
  inputschema, err := tool.InputSchema()
  if err != nil {
    log.Fatalf("Failed to fetch inputSchema: %v", err)
  }

  var paramsSchema map[string]any
  _ = json.Unmarshal(inputschema, &paramsSchema)

  // Use this tool with LangChainGo
  langChainTool := llms.Tool{
    Type: "function",
    Function: &llms.FunctionDefinition{
      Name:        tool.Name(),
      Description: tool.Description(),
      Parameters:  paramsSchema,
    },
  }
  _ = langChainTool // pass this tool to your LangChainGo application
}
{{< /highlight >}}
For end-to-end samples on using the Toolbox Go SDK with LangChain Go, see the [project's
samples](https://github.com/googleapis/mcp-toolbox-sdk-go/tree/main/core/samples)
{{% /tab %}}
{{% tab header="Genkit Go" lang="en" %}}
{{< highlight go >}}
package main

import (
  "context"
  "log"

  "github.com/firebase/genkit/go/genkit"
  "github.com/googleapis/mcp-toolbox-sdk-go/core"
  "github.com/googleapis/mcp-toolbox-sdk-go/tbgenkit"
)

func main() {
  // Make sure to add the error checks
  // Update the url to point to your server
  URL := "http://127.0.0.1:5000"
  ctx := context.Background()

  g, err := genkit.Init(ctx)
  if err != nil {
    log.Fatalf("Failed to initialize Genkit: %v", err)
  }

  client, err := core.NewToolboxClient(URL)
  if err != nil {
    log.Fatalf("Failed to create Toolbox client: %v", err)
  }

  // Framework agnostic tool
  tool, err := client.LoadTool("toolName", ctx)
  if err != nil {
    log.Fatalf("Failed to load tool: %v", err)
  }

  // Convert the tool using the tbgenkit package
  // Use this tool with Genkit Go
  genkitTool, err := tbgenkit.ToGenkitTool(tool, g)
  if err != nil {
    log.Fatalf("Failed to convert tool: %v\n", err)
  }
  _ = genkitTool // register this tool with your Genkit flows
}
{{< /highlight >}}
For end-to-end samples on using the Toolbox Go SDK with Genkit Go, see the [project's
samples](https://github.com/googleapis/mcp-toolbox-sdk-go/tree/main/tbgenkit/samples)
{{% /tab %}}
{{% tab header="Go GenAI" lang="en" %}}
{{< highlight go >}}
package main

import (
  "context"
  "encoding/json"
  "log"

  "github.com/googleapis/mcp-toolbox-sdk-go/core"
  "google.golang.org/genai"
)

func main() {
  // Make sure to add the error checks
  // Update the url to point to your server
  URL := "http://127.0.0.1:5000"
  ctx := context.Background()

  client, err := core.NewToolboxClient(URL)
  if err != nil {
    log.Fatalf("Failed to create Toolbox client: %v", err)
  }

  // Framework agnostic tool
  tool, err := client.LoadTool("toolName", ctx)
  if err != nil {
    log.Fatalf("Failed to load tool: %v", err)
  }

  // Fetch the tool's input schema
  inputschema, err := tool.InputSchema()
  if err != nil {
    log.Fatalf("Failed to fetch inputSchema: %v", err)
  }

  var schema *genai.Schema
  _ = json.Unmarshal(inputschema, &schema)

  funcDeclaration := &genai.FunctionDeclaration{
    Name:        tool.Name(),
    Description: tool.Description(),
    Parameters:  schema,
  }

  // Use this tool with Go GenAI
  genAITool := &genai.Tool{
    FunctionDeclarations: []*genai.FunctionDeclaration{funcDeclaration},
  }
  _ = genAITool // pass this tool in your GenerateContent config
}
{{< /highlight >}}
For end-to-end samples on using the Toolbox Go SDK with Go GenAI, see the [project's
samples](https://github.com/googleapis/mcp-toolbox-sdk-go/tree/main/core/samples)
{{% /tab %}}
{{% tab header="OpenAI Go" lang="en" %}}
{{< highlight go >}}
package main

import (
  "context"
  "encoding/json"
  "log"

  "github.com/googleapis/mcp-toolbox-sdk-go/core"
  openai "github.com/openai/openai-go"
)

func main() {
  // Make sure to add the error checks
  // Update the url to point to your server
  URL := "http://127.0.0.1:5000"
  ctx := context.Background()

  client, err := core.NewToolboxClient(URL)
  if err != nil {
    log.Fatalf("Failed to create Toolbox client: %v", err)
  }

  // Framework agnostic tool
  tool, err := client.LoadTool("toolName", ctx)
  if err != nil {
    log.Fatalf("Failed to load tool: %v", err)
  }

  // Fetch the tool's input schema
  inputschema, err := tool.InputSchema()
  if err != nil {
    log.Fatalf("Failed to fetch inputSchema: %v", err)
  }

  var paramsSchema openai.FunctionParameters
  _ = json.Unmarshal(inputschema, &paramsSchema)

  // Use this tool with OpenAI Go
  openAITool := openai.ChatCompletionToolParam{
    Function: openai.FunctionDefinitionParam{
      Name:        tool.Name(),
      Description: openai.String(tool.Description()),
      Parameters:  paramsSchema,
    },
  }
  _ = openAITool // include this tool in your chat completion request
}
{{< /highlight >}}
For end-to-end samples on using the Toolbox Go SDK with OpenAI Go, see the [project's
samples](https://github.com/googleapis/mcp-toolbox-sdk-go/tree/main/core/samples)
{{% /tab %}}
{{% tab header="ADK Go" lang="en" %}}
{{< highlight go >}}
package main

import (
  "context"
  "log"

  "github.com/googleapis/mcp-toolbox-sdk-go/tbadk"
)

func main() {
  // Make sure to add the error checks
  // Update the url to point to your server
  URL := "http://127.0.0.1:5000"
  ctx := context.Background()

  client, err := tbadk.NewToolboxClient(URL)
  if err != nil {
    log.Fatalf("Could not start Toolbox Client: %v", err)
  }

  // Use this tool with ADK Go
  tool, err := client.LoadTool("toolName", ctx)
  if err != nil {
    log.Fatalf("Could not load Toolbox Tool: %v", err)
  }
  _ = tool // register this tool with your ADK agent
}
{{< /highlight >}}
For end-to-end samples on using the Toolbox Go SDK with ADK Go, see the [project's
samples](https://github.com/googleapis/mcp-toolbox-sdk-go/tree/main/tbadk/samples)
{{% /tab %}}
{{< /tabpane >}}
For more detailed instructions on using the Toolbox Go SDK, see the
[project's
README](https://github.com/googleapis/mcp-toolbox-sdk-go/blob/main/core/README.md).
# Python Quickstart (Local)
How to get started running Toolbox locally with [Python](https://github.com/googleapis/mcp-toolbox-sdk-python), PostgreSQL, and [Agent Development Kit](https://google.github.io/adk-docs/), [LangGraph](https://www.langchain.com/langgraph), [LlamaIndex](https://www.llamaindex.ai/), or [GoogleGenAI](https://pypi.org/project/google-genai/).
[Open in Colab](https://colab.research.google.com/github/googleapis/genai-toolbox/blob/main/docs/en/getting-started/colab_quickstart.ipynb)
## Before you begin
This guide assumes you have already done the following:
1. Installed [Python 3.10+][install-python] (including [pip][install-pip] and
your preferred virtual environment tool for managing dependencies e.g.
[venv][install-venv]).
1. Installed [PostgreSQL 16+ and the `psql` client][install-postgres].
[install-python]: https://wiki.python.org/moin/BeginnersGuide/Download
[install-pip]: https://pip.pypa.io/en/stable/installation/
[install-venv]:
https://packaging.python.org/en/latest/tutorials/installing-packages/#creating-virtual-environments
[install-postgres]: https://www.postgresql.org/download/
### Cloud Setup (Optional)
{{< regionInclude "quickstart/shared/cloud_setup.md" "cloud_setup" >}}
## Step 1: Set up your database
{{< regionInclude "quickstart/shared/database_setup.md" "database_setup" >}}
## Step 2: Install and configure Toolbox
{{< regionInclude "quickstart/shared/configure_toolbox.md" "configure_toolbox" >}}
## Step 3: Connect your agent to Toolbox
In this section, we will write and run an agent that loads the tools from
Toolbox.
{{< notice tip>}}
If you prefer to experiment within a Google Colab environment, you can connect
to a [local
runtime](https://research.google.com/colaboratory/local-runtimes.html).
{{< /notice >}}
1. In a new terminal, install the SDK package.
{{< tabpane persist=header >}}
{{< tab header="ADK" lang="bash" >}}
pip install toolbox-core
{{< /tab >}}
{{< tab header="Langchain" lang="bash" >}}
pip install toolbox-langchain
{{< /tab >}}
{{< tab header="LlamaIndex" lang="bash" >}}
pip install toolbox-llamaindex
{{< /tab >}}
{{< tab header="Core" lang="bash" >}}
pip install toolbox-core
{{< /tab >}}
{{< /tabpane >}}
1. Install other required dependencies:
{{< tabpane persist=header >}}
{{< tab header="ADK" lang="bash" >}}
pip install google-adk
{{< /tab >}}
{{< tab header="Langchain" lang="bash" >}}
# TODO(developer): replace with correct package if needed
pip install langgraph langchain-google-vertexai
# pip install langchain-google-genai
# pip install langchain-anthropic
{{< /tab >}}
{{< tab header="LlamaIndex" lang="bash" >}}
# TODO(developer): replace with correct package if needed
pip install llama-index-llms-google-genai
# pip install llama-index-llms-anthropic
{{< /tab >}}
{{< tab header="Core" lang="bash" >}}
pip install google-genai
{{< /tab >}}
{{< /tabpane >}}
1. Create the agent:
{{< tabpane persist=header >}}
{{% tab header="ADK" text=true %}}
1. Create a new agent project. This will create a new directory named `my_agent`
with a file `agent.py`.
```bash
adk create my_agent
```
1. Update `my_agent/agent.py` with the following content to connect to Toolbox:
```py
{{< include "quickstart/python/adk/quickstart.py" >}}
```
1. Create a `.env` file with your Google API key:
```bash
echo 'GOOGLE_API_KEY="YOUR_API_KEY"' > my_agent/.env
```
{{% /tab %}}
{{% tab header="LangChain" text=true %}}
Create a new file named `agent.py` and copy the following code:
```py
{{< include "quickstart/python/langchain/quickstart.py" >}}
```
{{% /tab %}}
{{% tab header="LlamaIndex" text=true %}}
Create a new file named `agent.py` and copy the following code:
```py
{{< include "quickstart/python/llamaindex/quickstart.py" >}}
```
{{% /tab %}}
{{% tab header="Core" text=true %}}
Create a new file named `agent.py` and copy the following code:
```py
{{< include "quickstart/python/core/quickstart.py" >}}
```
{{% /tab %}}
{{< /tabpane >}}
{{< tabpane text=true persist=header >}}
{{% tab header="ADK" lang="en" %}}
To learn more about Agent Development Kit, check out the [ADK
Documentation](https://google.github.io/adk-docs/get-started/python/).
{{% /tab %}}
{{% tab header="Langchain" lang="en" %}}
To learn more about Agents in LangChain, check out the [LangGraph Agent
Documentation](https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.chat_agent_executor.create_react_agent).
{{% /tab %}}
{{% tab header="LlamaIndex" lang="en" %}}
To learn more about Agents in LlamaIndex, check out the [LlamaIndex
AgentWorkflow
Documentation](https://docs.llamaindex.ai/en/stable/examples/agent/agent_workflow_basic/).
{{% /tab %}}
{{% tab header="Core" lang="en" %}}
To learn more about tool calling with Google GenAI, check out the
[Google GenAI
Documentation](https://github.com/googleapis/python-genai?tab=readme-ov-file#manually-declare-and-invoke-a-function-for-function-calling).
{{% /tab %}}
{{< /tabpane >}}
4. Run your agent, and observe the results:
{{< tabpane persist=header >}}
{{% tab header="ADK" text=true %}}
Run your agent locally for testing:
```sh
adk run my_agent
```
Alternatively, serve it via a web interface:
```sh
adk web --port 8000
```
For more information, refer to the ADK documentation on [Running
Agents](https://google.github.io/adk-docs/get-started/python/#run-your-agent)
and [Deploying to Cloud](https://google.github.io/adk-docs/deploy/).
{{% /tab %}}
{{< tab header="Langchain" lang="bash" >}}
python agent.py
{{< /tab >}}
{{< tab header="LlamaIndex" lang="bash" >}}
python agent.py
{{< /tab >}}
{{< tab header="Core" lang="bash" >}}
python agent.py
{{< /tab >}}
{{< /tabpane >}}
{{< notice info >}}
For more information, visit the [Python SDK
repo](https://github.com/googleapis/mcp-toolbox-sdk-python).
{{< /notice >}}
# JS Quickstart (Local)
How to get started running Toolbox locally with [JavaScript](https://github.com/googleapis/mcp-toolbox-sdk-js), PostgreSQL, and orchestration frameworks such as [LangChain](https://js.langchain.com/docs/introduction/), [GenkitJS](https://genkit.dev/docs/get-started/), [LlamaIndex](https://ts.llamaindex.ai/) and [GoogleGenAI](https://github.com/googleapis/js-genai).
## Before you begin
This guide assumes you have already done the following:
1. Installed [Node.js (v18 or higher)].
1. Installed [PostgreSQL 16+ and the `psql` client][install-postgres].
[Node.js (v18 or higher)]: https://nodejs.org/
[install-postgres]: https://www.postgresql.org/download/
### Cloud Setup (Optional)
{{< regionInclude "quickstart/shared/cloud_setup.md" "cloud_setup" >}}
## Step 1: Set up your database
{{< regionInclude "quickstart/shared/database_setup.md" "database_setup" >}}
## Step 2: Install and configure Toolbox
{{< regionInclude "quickstart/shared/configure_toolbox.md" "configure_toolbox" >}}
## Step 3: Connect your agent to Toolbox
In this section, we will write and run an agent that loads the tools from
Toolbox.
1. (Optional) Initialize a Node.js project:
```bash
npm init -y
```
1. In a new terminal, install the
[SDK](https://www.npmjs.com/package/@toolbox-sdk/core).
```bash
npm install @toolbox-sdk/core
```
1. Install other required dependencies
{{< tabpane persist=header >}}
{{< tab header="LangChain" lang="bash" >}}
npm install langchain @langchain/google-genai
{{< /tab >}}
{{< tab header="GenkitJS" lang="bash" >}}
npm install genkit @genkit-ai/googleai
{{< /tab >}}
{{< tab header="LlamaIndex" lang="bash" >}}
npm install llamaindex @llamaindex/google @llamaindex/workflow
{{< /tab >}}
{{< tab header="GoogleGenAI" lang="bash" >}}
npm install @google/genai
{{< /tab >}}
{{< /tabpane >}}
1. Create a new file named `hotelAgent.js` and copy the following code to create
an agent:
{{< tabpane persist=header >}}
{{< tab header="LangChain" lang="js" >}}
{{< include "quickstart/js/langchain/quickstart.js" >}}
{{< /tab >}}
{{< tab header="GenkitJS" lang="js" >}}
{{< include "quickstart/js/genkit/quickstart.js" >}}
{{< /tab >}}
{{< tab header="LlamaIndex" lang="js" >}}
{{< include "quickstart/js/llamaindex/quickstart.js" >}}
{{< /tab >}}
{{< tab header="GoogleGenAI" lang="js" >}}
{{< include "quickstart/js/genAI/quickstart.js" >}}
{{< /tab >}}
{{< /tabpane >}}
1. Run your agent, and observe the results:
```sh
node hotelAgent.js
```
{{< notice info >}}
For more information, visit the [JS SDK
repo](https://github.com/googleapis/mcp-toolbox-sdk-js).
{{< /notice >}}
# Go Quickstart (Local)
How to get started running Toolbox locally with [Go](https://github.com/googleapis/mcp-toolbox-sdk-go), PostgreSQL, and orchestration frameworks such as [LangChain Go](https://tmc.github.io/langchaingo/docs/), [GenkitGo](https://genkit.dev/go/docs/get-started-go/), [Go GenAI](https://github.com/googleapis/go-genai) and [OpenAI Go](https://github.com/openai/openai-go).
## Before you begin
This guide assumes you have already done the following:
1. Installed [Go (v1.24.2 or higher)].
1. Installed [PostgreSQL 16+ and the `psql` client][install-postgres].
[Go (v1.24.2 or higher)]: https://go.dev/doc/install
[install-postgres]: https://www.postgresql.org/download/
### Cloud Setup (Optional)
{{< regionInclude "quickstart/shared/cloud_setup.md" "cloud_setup" >}}
## Step 1: Set up your database
{{< regionInclude "quickstart/shared/database_setup.md" "database_setup" >}}
## Step 2: Install and configure Toolbox
{{< regionInclude "quickstart/shared/configure_toolbox.md" "configure_toolbox" >}}
## Step 3: Connect your agent to Toolbox
In this section, we will write and run an agent that loads the tools from
Toolbox.
1. Initialize a go module:
```bash
go mod init main
```
1. In a new terminal, install the
[SDK](https://pkg.go.dev/github.com/googleapis/mcp-toolbox-sdk-go).
```bash
go get github.com/googleapis/mcp-toolbox-sdk-go
```
1. Create a new file named `hotelagent.go` and copy the following code to create
an agent:
{{< tabpane persist=header >}}
{{< tab header="LangChain Go" lang="go" >}}
{{< include "quickstart/go/langchain/quickstart.go" >}}
{{< /tab >}}
{{< tab header="Genkit Go" lang="go" >}}
{{< include "quickstart/go/genkit/quickstart.go" >}}
{{< /tab >}}
{{< tab header="Go GenAI" lang="go" >}}
{{< include "quickstart/go/genAI/quickstart.go" >}}
{{< /tab >}}
{{< tab header="OpenAI Go" lang="go" >}}
{{< include "quickstart/go/openAI/quickstart.go" >}}
{{< /tab >}}
{{< tab header="ADK Go" lang="go" >}}
{{< include "quickstart/go/adkgo/quickstart.go" >}}
{{< /tab >}}
{{< /tabpane >}}
1. Ensure all dependencies are installed:
```sh
go mod tidy
```
2. Run your agent, and observe the results:
```sh
go run hotelagent.go
```
{{< notice info >}}
For more information, visit the [Go SDK
repo](https://github.com/googleapis/mcp-toolbox-sdk-go).
{{< /notice >}}
# Quickstart (MCP)
How to get started running Toolbox locally with MCP Inspector.
## Overview
[Model Context Protocol](https://modelcontextprotocol.io) is an open protocol
that standardizes how applications provide context to LLMs. Check out this page
on how to [connect to Toolbox via MCP](../../how-to/connect_via_mcp.md).
## Step 1: Set up your database
In this section, we will create a database, insert some data that our agent
needs to access, and create a database user for Toolbox to connect with.
1. Connect to postgres using the `psql` command:
```bash
psql -h 127.0.0.1 -U postgres
```
Here, `postgres` denotes the default postgres superuser.
1. Create a new database and a new user:
{{< notice tip >}}
For a real application, it's best to follow the principle of least privilege
and only grant the privileges your application needs.
{{< /notice >}}
```sql
CREATE USER toolbox_user WITH PASSWORD 'my-password';
CREATE DATABASE toolbox_db;
GRANT ALL PRIVILEGES ON DATABASE toolbox_db TO toolbox_user;
ALTER DATABASE toolbox_db OWNER TO toolbox_user;
```
1. End the database session:
```bash
\q
```
1. Connect to your database with your new user:
```bash
psql -h 127.0.0.1 -U toolbox_user -d toolbox_db
```
1. Create a table using the following command:
```sql
CREATE TABLE hotels(
  id INTEGER NOT NULL PRIMARY KEY,
  name VARCHAR NOT NULL,
  location VARCHAR NOT NULL,
  price_tier VARCHAR NOT NULL,
  checkin_date DATE NOT NULL,
  checkout_date DATE NOT NULL,
  booked BIT NOT NULL
);
```
1. Insert data into the table.
```sql
INSERT INTO hotels(id, name, location, price_tier, checkin_date, checkout_date, booked)
VALUES
(1, 'Hilton Basel', 'Basel', 'Luxury', '2024-04-22', '2024-04-20', B'0'),
(2, 'Marriott Zurich', 'Zurich', 'Upscale', '2024-04-14', '2024-04-21', B'0'),
(3, 'Hyatt Regency Basel', 'Basel', 'Upper Upscale', '2024-04-02', '2024-04-20', B'0'),
(4, 'Radisson Blu Lucerne', 'Lucerne', 'Midscale', '2024-04-24', '2024-04-05', B'0'),
(5, 'Best Western Bern', 'Bern', 'Upper Midscale', '2024-04-23', '2024-04-01', B'0'),
(6, 'InterContinental Geneva', 'Geneva', 'Luxury', '2024-04-23', '2024-04-28', B'0'),
(7, 'Sheraton Zurich', 'Zurich', 'Upper Upscale', '2024-04-27', '2024-04-02', B'0'),
(8, 'Holiday Inn Basel', 'Basel', 'Upper Midscale', '2024-04-24', '2024-04-09', B'0'),
(9, 'Courtyard Zurich', 'Zurich', 'Upscale', '2024-04-03', '2024-04-13', B'0'),
(10, 'Comfort Inn Bern', 'Bern', 'Midscale', '2024-04-04', '2024-04-16', B'0');
```
1. End the database session:
```bash
\q
```
## Step 2: Install and configure Toolbox
In this section, we will download Toolbox, configure our tools in a
`tools.yaml`, and then run the Toolbox server.
1. Download the latest version of Toolbox as a binary:
{{< notice tip >}}
Select the
[correct binary](https://github.com/googleapis/genai-toolbox/releases)
corresponding to your OS and CPU architecture.
{{< /notice >}}
```bash
export OS="linux/amd64" # one of linux/amd64, darwin/arm64, darwin/amd64, or windows/amd64
curl -O https://storage.googleapis.com/genai-toolbox/v0.21.0/$OS/toolbox
```
1. Make the binary executable:
```bash
chmod +x toolbox
```
1. Write the following into a `tools.yaml` file. Be sure to update any fields
such as `user`, `password`, or `database` that you may have customized in the
previous step.
{{< notice tip >}}
In practice, use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
```yaml
sources:
  my-pg-source:
    kind: postgres
    host: 127.0.0.1
    port: 5432
    database: toolbox_db
    user: toolbox_user
    password: my-password
tools:
  search-hotels-by-name:
    kind: postgres-sql
    source: my-pg-source
    description: Search for hotels based on name.
    parameters:
      - name: name
        type: string
        description: The name of the hotel.
    statement: SELECT * FROM hotels WHERE name ILIKE '%' || $1 || '%';
  search-hotels-by-location:
    kind: postgres-sql
    source: my-pg-source
    description: Search for hotels based on location.
    parameters:
      - name: location
        type: string
        description: The location of the hotel.
    statement: SELECT * FROM hotels WHERE location ILIKE '%' || $1 || '%';
  book-hotel:
    kind: postgres-sql
    source: my-pg-source
    description: >-
      Book a hotel by its ID. Returns NULL if the hotel is successfully booked;
      raises an error if not.
    parameters:
      - name: hotel_id
        type: string
        description: The ID of the hotel to book.
    statement: UPDATE hotels SET booked = B'1' WHERE id = $1;
  update-hotel:
    kind: postgres-sql
    source: my-pg-source
    description: >-
      Update a hotel's check-in and check-out dates by its ID. Returns a message
      indicating whether the hotel was successfully updated or not.
    parameters:
      - name: hotel_id
        type: string
        description: The ID of the hotel to update.
      - name: checkin_date
        type: string
        description: The new check-in date of the hotel.
      - name: checkout_date
        type: string
        description: The new check-out date of the hotel.
    statement: >-
      UPDATE hotels SET checkin_date = CAST($2 as date), checkout_date = CAST($3
      as date) WHERE id = $1;
  cancel-hotel:
    kind: postgres-sql
    source: my-pg-source
    description: Cancel a hotel by its ID.
    parameters:
      - name: hotel_id
        type: string
        description: The ID of the hotel to cancel.
    statement: UPDATE hotels SET booked = B'0' WHERE id = $1;
toolsets:
  my-toolset:
    - search-hotels-by-name
    - search-hotels-by-location
    - book-hotel
    - update-hotel
    - cancel-hotel
```
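Following the tip above, a sketch of the same source with its credentials read
from environment variables instead of hardcoded values (using the `${ENV_NAME}`
substitution format described in the Configuration section):
```yaml
sources:
  my-pg-source:
    kind: postgres
    host: 127.0.0.1
    port: 5432
    database: toolbox_db
    user: ${USER_NAME}
    password: ${PASSWORD}
```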
For more info on tools, check out the
[Tools](../../resources/tools/) section.
1. Run the Toolbox server, pointing to the `tools.yaml` file created earlier:
```bash
./toolbox --tools-file "tools.yaml"
```
## Step 3: Connect to MCP Inspector
1. Run the MCP Inspector:
```bash
npx @modelcontextprotocol/inspector
```
1. Type `y` when it asks to install the inspector package.
1. It should show the following when the MCP Inspector is up and running (please
take note of `<YOUR_SESSION_TOKEN>`):
```bash
Starting MCP inspector...
⚙️ Proxy server listening on localhost:6277
🔑 Session token: <YOUR_SESSION_TOKEN>
Use this token to authenticate requests or set DANGEROUSLY_OMIT_AUTH=true to disable auth
🚀 MCP Inspector is up and running at:
http://localhost:6274/?MCP_PROXY_AUTH_TOKEN=<YOUR_SESSION_TOKEN>
```
1. Open the above link in your browser.
1. For `Transport Type`, select `Streamable HTTP`.
1. For `URL`, type in `http://127.0.0.1:5000/mcp`.
1. For `Configuration` -> `Proxy Session Token`, make sure
`<YOUR_SESSION_TOKEN>` is present.
1. Click Connect.

1. Select `List Tools`; you will see the list of tools configured in `tools.yaml`.

1. Test out your tools here!
# Configuration
How to configure Toolbox's tools.yaml file.
The primary way to configure Toolbox is through the `tools.yaml` file. If you
have multiple files, you can tell Toolbox which one to load with the
`--tools-file tools.yaml` flag.
You can find more detailed reference documentation for all resource types in the
[Resources](../resources/) section.
### Using Environment Variables
To avoid hardcoding secret fields like passwords, usernames, API keys,
etc., you can use environment variables with the format `${ENV_NAME}`.
```yaml
user: ${USER_NAME}
password: ${PASSWORD}
```
A default value can be specified like `${ENV_NAME:default}`.
```yaml
port: ${DB_PORT:3306}
```
### Sources
The `sources` section of your `tools.yaml` defines what data sources your
Toolbox should have access to. Most tools will have at least one source to
execute against.
```yaml
sources:
  my-pg-source:
    kind: postgres
    host: 127.0.0.1
    port: 5432
    database: toolbox_db
    user: ${USER_NAME}
    password: ${PASSWORD}
For more details on configuring different types of sources, see the
[Sources](../resources/sources/) section.
### Tools
The `tools` section of your `tools.yaml` defines the actions your agent can
take: what kind of tool it is, which source(s) it affects, what parameters it
uses, etc.
```yaml
tools:
  search-hotels-by-name:
    kind: postgres-sql
    source: my-pg-source
    description: Search for hotels based on name.
    parameters:
      - name: name
        type: string
        description: The name of the hotel.
    statement: SELECT * FROM hotels WHERE name ILIKE '%' || $1 || '%';
```
For more details on configuring different types of tools, see the
[Tools](../resources/tools/) section.
### Toolsets
The `toolsets` section of your `tools.yaml` allows you to define groups of tools
that you want to be able to load together. This can be useful for defining
different sets for different agents or different applications.
```yaml
toolsets:
  my_first_toolset:
    - my_first_tool
    - my_second_tool
  my_second_toolset:
    - my_second_tool
    - my_third_tool
```
You can load toolsets by name:
```python
# This will load all tools
all_tools = client.load_toolset()
# This will only load the tools listed in 'my_second_toolset'
my_second_toolset = client.load_toolset("my_second_toolset")
```
### Prompts
The `prompts` section of your `tools.yaml` defines the templates containing
structured messages and instructions for interacting with language models.
```yaml
prompts:
  code_review:
    description: "Asks the LLM to analyze code quality and suggest improvements."
    messages:
      - content: "Please review the following code for quality, correctness, and potential improvements: \n\n{{.code}}"
    arguments:
      - name: "code"
        description: "The code to review"
```
For more details on configuring different types of prompts, see the
[Prompts](../resources/prompts/) section.
# Concepts
Some core concepts in Toolbox
# Telemetry
An overview of telemetry and observability in Toolbox.
## About
Telemetry data such as logs, metrics, and traces helps developers understand
the internal state of the system. This page walks through the different types of
telemetry and observability available in Toolbox.
Toolbox exports logs via standard out/err, and exports traces and metrics
through [OpenTelemetry](https://opentelemetry.io/). Additional flags can be
passed to Toolbox to enable different logging behavior, or to export metrics
through a specific [exporter](#exporter).
## Logging
The following flags can be used to customize Toolbox logging:
| **Flag** | **Description** |
|--------------------|-----------------------------------------------------------------------------------------|
| `--log-level` | Preferred log level, allowed values: `debug`, `info`, `warn`, `error`. Default: `info`. |
| `--logging-format` | Preferred logging format, allowed values: `standard`, `json`. Default: `standard`. |
**Example:**
```bash
./toolbox --tools-file "tools.yaml" --log-level warn --logging-format json
```
### Level
Toolbox supports the following log levels:
| **Log level** | **Description** |
|---------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Debug | Debug logs typically contain information that is only useful during the debugging phase and may be of little value during production. |
| Info | Info logs include information about successful operations within the application, such as a successful start, pause, or exit of the application. |
| Warn          | Warning logs are slightly less severe than error conditions. While they do not indicate an error, they signal that an operation might fail in the future if action is not taken now. |
| Error         | Error logs contain an application error message. |
Toolbox only outputs logs at or above the configured severity level. The table
above lists the log levels Toolbox supports in order of increasing severity.
### Format
Toolbox supports both standard and structured logging formats.
Standard logging outputs logs as strings:
```
2024-11-12T15:08:11.451377-08:00 INFO "Initialized 0 sources.\n"
```
Structured logging outputs logs as JSON:
```
{
  "timestamp":"2024-11-04T16:45:11.987299-08:00",
  "severity":"ERROR",
  "logging.googleapis.com/sourceLocation":{...},
  "message":"unable to parse tool file at \"tools.yaml\": \"cloud-sql-postgres1\" is not a valid kind of data source"
}
```
{{< notice tip >}}
`logging.googleapis.com/sourceLocation` shows the source code
location information associated with the log entry, if any.
{{< /notice >}}
## Telemetry
Toolbox supports exporting metrics and traces to any OpenTelemetry-compatible
exporter.
### Metrics
A metric is a measurement of a service captured at runtime. The collected data
can be used to provide important insights into the service. Toolbox provides the
following custom metrics:
| **Metric Name** | **Description** |
|------------------------------------|---------------------------------------------------------|
| `toolbox.server.toolset.get.count` | Counts the number of toolset manifest requests served |
| `toolbox.server.tool.get.count` | Counts the number of tool manifest requests served |
| `toolbox.server.tool.get.invoke` | Counts the number of tool invocation requests served |
| `toolbox.server.mcp.sse.count`     | Counts the number of MCP SSE connection requests served |
| `toolbox.server.mcp.post.count`    | Counts the number of MCP POST requests served           |
All custom metrics have the following attributes/labels:
| **Metric Attributes** | **Description** |
|----------------------------|-----------------------------------------------------------|
| `toolbox.name` | Name of the toolset or tool, if applicable. |
| `toolbox.operation.status` | Operation status code, for example: `success`, `failure`. |
| `toolbox.sse.sessionId`    | Session ID for the SSE connection, if applicable.          |
| `toolbox.method` | Method of JSON-RPC request, if applicable. |
### Traces
A trace is a tree of spans that shows the path that a request makes through an
application.
Spans generated by the Toolbox server are prefixed with `toolbox/server/`. For
example, when a user runs Toolbox, it generates spans for the following, with
`toolbox/server/init` as the root span:

### Resource Attributes
All metrics and traces generated within Toolbox will be associated with a
unified [resource][resource]. The list of resource attributes included are:
| **Resource Name** | **Description** |
|-------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [TelemetrySDK](https://pkg.go.dev/go.opentelemetry.io/otel/sdk/resource#WithTelemetrySDK) | TelemetrySDK version info. |
| [OS](https://pkg.go.dev/go.opentelemetry.io/otel/sdk/resource#WithOS) | OS attributes including OS description and OS type. |
| [Container](https://pkg.go.dev/go.opentelemetry.io/otel/sdk/resource#WithContainer) | Container attributes including container ID, if applicable. |
| [Host](https://pkg.go.dev/go.opentelemetry.io/otel/sdk/resource#WithHost) | Host attributes including host name. |
| [SchemaURL](https://pkg.go.dev/go.opentelemetry.io/otel/sdk/resource#WithSchemaURL) | Sets the schema URL for the configured resource. |
| `service.name` | OpenTelemetry service name. Defaults to `toolbox`. Users can set the service name via the `--telemetry-service-name` flag to distinguish between different Toolbox services. |
| `service.version` | The version of Toolbox used. |
[resource]: https://opentelemetry.io/docs/languages/go/resources/
### Exporter
An exporter is responsible for processing and exporting telemetry data. Toolbox
generates telemetry data in the OpenTelemetry Protocol (OTLP), and users can
choose any exporter designed to support the OpenTelemetry Protocol. Toolbox
provides two exporter implementations to choose from: the Google Cloud Exporter,
which sends data directly to the backend, or the OTLP Exporter, used along with
a Collector that acts as a proxy to collect and export data to the telemetry
backend of the user's choice.

#### Google Cloud Exporter
The Google Cloud Exporter directly exports telemetry to Google Cloud Monitoring.
It utilizes the [GCP Metric Exporter][gcp-metric-exporter] and [GCP Trace
Exporter][gcp-trace-exporter].
[gcp-metric-exporter]:
https://github.com/GoogleCloudPlatform/opentelemetry-operations-go/tree/main/exporter/metric
[gcp-trace-exporter]:
https://github.com/GoogleCloudPlatform/opentelemetry-operations-go/tree/main/exporter/trace
{{< notice note >}}
If you're using Google Cloud Monitoring, the following APIs will need to be
enabled:
- [Cloud Logging API](https://cloud.google.com/logging/docs/api/enable-api)
- [Cloud Monitoring API](https://cloud.google.com/monitoring/api/enable-api)
- [Cloud Trace API](https://console.cloud.google.com/apis/enableflow?apiid=cloudtrace.googleapis.com)
{{< /notice >}}
#### OTLP Exporter
This implementation uses the default OTLP Exporter over HTTP for
[metrics][otlp-metric-exporter] and [traces][otlp-trace-exporter]. You can use
this exporter if you choose to export your telemetry data to a Collector.
[otlp-metric-exporter]: https://opentelemetry.io/docs/languages/go/exporters/#otlp-traces-over-http
[otlp-trace-exporter]: https://opentelemetry.io/docs/languages/go/exporters/#otlp-traces-over-http
### Collector
A collector acts as a proxy between the application and the telemetry backend.
It receives telemetry data, transforms it, and then exports data to backends
that can store it permanently. Toolbox provide an option to export telemetry
data to user's choice of backend(s) that are compatible with the Open Telemetry
Protocol (OTLP). If you would like to use a collector, please refer to this
[Export Telemetry using the Otel Collector](../../how-to/export_telemetry.md).
### Flags
The following flags are used to determine Toolbox's telemetry configuration:
| **flag** | **type** | **description** |
|----------------------------|----------|------------------------------------------------------------------------------------------------------------------|
| `--telemetry-gcp` | bool | Enable exporting directly to Google Cloud Monitoring. Default is `false`. |
| `--telemetry-otlp`         | string   | Enable exporting using OpenTelemetry Protocol (OTLP) to the specified endpoint (e.g. "http://127.0.0.1:4553").     |
| `--telemetry-service-name` | string | Sets the value of the `service.name` resource attribute. Default is `toolbox`. |
In addition to the flags noted above, you can also make additional configuration
for OpenTelemetry via the [General SDK Configuration][sdk-configuration] through
environmental variables.
[sdk-configuration]:
https://opentelemetry.io/docs/languages/sdk-configuration/general/
**Examples:**
To enable Google Cloud Exporter:
```bash
./toolbox --telemetry-gcp
```
To enable OTLP Exporter, provide Collector endpoint:
```bash
./toolbox --telemetry-otlp="http://127.0.0.1:4553"
```
# How-to
List of guides detailing how to do different things with Toolbox.
# Connect from your IDE
List of guides detailing how to connect your AI tools (IDEs) to Toolbox using MCP.
# AlloyDB Admin API using MCP
Create your AlloyDB database with MCP Toolbox.
# AlloyDB using MCP
Connect your IDE to AlloyDB using Toolbox.
# BigQuery using MCP
Connect your IDE to BigQuery using Toolbox.
# Cloud SQL for MySQL using MCP
Connect your IDE to Cloud SQL for MySQL using Toolbox.
# Cloud SQL for Postgres using MCP
Connect your IDE to Cloud SQL for Postgres using Toolbox.
# Cloud SQL for SQL Server using MCP
Connect your IDE to Cloud SQL for SQL Server using Toolbox.
# Firestore using MCP
Connect your IDE to Firestore using Toolbox.
# Looker using MCP
Connect your IDE to Looker using Toolbox.
[Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction) is
an open protocol for connecting Large Language Models (LLMs) to data sources
like Postgres. This guide covers how to use [MCP Toolbox for Databases][toolbox]
to expose your developer assistant tools to a Looker instance:
* [Gemini-CLI][gemini-cli]
* [Cursor][cursor]
* [Windsurf][windsurf] (Codium)
* [Visual Studio Code][vscode] (Copilot)
* [Cline][cline] (VS Code extension)
* [Claude desktop][claudedesktop]
* [Claude code][claudecode]
[toolbox]: https://github.com/googleapis/genai-toolbox
[gemini-cli]: #configure-your-mcp-client
[cursor]: #configure-your-mcp-client
[windsurf]: #configure-your-mcp-client
[vscode]: #configure-your-mcp-client
[cline]: #configure-your-mcp-client
[claudedesktop]: #configure-your-mcp-client
[claudecode]: #configure-your-mcp-client
## Set up Looker
1. Get a Looker Client ID and Client Secret. Follow the directions
[here](https://cloud.google.com/looker/docs/api-auth#authentication_with_an_sdk).
1. Have the base URL of your Looker instance available. It is likely
something like `https://looker.example.com`. In some cases the API is
listening at a different port, and you will need to use
`https://looker.example.com:19999` instead.
## Install MCP Toolbox
1. Download the latest version of Toolbox as a binary. Select the [correct
binary](https://github.com/googleapis/genai-toolbox/releases) corresponding
to your OS and CPU architecture. You are required to use Toolbox version
v0.10.0+:
{{< tabpane persist=header >}}
{{< tab header="linux/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.21.0/linux/amd64/toolbox
{{< /tab >}}
{{< tab header="darwin/arm64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.21.0/darwin/arm64/toolbox
{{< /tab >}}
{{< tab header="darwin/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.21.0/darwin/amd64/toolbox
{{< /tab >}}
{{< tab header="windows/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.21.0/windows/amd64/toolbox.exe
{{< /tab >}}
{{< /tabpane >}}
1. Make the binary executable:
```bash
chmod +x toolbox
```
1. Verify the installation:
```bash
./toolbox --version
```
## Configure your MCP Client
{{< tabpane text=true >}}
{{% tab header="Gemini-CLI" lang="en" %}}
1. Install
[Gemini-CLI](https://github.com/google-gemini/gemini-cli#install-globally-with-npm).
1. Create a directory `.gemini` in your home directory if it doesn't exist.
1. Create the file `.gemini/settings.json` if it doesn't exist.
1. Add the following configuration, or add the mcpServers stanza if you already
have a `settings.json` with content. Replace the path to the toolbox
executable and the environment variables with your values, and save:
```json
{
  "mcpServers": {
    "looker-toolbox": {
      "command": "./PATH/TO/toolbox",
      "args": ["--stdio", "--prebuilt", "looker"],
      "env": {
        "LOOKER_BASE_URL": "https://looker.example.com",
        "LOOKER_CLIENT_ID": "",
        "LOOKER_CLIENT_SECRET": "",
        "LOOKER_VERIFY_SSL": "true"
      }
    }
  }
}
```
1. Start Gemini-CLI with the `gemini` command and use the command `/mcp` to see
the configured MCP tools.
{{% /tab %}}
{{% tab header="Claude code" lang="en" %}}
1. Install [Claude
Code](https://docs.anthropic.com/en/docs/agents-and-tools/claude-code/overview).
1. Create a `.mcp.json` file in your project root if it doesn't exist.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
  "mcpServers": {
    "looker-toolbox": {
      "command": "./PATH/TO/toolbox",
      "args": ["--stdio", "--prebuilt", "looker"],
      "env": {
        "LOOKER_BASE_URL": "https://looker.example.com",
        "LOOKER_CLIENT_ID": "",
        "LOOKER_CLIENT_SECRET": "",
        "LOOKER_VERIFY_SSL": "true"
      }
    }
  }
}
```
1. Restart Claude Code to apply the new configuration.
{{% /tab %}}
{{% tab header="Claude desktop" lang="en" %}}
1. Open [Claude desktop](https://claude.ai/download) and navigate to Settings.
1. Under the Developer tab, tap Edit Config to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
  "mcpServers": {
    "looker-toolbox": {
      "command": "./PATH/TO/toolbox",
      "args": ["--stdio", "--prebuilt", "looker"],
      "env": {
        "LOOKER_BASE_URL": "https://looker.example.com",
        "LOOKER_CLIENT_ID": "",
        "LOOKER_CLIENT_SECRET": "",
        "LOOKER_VERIFY_SSL": "true"
      }
    }
  }
}
```
1. Restart Claude desktop.
1. From the new chat screen, you should see a hammer (MCP) icon appear with the
new MCP server available.
{{% /tab %}}
{{% tab header="Cline" lang="en" %}}
1. Open the [Cline](https://github.com/cline/cline) extension in VS Code and tap
the **MCP Servers** icon.
1. Tap Configure MCP Servers to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
  "mcpServers": {
    "looker-toolbox": {
      "command": "./PATH/TO/toolbox",
      "args": ["--stdio", "--prebuilt", "looker"],
      "env": {
        "LOOKER_BASE_URL": "https://looker.example.com",
        "LOOKER_CLIENT_ID": "",
        "LOOKER_CLIENT_SECRET": "",
        "LOOKER_VERIFY_SSL": "true"
      }
    }
  }
}
```
1. You should see a green active status after the server is successfully
connected.
{{% /tab %}}
{{% tab header="Cursor" lang="en" %}}
1. Create a `.cursor` directory in your project root if it doesn't exist.
1. Create a `.cursor/mcp.json` file if it doesn't exist and open it.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
  "mcpServers": {
    "looker-toolbox": {
      "command": "./PATH/TO/toolbox",
      "args": ["--stdio", "--prebuilt", "looker"],
      "env": {
        "LOOKER_BASE_URL": "https://looker.example.com",
        "LOOKER_CLIENT_ID": "",
        "LOOKER_CLIENT_SECRET": "",
        "LOOKER_VERIFY_SSL": "true"
      }
    }
  }
}
```
1. Open [Cursor](https://www.cursor.com/) and navigate to **Settings > Cursor
Settings > MCP**. You should see a green active status after the server is
successfully connected.
{{% /tab %}}
{{% tab header="Visual Studio Code (Copilot)" lang="en" %}}
1. Open [VS Code](https://code.visualstudio.com/docs/copilot/overview) and
create a `.vscode` directory in your project root if it doesn't exist.
1. Create a `.vscode/mcp.json` file if it doesn't exist and open it.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
  "servers": {
    "looker-toolbox": {
      "command": "./PATH/TO/toolbox",
      "args": ["--stdio", "--prebuilt", "looker"],
      "env": {
        "LOOKER_BASE_URL": "https://looker.example.com",
        "LOOKER_CLIENT_ID": "",
        "LOOKER_CLIENT_SECRET": "",
        "LOOKER_VERIFY_SSL": "true"
      }
    }
  }
}
```
{{% /tab %}}
{{% tab header="Windsurf" lang="en" %}}
1. Open [Windsurf](https://docs.codeium.com/windsurf) and navigate to the
Cascade assistant.
1. Tap on the hammer (MCP) icon, then Configure to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
  "mcpServers": {
    "looker-toolbox": {
      "command": "./PATH/TO/toolbox",
      "args": ["--stdio", "--prebuilt", "looker"],
      "env": {
        "LOOKER_BASE_URL": "https://looker.example.com",
        "LOOKER_CLIENT_ID": "",
        "LOOKER_CLIENT_SECRET": "",
        "LOOKER_VERIFY_SSL": "true"
      }
    }
  }
}
```
{{% /tab %}}
{{< /tabpane >}}
## Use Tools
Your AI tool is now connected to Looker using MCP. Try asking your AI
assistant to list models, explores, dimensions, and measures. Run a
query, retrieve the SQL for a query, and run a saved Look.
The full tool list is available in the [Prebuilt Tools
Reference](../../reference/prebuilt-tools/#looker).
The following tools are available to the LLM:
### Looker Model and Query Tools
These tools are used to get information about a Looker model
and execute queries against that model.
1. **get_models**: list the LookML models in Looker
1. **get_explores**: list the explores in a given model
1. **get_dimensions**: list the dimensions in a given explore
1. **get_measures**: list the measures in a given explore
1. **get_filters**: list the filters in a given explore
1. **get_parameters**: list the parameters in a given explore
1. **query**: Run a query and return the data
1. **query_sql**: Return the SQL generated by Looker for a query
1. **query_url**: Return a link to the query in Looker for further exploration
### Looker Content Tools
These tools get saved content (Looks and Dashboards) from a Looker
instance and create new saved content.
1. **get_looks**: Return the saved Looks that match a title or description
1. **run_look**: Run a saved Look and return the data
1. **make_look**: Create a saved Look in Looker and return the URL
1. **get_dashboards**: Return the saved dashboards that match a title or
description
1. **run_dashboard**: Run the queries associated with a dashboard and return the
data
1. **make_dashboard**: Create a saved dashboard in Looker and return the URL
1. **add_dashboard_element**: Add a tile to a dashboard
### Looker Instance Health Tools
These tools offer the same health check algorithms as the popular
[Henry](https://github.com/looker-open-source/henry) CLI.
1. **health_pulse**: Check the health of a Looker instance
1. **health_analyze**: Analyze the usage of a Looker object
1. **health_vacuum**: Find LookML elements that might be unused
### LookML Authoring Tools
These tools enable the caller to write and modify LookML files, as well
as to get the database schema needed to write LookML effectively.
1. **dev_mode**: Activate dev mode
1. **get_projects**: Get the list of LookML projects
1. **get_project_files**: Get the list of files in a project
1. **get_project_file**: Get the contents of a file in a project
1. **create_project_file**: Create a file in a project
1. **update_project_file**: Update the contents of a file in a project
1. **delete_project_file**: Delete a file in a project
1. **get_connections**: Get the list of connections
1. **get_connection_schemas**: Get the list of schemas for a connection
1. **get_connection_databases**: Get the list of databases for a connection
1. **get_connection_tables**: Get the list of tables for a connection
1. **get_connection_table_columns**: Get the list of columns for a table in a
connection
{{< notice note >}}
Prebuilt tools are pre-1.0, so expect some tool changes between versions. LLMs
will adapt to the tools available, so this shouldn't affect most users.
{{< /notice >}}
# MySQL using MCP
Connect your IDE to MySQL using Toolbox.
[Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction) is
an open protocol for connecting Large Language Models (LLMs) to data sources
like MySQL. This guide covers how to use [MCP Toolbox for Databases][toolbox] to
expose your developer assistant tools to a MySQL instance:
* [Cursor][cursor]
* [Windsurf][windsurf] (Codium)
* [Visual Studio Code][vscode] (Copilot)
* [Cline][cline] (VS Code extension)
* [Claude desktop][claudedesktop]
* [Claude code][claudecode]
* [Gemini CLI][geminicli]
* [Gemini Code Assist][geminicodeassist]
[toolbox]: https://github.com/googleapis/genai-toolbox
[cursor]: #configure-your-mcp-client
[windsurf]: #configure-your-mcp-client
[vscode]: #configure-your-mcp-client
[cline]: #configure-your-mcp-client
[claudedesktop]: #configure-your-mcp-client
[claudecode]: #configure-your-mcp-client
[geminicli]: #configure-your-mcp-client
[geminicodeassist]: #configure-your-mcp-client
## Set up the database
1. [Create or select a MySQL instance.](https://dev.mysql.com/downloads/installer/)
## Install MCP Toolbox
1. Download the latest version of Toolbox as a binary. Select the [correct
binary](https://github.com/googleapis/genai-toolbox/releases) corresponding
to your OS and CPU architecture. You are required to use Toolbox version
v0.10.0+:
{{< tabpane persist=header >}}
{{< tab header="linux/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.21.0/linux/amd64/toolbox
{{< /tab >}}
{{< tab header="darwin/arm64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.21.0/darwin/arm64/toolbox
{{< /tab >}}
{{< tab header="darwin/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.21.0/darwin/amd64/toolbox
{{< /tab >}}
{{< tab header="windows/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.21.0/windows/amd64/toolbox.exe
{{< /tab >}}
{{< /tabpane >}}
1. Make the binary executable:
```bash
chmod +x toolbox
```
1. Verify the installation:
```bash
./toolbox --version
```
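Optionally, you can smoke-test the prebuilt server from the command line before
wiring it into a client. The sketch below uses placeholder credentials and
pipes the standard MCP handshake (`initialize`, the `initialized` notification,
then `tools/list`) into the stdio transport:
```bash
# Handshake sketch; adjust the protocolVersion and credentials as needed.
printf '%s\n' \
  '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"smoke-test","version":"0.0.0"}}}' \
  '{"jsonrpc":"2.0","method":"notifications/initialized"}' \
  '{"jsonrpc":"2.0","id":2,"method":"tools/list"}' \
  | MYSQL_HOST="127.0.0.1" MYSQL_PORT="3306" MYSQL_DATABASE="mydb" \
    MYSQL_USER="myuser" MYSQL_PASSWORD="change-me" \
    ./toolbox --prebuilt mysql --stdio
```
If the connection details are correct, the final response should list the
tools described in the Use Tools section below.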
## Configure your MCP Client
{{< tabpane text=true >}}
{{% tab header="Claude code" lang="en" %}}
1. Install [Claude
Code](https://docs.anthropic.com/en/docs/agents-and-tools/claude-code/overview).
1. Create a `.mcp.json` file in your project root if it doesn't exist.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"mysql": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt", "mysql", "--stdio"],
"env": {
"MYSQL_HOST": "",
"MYSQL_PORT": "",
"MYSQL_DATABASE": "",
"MYSQL_USER": "",
"MYSQL_PASSWORD": ""
}
}
}
}
```
1. Restart Claude code to apply the new configuration.
{{% /tab %}}
{{% tab header="Claude desktop" lang="en" %}}
1. Open [Claude desktop](https://claude.ai/download) and navigate to Settings.
1. Under the Developer tab, tap Edit Config to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"mysql": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt", "mysql", "--stdio"],
"env": {
"MYSQL_HOST": "",
"MYSQL_PORT": "",
"MYSQL_DATABASE": "",
"MYSQL_USER": "",
"MYSQL_PASSWORD": ""
}
}
}
}
```
1. Restart Claude desktop.
1. From the new chat screen, you should see a hammer (MCP) icon appear with the
new MCP server available.
{{% /tab %}}
{{% tab header="Cline" lang="en" %}}
1. Open the [Cline](https://github.com/cline/cline) extension in VS Code and
tap the **MCP Servers** icon.
1. Tap Configure MCP Servers to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"mysql": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt", "mysql", "--stdio"],
"env": {
"MYSQL_HOST": "",
"MYSQL_PORT": "",
"MYSQL_DATABASE": "",
"MYSQL_USER": "",
"MYSQL_PASSWORD": ""
}
}
}
}
```
1. You should see a green active status after the server is successfully
connected.
{{% /tab %}}
{{% tab header="Cursor" lang="en" %}}
1. Create a `.cursor` directory in your project root if it doesn't exist.
1. Create a `.cursor/mcp.json` file if it doesn't exist and open it.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"mysql": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt", "mysql", "--stdio"],
"env": {
"MYSQL_HOST": "",
"MYSQL_PORT": "",
"MYSQL_DATABASE": "",
"MYSQL_USER": "",
"MYSQL_PASSWORD": ""
}
}
}
}
```
1. Open [Cursor](https://www.cursor.com/) and navigate to **Settings > Cursor
Settings > MCP**. You should see a green active status after the server is
successfully connected.
{{% /tab %}}
{{% tab header="Visual Studio Code (Copilot)" lang="en" %}}
1. Open [VS Code](https://code.visualstudio.com/docs/copilot/overview) and
create a `.vscode` directory in your project root if it doesn't exist.
1. Create a `.vscode/mcp.json` file if it doesn't exist and open it.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"servers": {
"mysql": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","mysql","--stdio"],
"env": {
"MYSQL_HOST": "",
"MYSQL_PORT": "",
"MYSQL_DATABASE": "",
"MYSQL_USER": "",
"MYSQL_PASSWORD": ""
}
}
}
}
```
{{% /tab %}}
{{% tab header="Windsurf" lang="en" %}}
1. Open [Windsurf](https://docs.codeium.com/windsurf) and navigate to the
Cascade assistant.
1. Tap on the hammer (MCP) icon, then Configure to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"mysql": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","mysql","--stdio"],
"env": {
"MYSQL_HOST": "",
"MYSQL_PORT": "",
"MYSQL_DATABASE": "",
"MYSQL_USER": "",
"MYSQL_PASSWORD": ""
}
}
}
}
```
{{% /tab %}}
{{% tab header="Gemini CLI" lang="en" %}}
1. Install the [Gemini
CLI](https://github.com/google-gemini/gemini-cli?tab=readme-ov-file#quickstart).
1. In your working directory, create a folder named `.gemini`. Within it,
create a `settings.json` file.
1. Add the following configuration, replace the environment variables with your
values, and then save:
```json
{
"mcpServers": {
"mysql": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","mysql","--stdio"],
"env": {
"MYSQL_HOST": "",
"MYSQL_PORT": "",
"MYSQL_DATABASE": "",
"MYSQL_USER": "",
"MYSQL_PASSWORD": ""
}
}
}
}
```
{{% /tab %}}
{{% tab header="Gemini Code Assist" lang="en" %}}
1. Install the [Gemini Code
Assist](https://marketplace.visualstudio.com/items?itemName=Google.geminicodeassist)
extension in Visual Studio Code.
1. Enable Agent Mode in Gemini Code Assist chat.
1. In your working directory, create a folder named `.gemini`. Within it,
create a `settings.json` file.
1. Add the following configuration, replace the environment variables with your
values, and then save:
```json
{
"mcpServers": {
"mysql": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","mysql","--stdio"],
"env": {
"MYSQL_HOST": "",
"MYSQL_PORT": "",
"MYSQL_DATABASE": "",
"MYSQL_USER": "",
"MYSQL_PASSWORD": ""
}
}
}
}
```
{{% /tab %}}
{{< /tabpane >}}
## Use Tools
Your AI tool is now connected to MySQL using MCP. Try asking your AI assistant
to list tables, create a table, or define and execute other SQL statements.
The following tools are available to the LLM:
1. **list_tables**: lists tables and descriptions
1. **execute_sql**: executes any SQL statement
{{< notice note >}}
Prebuilt tools are pre-1.0, so expect some tool changes between versions. LLMs
will adapt to the tools available, so this shouldn't affect most users.
{{< /notice >}}
# Neo4j using MCP
Connect your IDE to Neo4j using Toolbox.
[Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction) is
an open protocol for connecting Large Language Models (LLMs) to data sources
like Neo4j. This guide covers how to use [MCP Toolbox for Databases][toolbox] to
expose your developer assistant tools to a Neo4j instance:
* [Cursor][cursor]
* [Windsurf][windsurf] (Codium)
* [Visual Studio Code][vscode] (Copilot)
* [Cline][cline] (VS Code extension)
* [Claude desktop][claudedesktop]
* [Claude code][claudecode]
* [Gemini CLI][geminicli]
* [Gemini Code Assist][geminicodeassist]
[toolbox]: https://github.com/googleapis/genai-toolbox
[cursor]: #configure-your-mcp-client
[windsurf]: #configure-your-mcp-client
[vscode]: #configure-your-mcp-client
[cline]: #configure-your-mcp-client
[claudedesktop]: #configure-your-mcp-client
[claudecode]: #configure-your-mcp-client
[geminicli]: #configure-your-mcp-client
[geminicodeassist]: #configure-your-mcp-client
## Set up the database
1. [Create or select a Neo4j
instance.](https://neo4j.com/cloud/platform/aura-graph-database/)
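`NEO4J_URI` in the configurations below uses the standard Neo4j connection
schemes. The endpoints here are placeholders, not real instances:
```bash
# Illustrative NEO4J_URI values (placeholders):
NEO4J_URI="bolt://localhost:7687"                  # local instance, unencrypted
NEO4J_URI="neo4j+s://xxxxxxxx.databases.neo4j.io"  # Aura, TLS-encrypted
```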
## Install MCP Toolbox
1. Download the latest version of Toolbox as a binary. Select the [correct
binary](https://github.com/googleapis/genai-toolbox/releases) corresponding
to your OS and CPU architecture. You are required to use Toolbox version
v0.15.0+:
{{< tabpane persist=header >}}
{{< tab header="linux/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.21.0/linux/amd64/toolbox
{{< /tab >}}
{{< tab header="darwin/arm64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.21.0/darwin/arm64/toolbox
{{< /tab >}}
{{< tab header="darwin/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.21.0/darwin/amd64/toolbox
{{< /tab >}}
{{< tab header="windows/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.21.0/windows/amd64/toolbox.exe
{{< /tab >}}
{{< /tabpane >}}
1. Make the binary executable:
```bash
chmod +x toolbox
```
1. Verify the installation:
```bash
./toolbox --version
```
## Configure your MCP Client
{{< tabpane text=true >}}
{{% tab header="Claude code" lang="en" %}}
1. Install [Claude
Code](https://docs.anthropic.com/en/docs/agents-and-tools/claude-code/overview).
1. Create a `.mcp.json` file in your project root if it doesn't exist.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"neo4j": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","neo4j","--stdio"],
"env": {
"NEO4J_URI": "",
"NEO4J_DATABASE": "",
"NEO4J_USERNAME": "",
"NEO4J_PASSWORD": ""
}
}
}
}
```
1. Restart Claude code to apply the new configuration.
{{% /tab %}}
{{% tab header="Claude desktop" lang="en" %}}
1. Open [Claude desktop](https://claude.ai/download) and navigate to Settings.
1. Under the Developer tab, tap Edit Config to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"neo4j": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","neo4j","--stdio"],
"env": {
"NEO4J_URI": "",
"NEO4J_DATABASE": "",
"NEO4J_USERNAME": "",
"NEO4J_PASSWORD": ""
}
}
}
}
```
1. Restart Claude desktop.
1. From the new chat screen, you should see a hammer (MCP) icon appear with the
new MCP server available.
{{% /tab %}}
{{% tab header="Cline" lang="en" %}}
1. Open the [Cline](https://github.com/cline/cline) extension in VS Code and
tap the **MCP Servers** icon.
1. Tap Configure MCP Servers to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"neo4j": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","neo4j","--stdio"],
"env": {
"NEO4J_URI": "",
"NEO4J_DATABASE": "",
"NEO4J_USERNAME": "",
"NEO4J_PASSWORD": ""
}
}
}
}
```
1. You should see a green active status after the server is successfully
connected.
{{% /tab %}}
{{% tab header="Cursor" lang="en" %}}
1. Create a `.cursor` directory in your project root if it doesn't exist.
1. Create a `.cursor/mcp.json` file if it doesn't exist and open it.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"neo4j": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","neo4j","--stdio"],
"env": {
"NEO4J_URI": "",
"NEO4J_DATABASE": "",
"NEO4J_USERNAME": "",
"NEO4J_PASSWORD": ""
}
}
}
}
```
1. Open [Cursor](https://www.cursor.com/) and navigate to **Settings > Cursor
Settings > MCP**. You should see a green active status after the server is
successfully connected.
{{% /tab %}}
{{% tab header="Visual Studio Code (Copilot)" lang="en" %}}
1. Open [VS Code](https://code.visualstudio.com/docs/copilot/overview) and
create a `.vscode` directory in your project root if it doesn't exist.
1. Create a `.vscode/mcp.json` file if it doesn't exist and open it.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcp" : {
"servers": {
"neo4j": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","neo4j","--stdio"],
"env": {
"NEO4J_URI": "",
"NEO4J_DATABASE": "",
"NEO4J_USERNAME": "",
"NEO4J_PASSWORD": ""
}
}
}
}
}
```
{{% /tab %}}
{{% tab header="Windsurf" lang="en" %}}
1. Open [Windsurf](https://docs.codeium.com/windsurf) and navigate to the
Cascade assistant.
1. Tap on the hammer (MCP) icon, then Configure to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"neo4j": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","neo4j","--stdio"],
"env": {
"NEO4J_URI": "",
"NEO4J_DATABASE": "",
"NEO4J_USERNAME": "",
"NEO4J_PASSWORD": ""
}
}
}
}
```
{{% /tab %}}
{{% tab header="Gemini CLI" lang="en" %}}
1. Install the [Gemini
CLI](https://github.com/google-gemini/gemini-cli?tab=readme-ov-file#quickstart).
1. In your working directory, create a folder named `.gemini`. Within it,
create a `settings.json` file.
1. Add the following configuration, replace the environment variables with your
values, and then save:
```json
{
"mcpServers": {
"neo4j": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","neo4j","--stdio"],
"env": {
"NEO4J_URI": "",
"NEO4J_DATABASE": "",
"NEO4J_USERNAME": "",
"NEO4J_PASSWORD": ""
}
}
}
}
```
{{% /tab %}}
{{% tab header="Gemini Code Assist" lang="en" %}}
1. Install the [Gemini Code
Assist](https://marketplace.visualstudio.com/items?itemName=Google.geminicodeassist)
extension in Visual Studio Code.
1. Enable Agent Mode in Gemini Code Assist chat.
1. In your working directory, create a folder named `.gemini`. Within it,
create a `settings.json` file.
1. Add the following configuration, replace the environment variables with your
values, and then save:
```json
{
"mcpServers": {
"neo4j": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","neo4j","--stdio"],
"env": {
"NEO4J_URI": "",
"NEO4J_DATABASE": "",
"NEO4J_USERNAME": "",
"NEO4J_PASSWORD": ""
}
}
}
}
```
{{% /tab %}}
{{< /tabpane >}}
## Use Tools
Your AI tool is now connected to Neo4j using MCP. Try asking your AI assistant
to get the graph schema or execute Cypher statements.
The following tools are available to the LLM:
1. **get_schema**: extracts the complete database schema, including details
about node labels, relationships, properties, constraints, and indexes.
1. **execute_cypher**: executes any arbitrary Cypher statement.
{{< notice note >}}
Prebuilt tools are pre-1.0, so expect some tool changes between versions. LLMs
will adapt to the tools available, so this shouldn't affect most users.
{{< /notice >}}
# PostgreSQL using MCP
Connect your IDE to PostgreSQL using Toolbox.
[Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction) is
an open protocol for connecting Large Language Models (LLMs) to data sources
like Postgres. This guide covers how to use [MCP Toolbox for Databases][toolbox]
to expose your developer assistant tools to a Postgres instance:
* [Cursor][cursor]
* [Windsurf][windsurf] (Codium)
* [Visual Studio Code][vscode] (Copilot)
* [Cline][cline] (VS Code extension)
* [Claude desktop][claudedesktop]
* [Claude code][claudecode]
* [Gemini CLI][geminicli]
* [Gemini Code Assist][geminicodeassist]
[toolbox]: https://github.com/googleapis/genai-toolbox
[cursor]: #configure-your-mcp-client
[windsurf]: #configure-your-mcp-client
[vscode]: #configure-your-mcp-client
[cline]: #configure-your-mcp-client
[claudedesktop]: #configure-your-mcp-client
[claudecode]: #configure-your-mcp-client
[geminicli]: #configure-your-mcp-client
[geminicodeassist]: #configure-your-mcp-client
{{< notice tip >}}
This guide can be used with [AlloyDB
Omni](https://cloud.google.com/alloydb/omni/current/docs/overview).
{{< /notice >}}
## Set up the database
1. Create or select a PostgreSQL instance.
* [Install PostgreSQL locally](https://www.postgresql.org/download/)
* [Install AlloyDB Omni](https://cloud.google.com/alloydb/omni/current/docs/quickstart)
1. Create or reuse [a database
   user](https://cloud.google.com/alloydb/omni/current/docs/database-users/manage-users)
   and have the username and password ready; a sketch of creating one with
   `psql` follows.
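A minimal sketch of creating such a user with `psql`, assuming a local
instance; the role name, password, and database are placeholders, so adapt
the grants to your needs:
```bash
# Run as a superuser; names and password below are placeholders.
psql -h 127.0.0.1 -U postgres -c "CREATE ROLE toolbox_user WITH LOGIN PASSWORD 'change-me';"
psql -h 127.0.0.1 -U postgres -c "GRANT ALL PRIVILEGES ON DATABASE mydb TO toolbox_user;"
```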
## Install MCP Toolbox
1. Download the latest version of Toolbox as a binary. Select the [correct
binary](https://github.com/googleapis/genai-toolbox/releases) corresponding
to your OS and CPU architecture. You are required to use Toolbox version
v0.6.0+:
{{< tabpane persist=header >}}
{{< tab header="linux/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.21.0/linux/amd64/toolbox
{{< /tab >}}
{{< tab header="darwin/arm64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.21.0/darwin/arm64/toolbox
{{< /tab >}}
{{< tab header="darwin/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.21.0/darwin/amd64/toolbox
{{< /tab >}}
{{< tab header="windows/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.21.0/windows/amd64/toolbox.exe
{{< /tab >}}
{{< /tabpane >}}
1. Make the binary executable:
```bash
chmod +x toolbox
```
1. Verify the installation:
```bash
./toolbox --version
```
## Configure your MCP Client
{{< tabpane text=true >}}
{{% tab header="Claude code" lang="en" %}}
1. Install [Claude
Code](https://docs.anthropic.com/en/docs/agents-and-tools/claude-code/overview).
1. Create a `.mcp.json` file in your project root if it doesn't exist.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"postgres": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","postgres","--stdio"],
"env": {
"POSTGRES_HOST": "",
"POSTGRES_PORT": "",
"POSTGRES_DATABASE": "",
"POSTGRES_USER": "",
"POSTGRES_PASSWORD": ""
}
}
}
}
```
1. Restart Claude code to apply the new configuration.
{{% /tab %}}
{{% tab header="Claude desktop" lang="en" %}}
1. Open [Claude desktop](https://claude.ai/download) and navigate to Settings.
1. Under the Developer tab, tap Edit Config to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"postgres": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","postgres","--stdio"],
"env": {
"POSTGRES_HOST": "",
"POSTGRES_PORT": "",
"POSTGRES_DATABASE": "",
"POSTGRES_USER": "",
"POSTGRES_PASSWORD": ""
}
}
}
}
```
1. Restart Claude desktop.
1. From the new chat screen, you should see a hammer (MCP) icon appear with the
new MCP server available.
{{% /tab %}}
{{% tab header="Cline" lang="en" %}}
1. Open the [Cline](https://github.com/cline/cline) extension in VS Code and tap
the **MCP Servers** icon.
1. Tap Configure MCP Servers to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"postgres": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","postgres","--stdio"],
"env": {
"POSTGRES_HOST": "",
"POSTGRES_PORT": "",
"POSTGRES_DATABASE": "",
"POSTGRES_USER": "",
"POSTGRES_PASSWORD": ""
}
}
}
}
```
1. You should see a green active status after the server is successfully
connected.
{{% /tab %}}
{{% tab header="Cursor" lang="en" %}}
1. Create a `.cursor` directory in your project root if it doesn't exist.
1. Create a `.cursor/mcp.json` file if it doesn't exist and open it.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"postgres": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","postgres","--stdio"],
"env": {
"POSTGRES_HOST": "",
"POSTGRES_PORT": "",
"POSTGRES_DATABASE": "",
"POSTGRES_USER": "",
"POSTGRES_PASSWORD": ""
}
}
}
}
```
1. Open [Cursor](https://www.cursor.com/) and navigate to **Settings > Cursor
Settings > MCP**. You should see a green active status after the server is
successfully connected.
{{% /tab %}}
{{% tab header="Visual Studio Code (Copilot)" lang="en" %}}
1. Open [VS Code](https://code.visualstudio.com/docs/copilot/overview) and
create a `.vscode` directory in your project root if it doesn't exist.
1. Create a `.vscode/mcp.json` file if it doesn't exist and open it.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"servers": {
"postgres": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","postgres","--stdio"],
"env": {
"POSTGRES_HOST": "",
"POSTGRES_PORT": "",
"POSTGRES_DATABASE": "",
"POSTGRES_USER": "",
"POSTGRES_PASSWORD": ""
}
}
}
}
```
{{% /tab %}}
{{% tab header="Windsurf" lang="en" %}}
1. Open [Windsurf](https://docs.codeium.com/windsurf) and navigate to the
Cascade assistant.
1. Tap on the hammer (MCP) icon, then Configure to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"postgres": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","postgres","--stdio"],
"env": {
"POSTGRES_HOST": "",
"POSTGRES_PORT": "",
"POSTGRES_DATABASE": "",
"POSTGRES_USER": "",
"POSTGRES_PASSWORD": ""
}
}
}
}
```
{{% /tab %}}
{{% tab header="Gemini CLI" lang="en" %}}
1. Install the [Gemini CLI](https://github.com/google-gemini/gemini-cli?tab=readme-ov-file#quickstart).
1. In your working directory, create a folder named `.gemini`. Within it, create a `settings.json` file.
1. Add the following configuration, replace the environment variables with your values, and then save:
```json
{
"mcpServers": {
"postgres": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","postgres","--stdio"],
"env": {
"POSTGRES_HOST": "",
"POSTGRES_PORT": "",
"POSTGRES_DATABASE": "",
"POSTGRES_USER": "",
"POSTGRES_PASSWORD": ""
}
}
}
}
```
{{% /tab %}}
{{% tab header="Gemini Code Assist" lang="en" %}}
1. Install the [Gemini Code Assist](https://marketplace.visualstudio.com/items?itemName=Google.geminicodeassist) extension in Visual Studio Code.
1. Enable Agent Mode in Gemini Code Assist chat.
1. In your working directory, create a folder named `.gemini`. Within it, create a `settings.json` file.
1. Add the following configuration, replace the environment variables with your values, and then save:
```json
{
"mcpServers": {
"postgres": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","postgres","--stdio"],
"env": {
"POSTGRES_HOST": "",
"POSTGRES_PORT": "",
"POSTGRES_DATABASE": "",
"POSTGRES_USER": "",
"POSTGRES_PASSWORD": ""
}
}
}
}
```
{{% /tab %}}
{{< /tabpane >}}
## Use Tools
Your AI tool is now connected to Postgres using MCP. Try asking your AI
assistant to list tables, create a table, or define and execute other SQL
statements.
The following tools are available to the LLM:
1. **list_tables**: lists tables and descriptions
1. **execute_sql**: executes any SQL statement
{{< notice note >}}
Prebuilt tools are pre-1.0, so expect some tool changes between versions. LLMs
will adapt to the tools available, so this shouldn't affect most users.
{{< /notice >}}
# Spanner using MCP
Connect your IDE to Spanner using Toolbox.
# SQL Server using MCP
Connect your IDE to SQL Server using Toolbox.
[Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction) is
an open protocol for connecting Large Language Models (LLMs) to data sources
like SQL Server. This guide covers how to use [MCP Toolbox for
Databases][toolbox] to expose your developer assistant tools to a SQL Server
instance:
* [Cursor][cursor]
* [Windsurf][windsurf] (Codium)
* [Visual Studio Code][vscode] (Copilot)
* [Cline][cline] (VS Code extension)
* [Claude desktop][claudedesktop]
* [Claude code][claudecode]
* [Gemini CLI][geminicli]
* [Gemini Code Assist][geminicodeassist]
[toolbox]: https://github.com/googleapis/genai-toolbox
[cursor]: #configure-your-mcp-client
[windsurf]: #configure-your-mcp-client
[vscode]: #configure-your-mcp-client
[cline]: #configure-your-mcp-client
[claudedesktop]: #configure-your-mcp-client
[claudecode]: #configure-your-mcp-client
[geminicli]: #configure-your-mcp-client
[geminicodeassist]: #configure-your-mcp-client
## Set up the database
1. [Create or select a SQL Server
instance.](https://www.microsoft.com/en-us/sql-server/sql-server-downloads)
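Before wiring the connection details into an MCP client, you can optionally
verify them with `sqlcmd` if it is installed; the host, user, and password
below are placeholders:
```bash
# Quick connectivity check (placeholder credentials).
sqlcmd -S 127.0.0.1,1433 -U sa -P 'change-me' -d master -Q "SELECT @@VERSION;"
```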
## Install MCP Toolbox
1. Download the latest version of Toolbox as a binary. Select the [correct
binary](https://github.com/googleapis/genai-toolbox/releases) corresponding
to your OS and CPU architecture. You are required to use Toolbox version
v0.10.0+:
{{< tabpane persist=header >}}
{{< tab header="linux/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.21.0/linux/amd64/toolbox
{{< /tab >}}
{{< tab header="darwin/arm64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.21.0/darwin/arm64/toolbox
{{< /tab >}}
{{< tab header="darwin/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.21.0/darwin/amd64/toolbox
{{< /tab >}}
{{< tab header="windows/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.21.0/windows/amd64/toolbox.exe
{{< /tab >}}
{{< /tabpane >}}
1. Make the binary executable:
```bash
chmod +x toolbox
```
1. Verify the installation:
```bash
./toolbox --version
```
## Configure your MCP Client
{{< tabpane text=true >}}
{{% tab header="Claude code" lang="en" %}}
1. Install [Claude
Code](https://docs.anthropic.com/en/docs/agents-and-tools/claude-code/overview).
1. Create a `.mcp.json` file in your project root if it doesn't exist.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"sqlserver": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","mssql","--stdio"],
"env": {
"MSSQL_HOST": "",
"MSSQL_PORT": "",
"MSSQL_DATABASE": "",
"MSSQL_USER": "",
"MSSQL_PASSWORD": ""
}
}
}
}
```
1. Restart Claude code to apply the new configuration.
{{% /tab %}}
{{% tab header="Claude desktop" lang="en" %}}
1. Open [Claude desktop](https://claude.ai/download) and navigate to Settings.
1. Under the Developer tab, tap Edit Config to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"sqlserver": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","mssql","--stdio"],
"env": {
"MSSQL_HOST": "",
"MSSQL_PORT": "",
"MSSQL_DATABASE": "",
"MSSQL_USER": "",
"MSSQL_PASSWORD": ""
}
}
}
}
```
1. Restart Claude desktop.
1. From the new chat screen, you should see a hammer (MCP) icon appear with the
new MCP server available.
{{% /tab %}}
{{% tab header="Cline" lang="en" %}}
1. Open the [Cline](https://github.com/cline/cline) extension in VS Code and
tap the **MCP Servers** icon.
1. Tap Configure MCP Servers to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"sqlserver": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","mssql","--stdio"],
"env": {
"MSSQL_HOST": "",
"MSSQL_PORT": "",
"MSSQL_DATABASE": "",
"MSSQL_USER": "",
"MSSQL_PASSWORD": ""
}
}
}
}
```
1. You should see a green active status after the server is successfully
connected.
{{% /tab %}}
{{% tab header="Cursor" lang="en" %}}
1. Create a `.cursor` directory in your project root if it doesn't exist.
1. Create a `.cursor/mcp.json` file if it doesn't exist and open it.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"sqlserver": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","mssql","--stdio"],
"env": {
"MSSQL_HOST": "",
"MSSQL_PORT": "",
"MSSQL_DATABASE": "",
"MSSQL_USER": "",
"MSSQL_PASSWORD": ""
}
}
}
}
```
1. Open [Cursor](https://www.cursor.com/) and navigate to **Settings > Cursor
Settings > MCP**. You should see a green active status after the server is
successfully connected.
{{% /tab %}}
{{% tab header="Visual Studio Code (Copilot)" lang="en" %}}
1. Open [VS Code](https://code.visualstudio.com/docs/copilot/overview) and
create a `.vscode` directory in your project root if it doesn't exist.
1. Create a `.vscode/mcp.json` file if it doesn't exist and open it.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"servers": {
"mssql": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","mssql","--stdio"],
"env": {
"MSSQL_HOST": "",
"MSSQL_PORT": "",
"MSSQL_DATABASE": "",
"MSSQL_USER": "",
"MSSQL_PASSWORD": ""
}
}
}
}
```
{{% /tab %}}
{{% tab header="Windsurf" lang="en" %}}
1. Open [Windsurf](https://docs.codeium.com/windsurf) and navigate to the
Cascade assistant.
1. Tap on the hammer (MCP) icon, then Configure to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"sqlserver": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","mssql","--stdio"],
"env": {
"MSSQL_HOST": "",
"MSSQL_PORT": "",
"MSSQL_DATABASE": "",
"MSSQL_USER": "",
"MSSQL_PASSWORD": ""
}
}
}
}
```
{{% /tab %}}
{{% tab header="Gemini CLI" lang="en" %}}
1. Install the [Gemini
CLI](https://github.com/google-gemini/gemini-cli?tab=readme-ov-file#quickstart).
1. In your working directory, create a folder named `.gemini`. Within it,
create a `settings.json` file.
1. Add the following configuration, replace the environment variables with your
values, and then save:
```json
{
"mcpServers": {
"sqlserver": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","mssql","--stdio"],
"env": {
"MSSQL_HOST": "",
"MSSQL_PORT": "",
"MSSQL_DATABASE": "",
"MSSQL_USER": "",
"MSSQL_PASSWORD": ""
}
}
}
}
```
{{% /tab %}}
{{% tab header="Gemini Code Assist" lang="en" %}}
1. Install the [Gemini Code
Assist](https://marketplace.visualstudio.com/items?itemName=Google.geminicodeassist)
extension in Visual Studio Code.
1. Enable Agent Mode in Gemini Code Assist chat.
1. In your working directory, create a folder named `.gemini`. Within it,
create a `settings.json` file.
1. Add the following configuration, replace the environment variables with your
values, and then save:
```json
{
"mcpServers": {
"sqlserver": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","mssql","--stdio"],
"env": {
"MSSQL_HOST": "",
"MSSQL_PORT": "",
"MSSQL_DATABASE": "",
"MSSQL_USER": "",
"MSSQL_PASSWORD": ""
}
}
}
}
```
{{% /tab %}}
{{< /tabpane >}}
## Use Tools
Your AI tool is now connected to SQL Server using MCP. Try asking your AI
assistant to list tables, create a table, or define and execute other SQL
statements.
The following tools are available to the LLM:
1. **list_tables**: lists tables and descriptions
1. **execute_sql**: executes any SQL statement
{{< notice note >}}
Prebuilt tools are pre-1.0, so expect some tool changes between versions. LLMs
will adapt to the tools available, so this shouldn't affect most users.
{{< /notice >}}
# SQLite using MCP
Connect your IDE to SQLite using Toolbox.
[Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction) is
an open protocol for connecting Large Language Models (LLMs) to data sources
like SQLite. This guide covers how to use [MCP Toolbox for Databases][toolbox]
to expose your developer assistant tools to a SQLite instance:
* [Cursor][cursor]
* [Windsurf][windsurf] (Codium)
* [Visual Studio Code][vscode] (Copilot)
* [Cline][cline] (VS Code extension)
* [Claude desktop][claudedesktop]
* [Claude code][claudecode]
* [Gemini CLI][geminicli]
* [Gemini Code Assist][geminicodeassist]
[toolbox]: https://github.com/googleapis/genai-toolbox
[cursor]: #configure-your-mcp-client
[windsurf]: #configure-your-mcp-client
[vscode]: #configure-your-mcp-client
[cline]: #configure-your-mcp-client
[claudedesktop]: #configure-your-mcp-client
[claudecode]: #configure-your-mcp-client
[geminicli]: #configure-your-mcp-client
[geminicodeassist]: #configure-your-mcp-client
## Set up the database
1. [Create or select a SQLite database file.](https://www.sqlite.org/download.html)
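If you don't have a database file yet, the `sqlite3` CLI can create the
`./sample.db` file used in the configurations below; the schema and row here
are purely illustrative:
```bash
# Create a sample database with one illustrative table and row.
sqlite3 sample.db "CREATE TABLE hotels (id INTEGER PRIMARY KEY, name TEXT, price_tier TEXT);"
sqlite3 sample.db "INSERT INTO hotels (name, price_tier) VALUES ('Hotel Alpha', 'Luxury');"
```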
## Install MCP Toolbox
1. Download the latest version of Toolbox as a binary. Select the [correct
binary](https://github.com/googleapis/genai-toolbox/releases) corresponding
to your OS and CPU architecture. You are required to use Toolbox version
v0.10.0+:
{{< tabpane persist=header >}}
{{< tab header="linux/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.21.0/linux/amd64/toolbox
{{< /tab >}}
{{< tab header="darwin/arm64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.21.0/darwin/arm64/toolbox
{{< /tab >}}
{{< tab header="darwin/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.21.0/darwin/amd64/toolbox
{{< /tab >}}
{{< tab header="windows/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.21.0/windows/amd64/toolbox.exe
{{< /tab >}}
{{< /tabpane >}}
1. Make the binary executable:
```bash
chmod +x toolbox
```
1. Verify the installation:
```bash
./toolbox --version
```
## Configure your MCP Client
{{< tabpane text=true >}}
{{% tab header="Claude code" lang="en" %}}
1. Install [Claude
Code](https://docs.anthropic.com/en/docs/agents-and-tools/claude-code/overview).
1. Create a `.mcp.json` file in your project root if it doesn't exist.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"sqlite": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt", "sqlite", "--stdio"],
"env": {
"SQLITE_DATABASE": "./sample.db"
}
}
}
}
```
1. Restart Claude code to apply the new configuration.
{{% /tab %}}
{{% tab header="Claude desktop" lang="en" %}}
1. Open [Claude desktop](https://claude.ai/download) and navigate to Settings.
1. Under the Developer tab, tap Edit Config to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"sqlite": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt", "sqlite", "--stdio"],
"env": {
"SQLITE_DATABASE": "./sample.db"
}
}
}
}
```
1. Restart Claude desktop.
1. From the new chat screen, you should see a hammer (MCP) icon appear with the
new MCP server available.
{{% /tab %}}
{{% tab header="Cline" lang="en" %}}
1. Open the [Cline](https://github.com/cline/cline) extension in VS Code and
tap the **MCP Servers** icon.
1. Tap Configure MCP Servers to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"sqlite": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt", "sqlite", "--stdio"],
"env": {
"SQLITE_DATABASE": "./sample.db"
}
}
}
}
```
1. You should see a green active status after the server is successfully
connected.
{{% /tab %}}
{{% tab header="Cursor" lang="en" %}}
1. Create a `.cursor` directory in your project root if it doesn't exist.
1. Create a `.cursor/mcp.json` file if it doesn't exist and open it.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"sqlite": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt", "sqlite", "--stdio"],
"env": {
"SQLITE_DATABASE": "./sample.db"
}
}
}
}
```
1. Open [Cursor](https://www.cursor.com/) and navigate to **Settings > Cursor
Settings > MCP**. You should see a green active status after the server is
successfully connected.
{{% /tab %}}
{{% tab header="Visual Studio Code (Copilot)" lang="en" %}}
1. Open [VS Code](https://code.visualstudio.com/docs/copilot/overview) and
create a `.vscode` directory in your project root if it doesn't exist.
1. Create a `.vscode/mcp.json` file if it doesn't exist and open it.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"servers": {
"sqlite": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","sqlite","--stdio"],
"env": {
"SQLITE_DATABASE": "./sample.db"
}
}
}
}
```
{{% /tab %}}
{{% tab header="Windsurf" lang="en" %}}
1. Open [Windsurf](https://docs.codeium.com/windsurf) and navigate to the
Cascade assistant.
1. Tap on the hammer (MCP) icon, then Configure to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"sqlite": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","sqlite","--stdio"],
"env": {
"SQLITE_DATABASE": "./sample.db"
}
}
}
}
```
{{% /tab %}}
{{% tab header="Gemini CLI" lang="en" %}}
1. Install the [Gemini
CLI](https://github.com/google-gemini/gemini-cli?tab=readme-ov-file#quickstart).
1. In your working directory, create a folder named `.gemini`. Within it,
create a `settings.json` file.
1. Add the following configuration, replace the environment variables with your
values, and then save:
```json
{
"mcpServers": {
"sqlite": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","sqlite","--stdio"],
"env": {
"SQLITE_DATABASE": "./sample.db"
}
}
}
}
```
{{% /tab %}}
{{% tab header="Gemini Code Assist" lang="en" %}}
1. Install the [Gemini Code
Assist](https://marketplace.visualstudio.com/items?itemName=Google.geminicodeassist)
extension in Visual Studio Code.
1. Enable Agent Mode in Gemini Code Assist chat.
1. In your working directory, create a folder named `.gemini`. Within it,
create a `settings.json` file.
1. Add the following configuration, replace the environment variables with your
values, and then save:
```json
{
"mcpServers": {
"sqlite": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","sqlite","--stdio"],
"env": {
"SQLITE_DATABASE": "./sample.db"
}
}
}
}
```
{{% /tab %}}
{{< /tabpane >}}
## Use Tools
Your AI tool is now connected to SQLite using MCP. Try asking your AI assistant
to list tables, create a table, or define and execute other SQL statements.
The following tools are available to the LLM:
1. **list_tables**: lists tables and descriptions
1. **execute_sql**: executes any SQL statement
{{< notice note >}}
Prebuilt tools are pre-1.0, so expect some tool changes between versions. LLMs
will adapt to the tools available, so this shouldn't affect most users.
{{< /notice >}}
# Cloud SQL for PostgreSQL Admin using MCP
Create and manage Cloud SQL for PostgreSQL (Admin) using Toolbox.
This guide covers how to use [MCP Toolbox for Databases][toolbox] to expose your
developer assistant tools to create and manage Cloud SQL for PostgreSQL
instances, databases, and users:
* [Cursor][cursor]
* [Windsurf][windsurf] (Codium)
* [Visual Studio Code][vscode] (Copilot)
* [Cline][cline] (VS Code extension)
* [Claude desktop][claudedesktop]
* [Claude code][claudecode]
* [Gemini CLI][geminicli]
* [Gemini Code Assist][geminicodeassist]
[toolbox]: https://github.com/googleapis/genai-toolbox
[cursor]: #configure-your-mcp-client
[windsurf]: #configure-your-mcp-client
[vscode]: #configure-your-mcp-client
[cline]: #configure-your-mcp-client
[claudedesktop]: #configure-your-mcp-client
[claudecode]: #configure-your-mcp-client
[geminicli]: #configure-your-mcp-client
[geminicodeassist]: #configure-your-mcp-client
## Before you begin
1. In the Google Cloud console, on the [project selector
page](https://console.cloud.google.com/projectselector2/home/dashboard),
select or create a Google Cloud project.
1. [Make sure that billing is enabled for your Google Cloud
project](https://cloud.google.com/billing/docs/how-to/verify-billing-enabled#confirm_billing_is_enabled_on_a_project).
1. Grant the necessary IAM roles to the user that will be running the MCP
server. The tools available will depend on the roles granted:
* `roles/cloudsql.viewer`: Provides read-only access to resources.
* `get_instance`
* `list_instances`
* `list_databases`
* `wait_for_operation`
* `roles/cloudsql.editor`: Provides permissions to manage existing resources.
* All `viewer` tools
* `create_database`
* `roles/cloudsql.admin`: Provides full control over all resources.
* All `editor` and `viewer` tools
* `create_instance`
* `create_user`
* `clone_instance`
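The admin server calls the Cloud SQL Admin API using your local Google Cloud
credentials. If you haven't configured credentials yet, Application Default
Credentials are one common approach; this sketch assumes the gcloud CLI is
installed and `my-project-id` is a placeholder:
```bash
# Authenticate locally and set a default project (placeholder project ID).
gcloud auth application-default login
gcloud config set project my-project-id
```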
## Install MCP Toolbox
1. Download the latest version of Toolbox as a binary. Select the [correct
binary](https://github.com/googleapis/genai-toolbox/releases) corresponding
to your OS and CPU architecture. You are required to use Toolbox version
v0.15.0+:
{{< tabpane persist=header >}}
{{< tab header="linux/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/linux/amd64/toolbox
{{< /tab >}}
{{< tab header="darwin/arm64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/darwin/arm64/toolbox
{{< /tab >}}
{{< tab header="darwin/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/darwin/amd64/toolbox
{{< /tab >}}
{{< tab header="windows/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/windows/amd64/toolbox.exe
{{< /tab >}}
{{< /tabpane >}}
1. Make the binary executable:
```bash
chmod +x toolbox
```
1. Verify the installation:
```bash
./toolbox --version
```
## Configure your MCP Client
{{< tabpane text=true >}}
{{% tab header="Claude code" lang="en" %}}
1. Install [Claude
Code](https://docs.anthropic.com/en/docs/agents-and-tools/claude-code/overview).
1. Create a `.mcp.json` file in your project root if it doesn't exist.
1. Add the following configuration and save:
```json
{
"mcpServers": {
"cloud-sql-postgres-admin": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","cloud-sql-postgres-admin","--stdio"],
"env": {
}
}
}
}
```
1. Restart Claude code to apply the new configuration.
{{% /tab %}}
{{% tab header="Claude desktop" lang="en" %}}
1. Open [Claude desktop](https://claude.ai/download) and navigate to Settings.
1. Under the Developer tab, tap Edit Config to open the configuration file.
1. Add the following configuration and save:
```json
{
"mcpServers": {
"cloud-sql-postgres-admin": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","cloud-sql-postgres-admin","--stdio"],
"env": {
}
}
}
}
```
1. Restart Claude desktop.
1. From the new chat screen, you should see a hammer (MCP) icon appear with the
new MCP server available.
{{% /tab %}}
{{% tab header="Cline" lang="en" %}}
1. Open the [Cline](https://github.com/cline/cline) extension in VS Code and tap
the **MCP Servers** icon.
1. Tap Configure MCP Servers to open the configuration file.
1. Add the following configuration and save:
```json
{
"mcpServers": {
"cloud-sql-postgres-admin": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","cloud-sql-postgres-admin","--stdio"],
"env": {
}
}
}
}
```
1. You should see a green active status after the server is successfully
connected.
{{% /tab %}}
{{% tab header="Cursor" lang="en" %}}
1. Create a `.cursor` directory in your project root if it doesn't exist.
1. Create a `.cursor/mcp.json` file if it doesn't exist and open it.
1. Add the following configuration and save:
```json
{
"mcpServers": {
"cloud-sql-postgres-admin": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","cloud-sql-postgres-admin","--stdio"],
"env": {
}
}
}
}
```
1. Open [Cursor](https://www.cursor.com/) and navigate to **Settings > Cursor
Settings > MCP**. You should see a green active status after the server is
successfully connected.
{{% /tab %}}
{{% tab header="Visual Studio Code (Copilot)" lang="en" %}}
1. Open [VS Code](https://code.visualstudio.com/docs/copilot/overview) and
create a `.vscode` directory in your project root if it doesn't exist.
1. Create a `.vscode/mcp.json` file if it doesn't exist and open it.
1. Add the following configuration and save:
```json
{
"servers": {
"cloud-sql-postgres-admin": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","cloud-sql-postgres-admin","--stdio"],
"env": {
}
}
}
}
```
{{% /tab %}}
{{% tab header="Windsurf" lang="en" %}}
1. Open [Windsurf](https://docs.codeium.com/windsurf) and navigate to the
Cascade assistant.
1. Tap on the hammer (MCP) icon, then Configure to open the configuration file.
1. Add the following configuration and save:
```json
{
"mcpServers": {
"cloud-sql-postgres-admin": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","cloud-sql-postgres-admin","--stdio"],
"env": {
}
}
}
}
```
{{% /tab %}}
{{% tab header="Gemini CLI" lang="en" %}}
1. Install the [Gemini
CLI](https://github.com/google-gemini/gemini-cli?tab=readme-ov-file#quickstart).
1. In your working directory, create a folder named `.gemini`. Within it,
create a `settings.json` file.
1. Add the following configuration and save:
```json
{
"mcpServers": {
"cloud-sql-postgres-admin": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","cloud-sql-postgres-admin","--stdio"],
"env": {
}
}
}
}
```
{{% /tab %}}
{{% tab header="Gemini Code Assist" lang="en" %}}
1. Install the [Gemini Code
Assist](https://marketplace.visualstudio.com/items?itemName=Google.geminicodeassist)
extension in Visual Studio Code.
1. Enable Agent Mode in Gemini Code Assist chat.
1. In your working directory, create a folder named `.gemini`. Within it,
create a `settings.json` file.
1. Add the following configuration and save:
```json
{
"mcpServers": {
"cloud-sql-postgres-admin": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","cloud-sql-postgres-admin","--stdio"],
"env": {
}
}
}
}
```
{{% /tab %}}
{{< /tabpane >}}
## Use Tools
Your AI tool is now connected to Cloud SQL for PostgreSQL using MCP.
The `cloud-sql-postgres-admin` server provides tools for managing your Cloud SQL
instances and interacting with your database:
* **create_instance**: Creates a new Cloud SQL for PostgreSQL instance.
* **get_instance**: Gets information about a Cloud SQL instance.
* **list_instances**: Lists Cloud SQL instances in a project.
* **create_database**: Creates a new database in a Cloud SQL instance.
* **list_databases**: Lists all databases for a Cloud SQL instance.
* **create_user**: Creates a new user in a Cloud SQL instance.
* **wait_for_operation**: Waits for a Cloud SQL operation to complete.
* **clone_instance**: Creates a clone of an existing Cloud SQL for PostgreSQL instance.
{{< notice note >}}
Prebuilt tools are pre-1.0, so expect some tool changes between versions. LLMs
will adapt to the tools available, so this shouldn't affect most users.
{{< /notice >}}
# Cloud SQL for MySQL Admin using MCP
Create and manage Cloud SQL for MySQL (Admin) using Toolbox.
This guide covers how to use [MCP Toolbox for Databases][toolbox] to expose your
developer assistant tools to create and manage Cloud SQL for MySQL instances,
databases, and users:
* [Cursor][cursor]
* [Windsurf][windsurf] (Codium)
* [Visual Studio Code][vscode] (Copilot)
* [Cline][cline] (VS Code extension)
* [Claude desktop][claudedesktop]
* [Claude code][claudecode]
* [Gemini CLI][geminicli]
* [Gemini Code Assist][geminicodeassist]
[toolbox]: https://github.com/googleapis/genai-toolbox
[cursor]: #configure-your-mcp-client
[windsurf]: #configure-your-mcp-client
[vscode]: #configure-your-mcp-client
[cline]: #configure-your-mcp-client
[claudedesktop]: #configure-your-mcp-client
[claudecode]: #configure-your-mcp-client
[geminicli]: #configure-your-mcp-client
[geminicodeassist]: #configure-your-mcp-client
## Before you begin
1. In the Google Cloud console, on the [project selector
page](https://console.cloud.google.com/projectselector2/home/dashboard),
select or create a Google Cloud project.
1. [Make sure that billing is enabled for your Google Cloud
project](https://cloud.google.com/billing/docs/how-to/verify-billing-enabled#confirm_billing_is_enabled_on_a_project).
1. Grant the necessary IAM roles to the user that will be running the MCP
   server; a `gcloud` example follows this list. The tools available will
   depend on the roles granted:
* `roles/cloudsql.viewer`: Provides read-only access to resources.
* `get_instance`
* `list_instances`
* `list_databases`
* `wait_for_operation`
* `roles/cloudsql.editor`: Provides permissions to manage existing resources.
* All `viewer` tools
* `create_database`
* `roles/cloudsql.admin`: Provides full control over all resources.
* All `editor` and `viewer` tools
* `create_instance`
* `create_user`
* `clone_instance`
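If you need to grant one of these roles yourself, a typical `gcloud`
invocation looks like the following; the project ID and member email are
placeholders:
```bash
# Grant the Cloud SQL Admin role (placeholder project and member).
gcloud projects add-iam-policy-binding my-project-id \
  --member="user:developer@example.com" \
  --role="roles/cloudsql.admin"
```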
## Install MCP Toolbox
1. Download the latest version of Toolbox as a binary. Select the [correct
binary](https://github.com/googleapis/genai-toolbox/releases) corresponding
to your OS and CPU architecture. You are required to use Toolbox version
v0.15.0+:
{{< tabpane persist=header >}}
{{< tab header="linux/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/linux/amd64/toolbox
{{< /tab >}}
{{< tab header="darwin/arm64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/darwin/arm64/toolbox
{{< /tab >}}
{{< tab header="darwin/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/darwin/amd64/toolbox
{{< /tab >}}
{{< tab header="windows/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/windows/amd64/toolbox.exe
{{< /tab >}}
{{< /tabpane >}}
1. Make the binary executable:
```bash
chmod +x toolbox
```
1. Verify the installation:
```bash
./toolbox --version
```
## Configure your MCP Client
{{< tabpane text=true >}}
{{% tab header="Claude code" lang="en" %}}
1. Install [Claude
Code](https://docs.anthropic.com/en/docs/agents-and-tools/claude-code/overview).
1. Create a `.mcp.json` file in your project root if it doesn't exist.
1. Add the following configuration and save:
```json
{
"mcpServers": {
"cloud-sql-mysql-admin": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","cloud-sql-mysql-admin","--stdio"],
"env": {
}
}
}
}
```
1. Restart Claude code to apply the new configuration.
{{% /tab %}}
{{% tab header="Claude desktop" lang="en" %}}
1. Open [Claude desktop](https://claude.ai/download) and navigate to Settings.
1. Under the Developer tab, tap Edit Config to open the configuration file.
1. Add the following configuration and save:
```json
{
"mcpServers": {
"cloud-sql-mysql-admin": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","cloud-sql-mysql-admin","--stdio"],
"env": {
}
}
}
}
```
1. Restart Claude desktop.
1. From the new chat screen, you should see a hammer (MCP) icon appear with the
new MCP server available.
{{% /tab %}}
{{% tab header="Cline" lang="en" %}}
1. Open the [Cline](https://github.com/cline/cline) extension in VS Code and tap
the **MCP Servers** icon.
1. Tap Configure MCP Servers to open the configuration file.
1. Add the following configuration and save:
```json
{
"mcpServers": {
"cloud-sql-mysql-admin": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","cloud-sql-mysql-admin","--stdio"],
"env": {
}
}
}
}
```
1. You should see a green active status after the server is successfully
connected.
{{% /tab %}}
{{% tab header="Cursor" lang="en" %}}
1. Create a `.cursor` directory in your project root if it doesn't exist.
1. Create a `.cursor/mcp.json` file if it doesn't exist and open it.
1. Add the following configuration and save:
```json
{
"mcpServers": {
"cloud-sql-mysql-admin": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","cloud-sql-mysql-admin","--stdio"],
"env": {
}
}
}
}
```
1. Open [Cursor](https://www.cursor.com/) and navigate to **Settings > Cursor
Settings > MCP**. You should see a green active status after the server is
successfully connected.
{{% /tab %}}
{{% tab header="Visual Studio Code (Copilot)" lang="en" %}}
1. Open [VS Code](https://code.visualstudio.com/docs/copilot/overview) and
create a `.vscode` directory in your project root if it doesn't exist.
1. Create a `.vscode/mcp.json` file if it doesn't exist and open it.
1. Add the following configuration and save:
```json
{
"servers": {
"cloud-sql-mysql-admin": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","cloud-sql-mysql-admin","--stdio"],
"env": {
}
}
}
}
```
{{% /tab %}}
{{% tab header="Windsurf" lang="en" %}}
1. Open [Windsurf](https://docs.codeium.com/windsurf) and navigate to the
Cascade assistant.
1. Tap on the hammer (MCP) icon, then Configure to open the configuration file.
1. Add the following configuration and save:
```json
{
"mcpServers": {
"cloud-sql-mysql-admin": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","cloud-sql-mysql-admin","--stdio"],
"env": {
}
}
}
}
```
{{% /tab %}}
{{% tab header="Gemini CLI" lang="en" %}}
1. Install the [Gemini
CLI](https://github.com/google-gemini/gemini-cli?tab=readme-ov-file#quickstart).
1. In your working directory, create a folder named `.gemini`. Within it,
create a `settings.json` file.
1. Add the following configuration and save:
```json
{
"mcpServers": {
"cloud-sql-mysql-admin": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","cloud-sql-mysql-admin","--stdio"],
"env": {
}
}
}
}
```
{{% /tab %}}
{{% tab header="Gemini Code Assist" lang="en" %}}
1. Install the [Gemini Code
Assist](https://marketplace.visualstudio.com/items?itemName=Google.geminicodeassist)
extension in Visual Studio Code.
1. Enable Agent Mode in Gemini Code Assist chat.
1. In your working directory, create a folder named `.gemini`. Within it,
create a `settings.json` file.
1. Add the following configuration and save:
```json
{
"mcpServers": {
"cloud-sql-mysql-admin": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","cloud-sql-mysql-admin","--stdio"],
"env": {
}
}
}
}
```
{{% /tab %}}
{{< /tabpane >}}
## Use Tools
Your AI tool is now connected to Cloud SQL for MySQL using MCP.
The `cloud-sql-mysql-admin` server provides tools for managing your Cloud SQL
instances and interacting with your database:
* **create_instance**: Creates a new Cloud SQL for MySQL instance.
* **get_instance**: Gets information about a Cloud SQL instance.
* **list_instances**: Lists Cloud SQL instances in a project.
* **create_database**: Creates a new database in a Cloud SQL instance.
* **list_databases**: Lists all databases for a Cloud SQL instance.
* **create_user**: Creates a new user in a Cloud SQL instance.
* **wait_for_operation**: Waits for a Cloud SQL operation to complete.
* **clone_instance**: Creates a clone of an existing Cloud SQL for MySQL instance.
{{< notice note >}}
Prebuilt tools are pre-1.0, so expect some tool changes between versions. LLMs
will adapt to the tools available, so this shouldn't affect most users.
{{< /notice >}}
# Cloud SQL for SQL Server Admin using MCP
Create and manage Cloud SQL for SQL Server (Admin) using Toolbox.
This guide covers how to use [MCP Toolbox for Databases][toolbox] to expose your
developer assistant tools to create and manage Cloud SQL for SQL Server
instances, databases, and users:
* [Cursor][cursor]
* [Windsurf][windsurf] (Codium)
* [Visual Studio Code][vscode] (Copilot)
* [Cline][cline] (VS Code extension)
* [Claude desktop][claudedesktop]
* [Claude code][claudecode]
* [Gemini CLI][geminicli]
* [Gemini Code Assist][geminicodeassist]
[toolbox]: https://github.com/googleapis/genai-toolbox
[cursor]: #configure-your-mcp-client
[windsurf]: #configure-your-mcp-client
[vscode]: #configure-your-mcp-client
[cline]: #configure-your-mcp-client
[claudedesktop]: #configure-your-mcp-client
[claudecode]: #configure-your-mcp-client
[geminicli]: #configure-your-mcp-client
[geminicodeassist]: #configure-your-mcp-client
## Before you begin
1. In the Google Cloud console, on the [project selector
page](https://console.cloud.google.com/projectselector2/home/dashboard),
select or create a Google Cloud project.
1. [Make sure that billing is enabled for your Google Cloud
project](https://cloud.google.com/billing/docs/how-to/verify-billing-enabled#confirm_billing_is_enabled_on_a_project).
1. Grant the necessary IAM roles to the user that will be running the MCP
   server. The tools available will depend on the roles granted (an example
   `gcloud` command follows this list):
* `roles/cloudsql.viewer`: Provides read-only access to resources.
* `get_instance`
* `list_instances`
* `list_databases`
* `wait_for_operation`
* `roles/cloudsql.editor`: Provides permissions to manage existing resources.
* All `viewer` tools
* `create_database`
* `roles/cloudsql.admin`: Provides full control over all resources.
* All `editor` and `viewer` tools
* `create_instance`
* `create_user`
* `clone_instance`
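For example, to grant full control over Cloud SQL resources, you could bind the
admin role to your user (a sketch; replace the project and user placeholders
with your own values):
```bash
gcloud projects add-iam-policy-binding YOUR_PROJECT_ID \
    --member="user:you@example.com" \
    --role="roles/cloudsql.admin"
```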
## Install MCP Toolbox
1. Download the latest version of Toolbox as a binary. Select the [correct
binary](https://github.com/googleapis/genai-toolbox/releases) corresponding
to your OS and CPU architecture. Toolbox version v0.15.0 or later is
required:
{{< tabpane persist=header >}}
{{< tab header="linux/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/linux/amd64/toolbox
{{< /tab >}}
{{< tab header="darwin/arm64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/darwin/arm64/toolbox
{{< /tab >}}
{{< tab header="darwin/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/darwin/amd64/toolbox
{{< /tab >}}
{{< tab header="windows/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/windows/amd64/toolbox.exe
{{< /tab >}}
{{< /tabpane >}}
1. Make the binary executable:
```bash
chmod +x toolbox
```
1. Verify the installation:
```bash
./toolbox --version
```
## Configure your MCP Client
{{< tabpane text=true >}}
{{% tab header="Claude code" lang="en" %}}
1. Install [Claude
Code](https://docs.anthropic.com/en/docs/agents-and-tools/claude-code/overview).
1. Create a `.mcp.json` file in your project root if it doesn't exist.
1. Add the following configuration and save:
```json
{
  "mcpServers": {
    "cloud-sql-mssql-admin": {
      "command": "./PATH/TO/toolbox",
      "args": ["--prebuilt","cloud-sql-mssql-admin","--stdio"],
      "env": {}
    }
  }
}
```
1. Restart Claude code to apply the new configuration.
{{% /tab %}}
{{% tab header="Claude desktop" lang="en" %}}
1. Open [Claude desktop](https://claude.ai/download) and navigate to Settings.
1. Under the Developer tab, tap Edit Config to open the configuration file.
1. Add the following configuration and save:
```json
{
  "mcpServers": {
    "cloud-sql-mssql-admin": {
      "command": "./PATH/TO/toolbox",
      "args": ["--prebuilt","cloud-sql-mssql-admin","--stdio"],
      "env": {}
    }
  }
}
```
1. Restart Claude desktop.
1. From the new chat screen, you should see a hammer (MCP) icon appear with the
new MCP server available.
{{% /tab %}}
{{% tab header="Cline" lang="en" %}}
1. Open the [Cline](https://github.com/cline/cline) extension in VS Code and tap
the **MCP Servers** icon.
1. Tap Configure MCP Servers to open the configuration file.
1. Add the following configuration and save:
```json
{
  "mcpServers": {
    "cloud-sql-mssql-admin": {
      "command": "./PATH/TO/toolbox",
      "args": ["--prebuilt","cloud-sql-mssql-admin","--stdio"],
      "env": {}
    }
  }
}
```
1. You should see a green active status after the server is successfully
connected.
{{% /tab %}}
{{% tab header="Cursor" lang="en" %}}
1. Create a `.cursor` directory in your project root if it doesn't exist.
1. Create a `.cursor/mcp.json` file if it doesn't exist and open it.
1. Add the following configuration and save:
```json
{
  "mcpServers": {
    "cloud-sql-mssql-admin": {
      "command": "./PATH/TO/toolbox",
      "args": ["--prebuilt","cloud-sql-mssql-admin","--stdio"],
      "env": {}
    }
  }
}
```
1. Open [Cursor](https://www.cursor.com/) and navigate to **Settings > Cursor
   Settings > MCP**. You should see a green active status after the server is
   successfully connected.
{{% /tab %}}
{{% tab header="Visual Studio Code (Copilot)" lang="en" %}}
1. Open [VS Code](https://code.visualstudio.com/docs/copilot/overview) and
create a `.vscode` directory in your project root if it doesn't exist.
1. Create a `.vscode/mcp.json` file if it doesn't exist and open it.
1. Add the following configuration and save:
```json
{
  "servers": {
    "cloud-sql-mssql-admin": {
      "command": "./PATH/TO/toolbox",
      "args": ["--prebuilt","cloud-sql-mssql-admin","--stdio"],
      "env": {}
    }
  }
}
```
{{% /tab %}}
{{% tab header="Windsurf" lang="en" %}}
1. Open [Windsurf](https://docs.codeium.com/windsurf) and navigate to the
Cascade assistant.
1. Tap on the hammer (MCP) icon, then Configure to open the configuration file.
1. Add the following configuration and save:
```json
{
  "mcpServers": {
    "cloud-sql-mssql-admin": {
      "command": "./PATH/TO/toolbox",
      "args": ["--prebuilt","cloud-sql-mssql-admin","--stdio"],
      "env": {}
    }
  }
}
```
{{% /tab %}}
{{% tab header="Gemini CLI" lang="en" %}}
1. Install the [Gemini
CLI](https://github.com/google-gemini/gemini-cli?tab=readme-ov-file#quickstart).
1. In your working directory, create a folder named `.gemini`. Within it,
create a `settings.json` file.
1. Add the following configuration and save:
```json
{
  "mcpServers": {
    "cloud-sql-mssql-admin": {
      "command": "./PATH/TO/toolbox",
      "args": ["--prebuilt","cloud-sql-mssql-admin","--stdio"],
      "env": {}
    }
  }
}
```
{{% /tab %}}
{{% tab header="Gemini Code Assist" lang="en" %}}
1. Install the [Gemini Code
Assist](https://marketplace.visualstudio.com/items?itemName=Google.geminicodeassist)
extension in Visual Studio Code.
1. Enable Agent Mode in Gemini Code Assist chat.
1. In your working directory, create a folder named `.gemini`. Within it,
create a `settings.json` file.
1. Add the following configuration and save:
```json
{
  "mcpServers": {
    "cloud-sql-mssql-admin": {
      "command": "./PATH/TO/toolbox",
      "args": ["--prebuilt","cloud-sql-mssql-admin","--stdio"],
      "env": {}
    }
  }
}
```
{{% /tab %}}
{{< /tabpane >}}
## Use Tools
Your AI tool is now connected to Cloud SQL for SQL Server using MCP.
The `cloud-sql-mssql-admin` server provides tools for managing your Cloud SQL
instances and interacting with your database:
* **create_instance**: Creates a new Cloud SQL for SQL Server instance.
* **get_instance**: Gets information about a Cloud SQL instance.
* **list_instances**: Lists Cloud SQL instances in a project.
* **create_database**: Creates a new database in a Cloud SQL instance.
* **list_databases**: Lists all databases for a Cloud SQL instance.
* **create_user**: Creates a new user in a Cloud SQL instance.
* **wait_for_operation**: Waits for a Cloud SQL operation to complete.
* **clone_instance**: Creates a clone of an existing Cloud SQL for SQL Server instance.
{{< notice note >}}
Prebuilt tools are pre-1.0, so expect some tool changes between versions. LLMs
will adapt to the tools available, so this shouldn't affect most users.
{{< /notice >}}
# Connect via MCP Client
How to connect to Toolbox from an MCP Client.
## Toolbox SDKs vs Model Context Protocol (MCP)
Toolbox now supports connections via both the native Toolbox SDKs and the [Model
Context Protocol (MCP)](https://modelcontextprotocol.io/). However, Toolbox has
several features which are not supported in the MCP specification (such as
Authenticated Parameters and Authorized Invocations).
We recommend using the native SDKs over MCP clients to leverage these features.
The native SDKs can be combined with MCP clients in many cases.
### Protocol Versions
Toolbox currently supports the following versions of MCP specification:
* [2025-06-18](https://modelcontextprotocol.io/specification/2025-06-18)
* [2025-03-26](https://modelcontextprotocol.io/specification/2025-03-26)
* [2024-11-05](https://modelcontextprotocol.io/specification/2024-11-05)
### Toolbox AuthZ/AuthN Not Supported by MCP
Toolbox's auth implementation is not part of MCP's auth specification.
This includes:
* [Authenticated Parameters](../resources/tools/#authenticated-parameters)
* [Authorized Invocations](../resources/tools/#authorized-invocations)
## Connecting to Toolbox with an MCP client
### Before you begin
{{< notice note >}}
MCP is only compatible with Toolbox version 0.3.0 and above.
{{< /notice >}}
1. [Install](../getting-started/introduction/#installing-the-server)
Toolbox version 0.3.0+.
1. Make sure you've set up and initialized your database.
1. [Set up](../getting-started/configure.md) your `tools.yaml` file.
### Connecting via Standard Input/Output (stdio)
Toolbox supports the
[stdio](https://modelcontextprotocol.io/docs/concepts/transports#standard-input%2Foutput-stdio)
transport protocol. To use stdio, include the `--stdio` flag when running
Toolbox.
```bash
./toolbox --stdio
```
When running with stdio, Toolbox will listen via stdio instead of acting as a
remote HTTP server. Logs will be set to the `warn` level by default. `debug` and
`info` logs are not supported with stdio.
{{< notice note >}}
Toolbox enables dynamic reloading by default. To disable, use the
`--disable-reload` flag.
{{< /notice >}}
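For reference, a minimal stdio client configuration follows the same shape as
the prebuilt examples earlier in these docs. The sketch below assumes a
Claude-Code-style `.mcp.json` in your project root and a custom `tools.yaml` in
the working directory:
```bash
cat > .mcp.json <<'EOF'
{
  "mcpServers": {
    "toolbox": {
      "command": "./PATH/TO/toolbox",
      "args": ["--tools-file", "tools.yaml", "--stdio"]
    }
  }
}
EOF
```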
### Connecting via HTTP
Toolbox supports the HTTP transport protocol with and without SSE.
{{< tabpane text=true >}} {{% tab header="HTTP with SSE (deprecated)" lang="en" %}}
Add the following configuration to your MCP client configuration:
```json
{
  "mcpServers": {
    "toolbox": {
      "type": "sse",
      "url": "http://127.0.0.1:5000/mcp/sse"
    }
  }
}
```
If you would like to connect to a specific toolset, replace `url` with
`"http://127.0.0.1:5000/mcp/{toolset_name}/sse"`.
HTTP with SSE is only supported in version `2024-11-05` and is currently
deprecated.
{{% /tab %}} {{% tab header="Streamable HTTP" lang="en" %}}
Add the following configuration to your MCP client configuration:
```json
{
  "mcpServers": {
    "toolbox": {
      "type": "http",
      "url": "http://127.0.0.1:5000/mcp"
    }
  }
}
```
If you would like to connect to a specific toolset, replace `url` with
`"http://127.0.0.1:5000/mcp/{toolset_name}"`.
{{% /tab %}} {{< /tabpane >}}
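To quickly verify the Streamable HTTP endpoint outside of a client, you can send
an MCP `initialize` request with `curl`. This is a smoke-test sketch that
assumes Toolbox is running locally on port 5000 and speaks protocol version
`2025-03-26` (the MCP spec expects clients to accept both JSON and SSE
responses, hence the `Accept` header):
```bash
curl -s http://127.0.0.1:5000/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
          "protocolVersion": "2025-03-26",
          "capabilities": {},
          "clientInfo": {"name": "curl-smoke-test", "version": "0.0.0"}
        }
      }'
```
A successful response includes the server's capabilities and confirms the
endpoint is reachable.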
### Using the MCP Inspector with Toolbox
Use the MCP [Inspector](https://github.com/modelcontextprotocol/inspector) for
testing and debugging the Toolbox server.
{{< tabpane text=true >}}
{{% tab header="STDIO" lang="en" %}}
1. Run Inspector with Toolbox as a subprocess:
```bash
npx @modelcontextprotocol/inspector ./toolbox --stdio
```
1. For the `Transport Type` dropdown menu, select `STDIO`.
1. In `Command`, make sure that it is set to `./toolbox` (or the correct path
to where the Toolbox binary is installed).
1. In `Arguments`, make sure that it's filled with `--stdio`.
1. Click the `Connect` button. It might take a while to spin up Toolbox. Voila!
You should be able to inspect your toolbox tools!
{{% /tab %}}
{{% tab header="HTTP with SSE (deprecated)" lang="en" %}}
1. [Run Toolbox](../getting-started/introduction/#running-the-server).
1. In a separate terminal, run Inspector directly through `npx`:
```bash
npx @modelcontextprotocol/inspector
```
1. For the `Transport Type` dropdown menu, select `SSE`.
1. For `URL`, type in `http://127.0.0.1:5000/mcp/sse` to use all tools or
`http://127.0.0.1:5000/mcp/{toolset_name}/sse` to use a specific toolset.
1. Click the `Connect` button. Voila! You should be able to inspect your toolbox
tools!
{{% /tab %}}
{{% tab header="Streamable HTTP" lang="en" %}}
1. [Run Toolbox](../getting-started/introduction/#running-the-server).
1. In a separate terminal, run Inspector directly through `npx`:
```bash
npx @modelcontextprotocol/inspector
```
1. For the `Transport Type` dropdown menu, select `Streamable HTTP`.
1. For `URL`, type in `http://127.0.0.1:5000/mcp` to use all tools or
`http://127.0.0.1:5000/mcp/{toolset_name}` to use a specific toolset.
1. Click the `Connect` button. Voila! You should be able to inspect your toolbox
tools!
{{% /tab %}} {{< /tabpane >}}
### Tested Clients
| Client | SSE Works | MCP Config Docs |
|--------------------|------------|---------------------------------------------------------------------------------|
| Claude Desktop | ✅ | |
| MCP Inspector | ✅ | |
| Cursor | ✅ | |
| Windsurf | ✅ | |
| VS Code (Insiders) | ✅ | |
# Toolbox UI
How to effectively use Toolbox UI.
Toolbox UI is a built-in web interface that allows users to visually inspect and
test out configured resources such as tools and toolsets.
## Launching Toolbox UI
To launch Toolbox's interactive UI, use the `--ui` flag.
```sh
./toolbox --ui
```
Toolbox UI will be served from the same host and port as the Toolbox Server,
with the `/ui` suffix. Once Toolbox is launched, the following INFO log with the
Toolbox UI's URL will be shown:
```bash
INFO "Toolbox UI is up and running at: http://localhost:5000/ui"
```
## Navigating the Tools Page
The tools page shows all tools loaded from your configuration file. This
corresponds to the default toolset (represented by an empty string). Each tool's
name on this page will exactly match its name in the configuration file.
To view details for a specific tool, click on the tool name. The main content
area will be populated with the tool name, description, and available
parameters.

### Invoking a Tool
1. Click on a Tool
1. Enter appropriate parameters in each parameter field
1. Click "Run Tool"
1. Done! Your results will appear in the response field
1. (Optional) Uncheck "Prettify JSON" to format the response as plain text

### Optional Parameters
Toolbox allows users to add [optional
parameters](../../resources/tools/#basic-parameters) with or without a default
value.
To exclude a parameter, uncheck the box to the right of the associated parameter,
and that parameter will not be included in the request body. If the parameter is
not sent, Toolbox will either treat it as a `nil` value or use the `default`
value, if one is configured. If the parameter is required, Toolbox will throw an
error.
When the box is checked, the parameter will be sent exactly as entered in the
parameter field (e.g. an empty string).


### Editing Headers
To edit headers, press the "Edit Headers" button to display the header modal.
Within this modal, users can make direct edits by typing into the header's text
area.
Toolbox UI validates that the headers are in correct JSON format. Other
header-related errors (e.g., incorrect header names or values required by the
tool) will be reported in the Response section after running the tool.

#### Google OAuth
Currently, Toolbox supports Google OAuth 2.0 as an AuthService, which allows
tools to utilize authorized parameters. When a tool uses an authorized
parameter, the parameter will be displayed but not editable, as it will be
populated from the authentication token.
To provide the token, add your Google OAuth ID Token to the request header using
the "Edit Headers" button and modal described above. The key should be the name
of your AuthService as defined in your tool configuration file, suffixed with
`_token`. The value should be your ID token as a string.
1. Select a tool that requires [authenticated
parameters](../../resources/tools/#authenticated-parameters)
1. The auth parameter's text field is greyed out. This is because it cannot be
entered manually and will be parsed from the resolved auth token
1. To update request headers with the token, select "Edit Headers"
1. (Optional) If you wish to manually edit the header, check the dropdown
"How to extract Google OAuth ID Token manually" for guidance on retrieving an
ID token
1. To edit the header automatically, click the "Auto Setup" button that is
associated with your Auth Profile
1. Enter the Client ID defined in your tools configuration file
1. Click "Continue"
1. Click "Sign in With Google" and login with your associated google account.
This should automatically populate the header text area with your token
1. Click "Save"
1. Click "Run Tool"
```json
{
  "Content-Type": "application/json",
  "my-google-auth_token": "YOUR_ID_TOKEN_HERE"
}
```

## Navigating the Toolsets Page
Through the toolsets page, users can search for a specific toolset to retrieve
tools from. Simply enter the toolset name in the search bar, and press "Enter"
to retrieve the associated tools.
If the toolset name is not defined within the tools configuration file, an error
message will be displayed.

# Connect via Gemini CLI Extensions
Connect to Toolbox via Gemini CLI Extensions.
## Gemini CLI Extensions
[Gemini CLI][gemini-cli] is an open-source AI agent designed to assist with
development workflows such as coding, debugging, data exploration, and content
creation. Its mission is to provide an agentic interface for interacting with
database and analytics services and popular open-source databases.
### How extensions work
Gemini CLI is highly extensible, allowing for the addition of new tools and
capabilities through extensions. You can load the extensions from a GitHub URL,
a local directory, or a configurable registry. They provide new tools, slash
commands, and prompts to assist with your workflow.
Use the Gemini CLI Extensions to load prebuilt or custom tools to interact with
your databases.
[gemini-cli]: https://google-gemini.github.io/gemini-cli/
Below is a list of Gemini CLI Extensions powered by MCP Toolbox:
* [alloydb](https://github.com/gemini-cli-extensions/alloydb)
* [alloydb-observability](https://github.com/gemini-cli-extensions/alloydb-observability)
* [bigquery-conversational-analytics](https://github.com/gemini-cli-extensions/bigquery-conversational-analytics)
* [bigquery-data-analytics](https://github.com/gemini-cli-extensions/bigquery-data-analytics)
* [cloud-sql-mysql](https://github.com/gemini-cli-extensions/cloud-sql-mysql)
* [cloud-sql-mysql-observability](https://github.com/gemini-cli-extensions/cloud-sql-mysql-observability)
* [cloud-sql-postgresql](https://github.com/gemini-cli-extensions/cloud-sql-postgresql)
* [cloud-sql-postgresql-observability](https://github.com/gemini-cli-extensions/cloud-sql-postgresql-observability)
* [cloud-sql-sqlserver](https://github.com/gemini-cli-extensions/cloud-sql-sqlserver)
* [cloud-sql-sqlserver-observability](https://github.com/gemini-cli-extensions/cloud-sql-sqlserver-observability)
* [dataplex](https://github.com/gemini-cli-extensions/dataplex)
* [firestore-native](https://github.com/gemini-cli-extensions/firestore-native)
* [looker](https://github.com/gemini-cli-extensions/looker)
* [mcp-toolbox](https://github.com/gemini-cli-extensions/mcp-toolbox)
* [mysql](https://github.com/gemini-cli-extensions/mysql)
* [postgres](https://github.com/gemini-cli-extensions/postgres)
* [spanner](https://github.com/gemini-cli-extensions/spanner)
* [sql-server](https://github.com/gemini-cli-extensions/sql-server)
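For example, to install one of these extensions you would typically point the
Gemini CLI at its GitHub URL. The command shape below is a sketch; check the
extension's README for the exact, current syntax:
```bash
gemini extensions install https://github.com/gemini-cli-extensions/postgres
```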
# Deploy to Cloud Run
How to set up and configure Toolbox to run on Cloud Run.
## Before you begin
1. [Install](https://cloud.google.com/sdk/docs/install) the Google Cloud CLI.
1. Set the PROJECT_ID environment variable:
```bash
export PROJECT_ID="my-project-id"
```
1. Initialize gcloud CLI:
```bash
gcloud init
gcloud config set project $PROJECT_ID
```
1. Make sure you've set up and initialized your database.
1. You must have the following APIs enabled:
```bash
gcloud services enable run.googleapis.com \
cloudbuild.googleapis.com \
artifactregistry.googleapis.com \
iam.googleapis.com \
secretmanager.googleapis.com
```
1. To create an IAM account, you must have the following IAM permissions (or
roles):
- Create Service Account role (roles/iam.serviceAccountCreator)
1. To create a secret, you must have the following roles:
- Secret Manager Admin role (roles/secretmanager.admin)
1. To deploy to Cloud Run, you must have the following set of roles:
- Cloud Run Developer (roles/run.developer)
- Service Account User role (roles/iam.serviceAccountUser)
{{< notice note >}}
If you are using sources that require VPC-access (such as
AlloyDB or Cloud SQL over private IP), make sure your Cloud Run service and the
database are in the same VPC network.
{{< /notice >}}
## Create a service account
1. Create a backend service account if you don't already have one:
```bash
gcloud iam service-accounts create toolbox-identity
```
1. Grant permissions to use secret manager:
```bash
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member serviceAccount:toolbox-identity@$PROJECT_ID.iam.gserviceaccount.com \
--role roles/secretmanager.secretAccessor
```
1. Grant additional permissions to the service account that are specific to the
source, e.g.:
- [AlloyDB for PostgreSQL](../resources/sources/alloydb-pg.md#iam-permissions)
- [Cloud SQL for PostgreSQL](../resources/sources/cloud-sql-pg.md#iam-permissions)
## Configure `tools.yaml` file
Create a `tools.yaml` file that contains your configuration for Toolbox. For
details, see the
[configuration](../resources/sources/)
section.
## Deploy to Cloud Run
1. Upload `tools.yaml` as a secret:
```bash
gcloud secrets create tools --data-file=tools.yaml
```
If you already have a secret and want to update the secret version, execute
the following:
```bash
gcloud secrets versions add tools --data-file=tools.yaml
```
1. Set an environment variable to the container image that you want to use for
cloud run:
```bash
export IMAGE=us-central1-docker.pkg.dev/database-toolbox/toolbox/toolbox:latest
```
{{< notice note >}}
**The `$PORT` Environment Variable**
Google Cloud Run dictates the port your application must listen on by setting
the `$PORT` environment variable inside your container. This value defaults to
**8080**. Your application's `--port` argument **must** be set to listen on this
port. If there is a mismatch, the container will fail to start and the
deployment will time out.
{{< /notice >}}
1. Deploy Toolbox to Cloud Run using the following command:
```bash
gcloud run deploy toolbox \
--image $IMAGE \
--service-account toolbox-identity \
--region us-central1 \
--set-secrets "/app/tools.yaml=tools:latest" \
--args="--tools-file=/app/tools.yaml","--address=0.0.0.0","--port=8080"
# --allow-unauthenticated # https://cloud.google.com/run/docs/authenticating/public#gcloud
```
If you are using a VPC network, use the command below:
```bash
# TODO(dev): update --network and --subnet below to match your VPC if necessary
gcloud run deploy toolbox \
    --image $IMAGE \
    --service-account toolbox-identity \
    --region us-central1 \
    --set-secrets "/app/tools.yaml=tools:latest" \
    --args="--tools-file=/app/tools.yaml","--address=0.0.0.0","--port=8080" \
    --network default \
    --subnet default
# --allow-unauthenticated # https://cloud.google.com/run/docs/authenticating/public#gcloud
```
### Update deployed server to be secure
To prevent DNS rebinding attacks, use the `--allowed-origins` flag to specify a
list of origins permitted to access the server. To do this, redeploy the Cloud
Run service with the new flag.
1. Set an environment variable to the Cloud Run service URL:
```bash
export URL=$(gcloud run services describe toolbox --region us-central1 --format 'value(status.url)')
```
2. Redeploy Toolbox:
```bash
gcloud run deploy toolbox \
--image $IMAGE \
--service-account toolbox-identity \
--region us-central1 \
--set-secrets "/app/tools.yaml=tools:latest" \
--args="--tools-file=/app/tools.yaml","--address=0.0.0.0","--port=8080","--allowed-origins=$URL"
# --allow-unauthenticated # https://cloud.google.com/run/docs/authenticating/public#gcloud
```
If you are using a VPC network, use the command below:
```bash
# TODO(dev): update --network and --subnet below to match your VPC if necessary
gcloud run deploy toolbox \
    --image $IMAGE \
    --service-account toolbox-identity \
    --region us-central1 \
    --set-secrets "/app/tools.yaml=tools:latest" \
    --args="--tools-file=/app/tools.yaml","--address=0.0.0.0","--port=8080","--allowed-origins=$URL" \
    --network default \
    --subnet default
# --allow-unauthenticated # https://cloud.google.com/run/docs/authenticating/public#gcloud
```
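3. Verify that the redeployed service responds. This check assumes your account
   has the Cloud Run Invoker role and uses your gcloud identity token:
```bash
curl -H "Authorization: Bearer $(gcloud auth print-identity-token)" $URL
```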
## Connecting with Toolbox Client SDK
You can connect to Toolbox Cloud Run instances directly through the SDK.
1. [Set up `Cloud Run Invoker` role
access](https://cloud.google.com/run/docs/securing/managing-access#service-add-principals)
to your Cloud Run service.
1. (Only for local runs) Set up [Application Default
Credentials](https://cloud.google.com/docs/authentication/set-up-adc-local-dev-environment)
for the principal you set up the `Cloud Run Invoker` role access to.
1. Run the following to retrieve the URL of the Cloud Run service:
```bash
gcloud run services describe toolbox --format 'value(status.url)'
```
1. Import and initialize the toolbox client with the URL retrieved above:
{{< tabpane persist=header >}}
{{< tab header="Python" lang="python" >}}
import asyncio
from toolbox_core import ToolboxClient, auth_methods
# Replace with the Cloud Run service URL generated in the previous step
URL = "https://cloud-run-url.app"
auth_token_provider = auth_methods.aget_google_id_token(URL) # can also use sync method
async def main():
    async with ToolboxClient(
        URL,
        client_headers={"Authorization": auth_token_provider},
    ) as toolbox:
        toolset = await toolbox.load_toolset()
        # ...

asyncio.run(main())
{{< /tab >}}
{{< tab header="Javascript" lang="javascript" >}}
import { ToolboxClient } from '@toolbox-sdk/core';
import {getGoogleIdToken} from '@toolbox-sdk/core/auth'
// Replace with the Cloud Run service URL generated in the previous step.
const URL = 'https://cloud-run-url.app';
const authTokenProvider = () => getGoogleIdToken(URL);
const client = new ToolboxClient(URL, null, {"Authorization": authTokenProvider});
{{< /tab >}}
{{< tab header="Go" lang="go" >}}
import "github.com/googleapis/mcp-toolbox-sdk-go/core"
func main() {
// Replace with the Cloud Run service URL generated in the previous step.
URL := "http://127.0.0.1:5000"
auth_token_provider, err := core.GetGoogleIDToken(ctx, URL)
if err != nil {
log.Fatalf("Failed to fetch token %v", err)
}
toolboxClient, err := core.NewToolboxClient(
URL,
core.WithClientHeaderString("Authorization", auth_token_provider))
if err != nil {
log.Fatalf("Failed to create Toolbox client: %v", err)
}
}
{{< /tab >}}
{{< /tabpane >}}
Now, you can use this client to connect to the deployed Cloud Run instance!
## Troubleshooting
{{< notice note >}}
For any deployment or runtime error, the best first step is to check the logs
for your service in the Google Cloud Console's Cloud Run section. They often
contain the specific error message needed to diagnose the problem.
{{< /notice >}}
- **Deployment Fails with "Container failed to start":** This is almost always
caused by a port mismatch. Ensure your container's `--port` argument is set to
`8080` to match the `$PORT` environment variable provided by Cloud Run.
- **Client Receives Permission Denied Error (401 or 403):** If your client
application (e.g., your local SDK) gets a `401 Unauthorized` or `403
Forbidden` error when trying to call your Cloud Run service, it means the
client is not properly authenticated as an invoker.
- Ensure the user or service account calling the service has the **Cloud Run
    Invoker** (`roles/run.invoker`) IAM role (see the example command after
    this list).
- If running locally, make sure your Application Default Credentials are set
up correctly by running `gcloud auth application-default login`.
- **Service Fails to Access Secrets (in logs):** If your application starts but
the logs show errors like "permission denied" when trying to access Secret
Manager, it means the Toolbox service account is missing permissions.
- Ensure the `toolbox-identity` service account has the **Secret Manager
Secret Accessor** (`roles/secretmanager.secretAccessor`) IAM role.
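For reference, a typical fix for the invoker-role error is a single IAM binding
on the service (a sketch; replace the member with your own principal):
```bash
gcloud run services add-iam-policy-binding toolbox \
    --region us-central1 \
    --member="user:you@example.com" \
    --role="roles/run.invoker"
```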
# Deploy ADK Agent and MCP Toolbox
How to deploy your ADK Agent to Vertex AI Agent Engine and connect it to an MCP Toolbox deployed on Cloud Run.
## Before you begin
This guide assumes you have already done the following:
1. Completed the [Python Quickstart
(Local)](../getting-started/local_quickstart.md) and have a working ADK
agent running locally.
2. Installed the [Google Cloud CLI](https://cloud.google.com/sdk/docs/install).
3. A Google Cloud project with billing enabled.
## Step 1: Deploy MCP Toolbox to Cloud Run
Before deploying your agent, your MCP Toolbox server needs to be accessible from
the cloud. We will deploy MCP Toolbox to Cloud Run.
Follow the [Deploy to Cloud Run](deploy_toolbox.md) guide to deploy your MCP
Toolbox instance.
{{% alert title="Important" %}}
After deployment, note down the Service URL of your MCP Toolbox Cloud Run
service. You will need this to configure your agent.
{{% /alert %}}
## Step 2: Prepare your Agent for Deployment
We will use the `agent-starter-pack` tool to enhance your local agent project
with the necessary configuration for deployment to Vertex AI Agent Engine.
1. Open a terminal and navigate to the **parent directory** of your agent
project (the directory containing the `my_agent` folder).
2. Run the following command to enhance your project:
```bash
uvx agent-starter-pack enhance --adk -d agent_engine
```
3. Follow the interactive prompts to configure your deployment settings. This
process will generate deployment configuration files (like a `Makefile` and
`Dockerfile`) in your project directory.
4. Add `toolbox-core` as a dependency to the new project:
```bash
uv add toolbox-core
```
## Step 3: Configure Google Cloud Authentication
Ensure your local environment is authenticated with Google Cloud to perform the
deployment.
1. Login with Application Default Credentials (ADC):
```bash
gcloud auth application-default login
```
2. Set your active project:
```bash
gcloud config set project YOUR_PROJECT_ID
```
## Step 4: Connect Agent to Deployed MCP Toolbox
You need to update your agent's code to connect to the Cloud Run URL of your MCP
Toolbox instead of the local address.
1. Recall that you can find the Cloud Run deployment URL of the MCP Toolbox
server using the following command:
```bash
gcloud run services describe toolbox --format 'value(status.url)'
```
2. Open your agent file (`my_agent/agent.py`).
3. Update the `ToolboxSyncClient` initialization to use your Cloud Run URL.
{{% alert color="info" %}}
Since Cloud Run services are secured by default, you also need to provide an
authentication token.
{{% /alert %}}
Replace your existing client initialization code with the following:
```python
from google.adk import Agent
from google.adk.apps import App
from toolbox_core import ToolboxSyncClient, auth_methods
# TODO(developer): Replace with your Toolbox Cloud Run Service URL
TOOLBOX_URL = "https://your-toolbox-service-xyz.a.run.app"
# Initialize the client with the Cloud Run URL and Auth headers
client = ToolboxSyncClient(
    TOOLBOX_URL,
    client_headers={"Authorization": auth_methods.get_google_id_token(TOOLBOX_URL)}
)

root_agent = Agent(
    name='root_agent',
    model='gemini-2.5-flash',
    instruction="You are a helpful AI assistant designed to provide accurate and useful information.",
    tools=client.load_toolset(),
)
app = App(root_agent=root_agent, name="my_agent")
```
{{% alert title="Important" %}}
Ensure that the `name` parameter in the `App` initialization matches the name of
your agent's parent directory (e.g., `my_agent`).
```python
...
app = App(root_agent=root_agent, name="my_agent")
```
{{% /alert %}}
## Step 5: Deploy to Agent Engine
Run the deployment command:
```bash
make backend
```
This command will build your agent's container image and deploy it to Vertex AI.
## Step 6: Test your Deployment
Once the deployment command (`make backend`) completes, it will output the URL
for the Agent Engine Playground. You can click on this URL to open the
Playground in your browser and start chatting with your agent to test the tools.
For additional test scenarios, refer to the [Test deployed
agent](https://google.github.io/adk-docs/deploy/agent-engine/#test-deployment)
section in the ADK documentation.
# Deploy to Kubernetes
How to set up and configure Toolbox to deploy on Kubernetes with Google Kubernetes Engine (GKE).
## Before you begin
1. Set the PROJECT_ID environment variable:
```bash
export PROJECT_ID="my-project-id"
```
1. [Install the `gcloud` CLI](https://cloud.google.com/sdk/docs/install).
1. Initialize gcloud CLI:
```bash
gcloud init
gcloud config set project $PROJECT_ID
```
1. You must have the following APIs enabled:
```bash
gcloud services enable artifactregistry.googleapis.com \
cloudbuild.googleapis.com \
container.googleapis.com \
iam.googleapis.com
```
1. `kubectl` is used to manage Kubernetes, the cluster orchestration system used
by GKE. Verify if you have `kubectl` installed:
```bash
kubectl version --client
```
1. If needed, install `kubectl` component using the Google Cloud CLI:
```bash
gcloud components install kubectl
```
## Create a service account
1. Specify a name for your service account with an environment variable:
```bash
export SA_NAME=toolbox
```
1. Create a backend service account:
```bash
gcloud iam service-accounts create $SA_NAME
```
1. Grant any IAM roles necessary to the IAM service account. Each source has a
list of necessary IAM permissions listed on its page. The example below is
for a Cloud SQL for PostgreSQL source:
```bash
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member serviceAccount:$SA_NAME@$PROJECT_ID.iam.gserviceaccount.com \
--role roles/cloudsql.client
```
- [AlloyDB IAM Identity](../resources/sources/alloydb-pg.md#iam-permissions)
- [CloudSQL IAM Identity](../resources/sources/cloud-sql-pg.md#iam-permissions)
- [Spanner IAM Identity](../resources/sources/spanner.md#iam-permissions)
## Deploy to Kubernetes
1. Set environment variables:
```bash
export CLUSTER_NAME=toolbox-cluster
export DEPLOYMENT_NAME=toolbox
export SERVICE_NAME=toolbox-service
export REGION=us-central1
export NAMESPACE=toolbox-namespace
export SECRET_NAME=toolbox-config
export KSA_NAME=toolbox-service-account
```
1. Create a [GKE cluster](https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture).
```bash
gcloud container clusters create-auto $CLUSTER_NAME \
    --location=$REGION
```
1. Get authentication credentials to interact with the cluster. This also
configures `kubectl` to use the cluster.
```bash
gcloud container clusters get-credentials $CLUSTER_NAME \
--region=$REGION \
--project=$PROJECT_ID
```
1. View the current context for `kubectl`.
```bash
kubectl config current-context
```
1. Create namespace for the deployment.
```bash
kubectl create namespace $NAMESPACE
```
1. Create a Kubernetes Service Account (KSA).
```bash
kubectl create serviceaccount $KSA_NAME --namespace $NAMESPACE
```
1. Enable the IAM binding between Google Service Account (GSA) and Kubernetes
Service Account (KSA).
```bash
gcloud iam service-accounts add-iam-policy-binding \
--role="roles/iam.workloadIdentityUser" \
--member="serviceAccount:$PROJECT_ID.svc.id.goog[$NAMESPACE/$KSA_NAME]" \
$SA_NAME@$PROJECT_ID.iam.gserviceaccount.com
```
1. Add annotation to KSA to complete binding:
```bash
kubectl annotate serviceaccount \
$KSA_NAME \
iam.gke.io/gcp-service-account=$SA_NAME@$PROJECT_ID.iam.gserviceaccount.com \
--namespace $NAMESPACE
```
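1. (Optional) Confirm the binding by checking that the
   `iam.gke.io/gcp-service-account` annotation appears on the KSA:
```bash
kubectl get serviceaccount $KSA_NAME --namespace $NAMESPACE -o yaml
```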
1. Prepare the Kubernetes secret for your `tools.yaml` file.
```bash
kubectl create secret generic $SECRET_NAME \
--from-file=./tools.yaml \
--namespace=$NAMESPACE
```
1. Create a Kubernetes manifest file (`k8s_deployment.yaml`) for the deployment.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: toolbox
  namespace: toolbox-namespace
spec:
  selector:
    matchLabels:
      app: toolbox
  template:
    metadata:
      labels:
        app: toolbox
    spec:
      serviceAccountName: toolbox-service-account
      containers:
        - name: toolbox
          # It is recommended to use the latest version of toolbox
          image: us-central1-docker.pkg.dev/database-toolbox/toolbox/toolbox:latest
          args: ["--address", "0.0.0.0"]
          ports:
            - containerPort: 5000
          volumeMounts:
            - name: toolbox-config
              mountPath: "/app/tools.yaml"
              subPath: tools.yaml
              readOnly: true
      volumes:
        - name: toolbox-config
          secret:
            secretName: toolbox-config
            items:
              - key: tools.yaml
                path: tools.yaml
```
{{< notice tip >}}
To prevent DNS rebinding attacks, use the `--allowed-origins` flag to specify a
list of origins permitted to access the server. E.g. `args: ["--address",
"0.0.0.0", "--allowed-origins", "https://foo.bar"]`
{{< /notice >}}
1. Create the deployment.
```bash
kubectl apply -f k8s_deployment.yaml --namespace $NAMESPACE
```
1. Check the status of deployment.
```bash
kubectl get deployments --namespace $NAMESPACE
```
1. Create a Kubernetes manifest file (`k8s_service.yaml`) for the service.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: toolbox-service
  namespace: toolbox-namespace
  annotations:
    cloud.google.com/l4-rbs: "enabled"
spec:
  selector:
    app: toolbox
  ports:
    - port: 5000
      targetPort: 5000
  type: LoadBalancer
```
1. Create the service.
```bash
kubectl apply -f k8s_service.yaml --namespace $NAMESPACE
```
1. You can find the IP address created for your service by retrieving the
service information:
```bash
kubectl describe services $SERVICE_NAME --namespace $NAMESPACE
```
1. To look at logs, run the following.
```bash
kubectl logs -f deploy/$DEPLOYMENT_NAME --namespace $NAMESPACE
```
1. You might have to wait a couple of minutes. It is ready when you can see
`EXTERNAL-IP` with the following command:
```bash
kubectl get svc -n $NAMESPACE
```
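1. Capture the external IP in an environment variable for convenience (the
   `jsonpath` expression assumes a standard LoadBalancer service):
```bash
export EXTERNAL_IP=$(kubectl get svc $SERVICE_NAME --namespace $NAMESPACE \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
```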
1. Access Toolbox through the service's external IP.
```bash
curl http://$EXTERNAL_IP:5000
```
## Clean up resources
1. Delete secret.
```bash
kubectl delete secret $SECRET_NAME --namespace $NAMESPACE
```
1. Delete deployment.
```bash
kubectl delete deployment $DEPLOYMENT_NAME --namespace $NAMESPACE
```
1. Delete the application's service.
```bash
kubectl delete service $SERVICE_NAME --namespace $NAMESPACE
```
1. Delete the Kubernetes cluster.
```bash
gcloud container clusters delete $CLUSTER_NAME \
--location=$REGION
```
# Deploy using Docker Compose
How to deploy Toolbox using Docker Compose.
## Before you begin
1. [Install Docker Compose.](https://docs.docker.com/compose/install/)
## Configure `tools.yaml` file
Create a `tools.yaml` file that contains your configuration for Toolbox. For
details, see the
[configuration](https://github.com/googleapis/genai-toolbox/blob/main/README.md#configuration)
section.
## Deploy using Docker Compose
1. Create a `docker-compose.yml` file, customizing as needed:
```yaml
services:
  toolbox:
    # TODO: It is recommended to pin to a specific image version instead of latest.
    image: us-central1-docker.pkg.dev/database-toolbox/toolbox/toolbox:latest
    hostname: toolbox
    platform: linux/amd64
    ports:
      - "5000:5000"
    volumes:
      - ./config:/config
    command: [ "toolbox", "--tools-file", "/config/tools.yaml", "--address", "0.0.0.0"]
    depends_on:
      db:
        condition: service_healthy
    networks:
      - tool-network
  db:
    # TODO: It is recommended to pin to a specific image version instead of latest.
    image: postgres
    hostname: db
    environment:
      POSTGRES_USER: toolbox_user
      POSTGRES_PASSWORD: my-password
      POSTGRES_DB: toolbox_db
    ports:
      - "5432:5432"
    volumes:
      - ./db:/var/lib/postgresql/data
      # This file can be used to bootstrap your schema if needed.
      # See "initialization scripts" on https://hub.docker.com/_/postgres/ for more info
      - ./config/init.sql:/docker-entrypoint-initdb.d/init.sql
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U toolbox_user -d toolbox_db"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - tool-network
networks:
  tool-network:
```
{{< notice tip >}}
To prevent DNS rebinding attacks, use the `--allowed-origins` flag to specify a
list of origins permitted to access the server. E.g. `command: [ "toolbox",
"--tools-file", "/config/tools.yaml", "--address", "0.0.0.0",
"--allowed-origins", "https://foo.bar"]`
{{< /notice >}}
1. Run the following command to bring up the Toolbox and Postgres containers:
```bash
docker-compose up -d
```
{{< notice tip >}}
You can use this setup to quickly set up Toolbox + Postgres to follow along in our
[Quickstart](../getting-started/local_quickstart.md)
{{< /notice >}}
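To confirm everything came up, you can check the container status and probe the
Toolbox port (a quick sanity check; the HTTP status code returned for the root
path may vary by route):
```bash
docker-compose ps
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:5000
```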
## Connecting with Toolbox Client SDK
Next, we will use Toolbox with the Client SDKs:
1. The url for the Toolbox server running using docker-compose will be:
```
http://localhost:5000
```
1. Import and initialize the client with the URL:
{{< tabpane persist=header >}}
{{< tab header="LangChain" lang="Python" >}}
from toolbox_langchain import ToolboxClient

# Replace with the URL of your Toolbox server from the previous step
async with ToolboxClient("http://$YOUR_URL") as toolbox:
    ...  # e.g. tools = await toolbox.aload_toolset()
{{< /tab >}}
{{< tab header="Llamaindex" lang="Python" >}}
from toolbox_llamaindex import ToolboxClient

# Replace with the URL of your Toolbox server from the previous step
async with ToolboxClient("http://$YOUR_URL") as toolbox:
    ...  # e.g. tools = await toolbox.aload_toolset()
{{< /tab >}}
{{< /tabpane >}}
# Export Telemetry
How to set up and configure Toolbox to use the Otel Collector.
## About
The [OpenTelemetry Collector][about-collector] offers a vendor-agnostic
implementation of how to receive, process and export telemetry data. It removes
the need to run, operate, and maintain multiple agents/collectors.
[about-collector]: https://opentelemetry.io/docs/collector/
## Configure the Collector
To configure the collector, you will have to provide a configuration file. The
configuration file consists of four classes of pipeline components that access
telemetry data:
- `Receivers`
- `Processors`
- `Exporters`
- `Connectors`
Example of setting up the classes of pipeline components (in this example, we
don't use connectors):
```yaml
receivers:
  otlp:
    protocols:
      http:
        endpoint: "127.0.0.1:4553"
exporters:
  googlecloud:
    project: <YOUR_PROJECT_ID>
processors:
  batch:
    send_batch_size: 200
```
After each pipeline component is configured, you will enable it within the
`service` section of the configuration file.
```yaml
service:
  pipelines:
    traces:
      receivers: ["otlp"]
      processors: ["batch"]
      exporters: ["googlecloud"]
```
## Running the Collector
There are several steps to run and use a Collector.
1. [Install the
Collector](https://opentelemetry.io/docs/collector/installation/) binary.
Pull a binary or Docker image for the OpenTelemetry contrib collector.
1. Set up credentials for telemetry backend.
1. Set up the Collector config. Below are some examples for setting up the
Collector config:
- [Google Cloud Exporter][google-cloud-exporter]
- [Google Managed Service for Prometheus Exporter][google-prometheus-exporter]
1. Run the Collector with the configuration file.
```bash
./otelcol-contrib --config=collector-config.yaml
```
1. Run Toolbox with the `--telemetry-otlp` flag. Configure it to send telemetry
to `http://127.0.0.1:4553` (for HTTP) or to your Collector's URL.
```bash
./toolbox --telemetry-otlp=http://127.0.0.1:4553
```
1. Once telemetry data is collected, you can view it in your telemetry
backend. If you are using GCP exporters, telemetry will be visible in the GCP
console at [Metrics Explorer][metrics-explorer] and [Trace
Explorer][trace-explorer].
{{< notice note >}}
If you are exporting to Google Cloud monitoring, we recommend that you use
the Google Cloud Exporter for traces and the Google Managed Service for
Prometheus Exporter for metrics.
{{< /notice >}}
[google-cloud-exporter]:
https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/googlecloudexporter
[google-prometheus-exporter]:
https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/googlemanagedprometheusexporter#example-configuration
[metrics-explorer]: https://console.cloud.google.com/monitoring/metrics-explorer
[trace-explorer]: https://console.cloud.google.com/traces
# Resources
List of reference documentation for resources in Toolbox.
# AuthServices
AuthServices represent services that handle authentication and authorization.
AuthServices represent services that handle authentication and authorization. They
can primarily be used by [Tools](../tools/) in two different ways:
- [**Authorized Invocation**][auth-invoke] is when a tool
is validated by the auth service before the call can be invoked. Toolbox
will reject any calls that fail to validate or have an invalid token.
- [**Authenticated Parameters**][auth-params] replace the value of a parameter
with a field from an [OIDC][openid-claims] claim. Toolbox will automatically
resolve the ID token provided by the client and replace the parameter in the
tool call.
[openid-claims]: https://openid.net/specs/openid-connect-core-1_0.html#StandardClaims
[auth-invoke]: ../tools/#authorized-invocations
[auth-params]: ../tools/#authenticated-parameters
## Example
The following configurations are placed at the top level of a `tools.yaml` file.
{{< notice tip >}}
If you are accessing Toolbox with multiple applications, each
application should register their own Client ID even if they use the same
"kind" of auth provider.
{{< /notice >}}
```yaml
authServices:
  my_auth_app_1:
    kind: google
    clientId: ${YOUR_CLIENT_ID_1}
  my_auth_app_2:
    kind: google
    clientId: ${YOUR_CLIENT_ID_2}
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
After you've configured an `authService`, you'll need to reference it in the
configuration for each tool that should use it:
- **Authorized Invocations** for authorizing a tool call, [use the
`authRequired` field in a tool config][auth-invoke]
- **Authenticated Parameters** for using the value from a OIDC claim, [use the
`authServices` field in a parameter config][auth-params]
## Specifying ID Tokens from Clients
After [configuring](#example) your `authServices` section, use a Toolbox SDK to
add your ID tokens to the header of a Tool invocation request. When specifying a
token, you will provide a function (that returns an ID token). This function is
called when the tool is invoked, which allows you to cache and refresh the ID
token as needed.
The primary method for providing these getters is via the `auth_token_getters`
parameter when loading tools, or the `add_auth_token_getter()` /
`add_auth_token_getters()` methods on a loaded tool object.
### Specifying tokens during load
#### Python
Use the [Python SDK](https://github.com/googleapis/mcp-toolbox-sdk-python/tree/main).
{{< tabpane persist=header >}}
{{< tab header="Core" lang="Python" >}}
import asyncio
from toolbox_core import ToolboxClient
async def get_auth_token():
    # ... Logic to retrieve ID token (e.g., from local storage, OAuth flow)
    # This example just returns a placeholder. Replace with your actual token retrieval.
    return "YOUR_ID_TOKEN"  # Placeholder

async def main():
    async with ToolboxClient("") as toolbox:
        auth_tool = await toolbox.load_tool(
            "get_sensitive_data",
            auth_token_getters={"my_auth_app_1": get_auth_token}
        )
        result = await auth_tool(param="value")
        print(result)

if __name__ == "__main__":
    asyncio.run(main())
{{< /tab >}}
{{< tab header="LangChain" lang="Python" >}}
import asyncio
from toolbox_langchain import ToolboxClient
async def get_auth_token():
    # ... Logic to retrieve ID token (e.g., from local storage, OAuth flow)
    # This example just returns a placeholder. Replace with your actual token retrieval.
    return "YOUR_ID_TOKEN"  # Placeholder

async def main():
    toolbox = ToolboxClient("")
    auth_tool = await toolbox.aload_tool(
        "get_sensitive_data",
        auth_token_getters={"my_auth_app_1": get_auth_token}
    )
    result = await auth_tool.ainvoke({"param": "value"})
    print(result)

if __name__ == "__main__":
    asyncio.run(main())
{{< /tab >}}
{{< tab header="Llamaindex" lang="Python" >}}
import asyncio
from toolbox_llamaindex import ToolboxClient
async def get_auth_token():
    # ... Logic to retrieve ID token (e.g., from local storage, OAuth flow)
    # This example just returns a placeholder. Replace with your actual token retrieval.
    return "YOUR_ID_TOKEN"  # Placeholder

async def main():
    toolbox = ToolboxClient("")
    auth_tool = await toolbox.aload_tool(
        "get_sensitive_data",
        auth_token_getters={"my_auth_app_1": get_auth_token}
    )
    # result = await auth_tool.acall(param="value")
    # print(result.content)

if __name__ == "__main__":
    asyncio.run(main())
{{< /tab >}}
{{< /tabpane >}}
#### Javascript/Typescript
Use the [JS SDK](https://github.com/googleapis/mcp-toolbox-sdk-js/tree/main).
```javascript
import { ToolboxClient } from '@toolbox-sdk/core';
async function getAuthToken() {
    // ... Logic to retrieve ID token (e.g., from local storage, OAuth flow)
    // This example just returns a placeholder. Replace with your actual token retrieval.
    return "YOUR_ID_TOKEN"; // Placeholder
}

const URL = 'http://127.0.0.1:5000';
let client = new ToolboxClient(URL);
const authTool = await client.loadTool("my-tool", {"my_auth_app_1": getAuthToken});
const result = await authTool({param: "value"});
console.log(result);
```
#### Go
Use the [Go SDK](https://github.com/googleapis/mcp-toolbox-sdk-go/tree/main).
```go
import "github.com/googleapis/mcp-toolbox-sdk-go/core"
import "fmt"
func getAuthToken() string {
// ... Logic to retrieve ID token (e.g., from local storage, OAuth flow)
// This example just returns a placeholder. Replace with your actual token retrieval.
return "YOUR_ID_TOKEN" // Placeholder
}
func main() {
URL := 'http://127.0.0.1:5000'
client, err := core.NewToolboxClient(URL)
if err != nil {
log.Fatalf("Failed to create Toolbox client: %v", err)
}
dynamicTokenSource := core.NewCustomTokenSource(getAuthToken)
authTool, err := client.LoadTool(
"my-tool",
ctx,
core.WithAuthTokenSource("my_auth_app_1", dynamicTokenSource))
if err != nil {
log.Fatalf("Failed to load tool: %v", err)
}
inputs := map[string]any{"param": "value"}
result, err := authTool.Invoke(ctx, inputs)
if err != nil {
log.Fatalf("Failed to invoke tool: %v", err)
}
fmt.Println(result)
}
```
### Specifying tokens for existing tools
#### Python
Use the [Python
SDK](https://github.com/googleapis/mcp-toolbox-sdk-python/tree/main).
{{< tabpane persist=header >}}
{{< tab header="Core" lang="Python" >}}
tools = await toolbox.load_toolset()
# for a single token
authorized_tool = tools[0].add_auth_token_getter("my_auth", get_auth_token)
# OR, if multiple tokens are needed
authorized_tool = tools[0].add_auth_token_getters({
"my_auth1": get_auth1_token,
"my_auth2": get_auth2_token,
})
{{< /tab >}}
{{< tab header="LangChain" lang="Python" >}}
tools = toolbox.load_toolset()
# for a single token
authorized_tool = tools[0].add_auth_token_getter("my_auth", get_auth_token)
# OR, if multiple tokens are needed
authorized_tool = tools[0].add_auth_token_getters({
"my_auth1": get_auth1_token,
"my_auth2": get_auth2_token,
})
{{< /tab >}}
{{< tab header="Llamaindex" lang="Python" >}}
tools = toolbox.load_toolset()
# for a single token
authorized_tool = tools[0].add_auth_token_getter("my_auth", get_auth_token)
# OR, if multiple tokens are needed
authorized_tool = tools[0].add_auth_token_getters({
"my_auth1": get_auth1_token,
"my_auth2": get_auth2_token,
})
{{< /tab >}}
{{< /tabpane >}}
#### Javascript/Typescript
Use the [JS SDK](https://github.com/googleapis/mcp-toolbox-sdk-js/tree/main).
```javascript
const URL = 'http://127.0.0.1:5000';
let client = new ToolboxClient(URL);
let tool = await client.loadTool("my-tool")
// for a single token
const authorizedTool = tool.addAuthTokenGetter("my_auth", getAuthToken)
// OR, if multiple tokens are needed
const multiAuthTool = tool.addAuthTokenGetters({
"my_auth_1": getAuthToken1,
"my_auth_2": getAuthToken2,
})
```
#### Go
Use the [Go SDK](https://github.com/googleapis/mcp-toolbox-sdk-go/tree/main).
```go
import "github.com/googleapis/mcp-toolbox-sdk-go/core"
func main() {
URL := 'http://127.0.0.1:5000'
client, err := core.NewToolboxClient(URL)
if err != nil {
log.Fatalf("Failed to create Toolbox client: %v", err)
}
tool, err := client.LoadTool("my-tool", ctx))
if err != nil {
log.Fatalf("Failed to load tool: %v", err)
}
dynamicTokenSource1 := core.NewCustomTokenSource(getAuthToken1)
dynamicTokenSource2 := core.NewCustomTokenSource(getAuthToken1)
// For a single token
authTool, err := tool.ToolFrom(
core.WithAuthTokenSource("my-auth", dynamicTokenSource),
)
// OR, if multiple tokens are needed
authTool, err := tool.ToolFrom(
core.WithAuthTokenSource("my-auth_1", dynamicTokenSource1),
core.WithAuthTokenSource("my-auth_2", dynamicTokenSource2),
)
}
```
## Kinds of Auth Services
# Google Sign-In
Use Google Sign-In for the OAuth 2.0 flow and token lifecycle.
## Getting Started
Google Sign-In manages the OAuth 2.0 flow and token lifecycle. To integrate the
Google Sign-In workflow into your web app, [follow this guide][gsi-setup].
After setting up the Google Sign-In workflow, you should have registered your
application and retrieved a [Client ID][client-id]. Configure your auth service
with the `Client ID`.
[gsi-setup]: https://developers.google.com/identity/sign-in/web/sign-in
[client-id]: https://developers.google.com/identity/sign-in/web/sign-in#create_authorization_credentials
## Behavior
### Authorized Invocations
When using [Authorized Invocations][auth-invoke], a tool will be
considered authorized if it has a valid OAuth 2.0 token that matches the Client
ID.
[auth-invoke]: ../tools/#authorized-invocations
### Authenticated Parameters
When using [Authenticated Parameters][auth-params], any [claim provided by the
ID token][provided-claims] can be used for the parameter.
[auth-params]: ../tools/#authenticated-parameters
[provided-claims]:
https://developers.google.com/identity/openid-connect/openid-connect#obtaininguserprofileinformation
## Example
```yaml
authServices:
  my-google-auth:
    kind: google
    clientId: ${YOUR_GOOGLE_CLIENT_ID}
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
## Reference
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|------------------------------------------------------------------|
| kind | string | true | Must be "google". |
| clientId | string | true | Client ID of your application from registering your application. |
# Sources
Sources represent your different data sources that a tool can interact with.
A Source represents a data source that a tool can interact with. You can define
Sources as a map in the `sources` section of your `tools.yaml` file. Typically,
a source configuration will contain any information needed to connect with and
interact with the database.
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
```yaml
sources:
  my-cloud-sql-source:
    kind: cloud-sql-postgres
    project: my-project-id
    region: us-central1
    instance: my-instance-name
    database: my_db
    user: ${USER_NAME}
    password: ${PASSWORD}
```
In implementation, each source is a different connection pool or client that is
used to connect to the database and execute the tool.
## Available Sources
# AlloyDB for PostgreSQL
AlloyDB for PostgreSQL is a fully-managed, PostgreSQL-compatible database for demanding transactional workloads.
## About
[AlloyDB for PostgreSQL][alloydb-docs] is a fully-managed, PostgreSQL-compatible
database for demanding transactional workloads. It provides enterprise-grade
performance and availability while maintaining 100% compatibility with
open-source PostgreSQL.
If you are new to AlloyDB for PostgreSQL, you can [create a free trial
cluster][alloydb-free-trial].
[alloydb-docs]: https://cloud.google.com/alloydb/docs
[alloydb-free-trial]: https://cloud.google.com/alloydb/docs/create-free-trial-cluster
## Available Tools
- [`alloydb-ai-nl`](../tools/alloydbainl/alloydb-ai-nl.md)
Use natural language queries on AlloyDB, powered by AlloyDB AI.
- [`postgres-sql`](../tools/postgres/postgres-sql.md)
Execute SQL queries as prepared statements in AlloyDB Postgres.
- [`postgres-execute-sql`](../tools/postgres/postgres-execute-sql.md)
Run parameterized SQL statements in AlloyDB Postgres.
- [`postgres-list-tables`](../tools/postgres/postgres-list-tables.md)
List tables in an AlloyDB for PostgreSQL database.
- [`postgres-list-active-queries`](../tools/postgres/postgres-list-active-queries.md)
List active queries in an AlloyDB for PostgreSQL database.
- [`postgres-list-available-extensions`](../tools/postgres/postgres-list-available-extensions.md)
List available extensions for installation in a PostgreSQL database.
- [`postgres-list-installed-extensions`](../tools/postgres/postgres-list-installed-extensions.md)
List installed extensions in a PostgreSQL database.
- [`postgres-list-views`](../tools/postgres/postgres-list-views.md)
List views in an AlloyDB for PostgreSQL database.
- [`postgres-list-schemas`](../tools/postgres/postgres-list-schemas.md)
List schemas in an AlloyDB for PostgreSQL database.
- [`postgres-database-overview`](../tools/postgres/postgres-database-overview.md)
Fetches the current state of the PostgreSQL server.
- [`postgres-list-triggers`](../tools/postgres/postgres-list-triggers.md)
List triggers in an AlloyDB for PostgreSQL database.
- [`postgres-list-indexes`](../tools/postgres/postgres-list-indexes.md)
List available user indexes in a PostgreSQL database.
- [`postgres-list-sequences`](../tools/postgres/postgres-list-sequences.md)
List sequences in a PostgreSQL database.
- [`postgres-long-running-transactions`](../tools/postgres/postgres-long-running-transactions.md)
List long running transactions in a PostgreSQL database.
- [`postgres-list-locks`](../tools/postgres/postgres-list-locks.md)
List lock stats in a PostgreSQL database.
- [`postgres-replication-stats`](../tools/postgres/postgres-replication-stats.md)
List replication stats in a PostgreSQL database.
- [`postgres-list-query-stats`](../tools/postgres/postgres-list-query-stats.md)
List query statistics in a PostgreSQL database.
- [`postgres-get-column-cardinality`](../tools/postgres/postgres-get-column-cardinality.md)
List cardinality of columns in a table in a PostgreSQL database.
- [`postgres-list-pg-settings`](../tools/postgres/postgres-list-pg-settings.md)
List configuration parameters for the PostgreSQL server.
- [`postgres-list-database-stats`](../tools/postgres/postgres-list-database-stats.md)
Lists the key performance and activity statistics for each database in the AlloyDB
instance.
### Pre-built Configurations
- [AlloyDB using MCP](https://googleapis.github.io/genai-toolbox/how-to/connect-ide/alloydb_pg_mcp/)
Connect your IDE to AlloyDB using Toolbox.
- [AlloyDB Admin API using MCP](https://googleapis.github.io/genai-toolbox/how-to/connect-ide/alloydb_pg_admin_mcp/)
Create your AlloyDB database with MCP Toolbox.
## Requirements
### IAM Permissions
By default, the AlloyDB for PostgreSQL source uses the [AlloyDB Go
Connector][alloydb-go-conn] to authorize and establish mTLS connections to your
AlloyDB instance. The Go connector uses your [Application Default Credentials
(ADC)][adc] to authorize your connection to AlloyDB.
In addition to [setting the ADC for your server][set-adc], you need to ensure
the IAM identity has been given the following IAM roles (or corresponding
permissions):
- `roles/alloydb.client`
- `roles/serviceusage.serviceUsageConsumer`
[alloydb-go-conn]: https://github.com/GoogleCloudPlatform/alloydb-go-connector
[adc]: https://cloud.google.com/docs/authentication#adc
[set-adc]: https://cloud.google.com/docs/authentication/provide-credentials-adc
### Networking
AlloyDB supports connections both from external networks via the internet
([public IP][public-ip]) and from internal networks ([private IP][private-ip]).
For more information on choosing between the two options, see the AlloyDB page
[Connection overview][conn-overview].
You can configure the `ipType` parameter in your source configuration to
`public` or `private` to match your cluster's configuration. Regardless of which
you choose, all connections use IAM-based authorization and are encrypted with
mTLS.
[private-ip]: https://cloud.google.com/alloydb/docs/private-ip
[public-ip]: https://cloud.google.com/alloydb/docs/connect-public-ip
[conn-overview]: https://cloud.google.com/alloydb/docs/connection-overview
### Authentication
This source supports both password-based authentication and IAM
authentication (using your [Application Default Credentials][adc]).
#### Standard Authentication
To connect using user/password, [create
a PostgreSQL user][alloydb-users] and input your credentials in the `user` and
`password` fields.
```yaml
user: ${USER_NAME}
password: ${PASSWORD}
```
#### IAM Authentication
To connect using IAM authentication:
1. Prepare your database instance and user following this [guide][iam-guide].
2. You can choose one of two ways to log in:
- Specify your IAM email as the `user`.
- Leave your `user` field blank. Toolbox will fetch the [ADC][adc]
automatically and log in using the email associated with it.
3. Leave the `password` field blank.
[iam-guide]: https://cloud.google.com/alloydb/docs/database-users/manage-iam-auth
[alloydb-users]: https://cloud.google.com/alloydb/docs/database-users/about
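For example, an IAM-authenticated source omits the `password` field; leaving
`user` unset falls back to the email associated with your ADC (a minimal
sketch):

```yaml
sources:
  my-alloydb-iam-source:
    kind: alloydb-postgres
    project: my-project-id
    region: us-central1
    cluster: my-cluster
    instance: my-instance
    database: my_db
    # user: my-iam-user@example.com  # optional; defaults to the ADC email
```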
## Example
```yaml
sources:
my-alloydb-pg-source:
kind: alloydb-postgres
project: my-project-id
region: us-central1
cluster: my-cluster
instance: my-instance
database: my_db
user: ${USER_NAME}
password: ${PASSWORD}
# ipType: "public"
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
## Reference
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|--------------------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "alloydb-postgres". |
| project | string | true | Id of the GCP project that the cluster was created in (e.g. "my-project-id"). |
| region | string | true | Name of the GCP region that the cluster was created in (e.g. "us-central1"). |
| cluster | string | true | Name of the AlloyDB cluster (e.g. "my-cluster"). |
| instance | string | true | Name of the AlloyDB instance within the cluster (e.g. "my-instance"). |
| database | string | true | Name of the Postgres database to connect to (e.g. "my_db"). |
| user | string | false | Name of the Postgres user to connect as (e.g. "my-pg-user"). Defaults to IAM auth using [ADC][adc] email if unspecified. |
| password | string | false | Password of the Postgres user (e.g. "my-password"). Defaults to attempting IAM authentication if unspecified. |
| ipType | string | false | IP Type of the AlloyDB instance; must be one of `public` or `private`. Default: `public`. |
# AlloyDB Admin
The "alloydb-admin" source provides a client for the AlloyDB API.
## About
The `alloydb-admin` source provides a client to interact with the [Google
AlloyDB API](https://cloud.google.com/alloydb/docs/reference/rest). This allows
tools to perform administrative tasks on AlloyDB resources, such as managing
clusters, instances, and users.
Authentication can be handled in two ways:
1. **Application Default Credentials (ADC):** By default, the source uses ADC
to authenticate with the API.
2. **Client-side OAuth:** If `useClientOAuth` is set to `true`, the source will
expect an OAuth 2.0 access token to be provided by the client (e.g., a web
browser) for each request.
## Example
```yaml
sources:
my-alloydb-admin:
    kind: alloydb-admin
my-oauth-alloydb-admin:
kind: alloydb-admin
useClientOAuth: true
```
## Reference
| **field** | **type** | **required** | **description** |
| -------------- | :------: | :----------: | ---------------------------------------------------------------------------------------------------------------------------------------------- |
| kind | string | true | Must be "alloydb-admin". |
| defaultProject | string | false | The Google Cloud project ID to use for AlloyDB infrastructure tools. |
| useClientOAuth | boolean | false | If true, the source will use client-side OAuth for authorization. Otherwise, it will use Application Default Credentials. Defaults to `false`. |
# BigQuery
BigQuery is Google Cloud's fully managed, petabyte-scale, and cost-effective analytics data warehouse that lets you run analytics over vast amounts of data in near real time. With BigQuery, there's no infrastructure to set up or manage, letting you focus on finding meaningful insights using GoogleSQL and taking advantage of flexible pricing models across on-demand and flat-rate options.
# BigQuery Source
[BigQuery][bigquery-docs] is Google Cloud's fully managed, petabyte-scale,
and cost-effective analytics data warehouse that lets you run analytics
over vast amounts of data in near real time. With BigQuery, there's no
infrastructure to set up or manage, letting you focus on finding meaningful
insights using GoogleSQL and taking advantage of flexible pricing models
across on-demand and flat-rate options.
If you are new to BigQuery, you can try to
[load and query data with the bq tool][bigquery-quickstart-cli].
BigQuery uses [GoogleSQL][bigquery-googlesql] for querying data. GoogleSQL
is an ANSI-compliant structured query language (SQL) that is also implemented
for other Google Cloud services.
[bigquery-docs]: https://cloud.google.com/bigquery/docs
[bigquery-quickstart-cli]:
https://cloud.google.com/bigquery/docs/quickstarts/quickstart-command-line
[bigquery-googlesql]:
https://cloud.google.com/bigquery/docs/reference/standard-sql/
## Available Tools
- [`bigquery-analyze-contribution`](../tools/bigquery/bigquery-analyze-contribution.md)
Performs contribution analysis, also called key driver analysis in BigQuery.
- [`bigquery-conversational-analytics`](../tools/bigquery/bigquery-conversational-analytics.md)
Allows conversational interaction with a BigQuery source.
- [`bigquery-execute-sql`](../tools/bigquery/bigquery-execute-sql.md)
Execute structured queries using parameters.
- [`bigquery-forecast`](../tools/bigquery/bigquery-forecast.md)
Forecasts time series data in BigQuery.
- [`bigquery-get-dataset-info`](../tools/bigquery/bigquery-get-dataset-info.md)
Retrieve metadata for a specific dataset.
- [`bigquery-get-table-info`](../tools/bigquery/bigquery-get-table-info.md)
Retrieve metadata for a specific table.
- [`bigquery-list-dataset-ids`](../tools/bigquery/bigquery-list-dataset-ids.md)
List available dataset IDs.
- [`bigquery-list-table-ids`](../tools/bigquery/bigquery-list-table-ids.md)
List tables in a given dataset.
- [`bigquery-sql`](../tools/bigquery/bigquery-sql.md)
Run SQL queries directly against BigQuery datasets.
- [`bigquery-search-catalog`](../tools/bigquery/bigquery-search-catalog.md)
  List all entries in Dataplex Catalog (e.g. tables, views, models) that match a
  given user query.
### Pre-built Configurations
- [BigQuery using
MCP](https://googleapis.github.io/genai-toolbox/how-to/connect-ide/bigquery_mcp/)
Connect your IDE to BigQuery using Toolbox.
## Requirements
### IAM Permissions
BigQuery uses [Identity and Access Management (IAM)][iam-overview] to control
user and group access to BigQuery resources like projects, datasets, and tables.
### Authentication via Application Default Credentials (ADC)
By **default**, Toolbox will use your [Application Default Credentials
(ADC)][adc] to authorize and authenticate when interacting with
[BigQuery][bigquery-docs].
When using this method, you need to ensure the IAM identity associated with your
ADC (such as a service account) has the correct permissions for the queries you
intend to run. Common roles include `roles/bigquery.user` (which includes
permissions to run jobs and read data) or `roles/bigquery.dataViewer`.
Follow this [guide][set-adc] to set up your ADC.
### Authentication via User's OAuth Access Token
If the `useClientOAuth` parameter is set to `true`, Toolbox will instead use the
OAuth access token for authentication. This token is parsed from the
`Authorization` header passed in with the tool invocation request. This method
allows Toolbox to make queries to [BigQuery][bigquery-docs] on behalf of the
client or the end-user.
When using this on-behalf-of authentication, you must ensure that the
identity used has been granted the correct IAM permissions.
[iam-overview]: https://cloud.google.com/bigquery/docs/access-control
[adc]: https://cloud.google.com/docs/authentication#adc
[set-adc]: https://cloud.google.com/docs/authentication/provide-credentials-adc
## Example
Initialize a BigQuery source that uses ADC:
```yaml
sources:
my-bigquery-source:
kind: "bigquery"
project: "my-project-id"
# location: "US" # Optional: Specifies the location for query jobs.
# writeMode: "allowed" # One of: allowed, blocked, protected. Defaults to "allowed".
# allowedDatasets: # Optional: Restricts tool access to a specific list of datasets.
# - "my_dataset_1"
# - "other_project.my_dataset_2"
# impersonateServiceAccount: "service-account@project-id.iam.gserviceaccount.com" # Optional: Service account to impersonate
```
Initialize a BigQuery source that uses the client's access token:
```yaml
sources:
my-bigquery-client-auth-source:
kind: "bigquery"
project: "my-project-id"
useClientOAuth: true
# location: "US" # Optional: Specifies the location for query jobs.
# writeMode: "allowed" # One of: allowed, blocked, protected. Defaults to "allowed".
# allowedDatasets: # Optional: Restricts tool access to a specific list of datasets.
# - "my_dataset_1"
# - "other_project.my_dataset_2"
# impersonateServiceAccount: "service-account@project-id.iam.gserviceaccount.com" # Optional: Service account to impersonate
```
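Tools can then reference either source by name. Below is a minimal sketch of a
`bigquery-sql` tool with a named query parameter; the dataset, table, and
column are illustrative:

```yaml
tools:
  count-recent-orders:
    kind: bigquery-sql
    source: my-bigquery-source
    description: Count orders placed on or after a given date.
    statement: SELECT COUNT(*) AS n FROM my_dataset.orders WHERE order_date >= @order_date
    parameters:
      - name: order_date
        type: string
        description: Cutoff date in YYYY-MM-DD format.
```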
## Reference
| **field** | **type** | **required** | **description** |
|---------------------------|:--------:|:------------:|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "bigquery". |
| project | string | true | Id of the Google Cloud project to use for billing and as the default project for BigQuery resources. |
| location | string | false | Specifies the location (e.g., 'us', 'asia-northeast1') in which to run the query job. This location must match the location of any tables referenced in the query. Defaults to the table's location or 'US' if the location cannot be determined. [Learn More](https://cloud.google.com/bigquery/docs/locations) |
| writeMode | string | false | Controls the write behavior for tools. `allowed` (default): All queries are permitted. `blocked`: Only `SELECT` statements are allowed for the `bigquery-execute-sql` tool. `protected`: Enables session-based execution where all tools associated with this source instance share the same [BigQuery session](https://cloud.google.com/bigquery/docs/sessions-intro). This allows for stateful operations using temporary tables (e.g., `CREATE TEMP TABLE`). For `bigquery-execute-sql`, `SELECT` statements can be used on all tables, but write operations are restricted to the session's temporary dataset. For tools like `bigquery-sql`, `bigquery-forecast`, and `bigquery-analyze-contribution`, the `writeMode` restrictions do not apply, but they will operate within the shared session. **Note:** The `protected` mode cannot be used with `useClientOAuth: true`. It is also not recommended for multi-user server environments, as all users would share the same session. A session is terminated automatically after 24 hours of inactivity or after 7 days, whichever comes first. A new session is created on the next request, and any temporary data from the previous session will be lost. |
| allowedDatasets | []string | false | An optional list of dataset IDs that tools using this source are allowed to access. If provided, any tool operation attempting to access a dataset not in this list will be rejected. To enforce this, two types of operations are also disallowed: 1) Dataset-level operations (e.g., `CREATE SCHEMA`), and 2) operations where table access cannot be statically analyzed (e.g., `EXECUTE IMMEDIATE`, `CREATE PROCEDURE`). If a single dataset is provided, it will be treated as the default for prebuilt tools. |
| useClientOAuth | bool | false | If true, forwards the client's OAuth access token from the "Authorization" header to downstream queries. **Note:** This cannot be used with `writeMode: protected`. |
| impersonateServiceAccount | string | false | Service account email to impersonate when making BigQuery and Dataplex API calls. The authenticated principal must have the `roles/iam.serviceAccountTokenCreator` role on the target service account. [Learn More](https://cloud.google.com/iam/docs/service-account-impersonation) |
# Bigtable
Bigtable is a low-latency NoSQL database service for machine learning, operational analytics, and user-facing operations. It's a wide-column, key-value store that can scale to billions of rows and thousands of columns. With Bigtable, you can replicate your data to regions across the world for high availability and data resiliency.
# Bigtable Source
[Bigtable][bigtable-docs] is a low-latency NoSQL database service for machine
learning, operational analytics, and user-facing operations. It's a wide-column,
key-value store that can scale to billions of rows and thousands of columns.
With Bigtable, you can replicate your data to regions across the world for high
availability and data resiliency.
If you are new to Bigtable, you can try to [create an instance and write data
with the cbt CLI][bigtable-quickstart-with-cli].
You can use [GoogleSQL statements][bigtable-googlesql] to query your Bigtable
data. GoogleSQL is an ANSI-compliant structured query language (SQL) that is
also implemented for other Google Cloud services. SQL queries are handled by
cluster nodes in the same way as NoSQL data requests. Therefore, the same best
practices apply when creating SQL queries to run against your Bigtable data,
such as avoiding full table scans or complex filters.
[bigtable-docs]: https://cloud.google.com/bigtable/docs
[bigtable-quickstart-with-cli]:
https://cloud.google.com/bigtable/docs/create-instance-write-data-cbt-cli
[bigtable-googlesql]:
https://cloud.google.com/bigtable/docs/googlesql-overview
## Available Tools
- [`bigtable-sql`](../tools/bigtable/bigtable-sql.md)
Run SQL-like queries over Bigtable rows.
## Requirements
### IAM Permissions
Bigtable uses [Identity and Access Management (IAM)][iam-overview] to control
user and group access to Bigtable resources at the project, instance, table, and
backup level. Toolbox will use your [Application Default Credentials (ADC)][adc]
to authorize and authenticate when interacting with [Bigtable][bigtable-docs].
In addition to [setting the ADC for your server][set-adc], you need to ensure
the IAM identity has been given the correct IAM permissions for the query
provided. See [Apply IAM roles][grant-permissions] for more information on
applying IAM permissions and roles to an identity.
[iam-overview]: https://cloud.google.com/bigtable/docs/access-control
[adc]: https://cloud.google.com/docs/authentication#adc
[set-adc]: https://cloud.google.com/docs/authentication/provide-credentials-adc
[grant-permissions]: https://cloud.google.com/bigtable/docs/access-control#iam-management-instance
## Example
```yaml
sources:
my-bigtable-source:
kind: "bigtable"
project: "my-project-id"
instance: "test-instance"
```
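A `bigtable-sql` tool can then reference this source. The sketch below assumes
an illustrative `user_profiles` table keyed by user ID (the row key `_key` is a
bytes column, so the string parameter is cast before comparison):

```yaml
tools:
  get-user-profile:
    kind: bigtable-sql
    source: my-bigtable-source
    description: Look up a user profile row by its row key.
    statement: SELECT * FROM user_profiles WHERE _key = CAST(@user_id AS BYTES);
    parameters:
      - name: user_id
        type: string
        description: Row key of the user profile.
```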
## Reference
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|-------------------------------------------------------------------------------|
| kind | string | true | Must be "bigtable". |
| project  | string   |     true     | Id of the GCP project that the instance was created in (e.g. "my-project-id"). |
| instance | string | true | Name of the Bigtable instance. |
# Cassandra
Apache Cassandra is a NoSQL distributed database known for its horizontal scalability, distributed architecture, and flexible schema definition.
## About
[Apache Cassandra][cassandra-docs] is a NoSQL distributed database. By design,
NoSQL databases are lightweight, open-source, non-relational, and largely
distributed. Counted among their strengths are horizontal scalability,
distributed architectures, and a flexible approach to schema definition.
[cassandra-docs]: https://cassandra.apache.org/
## Available Tools
- [`cassandra-cql`](../tools/cassandra/cassandra-cql.md)
Run parameterized CQL queries in Cassandra.
## Example
```yaml
sources:
my-cassandra-source:
kind: cassandra
hosts:
- 127.0.0.1
keyspace: my_keyspace
protoVersion: 4
username: ${USER_NAME}
password: ${PASSWORD}
caPath: /path/to/ca.crt # Optional: path to CA certificate
certPath: /path/to/client.crt # Optional: path to client certificate
keyPath: /path/to/client.key # Optional: path to client key
enableHostVerification: true # Optional: enable host verification
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
## Reference
| **field** | **type** | **required** | **description** |
|------------------------|:--------:|:------------:|----------------------------------------------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "cassandra". |
| hosts | string[] | true | List of IP addresses to connect to (e.g., ["192.168.1.1:9042", "192.168.1.2:9042","192.168.1.3:9042"]). The default port is 9042 if not specified. |
| keyspace | string | true | Name of the Cassandra keyspace to connect to (e.g., "my_keyspace"). |
| protoVersion | integer | false | Protocol version for the Cassandra connection (e.g., 4). |
| username | string | false | Name of the Cassandra user to connect as (e.g., "my-cassandra-user"). |
| password | string | false | Password of the Cassandra user (e.g., "my-password"). |
| caPath | string | false | Path to the CA certificate for SSL/TLS (e.g., "/path/to/ca.crt"). |
| certPath | string | false | Path to the client certificate for SSL/TLS (e.g., "/path/to/client.crt"). |
| keyPath | string | false | Path to the client key for SSL/TLS (e.g., "/path/to/client.key"). |
| enableHostVerification | boolean | false | Enable host verification for SSL/TLS (e.g., true). By default, host verification is disabled. |
# ClickHouse
ClickHouse is an open-source, column-oriented OLAP database.
## About
[ClickHouse][clickhouse-docs] is a fast, open-source, column-oriented database
management system designed for online analytical processing (OLAP).
[clickhouse-docs]: https://clickhouse.com/docs
## Available Tools
- [`clickhouse-execute-sql`](../tools/clickhouse/clickhouse-execute-sql.md)
Execute parameterized SQL queries in ClickHouse with query logging.
- [`clickhouse-sql`](../tools/clickhouse/clickhouse-sql.md)
Execute SQL queries as prepared statements in ClickHouse.
## Requirements
### Database User
This source uses standard ClickHouse authentication. You will need to [create a
ClickHouse user][clickhouse-users] (or use [ClickHouse Cloud][clickhouse-cloud])
to connect to the database. The user should have appropriate permissions for the
operations you plan to perform.
[clickhouse-cloud]:
https://clickhouse.com/docs/getting-started/quick-start/cloud#connect-with-your-app
[clickhouse-users]: https://clickhouse.com/docs/en/sql-reference/statements/create/user
### Network Access
ClickHouse supports multiple protocols:
- **HTTPS protocol** (default port 8443) - Secure, TLS-encrypted HTTP access (default)
- **HTTP protocol** (default port 8123) - Unencrypted HTTP access
## Example
### Secure Connection Example
```yaml
sources:
secure-clickhouse-source:
kind: clickhouse
host: clickhouse.example.com
port: "8443"
database: analytics
user: ${CLICKHOUSE_USER}
password: ${CLICKHOUSE_PASSWORD}
protocol: https
secure: true
```
### HTTP Protocol Example
```yaml
sources:
http-clickhouse-source:
kind: clickhouse
host: localhost
port: "8123"
database: logs
user: ${CLICKHOUSE_USER}
password: ${CLICKHOUSE_PASSWORD}
protocol: http
secure: false
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
## Reference
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|-------------------------------------------------------------------------------------|
| kind | string | true | Must be "clickhouse". |
| host | string | true | IP address or hostname to connect to (e.g. "127.0.0.1" or "clickhouse.example.com") |
| port | string | true | Port to connect to (e.g. "8443" for HTTPS, "8123" for HTTP) |
| database | string | true | Name of the ClickHouse database to connect to (e.g. "my_database"). |
| user | string | true | Name of the ClickHouse user to connect as (e.g. "analytics_user"). |
| password | string | false | Password of the ClickHouse user (e.g. "my-password"). |
| protocol | string | false | Connection protocol: "https" (default) or "http". |
| secure | boolean | false | Whether to use a secure connection (TLS). Default: false. |
# Cloud Healthcare API
The Cloud Healthcare API provides a managed solution for storing and accessing healthcare data in Google Cloud, providing a critical bridge between existing care systems and applications hosted on Google Cloud.
## About
The [Cloud Healthcare API][healthcare-docs] provides a managed solution
for storing and accessing healthcare data in Google Cloud, providing a
critical bridge between existing care systems and applications hosted on
Google Cloud. It supports healthcare data standards such as HL7® FHIR®,
HL7® v2, and DICOM®. It provides a fully managed, highly scalable,
enterprise-grade development environment for building clinical and analytics
solutions securely on Google Cloud.
A dataset is a container in your Google Cloud project that holds modality-specific
healthcare data. Datasets contain other data stores, such as FHIR stores and DICOM
stores, which in turn hold their own types of healthcare data.
A single dataset can contain one or many data stores, and those stores can all
service the same modality or different modalities as application needs dictate.
Using multiple stores in the same dataset might be appropriate in various
situations.
If you are new to the Cloud Healthcare API, you can try to
[create and view datasets and stores using curl][healthcare-quickstart-curl].
[healthcare-docs]: https://cloud.google.com/healthcare/docs
[healthcare-quickstart-curl]:
https://cloud.google.com/healthcare-api/docs/store-healthcare-data-rest
## Available Tools
- [`cloud-healthcare-get-dataset`](../tools/cloudhealthcare/cloud-healthcare-get-dataset.md)
Retrieves a dataset’s details.
- [`cloud-healthcare-list-fhir-stores`](../tools/cloudhealthcare/cloud-healthcare-list-fhir-stores.md)
Lists the available FHIR stores in the healthcare dataset.
- [`cloud-healthcare-list-dicom-stores`](../tools/cloudhealthcare/cloud-healthcare-list-dicom-stores.md)
Lists the available DICOM stores in the healthcare dataset.
- [`cloud-healthcare-get-fhir-store`](../tools/cloudhealthcare/cloud-healthcare-get-fhir-store.md)
Retrieves information about a FHIR store.
- [`cloud-healthcare-get-fhir-store-metrics`](../tools/cloudhealthcare/cloud-healthcare-get-fhir-store-metrics.md)
Retrieves metrics for a FHIR store.
- [`cloud-healthcare-get-fhir-resource`](../tools/cloudhealthcare/cloud-healthcare-get-fhir-resource.md)
Retrieves a specific FHIR resource from a FHIR store.
- [`cloud-healthcare-fhir-patient-search`](../tools/cloudhealthcare/cloud-healthcare-fhir-patient-search.md)
Searches for patients in a FHIR store based on a set of criteria.
- [`cloud-healthcare-fhir-patient-everything`](../tools/cloudhealthcare/cloud-healthcare-fhir-patient-everything.md)
Retrieves all information for a given patient.
- [`cloud-healthcare-fhir-fetch-page`](../tools/cloudhealthcare/cloud-healthcare-fhir-fetch-page.md)
Fetches a page of FHIR resources from a given URL.
- [`cloud-healthcare-get-dicom-store`](../tools/cloudhealthcare/cloud-healthcare-get-dicom-store.md)
Retrieves information about a DICOM store.
- [`cloud-healthcare-get-dicom-store-metrics`](../tools/cloudhealthcare/cloud-healthcare-get-dicom-store-metrics.md)
Retrieves metrics for a DICOM store.
- [`cloud-healthcare-search-dicom-studies`](../tools/cloudhealthcare/cloud-healthcare-search-dicom-studies.md)
Searches for DICOM studies in a DICOM store.
- [`cloud-healthcare-search-dicom-series`](../tools/cloudhealthcare/cloud-healthcare-search-dicom-series.md)
Searches for DICOM series in a DICOM store.
- [`cloud-healthcare-search-dicom-instances`](../tools/cloudhealthcare/cloud-healthcare-search-dicom-instances.md)
Searches for DICOM instances in a DICOM store.
- [`cloud-healthcare-retrieve-rendered-dicom-instance`](../tools/cloudhealthcare/cloud-healthcare-retrieve-rendered-dicom-instance.md)
Retrieves a rendered DICOM instance from a DICOM store.
## Requirements
### IAM Permissions
The Cloud Healthcare API uses [Identity and Access Management
(IAM)][iam-overview] to control user and group access to Cloud Healthcare
resources like projects, datasets, and stores.
### Authentication via Application Default Credentials (ADC)
By **default**, Toolbox will use your [Application Default Credentials
(ADC)][adc] to authorize and authenticate when interacting with the
[Cloud Healthcare API][healthcare-docs].
When using this method, you need to ensure the IAM identity associated with your
ADC (such as a service account) has the correct permissions for the queries you
intend to run. Common roles include `roles/healthcare.fhirResourceReader` (which
includes permissions to read and search for FHIR resources) or
`roles/healthcare.dicomViewer` (for retrieving DICOM images).
Follow this [guide][set-adc] to set up your ADC.
### Authentication via User's OAuth Access Token
If the `useClientOAuth` parameter is set to `true`, Toolbox will instead use the
OAuth access token for authentication. This token is parsed from the
`Authorization` header passed in with the tool invocation request. This method
allows Toolbox to make queries to the [Cloud Healthcare API][healthcare-docs] on
behalf of the client or the end-user.
When using this on-behalf-of authentication, you must ensure that the
identity used has been granted the correct IAM permissions.
[iam-overview]: https://cloud.google.com/iam/docs/overview
[adc]: https://cloud.google.com/docs/authentication#adc
[set-adc]: https://cloud.google.com/docs/authentication/provide-credentials-adc
## Example
Initialize a Cloud Healthcare API source that uses ADC:
```yaml
sources:
my-healthcare-source:
kind: "cloud-healthcare"
project: "my-project-id"
region: "us-central1"
dataset: "my-healthcare-dataset-id"
# allowedFhirStores: # Optional: Restricts tool access to a specific list of FHIR store IDs.
# - "my_fhir_store_1"
# allowedDicomStores: # Optional: Restricts tool access to a specific list of DICOM store IDs.
# - "my_dicom_store_1"
# - "my_dicom_store_2"
```
Initialize a Cloud Healthcare API source that uses the client's access token:
```yaml
sources:
my-healthcare-client-auth-source:
kind: "cloud-healthcare"
project: "my-project-id"
region: "us-central1"
dataset: "my-healthcare-dataset-id"
useClientOAuth: true
# allowedFhirStores: # Optional: Restricts tool access to a specific list of FHIR store IDs.
# - "my_fhir_store_1"
# allowedDicomStores: # Optional: Restricts tool access to a specific list of DICOM store IDs.
# - "my_dicom_store_1"
# - "my_dicom_store_2"
```
## Reference
| **field** | **type** | **required** | **description** |
|--------------------|:--------:|:------------:|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "cloud-healthcare". |
| project | string | true | ID of the GCP project that the dataset lives in. |
| region | string | true | Specifies the region (e.g., 'us', 'asia-northeast1') of the healthcare dataset. [Learn More](https://cloud.google.com/healthcare-api/docs/regions) |
| dataset | string | true | ID of the healthcare dataset. |
| allowedFhirStores | []string | false | An optional list of FHIR store IDs that tools using this source are allowed to access. If provided, any tool operation attempting to access a store not in this list will be rejected. If a single store is provided, it will be treated as the default for prebuilt tools. |
| allowedDicomStores | []string | false | An optional list of DICOM store IDs that tools using this source are allowed to access. If provided, any tool operation attempting to access a store not in this list will be rejected. If a single store is provided, it will be treated as the default for prebuilt tools. |
| useClientOAuth | bool | false | If true, forwards the client's OAuth access token from the "Authorization" header to downstream queries. |
# Cloud Monitoring
A "cloud-monitoring" source provides a client for the Cloud Monitoring API.
## About
The `cloud-monitoring` source provides a client to interact with the [Google
Cloud Monitoring API](https://cloud.google.com/monitoring/api). This allows
tools to query Cloud Monitoring metrics and run PromQL queries.
Authentication can be handled in two ways:
1. **Application Default Credentials (ADC):** By default, the source uses ADC
to authenticate with the API.
2. **Client-side OAuth:** If `useClientOAuth` is set to `true`, the source will
expect an OAuth 2.0 access token to be provided by the client (e.g., a web
browser) for each request.
## Example
```yaml
sources:
my-cloud-monitoring:
kind: cloud-monitoring
my-oauth-cloud-monitoring:
kind: cloud-monitoring
useClientOAuth: true
```
## Reference
| **field** | **type** | **required** | **description** |
|----------------|:--------:|:------------:|------------------------------------------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "cloud-monitoring". |
| useClientOAuth | boolean | false | If true, the source will use client-side OAuth for authorization. Otherwise, it will use Application Default Credentials. Defaults to `false`. |
# Cloud SQL for MySQL
Cloud SQL for MySQL is a fully-managed database service for MySQL.
## About
[Cloud SQL for MySQL][csql-mysql-docs] is a fully-managed database service
that helps you set up, maintain, manage, and administer your MySQL
relational databases on Google Cloud Platform.
If you are new to Cloud SQL for MySQL, you can try [creating and connecting
to a database by following these instructions][csql-mysql-quickstart].
[csql-mysql-docs]: https://cloud.google.com/sql/docs/mysql
[csql-mysql-quickstart]: https://cloud.google.com/sql/docs/mysql/connect-instance-local-computer
## Available Tools
- [`mysql-sql`](../tools/mysql/mysql-sql.md)
Execute pre-defined prepared SQL queries in Cloud SQL for MySQL.
- [`mysql-execute-sql`](../tools/mysql/mysql-execute-sql.md)
Run parameterized SQL queries in Cloud SQL for MySQL.
- [`mysql-list-active-queries`](../tools/mysql/mysql-list-active-queries.md)
List active queries in Cloud SQL for MySQL.
- [`mysql-list-tables`](../tools/mysql/mysql-list-tables.md)
List tables in a Cloud SQL for MySQL database.
- [`mysql-list-tables-missing-unique-indexes`](../tools/mysql/mysql-list-tables-missing-unique-indexes.md)
List tables in a Cloud SQL for MySQL database that do not have primary or unique indices.
- [`mysql-list-table-fragmentation`](../tools/mysql/mysql-list-table-fragmentation.md)
List table fragmentation in Cloud SQL for MySQL tables.
### Pre-built Configurations
- [Cloud SQL for MySQL using
MCP](https://googleapis.github.io/genai-toolbox/how-to/connect-ide/cloud_sql_mysql_mcp/)
Connect your IDE to Cloud SQL for MySQL using Toolbox.
## Requirements
### IAM Permissions
By default, this source uses the [Cloud SQL Go Connector][csql-go-conn] to
authorize and establish mTLS connections to your Cloud SQL instance. The Go
connector uses your [Application Default Credentials (ADC)][adc] to authorize
your connection to Cloud SQL.
In addition to [setting the ADC for your server][set-adc], you need to ensure
the IAM identity has been given the following IAM roles (or corresponding
permissions):
- `roles/cloudsql.client`
{{< notice tip >}}
If you are connecting from Compute Engine, make sure your VM
also has the [proper
scope](https://cloud.google.com/compute/docs/access/service-accounts#accesscopesiam)
to connect using the Cloud SQL Admin API.
{{< /notice >}}
[csql-go-conn]: https://github.com/GoogleCloudPlatform/cloud-sql-go-connector
[adc]: https://cloud.google.com/docs/authentication#adc
[set-adc]: https://cloud.google.com/docs/authentication/provide-credentials-adc
### Networking
Cloud SQL supports connections both from external networks via the internet
([public IP][public-ip]) and from internal networks ([private IP][private-ip]).
For more information on choosing between the two options, see the Cloud SQL page
[Connection overview][conn-overview].
You can configure the `ipType` parameter in your source configuration to
`public` or `private` to match your instance's configuration. Regardless of
which you choose, all connections use IAM-based authorization and are encrypted
with mTLS.
[private-ip]: https://cloud.google.com/sql/docs/mysql/configure-private-ip
[public-ip]: https://cloud.google.com/sql/docs/mysql/configure-ip
[conn-overview]: https://cloud.google.com/sql/docs/mysql/connect-overview
### Database User
Currently, this source only uses standard authentication. You will need to
[create a MySQL user][cloud-sql-users] to log in to the database.
[cloud-sql-users]: https://cloud.google.com/sql/docs/mysql/create-manage-users
## Example
```yaml
sources:
my-cloud-sql-mysql-source:
kind: cloud-sql-mysql
project: my-project-id
region: us-central1
instance: my-instance
database: my_db
user: ${USER_NAME}
password: ${PASSWORD}
# ipType: "private"
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
## Reference
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "cloud-sql-mysql". |
| project  | string   |     true     | Id of the GCP project that the instance was created in (e.g. "my-project-id").                        |
| region   | string   |     true     | Name of the GCP region that the instance was created in (e.g. "us-central1").                         |
| instance | string   |     true     | Name of the Cloud SQL instance (e.g. "my-instance").                                                  |
| database | string   |     true     | Name of the MySQL database to connect to (e.g. "my_db").                                              |
| user     | string   |     true     | Name of the MySQL user to connect as (e.g. "my-mysql-user").                                          |
| password | string   |     true     | Password of the MySQL user (e.g. "my-password").                                                      |
| ipType   | string   |    false     | IP Type of the Cloud SQL instance; must be one of `public`, `private`, or `psc`. Default: `public`.   |
# Cloud SQL for PostgreSQL
Cloud SQL for PostgreSQL is a fully-managed database service for Postgres.
## About
[Cloud SQL for PostgreSQL][csql-pg-docs] is a fully-managed database service
that helps you set up, maintain, manage, and administer your PostgreSQL
relational databases on Google Cloud Platform.
If you are new to Cloud SQL for PostgreSQL, you can try [creating and connecting
to a database by following these instructions][csql-pg-quickstart].
[csql-pg-docs]: https://cloud.google.com/sql/docs/postgres
[csql-pg-quickstart]:
https://cloud.google.com/sql/docs/postgres/connect-instance-local-computer
## Available Tools
- [`postgres-sql`](../tools/postgres/postgres-sql.md)
Execute SQL queries as prepared statements in PostgreSQL.
- [`postgres-execute-sql`](../tools/postgres/postgres-execute-sql.md)
Run parameterized SQL statements in PostgreSQL.
- [`postgres-list-tables`](../tools/postgres/postgres-list-tables.md)
List tables in a PostgreSQL database.
- [`postgres-list-active-queries`](../tools/postgres/postgres-list-active-queries.md)
List active queries in a PostgreSQL database.
- [`postgres-list-available-extensions`](../tools/postgres/postgres-list-available-extensions.md)
List available extensions for installation in a PostgreSQL database.
- [`postgres-list-installed-extensions`](../tools/postgres/postgres-list-installed-extensions.md)
List installed extensions in a PostgreSQL database.
- [`postgres-list-views`](../tools/postgres/postgres-list-views.md)
List views in a PostgreSQL database.
- [`postgres-list-schemas`](../tools/postgres/postgres-list-schemas.md)
List schemas in a PostgreSQL database.
- [`postgres-database-overview`](../tools/postgres/postgres-database-overview.md)
Fetches the current state of the PostgreSQL server.
- [`postgres-list-triggers`](../tools/postgres/postgres-list-triggers.md)
List triggers in a PostgreSQL database.
- [`postgres-list-indexes`](../tools/postgres/postgres-list-indexes.md)
List available user indexes in a PostgreSQL database.
- [`postgres-list-sequences`](../tools/postgres/postgres-list-sequences.md)
List sequences in a PostgreSQL database.
- [`postgres-long-running-transactions`](../tools/postgres/postgres-long-running-transactions.md)
List long running transactions in a PostgreSQL database.
- [`postgres-list-locks`](../tools/postgres/postgres-list-locks.md)
List lock stats in a PostgreSQL database.
- [`postgres-replication-stats`](../tools/postgres/postgres-replication-stats.md)
List replication stats in a PostgreSQL database.
- [`postgres-list-query-stats`](../tools/postgres/postgres-list-query-stats.md)
List query statistics in a PostgreSQL database.
- [`postgres-get-column-cardinality`](../tools/postgres/postgres-get-column-cardinality.md)
List cardinality of columns in a table in a PostgreSQL database.
- [`postgres-list-pg-settings`](../tools/postgres/postgres-list-pg-settings.md)
List configuration parameters for the PostgreSQL server.
- [`postgres-list-database-stats`](../tools/postgres/postgres-list-database-stats.md)
  Lists the key performance and activity statistics for each database in the
  PostgreSQL instance.
### Pre-built Configurations
- [Cloud SQL for Postgres using
MCP](https://googleapis.github.io/genai-toolbox/how-to/connect-ide/cloud_sql_pg_mcp/)
Connect your IDE to Cloud SQL for Postgres using Toolbox.
## Requirements
### IAM Permissions
By default, this source uses the [Cloud SQL Go Connector][csql-go-conn] to
authorize and establish mTLS connections to your Cloud SQL instance. The Go
connector uses your [Application Default Credentials (ADC)][adc] to authorize
your connection to Cloud SQL.
In addition to [setting the ADC for your server][set-adc], you need to ensure
the IAM identity has been given the following IAM roles (or corresponding
permissions):
- `roles/cloudsql.client`
{{< notice tip >}}
If you are connecting from Compute Engine, make sure your VM
also has the [proper
scope](https://cloud.google.com/compute/docs/access/service-accounts#accesscopesiam)
to connect using the Cloud SQL Admin API.
{{< /notice >}}
[csql-go-conn]: https://github.com/GoogleCloudPlatform/cloud-sql-go-connector
[adc]: https://cloud.google.com/docs/authentication#adc
[set-adc]: https://cloud.google.com/docs/authentication/provide-credentials-adc
### Networking
Cloud SQL supports connections both from external networks via the internet
([public IP][public-ip]) and from internal networks ([private IP][private-ip]).
For more information on choosing between the two options, see the Cloud SQL page
[Connection overview][conn-overview].
You can configure the `ipType` parameter in your source configuration to
`public` or `private` to match your instance's configuration. Regardless of
which you choose, all connections use IAM-based authorization and are encrypted
with mTLS.
[private-ip]: https://cloud.google.com/sql/docs/postgres/configure-private-ip
[public-ip]: https://cloud.google.com/sql/docs/postgres/configure-ip
[conn-overview]: https://cloud.google.com/sql/docs/postgres/connect-overview
### Authentication
This source supports both password-based authentication and IAM
authentication (using your [Application Default Credentials][adc]).
#### Standard Authentication
To connect using user/password, [create
a PostgreSQL user][cloudsql-users] and input your credentials in the `user` and
`password` fields.
```yaml
user: ${USER_NAME}
password: ${PASSWORD}
```
#### IAM Authentication
To connect using IAM authentication:
1. Prepare your database instance and user following this [guide][iam-guide].
2. You can choose one of two ways to log in:
- Specify your IAM email as the `user`.
- Leave your `user` field blank. Toolbox will fetch the [ADC][adc]
automatically and log in using the email associated with it.
3. Leave the `password` field blank.
[iam-guide]: https://cloud.google.com/sql/docs/postgres/iam-logins
[cloudsql-users]: https://cloud.google.com/sql/docs/postgres/create-manage-users
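For example, an IAM-authenticated source omits both credential fields; leaving
`user` unset falls back to the email associated with your ADC (a minimal
sketch):

```yaml
sources:
  my-cloud-sql-pg-iam-source:
    kind: cloud-sql-postgres
    project: my-project-id
    region: us-central1
    instance: my-instance
    database: my_db
    # user: my-iam-user@example.com  # optional; defaults to the ADC email
```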
## Example
```yaml
sources:
my-cloud-sql-pg-source:
kind: cloud-sql-postgres
project: my-project-id
region: us-central1
instance: my-instance
database: my_db
user: ${USER_NAME}
password: ${PASSWORD}
# ipType: "private"
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
## Reference
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|--------------------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "cloud-sql-postgres". |
| project  | string   |     true     | Id of the GCP project that the instance was created in (e.g. "my-project-id").                                            |
| region   | string   |     true     | Name of the GCP region that the instance was created in (e.g. "us-central1").                                             |
| instance | string   |     true     | Name of the Cloud SQL instance (e.g. "my-instance").                                                                      |
| database | string | true | Name of the Postgres database to connect to (e.g. "my_db"). |
| user | string | false | Name of the Postgres user to connect as (e.g. "my-pg-user"). Defaults to IAM auth using [ADC][adc] email if unspecified. |
| password | string | false | Password of the Postgres user (e.g. "my-password"). Defaults to attempting IAM authentication if unspecified. |
| ipType | string | false | IP Type of the Cloud SQL instance; must be one of `public`, `private`, or `psc`. Default: `public`. |
# Cloud SQL for SQL Server
Cloud SQL for SQL Server is a fully-managed database service for SQL Server.
## About
[Cloud SQL for SQL Server][csql-mssql-docs] is a managed database service that
helps you set up, maintain, manage, and administer your SQL Server databases on
Google Cloud.
If you are new to Cloud SQL for SQL Server, you can try [creating and connecting
to a database by following these instructions][csql-mssql-connect].
[csql-mssql-docs]: https://cloud.google.com/sql/docs/sqlserver
[csql-mssql-connect]: https://cloud.google.com/sql/docs/sqlserver/connect-overview
## Available Tools
- [`mssql-sql`](../tools/mssql/mssql-sql.md)
Execute pre-defined SQL Server queries with placeholder parameters.
- [`mssql-execute-sql`](../tools/mssql/mssql-execute-sql.md)
Run parameterized SQL Server queries in Cloud SQL for SQL Server.
- [`mssql-list-tables`](../tools/mssql/mssql-list-tables.md)
List tables in a Cloud SQL for SQL Server database.
### Pre-built Configurations
- [Cloud SQL for SQL Server using MCP](https://googleapis.github.io/genai-toolbox/how-to/connect-ide/cloud_sql_mssql_mcp/)
Connect your IDE to Cloud SQL for SQL Server using Toolbox.
## Requirements
### IAM Permissions
By default, this source uses the [Cloud SQL Go Connector][csql-go-conn] to
authorize and establish mTLS connections to your Cloud SQL instance. The Go
connector uses your [Application Default Credentials (ADC)][adc] to authorize
your connection to Cloud SQL.
In addition to [setting the ADC for your server][set-adc], you need to ensure
the IAM identity has been given the following IAM roles (or corresponding
permissions):
- `roles/cloudsql.client`
{{< notice tip >}}
If you are connecting from Compute Engine, make sure your VM
also has the [proper
scope](https://cloud.google.com/compute/docs/access/service-accounts#accesscopesiam)
to connect using the Cloud SQL Admin API.
{{< /notice >}}
[csql-go-conn]: https://github.com/GoogleCloudPlatform/cloud-sql-go-connector
[adc]: https://cloud.google.com/docs/authentication#adc
[set-adc]: https://cloud.google.com/docs/authentication/provide-credentials-adc
### Networking
Cloud SQL supports connections both from external networks via the internet
([public IP][public-ip]) and from internal networks ([private IP][private-ip]).
For more information on choosing between the two options, see the Cloud SQL page
[Connection overview][conn-overview].
You can configure the `ipType` parameter in your source configuration to
`public` or `private` to match your instance's configuration. Regardless of
which you choose, all connections use IAM-based authorization and are encrypted
with mTLS.
[private-ip]: https://cloud.google.com/sql/docs/sqlserver/configure-private-ip
[public-ip]: https://cloud.google.com/sql/docs/sqlserver/configure-ip
[conn-overview]: https://cloud.google.com/sql/docs/sqlserver/connect-overview
### Database User
Currently, this source only uses standard authentication. You will need to
[create a SQL Server user][cloud-sql-users] to log in to the database.
[cloud-sql-users]: https://cloud.google.com/sql/docs/sqlserver/create-manage-users
## Example
```yaml
sources:
my-cloud-sql-mssql-instance:
kind: cloud-sql-mssql
project: my-project
region: my-region
instance: my-instance
database: my_db
user: ${USER_NAME}
password: ${PASSWORD}
# ipType: private
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
## Reference
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "cloud-sql-mssql". |
| project  | string   |     true     | Id of the GCP project that the instance was created in (e.g. "my-project-id").                        |
| region   | string   |     true     | Name of the GCP region that the instance was created in (e.g. "us-central1").                         |
| instance | string   |     true     | Name of the Cloud SQL instance (e.g. "my-instance").                                                  |
| database | string   |     true     | Name of the Cloud SQL database to connect to (e.g. "my_db").                                          |
| user     | string   |     true     | Name of the SQL Server user to connect as (e.g. "my-sqlserver-user").                                 |
| password | string   |     true     | Password of the SQL Server user (e.g. "my-password").                                                 |
| ipType   | string   |    false     | IP Type of the Cloud SQL instance; must be one of `public`, `private`, or `psc`. Default: `public`.   |
# Cloud SQL Admin
A "cloud-sql-admin" source provides a client for the Cloud SQL Admin API.
## About
The `cloud-sql-admin` source provides a client to interact with the [Google
Cloud SQL Admin API](https://cloud.google.com/sql/docs/mysql/admin-api). This
allows tools to perform administrative tasks on Cloud SQL instances, such as
creating users and databases.
Authentication can be handled in two ways:
1. **Application Default Credentials (ADC):** By default, the source uses ADC
to authenticate with the API.
2. **Client-side OAuth:** If `useClientOAuth` is set to `true`, the source will
expect an OAuth 2.0 access token to be provided by the client (e.g., a web
browser) for each request.
## Example
```yaml
sources:
my-cloud-sql-admin:
kind: cloud-sql-admin
my-oauth-cloud-sql-admin:
kind: cloud-sql-admin
useClientOAuth: true
```
## Reference
| **field** | **type** | **required** | **description** |
| -------------- | :------: | :----------: | ---------------------------------------------------------------------------------------------------------------------------------------------- |
| kind | string | true | Must be "cloud-sql-admin". |
| defaultProject | string | false | The Google Cloud project ID to use for Cloud SQL infrastructure tools. |
| useClientOAuth | boolean | false | If true, the source will use client-side OAuth for authorization. Otherwise, it will use Application Default Credentials. Defaults to `false`. |
# Couchbase
A "couchbase" source connects to a Couchbase database.
## About
A `couchbase` source establishes a connection to a Couchbase database cluster,
allowing tools to execute SQL queries against it.
## Available Tools
- [`couchbase-sql`](../tools/couchbase/couchbase-sql.md)
Run SQL++ statements on Couchbase with parameterized input.
## Example
```yaml
sources:
my-couchbase-instance:
kind: couchbase
connectionString: couchbase://localhost
bucket: travel-sample
scope: inventory
username: Administrator
password: password
```
{{< notice note >}}
For more details about alternate addresses and custom ports refer to [Managing
Connections](https://docs.couchbase.com/java-sdk/current/howtos/managing-connections.html).
{{< /notice >}}
## Reference
| **field** | **type** | **required** | **description** |
|----------------------|:--------:|:------------:|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "couchbase". |
| connectionString | string | true | Connection string for the Couchbase cluster. |
| bucket | string | true | Name of the bucket to connect to. |
| scope | string | true | Name of the scope within the bucket. |
| username | string | false | Username for authentication. |
| password | string | false | Password for authentication. |
| clientCert | string | false | Path to client certificate file for TLS authentication. |
| clientCertPassword | string | false | Password for the client certificate. |
| clientKey | string | false | Path to client key file for TLS authentication. |
| clientKeyPassword | string | false | Password for the client key. |
| caCert | string | false | Path to CA certificate file. |
| noSslVerify | boolean | false | If true, skip server certificate verification. **Warning:** This option should only be used in development or testing environments. Disabling SSL verification poses significant security risks in production as it makes your connection vulnerable to man-in-the-middle attacks. |
| profile | string | false | Name of the connection profile to apply. |
| queryScanConsistency | integer | false | Query scan consistency. Controls the consistency guarantee for index scanning. Values: 1 for "not_bounded" (fastest option, but results may not include the most recent operations), 2 for "request_plus" (highest consistency level, includes all operations up until the query started, but incurs a performance penalty). If not specified, defaults to the Couchbase Go SDK default. |
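For example, to prioritize result freshness over latency, a configuration could
request `request_plus` scan consistency. This sketch builds on the earlier
example, with secrets moved to environment variables:

```yaml
sources:
  my-couchbase-instance:
    kind: couchbase
    connectionString: couchbase://localhost
    bucket: travel-sample
    scope: inventory
    username: ${USER_NAME}
    password: ${PASSWORD}
    queryScanConsistency: 2  # request_plus: include all mutations up to query start
```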
# Dataplex
Dataplex Universal Catalog is a unified, intelligent governance solution for data and AI assets in Google Cloud. Dataplex Universal Catalog powers AI, analytics, and business intelligence at scale.
# Dataplex Source
[Dataplex][dataplex-docs] Universal Catalog is a unified, intelligent governance
solution for data and AI assets in Google Cloud. Dataplex Universal Catalog
powers AI, analytics, and business intelligence at scale.
At the heart of these governance capabilities is a catalog that contains a
centralized inventory of the data assets in your organization. Dataplex
Universal Catalog holds business, technical, and runtime metadata for all of
your data. It helps you discover relationships and semantics in the metadata by
applying artificial intelligence and machine learning.
[dataplex-docs]: https://cloud.google.com/dataplex/docs
## Example
```yaml
sources:
my-dataplex-source:
kind: "dataplex"
project: "my-project-id"
```
## Sample System Prompt
You can use the following system prompt as "Custom Instructions" in your client
application.
```
# Objective
Your primary objective is to help discover, organize and manage metadata related to data assets.
# Tone and Style
1. Adopt the persona of a senior subject matter expert
2. Your communication style must be:
1. Concise: Always favor brevity.
2. Direct: Avoid greetings (e.g., "Hi there!", "Certainly!"). Get straight to the point.
Example (Incorrect): Hi there! I see that you are looking for...
Example (Correct): This problem likely stems from...
3. Do not reiterate or summarize the question in the answer.
4. Crucially, always convey a tone of uncertainty and caution. Since you are interpreting metadata and have no way to externally verify your answers, never express complete confidence. Frame your responses as interpretations based solely on the provided metadata. Use a suggestive tone, not a prescriptive one:
Example (Correct): "The entry describes..."
Example (Correct): "According to catalog,..."
Example (Correct): "Based on the metadata,..."
Example (Correct): "Based on the search results,..."
5. Do not make assumptions
# Data Model
## Entries
An Entry represents a specific data asset. It acts as a metadata record for something that is managed by the catalog, such as:
- A BigQuery table or dataset
- A Cloud Storage bucket or folder
- An on-premises SQL table
## Aspects
While the Entry itself is a container, the rich descriptive information about the asset (e.g., schema, data types, business descriptions, classifications) is stored in associated components called Aspects. Aspects are created based on pre-defined blueprints known as Aspect Types.
## Aspect Types
An Aspect Type is a reusable template that defines the schema for a set of metadata fields. Think of an Aspect Type as a template for the kind of metadata that is organized within an Entry in the catalog.
Examples:
- projects/dataplex-types/locations/global/aspectTypes/analytics-hub-exchange
- projects/dataplex-types/locations/global/aspectTypes/analytics-hub
- projects/dataplex-types/locations/global/aspectTypes/analytics-hub-listing
- projects/dataplex-types/locations/global/aspectTypes/bigquery-connection
- projects/dataplex-types/locations/global/aspectTypes/bigquery-data-policy
- projects/dataplex-types/locations/global/aspectTypes/bigquery-dataset
- projects/dataplex-types/locations/global/aspectTypes/bigquery-model
- projects/dataplex-types/locations/global/aspectTypes/bigquery-policy
- projects/dataplex-types/locations/global/aspectTypes/bigquery-routine
- projects/dataplex-types/locations/global/aspectTypes/bigquery-row-access-policy
- projects/dataplex-types/locations/global/aspectTypes/bigquery-table
- projects/dataplex-types/locations/global/aspectTypes/bigquery-view
- projects/dataplex-types/locations/global/aspectTypes/cloud-bigtable-instance
- projects/dataplex-types/locations/global/aspectTypes/cloud-bigtable-table
- projects/dataplex-types/locations/global/aspectTypes/cloud-spanner-database
- projects/dataplex-types/locations/global/aspectTypes/cloud-spanner-instance
- projects/dataplex-types/locations/global/aspectTypes/cloud-spanner-table
- projects/dataplex-types/locations/global/aspectTypes/cloud-spanner-view
- projects/dataplex-types/locations/global/aspectTypes/cloudsql-database
- projects/dataplex-types/locations/global/aspectTypes/cloudsql-instance
- projects/dataplex-types/locations/global/aspectTypes/cloudsql-schema
- projects/dataplex-types/locations/global/aspectTypes/cloudsql-table
- projects/dataplex-types/locations/global/aspectTypes/cloudsql-view
- projects/dataplex-types/locations/global/aspectTypes/contacts
- projects/dataplex-types/locations/global/aspectTypes/dataform-code-asset
- projects/dataplex-types/locations/global/aspectTypes/dataform-repository
- projects/dataplex-types/locations/global/aspectTypes/dataform-workspace
- projects/dataplex-types/locations/global/aspectTypes/dataproc-metastore-database
- projects/dataplex-types/locations/global/aspectTypes/dataproc-metastore-service
- projects/dataplex-types/locations/global/aspectTypes/dataproc-metastore-table
- projects/dataplex-types/locations/global/aspectTypes/data-product
- projects/dataplex-types/locations/global/aspectTypes/data-quality-scorecard
- projects/dataplex-types/locations/global/aspectTypes/external-connection
- projects/dataplex-types/locations/global/aspectTypes/overview
- projects/dataplex-types/locations/global/aspectTypes/pubsub-topic
- projects/dataplex-types/locations/global/aspectTypes/schema
- projects/dataplex-types/locations/global/aspectTypes/sensitive-data-protection-job-result
- projects/dataplex-types/locations/global/aspectTypes/sensitive-data-protection-profile
- projects/dataplex-types/locations/global/aspectTypes/sql-access
- projects/dataplex-types/locations/global/aspectTypes/storage-bucket
- projects/dataplex-types/locations/global/aspectTypes/storage-folder
- projects/dataplex-types/locations/global/aspectTypes/storage
- projects/dataplex-types/locations/global/aspectTypes/usage
## Entry Types
Every Entry must conform to an Entry Type. The Entry Type acts as a template, defining the structure, required aspects, and constraints for Entries of that type.
Examples:
- projects/dataplex-types/locations/global/entryTypes/analytics-hub-exchange
- projects/dataplex-types/locations/global/entryTypes/analytics-hub-listing
- projects/dataplex-types/locations/global/entryTypes/bigquery-connection
- projects/dataplex-types/locations/global/entryTypes/bigquery-data-policy
- projects/dataplex-types/locations/global/entryTypes/bigquery-dataset
- projects/dataplex-types/locations/global/entryTypes/bigquery-model
- projects/dataplex-types/locations/global/entryTypes/bigquery-routine
- projects/dataplex-types/locations/global/entryTypes/bigquery-row-access-policy
- projects/dataplex-types/locations/global/entryTypes/bigquery-table
- projects/dataplex-types/locations/global/entryTypes/bigquery-view
- projects/dataplex-types/locations/global/entryTypes/cloud-bigtable-instance
- projects/dataplex-types/locations/global/entryTypes/cloud-bigtable-table
- projects/dataplex-types/locations/global/entryTypes/cloud-spanner-database
- projects/dataplex-types/locations/global/entryTypes/cloud-spanner-instance
- projects/dataplex-types/locations/global/entryTypes/cloud-spanner-table
- projects/dataplex-types/locations/global/entryTypes/cloud-spanner-view
- projects/dataplex-types/locations/global/entryTypes/cloudsql-mysql-database
- projects/dataplex-types/locations/global/entryTypes/cloudsql-mysql-instance
- projects/dataplex-types/locations/global/entryTypes/cloudsql-mysql-table
- projects/dataplex-types/locations/global/entryTypes/cloudsql-mysql-view
- projects/dataplex-types/locations/global/entryTypes/cloudsql-postgresql-database
- projects/dataplex-types/locations/global/entryTypes/cloudsql-postgresql-instance
- projects/dataplex-types/locations/global/entryTypes/cloudsql-postgresql-schema
- projects/dataplex-types/locations/global/entryTypes/cloudsql-postgresql-table
- projects/dataplex-types/locations/global/entryTypes/cloudsql-postgresql-view
- projects/dataplex-types/locations/global/entryTypes/cloudsql-sqlserver-database
- projects/dataplex-types/locations/global/entryTypes/cloudsql-sqlserver-instance
- projects/dataplex-types/locations/global/entryTypes/cloudsql-sqlserver-schema
- projects/dataplex-types/locations/global/entryTypes/cloudsql-sqlserver-table
- projects/dataplex-types/locations/global/entryTypes/cloudsql-sqlserver-view
- projects/dataplex-types/locations/global/entryTypes/dataform-code-asset
- projects/dataplex-types/locations/global/entryTypes/dataform-repository
- projects/dataplex-types/locations/global/entryTypes/dataform-workspace
- projects/dataplex-types/locations/global/entryTypes/dataproc-metastore-database
- projects/dataplex-types/locations/global/entryTypes/dataproc-metastore-service
- projects/dataplex-types/locations/global/entryTypes/dataproc-metastore-table
- projects/dataplex-types/locations/global/entryTypes/pubsub-topic
- projects/dataplex-types/locations/global/entryTypes/storage-bucket
- projects/dataplex-types/locations/global/entryTypes/storage-folder
- projects/dataplex-types/locations/global/entryTypes/vertexai-dataset
- projects/dataplex-types/locations/global/entryTypes/vertexai-feature-group
- projects/dataplex-types/locations/global/entryTypes/vertexai-feature-online-store
## Entry Groups
Entries are organized within Entry Groups, which are logical groupings of Entries. An Entry Group acts as a namespace for its Entries.
## Entry Links
Entries can be linked together using EntryLinks to represent relationships between data assets (e.g. foreign keys).
# Tool instructions
## Tool: dataplex_search_entries
## General
- Do not try to search within search results on your own.
- Do not fetch multiple pages of results unless explicitly asked.
## Search syntax
### Simple search
In its simplest form, a search query consists of a single predicate. Such a predicate can match several pieces of metadata:
- A substring of a name, display name, or description of a resource
- A substring of the type of a resource
- A substring of a column name (or nested column name) in the schema of a resource
- A substring of a project ID
- A string from an overview description
For example, the predicate foo matches the following resources:
- Resource with the name foo.bar
- Resource with the display name Foo Bar
- Resource with the description This is the foo script
- Resource with the exact type foo
- Column foo_bar in the schema of a resource
- Nested column foo_bar in the schema of a resource
- Project prod-foo-bar
- Resource with an overview containing the word foo
### Qualified predicates
You can qualify a predicate by prefixing it with a key that restricts the matching to a specific piece of metadata:
- An equal sign (=) restricts the search to an exact match.
- A colon (:) after the key matches the predicate to either a substring or a token within the value in the search results.
Tokenization splits the stream of text into a series of tokens, with each token usually corresponding to a single word. For example:
- name:foo selects resources with names that contain the foo substring, like foo1 and barfoo.
- description:foo selects resources with the foo token in the description, like bar and foo.
- location=foo matches resources in a specified location with foo as the location name.
The predicate keys type, system, location, and orgid support only the exact match (=) qualifier, not the substring qualifier (:). For example, type=foo or orgid=number.
Search syntax supports the following qualifiers:
- "name:x" - Matches x as a substring of the resource ID.
- "displayname:x" - Match x as a substring of the resource display name.
- "column:x" - Matches x as a substring of the column name (or nested column name) in the schema of the resource.
- "description:x" - Matches x as a token in the resource description.
- "label:bar" - Matches BigQuery resources that have a label (with some value) and the label key has bar as a substring.
- "label=bar" - Matches BigQuery resources that have a label (with some value) and the label key equals bar as a string.
- "label:bar:x" - Matches x as a substring in the value of a label with a key bar attached to a BigQuery resource.
- "label=foo:bar" - Matches BigQuery resources where the key equals foo and the key value equals bar.
- "label.foo=bar" - Matches BigQuery resources where the key equals foo and the key value equals bar.
- "label.foo" - Matches BigQuery resources that have a label whose key equals foo as a string.
- "type=TYPE" - Matches resources of a specific entry type or its type alias.
- "projectid:bar" - Matches resources within Google Cloud projects that match bar as a substring in the ID.
- "parent:x" - Matches x as a substring of the hierarchical path of a resource. It supports same syntax as `name` predicate.
- "orgid=number" - Matches resources within a Google Cloud organization with the exact ID value of the number.
- "system=SYSTEM" - Matches resources from a specified system. For example, system=bigquery matches BigQuery resources.
- "location=LOCATION" - Matches resources in a specified location with an exact name. For example, location=us-central1 matches assets hosted in Iowa. BigQuery Omni assets support this qualifier by using the BigQuery Omni location name. For example, location=aws-us-east-1 matches BigQuery Omni assets in Northern Virginia.
- "createtime" -
Finds resources that were created within, before, or after a given date or time. For example "createtime:2019-01-01" matches resources created on 2019-01-01.
- "updatetime" - Finds resources that were updated within, before, or after a given date or time. For example "updatetime>2019-01-01" matches resources updated after 2019-01-01.
### Aspect Search
To search for entries based on their attached aspects, use the following query syntax.
- aspect:x - Matches x as a substring of the full path to the aspect type of an aspect that is attached to the entry, in the format projectid.location.ASPECT_TYPE_ID
- aspect=x - Matches x as the full path to the aspect type of an aspect that is attached to the entry, in the format projectid.location.ASPECT_TYPE_ID
- aspect:x{OPERATOR}value - Searches for aspect field values. Matches x as a substring of the full path to the aspect type and field name of an aspect that is attached to the entry, in the format projectid.location.ASPECT_TYPE_ID.FIELD_NAME
The list of supported {OPERATOR}s depends on the type of field in the aspect, as follows:
- String: = (exact match) and : (substring)
- All number types: =, :, <, >, <=, >=, =>, =<
- Enum: =
- Datetime: same as for numbers, but the values to compare are treated as datetimes instead of numbers
- Boolean: =
Only top-level fields of the aspect are searchable. For example, all of the following queries match entries where the value of the is-enrolled field in the employee-info aspect type is true. Other entries that match on the substring are also returned.
- aspect:example-project.us-central1.employee-info.is-enrolled=true
- aspect:example-project.us-central1.employee=true
- aspect:employee=true
Example:
You can use the following filters:
- dataplex-types.global.bigquery-table.type={BIGLAKE_TABLE, BIGLAKE_OBJECT_TABLE, EXTERNAL_TABLE, TABLE}
- dataplex-types.global.storage.type={STRUCTURED, UNSTRUCTURED}
### Logical operators
A query can consist of several predicates with logical operators. If you don't specify an operator, logical AND is implied. For example, foo bar returns resources that match both predicate foo and predicate bar.
Logical AND and logical OR are supported. For example, foo OR bar.
You can negate a predicate with a - (hyphen) or NOT prefix. For example, -name:foo returns resources with names that don't match the predicate foo.
Logical operators are case-sensitive. `OR` and `AND` are acceptable whereas `or` and `and` are not.
### Request
1. Always try to rewrite the prompt using search syntax.
### Response
1. If there are multiple search results found
1. Present the list of search results
2. Format the output as a nested ordered list, for example:
Given
```
{
results: [
{
name: "projects/test-project/locations/us/entryGroups/@bigquery-aws-us-east-1/entries/users"
entrySource: {
displayName: "Users"
description: "Table contains list of users."
location: "aws-us-east-1"
system: "BigQuery"
}
},
{
name: "projects/another_project/locations/us-central1/entryGroups/@bigquery/entries/top_customers"
entrySource: {
displayName: "Top customers",
description: "Table contains list of best customers."
location: "us-central1"
system: "BigQuery"
}
},
]
}
```
Return output formatted as a markdown nested list:
```
* Users:
- projectId: test-project
- location: aws-us-east-1
- description: Table contains list of users.
* Top customers:
- projectId: another_project
- location: us-central1
- description: Table contains list of best customers.
```
3. Ask the user to select one of the presented search results
2. If there is only one search result found
1. Present the search result immediately.
3. If there are no search results found
1. Explain that no search results were found
2. Suggest providing a more specific search query.
## Tool: dataplex_lookup_entry
### Request
1. Always try to limit the size of the response by specifying the `aspect_types` parameter. Make sure to select view=CUSTOM when using the `aspect_types` parameter. If you do not know the name of the aspect type, use the `dataplex_search_aspect_types` tool.
2. If you do not know the name of the entry, use the `dataplex_search_entries` tool.
### Response
1. Unless asked for a specific aspect, respond with all aspects attached to the entry.
````
## Reference
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|----------------------------------------------------------------------------------|
| kind | string | true | Must be "dataplex". |
| project | string | true | ID of the GCP project used for quota and billing purposes (e.g. "my-project-id").|
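A source definition on its own only establishes the connection; tools reference it by name. As a minimal sketch (the `dataplex-search-entries` kind string is an assumption here, mirroring the `dataplex_search_entries` tool described in the system prompt above):
```yaml
tools:
  search_catalog:
    kind: dataplex-search-entries   # assumed kind; check the tools reference for the exact name
    source: my-dataplex-source
    description: Search Dataplex Universal Catalog entries using the search syntax.
```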
# Dgraph
Dgraph is a fully open-source, built-for-scale graph database for Gen AI workloads.
## About
[Dgraph][dgraph-docs] is an open-source graph database. It is designed for
real-time workloads, horizontal scalability, and data flexibility. Implemented
as a distributed system, Dgraph processes queries in parallel to deliver the
fastest result.
This source can connect to either a self-managed Dgraph cluster or one hosted on
Dgraph Cloud. If you're new to Dgraph, the fastest way to get started is to
[sign up for Dgraph Cloud][dgraph-login].
[dgraph-docs]: https://dgraph.io/docs
[dgraph-login]: https://cloud.dgraph.io/login
## Available Tools
- [`dgraph-dql`](../tools/dgraph/dgraph-dql.md)
Run DQL (Dgraph Query Language) queries.
## Requirements
### Database User
When **connecting to a hosted Dgraph database**, this source uses the API key
for access. If you are using a dedicated environment, you will additionally need
the namespace and user credentials for that namespace.
For **connecting to a local or self-hosted Dgraph database**, use the namespace
and user credentials for that namespace.
## Example
```yaml
sources:
my-dgraph-source:
kind: dgraph
dgraphUrl: https://xxxx.cloud.dgraph.io
user: ${USER_NAME}
password: ${PASSWORD}
apiKey: ${API_KEY}
namespace: 0
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
## Reference
| **Field** | **Type** | **Required** | **Description** |
|-------------|:--------:|:------------:|--------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "dgraph". |
| dgraphUrl   | string   |     true     | Connection URI (e.g. "https://xxxx.cloud.dgraph.io").                                              |
| user | string | false | Name of the Dgraph user to connect as (e.g., "groot"). |
| password | string | false | Password of the Dgraph user (e.g., "password"). |
| apiKey | string | false | API key to connect to a Dgraph Cloud instance. |
| namespace | uint64 | false | Dgraph namespace (not required for Dgraph Cloud Shared Clusters). |
# Elasticsearch
Elasticsearch is a distributed, free and open search and analytics engine for all types of data, including textual, numerical, geospatial, structured, and unstructured.
# Elasticsearch Source
[Elasticsearch][elasticsearch-docs] is a distributed, free and open search and
analytics engine for all types of data, including textual, numerical,
geospatial, structured, and unstructured.
If you are new to Elasticsearch, you can learn how to
[set up a cluster and start indexing data][elasticsearch-quickstart].
Elasticsearch uses [ES|QL][elasticsearch-esql] for querying data. ES|QL
is a powerful query language that allows you to search and aggregate data in
Elasticsearch.
See the [official
documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html)
for more information.
[elasticsearch-docs]:
https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html
[elasticsearch-quickstart]:
https://www.elastic.co/guide/en/elasticsearch/reference/current/getting-started.html
[elasticsearch-esql]:
https://www.elastic.co/guide/en/elasticsearch/reference/current/esql.html
## Available Tools
- [`elasticsearch-esql`](../tools/elasticsearch/elasticsearch-esql.md)
Execute ES|QL queries.
## Requirements
### API Key
Toolbox uses an [API key][api-key] to authorize and authenticate when
interacting with [Elasticsearch][elasticsearch-docs].
In addition to [setting the API key for your server][set-api-key], you need to
ensure the API key has the correct permissions for the queries you intend to
run. See [API key management][api-key-management] for more information on
applying permissions to an API key.
[api-key]:
https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-create-api-key.html
[set-api-key]:
https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-create-api-key.html
[api-key-management]:
https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-get-api-key.html
## Example
```yaml
sources:
my-elasticsearch-source:
kind: "elasticsearch"
addresses:
- "http://localhost:9200"
apikey: "my-api-key"
```
## Reference
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|--------------------------------------------|
| kind | string | true | Must be "elasticsearch". |
| addresses | []string | true | List of Elasticsearch hosts to connect to. |
| apikey | string | true | The API key to use for authentication. |
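To show how the `elasticsearch-esql` tool pairs with this source, here is a minimal sketch. It assumes the tool follows the same `statement` field convention as the other query tools in this document, and the index name and fields (`my-index`, `status`, `category`) are placeholders:
```yaml
tools:
  count_active_by_category:
    kind: elasticsearch-esql
    source: my-elasticsearch-source
    description: Count active documents grouped by category.
    statement: |
      FROM my-index
      | WHERE status == "active"
      | STATS count = COUNT(*) BY category
```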
# Firebird
Firebird is a powerful, cross-platform, and open-source relational database.
## About
[Firebird][fb-docs] is a relational database management system offering many
ANSI SQL standard features that runs on Linux, Windows, and a variety of Unix
platforms. It is known for its small footprint, powerful features, and easy
maintenance.
[fb-docs]: https://firebirdsql.org/
## Available Tools
- [`firebird-sql`](../tools/firebird/firebird-sql.md)
Execute SQL queries as prepared statements in Firebird.
- [`firebird-execute-sql`](../tools/firebird/firebird-execute-sql.md)
Run parameterized SQL statements in Firebird.
## Requirements
### Database User
This source uses standard authentication. You will need to [create a Firebird
user][fb-users] to login to the database with.
[fb-users]: https://www.firebirdsql.org/refdocs/langrefupd25-security-sql-user-mgmt.html#langrefupd25-security-create-user
## Example
```yaml
sources:
my_firebird_db:
kind: firebird
host: "localhost"
port: 3050
database: "/path/to/your/database.fdb"
user: ${FIREBIRD_USER}
password: ${FIREBIRD_PASS}
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
## Reference
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|------------------------------------------------------------------------------|
| kind | string | true | Must be "firebird". |
| host      | string   | true         | IP address to connect to (e.g. "127.0.0.1").                                  |
| port      | string   | true         | Port to connect to (e.g. "3050").                                              |
| database | string | true | Path to the Firebird database file (e.g. "/var/lib/firebird/data/test.fdb"). |
| user | string | true | Name of the Firebird user to connect as (e.g. "SYSDBA"). |
| password | string | true | Password of the Firebird user (e.g. "masterkey"). |
# Firestore
Firestore is a NoSQL document database built for automatic scaling, high performance, and ease of application development. It's a fully managed, serverless database that supports mobile, web, and server development.
# Firestore Source
[Firestore][firestore-docs] is a NoSQL document database built for automatic
scaling, high performance, and ease of application development. While the
Firestore interface has many of the same features as traditional databases,
as a NoSQL database it differs from them in the way it describes relationships
between data objects.
If you are new to Firestore, you can [create a database and learn the
basics][firestore-quickstart].
[firestore-docs]: https://cloud.google.com/firestore/docs
[firestore-quickstart]: https://cloud.google.com/firestore/docs/quickstart-servers
## Requirements
### IAM Permissions
Firestore uses [Identity and Access Management (IAM)][iam-overview] to control
user and group access to Firestore resources. Toolbox will use your [Application
Default Credentials (ADC)][adc] to authorize and authenticate when interacting
with [Firestore][firestore-docs].
In addition to [setting the ADC for your server][set-adc], you need to ensure
the IAM identity has been given the correct IAM permissions for accessing
Firestore. Common roles include:
- `roles/datastore.user` - Read and write access to Firestore
- `roles/datastore.viewer` - Read-only access to Firestore
- `roles/firebaserules.admin` - Full management of Firebase Security Rules for
Firestore. This role is required for operations that involve creating,
updating, or managing Firestore security rules (see [Firebase Security Rules
roles][firebaserules-roles])
See [Firestore access control][firestore-iam] for more information on
applying IAM permissions and roles to an identity.
[iam-overview]: https://cloud.google.com/firestore/docs/security/iam
[adc]: https://cloud.google.com/docs/authentication#adc
[set-adc]: https://cloud.google.com/docs/authentication/provide-credentials-adc
[firestore-iam]: https://cloud.google.com/firestore/docs/security/iam
[firebaserules-roles]:
https://cloud.google.com/iam/docs/roles-permissions/firebaserules
### Database Selection
Firestore allows you to create multiple databases within a single project. Each
database is isolated from the others and has its own set of documents and
collections. If you don't specify a database in your configuration, the default
database named `(default)` will be used.
## Example
```yaml
sources:
my-firestore-source:
kind: "firestore"
project: "my-project-id"
# database: "my-database" # Optional, defaults to "(default)"
```
## Reference
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|----------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "firestore". |
| project   | string   | true         | ID of the GCP project that contains the Firestore database (e.g. "my-project-id").                         |
| database | string | false | Name of the Firestore database to connect to. Defaults to "(default)" if not specified. |
# HTTP
The HTTP source enables the Toolbox to retrieve data from a remote server using HTTP requests.
## About
The HTTP Source allows Toolbox to retrieve data from arbitrary HTTP
endpoints. This enables Generative AI applications to access data from web APIs
and other HTTP-accessible resources.
## Available Tools
- [`http`](../tools/http/http.md)
Make HTTP requests to REST APIs or other web services.
## Example
```yaml
sources:
my-http-source:
kind: http
baseUrl: https://api.example.com/data
timeout: 10s # default to 30s
headers:
Authorization: Bearer ${API_KEY}
Content-Type: application/json
queryParams:
param1: value1
param2: value2
# disableSslVerification: false
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
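The source only defines the base URL and default headers; an `http` tool layered on top specifies the actual request. A minimal sketch (the `method` and `path` fields are assumed from the tool reference linked above, and the endpoint is a placeholder):
```yaml
tools:
  get_items:
    kind: http
    source: my-http-source
    method: GET
    path: /items            # appended to the source's baseUrl
    description: Fetch items from the example API.
```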
## Reference
| **field** | **type** | **required** | **description** |
|------------------------|:-----------------:|:------------:|------------------------------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "http". |
| baseUrl | string | true | The base URL for the HTTP requests (e.g., `https://api.example.com`). |
| timeout | string | false | The timeout for HTTP requests (e.g., "5s", "1m", refer to [ParseDuration][parse-duration-doc] for more examples). Defaults to 30s. |
| headers | map[string]string | false | Default headers to include in the HTTP requests. |
| queryParams | map[string]string | false | Default query parameters to include in the HTTP requests. |
| disableSslVerification | bool | false | Disable SSL certificate verification. This should only be used for local development. Defaults to `false`. |
[parse-duration-doc]: https://pkg.go.dev/time#ParseDuration
# Looker
Looker is a business intelligence tool that also provides a semantic layer.
## About
[Looker][looker-docs] is a web based business intelligence and data management
tool that provides a semantic layer to facilitate querying. It can be deployed
in the cloud, on GCP, or on premises.
[looker-docs]: https://cloud.google.com/looker/docs
## Requirements
### Looker User
This source only uses API authentication. You will need to
[create an API user][looker-user] to login to Looker.
[looker-user]:
https://cloud.google.com/looker/docs/api-auth#authentication_with_an_sdk
{{< notice note >}}
To use the Conversational Analytics API, you will need to enable the following
Google Cloud Project APIs and grant the IAM permissions listed below.
{{< /notice >}}
### API Enablement in GCP
Enable the following APIs in your Google Cloud Project:
```
gcloud services enable geminidataanalytics.googleapis.com --project=$PROJECT_ID
gcloud services enable cloudaicompanion.googleapis.com --project=$PROJECT_ID
```
### IAM Permissions in GCP
In addition to [setting the ADC for your server][set-adc], you need to ensure
the IAM identity has been given the following IAM roles (or corresponding
permissions):
- `roles/looker.instanceUser`
- `roles/cloudaicompanion.user`
- `roles/geminidataanalytics.dataAgentStatelessUser`
To initialize the application default credentials, run `gcloud auth login
--update-adc` in your environment before starting MCP Toolbox.
[set-adc]: https://cloud.google.com/docs/authentication/provide-credentials-adc
## Example
```yaml
sources:
my-looker-source:
kind: looker
base_url: http://looker.example.com
client_id: ${LOOKER_CLIENT_ID}
client_secret: ${LOOKER_CLIENT_SECRET}
project: ${LOOKER_PROJECT}
location: ${LOOKER_LOCATION}
verify_ssl: true
timeout: 600s
```
The Looker base URL will look like "https://looker.example.com"; don't include
a trailing "/". In some cases, especially if your Looker instance is deployed
on-premises, you may need to add the API port number, like
"https://looker.example.com:19999".
`verify_ssl` should almost always be "true" (all lower case) unless you are
using a self-signed SSL certificate for the Looker server. Anything other than
"true" will be interpreted as false.
The client ID and client secret are seemingly random character sequences
assigned by the Looker server. If you are using Looker OAuth, you don't need
these settings.
The `project` and `location` fields are utilized **only** when using the
conversational analytics tool.
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
## Reference
| **field** | **type** | **required** | **description** |
|----------------------|:--------:|:------------:|-------------------------------------------------------------------------------------------|
| kind | string | true | Must be "looker". |
| base_url | string | true | The URL of your Looker server with no trailing /. |
| client_id | string | false | The client id assigned by Looker. |
| client_secret | string | false | The client secret assigned by Looker. |
| verify_ssl | string | false | Whether to check the ssl certificate of the server. |
| project | string | false | The project id to use in Google Cloud. |
| location | string | false | The location to use in Google Cloud. (default: us) |
| timeout | string | false | Maximum time to wait for query execution (e.g. "30s", "2m"). By default, 120s is applied. |
| use_client_oauth     | string   | false        | Use OAuth tokens instead of client_id and client_secret. (default: false) If a header name is provided, it will be used instead of "Authorization". |
| show_hidden_models | string | false | Show or hide hidden models. (default: true) |
| show_hidden_explores | string | false | Show or hide hidden explores. (default: true) |
| show_hidden_fields | string | false | Show or hide hidden fields. (default: true) |
# MindsDB
MindsDB is an AI federated database that enables SQL queries across hundreds of datasources and ML models.
## About
[MindsDB][mindsdb-docs] is an AI federated database. It allows you to combine
information from hundreds of datasources and query them as if they were a
single SQL database, supporting joins across datasources and enabling you to
query unstructured data as if it were structured.
MindsDB translates MySQL queries into whatever API is needed - whether it's REST
APIs, GraphQL, or native database protocols. This means you can write standard
SQL queries and MindsDB automatically handles the translation to APIs like
Salesforce, Jira, GitHub, email systems, MongoDB, and hundreds of other
datasources.
MindsDB also enables you to use ML frameworks to train and use models as virtual
tables from the data in those datasources. With MindsDB, the GenAI Toolbox can
now expand to hundreds of datasources and leverage all of MindsDB's capabilities
on ML and unstructured data.
**Key Features:**
- **Federated Database**: Connect and query hundreds of datasources through a
single SQL interface
- **Cross-Datasource Joins**: Perform joins across different datasources
seamlessly
- **API Translation**: Automatically translates MySQL queries into REST APIs,
GraphQL, and native protocols
- **Unstructured Data Support**: Query unstructured data as if it were
structured
- **ML as Virtual Tables**: Train and use ML models as virtual tables
- **MySQL Wire Protocol**: Compatible with standard MySQL clients and tools
[mindsdb-docs]: https://docs.mindsdb.com/
[mindsdb-github]: https://github.com/mindsdb/mindsdb
## Supported Datasources
MindsDB supports hundreds of datasources, including:
### **Business Applications**
- **Salesforce**: Query leads, opportunities, accounts, and custom objects
- **Jira**: Access issues, projects, workflows, and team data
- **GitHub**: Query repositories, commits, pull requests, and issues
- **Slack**: Access channels, messages, and team communications
- **HubSpot**: Query contacts, companies, deals, and marketing data
### **Databases & Storage**
- **MongoDB**: Query NoSQL collections as structured tables
- **Redis**: Key-value stores and caching layers
- **Elasticsearch**: Search and analytics data
- **S3/Google Cloud Storage**: File storage and data lakes
### **Communication & Email**
- **Gmail/Outlook**: Query emails, attachments, and metadata
- **Slack**: Access workspace data and conversations
- **Microsoft Teams**: Team communications and files
- **Discord**: Server data and message history
## Example Queries
### Cross-Datasource Analytics
```sql
-- Join Salesforce opportunities with GitHub activity
SELECT
s.opportunity_name,
s.amount,
g.repository_name,
COUNT(g.commits) as commit_count
FROM salesforce.opportunities s
JOIN github.repositories g ON s.account_id = g.owner_id
WHERE s.stage = 'Closed Won'
GROUP BY s.opportunity_name, s.amount, g.repository_name;
```
### Email & Communication Analysis
```sql
-- Analyze email patterns with Slack activity
SELECT
e.sender,
e.subject,
s.channel_name,
COUNT(s.messages) as message_count
FROM gmail.emails e
JOIN slack.messages s ON e.sender = s.user_name
WHERE e.date >= '2024-01-01'
GROUP BY e.sender, e.subject, s.channel_name;
```
### ML Model Predictions
```sql
-- Use ML model to predict customer churn
SELECT
customer_id,
customer_name,
predicted_churn_probability,
recommended_action
FROM customer_churn_model
WHERE predicted_churn_probability > 0.8;
```
## Requirements
### Database User
This source uses standard MySQL authentication since MindsDB implements the
MySQL wire protocol. You will need to [create a MindsDB user][mindsdb-users] to
login to the database with. If MindsDB is configured without authentication, you
can omit the password field.
[mindsdb-users]: https://docs.mindsdb.com/
## Example
```yaml
sources:
my-mindsdb-source:
kind: mindsdb
host: 127.0.0.1
port: 3306
database: my_db
user: ${USER_NAME}
password: ${PASSWORD} # Optional: omit if MindsDB is configured without authentication
queryTimeout: 30s # Optional: query timeout duration
```
### Working Configuration Example
Here's a working configuration that has been tested:
```yaml
sources:
my-mindsdb-source:
kind: mindsdb
host: 127.0.0.1
port: 47335
database: files
user: mindsdb
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
## Use Cases
With MindsDB integration, you can:
- **Query Multiple Datasources**: Connect to databases, APIs, file systems, and
more through a single SQL interface
- **Cross-Datasource Analytics**: Perform joins and analytics across different
data sources
- **ML Model Integration**: Use trained ML models as virtual tables for
predictions and insights
- **Unstructured Data Processing**: Query documents, images, and other
unstructured data as structured tables
- **Real-time Predictions**: Get real-time predictions from ML models through
SQL queries
- **API Abstraction**: Write SQL queries that automatically translate to REST
APIs, GraphQL, and native protocols
## Reference
| **field** | **type** | **required** | **description** |
|--------------|:--------:|:------------:|--------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "mindsdb". |
| host | string | true | IP address to connect to (e.g. "127.0.0.1"). |
| port | string | true | Port to connect to (e.g. "3306"). |
| database | string | true | Name of the MindsDB database to connect to (e.g. "my_db"). |
| user | string | true | Name of the MindsDB user to connect as (e.g. "my-mindsdb-user"). |
| password | string | false | Password of the MindsDB user (e.g. "my-password"). Optional if MindsDB is configured without authentication. |
| queryTimeout | string | false | Maximum time to wait for query execution (e.g. "30s", "2m"). By default, no timeout is applied. |
## Resources
- [MindsDB Documentation][mindsdb-docs] - Official documentation and guides
- [MindsDB GitHub][mindsdb-github] - Source code and community
# MongoDB
MongoDB is a NoSQL data platform that not only serves general-purpose data requirements but can also perform vector search, where operational data and the embeddings used for search reside in the same document.
## About
[MongoDB][mongodb-docs] is a popular NoSQL database that stores data in
flexible, JSON-like documents, making it easy to develop and scale applications.
[mongodb-docs]: https://www.mongodb.com/docs/atlas/getting-started/
## Example
```yaml
sources:
my-mongodb:
kind: mongodb
uri: "mongodb+srv://username:password@host.mongodb.net"
```
## Reference
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|-------------------------------------------------------------------|
| kind | string | true | Must be "mongodb". |
| uri       | string   |     true     | Connection string used to connect to MongoDB (e.g. "mongodb+srv://username:password@host.mongodb.net"). |
# MySQL
MySQL is a relational database management system that stores and manages data.
## About
[MySQL][mysql-docs] is a relational database management system (RDBMS) that
stores and manages data. It's a popular choice for developers because of its
reliability, performance, and ease of use.
[mysql-docs]: https://www.mysql.com/
## Available Tools
- [`mysql-sql`](../tools/mysql/mysql-sql.md)
Execute pre-defined prepared SQL queries in MySQL.
- [`mysql-execute-sql`](../tools/mysql/mysql-execute-sql.md)
Run parameterized SQL queries in MySQL.
- [`mysql-list-active-queries`](../tools/mysql/mysql-list-active-queries.md)
List active queries in MySQL.
- [`mysql-list-tables`](../tools/mysql/mysql-list-tables.md)
List tables in a MySQL database.
- [`mysql-list-tables-missing-unique-indexes`](../tools/mysql/mysql-list-tables-missing-unique-indexes.md)
List tables in a MySQL database that do not have primary or unique indices.
- [`mysql-list-table-fragmentation`](../tools/mysql/mysql-list-table-fragmentation.md)
List table fragmentation in MySQL tables.
## Requirements
### Database User
This source only uses standard authentication. You will need to [create a
MySQL user][mysql-users] to login to the database with.
[mysql-users]: https://dev.mysql.com/doc/refman/8.4/en/user-names.html
## Example
```yaml
sources:
my-mysql-source:
kind: mysql
host: 127.0.0.1
port: 3306
database: my_db
user: ${USER_NAME}
password: ${PASSWORD}
# Optional TLS and other driver parameters. For example, enable preferred TLS:
# queryParams:
# tls: preferred
queryTimeout: 30s # Optional: query timeout duration
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
## Reference
| **field** | **type** | **required** | **description** |
| ------------ | :------: | :----------: | ----------------------------------------------------------------------------------------------- |
| kind | string | true | Must be "mysql". |
| host | string | true | IP address to connect to (e.g. "127.0.0.1"). |
| port | string | true | Port to connect to (e.g. "3306"). |
| database | string | true | Name of the MySQL database to connect to (e.g. "my_db"). |
| user | string | true | Name of the MySQL user to connect as (e.g. "my-mysql-user"). |
| password | string | true | Password of the MySQL user (e.g. "my-password"). |
| queryTimeout | string | false | Maximum time to wait for query execution (e.g. "30s", "2m"). By default, no timeout is applied. |
| queryParams | map | false | Arbitrary DSN parameters passed to the driver (e.g. `tls: preferred`, `charset: utf8mb4`). Useful for enabling TLS or other connection options. |
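To illustrate how the `mysql-sql` tool listed above uses this source, here is a minimal sketch of a pre-defined prepared statement. The `orders` table and the parameter are hypothetical, assumed only for illustration:
```yaml
tools:
  list_recent_orders:
    kind: mysql-sql
    source: my-mysql-source
    description: List orders created within the last N days.
    parameters:
      - name: days
        type: integer
        description: Number of days to look back.
    # "?" placeholders are bound to the parameters above in order.
    statement: SELECT * FROM orders WHERE created_at > NOW() - INTERVAL ? DAY;
```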
# Neo4j
Neo4j is a powerful, open source graph database system
## About
[Neo4j][neo4j-docs] is a powerful, open source graph database system with over
15 years of active development that has earned it a strong reputation for
reliability, feature robustness, and performance.
[neo4j-docs]: https://neo4j.com/docs
## Available Tools
- [`neo4j-cypher`](../tools/neo4j/neo4j-cypher.md)
Run Cypher queries against your Neo4j graph database.
## Requirements
### Database User
This source only uses standard authentication. You will need to [create a Neo4j
user][neo4j-users] to log in to the database with, or use the default `neo4j`
user if available.
[neo4j-users]: https://neo4j.com/docs/operations-manual/current/authentication-authorization/manage-users/
## Example
```yaml
sources:
my-neo4j-source:
kind: neo4j
uri: neo4j+s://xxxx.databases.neo4j.io:7687
user: ${USER_NAME}
password: ${PASSWORD}
database: "neo4j"
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
## Reference
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|----------------------------------------------------------------------|
| kind | string | true | Must be "neo4j". |
| uri       | string   |     true     | Connection URI (e.g. "bolt://localhost", "neo4j+s://xxx.databases.neo4j.io"). |
| user | string | true | Name of the Neo4j user to connect as (e.g. "neo4j"). |
| password | string | true | Password of the Neo4j user (e.g. "my-password"). |
| database | string | true | Name of the Neo4j database to connect to (e.g. "neo4j"). |
# OceanBase
OceanBase is a distributed relational database that provides high availability, scalability, and compatibility with MySQL.
## About
[OceanBase][oceanbase-docs] is a distributed relational database management
system (RDBMS) that provides high availability, scalability, and strong
consistency. It's designed to handle large-scale data processing and is
compatible with MySQL, making it easy for developers to migrate from MySQL to
OceanBase.
[oceanbase-docs]: https://www.oceanbase.com/
## Requirements
### Database User
This source only uses standard authentication. You will need to create an
OceanBase user to login to the database with. OceanBase supports
MySQL-compatible user management syntax.
### Network Connectivity
Ensure that your application can connect to the OceanBase cluster. OceanBase
typically runs on ports 2881 (for MySQL protocol) or 3881 (for MySQL protocol
with SSL).
## Example
```yaml
sources:
my-oceanbase-source:
kind: oceanbase
host: 127.0.0.1
port: 2881
database: my_db
user: ${USER_NAME}
password: ${PASSWORD}
queryTimeout: 30s # Optional: query timeout duration
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
## Reference
| **field** | **type** | **required** | **description** |
| ------------ | :------: | :----------: |-------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "oceanbase". |
| host | string | true | IP address to connect to (e.g. "127.0.0.1"). |
| port | string | true | Port to connect to (e.g. "2881"). |
| database | string | true | Name of the OceanBase database to connect to (e.g. "my_db"). |
| user | string | true | Name of the OceanBase user to connect as (e.g. "my-oceanbase-user"). |
| password | string | true | Password of the OceanBase user (e.g. "my-password"). |
| queryTimeout | string | false | Maximum time to wait for query execution (e.g. "30s", "2m"). By default, no timeout is applied. |
## Features
### MySQL Compatibility
OceanBase is highly compatible with MySQL, supporting most MySQL SQL syntax,
data types, and functions. This makes it easy to migrate existing MySQL
applications to OceanBase.
### High Availability
OceanBase provides automatic failover and data replication across multiple
nodes, ensuring high availability and data durability.
### Scalability
OceanBase can scale horizontally by adding more nodes to the cluster, making it
suitable for large-scale applications.
### Strong Consistency
OceanBase provides strong consistency guarantees, ensuring that all transactions
are ACID compliant.
# Oracle
Oracle Database is a widely-used relational database management system.
## About
[Oracle Database][oracle-docs] is a multi-model database management system
produced and marketed by Oracle Corporation. It is commonly used for running
online transaction processing (OLTP), data warehousing (DW), and mixed (OLTP &
DW) database workloads.
[oracle-docs]: https://www.oracle.com/database/
## Available Tools
- [`oracle-sql`](../tools/oracle/oracle-sql.md)
Execute pre-defined prepared SQL queries in Oracle.
- [`oracle-execute-sql`](../tools/oracle/oracle-execute-sql.md)
Run parameterized SQL queries in Oracle.
## Requirements
### Database User
This source uses standard authentication. You will need to [create an Oracle
user][oracle-users] to log in to the database with the necessary permissions.
[oracle-users]:
https://docs.oracle.com/en/database/oracle/oracle-database/21/sqlrf/CREATE-USER.html
## Connection Methods
You can configure the connection to your Oracle database using one of the
following three methods. **You should only use one method** in your source
configuration.
### Basic Connection (Host/Port/Service Name)
This is the most straightforward method, where you provide the connection
details as separate fields:
- `host`: The IP address or hostname of the database server.
- `port`: The port number the Oracle listener is running on (typically 1521).
- `serviceName`: The service name for the database instance you wish to connect
to.
### Connection String
As an alternative, you can provide all the connection details in a single
`connectionString`. This is a convenient way to consolidate the connection
information. The typical format is `hostname:port/servicename`.
### TNS Alias
For environments that use a `tnsnames.ora` configuration file, you can connect
using a TNS (Transparent Network Substrate) alias.
- `tnsAlias`: Specify the alias name defined in your `tnsnames.ora` file.
- `tnsAdmin` (Optional): If your configuration file is not in a standard
location, you can use this field to provide the path to the directory
containing it. This setting will override the `TNS_ADMIN` environment
variable.
## Example
```yaml
sources:
my-oracle-source:
kind: oracle
# --- Choose one connection method ---
# 1. Host, Port, and Service Name
host: 127.0.0.1
port: 1521
serviceName: XEPDB1
# 2. Direct Connection String
connectionString: "127.0.0.1:1521/XEPDB1"
# 3. TNS Alias (requires tnsnames.ora)
tnsAlias: "MY_DB_ALIAS"
tnsAdmin: "/opt/oracle/network/admin" # Optional: overrides TNS_ADMIN env var
user: ${USER_NAME}
password: ${PASSWORD}
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
## Reference
| **field** | **type** | **required** | **description** |
|------------------|:--------:|:------------:|-----------------------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "oracle". |
| user | string | true | Name of the Oracle user to connect as (e.g. "my-oracle-user"). |
| password | string | true | Password of the Oracle user (e.g. "my-password"). |
| host | string | false | IP address or hostname to connect to (e.g. "127.0.0.1"). Required if not using `connectionString` or `tnsAlias`. |
| port | integer | false | Port to connect to (e.g. "1521"). Required if not using `connectionString` or `tnsAlias`. |
| serviceName | string | false | The Oracle service name of the database to connect to. Required if not using `connectionString` or `tnsAlias`. |
| connectionString | string | false | A direct connection string (e.g. "hostname:port/servicename"). Use as an alternative to `host`, `port`, and `serviceName`. |
| tnsAlias | string | false | A TNS alias from a `tnsnames.ora` file. Use as an alternative to `host`/`port` or `connectionString`. |
| tnsAdmin | string | false | Path to the directory containing the `tnsnames.ora` file. This overrides the `TNS_ADMIN` environment variable if it is set. |
# PostgreSQL
PostgreSQL is a powerful, open source object-relational database.
## About
[PostgreSQL][pg-docs] is a powerful, open source object-relational database
system with over 35 years of active development that has earned it a strong
reputation for reliability, feature robustness, and performance.
[pg-docs]: https://www.postgresql.org/
## Available Tools
- [`postgres-sql`](../tools/postgres/postgres-sql.md)
Execute SQL queries as prepared statements in PostgreSQL.
- [`postgres-execute-sql`](../tools/postgres/postgres-execute-sql.md)
Run parameterized SQL statements in PostgreSQL.
- [`postgres-list-tables`](../tools/postgres/postgres-list-tables.md)
List tables in a PostgreSQL database.
- [`postgres-list-active-queries`](../tools/postgres/postgres-list-active-queries.md)
List active queries in a PostgreSQL database.
- [`postgres-list-available-extensions`](../tools/postgres/postgres-list-available-extensions.md)
List available extensions for installation in a PostgreSQL database.
- [`postgres-list-installed-extensions`](../tools/postgres/postgres-list-installed-extensions.md)
List installed extensions in a PostgreSQL database.
- [`postgres-list-views`](../tools/postgres/postgres-list-views.md)
List views in a PostgreSQL database.
- [`postgres-list-schemas`](../tools/postgres/postgres-list-schemas.md)
List schemas in a PostgreSQL database.
- [`postgres-database-overview`](../tools/postgres/postgres-database-overview.md)
Fetches the current state of the PostgreSQL server.
- [`postgres-list-triggers`](../tools/postgres/postgres-list-triggers.md)
List triggers in a PostgreSQL database.
- [`postgres-list-indexes`](../tools/postgres/postgres-list-indexes.md)
List available user indexes in a PostgreSQL database.
- [`postgres-list-sequences`](../tools/postgres/postgres-list-sequences.md)
List sequences in a PostgreSQL database.
- [`postgres-long-running-transactions`](../tools/postgres/postgres-long-running-transactions.md)
List long running transactions in a PostgreSQL database.
- [`postgres-list-locks`](../tools/postgres/postgres-list-locks.md)
List lock stats in a PostgreSQL database.
- [`postgres-replication-stats`](../tools/postgres/postgres-replication-stats.md)
List replication stats in a PostgreSQL database.
- [`postgres-list-query-stats`](../tools/postgres/postgres-list-query-stats.md)
List query statistics in a PostgreSQL database.
- [`postgres-get-column-cardinality`](../tools/postgres/postgres-get-column-cardinality.md)
List cardinality of columns in a table in a PostgreSQL database.
- [`postgres-list-pg-settings`](../tools/postgres/postgres-list-pg-settings.md)
List configuration parameters for the PostgreSQL server.
- [`postgres-list-database-stats`](../tools/postgres/postgres-list-database-stats.md)
Lists the key performance and activity statistics for each database in the PostgreSQL
server.
### Pre-built Configurations
- [PostgreSQL using MCP](https://googleapis.github.io/genai-toolbox/how-to/connect-ide/postgres_mcp/)
Connect your IDE to PostgreSQL using Toolbox.
## Requirements
### Database User
This source only uses standard authentication. You will need to [create a
PostgreSQL user][pg-users] to login to the database with.
[pg-users]: https://www.postgresql.org/docs/current/sql-createuser.html
## Example
```yaml
sources:
my-pg-source:
kind: postgres
host: 127.0.0.1
port: 5432
database: my_db
user: ${USER_NAME}
password: ${PASSWORD}
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:------------------:|:------------:|------------------------------------------------------------------------|
| kind | string | true | Must be "postgres". |
| host        | string             | true         | IP address to connect to (e.g. "127.0.0.1").                            |
| port        | string             | true         | Port to connect to (e.g. "5432").                                        |
| database | string | true | Name of the Postgres database to connect to (e.g. "my_db"). |
| user | string | true | Name of the Postgres user to connect as (e.g. "my-pg-user"). |
| password | string | true | Password of the Postgres user (e.g. "my-password"). |
| queryParams | map[string]string  | false        | Raw query parameters to be appended to the database connection string (see the sketch below). |
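For example, `queryParams` can be used to require TLS on the connection. This is a minimal sketch; `sslmode` is a standard libpq connection parameter, but confirm that your server is configured for TLS before relying on it:
```yaml
sources:
  my-pg-source:
    kind: postgres
    host: 127.0.0.1
    port: 5432
    database: my_db
    user: ${USER_NAME}
    password: ${PASSWORD}
    queryParams:
      sslmode: require   # appended to the connection string as sslmode=require
```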
# Redis
Redis is an in-memory data structure store.
## About
Redis is an in-memory data structure store, used as a database,
cache, and message broker. It supports data structures such as strings, hashes,
lists, sets, sorted sets with range queries, bitmaps, hyperloglogs, and
geospatial indexes with radius queries.
If you are new to Redis, you can find installation and getting started guides on
the [official Redis website](https://redis.io/docs/).
## Available Tools
- [`redis`](../tools/redis/redis.md)
Run Redis commands and interact with key-value pairs.
## Requirements
### Redis
An [AUTH string][auth] is a password used for connecting to Redis. If you have the
`requirepass` directive set in your Redis configuration, incoming client
connections must authenticate in order to connect.
Specify your AUTH string in the password field:
```yaml
sources:
my-redis-instance:
kind: redis
address:
- 127.0.0.1:6379
username: ${MY_USER_NAME}
password: ${MY_AUTH_STRING} # Omit this field if you don't have a password.
# database: 0
# clusterEnabled: false
# useGCPIAM: false
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
### Memorystore For Redis
Memorystore standalone instances support authentication using an [AUTH][auth]
string.
Here is an example tools.yaml config with [AUTH][auth] enabled:
```yaml
sources:
my-redis-instance:
kind: memorystore-redis
address:
- 127.0.0.1:6379
password: ${MY_AUTH_STRING}
# useGCPIAM: false
# clusterEnabled: false
```
Memorystore Redis Cluster supports IAM authentication instead. Grant your
account the required [IAM role][iam] and make sure to set `useGCPIAM` to `true`.
Here is an example tools.yaml config for Memorystore Redis Cluster instances
using IAM authentication:
```yaml
sources:
my-redis-cluster-instance:
kind: memorystore-redis
address:
- 127.0.0.1:6379
useGCPIAM: true
clusterEnabled: true
```
[iam]: https://cloud.google.com/memorystore/docs/cluster/about-iam-auth
## Reference
| **field** | **type** | **required** | **description** |
|----------------|:--------:|:------------:|---------------------------------------------------------------------------------------------------------------------------------|
| kind           | string   |     true     | Must be "redis" (or "memorystore-redis" for Memorystore instances, as shown above).                                               |
| address        | []string |     true     | Primary endpoint(s) for the Redis instance to connect to.                                                                         |
| username | string | false | If you are using a non-default user, specify the user name here. If you are using Memorystore for Redis, leave this field blank |
| password | string | false | If you have [Redis AUTH][auth] enabled, specify the AUTH string here |
| database | int | false | The Redis database to connect to. Not applicable for cluster enabled instances. The default database is `0`. |
| clusterEnabled | bool | false | Set it to `true` if using a Redis Cluster instance. Defaults to `false`. |
| useGCPIAM      | bool     |    false     | Set it to `true` if you are using GCP's IAM authentication. Defaults to `false`.                                                  |
[auth]: https://cloud.google.com/memorystore/docs/redis/about-redis-auth
# Serverless for Apache Spark
Google Cloud Serverless for Apache Spark lets you run Spark workloads without requiring you to provision and manage your own Spark cluster.
## About
The [Serverless for Apache
Spark](https://cloud.google.com/dataproc-serverless/docs/overview) source allows
Toolbox to interact with Spark batches hosted on Google Cloud Serverless for
Apache Spark.
## Available Tools
- [`serverless-spark-list-batches`](../tools/serverless-spark/serverless-spark-list-batches.md)
List and filter Serverless Spark batches.
- [`serverless-spark-get-batch`](../tools/serverless-spark/serverless-spark-get-batch.md)
Get a Serverless Spark batch.
- [`serverless-spark-cancel-batch`](../tools/serverless-spark/serverless-spark-cancel-batch.md)
Cancel a running Serverless Spark batch operation.
## Requirements
### IAM Permissions
Serverless for Apache Spark uses [Identity and Access Management
(IAM)](https://cloud.google.com/bigquery/docs/access-control) to control user
and group access to serverless Spark resources like batches and sessions.
Toolbox will use your [Application Default Credentials
(ADC)](https://cloud.google.com/docs/authentication#adc) to authorize and
authenticate when interacting with Google Cloud Serverless for Apache Spark.
When using this method, you need to ensure the IAM identity associated with your
ADC has the correct
[permissions](https://cloud.google.com/dataproc-serverless/docs/concepts/iam)
for the actions you intend to perform. Common roles include
`roles/dataproc.serverlessEditor` (which includes permissions to run batches) or
`roles/dataproc.serverlessViewer`. Follow this
[guide](https://cloud.google.com/docs/authentication/provide-credentials-adc) to
set up your ADC.
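For local development, you can typically establish ADC with the gcloud CLI:
```bash
# Authenticate and write Application Default Credentials to your machine.
gcloud auth application-default login
```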
## Example
```yaml
sources:
my-serverless-spark-source:
kind: serverless-spark
project: my-project-id
location: us-central1
```
## Reference
| **field** | **type** | **required** | **description** |
| --------- | :------: | :----------: | ----------------------------------------------------------------- |
| kind | string | true | Must be "serverless-spark". |
| project | string | true | ID of the GCP project with Serverless for Apache Spark resources. |
| location | string | true | Location containing Serverless for Apache Spark resources. |
# SingleStore
SingleStore is the cloud-native database built with speed and scale to power data-intensive applications.
## About
[SingleStore][singlestore-docs] is a distributed SQL database built to power
intelligent applications. It is both relational and multi-model, enabling
developers to easily build and scale applications and workloads.
SingleStore is built around Universal Storage which combines in-memory rowstore
and on-disk columnstore data formats to deliver a single table type that is
optimized to handle both transactional and analytical workloads.
[singlestore-docs]: https://docs.singlestore.com/
## Available Tools
- [`singlestore-sql`](../tools/singlestore/singlestore-sql.md)
Execute pre-defined prepared SQL queries in SingleStore.
- [`singlestore-execute-sql`](../tools/singlestore/singlestore-execute-sql.md)
Run parameterized SQL queries in SingleStore.
## Requirements
### Database User
This source only uses standard authentication. You will need to [create a
database user][singlestore-user] to login to the database with.
[singlestore-user]:
https://docs.singlestore.com/cloud/reference/sql-reference/security-management-commands/create-user/
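As a rough sketch (user and database names are placeholders; see the linked SingleStore docs for the full syntax and options):
```sql
-- Sketch only: create a database user for Toolbox and grant it
-- access to one database. 'toolbox_user' and 'my_db' are placeholders.
CREATE USER 'toolbox_user'@'%' IDENTIFIED BY 'a-strong-password';
GRANT SELECT, INSERT, UPDATE, DELETE ON my_db.* TO 'toolbox_user'@'%';
```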
## Example
```yaml
sources:
my-singlestore-source:
kind: singlestore
host: 127.0.0.1
port: 3306
database: my_db
user: ${USER_NAME}
password: ${PASSWORD}
queryTimeout: 30s # Optional: query timeout duration
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
## Reference
| **field** | **type** | **required** | **description** |
|--------------|:--------:|:------------:|-------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "singlestore". |
| host | string | true | IP address to connect to (e.g. "127.0.0.1"). |
| port | string | true | Port to connect to (e.g. "3306"). |
| database | string | true | Name of the SingleStore database to connect to (e.g. "my_db"). |
| user | string | true | Name of the SingleStore database user to connect as (e.g. "admin"). |
| password | string | true | Password of the SingleStore database user. |
| queryTimeout | string | false | Maximum time to wait for query execution (e.g. "30s", "2m"). By default, no timeout is applied. |
# Spanner
Spanner is a fully managed database service from Google Cloud that combines relational, key-value, graph, and search capabilities.
## About
[Spanner][spanner-docs] is a fully managed, mission-critical database service
that brings together relational, graph, key-value, and search. It offers
transactional consistency at global scale, automatic, synchronous replication
for high availability, and support for two SQL dialects: GoogleSQL (ANSI 2011
with extensions) and PostgreSQL.
If you are new to Spanner, you can try to [create and query a database using
the Google Cloud console][spanner-quickstart].
[spanner-docs]: https://cloud.google.com/spanner/docs
[spanner-quickstart]:
https://cloud.google.com/spanner/docs/create-query-database-console
## Available Tools
- [`spanner-sql`](../tools/spanner/spanner-sql.md)
Execute SQL on Google Cloud Spanner.
- [`spanner-execute-sql`](../tools/spanner/spanner-execute-sql.md)
Run structured and parameterized queries on Spanner.
- [`spanner-list-tables`](../tools/spanner/spanner-list-tables.md)
Retrieve schema information about tables in a Spanner database.
- [`spanner-list-graphs`](../tools/spanner/spanner-list-graphs.md)
Retrieve schema information about graphs in a Spanner database.
### Pre-built Configurations
- [Spanner using MCP](https://googleapis.github.io/genai-toolbox/how-to/connect-ide/spanner_mcp/)
Connect your IDE to Spanner using Toolbox.
## Requirements
### IAM Permissions
Spanner uses [Identity and Access Management (IAM)][iam-overview] to control
user and group access to Spanner resources at the project, Spanner instance, and
Spanner database levels. Toolbox will use your [Application Default Credentials
(ADC)][adc] to authorize and authenticate when interacting with Spanner.
In addition to [setting the ADC for your server][set-adc], you need to ensure
the IAM identity has been given the correct IAM permissions for the query
provided. See [Apply IAM roles][grant-permissions] for more information on
applying IAM permissions and roles to an identity.
[iam-overview]: https://cloud.google.com/spanner/docs/iam
[adc]: https://cloud.google.com/docs/authentication#adc
[set-adc]: https://cloud.google.com/docs/authentication/provide-credentials-adc
[grant-permissions]: https://cloud.google.com/spanner/docs/grant-permissions
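For example, you could grant an identity the Database User role on a single database with the gcloud CLI (a sketch; replace the instance, database, and member values with your own):
```bash
# Sketch: grant roles/spanner.databaseUser on one database.
# "my_db", "my-instance", and the member email are placeholders.
gcloud spanner databases add-iam-policy-binding my_db \
  --instance=my-instance \
  --member="user:agent-runner@example.com" \
  --role="roles/spanner.databaseUser"
```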
## Example
```yaml
sources:
my-spanner-source:
kind: "spanner"
project: "my-project-id"
instance: "my-instance"
database: "my_db"
# dialect: "googlesql"
```
## Reference
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|---------------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "spanner". |
| project   | string   | true         | ID of the GCP project that the Spanner instance was created in (e.g. "my-project-id").                               |
| instance  | string   | true         | Name of the Spanner instance.                                                                                        |
| database  | string   | true         | Name of the database on the Spanner instance.                                                                        |
| dialect | string | false | Name of the dialect type of the Spanner database, must be either `googlesql` or `postgresql`. Default: `googlesql`. |
# SQL Server
SQL Server is a relational database management system (RDBMS).
## About
[SQL Server][mssql-docs] is a relational database management system (RDBMS)
developed by Microsoft that allows users to store, retrieve, and manage large
amounts of data through a structured format.
[mssql-docs]: https://www.microsoft.com/en-us/sql-server
## Available Tools
- [`mssql-sql`](../tools/mssql/mssql-sql.md)
Execute pre-defined SQL Server queries with placeholder parameters.
- [`mssql-execute-sql`](../tools/mssql/mssql-execute-sql.md)
Run parameterized SQL Server queries in SQL Server.
- [`mssql-list-tables`](../tools/mssql/mssql-list-tables.md)
List tables in a SQL Server database.
## Requirements
### Database User
This source only uses standard authentication. You will need to [create a
SQL Server user][mssql-users] to login to the database with.
[mssql-users]:
https://learn.microsoft.com/en-us/sql/relational-databases/security/authentication-access/create-a-database-user?view=sql-server-ver16
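As a rough sketch (login, user, and database names are placeholders; see the linked Microsoft docs for details):
```sql
-- Sketch only: create a server login, then a database user mapped to it,
-- and grant the user read access. All names and the password are placeholders.
CREATE LOGIN toolbox_login WITH PASSWORD = 'A-Strong-Password-1!';
GO
USE my_db;
CREATE USER toolbox_user FOR LOGIN toolbox_login;
ALTER ROLE db_datareader ADD MEMBER toolbox_user;
GO
```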
## Example
```yaml
sources:
my-mssql-source:
kind: mssql
host: 127.0.0.1
port: 1433
database: my_db
user: ${USER_NAME}
password: ${PASSWORD}
# encrypt: strict
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
## Reference
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "mssql". |
| host | string | true | IP address to connect to (e.g. "127.0.0.1"). |
| port | string | true | Port to connect to (e.g. "1433"). |
| database | string | true | Name of the SQL Server database to connect to (e.g. "my_db"). |
| user | string | true | Name of the SQL Server user to connect as (e.g. "my-user"). |
| password | string | true | Password of the SQL Server user (e.g. "my-password"). |
| encrypt | string | false | Encryption level for data transmitted between the client and server (e.g., "strict"). If not specified, defaults to the [github.com/microsoft/go-mssqldb](https://github.com/microsoft/go-mssqldb?tab=readme-ov-file#common-parameters) package's default encrypt value. |
# SQLite
SQLite is a C-language library that implements a small, fast, self-contained, high-reliability, full-featured, SQL database engine.
## About
[SQLite](https://sqlite.org/) is a software library that provides a relational
database management system. The lite in SQLite means lightweight in terms of
setup, database administration, and required resources.
SQLite has the following notable characteristics:
- Self-contained with no external dependencies
- Serverless - the SQLite library accesses its storage files directly
- Single database file that can be easily copied or moved
- Zero-configuration - no setup or administration needed
- Transactional with ACID properties
## Available Tools
- [`sqlite-sql`](../tools/sqlite/sqlite-sql.md)
Run SQL queries against a local SQLite database.
- [`sqlite-execute-sql`](../tools/sqlite/sqlite-execute-sql.md)
  Run parameterized SQL statements in SQLite.
### Pre-built Configurations
- [SQLite using MCP](../../how-to/connect-ide/sqlite_mcp.md)
  Connect your IDE to SQLite using Toolbox.
## Requirements
### Database File
You need a SQLite database file. This can be:
- An existing database file
- A path where a new database file should be created
- `:memory:` for an in-memory database
## Example
```yaml
sources:
my-sqlite-db:
kind: "sqlite"
database: "/path/to/database.db"
```
For an in-memory database:
```yaml
sources:
my-sqlite-memory-db:
kind: "sqlite"
database: ":memory:"
```
## Reference
### Configuration Fields
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|---------------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "sqlite". |
| database | string | true | Path to SQLite database file, or ":memory:" for an in-memory database. |
### Connection Properties
SQLite connections are configured with these defaults for optimal performance:
- `MaxOpenConns`: 1 (SQLite only supports one writer at a time)
- `MaxIdleConns`: 1
# TiDB
TiDB is a distributed SQL database that combines the best of traditional RDBMS and NoSQL databases.
## About
[TiDB][tidb-docs] is an open-source distributed SQL database that supports
Hybrid Transactional and Analytical Processing (HTAP) workloads. It is
MySQL-compatible and features horizontal scalability, strong consistency, and
high availability.
[tidb-docs]: https://docs.pingcap.com/tidb/stable
## Requirements
### Database User
This source uses standard MySQL protocol authentication. You will need to
[create a TiDB user][tidb-users] to login to the database with.
For TiDB Cloud users, you can create database users through the TiDB Cloud
console.
[tidb-users]: https://docs.pingcap.com/tidb/stable/user-account-management
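As a rough sketch using TiDB's MySQL-compatible syntax (user and database names are placeholders):
```sql
-- Sketch only: create a TiDB user and grant it access to one database.
-- 'toolbox_user' and 'my_db' are placeholders.
CREATE USER 'toolbox_user'@'%' IDENTIFIED BY 'a-strong-password';
GRANT SELECT, INSERT, UPDATE, DELETE ON my_db.* TO 'toolbox_user'@'%';
```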
## SSL Configuration
- TiDB Cloud
For TiDB Cloud instances, SSL is automatically enabled when the hostname
matches the TiDB Cloud pattern (`gateway*.*.*.tidbcloud.com`). You don't
need to explicitly set `ssl: true` for TiDB Cloud connections.
- Self-Hosted TiDB
For self-hosted TiDB instances, you can optionally enable SSL by setting
`ssl: true` in your configuration.
## Example
- TiDB Cloud
```yaml
sources:
my-tidb-cloud-source:
kind: tidb
host: gateway01.us-west-2.prod.aws.tidbcloud.com
port: 4000
database: my_db
user: ${TIDB_USERNAME}
password: ${TIDB_PASSWORD}
# SSL is automatically enabled for TiDB Cloud
```
- Self-Hosted TiDB
```yaml
sources:
my-tidb-source:
kind: tidb
host: 127.0.0.1
port: 4000
database: my_db
user: ${TIDB_USERNAME}
password: ${TIDB_PASSWORD}
# ssl: true # Optional: enable SSL for secure connections
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
## Reference
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|--------------------------------------------------------------------------------------------|
| kind | string | true | Must be "tidb". |
| host | string | true | IP address or hostname to connect to (e.g. "127.0.0.1" or "gateway01.*.tidbcloud.com"). |
| port | string | true | Port to connect to (typically "4000" for TiDB). |
| database | string | true | Name of the TiDB database to connect to (e.g. "my_db"). |
| user | string | true | Name of the TiDB user to connect as (e.g. "my-tidb-user"). |
| password | string | true | Password of the TiDB user (e.g. "my-password"). |
| ssl | boolean | false | Whether to use SSL/TLS encryption. Automatically enabled for TiDB Cloud instances. |
# Trino
Trino is a distributed SQL query engine for big data analytics.
## About
[Trino][trino-docs] is a distributed SQL query engine designed for fast analytic
queries against data of any size. It allows you to query data where it lives,
including Hive, Cassandra, relational databases or even proprietary data stores.
[trino-docs]: https://trino.io/docs/
## Available Tools
- [`trino-sql`](../tools/trino/trino-sql.md)
Execute parameterized SQL queries against Trino.
- [`trino-execute-sql`](../tools/trino/trino-execute-sql.md)
Execute arbitrary SQL queries against Trino.
## Requirements
### Trino Cluster
You need access to a running Trino cluster with appropriate user permissions for
the catalogs and schemas you want to query.
## Example
```yaml
sources:
my-trino-source:
kind: trino
host: trino.example.com
port: "8080"
user: ${TRINO_USER} # Optional for anonymous access
password: ${TRINO_PASSWORD} # Optional
catalog: hive
schema: default
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
## Reference
| **field** | **type** | **required** | **description** |
|-----------------|:--------:|:------------:|------------------------------------------------------------------------------|
| kind | string | true | Must be "trino". |
| host | string | true | Trino coordinator hostname (e.g. "trino.example.com") |
| port | string | true | Trino coordinator port (e.g. "8080", "8443") |
| user | string | false | Username for authentication (e.g. "analyst"). Optional for anonymous access. |
| password | string | false | Password for basic authentication |
| catalog | string | true | Default catalog to use for queries (e.g. "hive") |
| schema | string | true | Default schema to use for queries (e.g. "default") |
| queryTimeout | string | false | Query timeout duration (e.g. "30m", "1h") |
| accessToken | string | false | JWT access token for authentication |
| kerberosEnabled | boolean | false | Enable Kerberos authentication (default: false) |
| sslEnabled | boolean | false | Enable SSL/TLS (default: false) |
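For a TLS-secured cluster that uses token authentication, a configuration might look like the following sketch, which combines the optional `sslEnabled` and `accessToken` fields from the table above (host and token values are illustrative):
```yaml
sources:
  my-secure-trino-source:
    kind: trino
    host: trino.example.com   # illustrative hostname
    port: "8443"
    catalog: hive
    schema: default
    sslEnabled: true
    accessToken: ${TRINO_ACCESS_TOKEN}
```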
# Valkey
Valkey is an open-source, in-memory data structure store, forked from Redis.
## About
Valkey is an open-source, in-memory data structure store that originated as a
fork of Redis. It's designed to be used as a database, cache, and message
broker, supporting a wide range of data structures like strings, hashes, lists,
sets, sorted sets with range queries, bitmaps, hyperloglogs, and geospatial
indexes with radius queries.
If you're new to Valkey, you can find installation and getting started guides on
the [official Valkey website](https://valkey.io/topics/quickstart/).
## Available Tools
- [`valkey`](../tools/valkey/valkey.md)
Issue Valkey (Redis-compatible) commands.
## Example
```yaml
sources:
my-valkey-instance:
kind: valkey
address:
- 127.0.0.1:6379
username: ${YOUR_USERNAME}
password: ${YOUR_PASSWORD}
# database: 0
# useGCPIAM: false
# disableCache: false
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
### IAM Authentication
If you are using GCP's Memorystore for Valkey, you can connect using IAM
authentication. Grant your account the required [IAM role][iam] and set
`useGCPIAM` to `true`:
```yaml
sources:
my-valkey-instance:
kind: valkey
address:
- 127.0.0.1:6379
useGCPIAM: true
```
[iam]: https://cloud.google.com/memorystore/docs/valkey/about-iam-auth
## Reference
| **field** | **type** | **required** | **description** |
|--------------|:--------:|:------------:|----------------------------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "valkey". |
| address | []string | true | Endpoints for the Valkey instance to connect to. |
| username | string | false | If you are using a non-default user, specify the user name here. If you are using Memorystore for Valkey, leave this field blank |
| password | string | false | Password for the Valkey instance |
| database | int | false | The Valkey database to connect to. Not applicable for cluster enabled instances. The default database is `0`. |
| useGCPIAM | bool | false | Set it to `true` if you are using GCP's IAM authentication. Defaults to `false`. |
| disableCache | bool     | false        | Set it to `true` to disable client-side caching. Defaults to `false`.                                                             |
# YugabyteDB
YugabyteDB is a high-performance, distributed SQL database.
## About
[YugabyteDB][yugabytedb] is a high-performance, distributed SQL database
designed for global, internet-scale applications, with full PostgreSQL
compatibility.
[yugabytedb]: https://www.yugabyte.com/
## Example
```yaml
sources:
my-yb-source:
kind: yugabytedb
host: 127.0.0.1
port: 5433
database: yugabyte
user: ${USER_NAME}
password: ${PASSWORD}
loadBalance: true
topologyKeys: cloud.region.zone1:1,cloud.region.zone2:2
```
## Reference
| **field** | **type** | **required** | **description** |
|------------------------------|:--------:|:------------:|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "yugabytedb". |
| host | string | true | IP address to connect to. |
| port | integer | true | Port to connect to. The default port is 5433. |
| database | string | true | Name of the YugabyteDB database to connect to. The default database name is yugabyte. |
| user | string | true | Name of the YugabyteDB user to connect as. The default user is yugabyte. |
| password | string | true | Password of the YugabyteDB user. The default password is yugabyte. |
| loadBalance | boolean | false | If true, enable uniform load balancing. The default loadBalance value is false. |
| topologyKeys | string | false | Comma-separated geo-locations in the form cloud.region.zone:priority to enable topology-aware load balancing. Ignored if loadBalance is false. It is null by default. |
| ybServersRefreshInterval | integer | false | The interval (in seconds) to refresh the servers list; ignored if loadBalance is false. The default value of ybServersRefreshInterval is 300. |
| fallbackToTopologyKeysOnly   | boolean  | false        | If set to true and topologyKeys are specified, only connect to nodes specified in topologyKeys. By default, this is set to false.                                        |
| failedHostReconnectDelaySecs | integer  | false        | Time (in seconds) to wait before trying to connect to failed nodes. The default value is 5.                                                                              |
# Tools
Tools define actions an agent can take -- such as reading and writing to a source.
A tool represents an action your agent can take, such as running a SQL
statement. You can define Tools as a map in the `tools` section of your
`tools.yaml` file. Typically, a tool will require a source to act on:
```yaml
tools:
search_flights_by_number:
kind: postgres-sql
source: my-pg-instance
statement: |
SELECT * FROM flights
WHERE airline = $1
AND flight_number = $2
LIMIT 10
description: |
Use this tool to get information for a specific flight.
Takes an airline code and flight number and returns info on the flight.
Do NOT use this tool with a flight id. Do NOT guess an airline code or flight number.
An airline code is a code for an airline service consisting of a two-character
airline designator and followed by a flight number, which is a 1 to 4 digit number.
For example, if given CY 0123, the airline is "CY", and flight_number is "123".
Another example for this is DL 1234, the airline is "DL", and flight_number is "1234".
If the tool returns more than one option choose the date closest to today.
Example:
{{
"airline": "CY",
"flight_number": "888",
}}
Example:
{{
"airline": "DL",
"flight_number": "1234",
}}
parameters:
- name: airline
type: string
description: Airline unique 2 letter identifier
- name: flight_number
type: string
description: 1 to 4 digit number
```
## Specifying Parameters
Parameters for each Tool will define what inputs the agent will need to provide
to invoke them. Parameters should be passed as a list of Parameter objects:
```yaml
parameters:
- name: airline
type: string
description: Airline unique 2 letter identifier
- name: flight_number
type: string
description: 1 to 4 digit number
```
### Basic Parameters
Basic parameter types include `string`, `integer`, `float`, and `boolean`. In
most cases, the description will be provided to the LLM as context for
specifying the parameter.
```yaml
parameters:
- name: airline
type: string
description: Airline unique 2 letter identifier
```
| **field** | **type** | **required** | **description** |
|----------------|:--------------:|:------------:|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| name | string | true | Name of the parameter. |
| type           | string         | true         | Must be one of "string", "integer", "float", "boolean", "array"                                                                                                                                                                         |
| description | string | true | Natural language description of the parameter to describe it to the agent. |
| default | parameter type | false | Default value of the parameter. If provided, `required` will be `false`. |
| required       | bool           | false        | Indicates if the parameter is required. Defaults to `true`.                                                                                                                                                                             |
| allowedValues | []string | false | Input value will be checked against this field. Regex is also supported. |
| excludedValues | []string | false | Input value will be checked against this field. Regex is also supported. |
| escape | string | false | Only available for type `string`. Indicate the escaping delimiters used for the parameter. This field is intended to be used with templateParameters. Must be one of "single-quotes", "double-quotes", "backticks", "square-brackets". |
| minValue | int or float | false | Only available for type `integer` and `float`. Indicate the minimum value allowed. |
| maxValue | int or float | false | Only available for type `integer` and `float`. Indicate the maximum value allowed. |
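For example, the validation fields above can be combined to constrain inputs. The following is a sketch with illustrative parameter names:
```yaml
parameters:
  - name: seat_class
    type: string
    description: Cabin class to search for.
    default: economy          # providing a default makes the parameter optional
    allowedValues:
      - economy
      - business
      - first
  - name: max_results
    type: integer
    description: Maximum number of results to return.
    minValue: 1
    maxValue: 50
```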
### Array Parameters
The `array` type is a list of items passed in as a single parameter.
To use the `array` type, you must also specify what kind of items are
in the list using the `items` field:
```yaml
parameters:
- name: preferred_airlines
type: array
    description: A list of airlines, ordered by preference.
items:
name: name
type: string
description: Name of the airline.
statement: |
SELECT * FROM airlines WHERE preferred_airlines = ANY($1);
```
| **field** | **type** | **required** | **description** |
|----------------|:----------------:|:------------:|----------------------------------------------------------------------------|
| name | string | true | Name of the parameter. |
| type | string | true | Must be "array" |
| description | string | true | Natural language description of the parameter to describe it to the agent. |
| default | parameter type | false | Default value of the parameter. If provided, `required` will be `false`. |
| required       | bool             | false        | Indicates if the parameter is required. Defaults to `true`.                 |
| allowedValues | []string | false | Input value will be checked against this field. Regex is also supported. |
| excludedValues | []string | false | Input value will be checked against this field. Regex is also supported. |
| items | parameter object | true | Specify a Parameter object for the type of the values in the array. |
{{< notice note >}}
Items in an array should not have a `default` or `required` value. If provided,
they will be ignored.
{{< /notice >}}
### Map Parameters
The map type is a collection of key-value pairs. It can be configured in two
ways:
- Generic Map: By default, it accepts values of any primitive type (string,
  integer, float, boolean), allowing for mixed data.
- Typed Map: By setting the `valueType` field, you can enforce that all values
  within the map must be of the same specified type.
#### Generic Map (Mixed Value Types)
This is the default behavior when `valueType` is omitted. It's useful for
passing a flexible group of settings.
```yaml
parameters:
- name: execution_context
type: map
description: A flexible set of key-value pairs for the execution environment.
```
#### Typed Map
Specify `valueType` to ensure all values in the map are of the same type. An
error will be thrown in case of a value type mismatch.
```yaml
parameters:
- name: user_scores
type: map
description: A map of user IDs to their scores. All scores must be integers.
valueType: integer # This enforces the value type for all entries.
```
### Authenticated Parameters
Authenticated parameters are automatically populated with user
information decoded from [ID
tokens](../authServices/#specifying-id-tokens-from-clients) that are passed in
request headers. They do not take input values in request bodies like other
parameters. To use authenticated parameters, you must configure the tool to map
the required [authServices](../authServices/) to specific claims within the
user's ID token.
```yaml
tools:
search_flights_by_user_id:
kind: postgres-sql
source: my-pg-instance
statement: |
SELECT * FROM flights WHERE user_id = $1
parameters:
- name: user_id
type: string
description: Auto-populated from Google login
authServices:
# Refer to one of the `authServices` defined
- name: my-google-auth
# `sub` is the OIDC claim field for user ID
field: sub
```
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|----------------------------------------------------------------------------------|
| name | string | true | Name of the [authServices](../authServices/) used to verify the OIDC auth token. |
| field | string | true | Claim field decoded from the OIDC token used to auto-populate this parameter. |
### Template Parameters
Template parameter types include `string`, `integer`, `float`, and `boolean`.
In most cases, the description will be provided to the LLM as context for
specifying the parameter. Template parameters are inserted into the SQL
statement before the prepared statement is executed. They are inserted without
quotes, so to insert a string using template parameters, quotes must be
explicitly added within the string.
Template parameter arrays can also be used similarly to basic parameters, and
array items must be strings. Once inserted into the SQL statement, the outer
layer of quotes will be removed. Therefore, to insert strings into the SQL
statement, a set of quotes must be explicitly added within the string.
{{< notice warning >}}
Because template parameters can directly replace identifiers, column names, and
table names, they are prone to SQL injections. Basic parameters are preferred
for performance and safety reasons.
{{< /notice >}}
{{< notice tip >}}
To minimize SQL injection risk when using template parameters, always provide
the `allowedValues` field within the parameter to restrict inputs.
Alternatively, for `string` type parameters, you can use the `escape` field to
add delimiters to the identifier. For `integer` or `float` type parameters, you
can use `minValue` and `maxValue` to define the allowable range.
{{< /notice >}}
```yaml
tools:
select_columns_from_table:
kind: postgres-sql
source: my-pg-instance
statement: |
SELECT {{array .columnNames}} FROM {{.tableName}}
description: |
Use this tool to list all information from a specific table.
Example:
{{
"tableName": "flights",
"columnNames": ["id", "name"]
}}
templateParameters:
- name: tableName
type: string
description: Table to select from
- name: columnNames
type: array
description: The columns to select
items:
name: column
type: string
description: Name of a column to select
escape: double-quotes # with this, the statement will resolve to `SELECT "id", "name" FROM flights`
```
| **field** | **type** | **required** | **description** |
|----------------|:----------------:|:---------------:|-------------------------------------------------------------------------------------|
| name | string | true | Name of the template parameter. |
| type | string | true | Must be one of "string", "integer", "float", "boolean", "array" |
| description | string | true | Natural language description of the template parameter to describe it to the agent. |
| default | parameter type | false | Default value of the parameter. If provided, `required` will be `false`. |
| required       | bool             | false           | Indicates if the parameter is required. Defaults to `true`.                          |
| allowedValues | []string | false | Input value will be checked against this field. Regex is also supported. |
| excludedValues | []string | false | Input value will be checked against this field. Regex is also supported. |
| items | parameter object | true (if array) | Specify a Parameter object for the type of the values in the array (string only). |
## Authorized Invocations
You can require an authorization check for any Tool invocation request by
specifying an `authRequired` field. Specify a list of
[authServices](../authServices/) defined in the previous section.
```yaml
tools:
search_all_flight:
kind: postgres-sql
source: my-pg-instance
statement: |
SELECT * FROM flights
# A list of `authServices` defined previously
authRequired:
- my-google-auth
- other-auth-service
```
## Kinds of tools
# AlloyDB AI NL
AlloyDB AI NL Tool.
# AlloyDB for PostgreSQL
Tools for AlloyDB for PostgreSQL.
# BigQuery
Tools that work with BigQuery Sources.
# Bigtable
Tools that work with Bigtable Sources.
# Cassandra
Tools that work with Cassandra Sources.
# ClickHouse
Tools for interacting with ClickHouse databases and tables.
# Cloud Healthcare API
Tools that work with Cloud Healthcare Sources.
# Cloud Monitoring
Tools that work with Cloud Monitoring source.
# Cloud SQL
Tools that work with Cloud SQL.
# Couchbase
Tools that work with Couchbase Sources.
# Dataform
Tools that work with Dataform.
# Dataplex
Tools that work with Dataplex Sources.
# Dgraph
Tools that work with Dgraph Sources.
# Elasticsearch
Tools that work with Elasticsearch Sources.
# Firebird
Tools that work with Firebird Sources.
# Firestore
Tools that work with Firestore Sources.
# HTTP
Tools that work with HTTP Sources.
# Looker
Tools that work with Looker Sources.
# MindsDB Tools
MindsDB tools that enable SQL queries across hundreds of datasources and ML models.
## About
MindsDB is an AI federated database that enables you to query hundreds of
datasources and ML models through a single SQL interface. The following tools
work with MindsDB databases:
- [mindsdb-execute-sql](mindsdb-execute-sql.md) - Execute SQL queries directly
on MindsDB
- [mindsdb-sql](mindsdb-sql.md) - Execute parameterized SQL queries on MindsDB
These tools leverage MindsDB's capabilities to:
- **Connect to Multiple Datasources**: Query databases, APIs, file systems, and
more through SQL
- **Cross-Datasource Operations**: Perform joins and analytics across different
data sources
- **ML Model Integration**: Use trained ML models as virtual tables for
predictions
- **Unstructured Data Processing**: Query documents, images, and other
unstructured data as structured tables
- **Real-time Predictions**: Get real-time predictions from ML models through
SQL
- **API Translation**: Automatically translate SQL queries into REST APIs,
GraphQL, and native protocols
## Supported Datasources
MindsDB automatically translates your SQL queries into the appropriate APIs for
hundreds of datasources:
### **Business Applications**
- **Salesforce**: Query leads, opportunities, accounts, and custom objects
- **Jira**: Access issues, projects, workflows, and team data
- **GitHub**: Query repositories, commits, pull requests, and issues
- **Slack**: Access channels, messages, and team communications
- **HubSpot**: Query contacts, companies, deals, and marketing data
### **Databases & Storage**
- **MongoDB**: Query NoSQL collections as structured tables
- **PostgreSQL/MySQL**: Standard relational databases
- **Redis**: Key-value stores and caching layers
- **Elasticsearch**: Search and analytics data
- **S3/Google Cloud Storage**: File storage and data lakes
### **Communication & Email**
- **Gmail/Outlook**: Query emails, attachments, and metadata
- **Microsoft Teams**: Team communications and files
- **Discord**: Server data and message history
### **Analytics & Monitoring**
- **Google Analytics**: Website traffic and user behavior
- **Mixpanel**: Product analytics and user events
- **Datadog**: Infrastructure monitoring and logs
- **Grafana**: Time-series data and metrics
## Example Use Cases
### Cross-Datasource Analytics
```sql
-- Join Salesforce opportunities with GitHub activity
SELECT
s.opportunity_name,
s.amount,
g.repository_name,
COUNT(g.commits) as commit_count
FROM salesforce.opportunities s
JOIN github.repositories g ON s.account_id = g.owner_id
WHERE s.stage = 'Closed Won';
```
### Email & Communication Analysis
```sql
-- Analyze email patterns with Slack activity
SELECT
e.sender,
e.subject,
s.channel_name,
COUNT(s.messages) as message_count
FROM gmail.emails e
JOIN slack.messages s ON e.sender = s.user_name
WHERE e.date >= '2024-01-01';
```
### ML Model Predictions
```sql
-- Use ML model to predict customer churn
SELECT
customer_id,
customer_name,
predicted_churn_probability,
recommended_action
FROM customer_churn_model
WHERE predicted_churn_probability > 0.8;
```
Since MindsDB implements the MySQL wire protocol, these tools are functionally
compatible with MySQL tools while providing access to MindsDB's advanced
federated database capabilities.
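Because of that wire compatibility, you can sanity-check a running MindsDB instance with a standard MySQL client; the host, port, and user below match the example configuration that follows:
```bash
# Connect to a local MindsDB instance over its MySQL-compatible API.
# Host, port, and user match the working configuration example below.
mysql -h 127.0.0.1 -P 47335 -u mindsdb
```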
## Working Configuration Example
Here's a complete working configuration that has been tested:
```yaml
sources:
my-pg-source:
kind: mindsdb
host: 127.0.0.1
port: 47335
database: files
user: mindsdb
tools:
mindsdb-execute-sql:
kind: mindsdb-execute-sql
source: my-pg-source
description: |
Execute SQL queries directly on MindsDB database.
Use this tool to run any SQL statement against your MindsDB instance.
Example: SELECT * FROM my_table LIMIT 10
```
# MongoDB
Tools that work with the MongoDB Source.
# MySQL
Tools that work with MySQL Sources, such as Cloud SQL for MySQL.
# Neo4j
Tools that work with Neo4j Sources.
# OceanBase
Tools that work with OceanBase Sources.
# Oracle
Tools that work with Oracle Sources.
# Postgres
Tools that work with Postgres Sources, such as Cloud SQL for Postgres and AlloyDB.
# Redis
Tools that work with Redis Sources.
# Serverless for Apache Spark
Tools that work with Google Cloud Serverless for Apache Spark Sources.
- [serverless-spark-get-batch](./serverless-spark-get-batch.md)
- [serverless-spark-list-batches](./serverless-spark-list-batches.md)
- [serverless-spark-cancel-batch](./serverless-spark-cancel-batch.md)
# SingleStore
Tools that work with SingleStore Sources
# Spanner
Tools that work with Spanner Sources.
# SQL Server
Tools that work with SQL Server Sources, such as CloudSQL for SQL Server.
# SQLite
Tools that work with SQLite Sources.
# TiDB
Tools that work with TiDB Sources, such as TiDB Cloud and self-hosted TiDB.
# Trino
Tools that work with Trino Sources, a distributed SQL query engine.
# Utility tools
Tools that provide utility.
# Valkey
Tools that work with Valkey Sources.
# YugabyteDB
Tools that work with YugabyteDB Sources.
# Prompts
Prompts allow servers to provide structured messages and instructions for interacting with language models.
A `prompt` represents a reusable prompt template that can be retrieved and used
by MCP clients.
A Prompt is essentially a template for a message or a series of messages that
can be sent to a Large Language Model (LLM). The Toolbox server implements the
`prompts/list` and `prompts/get` methods from the [Model Context Protocol
(MCP)](https://modelcontextprotocol.io/docs/getting-started/intro)
specification, allowing clients to discover and retrieve these prompts.
```yaml
prompts:
code_review:
description: "Asks the LLM to analyze code quality and suggest improvements."
messages:
- content: "Please review the following code for quality, correctness, and potential improvements: \n\n{{.code}}"
arguments:
- name: "code"
description: "The code to review"
```
## Prompt Schema
| **field** | **type** | **required** | **description** |
|-------------|--------------------------------|--------------|--------------------------------------------------------------------------|
| description | string | No | A brief explanation of what the prompt does. |
| kind | string | No | The kind of prompt. Defaults to `"custom"`. |
| messages | [][Message](#message-schema) | Yes | A list of one or more message objects that make up the prompt's content. |
| arguments | [][Argument](#argument-schema) | No | A list of arguments that can be interpolated into the prompt's content. |
## Message Schema
| **field** | **type** | **required** | **description** |
|-----------|----------|--------------|--------------------------------------------------------------------------------------------------------|
| role | string | No | The role of the sender. Can be `"user"` or `"assistant"`. Defaults to `"user"`. |
| content | string | Yes | The text of the message. You can include placeholders for arguments using `{{.argument_name}}` syntax. |
## Argument Schema
An argument can be any [Parameter](../tools/_index.md#specifying-parameters)
type. If the `type` field is not specified, it will default to `string`.
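For example, an argument can declare a non-string type explicitly (a sketch using an illustrative argument name):
```yaml
arguments:
  - name: "max_words"          # illustrative name
    type: integer              # type defaults to string when omitted
    description: "Maximum length of the generated summary, in words."
```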
## Usage with Gemini CLI
Prompts defined in your `tools.yaml` can be seamlessly integrated with the
Gemini CLI to create [custom slash
commands](https://github.com/google-gemini/gemini-cli/blob/main/docs/tools/mcp-server.md#mcp-prompts-as-slash-commands).
The workflow is as follows:
1. **Discovery:** When the Gemini CLI connects to your Toolbox server, it
automatically calls `prompts/list` to discover all available prompts.
2. **Conversion:** Each discovered prompt is converted into a corresponding
slash command. For example, a prompt named `code_review` becomes the
`/code_review` command in the CLI.
3. **Execution:** You can execute the command as follows:
```bash
/code_review --code="def hello():\n print('world')"
```
4. **Interpolation:** Once all arguments are collected, the CLI calls
   `prompts/get` with your provided values to retrieve the final, interpolated
   prompt. For example:
   ```text
   Please review the following code for quality, correctness, and potential improvements: \ndef hello():\n print('world')
   ```
5. **Response:** This completed prompt is then sent to the Gemini model, and the
model's response is displayed back to you in the CLI.
## Kinds of prompts
# Custom
Custom prompts defined by the user.
Custom prompts are defined by the user to be exposed through their MCP server.
They are the default type for prompts.
## Examples
### Basic Prompt
Here is an example of a simple prompt that takes a single argument, code, and
asks an LLM to review it.
```yaml
prompts:
code_review:
description: "Asks the LLM to analyze code quality and suggest improvements."
messages:
- content: "Please review the following code for quality, correctness, and potential improvements: \n\n{{.code}}"
arguments:
- name: "code"
description: "The code to review"
```
### Multi-message prompt
You can define prompts with multiple messages to set up more complex
conversational contexts, like a role-playing scenario.
```yaml
prompts:
roleplay_scenario:
description: "Sets up a roleplaying scenario with initial messages."
arguments:
- name: "character"
description: "The character the AI should embody."
- name: "situation"
description: "The initial situation for the roleplay."
messages:
- role: "user"
content: "Let's roleplay. You are {{.character}}. The situation is: {{.situation}}"
- role: "assistant"
content: "Okay, I understand. I am ready. What happens next?"
```
## Reference
### Prompt Schema
| **field** | **type** | **required** | **description** |
|-------------|--------------------------------|--------------|--------------------------------------------------------------------------|
| kind | string | No | The kind of prompt. Must be `"custom"`. |
| description | string | No | A brief explanation of what the prompt does. |
| messages | [][Message](#message-schema) | Yes | A list of one or more message objects that make up the prompt's content. |
| arguments | [][Argument](#argument-schema) | No | A list of arguments that can be interpolated into the prompt's content. |
### Message Schema
Refer to the default prompt [Message Schema](../_index.md#message-schema).
### Argument Schema
Refer to the default prompt [Argument Schema](../_index.md#argument-schema).
# Samples
Samples and guides for using Toolbox.
# AlloyDB
How to get started with Toolbox using AlloyDB.
# Quickstart (MCP with AlloyDB)
How to get started running Toolbox with MCP Inspector and AlloyDB as the source.
## Overview
[Model Context Protocol](https://modelcontextprotocol.io) is an open protocol
that standardizes how applications provide context to LLMs. Check out this page
on how to [connect to Toolbox via MCP](../../how-to/connect_via_mcp.md).
## Before you begin
This guide assumes you have already done the following:
1. [Create an AlloyDB cluster and
instance](https://cloud.google.com/alloydb/docs/cluster-create) with a
database and user.
1. Connect to the instance using [AlloyDB
Studio](https://cloud.google.com/alloydb/docs/manage-data-using-studio),
[`psql` command-line tool](https://www.postgresql.org/download/), or any
other PostgreSQL client.
1. Enable the `pgvector` and `google_ml_integration`
[extensions](https://cloud.google.com/alloydb/docs/ai). These are required
for Semantic Search and Natural Language to SQL tools. Run the following SQL
commands:
```sql
CREATE EXTENSION IF NOT EXISTS "vector";
CREATE EXTENSION IF NOT EXISTS "google_ml_integration";
CREATE EXTENSION IF NOT EXISTS alloydb_ai_nl cascade;
CREATE EXTENSION IF NOT EXISTS parameterized_views;
```
## Step 1: Set up your AlloyDB database
In this section, we will create the necessary tables and functions in your
AlloyDB instance.
1. Create tables using the following commands:
```sql
CREATE TABLE products (
product_id SERIAL PRIMARY KEY,
name VARCHAR(255) NOT NULL,
description TEXT,
price DECIMAL(10, 2) NOT NULL,
category_id INT,
embedding vector(3072) -- Vector size for model(gemini-embedding-001)
);
CREATE TABLE customers (
customer_id SERIAL PRIMARY KEY,
name VARCHAR(255) NOT NULL,
email VARCHAR(255) UNIQUE NOT NULL
);
CREATE TABLE cart (
cart_id SERIAL PRIMARY KEY,
customer_id INT UNIQUE NOT NULL,
created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (customer_id) REFERENCES customers(customer_id)
);
CREATE TABLE cart_items (
cart_item_id SERIAL PRIMARY KEY,
cart_id INT NOT NULL,
product_id INT NOT NULL,
quantity INT NOT NULL,
price DECIMAL(10, 2) NOT NULL,
FOREIGN KEY (cart_id) REFERENCES cart(cart_id),
FOREIGN KEY (product_id) REFERENCES products(product_id)
);
CREATE TABLE categories (
category_id SERIAL PRIMARY KEY,
name VARCHAR(255) NOT NULL
);
```
2. Insert sample data into the tables:
```sql
INSERT INTO categories (category_id, name) VALUES
(1, 'Flowers'),
(2, 'Vases');
INSERT INTO products (product_id, name, description, price, category_id, embedding) VALUES
(1, 'Rose', 'A beautiful red rose', 2.50, 1, embedding('gemini-embedding-001', 'A beautiful red rose')),
(2, 'Tulip', 'A colorful tulip', 1.50, 1, embedding('gemini-embedding-001', 'A colorful tulip')),
(3, 'Glass Vase', 'A transparent glass vase', 10.00, 2, embedding('gemini-embedding-001', 'A transparent glass vase')),
(4, 'Ceramic Vase', 'A handmade ceramic vase', 15.00, 2, embedding('gemini-embedding-001', 'A handmade ceramic vase'));
INSERT INTO customers (customer_id, name, email) VALUES
(1, 'John Doe', 'john.doe@example.com'),
(2, 'Jane Smith', 'jane.smith@example.com');
INSERT INTO cart (cart_id, customer_id) VALUES
(1, 1),
(2, 2);
INSERT INTO cart_items (cart_id, product_id, quantity, price) VALUES
(1, 1, 2, 2.50),
(1, 3, 1, 10.00),
(2, 2, 5, 1.50);
```
## Step 2: Install Toolbox
In this section, we will download and install the Toolbox binary.
1. Download the latest version of Toolbox as a binary:
{{< notice tip >}}
Select the
[correct binary](https://github.com/googleapis/genai-toolbox/releases)
corresponding to your OS and CPU architecture.
{{< /notice >}}
```bash
export OS="linux/amd64" # one of linux/amd64, darwin/arm64, darwin/amd64, or windows/amd64
export VERSION="0.21.0"
curl -O https://storage.googleapis.com/genai-toolbox/v$VERSION/$OS/toolbox
```
1. Make the binary executable:
```bash
chmod +x toolbox
```
## Step 3: Configure the tools
Create a `tools.yaml` file and add the following content. You must replace the
placeholders with your actual AlloyDB configuration.
First, define the data source for your tools. This tells Toolbox how to connect
to your AlloyDB instance.
```yaml
sources:
alloydb-pg-source:
kind: alloydb-postgres
project: YOUR_PROJECT_ID
region: YOUR_REGION
cluster: YOUR_CLUSTER
instance: YOUR_INSTANCE
database: YOUR_DATABASE
user: YOUR_USER
password: YOUR_PASSWORD
```
Next, define the tools the agent can use. We will categorize them into three
types:
### 1. Structured Queries Tools
These tools execute predefined SQL statements. They are ideal for common,
structured queries like managing a shopping cart. Add the following to your
`tools.yaml` file:
```yaml
tools:
access-cart-information:
kind: postgres-sql
source: alloydb-pg-source
description: >-
List items in customer cart.
Use this tool to list items in a customer cart. This tool requires the cart ID.
parameters:
- name: cart_id
type: integer
description: The id of the cart.
statement: |
SELECT
p.name AS product_name,
ci.quantity,
ci.price AS item_price,
(ci.quantity * ci.price) AS total_item_price,
c.created_at AS cart_created_at,
ci.product_id AS product_id
FROM
cart_items ci JOIN cart c ON ci.cart_id = c.cart_id
JOIN products p ON ci.product_id = p.product_id
WHERE
c.cart_id = $1;
add-to-cart:
kind: postgres-sql
source: alloydb-pg-source
description: >-
Add items to customer cart using the product ID and product prices from the product list.
Use this tool to add items to a customer cart.
This tool requires the cart ID, product ID, quantity, and price.
parameters:
- name: cart_id
type: integer
description: The id of the cart.
- name: product_id
type: integer
description: The id of the product.
- name: quantity
type: integer
description: The quantity of items to add.
- name: price
type: float
description: The price of items to add.
statement: |
INSERT INTO
cart_items (cart_id, product_id, quantity, price)
VALUES($1,$2,$3,$4);
delete-from-cart:
kind: postgres-sql
source: alloydb-pg-source
description: >-
Remove products from customer cart.
Use this tool to remove products from a customer cart.
This tool requires the cart ID and product ID.
parameters:
- name: cart_id
type: integer
description: The id of the cart.
- name: product_id
type: integer
description: The id of the product.
statement: |
DELETE FROM
cart_items
WHERE
cart_id = $1 AND product_id = $2;
```
### 2. Semantic Search Tools
These tools use vector embeddings to find the most relevant results based on the
meaning of a user's query, rather than just keywords. Append the following tools
to the `tools` section in your `tools.yaml`:
```yaml
search-product-recommendations:
kind: postgres-sql
source: alloydb-pg-source
description: >-
Search for products based on user needs.
Use this tool to search for products. This tool requires the user's needs.
parameters:
- name: query
type: string
description: The product characteristics
statement: |
SELECT
product_id,
name,
description,
ROUND(CAST(price AS numeric), 2) as price
FROM
products
ORDER BY
embedding('gemini-embedding-001', $1)::vector <=> embedding
LIMIT 5;
```
### 3. Natural Language to SQL (NL2SQL) Tools
1. Create a [natural language
configuration](https://cloud.google.com/alloydb/docs/ai/use-natural-language-generate-sql-queries#create-config)
for your AlloyDB cluster.
{{< notice tip >}}Before using NL2SQL tools,
you must first install the `alloydb_ai_nl` extension and
create the [semantic
layer](https://cloud.google.com/alloydb/docs/ai/natural-language-overview)
under a configuration named `flower_shop`.
{{< /notice >}}
2. Configure your NL2SQL tool to use your configuration. These tools translate
natural language questions into SQL queries, allowing users to interact with
the database conversationally. Append the following tool to the `tools`
section:
```yaml
ask-questions-about-products:
kind: alloydb-ai-nl
source: alloydb-pg-source
nlConfig: flower_shop
description: >-
Ask questions related to products or brands.
Use this tool to ask questions about products or brands.
Always SELECT the IDs of objects when generating queries.
```
Finally, group the tools into a `toolset` to make them available to the model.
Add the following to the end of your `tools.yaml` file:
```yaml
toolsets:
flower_shop:
- access-cart-information
- search-product-recommendations
- ask-questions-about-products
- add-to-cart
- delete-from-cart
```
For more info on tools, check out the
[Tools](../../resources/tools/) section.
## Step 4: Run the Toolbox server
Run the Toolbox server, pointing to the `tools.yaml` file created earlier:
```bash
./toolbox --tools-file "tools.yaml"
```
## Step 5: Connect to MCP Inspector
1. Run the MCP Inspector:
```bash
npx @modelcontextprotocol/inspector
```
1. Type `y` when it asks to install the inspector package.
1. It should show the following when the MCP Inspector is up and running
   (take note of the session token it prints):
   ```bash
   Starting MCP inspector...
   ⚙️ Proxy server listening on localhost:6277
   🔑 Session token: <SESSION_TOKEN>
   Use this token to authenticate requests or set DANGEROUSLY_OMIT_AUTH=true to disable auth
   🚀 MCP Inspector is up and running at:
   http://localhost:6274/?MCP_PROXY_AUTH_TOKEN=<SESSION_TOKEN>
   ```
1. Open the above link in your browser.
1. For `Transport Type`, select `Streamable HTTP`.
1. For `URL`, type in `http://127.0.0.1:5000/mcp`.
1. For `Configuration` -> `Proxy Session Token`, make sure the session token is
   present.
1. Click Connect.
1. Click Connect.
1. Select `List Tools`; you will see the list of tools configured in `tools.yaml`.
1. Test out your tools here!
## What's next
- Learn more about [MCP Inspector](../../how-to/connect_via_mcp.md).
- Learn more about [Toolbox Resources](../../resources/).
- Learn more about [Toolbox How-to guides](../../how-to/).
# Getting started with alloydb-ai-nl tool
An end to end tutorial for building an ADK agent using the AlloyDB AI NL tool.
{{< ipynb "alloydb_ai_nl.ipynb" >}}
# BigQuery
How to get started with Toolbox using BigQuery.
# Quickstart (Local with BigQuery)
How to get started running Toolbox locally with Python, BigQuery, and LangGraph, LlamaIndex, or ADK.
[Open in Colab](https://colab.research.google.com/github/googleapis/genai-toolbox/blob/main/docs/en/samples/bigquery/colab_quickstart_bigquery.ipynb)
## Before you begin
This guide assumes you have already done the following:
1. Installed [Python 3.10+][install-python] (including [pip][install-pip] and
your preferred virtual environment tool for managing dependencies e.g.
[venv][install-venv]).
1. Installed and configured the [Google Cloud SDK (gcloud CLI)][install-gcloud].
1. Authenticated with Google Cloud for Application Default Credentials (ADC):
```bash
gcloud auth login --update-adc
```
1. Set your default Google Cloud project (replace `YOUR_PROJECT_ID` with your
actual project ID):
```bash
gcloud config set project YOUR_PROJECT_ID
export GOOGLE_CLOUD_PROJECT=YOUR_PROJECT_ID
```
Toolbox and the client libraries will use this project for BigQuery, unless
overridden in configurations.
1. [Enabled the BigQuery API][enable-bq-api] in your Google Cloud project.
1. Installed the BigQuery client library for Python:
```bash
pip install google-cloud-bigquery
```
1. Completed setup for usage with an LLM model such as
{{< tabpane text=true persist=header >}}
{{% tab header="Core" lang="en" %}}
- [langchain-vertexai](https://python.langchain.com/docs/integrations/llms/google_vertex_ai_palm/#setup)
package.
- [langchain-google-genai](https://python.langchain.com/docs/integrations/chat/google_generative_ai/#setup)
package.
- [langchain-anthropic](https://python.langchain.com/docs/integrations/chat/anthropic/#setup)
package.
{{% /tab %}}
{{% tab header="LangChain" lang="en" %}}
- [langchain-vertexai](https://python.langchain.com/docs/integrations/llms/google_vertex_ai_palm/#setup)
package.
- [langchain-google-genai](https://python.langchain.com/docs/integrations/chat/google_generative_ai/#setup)
package.
- [langchain-anthropic](https://python.langchain.com/docs/integrations/chat/anthropic/#setup)
package.
{{% /tab %}}
{{% tab header="LlamaIndex" lang="en" %}}
- [llama-index-llms-google-genai](https://pypi.org/project/llama-index-llms-google-genai/)
package.
- [llama-index-llms-anthropic](https://docs.llamaindex.ai/en/stable/examples/llm/anthropic)
package.
{{% /tab %}}
{{% tab header="ADK" lang="en" %}}
- [google-adk](https://pypi.org/project/google-adk/) package.
{{% /tab %}}
{{< /tabpane >}}
[install-python]: https://wiki.python.org/moin/BeginnersGuide/Download
[install-pip]: https://pip.pypa.io/en/stable/installation/
[install-venv]:
https://packaging.python.org/en/latest/tutorials/installing-packages/#creating-virtual-environments
[install-gcloud]: https://cloud.google.com/sdk/docs/install
[enable-bq-api]:
https://cloud.google.com/bigquery/docs/quickstarts/query-public-dataset-console#before-you-begin
## Step 1: Set up your BigQuery Dataset and Table
In this section, we will create a BigQuery dataset and a table, then insert some
data that needs to be accessed by our agent. BigQuery operations are performed
against your configured Google Cloud project.
1. Create a new BigQuery dataset (replace `YOUR_DATASET_NAME` with your desired
dataset name, e.g., `toolbox_ds`, and optionally specify a location like `US`
or `EU`):
```bash
export BQ_DATASET_NAME="YOUR_DATASET_NAME" # e.g., toolbox_ds
export BQ_LOCATION="US" # e.g., US, EU, asia-northeast1
bq --location=$BQ_LOCATION mk $BQ_DATASET_NAME
```
You can also do this through the [Google Cloud
Console](https://console.cloud.google.com/bigquery).
{{< notice tip >}}
For a real application, ensure that the service account or user running Toolbox
has the necessary IAM permissions (e.g., BigQuery Data Editor, BigQuery User)
on the dataset or project. For this local quickstart with user credentials,
your own permissions will apply.
{{< /notice >}}
1. Next, define the `hotels` table in your new dataset using the `bq query`
command. First, create a file named `create_hotels_table.sql` with the
following content:
```sql
CREATE TABLE IF NOT EXISTS `YOUR_PROJECT_ID.YOUR_DATASET_NAME.hotels` (
id INT64 NOT NULL,
name STRING NOT NULL,
location STRING NOT NULL,
price_tier STRING NOT NULL,
checkin_date DATE NOT NULL,
checkout_date DATE NOT NULL,
booked BOOLEAN NOT NULL
);
```
> **Note:** Replace `YOUR_PROJECT_ID` and `YOUR_DATASET_NAME` in the SQL
> with your actual project ID and dataset name.
Then run the command below to execute the SQL:
```bash
bq query --project_id=$GOOGLE_CLOUD_PROJECT --dataset_id=$BQ_DATASET_NAME --use_legacy_sql=false < create_hotels_table.sql
```
1. Next, populate the hotels table with some initial data. To do this, create a
file named `insert_hotels_data.sql` and add the following SQL INSERT
statement to it.
```sql
INSERT INTO `YOUR_PROJECT_ID.YOUR_DATASET_NAME.hotels` (id, name, location, price_tier, checkin_date, checkout_date, booked)
VALUES
(1, 'Hilton Basel', 'Basel', 'Luxury', '2024-04-20', '2024-04-22', FALSE),
(2, 'Marriott Zurich', 'Zurich', 'Upscale', '2024-04-14', '2024-04-21', FALSE),
(3, 'Hyatt Regency Basel', 'Basel', 'Upper Upscale', '2024-04-02', '2024-04-20', FALSE),
(4, 'Radisson Blu Lucerne', 'Lucerne', 'Midscale', '2024-04-05', '2024-04-24', FALSE),
(5, 'Best Western Bern', 'Bern', 'Upper Midscale', '2024-04-01', '2024-04-23', FALSE),
(6, 'InterContinental Geneva', 'Geneva', 'Luxury', '2024-04-23', '2024-04-28', FALSE),
(7, 'Sheraton Zurich', 'Zurich', 'Upper Upscale', '2024-04-02', '2024-04-27', FALSE),
(8, 'Holiday Inn Basel', 'Basel', 'Upper Midscale', '2024-04-09', '2024-04-24', FALSE),
(9, 'Courtyard Zurich', 'Zurich', 'Upscale', '2024-04-03', '2024-04-13', FALSE),
(10, 'Comfort Inn Bern', 'Bern', 'Midscale', '2024-04-04', '2024-04-16', FALSE);
```
> **Note:** Replace `YOUR_PROJECT_ID` and `YOUR_DATASET_NAME` in the SQL
> with your actual project ID and dataset name.
Then run the command below to execute the SQL:
```bash
bq query --project_id=$GOOGLE_CLOUD_PROJECT --dataset_id=$BQ_DATASET_NAME --use_legacy_sql=false < insert_hotels_data.sql
```
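After the insert completes, you can optionally sanity-check the table. This is a
small sketch that assumes the environment variables exported in the steps above
are still set:
```bash
# Optional sanity check: count the rows just inserted (assumes $GOOGLE_CLOUD_PROJECT
# and $BQ_DATASET_NAME are still set from the steps above)
bq query --project_id=$GOOGLE_CLOUD_PROJECT --use_legacy_sql=false \
  "SELECT COUNT(*) AS hotel_count FROM \`${GOOGLE_CLOUD_PROJECT}.${BQ_DATASET_NAME}.hotels\`"
```
If all ten rows were inserted, `hotel_count` should be 10.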
## Step 2: Install and configure Toolbox
In this section, we will download Toolbox, configure our tools in a `tools.yaml`
to use BigQuery, and then run the Toolbox server.
1. Download the latest version of Toolbox as a binary:
{{< notice tip >}}
Select the
[correct binary](https://github.com/googleapis/genai-toolbox/releases)
corresponding to your OS and CPU architecture.
{{< /notice >}}
```bash
export OS="linux/amd64" # one of linux/amd64, darwin/arm64, darwin/amd64, or windows/amd64
curl -O https://storage.googleapis.com/genai-toolbox/v0.21.0/$OS/toolbox
```
1. Make the binary executable:
```bash
chmod +x toolbox
```
1. Write the following into a `tools.yaml` file. You must replace the
`YOUR_PROJECT_ID` and `YOUR_DATASET_NAME` placeholders in the config with your
actual BigQuery project and dataset name. The `location` field is optional;
if not specified, it defaults to 'us'. The table name `hotels` is used
directly in the statements.
{{< notice tip >}}
Authentication with BigQuery is handled via Application Default Credentials
(ADC). Ensure you have run `gcloud auth application-default login`.
{{< /notice >}}
```yaml
sources:
my-bigquery-source:
kind: bigquery
project: YOUR_PROJECT_ID
location: us
tools:
search-hotels-by-name:
kind: bigquery-sql
source: my-bigquery-source
description: Search for hotels based on name.
parameters:
- name: name
type: string
description: The name of the hotel.
statement: SELECT * FROM `YOUR_DATASET_NAME.hotels` WHERE LOWER(name) LIKE LOWER(CONCAT('%', @name, '%'));
search-hotels-by-location:
kind: bigquery-sql
source: my-bigquery-source
description: Search for hotels based on location.
parameters:
- name: location
type: string
description: The location of the hotel.
statement: SELECT * FROM `YOUR_DATASET_NAME.hotels` WHERE LOWER(location) LIKE LOWER(CONCAT('%', @location, '%'));
book-hotel:
kind: bigquery-sql
source: my-bigquery-source
description: >-
      Book a hotel by its ID. Returns NULL if the hotel is successfully booked; raises an error if not.
parameters:
- name: hotel_id
type: integer
description: The ID of the hotel to book.
statement: UPDATE `YOUR_DATASET_NAME.hotels` SET booked = TRUE WHERE id = @hotel_id;
update-hotel:
kind: bigquery-sql
source: my-bigquery-source
description: >-
Update a hotel's check-in and check-out dates by its ID. Returns a message indicating whether the hotel was successfully updated or not.
parameters:
- name: checkin_date
type: string
description: The new check-in date of the hotel.
- name: checkout_date
type: string
description: The new check-out date of the hotel.
- name: hotel_id
type: integer
description: The ID of the hotel to update.
statement: >-
UPDATE `YOUR_DATASET_NAME.hotels` SET checkin_date = PARSE_DATE('%Y-%m-%d', @checkin_date), checkout_date = PARSE_DATE('%Y-%m-%d', @checkout_date) WHERE id = @hotel_id;
cancel-hotel:
kind: bigquery-sql
source: my-bigquery-source
description: Cancel a hotel by its ID.
parameters:
- name: hotel_id
type: integer
description: The ID of the hotel to cancel.
statement: UPDATE `YOUR_DATASET_NAME.hotels` SET booked = FALSE WHERE id = @hotel_id;
```
**Important Note on `toolsets`**: The `tools.yaml` content above does not
include a `toolsets` section. The Python agent examples in Step 3 (e.g.,
`await toolbox_client.load_toolset("my-toolset")`) rely on a toolset named
`my-toolset`. To make those examples work, you will need to add a `toolsets`
section to your `tools.yaml` file, for example:
```yaml
# Add this to your tools.yaml if using load_toolset("my-toolset")
# Ensure it's at the same indentation level as 'sources:' and 'tools:'
toolsets:
my-toolset:
- search-hotels-by-name
- search-hotels-by-location
- book-hotel
- update-hotel
- cancel-hotel
```
Alternatively, you can modify the agent code to load tools individually
(e.g., using `await toolbox_client.load_tool("search-hotels-by-name")`).
For more info on tools, check out the [Resources](../../resources/) section
of the docs.
1. Run the Toolbox server, pointing to the `tools.yaml` file created earlier:
```bash
./toolbox --tools-file "tools.yaml"
```
{{< notice note >}}
Toolbox enables dynamic reloading by default. To disable, use the
`--disable-reload` flag.
{{< /notice >}}
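Before continuing, you can optionally confirm from another terminal that the
server is reachable. This is a hedged check that assumes the default address and
port (`127.0.0.1:5000`) were not changed:
```bash
# Optional: confirm the Toolbox server is listening on the default address/port
curl http://127.0.0.1:5000
```
Any HTTP response indicates the server is up; the exact body may vary between
Toolbox versions.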
## Step 3: Connect your agent to Toolbox
In this section, we will write and run an agent that will load the Tools
from Toolbox.
{{< notice tip >}} If you prefer to experiment within a Google Colab environment,
you can connect to a
[local runtime](https://research.google.com/colaboratory/local-runtimes.html).
{{< /notice >}}
1. In a new terminal, install the SDK package.
{{< tabpane persist=header >}}
{{< tab header="Core" lang="bash" >}}
pip install toolbox-core
{{< /tab >}}
{{< tab header="Langchain" lang="bash" >}}
pip install toolbox-langchain
{{< /tab >}}
{{< tab header="LlamaIndex" lang="bash" >}}
pip install toolbox-llamaindex
{{< /tab >}}
{{< tab header="ADK" lang="bash" >}}
pip install google-adk
{{< /tab >}}
{{< /tabpane >}}
1. Install other required dependencies:
{{< tabpane persist=header >}}
{{< tab header="Core" lang="bash" >}}
# TODO(developer): replace with correct package if needed
pip install langgraph langchain-google-vertexai
# pip install langchain-google-genai
# pip install langchain-anthropic
{{< /tab >}}
{{< tab header="Langchain" lang="bash" >}}
# TODO(developer): replace with correct package if needed
pip install langgraph langchain-google-vertexai
# pip install langchain-google-genai
# pip install langchain-anthropic
{{< /tab >}}
{{< tab header="LlamaIndex" lang="bash" >}}
# TODO(developer): replace with correct package if needed
pip install llama-index-llms-google-genai
# pip install llama-index-llms-anthropic
{{< /tab >}}
{{< tab header="ADK" lang="bash" >}}
pip install toolbox-core
{{< /tab >}}
{{< /tabpane >}}
1. Create a new file named `hotel_agent.py` and copy the following
code to create an agent:
{{< tabpane persist=header >}}
{{< tab header="Core" lang="python" >}}
import asyncio
from google import genai
from google.genai.types import (
Content,
FunctionDeclaration,
GenerateContentConfig,
Part,
Tool,
)
from toolbox_core import ToolboxClient
prompt = """
You're a helpful hotel assistant. You handle hotel searching, booking and
cancellations. When the user searches for a hotel, mention its name, id,
location and price tier. Always mention hotel id while performing any
searches. This is very important for any operations. For any bookings or
cancellations, please provide the appropriate confirmation. Be sure to
update checkin or checkout dates if mentioned by the user.
Don't ask for confirmations from the user.
"""
queries = [
"Find hotels in Basel with Basel in it's name.",
"Please book the hotel Hilton Basel for me.",
"This is too expensive. Please cancel it.",
"Please book Hyatt Regency for me",
"My check in dates for my booking would be from April 10, 2024 to April 19, 2024.",
]
async def run_application():
    async with ToolboxClient("http://127.0.0.1:5000") as toolbox_client:
# The toolbox_tools list contains Python callables (functions/methods) designed for LLM tool-use
# integration. While this example uses Google's genai client, these callables can be adapted for
# various function-calling or agent frameworks. For easier integration with supported frameworks
# (https://github.com/googleapis/mcp-toolbox-python-sdk/tree/main/packages), use the
# provided wrapper packages, which handle framework-specific boilerplate.
toolbox_tools = await toolbox_client.load_toolset("my-toolset")
genai_client = genai.Client(
vertexai=True, project="project-id", location="us-central1"
)
genai_tools = [
Tool(
function_declarations=[
FunctionDeclaration.from_callable_with_api_option(callable=tool)
]
)
for tool in toolbox_tools
]
history = []
for query in queries:
user_prompt_content = Content(
role="user",
parts=[Part.from_text(text=query)],
)
history.append(user_prompt_content)
response = genai_client.models.generate_content(
model="gemini-2.0-flash-001",
contents=history,
config=GenerateContentConfig(
system_instruction=prompt,
tools=genai_tools,
),
)
history.append(response.candidates[0].content)
function_response_parts = []
            # function_calls can be None when the model returns a plain text answer
            for function_call in (response.function_calls or []):
fn_name = function_call.name
# The tools are sorted alphabetically
if fn_name == "search-hotels-by-name":
function_result = await toolbox_tools[3](**function_call.args)
elif fn_name == "search-hotels-by-location":
function_result = await toolbox_tools[2](**function_call.args)
elif fn_name == "book-hotel":
function_result = await toolbox_tools[0](**function_call.args)
elif fn_name == "update-hotel":
function_result = await toolbox_tools[4](**function_call.args)
elif fn_name == "cancel-hotel":
function_result = await toolbox_tools[1](**function_call.args)
else:
raise ValueError("Function name not present.")
function_response = {"result": function_result}
function_response_part = Part.from_function_response(
name=function_call.name,
response=function_response,
)
function_response_parts.append(function_response_part)
if function_response_parts:
tool_response_content = Content(role="tool", parts=function_response_parts)
history.append(tool_response_content)
response2 = genai_client.models.generate_content(
model="gemini-2.0-flash-001",
contents=history,
config=GenerateContentConfig(
tools=genai_tools,
),
)
final_model_response_content = response2.candidates[0].content
history.append(final_model_response_content)
print(response2.text)
asyncio.run(run_application())
{{< /tab >}}
{{< tab header="LangChain" lang="python" >}}
import asyncio
from langgraph.prebuilt import create_react_agent
# TODO(developer): replace this with another import if needed
from langchain_google_vertexai import ChatVertexAI
# from langchain_google_genai import ChatGoogleGenerativeAI
# from langchain_anthropic import ChatAnthropic
from langgraph.checkpoint.memory import MemorySaver
from toolbox_langchain import ToolboxClient
prompt = """
You're a helpful hotel assistant. You handle hotel searching, booking and
cancellations. When the user searches for a hotel, mention its name, id,
location and price tier. Always mention hotel ids while performing any
searches. This is very important for any operations. For any bookings or
cancellations, please provide the appropriate confirmation. Be sure to
update checkin or checkout dates if mentioned by the user.
Don't ask for confirmations from the user.
"""
queries = [
"Find hotels in Basel with Basel in its name.",
"Can you book the Hilton Basel for me?",
"Oh wait, this is too expensive. Please cancel it and book the Hyatt Regency instead.",
"My check in dates would be from April 10, 2024 to April 19, 2024.",
]
async def main():
# TODO(developer): replace this with another model if needed
model = ChatVertexAI(model_name="gemini-2.0-flash-001")
# model = ChatGoogleGenerativeAI(model="gemini-2.0-flash-001")
# model = ChatAnthropic(model="claude-3-5-sonnet-20240620")
# Load the tools from the Toolbox server
client = ToolboxClient("http://127.0.0.1:5000")
tools = await client.aload_toolset()
agent = create_react_agent(model, tools, checkpointer=MemorySaver())
config = {"configurable": {"thread_id": "thread-1"}}
for query in queries:
inputs = {"messages": [("user", prompt + query)]}
response = await agent.ainvoke(inputs, stream_mode="values", config=config)
print(response["messages"][-1].content)
asyncio.run(main())
{{< /tab >}}
{{< tab header="LlamaIndex" lang="python" >}}
import asyncio
import os
from llama_index.core.agent.workflow import AgentWorkflow
from llama_index.core.workflow import Context
# TODO(developer): replace this with another import if needed
from llama_index.llms.google_genai import GoogleGenAI
# from llama_index.llms.anthropic import Anthropic
from toolbox_llamaindex import ToolboxClient
prompt = """
You're a helpful hotel assistant. You handle hotel searching, booking and
cancellations. When the user searches for a hotel, mention its name, id,
location and price tier. Always mention hotel ids while performing any
searches. This is very important for any operations. For any bookings or
cancellations, please provide the appropriate confirmation. Be sure to
update checkin or checkout dates if mentioned by the user.
Don't ask for confirmations from the user.
"""
queries = [
"Find hotels in Basel with Basel in it's name.",
"Can you book the Hilton Basel for me?",
"Oh wait, this is too expensive. Please cancel it and book the Hyatt Regency instead.",
"My check in dates would be from April 10, 2024 to April 19, 2024.",
]
async def main():
# TODO(developer): replace this with another model if needed
llm = GoogleGenAI(
model="gemini-2.0-flash-001",
vertexai_config={"location": "us-central1"},
)
# llm = GoogleGenAI(
# api_key=os.getenv("GOOGLE_API_KEY"),
# model="gemini-2.0-flash-001",
# )
# llm = Anthropic(
# model="claude-3-7-sonnet-latest",
# api_key=os.getenv("ANTHROPIC_API_KEY")
# )
# Load the tools from the Toolbox server
client = ToolboxClient("http://127.0.0.1:5000")
tools = await client.aload_toolset()
agent = AgentWorkflow.from_tools_or_functions(
tools,
llm=llm,
system_prompt=prompt,
)
ctx = Context(agent)
for query in queries:
response = await agent.arun(user_msg=query, ctx=ctx)
print(f"---- {query} ----")
print(str(response))
asyncio.run(main())
{{< /tab >}}
{{< tab header="ADK" lang="python" >}}
from google.adk.agents import Agent
from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService
from google.adk.artifacts.in_memory_artifact_service import InMemoryArtifactService
from google.genai import types # For constructing message content
from toolbox_core import ToolboxSyncClient
import os
os.environ['GOOGLE_GENAI_USE_VERTEXAI'] = 'True'
# TODO(developer): Replace 'YOUR_PROJECT_ID' with your Google Cloud Project ID
os.environ['GOOGLE_CLOUD_PROJECT'] = 'YOUR_PROJECT_ID'
# TODO(developer): Replace 'us-central1' with your Google Cloud Location (region)
os.environ['GOOGLE_CLOUD_LOCATION'] = 'us-central1'
# --- Load Tools from Toolbox ---
# TODO(developer): Ensure the Toolbox server is running at http://127.0.0.1:5000
with ToolboxSyncClient("http://127.0.0.1:5000") as toolbox_client:
# TODO(developer): Replace "my-toolset" with the actual ID of your toolset as configured in your MCP Toolbox server.
agent_toolset = toolbox_client.load_toolset("my-toolset")
# --- Define the Agent's Prompt ---
prompt = """
You're a helpful hotel assistant. You handle hotel searching, booking and
cancellations. When the user searches for a hotel, mention its name, id,
location and price tier. Always mention hotel ids while performing any
searches. This is very important for any operations. For any bookings or
cancellations, please provide the appropriate confirmation. Be sure to
update checkin or checkout dates if mentioned by the user.
Don't ask for confirmations from the user.
"""
# --- Configure the Agent ---
root_agent = Agent(
model='gemini-2.0-flash-001',
name='hotel_agent',
description='A helpful AI assistant that can search and book hotels.',
instruction=prompt,
tools=agent_toolset, # Pass the loaded toolset
)
# --- Initialize Services for Running the Agent ---
session_service = InMemorySessionService()
artifacts_service = InMemoryArtifactService()
# Create a new session for the interaction.
session = session_service.create_session(
state={}, app_name='hotel_agent', user_id='123'
)
runner = Runner(
app_name='hotel_agent',
agent=root_agent,
artifact_service=artifacts_service,
session_service=session_service,
)
# --- Define Queries and Run the Agent ---
queries = [
"Find hotels in Basel with Basel in it's name.",
"Can you book the Hilton Basel for me?",
"Oh wait, this is too expensive. Please cancel it and book the Hyatt Regency instead.",
"My check in dates would be from April 10, 2024 to April 19, 2024.",
]
for query in queries:
content = types.Content(role='user', parts=[types.Part(text=query)])
events = runner.run(session_id=session.id,
user_id='123', new_message=content)
responses = (
part.text
for event in events
for part in event.content.parts
if part.text is not None
)
for text in responses:
print(text)
{{< /tab >}}
{{< /tabpane >}}
{{< tabpane text=true persist=header >}}
{{% tab header="Core" lang="en" %}}
To learn more about the Core SDK, check out the [Toolbox Core SDK
documentation.](https://github.com/googleapis/mcp-toolbox-sdk-python/blob/main/packages/toolbox-core/README.md)
{{% /tab %}}
{{% tab header="Langchain" lang="en" %}}
To learn more about Agents in LangChain, check out the [LangGraph Agent
documentation.](https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.chat_agent_executor.create_react_agent)
{{% /tab %}}
{{% tab header="LlamaIndex" lang="en" %}}
To learn more about Agents in LlamaIndex, check out the [LlamaIndex
AgentWorkflow
documentation.](https://docs.llamaindex.ai/en/stable/examples/agent/agent_workflow_basic/)
{{% /tab %}}
{{% tab header="ADK" lang="en" %}}
To learn more about Agents in ADK, check out the [ADK
documentation.](https://google.github.io/adk-docs/)
{{% /tab %}}
{{< /tabpane >}}
1. Run your agent, and observe the results:
```sh
python hotel_agent.py
```
# Quickstart (MCP with BigQuery)
How to get started running Toolbox with MCP Inspector and BigQuery as the source.
## Overview
[Model Context Protocol](https://modelcontextprotocol.io) is an open protocol
that standardizes how applications provide context to LLMs. Check out this page
on how to [connect to Toolbox via MCP](../../../how-to/connect_via_mcp.md).
## Step 1: Set up your BigQuery Dataset and Table
In this section, we will create a BigQuery dataset and a table, then insert some
data that needs to be accessed by our agent.
1. Create a new BigQuery dataset (replace `YOUR_DATASET_NAME` with your desired
dataset name, e.g., `toolbox_mcp_ds`, and optionally specify a location like
`US` or `EU`):
```bash
export BQ_DATASET_NAME="YOUR_DATASET_NAME"
export BQ_LOCATION="US"
bq --location=$BQ_LOCATION mk $BQ_DATASET_NAME
```
You can also do this through the [Google Cloud
Console](https://console.cloud.google.com/bigquery).
1. The `hotels` table needs to be defined in your new dataset. First, create a
file named `create_hotels_table.sql` with the following content:
```sql
CREATE TABLE IF NOT EXISTS `YOUR_PROJECT_ID.YOUR_DATASET_NAME.hotels` (
id INT64 NOT NULL,
name STRING NOT NULL,
location STRING NOT NULL,
price_tier STRING NOT NULL,
checkin_date DATE NOT NULL,
checkout_date DATE NOT NULL,
booked BOOLEAN NOT NULL
);
```
> **Note:** Replace `YOUR_PROJECT_ID` and `YOUR_DATASET_NAME` in the SQL
> with your actual project ID and dataset name.
Then run the command below to execute the SQL:
```bash
bq query --project_id=$GOOGLE_CLOUD_PROJECT --dataset_id=$BQ_DATASET_NAME --use_legacy_sql=false < create_hotels_table.sql
```
1. Next, populate the `hotels` table with some initial data. To do this, create
a file named `insert_hotels_data.sql` and add the following SQL INSERT
statement to it.
```sql
INSERT INTO `YOUR_PROJECT_ID.YOUR_DATASET_NAME.hotels` (id, name, location, price_tier, checkin_date, checkout_date, booked)
VALUES
(1, 'Hilton Basel', 'Basel', 'Luxury', '2024-04-20', '2024-04-22', FALSE),
(2, 'Marriott Zurich', 'Zurich', 'Upscale', '2024-04-14', '2024-04-21', FALSE),
(3, 'Hyatt Regency Basel', 'Basel', 'Upper Upscale', '2024-04-02', '2024-04-20', FALSE),
(4, 'Radisson Blu Lucerne', 'Lucerne', 'Midscale', '2024-04-05', '2024-04-24', FALSE),
(5, 'Best Western Bern', 'Bern', 'Upper Midscale', '2024-04-01', '2024-04-23', FALSE),
(6, 'InterContinental Geneva', 'Geneva', 'Luxury', '2024-04-23', '2024-04-28', FALSE),
(7, 'Sheraton Zurich', 'Zurich', 'Upper Upscale', '2024-04-02', '2024-04-27', FALSE),
(8, 'Holiday Inn Basel', 'Basel', 'Upper Midscale', '2024-04-09', '2024-04-24', FALSE),
(9, 'Courtyard Zurich', 'Zurich', 'Upscale', '2024-04-03', '2024-04-13', FALSE),
(10, 'Comfort Inn Bern', 'Bern', 'Midscale', '2024-04-04', '2024-04-16', FALSE);
```
> **Note:** Replace `YOUR_PROJECT_ID` and `YOUR_DATASET_NAME` in the SQL
> with your actual project ID and dataset name.
Then run the command below to execute the SQL:
```bash
bq query --project_id=$GOOGLE_CLOUD_PROJECT --dataset_id=$BQ_DATASET_NAME --use_legacy_sql=false < insert_hotels_data.sql
```
## Step 2: Install and configure Toolbox
In this section, we will download Toolbox, configure our tools in a
`tools.yaml`, and then run the Toolbox server.
1. Download the latest version of Toolbox as a binary:
{{< notice tip >}}
Select the
[correct binary](https://github.com/googleapis/genai-toolbox/releases)
corresponding to your OS and CPU architecture.
{{< /notice >}}
```bash
export OS="linux/amd64" # one of linux/amd64, darwin/arm64, darwin/amd64, or windows/amd64
curl -O https://storage.googleapis.com/genai-toolbox/v0.21.0/$OS/toolbox
```
1. Make the binary executable:
```bash
chmod +x toolbox
```
1. Write the following into a `tools.yaml` file. You must replace the
`YOUR_PROJECT_ID` and `YOUR_DATASET_NAME` placeholders in the config with your
actual BigQuery project and dataset name. The `location` field is optional;
if not specified, it defaults to 'us'. The table name `hotels` is used
directly in the statements.
{{< notice tip >}}
Authentication with BigQuery is handled via Application Default Credentials
(ADC). Ensure you have run `gcloud auth application-default login`.
{{< /notice >}}
```yaml
sources:
my-bigquery-source:
kind: bigquery
project: YOUR_PROJECT_ID
location: us
tools:
search-hotels-by-name:
kind: bigquery-sql
source: my-bigquery-source
description: Search for hotels based on name.
parameters:
- name: name
type: string
description: The name of the hotel.
statement: SELECT * FROM `YOUR_DATASET_NAME.hotels` WHERE LOWER(name) LIKE LOWER(CONCAT('%', @name, '%'));
search-hotels-by-location:
kind: bigquery-sql
source: my-bigquery-source
description: Search for hotels based on location.
parameters:
- name: location
type: string
description: The location of the hotel.
statement: SELECT * FROM `YOUR_DATASET_NAME.hotels` WHERE LOWER(location) LIKE LOWER(CONCAT('%', @location, '%'));
book-hotel:
kind: bigquery-sql
source: my-bigquery-source
description: >-
      Book a hotel by its ID. Returns NULL if the hotel is successfully booked; raises an error if not.
parameters:
- name: hotel_id
type: integer
description: The ID of the hotel to book.
statement: UPDATE `YOUR_DATASET_NAME.hotels` SET booked = TRUE WHERE id = @hotel_id;
update-hotel:
kind: bigquery-sql
source: my-bigquery-source
description: >-
Update a hotel's check-in and check-out dates by its ID. Returns a message indicating whether the hotel was successfully updated or not.
parameters:
- name: checkin_date
type: string
description: The new check-in date of the hotel.
- name: checkout_date
type: string
description: The new check-out date of the hotel.
- name: hotel_id
type: integer
description: The ID of the hotel to update.
statement: >-
UPDATE `YOUR_DATASET_NAME.hotels` SET checkin_date = PARSE_DATE('%Y-%m-%d', @checkin_date), checkout_date = PARSE_DATE('%Y-%m-%d', @checkout_date) WHERE id = @hotel_id;
cancel-hotel:
kind: bigquery-sql
source: my-bigquery-source
description: Cancel a hotel by its ID.
parameters:
- name: hotel_id
type: integer
description: The ID of the hotel to cancel.
statement: UPDATE `YOUR_DATASET_NAME.hotels` SET booked = FALSE WHERE id = @hotel_id;
toolsets:
my-toolset:
- search-hotels-by-name
- search-hotels-by-location
- book-hotel
- update-hotel
- cancel-hotel
```
For more info on tools, check out the
[Tools](../../../resources/tools/) section.
1. Run the Toolbox server, pointing to the `tools.yaml` file created earlier:
```bash
./toolbox --tools-file "tools.yaml"
```
## Step 3: Connect to MCP Inspector
1. Run the MCP Inspector:
```bash
npx @modelcontextprotocol/inspector
```
1. Type `y` when it asks to install the inspector package.
1. It should show the following when the MCP Inspector is up and running (please
take note of `<YOUR_SESSION_TOKEN>`):
```bash
Starting MCP inspector...
⚙️ Proxy server listening on localhost:6277
🔑 Session token: <YOUR_SESSION_TOKEN>
Use this token to authenticate requests or set DANGEROUSLY_OMIT_AUTH=true to disable auth
🚀 MCP Inspector is up and running at:
http://localhost:6274/?MCP_PROXY_AUTH_TOKEN=<YOUR_SESSION_TOKEN>
```
1. Open the above link in your browser.
1. For `Transport Type`, select `Streamable HTTP`.
1. For `URL`, type in `http://127.0.0.1:5000/mcp`.
1. For `Configuration` -> `Proxy Session Token`, make sure
`<YOUR_SESSION_TOKEN>` is present.
1. Click Connect.

1. Select `List Tools`; you will see a list of tools configured in `tools.yaml`.

1. Test out your tools here!
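If you prefer the command line to the Inspector UI, you can also exercise the
same `/mcp` endpoint with a raw JSON-RPC request. This is a minimal sketch using
the generic MCP `tools/list` method; depending on the protocol version, the
server may require an `initialize` handshake and session headers first, so treat
it as illustrative:
```bash
# Hedged sketch: list tools over the Streamable HTTP transport with raw JSON-RPC
curl -s -X POST http://127.0.0.1:5000/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}'
```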
# Looker
How to get started with Toolbox using Looker.
# Gemini-CLI and OAuth
How to connect to Looker from Gemini-CLI with end-user credentials.
## Overview
Gemini-CLI can be configured to get an OAuth token from Looker, then send this
token to MCP Toolbox as part of the request. MCP Toolbox can then use this token
to authenticate with Looker. This means that there is no need to get a Looker
Client ID and Client Secret. This also means that MCP Toolbox can be set up as a
shared resource.
This configuration requires Toolbox v0.14.0 or later.
## Step 1: Register the OAuth App in Looker
You first need to register the OAuth application. Refer to the documentation
[here](https://cloud.google.com/looker/docs/api-cors#registering_an_oauth_client_application).
You may need to ask an administrator to do this for you.
1. Go to the API Explorer application, locate "Register OAuth App", and press
the "Run It" button.
1. Set the `client_guid` to "gemini-cli".
1. Set the `redirect_uri` to "http://localhost:7777/oauth/callback".
1. The `display_name` and `description` can be "Gemini-CLI" or anything
meaningful.
1. Set `enabled` to "true".
1. Check the box confirming that you understand this API will change data.
1. Click the "Run" button.

## Step 2: Install and configure Toolbox
In this section, we will download Toolbox and run the Toolbox server.
1. Download the latest version of Toolbox as a binary:
{{< notice tip >}}
Select the
[correct binary](https://github.com/googleapis/genai-toolbox/releases)
corresponding to your OS and CPU architecture.
{{< /notice >}}
```bash
export OS="linux/amd64" # one of linux/amd64, darwin/arm64, darwin/amd64, or windows/amd64
curl -O https://storage.googleapis.com/genai-toolbox/v0.21.0/$OS/toolbox
```
1. Make the binary executable:
```bash
chmod +x toolbox
```
1. Create a file `looker_env` with the settings for your
Looker instance.
```bash
export LOOKER_BASE_URL=https://looker.example.com
export LOOKER_VERIFY_SSL=true
export LOOKER_USE_CLIENT_OAUTH=true
```
In some instances you may need to append `:19999` to
the `LOOKER_BASE_URL`.
1. Load the looker_env file into your environment.
```bash
source looker_env
```
1. Run the Toolbox server using the prebuilt Looker tools.
```bash
./toolbox --prebuilt looker
```
The toolbox server will begin listening on localhost port 5000. Leave it
running and continue in another terminal.
Later, when it is time to shut everything down, you can quit the toolbox
server with Ctrl-C in this terminal window.
## Step 3: Configure Gemini-CLI
1. Edit the file `~/.gemini/settings.json`. Add the following, substituting your
Looker server host name for `looker.example.com`.
```json
"mcpServers": {
"looker": {
"httpUrl": "http://localhost:5000/mcp",
"oauth": {
"enabled": true,
"clientId": "gemini-cli",
"authorizationUrl": "https://looker.example.com/auth",
"tokenUrl": "https://looker.example.com/api/token",
"scopes": ["cors_api"]
}
}
}
```
The `authorizationUrl` should point to the URL you use to access Looker via the
web UI. The `tokenUrl` should point to the URL you use to access Looker via
the API. In some cases you will need to use the port number `:19999` after
the host name but before the `/api/token` part.
1. Start Gemini-CLI.
1. Authenticate with the command `/mcp auth looker`. Gemini-CLI will open up a
browser where you will confirm that you want to access Looker with your
account.


1. Use Gemini-CLI with your tools.
## Using Toolbox as a Shared Service
Toolbox can be run on another server as a shared service accessed by multiple
users. We strongly recommend running Toolbox behind a web proxy such as `nginx`,
which will provide SSL encryption. Google Cloud Run is another good way to run
Toolbox. You will connect to a service like `https://toolbox.example.com/mcp`.
The proxy server will handle the SSL encryption and certificates, then forward
the requests to `http://localhost:5000/mcp` running in that environment.
The details of the config are beyond the scope of this document, but will be
familiar to your system administrators.
To use the shared service, just change the `localhost:5000` in the `httpUrl` in
`~/.gemini/settings.json` to the host name and possibly the port of the shared
service.
# Quickstart (MCP with Looker and Gemini-CLI)
How to get started running Toolbox with Gemini-CLI and Looker as the source.
## Overview
[Model Context Protocol](https://modelcontextprotocol.io) is an open protocol
that standardizes how applications provide context to LLMs. Check out this page
on how to [connect to Toolbox via MCP](../../how-to/connect_via_mcp.md).
## Step 1: Get a Looker Client ID and Client Secret
The Looker Client ID and Client Secret can be obtained from the Users page of
your Looker instance. Refer to the documentation
[here](https://cloud.google.com/looker/docs/api-auth#authentication_with_an_sdk).
You may need to ask an administrator to get the Client ID and Client Secret
for you.
## Step 2: Install and configure Toolbox
In this section, we will download Toolbox and run the Toolbox server.
1. Download the latest version of Toolbox as a binary:
{{< notice tip >}}
Select the
[correct binary](https://github.com/googleapis/genai-toolbox/releases)
corresponding to your OS and CPU architecture.
{{< /notice >}}
```bash
export OS="linux/amd64" # one of linux/amd64, darwin/arm64, darwin/amd64, or windows/amd64
curl -O https://storage.googleapis.com/genai-toolbox/v0.21.0/$OS/toolbox
```
1. Make the binary executable:
```bash
chmod +x toolbox
```
1. Edit the file `~/.gemini/settings.json` and add the following
to the list of mcpServers. Use the Client ID and Client Secret
you obtained earlier. The name of the server (here
`looker-toolbox`) can be anything meaningful to you.
```json
"mcpServers": {
"looker-toolbox": {
"command": "/path/to/toolbox",
"args": [
"--stdio",
"--prebuilt",
"looker"
],
"env": {
"LOOKER_BASE_URL": "https://looker.example.com",
"LOOKER_CLIENT_ID": "",
"LOOKER_CLIENT_SECRET": "",
"LOOKER_VERIFY_SSL": "true"
}
}
}
```
In some instances you may need to append `:19999` to
the `LOOKER_BASE_URL`.
## Step 3: Start Gemini-CLI
1. Run Gemini-CLI:
```bash
npx https://github.com/google-gemini/gemini-cli
```
1. Type `y` when it asks to download.
1. Log into Gemini-CLI.
1. Enter the command `/mcp` and you should see a list of
available tools like
```
ℹ Configured MCP servers:
🟢 looker-toolbox - Ready (11 tools)
- looker-toolbox__get_models
- looker-toolbox__query
- looker-toolbox__get_looks
- looker-toolbox__get_measures
- looker-toolbox__get_filters
- looker-toolbox__get_parameters
- looker-toolbox__get_explores
- looker-toolbox__query_sql
- looker-toolbox__get_dimensions
- looker-toolbox__run_look
- looker-toolbox__query_url
```
1. Start exploring your Looker instance with commands like
`Find an explore to see orders` or `show me my current
inventory broken down by item category`.
1. Gemini will prompt you for your approval before using
a tool. You can approve all the tools at once or
one at a time.
# Quickstart (MCP with Looker)
How to get started running Toolbox with MCP Inspector and Looker as the source.
## Overview
[Model Context Protocol](https://modelcontextprotocol.io) is an open protocol
that standardizes how applications provide context to LLMs. Check out this page
on how to [connect to Toolbox via MCP](../../../how-to/connect_via_mcp.md).
## Step 1: Get a Looker Client ID and Client Secret
The Looker Client ID and Client Secret can be obtained from the Users page of
your Looker instance. Refer to the documentation
[here](https://cloud.google.com/looker/docs/api-auth#authentication_with_an_sdk).
You may need to ask an administrator to get the Client ID and Client Secret
for you.
## Step 2: Install and configure Toolbox
In this section, we will download Toolbox and run the Toolbox server.
1. Download the latest version of Toolbox as a binary:
{{< notice tip >}}
Select the
[correct binary](https://github.com/googleapis/genai-toolbox/releases)
corresponding to your OS and CPU architecture.
{{< /notice >}}
```bash
export OS="linux/amd64" # one of linux/amd64, darwin/arm64, darwin/amd64, or windows/amd64
curl -O https://storage.googleapis.com/genai-toolbox/v0.21.0/$OS/toolbox
```
1. Make the binary executable:
```bash
chmod +x toolbox
```
1. Create a file `looker_env` with the settings for your
Looker instance. Use the Client ID and Client Secret
you obtained earlier.
```bash
export LOOKER_BASE_URL=https://looker.example.com
export LOOKER_VERIFY_SSL=true
export LOOKER_CLIENT_ID=Q7ynZkRkvj9S9FHPm4Wj
export LOOKER_CLIENT_SECRET=P5JvZstFnhpkhCYy2yNSfJ6x
```
In some instances you may need to append `:19999` to
the `LOOKER_BASE_URL`.
1. Load the looker_env file into your environment.
```bash
source looker_env
```
1. Run the Toolbox server using the prebuilt Looker tools.
```bash
./toolbox --prebuilt looker
```
## Step 3: Connect to MCP Inspector
1. Run the MCP Inspector:
```bash
npx @modelcontextprotocol/inspector
```
1. Type `y` when it asks to install the inspector package.
1. It should show the following when the MCP Inspector is up and running:
```bash
🔍 MCP Inspector is up and running at http://127.0.0.1:5173 🚀
```
1. Open the above link in your browser.
1. For `Transport Type`, select `SSE`.
1. For `URL`, type in `http://127.0.0.1:5000/mcp/sse`.
1. Click Connect.

1. Select `List Tools`; you will see a list of tools.

1. Test out your tools here!
# About
A list of other information related to Toolbox.
# FAQ
Frequently asked questions about Toolbox.
## How can I deploy or run Toolbox?
MCP Toolbox for Databases is open-source and can be run or deployed to a
multitude of environments. For convenience, we release [compiled binaries and
docker images][release-notes] (but you can always compile yourself as well!).
For detailed instructions, check out these resources:
- [Quickstart: How to Run Locally](../getting-started/local_quickstart.md)
- [Deploy to Cloud Run](../how-to/deploy_toolbox.md)
[release-notes]: https://github.com/googleapis/genai-toolbox/releases/
## Do I need a Google Cloud account/project to get started with Toolbox?
Nope! While some of the sources Toolbox connects to may require GCP credentials,
Toolbox doesn't require them and can connect to a bunch of different resources
that don't.
## Does Toolbox take contributions from external users?
Absolutely! Please check out our [DEVELOPER.md][] for instructions on how to get
started developing _on_ Toolbox instead of with it, and the [CONTRIBUTING.md][]
for instructions on completing the CLA and getting a PR accepted.
[DEVELOPER.md]: https://github.com/googleapis/genai-toolbox/blob/main/DEVELOPER.md
[CONTRIBUTING.md]: https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md
## Can Toolbox support a feature to let me do _$FOO_?
Maybe? The best place to start is by [opening an issue][github-issue] for
discussion (or seeing if there is already one open), so we can better understand
your use case and the best way to solve it. Generally we aim to prioritize the
most popular issues, so make sure to +1 ones you are the most interested in.
[github-issue]: https://github.com/googleapis/genai-toolbox/issues
## Can Toolbox be used for non-database tools?
**Yes!** While Toolbox is primarily focused on databases, it also supports generic
**HTTP tools** (`kind: http`). These allow you to connect your agents to REST APIs
and other web services, enabling workflows that extend beyond database interactions.
For configuration details, see the [HTTP Tools documentation](../resources/tools/http/http.md).
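As a rough illustration only, an HTTP tool is declared in `tools.yaml` the same
way as the database tools elsewhere in these docs. The field names below are
assumptions based on that pattern; verify them against the HTTP Tools
documentation linked above:
```bash
# Hedged sketch: a minimal HTTP tool config; the exact schema (baseUrl, method,
# path, etc.) is an assumption -- check the linked HTTP Tools reference.
cat > http_tools.yaml <<'EOF'
sources:
  my-http-source:
    kind: http
    baseUrl: https://api.example.com
tools:
  get-status:
    kind: http
    source: my-http-source
    method: GET
    path: /status
    description: Fetch the service status from a REST endpoint.
EOF
./toolbox --tools-file "http_tools.yaml"
```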
## Can I use _$BAR_ orchestration framework to use tools from Toolbox?
Currently, Toolbox only supports a limited number of client SDKs at our initial
launch. We are investigating support for more frameworks as well as more general
approaches for users without a framework -- look forward to seeing an update
soon.
## Why does Toolbox use a server-client architecture pattern?
Toolbox's server-client architecture allows us to more easily support a wide
variety of languages and frameworks with a centralized implementation. It also
allows us to tackle problems like connection pooling, auth, or caching more
completely than entirely client-side solutions.
## Why was Toolbox written in Go?
While a large part of the Gen AI ecosystem is predominantly Python, we opted to
use Go. We chose Go because it's still easy and simple to use, but also easier
to write fast, efficient, and concurrent servers. Additionally, given the
server-client architecture, we can still meet many developers where they are
with clients in their preferred language. As Gen AI matures, we want developers
to be able to use Toolbox on the serving path of mission critical applications.
It's easier to build the needed robustness, performance and scalability in Go
than in Python.
## Is Toolbox compatible with Model Context Protocol (MCP)?
Yes! Toolbox is compatible with [Anthropic's Model Context Protocol
(MCP)](https://modelcontextprotocol.io/). Please check out [Connect via
MCP](../how-to/connect_via_mcp.md) on how to connect to Toolbox with an MCP
client.
# SDKs
Client SDKs to connect to the MCP Toolbox server.
# Go SDK
Go lang client SDK
# JS SDK
Javascript client SDK
# Python SDK
Python client SDK
# Reference
This section contains reference documentation.
# CLI
This page describes the `toolbox` command-line options.
## Reference
| Flag (Short) | Flag (Long) | Description | Default |
|--------------|----------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------|
| `-a` | `--address` | Address of the interface the server will listen on. | `127.0.0.1` |
| | `--disable-reload` | Disables dynamic reloading of tools file. | |
| `-h` | `--help` | help for toolbox | |
| | `--log-level` | Specify the minimum level logged. Allowed: 'DEBUG', 'INFO', 'WARN', 'ERROR'. | `info` |
| | `--logging-format` | Specify logging format to use. Allowed: 'standard' or 'JSON'. | `standard` |
| `-p` | `--port` | Port the server will listen on. | `5000` |
| | `--prebuilt` | Use a prebuilt tool configuration by source type. Cannot be used with --tools-file. See [Prebuilt Tools Reference](prebuilt-tools.md) for allowed values. | |
| | `--stdio` | Listens via MCP STDIO instead of acting as a remote HTTP server. | |
| | `--telemetry-gcp` | Enable exporting directly to Google Cloud Monitoring. | |
| | `--telemetry-otlp` | Enable exporting using OpenTelemetry Protocol (OTLP) to the specified endpoint (e.g. 'http://127.0.0.1:4318') | |
| | `--telemetry-service-name` | Sets the value of the service.name resource attribute for telemetry data. | `toolbox` |
| | `--tools-file` | File path specifying the tool configuration. Cannot be used with --prebuilt, --tools-files, or --tools-folder. | |
| | `--tools-files` | Multiple file paths specifying tool configurations. Files will be merged. Cannot be used with --prebuilt, --tools-file, or --tools-folder. | |
| | `--tools-folder` | Directory path containing YAML tool configuration files. All .yaml and .yml files in the directory will be loaded and merged. Cannot be used with --prebuilt, --tools-file, or --tools-files. | |
| | `--ui` | Launches the Toolbox UI web server. | |
| | `--allowed-origins` | Specifies a list of origins permitted to access this server. | `*` |
| `-v` | `--version` | version for toolbox | |
## Examples
### Transport Configuration
**Server Settings:**
- `--address`, `-a`: Server listening address (default: "127.0.0.1")
- `--port`, `-p`: Server listening port (default: 5000)
**STDIO:**
- `--stdio`: Run in MCP STDIO mode instead of HTTP server
#### Usage Examples
```bash
# Basic server with custom port configuration
./toolbox --tools-file "tools.yaml" --port 8080
```
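A couple of further examples combining the transport flags above (the values are
illustrative):
```bash
# Listen on all interfaces on a custom port
./toolbox --tools-file "tools.yaml" --address 0.0.0.0 --port 8081

# Run in MCP STDIO mode instead of starting an HTTP listener
./toolbox --tools-file "tools.yaml" --stdio
```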
### Tool Configuration Sources
The CLI supports multiple mutually exclusive ways to specify tool configurations:
**Single File:** (default)
- `--tools-file`: Path to a single YAML configuration file (default: `tools.yaml`)
**Multiple Files:**
- `--tools-files`: Comma-separated list of YAML files to merge
**Directory:**
- `--tools-folder`: Directory containing YAML files to load and merge
**Prebuilt Configurations:**
- `--prebuilt`: Use predefined configurations for specific database types (e.g.,
'bigquery', 'postgres', 'spanner'). See [Prebuilt Tools
Reference](prebuilt-tools.md) for allowed values.
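For example, each of the mutually exclusive source flags is used like this (file
and folder names are illustrative):
```bash
# Single file
./toolbox --tools-file "tools.yaml"

# Multiple files, merged
./toolbox --tools-files "tools_a.yaml,tools_b.yaml"

# All .yaml/.yml files in a directory, merged
./toolbox --tools-folder "./tool-configs"

# A prebuilt configuration by source type
./toolbox --prebuilt bigquery
```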
{{< notice tip >}}
The CLI enforces mutual exclusivity between configuration source flags,
preventing simultaneous use of `--prebuilt` with file-based options, and
ensuring only one of `--tools-file`, `--tools-files`, or `--tools-folder` is
used at a time.
{{< /notice >}}
### Hot Reload
Toolbox enables dynamic reloading by default. To disable, use the
`--disable-reload` flag.
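For example:
```bash
# Start the server with dynamic reloading turned off
./toolbox --tools-file "tools.yaml" --disable-reload
```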
### Toolbox UI
To launch Toolbox's interactive UI, use the `--ui` flag. This allows you to test
tools and toolsets with features such as authorized parameters. To learn more,
visit [Toolbox UI](../how-to/toolbox-ui/index.md).
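For instance:
```bash
# Serve the interactive Toolbox UI alongside the server
./toolbox --tools-file "tools.yaml" --ui
```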
# Prebuilt Tools
This page lists all the prebuilt tools available.
Prebuilt tools are reusable, pre-packaged toolsets that are designed to extend
the capabilities of agents. These tools are built to be generic and adaptable,
allowing developers to interact with and take action on databases.
See the guide [Connect from your IDE](../how-to/connect-ide/_index.md) for
details on how to connect your AI tools (IDEs) to databases via Toolbox and MCP.
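The general pattern for any of the sources below is to export the listed
environment variables and then start the server with the corresponding
`--prebuilt` value. A minimal sketch using the BigQuery entry (the project ID is
a placeholder):
```bash
# Configure the source via env vars, then select its prebuilt toolset
export BIGQUERY_PROJECT="my-project-id"   # placeholder; see the BigQuery section below
./toolbox --prebuilt bigquery
```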
## AlloyDB Postgres
* `--prebuilt` value: `alloydb-postgres`
* **Environment Variables:**
* `ALLOYDB_POSTGRES_PROJECT`: The GCP project ID.
* `ALLOYDB_POSTGRES_REGION`: The region of your AlloyDB instance.
* `ALLOYDB_POSTGRES_CLUSTER`: The ID of your AlloyDB cluster.
* `ALLOYDB_POSTGRES_INSTANCE`: The ID of your AlloyDB instance.
* `ALLOYDB_POSTGRES_DATABASE`: The name of the database to connect to.
* `ALLOYDB_POSTGRES_USER`: (Optional) The database username. Defaults to
IAM authentication if unspecified.
* `ALLOYDB_POSTGRES_PASSWORD`: (Optional) The password for the database
user. Defaults to IAM authentication if unspecified.
* `ALLOYDB_POSTGRES_IP_TYPE`: (Optional) The IP type i.e. "Public" or
"Private" (Default: Public).
* **Permissions:**
* **AlloyDB Client** (`roles/alloydb.client`) to connect to the instance.
* Database-level permissions (e.g., `SELECT`, `INSERT`) are required to
execute queries.
* **Tools:**
* `execute_sql`: Executes a SQL query.
* `list_tables`: Lists tables in the database.
* `list_autovacuum_configurations`: Lists autovacuum configurations in the
database.
* `list_memory_configurations`: Lists memory-related configurations in the
database.
* `list_top_bloated_tables`: List top bloated tables in the database.
* `list_replication_slots`: Lists replication slots in the database.
* `list_invalid_indexes`: Lists invalid indexes in the database.
* `get_query_plan`: Generate the execution plan of a statement.
* `list_views`: Lists views in the database from pg_views with a default
limit of 50 rows. Returns schemaname, viewname and the ownername.
* `list_schemas`: Lists schemas in the database.
* `database_overview`: Fetches the current state of the PostgreSQL server.
* `list_triggers`: Lists triggers in the database.
* `list_indexes`: List available user indexes in a PostgreSQL database.
* `list_sequences`: List sequences in a PostgreSQL database.
* `list_pg_settings`: List configuration parameters for the PostgreSQL server.
* `list_database_stats`: Lists the key performance and activity statistics for
each database in the AlloyDB instance.
## AlloyDB Postgres Admin
* `--prebuilt` value: `alloydb-postgres-admin`
* **Permissions:**
* **AlloyDB Viewer** (`roles/alloydb.viewer`) is required for `list` and
`get` tools.
* **AlloyDB Admin** (`roles/alloydb.admin`) is required for `create` tools.
* **Tools:**
* `create_cluster`: Creates a new AlloyDB cluster.
* `list_clusters`: Lists all AlloyDB clusters in a project.
* `get_cluster`: Gets information about a specified AlloyDB cluster.
* `create_instance`: Creates a new AlloyDB instance within a cluster.
* `list_instances`: Lists all instances within an AlloyDB cluster.
* `get_instance`: Gets information about a specified AlloyDB instance.
* `create_user`: Creates a new database user in an AlloyDB cluster.
* `list_users`: Lists all database users within an AlloyDB cluster.
* `get_user`: Gets information about a specified database user in an
AlloyDB cluster.
* `wait_for_operation`: Polls the operations API to track the status of
long-running operations.
## AlloyDB Postgres Observability
* `--prebuilt` value: `alloydb-postgres-observability`
* **Permissions:**
* **Monitoring Viewer** (`roles/monitoring.viewer`) is required on the
project to view monitoring data.
* **Tools:**
* `get_system_metrics`: Fetches system level cloud monitoring data
(timeseries metrics) for an AlloyDB instance using a PromQL query.
* `get_query_metrics`: Fetches query level cloud monitoring data
(timeseries metrics) for queries running in an AlloyDB instance using a
PromQL query.
## BigQuery
* `--prebuilt` value: `bigquery`
* **Environment Variables:**
* `BIGQUERY_PROJECT`: The GCP project ID.
* `BIGQUERY_LOCATION`: (Optional) The dataset location.
* `BIGQUERY_USE_CLIENT_OAUTH`: (Optional) If `true`, forwards the client's
OAuth access token for authentication. Defaults to `false`.
* **Permissions:**
* **BigQuery User** (`roles/bigquery.user`) to execute queries and view
metadata.
* **BigQuery Metadata Viewer** (`roles/bigquery.metadataViewer`) to view
all datasets.
* **BigQuery Data Editor** (`roles/bigquery.dataEditor`) to create or
modify datasets and tables.
* **Gemini for Google Cloud** (`roles/cloudaicompanion.user`) to use the
conversational analytics API.
* **Tools:**
* `analyze_contribution`: Use this tool to perform contribution analysis,
also called key driver analysis.
* `ask_data_insights`: Use this tool to perform data analysis, get
insights, or answer complex questions about the contents of specific
BigQuery tables. For more information on required roles, API setup, and
IAM configuration, see the setup and authentication section of the
[Conversational Analytics API
documentation](https://cloud.google.com/gemini/docs/conversational-analytics-api/overview).
* `execute_sql`: Executes a SQL statement.
* `forecast`: Use this tool to forecast time series data.
* `get_dataset_info`: Gets dataset metadata.
* `get_table_info`: Gets table metadata.
* `list_dataset_ids`: Lists datasets.
* `list_table_ids`: Lists tables.
* `search_catalog`: Search for entries based on the provided query.
## Cloud SQL for MySQL
* `--prebuilt` value: `cloud-sql-mysql`
* **Environment Variables:**
* `CLOUD_SQL_MYSQL_PROJECT`: The GCP project ID.
* `CLOUD_SQL_MYSQL_REGION`: The region of your Cloud SQL instance.
* `CLOUD_SQL_MYSQL_INSTANCE`: The ID of your Cloud SQL instance.
* `CLOUD_SQL_MYSQL_DATABASE`: The name of the database to connect to.
* `CLOUD_SQL_MYSQL_USER`: The database username.
* `CLOUD_SQL_MYSQL_PASSWORD`: The password for the database user.
* `CLOUD_SQL_MYSQL_IP_TYPE`: (Optional) The IP type i.e. "Public" or
"Private" (Default: Public).
* **Permissions:**
* **Cloud SQL Client** (`roles/cloudsql.client`) to connect to the
instance.
* Database-level permissions (e.g., `SELECT`, `INSERT`) are required to
execute queries.
* **Tools:**
* `execute_sql`: Executes a SQL query.
* `list_tables`: Lists tables in the database.
* `get_query_plan`: Provides information about how MySQL executes a SQL
statement.
* `list_active_queries`: Lists ongoing queries.
* `list_tables_missing_unique_indexes`: Looks for tables that do not have
a primary or unique key constraint.
* `list_table_fragmentation`: Displays table fragmentation in MySQL.
## Cloud SQL for MySQL Observability
* `--prebuilt` value: `cloud-sql-mysql-observability`
* **Permissions:**
* **Monitoring Viewer** (`roles/monitoring.viewer`) is required on the
project to view monitoring data.
* **Tools:**
* `get_system_metrics`: Fetches system level cloud monitoring data
(timeseries metrics) for a MySQL instance using a PromQL query.
* `get_query_metrics`: Fetches query level cloud monitoring data
(timeseries metrics) for queries running in a MySQL instance using a
PromQL query.
## Cloud SQL for MySQL Admin
* `--prebuilt` value: `cloud-sql-mysql-admin`
* **Permissions:**
* **Cloud SQL Viewer** (`roles/cloudsql.viewer`): Provides read-only
access to resources.
* `get_instance`
* `list_instances`
* `list_databases`
* `wait_for_operation`
* **Cloud SQL Editor** (`roles/cloudsql.editor`): Provides permissions to
manage existing resources.
* All `viewer` tools
* `create_database`
* **Cloud SQL Admin** (`roles/cloudsql.admin`): Provides full control over
all resources.
* All `editor` and `viewer` tools
* `create_instance`
* `create_user`
* `clone_instance`
* **Tools:**
* `create_instance`: Creates a new Cloud SQL for MySQL instance.
* `get_instance`: Gets information about a Cloud SQL instance.
* `list_instances`: Lists Cloud SQL instances in a project.
* `create_database`: Creates a new database in a Cloud SQL instance.
* `list_databases`: Lists all databases for a Cloud SQL instance.
* `create_user`: Creates a new user in a Cloud SQL instance.
* `wait_for_operation`: Waits for a Cloud SQL operation to complete.
* `clone_instance`: Creates a clone for an existing Cloud SQL for MySQL instance.
## Cloud SQL for PostgreSQL
* `--prebuilt` value: `cloud-sql-postgres`
* **Environment Variables:**
* `CLOUD_SQL_POSTGRES_PROJECT`: The GCP project ID.
* `CLOUD_SQL_POSTGRES_REGION`: The region of your Cloud SQL instance.
* `CLOUD_SQL_POSTGRES_INSTANCE`: The ID of your Cloud SQL instance.
* `CLOUD_SQL_POSTGRES_DATABASE`: The name of the database to connect to.
* `CLOUD_SQL_POSTGRES_USER`: (Optional) The database username. Defaults to
IAM authentication if unspecified.
* `CLOUD_SQL_POSTGRES_PASSWORD`: (Optional) The password for the database
user. Defaults to IAM authentication if unspecified.
* `CLOUD_SQL_POSTGRES_IP_TYPE`: (Optional) The IP type i.e. "Public" or
"Private" (Default: Public).
* **Permissions:**
* **Cloud SQL Client** (`roles/cloudsql.client`) to connect to the
instance.
* Database-level permissions (e.g., `SELECT`, `INSERT`) are required to
execute queries.
* **Tools:**
* `execute_sql`: Executes a SQL query.
* `list_tables`: Lists tables in the database.
* `list_autovacuum_configurations`: Lists autovacuum configurations in the
database.
* `list_memory_configurations`: Lists memory-related configurations in the
database.
* `list_top_bloated_tables`: List top bloated tables in the database.
* `list_replication_slots`: Lists replication slots in the database.
* `list_invalid_indexes`: Lists invalid indexes in the database.
* `get_query_plan`: Generate the execution plan of a statement.
* `list_views`: Lists views in the database from pg_views with a default
limit of 50 rows. Returns schemaname, viewname and the ownername.
* `list_schemas`: Lists schemas in the database.
* `database_overview`: Fetches the current state of the PostgreSQL server.
* `list_triggers`: Lists triggers in the database.
* `list_indexes`: List available user indexes in a PostgreSQL database.
* `list_sequences`: List sequences in a PostgreSQL database.
* `list_pg_settings`: List configuration parameters for the PostgreSQL server.
* `list_database_stats`: Lists the key performance and activity statistics for
each database in the PostgreSQL instance.
## Cloud SQL for PostgreSQL Observability
* `--prebuilt` value: `cloud-sql-postgres-observability`
* **Permissions:**
* **Monitoring Viewer** (`roles/monitoring.viewer`) is required on the
project to view monitoring data.
* **Tools:**
* `get_system_metrics`: Fetches system level cloud monitoring data
(timeseries metrics) for a Postgres instance using a PromQL query.
* `get_query_metrics`: Fetches query level cloud monitoring data
(timeseries metrics) for queries running in a Postgres instance using a
PromQL query.
## Cloud SQL for PostgreSQL Admin
* `--prebuilt` value: `cloud-sql-postgres-admin`
* **Permissions:**
* **Cloud SQL Viewer** (`roles/cloudsql.viewer`): Provides read-only
access to resources.
* `get_instance`
* `list_instances`
* `list_databases`
* `wait_for_operation`
* **Cloud SQL Editor** (`roles/cloudsql.editor`): Provides permissions to
manage existing resources.
* All `viewer` tools
* `create_database`
* **Cloud SQL Admin** (`roles/cloudsql.admin`): Provides full control over
all resources.
* All `editor` and `viewer` tools
* `create_instance`
* `create_user`
* `clone_instance`
* **Tools:**
* `create_instance`: Creates a new Cloud SQL for PostgreSQL instance.
* `get_instance`: Gets information about a Cloud SQL instance.
* `list_instances`: Lists Cloud SQL instances in a project.
* `create_database`: Creates a new database in a Cloud SQL instance.
* `list_databases`: Lists all databases for a Cloud SQL instance.
* `create_user`: Creates a new user in a Cloud SQL instance.
* `wait_for_operation`: Waits for a Cloud SQL operation to complete.
* `clone_instance`: Creates a clone for an existing Cloud SQL for PostgreSQL instance.
## Cloud SQL for SQL Server
* `--prebuilt` value: `cloud-sql-mssql`
* **Environment Variables:**
* `CLOUD_SQL_MSSQL_PROJECT`: The GCP project ID.
* `CLOUD_SQL_MSSQL_REGION`: The region of your Cloud SQL instance.
* `CLOUD_SQL_MSSQL_INSTANCE`: The ID of your Cloud SQL instance.
* `CLOUD_SQL_MSSQL_DATABASE`: The name of the database to connect to.
* `CLOUD_SQL_MSSQL_USER`: The database username.
* `CLOUD_SQL_MSSQL_PASSWORD`: The password for the database user.
* `CLOUD_SQL_MSSQL_IP_TYPE`: (Optional) The IP type i.e. "Public" or
"Private" (Default: Public).
* **Permissions:**
* **Cloud SQL Client** (`roles/cloudsql.client`) to connect to the
instance.
* Database-level permissions (e.g., `SELECT`, `INSERT`) are required to
execute queries.
* **Tools:**
* `execute_sql`: Executes a SQL query.
* `list_tables`: Lists tables in the database.
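
A minimal sketch of launching this toolset, assuming the server binary from the
installation steps is available as `./toolbox` and using placeholder values:

```bash
# Placeholder values; substitute your own project, region, instance, and credentials.
export CLOUD_SQL_MSSQL_PROJECT="my-project"
export CLOUD_SQL_MSSQL_REGION="us-central1"
export CLOUD_SQL_MSSQL_INSTANCE="my-instance"
export CLOUD_SQL_MSSQL_DATABASE="my-database"
export CLOUD_SQL_MSSQL_USER="my-user"
export CLOUD_SQL_MSSQL_PASSWORD="my-password"
# CLOUD_SQL_MSSQL_IP_TYPE is optional and defaults to "Public".

./toolbox --prebuilt cloud-sql-mssql
```
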
## Cloud SQL for SQL Server Observability
* `--prebuilt` value: `cloud-sql-mssql-observability`
* **Permissions:**
* **Monitoring Viewer** (`roles/monitoring.viewer`) is required on the
project to view monitoring data.
* **Tools:**
    * `get_system_metrics`: Fetches system-level cloud monitoring data
      (timeseries metrics) for a SQL Server instance using a PromQL query.
## Cloud SQL for SQL Server Admin
* `--prebuilt` value: `cloud-sql-mssql-admin`
* **Permissions:**
* **Cloud SQL Viewer** (`roles/cloudsql.viewer`): Provides read-only
access to resources.
* `get_instance`
* `list_instances`
* `list_databases`
* `wait_for_operation`
* **Cloud SQL Editor** (`roles/cloudsql.editor`): Provides permissions to
manage existing resources.
* All `viewer` tools
* `create_database`
* **Cloud SQL Admin** (`roles/cloudsql.admin`): Provides full control over
all resources.
* All `editor` and `viewer` tools
* `create_instance`
* `create_user`
* `clone_instance`
* **Tools:**
* `create_instance`: Creates a new Cloud SQL for SQL Server instance.
* `get_instance`: Gets information about a Cloud SQL instance.
* `list_instances`: Lists Cloud SQL instances in a project.
* `create_database`: Creates a new database in a Cloud SQL instance.
* `list_databases`: Lists all databases for a Cloud SQL instance.
* `create_user`: Creates a new user in a Cloud SQL instance.
* `wait_for_operation`: Waits for a Cloud SQL operation to complete.
    * `clone_instance`: Creates a clone of an existing Cloud SQL for SQL Server instance.
## Dataplex
* `--prebuilt` value: `dataplex`
* **Environment Variables:**
* `DATAPLEX_PROJECT`: The GCP project ID.
* **Permissions:**
    * **Dataplex Viewer** (`roles/dataplex.viewer`) to search and look up
      entries.
* **Dataplex Editor** (`roles/dataplex.editor`) to modify entries.
* **Tools:**
* `dataplex_search_entries`: Searches for entries in Dataplex Catalog.
* `dataplex_lookup_entry`: Retrieves a specific entry from Dataplex
Catalog.
* `dataplex_search_aspect_types`: Finds aspect types relevant to the
query.
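
A minimal sketch, again assuming a `./toolbox` binary; only the project ID (a
placeholder here) needs to be set:

```bash
# Placeholder project ID; substitute your own.
DATAPLEX_PROJECT="my-project" ./toolbox --prebuilt dataplex
```
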
## Firestore
* `--prebuilt` value: `firestore`
* **Environment Variables:**
* `FIRESTORE_PROJECT`: The GCP project ID.
* `FIRESTORE_DATABASE`: (Optional) The Firestore database ID. Defaults to
"(default)".
* **Permissions:**
* **Cloud Datastore User** (`roles/datastore.user`) to get documents, list
collections, and query collections.
* **Firebase Rules Viewer** (`roles/firebaserules.viewer`) to get and
validate Firestore rules.
* **Tools:**
* `get_documents`: Gets multiple documents from Firestore by their paths.
* `add_documents`: Adds a new document to a Firestore collection.
* `update_document`: Updates an existing document in Firestore.
* `list_collections`: Lists Firestore collections for a given parent path.
* `delete_documents`: Deletes multiple documents from Firestore.
* `query_collection`: Retrieves one or more Firestore documents from a
collection.
* `get_rules`: Retrieves the active Firestore security rules.
* `validate_rules`: Checks the provided Firestore Rules source for syntax
and validation errors.
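
A minimal sketch with a placeholder project ID:

```bash
# Placeholder project ID; substitute your own.
export FIRESTORE_PROJECT="my-project"
# FIRESTORE_DATABASE is optional and defaults to "(default)".

./toolbox --prebuilt firestore
```
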
## Looker
* `--prebuilt` value: `looker`
* **Environment Variables:**
* `LOOKER_BASE_URL`: The URL of your Looker instance.
* `LOOKER_CLIENT_ID`: The client ID for the Looker API.
* `LOOKER_CLIENT_SECRET`: The client secret for the Looker API.
* `LOOKER_VERIFY_SSL`: Whether to verify SSL certificates.
* `LOOKER_USE_CLIENT_OAUTH`: Whether to use OAuth for authentication.
* `LOOKER_SHOW_HIDDEN_MODELS`: Whether to show hidden models.
* `LOOKER_SHOW_HIDDEN_EXPLORES`: Whether to show hidden explores.
* `LOOKER_SHOW_HIDDEN_FIELDS`: Whether to show hidden fields.
* **Permissions:**
* A Looker account with permissions to access the desired models,
explores, and data is required.
* **Tools:**
* `get_models`: Retrieves the list of LookML models.
* `get_explores`: Retrieves the list of explores in a model.
* `get_dimensions`: Retrieves the list of dimensions in an explore.
* `get_measures`: Retrieves the list of measures in an explore.
* `get_filters`: Retrieves the list of filters in an explore.
* `get_parameters`: Retrieves the list of parameters in an explore.
* `query`: Runs a query against the LookML model.
* `query_sql`: Generates the SQL for a query.
* `query_url`: Generates a URL for a query in Looker.
* `get_looks`: Searches for saved looks.
* `run_look`: Runs the query associated with a look.
* `make_look`: Creates a new look.
* `get_dashboards`: Searches for saved dashboards.
* `run_dashboard`: Runs the queries associated with a dashboard.
* `make_dashboard`: Creates a new dashboard.
* `add_dashboard_element`: Adds a tile to a dashboard.
    * `health_pulse`: Tests the health of a Looker instance.
    * `health_analyze`: Analyzes the LookML usage of a Looker instance.
    * `health_vacuum`: Suggests LookML elements that can be removed.
    * `dev_mode`: Activates developer mode.
    * `get_projects`: Gets the LookML projects in a Looker instance.
    * `get_project_files`: Lists the project files in a project.
    * `get_project_file`: Gets the content of a LookML file.
    * `create_project_file`: Creates a new LookML file.
    * `update_project_file`: Updates an existing LookML file.
    * `delete_project_file`: Deletes a LookML file.
    * `get_connections`: Gets the available connections in a Looker instance.
    * `get_connection_schemas`: Gets the available schemas in a connection.
    * `get_connection_databases`: Gets the available databases in a connection.
    * `get_connection_tables`: Gets the available tables in a connection.
    * `get_connection_table_columns`: Gets the available columns for a table.
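
A minimal sketch with placeholder credentials, assuming a `./toolbox` binary:

```bash
# Placeholder values; substitute your Looker URL and API credentials.
export LOOKER_BASE_URL="https://looker.example.com"
export LOOKER_CLIENT_ID="my-client-id"
export LOOKER_CLIENT_SECRET="my-client-secret"
export LOOKER_VERIFY_SSL="true"

./toolbox --prebuilt looker
```
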
## Looker Conversational Analytics
* `--prebuilt` value: `looker-conversational-analytics`
* **Environment Variables:**
* `LOOKER_BASE_URL`: The URL of your Looker instance.
* `LOOKER_CLIENT_ID`: The client ID for the Looker API.
* `LOOKER_CLIENT_SECRET`: The client secret for the Looker API.
* `LOOKER_VERIFY_SSL`: Whether to verify SSL certificates.
* `LOOKER_USE_CLIENT_OAUTH`: Whether to use OAuth for authentication.
    * `LOOKER_PROJECT`: The GCP project to use for Conversational Analytics.
    * `LOOKER_LOCATION`: The GCP location to use for Conversational Analytics.
* **Permissions:**
* A Looker account with permissions to access the desired models,
explores, and data is required.
* **Looker Instance User** (`roles/looker.instanceUser`): IAM role to
access Looker.
* **Gemini for Google Cloud User** (`roles/cloudaicompanion.user`): IAM
role to access Conversational Analytics.
* **Gemini Data Analytics Stateless Chat User (Beta)**
(`roles/geminidataanalytics.dataAgentStatelessUser`): IAM role to
access Conversational Analytics.
* **Tools:**
    * `ask_data_insights`: Asks a question of the data.
* `get_models`: Retrieves the list of LookML models.
* `get_explores`: Retrieves the list of explores in a model.
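
A sketch building on the Looker example above; Conversational Analytics
additionally needs a GCP project and location (placeholders here):

```bash
# Same LOOKER_* credentials as in the Looker example above, plus:
export LOOKER_PROJECT="my-project"     # placeholder GCP project
export LOOKER_LOCATION="us-central1"   # placeholder GCP location

./toolbox --prebuilt looker-conversational-analytics
```
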
## Microsoft SQL Server
* `--prebuilt` value: `mssql`
* **Environment Variables:**
* `MSSQL_HOST`: (Optional) The hostname or IP address of the SQL Server instance.
* `MSSQL_PORT`: (Optional) The port number for the SQL Server instance.
* `MSSQL_DATABASE`: The name of the database to connect to.
* `MSSQL_USER`: The database username.
* `MSSQL_PASSWORD`: The password for the database user.
* **Permissions:**
* Database-level permissions (e.g., `SELECT`, `INSERT`) are required to
execute queries.
* **Tools:**
* `execute_sql`: Executes a SQL query.
* `list_tables`: Lists tables in the database.
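
A minimal sketch with placeholder connection details:

```bash
# Placeholder values; substitute your server address and credentials.
export MSSQL_HOST="127.0.0.1"   # optional
export MSSQL_PORT="1433"        # optional
export MSSQL_DATABASE="my-database"
export MSSQL_USER="my-user"
export MSSQL_PASSWORD="my-password"

./toolbox --prebuilt mssql
```
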
## MySQL
* `--prebuilt` value: `mysql`
* **Environment Variables:**
* `MYSQL_HOST`: The hostname or IP address of the MySQL server.
* `MYSQL_PORT`: The port number for the MySQL server.
* `MYSQL_DATABASE`: The name of the database to connect to.
* `MYSQL_USER`: The database username.
* `MYSQL_PASSWORD`: The password for the database user.
* **Permissions:**
* Database-level permissions (e.g., `SELECT`, `INSERT`) are required to
execute queries.
* **Tools:**
* `execute_sql`: Executes a SQL query.
* `list_tables`: Lists tables in the database.
* `get_query_plan`: Provides information about how MySQL executes a SQL
statement.
* `list_active_queries`: Lists ongoing queries.
    * `list_tables_missing_unique_indexes`: Looks for tables that do not have a
      primary key or unique key constraint.
* `list_table_fragmentation`: Displays table fragmentation in MySQL.
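
A minimal sketch with placeholder connection details:

```bash
# Placeholder values; substitute your server address and credentials.
export MYSQL_HOST="127.0.0.1"
export MYSQL_PORT="3306"
export MYSQL_DATABASE="my-database"
export MYSQL_USER="my-user"
export MYSQL_PASSWORD="my-password"

./toolbox --prebuilt mysql
```
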
## OceanBase
* `--prebuilt` value: `oceanbase`
* **Environment Variables:**
* `OCEANBASE_HOST`: The hostname or IP address of the OceanBase server.
* `OCEANBASE_PORT`: The port number for the OceanBase server.
* `OCEANBASE_DATABASE`: The name of the database to connect to.
* `OCEANBASE_USER`: The database username.
* `OCEANBASE_PASSWORD`: The password for the database user.
* **Permissions:**
* Database-level permissions (e.g., `SELECT`, `INSERT`) are required to
execute queries.
* **Tools:**
* `execute_sql`: Executes a SQL query.
* `list_tables`: Lists tables in the database.
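
A minimal sketch, passing placeholder values inline (2881 is OceanBase's
default MySQL-protocol port):

```bash
# Placeholder values; substitute your own connection details.
OCEANBASE_HOST="127.0.0.1" OCEANBASE_PORT="2881" \
OCEANBASE_DATABASE="my-database" OCEANBASE_USER="my-user" \
OCEANBASE_PASSWORD="my-password" \
./toolbox --prebuilt oceanbase
```
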
## PostgreSQL
* `--prebuilt` value: `postgres`
* **Environment Variables:**
* `POSTGRES_HOST`: (Optional) The hostname or IP address of the PostgreSQL server.
* `POSTGRES_PORT`: (Optional) The port number for the PostgreSQL server.
* `POSTGRES_DATABASE`: The name of the database to connect to.
* `POSTGRES_USER`: The database username.
* `POSTGRES_PASSWORD`: The password for the database user.
* `POSTGRES_QUERY_PARAMS`: (Optional) Raw query to be added to the db
connection string.
* **Permissions:**
* Database-level permissions (e.g., `SELECT`, `INSERT`) are required to
execute queries.
* **Tools:**
* `execute_sql`: Executes a SQL query.
* `list_tables`: Lists tables in the database.
* `list_autovacuum_configurations`: Lists autovacuum configurations in the
database.
* `list_memory_configurations`: Lists memory-related configurations in the
database.
    * `list_top_bloated_tables`: Lists the top bloated tables in the database.
* `list_replication_slots`: Lists replication slots in the database.
* `list_invalid_indexes`: Lists invalid indexes in the database.
    * `get_query_plan`: Generates the execution plan of a statement.
    * `list_views`: Lists views in the database from `pg_views`, with a default
      limit of 50 rows. Returns the schema name, view name, and owner name.
* `list_schemas`: Lists schemas in the database.
* `database_overview`: Fetches the current state of the PostgreSQL server.
* `list_triggers`: Lists triggers in the database.
    * `list_indexes`: Lists available user indexes in a PostgreSQL database.
    * `list_sequences`: Lists sequences in a PostgreSQL database.
    * `list_pg_settings`: Lists configuration parameters for the PostgreSQL server.
* `list_database_stats`: Lists the key performance and activity statistics for
each database in the PostgreSQL server.
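
A minimal sketch with placeholder connection details; the optional
`POSTGRES_QUERY_PARAMS` example uses a standard libpq parameter:

```bash
# Placeholder values; substitute your server address and credentials.
export POSTGRES_HOST="127.0.0.1"   # optional
export POSTGRES_PORT="5432"        # optional
export POSTGRES_DATABASE="my-database"
export POSTGRES_USER="my-user"
export POSTGRES_PASSWORD="my-password"
# Optional raw query string appended to the connection string, e.g.:
export POSTGRES_QUERY_PARAMS="sslmode=require"

./toolbox --prebuilt postgres
```
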
## Google Cloud Serverless for Apache Spark
* `--prebuilt` value: `serverless-spark`
* **Environment Variables:**
    * `SERVERLESS_SPARK_PROJECT`: The GCP project ID.
    * `SERVERLESS_SPARK_LOCATION`: The GCP location.
* **Permissions:**
    * **Dataproc Serverless Viewer** (`roles/dataproc.serverlessViewer`) to
      view serverless batches.
    * **Dataproc Serverless Editor** (`roles/dataproc.serverlessEditor`) to
      manage serverless batches.
* **Tools:**
* `list_batches`: Lists Spark batches.
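
A minimal sketch with placeholder values:

```bash
# Placeholder project and location; substitute your own.
SERVERLESS_SPARK_PROJECT="my-project" SERVERLESS_SPARK_LOCATION="us-central1" \
./toolbox --prebuilt serverless-spark
```
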
## Spanner (GoogleSQL dialect)
* `--prebuilt` value: `spanner`
* **Environment Variables:**
* `SPANNER_PROJECT`: The GCP project ID.
* `SPANNER_INSTANCE`: The Spanner instance ID.
* `SPANNER_DATABASE`: The Spanner database ID.
* **Permissions:**
* **Cloud Spanner Database Reader** (`roles/spanner.databaseReader`) to
execute DQL queries and list tables.
* **Cloud Spanner Database User** (`roles/spanner.databaseUser`) to
execute DML queries.
* **Tools:**
* `execute_sql`: Executes a DML SQL query.
* `execute_sql_dql`: Executes a DQL SQL query.
* `list_tables`: Lists tables in the database.
* `list_graphs`: Lists graphs in the database.
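
A minimal sketch with placeholder values:

```bash
# Placeholder values; substitute your own project, instance, and database.
export SPANNER_PROJECT="my-project"
export SPANNER_INSTANCE="my-instance"
export SPANNER_DATABASE="my-database"

./toolbox --prebuilt spanner
```
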
## Spanner (PostgreSQL dialect)
* `--prebuilt` value: `spanner-postgres`
* **Environment Variables:**
* `SPANNER_PROJECT`: The GCP project ID.
* `SPANNER_INSTANCE`: The Spanner instance ID.
* `SPANNER_DATABASE`: The Spanner database ID.
* **Permissions:**
* **Cloud Spanner Database Reader** (`roles/spanner.databaseReader`) to
execute DQL queries and list tables.
* **Cloud Spanner Database User** (`roles/spanner.databaseUser`) to
execute DML queries.
* **Tools:**
* `execute_sql`: Executes a DML SQL query using the PostgreSQL interface
for Spanner.
* `execute_sql_dql`: Executes a DQL SQL query using the PostgreSQL
interface for Spanner.
* `list_tables`: Lists tables in the database.
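
The launch sketch is the same as for the GoogleSQL dialect above; only the
`--prebuilt` value changes:

```bash
# Uses the same SPANNER_* variables as the GoogleSQL example above.
./toolbox --prebuilt spanner-postgres
```
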
## SQLite
* `--prebuilt` value: `sqlite`
* **Environment Variables:**
* `SQLITE_DATABASE`: The path to the SQLite database file (e.g.,
`./sample.db`).
* **Permissions:**
* File system read/write permissions for the specified database file.
* **Tools:**
* `execute_sql`: Executes a SQL query.
* `list_tables`: Lists tables in the database.
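
A minimal sketch using the example path from above:

```bash
# Point this at your SQLite database file.
SQLITE_DATABASE="./sample.db" ./toolbox --prebuilt sqlite
```
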
## Neo4j
* `--prebuilt` value: `neo4j`
* **Environment Variables:**
* `NEO4J_URI`: The URI of the Neo4j instance (e.g.,
`bolt://localhost:7687`).
* `NEO4J_DATABASE`: The name of the Neo4j database to connect to.
* `NEO4J_USERNAME`: The username for the Neo4j instance.
* `NEO4J_PASSWORD`: The password for the Neo4j instance.
* **Permissions:**
* **Database-level permissions** are required to execute Cypher queries.
* **Tools:**
* `execute_cypher`: Executes a Cypher query.
* `get_schema`: Retrieves the schema of the Neo4j database.
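
A minimal sketch with placeholder values (`neo4j` is Neo4j's default database
name):

```bash
# Placeholder values; substitute your Neo4j URI and credentials.
export NEO4J_URI="bolt://localhost:7687"
export NEO4J_DATABASE="neo4j"
export NEO4J_USERNAME="neo4j"
export NEO4J_PASSWORD="my-password"

./toolbox --prebuilt neo4j
```
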
## Google Cloud Healthcare API
* `--prebuilt` value: `cloud-healthcare`
* **Environment Variables:**
* `CLOUD_HEALTHCARE_PROJECT`: The GCP project ID.
* `CLOUD_HEALTHCARE_REGION`: The Cloud Healthcare API dataset region.
* `CLOUD_HEALTHCARE_DATASET`: The Cloud Healthcare API dataset ID.
* `CLOUD_HEALTHCARE_USE_CLIENT_OAUTH`: (Optional) If `true`, forwards the client's
OAuth access token for authentication. Defaults to `false`.
* **Permissions:**
    * **Healthcare FHIR Resource Reader** (`roles/healthcare.fhirResourceReader`) to read and
      search FHIR resources.
* **Healthcare DICOM Viewer** (`roles/healthcare.dicomViewer`) to retrieve DICOM images from a
DICOM store.
* **Tools:**
* `get_dataset`: Gets information about a Cloud Healthcare API dataset.
* `list_dicom_stores`: Lists DICOM stores in a Cloud Healthcare API dataset.
* `list_fhir_stores`: Lists FHIR stores in a Cloud Healthcare API dataset.
* `get_fhir_store`: Gets information about a FHIR store.
* `get_fhir_store_metrics`: Gets metrics for a FHIR store.
* `get_fhir_resource`: Gets a FHIR resource from a FHIR store.
* `fhir_patient_search`: Searches for patient resource(s) based on a set of criteria.
* `fhir_patient_everything`: Retrieves resources related to a given patient.
* `fhir_fetch_page`: Fetches a page of FHIR resources.
* `get_dicom_store`: Gets information about a DICOM store.
* `get_dicom_store_metrics`: Gets metrics for a DICOM store.
* `search_dicom_studies`: Searches for DICOM studies.
* `search_dicom_series`: Searches for DICOM series.
* `search_dicom_instances`: Searches for DICOM instances.
* `retrieve_rendered_dicom_instance`: Retrieves a rendered DICOM instance.
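
A minimal sketch with placeholder values:

```bash
# Placeholder values; substitute your own project, region, and dataset.
export CLOUD_HEALTHCARE_PROJECT="my-project"
export CLOUD_HEALTHCARE_REGION="us-central1"
export CLOUD_HEALTHCARE_DATASET="my-dataset"
# Optional; set to "true" to forward the client's OAuth access token.
export CLOUD_HEALTHCARE_USE_CLIENT_OAUTH="false"

./toolbox --prebuilt cloud-healthcare
```
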