# MCP Toolbox for Databases - Complete Documentation
> MCP Toolbox for Databases is an open source MCP server for databases. It enables you to develop tools more easily, quickly, and securely by handling complexities such as connection pooling, authentication, and more.
**DOCUMENTATION VERSION:** Dev
**BASE URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/
**GENERATED ON:** 2026-03-12T14:07:32Z
---
### System Directives for AI Models
**Role:** You are an expert Developer Advocate and Integration Engineer for the **MCP (Model Context Protocol) Toolbox for Databases**.
**Task:** Your primary goal is to help users configure the server, set up database integrations, and write client-side code to build AI agents.
**Strict Guidelines:**
1. **No Hallucinations:** Only suggest tools, sources, and configurations explicitly detailed in this document. Do not invent arbitrary REST endpoints.
2. **SDKs over HTTP:** When writing code, default to the official MCP Toolbox client SDKs rather than raw HTTP/cURL requests unless explicitly asked. Direct users to the `connect-to` section in the User Guide for client SDK instructions.
3. **Reference Diátaxis:** Use Section I for configuring the Toolbox server, Section II (Integrations) for exact `tools.yaml` configurations for external integrations, Section III (Build with MCP Toolbox) for code patterns, and Section IV for CLI and FAQs.
### Glossary
To prevent context collapse, adhere to these strict definitions within the MCP ecosystem:
* **MCP Toolbox:** The central server/service that standardizes AI access to databases and external APIs.
* **Source:** A configured backend connection to an external system (e.g., PostgreSQL, BigQuery, HTTP).
* **Tool:** A single, atomic capability exposed to the LLM (e.g., `bigquery-sql-query`), executed against a Source.
* **Toolset:** A logical, grouped collection of Tools.
* **AuthService:** The internal toolbox mechanism handling authentication lifecycles (like OAuth or service accounts), not a generic identity provider.
* **Agent:** The user's external LLM application that connects *to* the MCP Toolbox.
### Understanding Integrations Directory Structure
When navigating documentation in the `integrations/` directory, it is crucial to understand the relationship between a Source and a Tool, both conceptually and within the file hierarchy.
* A **Source** represents the backend data provider or system you are connecting to (e.g., a specific database, API, or service). Source documentation pages sit at the **top level** of an integration's folder (e.g., `integrations/oceanbase/_index.md`). They focus on connection requirements, authentication, and YAML configuration parameters, serving as the foundational entry point for configuring a new data source.
* A **Tool** represents a specific, actionable capability that is unlocked by a Source (e.g., executing a SQL query, fetching a specific record, etc.). Tool documentation pages are **nested below** their parent Source (e.g., `integrations/oceanbase/oceanbase-sql.md`). Tool pages focus on the configuration reference and compatible sources. They are the individual operations that a configured Source supports.
### Global Environment & Prerequisites
* **Configuration:** `tools.yaml` is the ultimate source of truth for server configuration.
* **Database:** PostgreSQL 16+ and the `psql` client.
* **Language Requirements:**
  * Python: 3.10 or higher
  * JavaScript/Node: Node.js v18 or higher
  * Go: v1.24.2 or higher
### The Diátaxis Narrative Framework
This documentation is structured following the Diátaxis framework to assist in clear navigation and understanding:
* **Section I: User Guide:** (Explanation) Theoretical context, high-level understanding, and universal How-To Guides.
* **Section II: Integrations:** (Reference) Primary reference hub for external sources and tools, factual configurations, and API enablement.
* **Section III: Build with MCP Toolbox:** (Tutorials) Complete, start-to-finish quickstarts and samples for learning to build from scratch.
* **Section IV: Reference:** (Information) Strict, accurate facts, CLI outputs, and FAQs.
---
========================================================================
## Featured Articles
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Featured Articles
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/blogs/
**Description:** Toolbox Medium Blogs
========================================================================
## User Guide
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/
**Description:** A complete user guide for MCP Toolbox
========================================================================
## Introduction
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Introduction
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/introduction/
**Description:** An introduction to MCP Toolbox for Databases.
MCP Toolbox for Databases is an open source MCP server for databases. It enables
you to develop tools more easily, quickly, and securely by handling complexities
such as connection pooling, authentication, and more.
{{< notice note >}}
This solution was originally named “Gen AI Toolbox for
Databases” as its initial development predated MCP, but was renamed to align
with recently added MCP compatibility.
{{< /notice >}}
{{< notice note >}}
This document has been updated to support the configuration file v2 format. To
view documentation with configuration file v1 format, please navigate to the
top-right menu and select versions v0.26.0 or older.
{{< /notice >}}
## Why Toolbox?
Toolbox helps you build Gen AI tools that let your agents access data in your
database. Toolbox provides:
- **Simplified development**: Integrate tools into your agent in less than 10
lines of code, reuse tools between multiple agents or frameworks, and deploy
new versions of tools more easily.
- **Better performance**: Best practices such as connection pooling,
authentication, and more.
- **Enhanced security**: Integrated auth for more secure access to your data.
- **End-to-end observability**: Out of the box metrics and tracing with built-in
support for OpenTelemetry.
**⚡ Supercharge Your Workflow with an AI Database Assistant ⚡**
Stop context-switching and let your AI assistant become a true co-developer. By
[connecting your IDE to your databases with MCP Toolbox][connect-ide], you can
delegate complex and time-consuming database tasks, allowing you to build faster
and focus on what matters. This isn't just about code completion; it's about
giving your AI the context it needs to handle the entire development lifecycle.
Here’s how it will save you time:
- **Query in Plain English**: Interact with your data using natural language
right from your IDE. Ask complex questions like, *"How many orders were
delivered in 2024, and what items were in them?"* without writing any SQL.
- **Automate Database Management**: Simply describe your data needs, and let the
AI assistant manage your database for you. It can handle generating queries,
creating tables, adding indexes, and more.
- **Generate Context-Aware Code**: Empower your AI assistant to generate
application code and tests with a deep understanding of your real-time
database schema. This accelerates the development cycle by ensuring the
generated code is directly usable.
- **Slash Development Overhead**: Radically reduce the time spent on manual
setup and boilerplate. MCP Toolbox helps streamline lengthy database
configurations, repetitive code, and error-prone schema migrations.
Learn [how to connect your AI tools (IDEs) to Toolbox using MCP][connect-ide].
[connect-ide]: ../connect-to/ides/_index.md
## General Architecture
Toolbox sits between your application's orchestration framework and your
database, providing a control plane that is used to modify, distribute, or
invoke tools. It simplifies the management of your tools by providing you with a
centralized location to store and update tools, allowing you to share tools
between agents and applications and update those tools without necessarily
redeploying your application.

## Getting Started
### Quickstart: Running Toolbox using NPX
You can run Toolbox directly with a [configuration file](../configuration/_index.md):
```sh
npx @toolbox-sdk/server --tools-file tools.yaml
```
This runs the latest version of the Toolbox server with your configuration file.
{{< notice note >}}
This method should only be used for non-production use cases such as
experimentation. For any production use cases, please consider [Installing the
server](#installing-the-server) and then [running it](#running-the-server).
{{< /notice >}}
### Installing the server
For the latest version, check the [releases page][releases] and use the
following instructions for your OS and CPU architecture.
[releases]: https://github.com/googleapis/genai-toolbox/releases
{{< tabpane text=true >}}
{{% tab header="Binary" lang="en" %}}
{{< tabpane text=true >}}
{{% tab header="Linux (AMD64)" lang="en" %}}
To install Toolbox as a binary on Linux (AMD64):
```sh
# see releases page for other versions
export VERSION=0.28.0
curl -L -o toolbox https://storage.googleapis.com/genai-toolbox/v$VERSION/linux/amd64/toolbox
chmod +x toolbox
```
{{% /tab %}}
{{% tab header="macOS (Apple Silicon)" lang="en" %}}
To install Toolbox as a binary on macOS (Apple Silicon):
```sh
# see releases page for other versions
export VERSION=0.28.0
curl -L -o toolbox https://storage.googleapis.com/genai-toolbox/v$VERSION/darwin/arm64/toolbox
chmod +x toolbox
```
{{% /tab %}}
{{% tab header="macOS (Intel)" lang="en" %}}
To install Toolbox as a binary on macOS (Intel):
```sh
# see releases page for other versions
export VERSION=0.28.0
curl -L -o toolbox https://storage.googleapis.com/genai-toolbox/v$VERSION/darwin/amd64/toolbox
chmod +x toolbox
```
{{% /tab %}}
{{% tab header="Windows (Command Prompt)" lang="en" %}}
To install Toolbox as a binary on Windows (Command Prompt):
```cmd
:: see releases page for other versions
set VERSION=0.28.0
curl -o toolbox.exe "https://storage.googleapis.com/genai-toolbox/v%VERSION%/windows/amd64/toolbox.exe"
```
{{% /tab %}}
{{% tab header="Windows (PowerShell)" lang="en" %}}
To install Toolbox as a binary on Windows (PowerShell):
```powershell
# see releases page for other versions
$VERSION = "0.28.0"
curl.exe -o toolbox.exe "https://storage.googleapis.com/genai-toolbox/v$VERSION/windows/amd64/toolbox.exe"
```
{{% /tab %}}
{{< /tabpane >}}
{{% /tab %}}
{{% tab header="Container image" lang="en" %}}
You can also install Toolbox as a container:
```sh
# see releases page for other versions
export VERSION=0.28.0
docker pull us-central1-docker.pkg.dev/database-toolbox/toolbox/toolbox:$VERSION
```
{{% /tab %}}
{{% tab header="Homebrew" lang="en" %}}
To install Toolbox using Homebrew on macOS or Linux:
```sh
brew install mcp-toolbox
```
{{% /tab %}}
{{% tab header="Compile from source" lang="en" %}}
To install from source, ensure you have the latest version of
[Go installed](https://go.dev/doc/install), and then run the following command:
```sh
go install github.com/googleapis/genai-toolbox@v0.28.0
```
{{% /tab %}}
{{< /tabpane >}}
### Running the server
[Configure](../configuration/_index.md) a `tools.yaml` to define your tools, and then
execute `toolbox` to start the server:
```sh
./toolbox --tools-file "tools.yaml"
```
{{< notice note >}}
Toolbox enables dynamic reloading by default. To disable, use the
`--disable-reload` flag.
{{< /notice >}}
#### Launching Toolbox UI
To launch Toolbox's interactive UI, use the `--ui` flag. This allows you to test
tools and toolsets with features such as authorized parameters. To learn more,
visit [Toolbox UI](../../user-guide/configuration/toolbox-ui/index.md).
```sh
./toolbox --ui
```
#### Homebrew Users
If you installed Toolbox using Homebrew, the `toolbox` binary is available in
your system path. You can start the server with the same command:
```sh
toolbox --tools-file "tools.yaml"
```
You can use `toolbox help` for a full list of flags! To stop the server, send a
terminate signal (`ctrl+c` on most platforms).
For more detailed documentation on deploying to different environments, check
out the resources in the [Deploy section](../../user-guide/deploy-to/_index.md)
### Integrating your application
Once your server is up and running, you can load the tools into your
application. The client SDKs below show how to do this for various frameworks:
#### Python
{{< tabpane text=true persist=header >}}
{{% tab header="Core" lang="en" %}}
Once you've installed the [Toolbox Core
SDK](https://pypi.org/project/toolbox-core/), you can load
tools:
{{< highlight python >}}
from toolbox_core import ToolboxClient
# update the url to point to your server
async with ToolboxClient("http://127.0.0.1:5000") as client:
    # these tools can be passed to your application!
    tools = await client.load_toolset("toolset_name")
{{< /highlight >}}
For more detailed instructions on using the Toolbox Core SDK, see the
[project's
README](https://github.com/googleapis/mcp-toolbox-sdk-python/blob/main/packages/toolbox-core/README.md).
{{% /tab %}}
{{% tab header="LangChain" lang="en" %}}
Once you've installed the [Toolbox LangChain
SDK](https://pypi.org/project/toolbox-langchain/), you can load
tools:
{{< highlight python >}}
from toolbox_langchain import ToolboxClient
# update the url to point to your server
async with ToolboxClient("http://127.0.0.1:5000") as client:
    # these tools can be passed to your application!
    tools = client.load_toolset()
{{< /highlight >}}
For more detailed instructions on using the Toolbox LangChain SDK, see the
[project's
README](https://github.com/googleapis/mcp-toolbox-sdk-python/blob/main/packages/toolbox-langchain/README.md).
{{% /tab %}}
{{% tab header="Llamaindex" lang="en" %}}
Once you've installed the [Toolbox Llamaindex
SDK](https://github.com/googleapis/genai-toolbox-llamaindex-python), you can load
tools:
{{< highlight python >}}
from toolbox_llamaindex import ToolboxClient
# update the url to point to your server
async with ToolboxClient("http://127.0.0.1:5000") as client:
    # these tools can be passed to your application
    tools = client.load_toolset()
{{< /highlight >}}
For more detailed instructions on using the Toolbox Llamaindex SDK, see the
[project's
README](https://github.com/googleapis/genai-toolbox-llamaindex-python/blob/main/README.md).
{{% /tab %}}
{{< /tabpane >}}
#### Javascript/Typescript
Once you've installed the [Toolbox Core
SDK](https://www.npmjs.com/package/@toolbox-sdk/core), you can load
tools:
{{< tabpane text=true persist=header >}}
{{% tab header="Core" lang="en" %}}
{{< highlight javascript >}}
import { ToolboxClient } from '@toolbox-sdk/core';
// update the url to point to your server
const URL = 'http://127.0.0.1:5000';
let client = new ToolboxClient(URL);
// these tools can be passed to your application!
const toolboxTools = await client.loadToolset('toolsetName');
{{< /highlight >}}
For more detailed instructions on using the Toolbox Core SDK, see the
[project's
README](https://github.com/googleapis/mcp-toolbox-sdk-js/blob/main/packages/toolbox-core/README.md).
{{% /tab %}}
{{% tab header="LangChain/Langraph" lang="en" %}}
{{< highlight javascript >}}
import { ToolboxClient } from '@toolbox-sdk/core';
import { tool } from '@langchain/core/tools';
// update the url to point to your server
const URL = 'http://127.0.0.1:5000';
let client = new ToolboxClient(URL);
// these tools can be passed to your application!
const toolboxTools = await client.loadToolset('toolsetName');
// Define the basics of the tool: name, description, schema and core logic
const getTool = (toolboxTool) => tool(toolboxTool, {
    name: toolboxTool.getName(),
    description: toolboxTool.getDescription(),
    schema: toolboxTool.getParamSchema()
});
// Use these tools in your LangChain/LangGraph applications
const tools = toolboxTools.map(getTool);
{{< /highlight >}}
For more detailed instructions on using the Toolbox Core SDK, see the
[project's
README](https://github.com/googleapis/mcp-toolbox-sdk-js/blob/main/packages/toolbox-core/README.md).
{{% /tab %}}
{{% tab header="Genkit" lang="en" %}}
{{< highlight javascript >}}
import { ToolboxClient } from '@toolbox-sdk/core';
import { genkit } from 'genkit';
import { googleAI } from '@genkit-ai/googleai';
// Initialise genkit
const ai = genkit({
    plugins: [
        googleAI({
            apiKey: process.env.GEMINI_API_KEY || process.env.GOOGLE_API_KEY
        })
    ],
    model: googleAI.model('gemini-2.0-flash'),
});
// update the url to point to your server
const URL = 'http://127.0.0.1:5000';
let client = new ToolboxClient(URL);
// these tools can be passed to your application!
const toolboxTools = await client.loadToolset('toolsetName');
// Define the basics of the tool: name, description, schema and core logic
const getTool = (toolboxTool) => ai.defineTool({
    name: toolboxTool.getName(),
    description: toolboxTool.getDescription(),
    schema: toolboxTool.getParamSchema()
}, toolboxTool);
// Use these tools in your Genkit applications
const tools = toolboxTools.map(getTool);
{{< /highlight >}}
For more detailed instructions on using the Toolbox Core SDK, see the
[project's
README](https://github.com/googleapis/mcp-toolbox-sdk-js/blob/main/packages/toolbox-core/README.md).
{{% /tab %}}
{{% tab header="LlamaIndex" lang="en" %}}
{{< highlight javascript >}}
import { ToolboxClient } from '@toolbox-sdk/core';
import { tool } from "llamaindex";
// update the url to point to your server
const URL = 'http://127.0.0.1:5000';
let client = new ToolboxClient(URL);
// these tools can be passed to your application!
const toolboxTools = await client.loadToolset('toolsetName');
// Define the basics of the tool: name, description, schema and core logic
const getTool = (toolboxTool) => tool({
    name: toolboxTool.getName(),
    description: toolboxTool.getDescription(),
    parameters: toolboxTool.getParamSchema(),
    execute: toolboxTool
});
// Use these tools in your LlamaIndex applications
const tools = toolboxTools.map(getTool);
{{< /highlight >}}
For more detailed instructions on using the Toolbox Core SDK, see the
[project's
README](https://github.com/googleapis/mcp-toolbox-sdk-js/blob/main/packages/toolbox-core/README.md).
{{% /tab %}}
{{% tab header="ADK TS" lang="en" %}}
{{< highlight javascript >}}
import { ToolboxClient } from '@toolbox-sdk/adk';
// Replace with the actual URL where your Toolbox service is running
const URL = 'http://127.0.0.1:5000';
let client = new ToolboxClient(URL);
const tools = await client.loadToolset();
// Use the client and tools as per requirement
{{< /highlight >}}
For detailed samples on using the Toolbox JS SDK with ADK JS, see the [project's
README](https://github.com/googleapis/mcp-toolbox-sdk-js/tree/main/packages/toolbox-adk/README.md).
{{% /tab %}}
{{< /tabpane >}}
#### Go
{{< tabpane text=true persist=header >}}
{{% tab header="Core" lang="en" %}}
Once you've installed the [Go Core SDK](https://pkg.go.dev/github.com/googleapis/mcp-toolbox-sdk-go/core), you can load
tools:
{{< highlight go >}}
package main

import (
    "context"
    "log"

    "github.com/googleapis/mcp-toolbox-sdk-go/core"
)

func main() {
    // update the url to point to your server
    URL := "http://127.0.0.1:5000"
    ctx := context.Background()

    client, err := core.NewToolboxClient(URL)
    if err != nil {
        log.Fatalf("Failed to create Toolbox client: %v", err)
    }

    // Framework agnostic tools
    tools, err := client.LoadToolset("toolsetName", ctx)
    if err != nil {
        log.Fatalf("Failed to load tools: %v", err)
    }
    _ = tools
}
{{< /highlight >}}
{{% /tab %}}
{{% tab header="LangChain Go" lang="en" %}}
Once you've installed the [Go Core SDK](https://pkg.go.dev/github.com/googleapis/mcp-toolbox-sdk-go/core), you can load
tools:
{{< highlight go >}}
package main

import (
    "context"
    "encoding/json"
    "log"

    "github.com/googleapis/mcp-toolbox-sdk-go/core"
    "github.com/tmc/langchaingo/llms"
)

func main() {
    // Make sure to add the error checks
    // update the url to point to your server
    URL := "http://127.0.0.1:5000"
    ctx := context.Background()

    client, err := core.NewToolboxClient(URL)
    if err != nil {
        log.Fatalf("Failed to create Toolbox client: %v", err)
    }

    // Framework agnostic tool
    tool, err := client.LoadTool("toolName", ctx)
    if err != nil {
        log.Fatalf("Failed to load tool: %v", err)
    }

    // Fetch the tool's input schema
    inputschema, err := tool.InputSchema()
    if err != nil {
        log.Fatalf("Failed to fetch inputSchema: %v", err)
    }

    var paramsSchema map[string]any
    _ = json.Unmarshal(inputschema, &paramsSchema)

    // Use this tool with LangChainGo
    langChainTool := llms.Tool{
        Type: "function",
        Function: &llms.FunctionDefinition{
            Name:        tool.Name(),
            Description: tool.Description(),
            Parameters:  paramsSchema,
        },
    }
    _ = langChainTool
}
{{< /highlight >}}
For end-to-end samples on using the Toolbox Go SDK with LangChain Go, see the [module's samples](https://github.com/googleapis/mcp-toolbox-sdk-go/tree/main/core/samples).
{{% /tab %}}
{{% tab header="Genkit Go" lang="en" %}}
Once you've installed the [Go TBGenkit SDK](https://pkg.go.dev/github.com/googleapis/mcp-toolbox-sdk-go/tbgenkit), you can load
tools:
{{< highlight go >}}
package main

import (
    "context"
    "log"

    "github.com/firebase/genkit/go/genkit"
    "github.com/googleapis/mcp-toolbox-sdk-go/core"
    "github.com/googleapis/mcp-toolbox-sdk-go/tbgenkit"
)

func main() {
    // Make sure to add the error checks
    // Update the url to point to your server
    URL := "http://127.0.0.1:5000"
    ctx := context.Background()

    g, err := genkit.Init(ctx)
    if err != nil {
        log.Fatalf("Failed to initialize Genkit: %v", err)
    }

    client, err := core.NewToolboxClient(URL)
    if err != nil {
        log.Fatalf("Failed to create Toolbox client: %v", err)
    }

    // Framework agnostic tool
    tool, err := client.LoadTool("toolName", ctx)
    if err != nil {
        log.Fatalf("Failed to load tool: %v", err)
    }

    // Convert the tool using the tbgenkit package
    // Use this tool with Genkit Go
    genkitTool, err := tbgenkit.ToGenkitTool(tool, g)
    if err != nil {
        log.Fatalf("Failed to convert tool: %v\n", err)
    }
    _ = genkitTool
}
{{< /highlight >}}
For end-to-end samples on using the Toolbox Go SDK with Genkit Go, see the [module's samples](https://github.com/googleapis/mcp-toolbox-sdk-go/tree/main/tbgenkit/samples).
{{% /tab %}}
{{% tab header="Go GenAI" lang="en" %}}
Once you've installed the [Go Core SDK](https://pkg.go.dev/github.com/googleapis/mcp-toolbox-sdk-go/core), you can load
tools:
{{< highlight go >}}
package main

import (
    "context"
    "encoding/json"
    "log"

    "github.com/googleapis/mcp-toolbox-sdk-go/core"
    "google.golang.org/genai"
)

func main() {
    // Make sure to add the error checks
    // Update the url to point to your server
    URL := "http://127.0.0.1:5000"
    ctx := context.Background()

    client, err := core.NewToolboxClient(URL)
    if err != nil {
        log.Fatalf("Failed to create Toolbox client: %v", err)
    }

    // Framework agnostic tool
    tool, err := client.LoadTool("toolName", ctx)
    if err != nil {
        log.Fatalf("Failed to load tool: %v", err)
    }

    // Fetch the tool's input schema
    inputschema, err := tool.InputSchema()
    if err != nil {
        log.Fatalf("Failed to fetch inputSchema: %v", err)
    }

    var schema *genai.Schema
    _ = json.Unmarshal(inputschema, &schema)

    funcDeclaration := &genai.FunctionDeclaration{
        Name:        tool.Name(),
        Description: tool.Description(),
        Parameters:  schema,
    }

    // Use this tool with Go GenAI
    genAITool := &genai.Tool{
        FunctionDeclarations: []*genai.FunctionDeclaration{funcDeclaration},
    }
    _ = genAITool
}
{{< /highlight >}}
For end-to-end samples on using the Toolbox Go SDK with Go GenAI, see the [module's samples](https://github.com/googleapis/mcp-toolbox-sdk-go/tree/main/core/samples).
{{% /tab %}}
{{% tab header="OpenAI Go" lang="en" %}}
Once you've installed the [Go Core SDK](https://pkg.go.dev/github.com/googleapis/mcp-toolbox-sdk-go/core), you can load
tools:
{{< highlight go >}}
package main

import (
    "context"
    "encoding/json"
    "log"

    "github.com/googleapis/mcp-toolbox-sdk-go/core"
    openai "github.com/openai/openai-go"
)

func main() {
    // Make sure to add the error checks
    // Update the url to point to your server
    URL := "http://127.0.0.1:5000"
    ctx := context.Background()

    client, err := core.NewToolboxClient(URL)
    if err != nil {
        log.Fatalf("Failed to create Toolbox client: %v", err)
    }

    // Framework agnostic tool
    tool, err := client.LoadTool("toolName", ctx)
    if err != nil {
        log.Fatalf("Failed to load tool: %v", err)
    }

    // Fetch the tool's input schema
    inputschema, err := tool.InputSchema()
    if err != nil {
        log.Fatalf("Failed to fetch inputSchema: %v", err)
    }

    var paramsSchema openai.FunctionParameters
    _ = json.Unmarshal(inputschema, &paramsSchema)

    // Use this tool with OpenAI Go
    openAITool := openai.ChatCompletionToolParam{
        Function: openai.FunctionDefinitionParam{
            Name:        tool.Name(),
            Description: openai.String(tool.Description()),
            Parameters:  paramsSchema,
        },
    }
    _ = openAITool
}
{{< /highlight >}}
For end-to-end samples on using the Toolbox Go SDK with OpenAI Go, see the [module's samples](https://github.com/googleapis/mcp-toolbox-sdk-go/tree/main/core/samples).
{{% /tab %}}
{{% tab header="ADK Go" lang="en" %}}
Once you've installed the [Go TBADK SDK](https://pkg.go.dev/github.com/googleapis/mcp-toolbox-sdk-go/tbadk), you can load
tools:
{{< highlight go >}}
package main

import (
    "context"
    "fmt"

    "github.com/googleapis/mcp-toolbox-sdk-go/tbadk"
)

func main() {
    // Make sure to add the error checks
    // Update the url to point to your server
    URL := "http://127.0.0.1:5000"
    ctx := context.Background()

    client, err := tbadk.NewToolboxClient(URL)
    if err != nil {
        fmt.Println("Could not start Toolbox Client", err)
        return
    }

    // Use this tool with ADK Go
    tool, err := client.LoadTool("toolName", ctx)
    if err != nil {
        fmt.Println("Could not load Toolbox Tool", err)
        return
    }
    _ = tool
}
{{< /highlight >}}
For end-to-end samples on using the Toolbox Go SDK with ADK Go, see the [module's samples](https://github.com/googleapis/mcp-toolbox-sdk-go/tree/main/tbadk/samples).
{{% /tab %}}
{{< /tabpane >}}
For more detailed instructions on using the Toolbox Go SDK, see the
[project's
README](https://github.com/googleapis/mcp-toolbox-sdk-go/blob/main/core/README.md).
========================================================================
## Getting Started
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Getting Started
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/getting-started/
**Description:** Understand the core concepts of MCP Toolbox, explore integration strategies, and learn how to architect your AI agent connections.
Before you spin up your server and start writing code, it is helpful to understand the different ways you can utilize the Toolbox within your architecture.
This guide breaks down the core methodologies for using MCP Toolbox, how to think about your tool configurations, and the different ways your applications can connect to it.
## Prebuilt vs. Custom Configs
MCP Toolbox provides two main approaches for tools: **prebuilt** and **custom**.
[**Prebuilt tools**](../configuration/prebuilt-configs/_index.md) are ready to use out of
the box. For example, a tool like
[`postgres-execute-sql`](../../integrations/postgres/postgres-execute-sql.md) has fixed parameters
and always works the same way, allowing the agent to execute arbitrary SQL.
While these are convenient, they are typically only safe when a developer is in
the loop (e.g., during prototyping, development, or debugging).
For application use cases, you need to be wary of security risks such as prompt
injection or data poisoning. Allowing an LLM to execute arbitrary queries in
production is highly dangerous.
To secure your application, you should [**use custom tools**](../configuration/tools/_index.md) to suit your
specific schema and application needs. Creating a custom tool restricts the
agent's capabilities to only what is necessary. For example, you can use the
[`postgres-sql`](../../integrations/postgres/postgres-sql.md) tool to define a specific action. This
typically involves:
* **Prepared Statements:** Writing a SQL query ahead of time and letting the
agent only fill in specific [basic parameters](../configuration/tools/_index.md#basic-parameters).
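As a sketch, such a custom tool might look like the following (the tool, table, and column names here are hypothetical; the structure follows the v2 `kind: tools` configuration format):

```yaml
kind: tools
name: search-orders-by-status
type: postgres-sql
source: my-pg-source
description: Look up orders with a given status.
parameters:
  - name: status
    type: string
    description: The order status to filter by (e.g. "delivered").
statement: SELECT id, customer, total FROM orders WHERE status = $1;
```

Because the `statement` is written ahead of time, the agent can only supply the `status` value; it can never change which table or columns the query touches.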
---
## Build-Time vs. Runtime Implementation
A key architectural benefit of the MCP Toolbox is flexibility in *how* and *when* your AI clients learn about their available tools. Understanding this distinction helps you choose the right integration path.
### Build-Time
In this model, the available tools and their schemas are established when the client initializes.
* **How it works:** The client launches or connects to the MCP Toolbox server, reads the available tools once, and keeps them static for the session.
* **Best for:** **IDEs and CLI tools**
### Runtime
In this model, your application dynamically requests the latest tools from the Toolbox server on the fly.
* **How it works:** Your application code actively calls the server at runtime to fetch the latest toolsets and their schemas.
* **Best for:** **AI Agents and Custom Applications**.
---
## Usage Methodologies: How to Connect
Being built on the Model Context Protocol (MCP), MCP Toolbox is framework-agnostic. You can connect to it in three main ways:
* **IDE Integrations:** Connect your local Toolbox server directly to MCP-compatible development environments.
* **CLI Tools:** Use command-line interfaces like the Gemini CLI to interact with your databases using natural language directly from your terminal.
* **Application Integration (Client SDKs):** If you are building custom AI agents, you can use our Client SDKs to pull tools directly into your application code. We provide native support for major orchestration frameworks including LangChain, LlamaIndex, Genkit, and more across Python, JavaScript/TypeScript, and Go.
---
## Popular Quickstarts
Ready to dive in? Here are some of the most popular paths to getting your first agent up and running:
* [**Python SDK Quickstart:**](../../build-with-mcp-toolbox/local_quickstart.md) Build a custom agent from scratch using our native Python client. This is the go-to choice for developers wanting full control over their application logic and orchestration.
* [**MCP Client Quickstart:**](../../build-with-mcp-toolbox/mcp_quickstart/_index.md) Plug your databases directly into the MCP ecosystem. Perfect for a setup that works instantly with existing MCP-compatible clients and various IDEs.
{{< notice tip >}}
These are just a few starting points. For a complete list of tutorials, language-specific samples (Go, JS/TS, etc.), and advanced usage, explore the full [Build with MCP Toolbox section](../../build-with-mcp-toolbox/_index.md).
{{< /notice >}}
## Next Steps
Now that you understand the high-level concepts, it's time to build!
Learn how to [configure your custom MCP Toolbox Server](../configuration/_index.md).
========================================================================
## Configuration
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Configuration
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/configuration/
**Description:** How to configure Toolbox's tools.yaml file.
The primary way to configure Toolbox is through the `tools.yaml` file. If you
have multiple files, you can tell Toolbox which one to load with the
`--tools-file tools.yaml` flag.
### Using Environment Variables
To avoid hardcoding secret fields such as passwords, usernames, and API keys,
you can use environment variables instead with the format `${ENV_NAME}`.
```yaml
user: ${USER_NAME}
password: ${PASSWORD}
```
A default value can be specified like `${ENV_NAME:default}`.
```yaml
port: ${DB_PORT:3306}
```
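The substitution semantics can be illustrated with a short sketch. This mirrors the documented behavior only and is not Toolbox's actual implementation; in this illustration, unset variables without a default resolve to an empty string.

```python
import os
import re

def substitute_env(value: str) -> str:
    """Resolve ${NAME} and ${NAME:default} placeholders (illustration only)."""
    def repl(match: re.Match) -> str:
        # Split "NAME:default" into the variable name and its fallback
        name, _, default = match.group(1).partition(":")
        return os.environ.get(name, default)
    return re.sub(r"\$\{([^}]+)\}", repl, value)

os.environ["USER_NAME"] = "alice"
os.environ.pop("DB_PORT", None)
print(substitute_env("user: ${USER_NAME}"))    # user: alice
print(substitute_env("port: ${DB_PORT:3306}")) # port: 3306
```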
### Sources
The `sources` section of your `tools.yaml` defines what data sources your
Toolbox should have access to. Most tools will have at least one source to
execute against.
```yaml
kind: sources
name: my-pg-source
type: postgres
host: 127.0.0.1
port: 5432
database: toolbox_db
user: ${USER_NAME}
password: ${PASSWORD}
```
For more details on configuring different types of sources, see the
[Sources](./sources/_index.md) section.
### Tools
The `tools` section of your `tools.yaml` defines the actions your agent can
take: what type of tool it is, which source(s) it affects, what parameters it
uses, etc.
```yaml
kind: tools
name: search-hotels-by-name
type: postgres-sql
source: my-pg-source
description: Search for hotels based on name.
parameters:
  - name: name
    type: string
    description: The name of the hotel.
statement: SELECT * FROM hotels WHERE name ILIKE '%' || $1 || '%';
```
For more details on configuring different types of tools, see the
[Tools](./tools/_index.md) section.
### Toolsets
The `toolsets` section of your `tools.yaml` allows you to define groups of tools
that you want to be able to load together. This can be useful for defining
different sets for different agents or different applications.
```yaml
kind: toolsets
name: my_first_toolset
tools:
- my_first_tool
- my_second_tool
---
kind: toolsets
name: my_second_toolset
tools:
- my_second_tool
- my_third_tool
```
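Conceptually, a toolset is just a named group of tool names, and a tool may belong to several toolsets at once (as `my_second_tool` does above). A minimal sketch of the idea (illustrative only, not part of any SDK):

```python
# Toolsets map a name to the list of tools an agent loads together.
# These names mirror the YAML example above.
toolsets = {
    "my_first_toolset": ["my_first_tool", "my_second_tool"],
    "my_second_toolset": ["my_second_tool", "my_third_tool"],
}

def tools_for(toolset_name: str) -> list[str]:
    """Return the tool names that loading a given toolset would expose."""
    try:
        return toolsets[toolset_name]
    except KeyError:
        raise ValueError(f"unknown toolset: {toolset_name}") from None

# The same tool can appear in several toolsets.
assert "my_second_tool" in tools_for("my_first_toolset")
assert "my_second_tool" in tools_for("my_second_toolset")
```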
### Prompts
The `prompts` section of your `tools.yaml` defines the templates containing
structured messages and instructions for interacting with language models.
```yaml
kind: prompts
name: code_review
description: "Asks the LLM to analyze code quality and suggest improvements."
messages:
  - content: "Please review the following code for quality, correctness, and potential improvements: \n\n{{.code}}"
arguments:
  - name: "code"
    description: "The code to review"
```
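At render time, the `{{.argument}}` placeholders in `content` are filled from the declared `arguments`. The syntax resembles Go templates; the following Python sketch only approximates the substitution for illustration:

```python
import re

def render_prompt(content: str, arguments: dict[str, str]) -> str:
    """Replace {{.name}} placeholders with the supplied argument values."""
    def replace(match: re.Match) -> str:
        name = match.group(1)
        if name not in arguments:
            raise ValueError(f"missing prompt argument: {name}")
        return arguments[name]
    return re.sub(r"\{\{\.(\w+)\}\}", replace, content)

template = "Please review the following code: \n\n{{.code}}"
print(render_prompt(template, {"code": "def add(a, b): return a + b"}))
```

A missing argument raises an error rather than rendering an empty placeholder, which surfaces configuration mistakes early.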
For more details on configuring different types of prompts, see the
[Prompts](./prompts/_index.md) section.
========================================================================
## Authentication
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Configuration > Authentication
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/configuration/authentication/
**Description:** AuthServices represent services that handle authentication and authorization.
AuthServices represent services that handle authentication and authorization.
They are primarily used by [Tools](../tools/_index.md) in two different ways:
- [**Authorized Invocation**][auth-invoke] is when a tool
is validated by the auth service before the call can be invoked. Toolbox
will reject any calls that fail to validate or have an invalid token.
- [**Authenticated Parameters**][auth-params] replace the value of a parameter
with a field from an [OIDC][openid-claims] claim. Toolbox will automatically
resolve the ID token provided by the client and replace the parameter in the
tool call.
[openid-claims]: https://openid.net/specs/openid-connect-core-1_0.html#StandardClaims
[auth-invoke]: ../tools/_index.md#authorized-invocations
[auth-params]: ../tools/_index.md#authenticated-parameters
## Example
The following configurations are placed at the top level of a `tools.yaml` file.
{{< notice tip >}}
If you are accessing Toolbox with multiple applications, each
application should register its own Client ID, even if they use the same
"type" of auth provider.
{{< /notice >}}
```yaml
kind: authServices
name: my_auth_app_1
type: google
clientId: ${YOUR_CLIENT_ID_1}
---
kind: authServices
name: my_auth_app_2
type: google
clientId: ${YOUR_CLIENT_ID_2}
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
After you've configured an `authService`, you'll need to reference it in the
configuration for each tool that should use it:
- **Authorized Invocations** for authorizing a tool call, [use the
`authRequired` field in a tool config][auth-invoke]
- **Authenticated Parameters** for using the value from an OIDC claim, [use the
`authServices` field in a parameter config][auth-params]
## Specifying ID Tokens from Clients
After [configuring](#example) your `authServices` section, use a Toolbox SDK to
add your ID tokens to the header of a Tool invocation request. When specifying a
token, you provide a function that returns the ID token. This function is called
each time the tool is invoked, which allows you to cache and refresh the ID
token as needed.
The primary way to provide these getters is the `auth_token_getters`
parameter when loading tools, or the `add_auth_token_getter()` /
`add_auth_token_getters()` methods on a loaded tool object.
### Specifying tokens during load
#### Python
Use the [Python SDK](https://github.com/googleapis/mcp-toolbox-sdk-python/tree/main).
{{< tabpane persist=header >}}
{{< tab header="Core" lang="Python" >}}
import asyncio
from toolbox_core import ToolboxClient

async def get_auth_token():
    # ... Logic to retrieve ID token (e.g., from local storage, OAuth flow)
    # This example just returns a placeholder. Replace with your actual token retrieval.
    return "YOUR_ID_TOKEN"  # Placeholder

async def main():
    async with ToolboxClient("http://127.0.0.1:5000") as toolbox:
        auth_tool = await toolbox.load_tool(
            "get_sensitive_data",
            auth_token_getters={"my_auth_app_1": get_auth_token}
        )
        result = await auth_tool(param="value")
        print(result)

if __name__ == "__main__":
    asyncio.run(main())
{{< /tab >}}
{{< tab header="LangChain" lang="Python" >}}
import asyncio
from toolbox_langchain import ToolboxClient

async def get_auth_token():
    # ... Logic to retrieve ID token (e.g., from local storage, OAuth flow)
    # This example just returns a placeholder. Replace with your actual token retrieval.
    return "YOUR_ID_TOKEN"  # Placeholder

async def main():
    toolbox = ToolboxClient("http://127.0.0.1:5000")
    auth_tool = await toolbox.aload_tool(
        "get_sensitive_data",
        auth_token_getters={"my_auth_app_1": get_auth_token}
    )
    result = await auth_tool.ainvoke({"param": "value"})
    print(result)

if __name__ == "__main__":
    asyncio.run(main())
{{< /tab >}}
{{< tab header="Llamaindex" lang="Python" >}}
import asyncio
from toolbox_llamaindex import ToolboxClient

async def get_auth_token():
    # ... Logic to retrieve ID token (e.g., from local storage, OAuth flow)
    # This example just returns a placeholder. Replace with your actual token retrieval.
    return "YOUR_ID_TOKEN"  # Placeholder

async def main():
    toolbox = ToolboxClient("http://127.0.0.1:5000")
    auth_tool = await toolbox.aload_tool(
        "get_sensitive_data",
        auth_token_getters={"my_auth_app_1": get_auth_token}
    )
    # result = await auth_tool.acall(param="value")
    # print(result.content)

if __name__ == "__main__":
    asyncio.run(main())
{{< /tab >}}
{{< /tabpane >}}
#### Javascript/Typescript
Use the [JS SDK](https://github.com/googleapis/mcp-toolbox-sdk-js/tree/main).
```javascript
import { ToolboxClient } from '@toolbox-sdk/core';

async function getAuthToken() {
  // ... Logic to retrieve ID token (e.g., from local storage, OAuth flow)
  // This example just returns a placeholder. Replace with your actual token retrieval.
  return "YOUR_ID_TOKEN"; // Placeholder
}

const URL = 'http://127.0.0.1:5000';
let client = new ToolboxClient(URL);

const authTool = await client.loadTool("my-tool", {"my_auth_app_1": getAuthToken});
const result = await authTool({param: "value"});
console.log(result);
```
#### Go
Use the [Go SDK](https://github.com/googleapis/mcp-toolbox-sdk-go/tree/main).
```go
import (
	"context"
	"fmt"
	"log"

	"github.com/googleapis/mcp-toolbox-sdk-go/core"
)

func getAuthToken() string {
	// ... Logic to retrieve ID token (e.g., from local storage, OAuth flow)
	// This example just returns a placeholder. Replace with your actual token retrieval.
	return "YOUR_ID_TOKEN" // Placeholder
}

func main() {
	ctx := context.Background()
	URL := "http://127.0.0.1:5000"
	client, err := core.NewToolboxClient(URL)
	if err != nil {
		log.Fatalf("Failed to create Toolbox client: %v", err)
	}
	dynamicTokenSource := core.NewCustomTokenSource(getAuthToken)
	authTool, err := client.LoadTool(
		"my-tool",
		ctx,
		core.WithAuthTokenSource("my_auth_app_1", dynamicTokenSource))
	if err != nil {
		log.Fatalf("Failed to load tool: %v", err)
	}
	inputs := map[string]any{"param": "value"}
	result, err := authTool.Invoke(ctx, inputs)
	if err != nil {
		log.Fatalf("Failed to invoke tool: %v", err)
	}
	fmt.Println(result)
}
```
### Specifying tokens for existing tools
#### Python
Use the [Python
SDK](https://github.com/googleapis/mcp-toolbox-sdk-python/tree/main).
{{< tabpane persist=header >}}
{{< tab header="Core" lang="Python" >}}
tools = await toolbox.load_toolset()
# for a single token
authorized_tool = tools[0].add_auth_token_getter("my_auth", get_auth_token)
# OR, if multiple tokens are needed
authorized_tool = tools[0].add_auth_token_getters({
"my_auth1": get_auth1_token,
"my_auth2": get_auth2_token,
})
{{< /tab >}}
{{< tab header="LangChain" lang="Python" >}}
tools = toolbox.load_toolset()
# for a single token
authorized_tool = tools[0].add_auth_token_getter("my_auth", get_auth_token)
# OR, if multiple tokens are needed
authorized_tool = tools[0].add_auth_token_getters({
"my_auth1": get_auth1_token,
"my_auth2": get_auth2_token,
})
{{< /tab >}}
{{< tab header="Llamaindex" lang="Python" >}}
tools = toolbox.load_toolset()
# for a single token
authorized_tool = tools[0].add_auth_token_getter("my_auth", get_auth_token)
# OR, if multiple tokens are needed
authorized_tool = tools[0].add_auth_token_getters({
"my_auth1": get_auth1_token,
"my_auth2": get_auth2_token,
})
{{< /tab >}}
{{< /tabpane >}}
#### Javascript/Typescript
Use the [JS SDK](https://github.com/googleapis/mcp-toolbox-sdk-js/tree/main).
```javascript
const URL = 'http://127.0.0.1:5000';
let client = new ToolboxClient(URL);

let tool = await client.loadTool("my-tool");

// for a single token
const authorizedTool = tool.addAuthTokenGetter("my_auth", getAuthToken);

// OR, if multiple tokens are needed
const multiAuthTool = tool.addAuthTokenGetters({
  "my_auth_1": getAuthToken1,
  "my_auth_2": getAuthToken2,
});
```
#### Go
Use the [Go SDK](https://github.com/googleapis/mcp-toolbox-sdk-go/tree/main).
```go
import (
	"context"
	"log"

	"github.com/googleapis/mcp-toolbox-sdk-go/core"
)

func main() {
	ctx := context.Background()
	URL := "http://127.0.0.1:5000"
	client, err := core.NewToolboxClient(URL)
	if err != nil {
		log.Fatalf("Failed to create Toolbox client: %v", err)
	}
	tool, err := client.LoadTool("my-tool", ctx)
	if err != nil {
		log.Fatalf("Failed to load tool: %v", err)
	}
	dynamicTokenSource1 := core.NewCustomTokenSource(getAuthToken1)
	dynamicTokenSource2 := core.NewCustomTokenSource(getAuthToken2)

	// For a single token
	authTool, err := tool.ToolFrom(
		core.WithAuthTokenSource("my-auth", dynamicTokenSource1),
	)

	// OR, if multiple tokens are needed
	multiAuthTool, err := tool.ToolFrom(
		core.WithAuthTokenSource("my-auth_1", dynamicTokenSource1),
		core.WithAuthTokenSource("my-auth_2", dynamicTokenSource2),
	)
}
```
## Kinds of Auth Services
========================================================================
## Google Sign-In
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Configuration > Authentication > Google Sign-In
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/configuration/authentication/google/
**Description:** Use Google Sign-In for the OAuth 2.0 flow and token lifecycle.
## Getting Started
Google Sign-In manages the OAuth 2.0 flow and token lifecycle. To integrate the
Google Sign-In workflow into your web app, [follow this guide][gsi-setup].
After setting up the Google Sign-In workflow, you should have registered your
application and retrieved a [Client ID][client-id]. Configure your auth service
with this `Client ID`.
[gsi-setup]: https://developers.google.com/identity/sign-in/web/sign-in
[client-id]: https://developers.google.com/identity/sign-in/web/sign-in#create_authorization_credentials
## Behavior
### Authorized Invocations
When using [Authorized Invocations][auth-invoke], a tool call is
considered authorized if the request carries a valid OAuth 2.0 token that
matches the Client ID.
[auth-invoke]: ../tools/_index.md#authorized-invocations
### Authenticated Parameters
When using [Authenticated Parameters][auth-params], any [claim provided by the
ID token][provided-claims] can be used for the parameter.
[auth-params]: ../tools/_index.md#authenticated-parameters
[provided-claims]:
https://developers.google.com/identity/openid-connect/openid-connect#obtaininguserprofileinformation
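For intuition, the claims live in the base64url-encoded payload segment of a JWT ID token. A stdlib-only sketch of extracting them (for illustration with a toy token; in production always verify the token's signature rather than decoding it blindly):

```python
import base64
import json

def decode_claims(id_token: str) -> dict:
    """Decode the (unverified) claims payload of a JWT ID token."""
    payload_b64 = id_token.split(".")[1]
    # Restore the base64 padding that JWTs strip.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a toy token whose payload holds two claims.
header = base64.urlsafe_b64encode(b'{"alg":"none"}').rstrip(b"=").decode()
payload = base64.urlsafe_b64encode(
    b'{"email": "user@example.com", "sub": "1234567890"}'
).rstrip(b"=").decode()
toy_token = f"{header}.{payload}."

print(decode_claims(toy_token)["email"])  # user@example.com
```

An Authenticated Parameter bound to the `email` claim, for example, would receive `user@example.com` after Toolbox validates the real token.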
## Example
```yaml
kind: authServices
name: my-google-auth
type: google
clientId: ${YOUR_GOOGLE_CLIENT_ID}
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
## Reference
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|------------------------------------------------------------------|
| type | string | true | Must be "google". |
| clientId  | string   |     true     | Client ID assigned to your application when you registered it.   |
========================================================================
## Prebuilt Configs
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Configuration > Prebuilt Configs
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/configuration/prebuilt-configs/
**Description:** This page lists all the prebuilt configs available.
Prebuilt configs are reusable, pre-packaged toolsets that are designed to extend
the capabilities of agents. These configs are built to be generic and adaptable,
allowing developers to interact with and take action on databases.
See the [Connect from your IDE](../../connect-to/ides/_index.md) guides for
details on how to connect your AI tools (IDEs) to databases via Toolbox and MCP.
{{< notice tip >}}
You can now use `--prebuilt` alongside `--tools-file`, `--tools-files`, or
`--tools-folder` to combine prebuilt configs with custom tools.
You can also combine multiple prebuilt configs.
See [Usage Examples](../../../reference/cli.md#usage-examples).
{{< /notice >}}
## AlloyDB Postgres
* `--prebuilt` value: `alloydb-postgres`
* **Environment Variables:**
* `ALLOYDB_POSTGRES_PROJECT`: The GCP project ID.
* `ALLOYDB_POSTGRES_REGION`: The region of your AlloyDB instance.
* `ALLOYDB_POSTGRES_CLUSTER`: The ID of your AlloyDB cluster.
* `ALLOYDB_POSTGRES_INSTANCE`: The ID of your AlloyDB instance.
* `ALLOYDB_POSTGRES_DATABASE`: The name of the database to connect to.
* `ALLOYDB_POSTGRES_USER`: (Optional) The database username. Defaults to
IAM authentication if unspecified.
* `ALLOYDB_POSTGRES_PASSWORD`: (Optional) The password for the database
user. Defaults to IAM authentication if unspecified.
* `ALLOYDB_POSTGRES_IP_TYPE`: (Optional) The IP type i.e. "Public" or
"Private" (Default: Public).
* **Permissions:**
* **AlloyDB Client** (`roles/alloydb.client`) to connect to the instance.
* Database-level permissions (e.g., `SELECT`, `INSERT`) are required to
execute queries.
* **Tools:**
* `execute_sql`: Executes a SQL query.
* `list_tables`: Lists tables in the database.
* `list_active_queries`: Lists ongoing queries.
* `list_available_extensions`: Discover all PostgreSQL extensions available for installation.
* `list_installed_extensions`: List all installed PostgreSQL extensions.
* `long_running_transactions`: Identifies and lists database transactions that exceed a specified time limit.
* `list_locks`: Identifies all locks held by active processes.
* `replication_stats`: Lists each replica's process ID and sync state.
* `list_autovacuum_configurations`: Lists autovacuum configurations in the
database.
* `list_memory_configurations`: Lists memory-related configurations in the
database.
* `list_top_bloated_tables`: List top bloated tables in the database.
* `list_replication_slots`: Lists replication slots in the database.
* `list_invalid_indexes`: Lists invalid indexes in the database.
* `get_query_plan`: Generate the execution plan of a statement.
* `list_views`: Lists views in the database from pg_views with a default
limit of 50 rows. Returns schemaname, viewname and the ownername.
* `list_schemas`: Lists schemas in the database.
* `database_overview`: Fetches the current state of the PostgreSQL server.
* `list_triggers`: Lists triggers in the database.
* `list_indexes`: List available user indexes in a PostgreSQL database.
* `list_sequences`: List sequences in a PostgreSQL database.
* `list_query_stats`: Lists query statistics.
* `get_column_cardinality`: Gets column cardinality.
* `list_table_stats`: Lists table statistics.
* `list_publication_tables`: List publication tables in a PostgreSQL database.
* `list_tablespaces`: Lists tablespaces in the database.
* `list_pg_settings`: List configuration parameters for the PostgreSQL server.
* `list_database_stats`: Lists the key performance and activity statistics for
each database in the AlloyDB instance.
* `list_roles`: Lists all the user-created roles in the PostgreSQL database.
* `list_stored_procedure`: Lists stored procedures.
## AlloyDB Postgres Admin
* `--prebuilt` value: `alloydb-postgres-admin`
* **Permissions:**
* **AlloyDB Viewer** (`roles/alloydb.viewer`) is required for `list` and
`get` tools.
* **AlloyDB Admin** (`roles/alloydb.admin`) is required for `create` tools.
* **Tools:**
* `create_cluster`: Creates a new AlloyDB cluster.
* `list_clusters`: Lists all AlloyDB clusters in a project.
* `get_cluster`: Gets information about a specified AlloyDB cluster.
* `create_instance`: Creates a new AlloyDB instance within a cluster.
* `list_instances`: Lists all instances within an AlloyDB cluster.
* `get_instance`: Gets information about a specified AlloyDB instance.
* `create_user`: Creates a new database user in an AlloyDB cluster.
* `list_users`: Lists all database users within an AlloyDB cluster.
* `get_user`: Gets information about a specified database user in an
AlloyDB cluster.
* `wait_for_operation`: Polls the operations API to track the status of
long-running operations.
## AlloyDB Postgres Observability
* `--prebuilt` value: `alloydb-postgres-observability`
* **Permissions:**
* **Monitoring Viewer** (`roles/monitoring.viewer`) is required on the
project to view monitoring data.
* **Tools:**
* `get_system_metrics`: Fetches system level cloud monitoring data
(timeseries metrics) for an AlloyDB instance using a PromQL query.
* `get_query_metrics`: Fetches query level cloud monitoring data
(timeseries metrics) for queries running in an AlloyDB instance using a
PromQL query.
## AlloyDB Omni
* `--prebuilt` value: `alloydb-omni`
* **Environment Variables:**
* `ALLOYDB_OMNI_HOST`: (Optional) The hostname or IP address (Default: localhost).
* `ALLOYDB_OMNI_PORT`: (Optional) The port number (Default: 5432).
* `ALLOYDB_OMNI_DATABASE`: The name of the database to connect to.
* `ALLOYDB_OMNI_USER`: The database username.
* `ALLOYDB_OMNI_PASSWORD`: (Optional) The password for the database user.
* `ALLOYDB_OMNI_QUERY_PARAMS`: (Optional) Connection query parameters.
* **Tools:**
* `execute_sql`: Executes a SQL query.
* `list_tables`: Lists tables in the database.
* `list_active_queries`: Lists ongoing queries.
* `list_available_extensions`: Discover all PostgreSQL extensions available for installation.
* `list_installed_extensions`: List all installed PostgreSQL extensions.
* `long_running_transactions`: Identifies and lists database transactions that exceed a specified time limit.
* `list_locks`: Identifies all locks held by active processes.
* `replication_stats`: Lists each replica's process ID and sync state.
* `list_autovacuum_configurations`: Lists autovacuum configurations in the
database.
* `list_columnar_configurations`: List AlloyDB Omni columnar-related configurations.
* `list_columnar_recommended_columns`: Lists columns that AlloyDB Omni recommends adding to the columnar engine.
* `list_memory_configurations`: Lists memory-related configurations in the
database.
* `list_top_bloated_tables`: List top bloated tables in the database.
* `list_replication_slots`: Lists replication slots in the database.
* `list_invalid_indexes`: Lists invalid indexes in the database.
* `get_query_plan`: Generate the execution plan of a statement.
* `list_views`: Lists views in the database from pg_views with a default
limit of 50 rows. Returns schemaname, viewname and the ownername.
* `list_schemas`: Lists schemas in the database.
* `database_overview`: Fetches the current state of the PostgreSQL server.
* `list_triggers`: Lists triggers in the database.
* `list_indexes`: List available user indexes in a PostgreSQL database.
* `list_sequences`: List sequences in a PostgreSQL database.
* `list_query_stats`: Lists query statistics.
* `get_column_cardinality`: Gets column cardinality.
* `list_table_stats`: Lists table statistics.
* `list_publication_tables`: List publication tables in a PostgreSQL database.
* `list_tablespaces`: Lists tablespaces in the database.
* `list_pg_settings`: List configuration parameters for the PostgreSQL server.
* `list_database_stats`: Lists the key performance and activity statistics for
each database in the AlloyDB instance.
* `list_roles`: Lists all the user-created roles in the PostgreSQL database.
* `list_stored_procedure`: Lists stored procedures.
## BigQuery
* `--prebuilt` value: `bigquery`
* **Environment Variables:**
* `BIGQUERY_PROJECT`: The GCP project ID.
* `BIGQUERY_LOCATION`: (Optional) The dataset location.
* `BIGQUERY_USE_CLIENT_OAUTH`: (Optional) If `true`, forwards the client's
OAuth access token for authentication. Defaults to `false`.
* `BIGQUERY_SCOPES`: (Optional) A comma-separated list of OAuth scopes to
use for authentication.
* **Permissions:**
* **BigQuery User** (`roles/bigquery.user`) to execute queries and view
metadata.
* **BigQuery Metadata Viewer** (`roles/bigquery.metadataViewer`) to view
all datasets.
* **BigQuery Data Editor** (`roles/bigquery.dataEditor`) to create or
modify datasets and tables.
* **Gemini for Google Cloud** (`roles/cloudaicompanion.user`) to use the
conversational analytics API.
* **Tools:**
* `analyze_contribution`: Use this tool to perform contribution analysis,
also called key driver analysis.
* `ask_data_insights`: Use this tool to perform data analysis, get
insights, or answer complex questions about the contents of specific
BigQuery tables. For more information on required roles, API setup, and
IAM configuration, see the setup and authentication section of the
[Conversational Analytics API
documentation](https://cloud.google.com/gemini/docs/conversational-analytics-api/overview).
* `execute_sql`: Executes a SQL statement.
* `forecast`: Use this tool to forecast time series data.
* `get_dataset_info`: Gets dataset metadata.
* `get_table_info`: Gets table metadata.
* `list_dataset_ids`: Lists datasets.
* `list_table_ids`: Lists tables.
* `search_catalog`: Search for entries based on the provided query.
## ClickHouse
* `--prebuilt` value: `clickhouse`
* **Environment Variables:**
* `CLICKHOUSE_HOST`: The hostname or IP address of the ClickHouse server.
* `CLICKHOUSE_PORT`: The port number of the ClickHouse server.
* `CLICKHOUSE_USER`: The database username.
* `CLICKHOUSE_PASSWORD`: The password for the database user.
* `CLICKHOUSE_DATABASE`: The name of the database to connect to.
* `CLICKHOUSE_PROTOCOL`: The protocol to use (e.g., http).
* **Tools:**
* `execute_sql`: Use this tool to execute SQL.
* `list_databases`: Use this tool to list all databases in ClickHouse.
* `list_tables`: Use this tool to list all tables in a specific ClickHouse database.
## Cloud SQL for MySQL
* `--prebuilt` value: `cloud-sql-mysql`
* **Environment Variables:**
* `CLOUD_SQL_MYSQL_PROJECT`: The GCP project ID.
* `CLOUD_SQL_MYSQL_REGION`: The region of your Cloud SQL instance.
* `CLOUD_SQL_MYSQL_INSTANCE`: The ID of your Cloud SQL instance.
* `CLOUD_SQL_MYSQL_DATABASE`: The name of the database to connect to.
* `CLOUD_SQL_MYSQL_USER`: The database username.
* `CLOUD_SQL_MYSQL_PASSWORD`: The password for the database user.
* `CLOUD_SQL_MYSQL_IP_TYPE`: (Optional) The IP type i.e. "Public" or
"Private" (Default: Public).
* **Permissions:**
* **Cloud SQL Client** (`roles/cloudsql.client`) to connect to the
instance.
* Database-level permissions (e.g., `SELECT`, `INSERT`) are required to
execute queries.
* **Tools:**
* `execute_sql`: Executes a SQL query.
* `list_tables`: Lists tables in the database.
* `get_query_plan`: Provides information about how MySQL executes a SQL
statement.
* `list_active_queries`: Lists ongoing queries.
* `list_tables_missing_unique_indexes`: Looks for tables that do not have
a primary or unique key constraint.
* `list_table_fragmentation`: Displays table fragmentation in MySQL.
## Cloud SQL for MySQL Observability
* `--prebuilt` value: `cloud-sql-mysql-observability`
* **Permissions:**
* **Monitoring Viewer** (`roles/monitoring.viewer`) is required on the
project to view monitoring data.
* **Tools:**
* `get_system_metrics`: Fetches system level cloud monitoring data
(timeseries metrics) for a MySQL instance using a PromQL query.
* `get_query_metrics`: Fetches query level cloud monitoring data
(timeseries metrics) for queries running in a MySQL instance using a
PromQL query.
## Cloud SQL for MySQL Admin
* `--prebuilt` value: `cloud-sql-mysql-admin`
* **Permissions:**
* **Cloud SQL Viewer** (`roles/cloudsql.viewer`): Provides read-only
access to resources.
* `get_instance`
* `list_instances`
* `list_databases`
* `wait_for_operation`
* **Cloud SQL Editor** (`roles/cloudsql.editor`): Provides permissions to
manage existing resources.
* All `viewer` tools
* `create_database`
* `create_backup`
* **Cloud SQL Admin** (`roles/cloudsql.admin`): Provides full control over
all resources.
* All `editor` and `viewer` tools
* `create_instance`
* `create_user`
* `clone_instance`
* `restore_backup`
* **Tools:**
* `create_instance`: Creates a new Cloud SQL for MySQL instance.
* `get_instance`: Gets information about a Cloud SQL instance.
* `list_instances`: Lists Cloud SQL instances in a project.
* `create_database`: Creates a new database in a Cloud SQL instance.
* `list_databases`: Lists all databases for a Cloud SQL instance.
* `create_user`: Creates a new user in a Cloud SQL instance.
* `wait_for_operation`: Waits for a Cloud SQL operation to complete.
* `clone_instance`: Creates a clone for an existing Cloud SQL for MySQL instance.
* `create_backup`: Creates a backup on a Cloud SQL instance.
* `restore_backup`: Restores a backup of a Cloud SQL instance.
## Cloud SQL for PostgreSQL
* `--prebuilt` value: `cloud-sql-postgres`
* **Environment Variables:**
* `CLOUD_SQL_POSTGRES_PROJECT`: The GCP project ID.
* `CLOUD_SQL_POSTGRES_REGION`: The region of your Cloud SQL instance.
* `CLOUD_SQL_POSTGRES_INSTANCE`: The ID of your Cloud SQL instance.
* `CLOUD_SQL_POSTGRES_DATABASE`: The name of the database to connect to.
* `CLOUD_SQL_POSTGRES_USER`: (Optional) The database username. Defaults to
IAM authentication if unspecified.
* `CLOUD_SQL_POSTGRES_PASSWORD`: (Optional) The password for the database
user. Defaults to IAM authentication if unspecified.
* `CLOUD_SQL_POSTGRES_IP_TYPE`: (Optional) The IP type i.e. "Public" or
"Private" (Default: Public).
* **Permissions:**
* **Cloud SQL Client** (`roles/cloudsql.client`) to connect to the
instance.
* Database-level permissions (e.g., `SELECT`, `INSERT`) are required to
execute queries.
* **Tools:**
* `execute_sql`: Executes a SQL query.
* `list_tables`: Lists tables in the database.
* `list_active_queries`: Lists ongoing queries.
* `list_available_extensions`: Discover all PostgreSQL extensions available for installation.
* `list_installed_extensions`: List all installed PostgreSQL extensions.
* `long_running_transactions`: Identifies and lists database transactions that exceed a specified time limit.
* `list_locks`: Identifies all locks held by active processes.
* `replication_stats`: Lists each replica's process ID and sync state.
* `list_autovacuum_configurations`: Lists autovacuum configurations in the
database.
* `list_memory_configurations`: Lists memory-related configurations in the
database.
* `list_top_bloated_tables`: List top bloated tables in the database.
* `list_replication_slots`: Lists replication slots in the database.
* `list_invalid_indexes`: Lists invalid indexes in the database.
* `get_query_plan`: Generate the execution plan of a statement.
* `list_views`: Lists views in the database from pg_views with a default
limit of 50 rows. Returns schemaname, viewname and the ownername.
* `list_schemas`: Lists schemas in the database.
* `database_overview`: Fetches the current state of the PostgreSQL server.
* `list_triggers`: Lists triggers in the database.
* `list_indexes`: List available user indexes in a PostgreSQL database.
* `list_sequences`: List sequences in a PostgreSQL database.
* `list_query_stats`: Lists query statistics.
* `get_column_cardinality`: Gets column cardinality.
* `list_table_stats`: Lists table statistics.
* `list_publication_tables`: List publication tables in a PostgreSQL database.
* `list_tablespaces`: Lists tablespaces in the database.
* `list_pg_settings`: List configuration parameters for the PostgreSQL server.
* `list_database_stats`: Lists the key performance and activity statistics for
each database in the PostgreSQL instance.
* `list_roles`: Lists all the user-created roles in the PostgreSQL database.
* `list_stored_procedure`: Lists stored procedures.
## Cloud SQL for PostgreSQL Observability
* `--prebuilt` value: `cloud-sql-postgres-observability`
* **Permissions:**
* **Monitoring Viewer** (`roles/monitoring.viewer`) is required on the
project to view monitoring data.
* **Tools:**
* `get_system_metrics`: Fetches system level cloud monitoring data
(timeseries metrics) for a Postgres instance using a PromQL query.
* `get_query_metrics`: Fetches query level cloud monitoring data
(timeseries metrics) for queries running in Postgres instance using a
PromQL query.
## Cloud SQL for PostgreSQL Admin
* `--prebuilt` value: `cloud-sql-postgres-admin`
* **Permissions:**
* **Cloud SQL Viewer** (`roles/cloudsql.viewer`): Provides read-only
access to resources.
* `get_instance`
* `list_instances`
* `list_databases`
* `wait_for_operation`
* **Cloud SQL Editor** (`roles/cloudsql.editor`): Provides permissions to
manage existing resources.
* All `viewer` tools
* `create_database`
* `create_backup`
* **Cloud SQL Admin** (`roles/cloudsql.admin`): Provides full control over
all resources.
* All `editor` and `viewer` tools
* `create_instance`
* `create_user`
* `clone_instance`
* `restore_backup`
* **Tools:**
* `create_instance`: Creates a new Cloud SQL for PostgreSQL instance.
* `get_instance`: Gets information about a Cloud SQL instance.
* `list_instances`: Lists Cloud SQL instances in a project.
* `create_database`: Creates a new database in a Cloud SQL instance.
* `list_databases`: Lists all databases for a Cloud SQL instance.
* `create_user`: Creates a new user in a Cloud SQL instance.
* `wait_for_operation`: Waits for a Cloud SQL operation to complete.
* `clone_instance`: Creates a clone for an existing Cloud SQL for PostgreSQL instance.
* `postgres_upgrade_precheck`: Performs a precheck for a major version upgrade of a Cloud SQL for PostgreSQL instance.
* `create_backup`: Creates a backup on a Cloud SQL instance.
* `restore_backup`: Restores a backup of a Cloud SQL instance.
## Cloud SQL for SQL Server
* `--prebuilt` value: `cloud-sql-mssql`
* **Environment Variables:**
* `CLOUD_SQL_MSSQL_PROJECT`: The GCP project ID.
* `CLOUD_SQL_MSSQL_REGION`: The region of your Cloud SQL instance.
* `CLOUD_SQL_MSSQL_INSTANCE`: The ID of your Cloud SQL instance.
* `CLOUD_SQL_MSSQL_DATABASE`: The name of the database to connect to.
* `CLOUD_SQL_MSSQL_USER`: The database username.
* `CLOUD_SQL_MSSQL_PASSWORD`: The password for the database user.
* `CLOUD_SQL_MSSQL_IP_TYPE`: (Optional) The IP type i.e. "Public" or
"Private" (Default: Public).
* **Permissions:**
* **Cloud SQL Client** (`roles/cloudsql.client`) to connect to the
instance.
* Database-level permissions (e.g., `SELECT`, `INSERT`) are required to
execute queries.
* **Tools:**
* `execute_sql`: Executes a SQL query.
* `list_tables`: Lists tables in the database.
## Cloud SQL for SQL Server Observability
* `--prebuilt` value: `cloud-sql-mssql-observability`
* **Permissions:**
* **Monitoring Viewer** (`roles/monitoring.viewer`) is required on the
project to view monitoring data.
* **Tools:**
* `get_system_metrics`: Fetches system level cloud monitoring data
(timeseries metrics) for a SQL Server instance using a PromQL query.
## Cloud SQL for SQL Server Admin
* `--prebuilt` value: `cloud-sql-mssql-admin`
* **Permissions:**
* **Cloud SQL Viewer** (`roles/cloudsql.viewer`): Provides read-only
access to resources.
* `get_instance`
* `list_instances`
* `list_databases`
* `wait_for_operation`
* **Cloud SQL Editor** (`roles/cloudsql.editor`): Provides permissions to
manage existing resources.
* All `viewer` tools
* `create_database`
* `create_backup`
* **Cloud SQL Admin** (`roles/cloudsql.admin`): Provides full control over
all resources.
* All `editor` and `viewer` tools
* `create_instance`
* `create_user`
* `clone_instance`
* `restore_backup`
* **Tools:**
* `create_instance`: Creates a new Cloud SQL for SQL Server instance.
* `get_instance`: Gets information about a Cloud SQL instance.
* `list_instances`: Lists Cloud SQL instances in a project.
* `create_database`: Creates a new database in a Cloud SQL instance.
* `list_databases`: Lists all databases for a Cloud SQL instance.
* `create_user`: Creates a new user in a Cloud SQL instance.
* `wait_for_operation`: Waits for a Cloud SQL operation to complete.
* `clone_instance`: Creates a clone for an existing Cloud SQL for SQL Server instance.
* `create_backup`: Creates a backup on a Cloud SQL instance.
* `restore_backup`: Restores a backup of a Cloud SQL instance.
## Dataplex
* `--prebuilt` value: `dataplex`
* **Environment Variables:**
* `DATAPLEX_PROJECT`: The GCP project ID.
* **Permissions:**
* **Dataplex Reader** (`roles/dataplex.viewer`) to search and look up
entries.
* **Dataplex Editor** (`roles/dataplex.editor`) to modify entries.
* **Tools:**
* `search_entries`: Searches for entries in Dataplex Catalog.
* `lookup_entry`: Retrieves a specific entry from Dataplex
Catalog.
* `search_aspect_types`: Finds aspect types relevant to the
query.
## Dataproc
* `--prebuilt` value: `dataproc`
* **Environment Variables:**
* `DATAPROC_PROJECT`: The GCP project ID.
* `DATAPROC_REGION`: The Dataproc region.
* **Permissions:**
* **Dataproc Viewer** (`roles/dataproc.viewer`) to examine clusters and jobs.
* **Tools:**
* `list_clusters`: Lists Dataproc clusters.
* `get_cluster`: Gets a Dataproc cluster.
* `list_jobs`: Lists Dataproc jobs.
* `get_job`: Gets a Dataproc job.
## Elasticsearch
* `--prebuilt` value: `elasticsearch`
* **Environment Variables:**
* `ELASTICSEARCH_HOST`: The hostname or IP address of the Elasticsearch server.
* `ELASTICSEARCH_APIKEY`: The API key for authentication.
* **Tools:**
* `execute_esql_query`: Use this tool to execute ES|QL queries.
## Firestore
* `--prebuilt` value: `firestore`
* **Environment Variables:**
* `FIRESTORE_PROJECT`: The GCP project ID.
* `FIRESTORE_DATABASE`: (Optional) The Firestore database ID. Defaults to
"(default)".
* **Permissions:**
* **Cloud Datastore User** (`roles/datastore.user`) to get documents, list
collections, and query collections.
* **Firebase Rules Viewer** (`roles/firebaserules.viewer`) to get and
validate Firestore rules.
* **Tools:**
* `get_documents`: Gets multiple documents from Firestore by their paths.
* `add_documents`: Adds a new document to a Firestore collection.
* `update_document`: Updates an existing document in Firestore.
* `list_collections`: Lists Firestore collections for a given parent path.
* `delete_documents`: Deletes multiple documents from Firestore.
* `query_collection`: Retrieves one or more Firestore documents from a
collection.
* `get_rules`: Retrieves the active Firestore security rules.
* `validate_rules`: Checks the provided Firestore Rules source for syntax
and validation errors.
## Looker
* `--prebuilt` value: `looker`
* **Environment Variables:**
* `LOOKER_BASE_URL`: The URL of your Looker instance.
* `LOOKER_CLIENT_ID`: The client ID for the Looker API.
* `LOOKER_CLIENT_SECRET`: The client secret for the Looker API.
* `LOOKER_VERIFY_SSL`: Whether to verify SSL certificates.
* `LOOKER_USE_CLIENT_OAUTH`: Whether to use OAuth for authentication.
* `LOOKER_SHOW_HIDDEN_MODELS`: Whether to show hidden models.
* `LOOKER_SHOW_HIDDEN_EXPLORES`: Whether to show hidden explores.
* `LOOKER_SHOW_HIDDEN_FIELDS`: Whether to show hidden fields.
* **Permissions:**
* A Looker account with permissions to access the desired models,
explores, and data is required.
* **Tools:**
* `get_models`: Retrieves the list of LookML models.
* `get_explores`: Retrieves the list of explores in a model.
* `get_dimensions`: Retrieves the list of dimensions in an explore.
* `get_measures`: Retrieves the list of measures in an explore.
* `get_filters`: Retrieves the list of filters in an explore.
* `get_parameters`: Retrieves the list of parameters in an explore.
* `query`: Runs a query against the LookML model.
* `query_sql`: Generates the SQL for a query.
* `query_url`: Generates a URL for a query in Looker.
* `get_looks`: Searches for saved looks.
* `run_look`: Runs the query associated with a look.
* `make_look`: Creates a new look.
* `get_dashboards`: Searches for saved dashboards.
* `run_dashboard`: Runs the queries associated with a dashboard.
* `make_dashboard`: Creates a new dashboard.
* `add_dashboard_element`: Adds a tile to a dashboard.
* `add_dashboard_filter`: Adds a filter to a dashboard.
* `generate_embed_url`: Generates an embed URL for content.
## Looker Dev
* `--prebuilt` value: `looker-dev`
* **Environment Variables:**
* `LOOKER_BASE_URL`: The URL of your Looker instance.
* `LOOKER_CLIENT_ID`: The client ID for the Looker API.
* `LOOKER_CLIENT_SECRET`: The client secret for the Looker API.
* `LOOKER_VERIFY_SSL`: Whether to verify SSL certificates.
* `LOOKER_USE_CLIENT_OAUTH`: Whether to use OAuth for authentication.
* `LOOKER_SHOW_HIDDEN_MODELS`: Whether to show hidden models.
* `LOOKER_SHOW_HIDDEN_EXPLORES`: Whether to show hidden explores.
* `LOOKER_SHOW_HIDDEN_FIELDS`: Whether to show hidden fields.
* **Permissions:**
* A Looker account with permissions to access the desired projects
and LookML is required.
* **Tools:**
* `health_pulse`: Test the health of a Looker instance.
* `health_analyze`: Analyze the LookML usage of a Looker instance.
* `health_vacuum`: Suggest LookML elements that can be removed.
* `dev_mode`: Activate developer mode.
* `get_projects`: Get the LookML projects in a Looker instance.
* `get_project_files`: List the project files in a project.
* `get_project_file`: Get the content of a LookML file.
* `create_project_file`: Create a new LookML file.
* `update_project_file`: Update an existing LookML file.
* `delete_project_file`: Delete a LookML file.
* `get_project_directories`: Retrieves a list of project directories for a given LookML project.
* `create_project_directory`: Creates a new directory within a specified LookML project.
* `delete_project_directory`: Deletes a directory from a specified LookML project.
* `validate_project`: Check the syntax of a LookML project.
* `get_connections`: Get the available connections in a Looker instance.
* `get_connection_schemas`: Get the available schemas in a connection.
* `get_connection_databases`: Get the available databases in a connection.
* `get_connection_tables`: Get the available tables in a connection.
* `get_connection_table_columns`: Get the available columns for a table.
* `get_lookml_tests`: Retrieves a list of available LookML tests for a project.
* `run_lookml_tests`: Executes specific LookML tests within a project.
* `create_view_from_table`: Generates boilerplate LookML views directly from the database schema.
## Looker Conversational Analytics
* `--prebuilt` value: `looker-conversational-analytics`
* **Environment Variables:**
* `LOOKER_BASE_URL`: The URL of your Looker instance.
* `LOOKER_CLIENT_ID`: The client ID for the Looker API.
* `LOOKER_CLIENT_SECRET`: The client secret for the Looker API.
* `LOOKER_VERIFY_SSL`: Whether to verify SSL certificates.
* `LOOKER_USE_CLIENT_OAUTH`: Whether to use OAuth for authentication.
* `LOOKER_PROJECT`: The GCP Project to use for Conversational Analytics.
* `LOOKER_LOCATION`: The GCP Location to use for Conversational Analytics.
* **Permissions:**
* A Looker account with permissions to access the desired models,
explores, and data is required.
* **Looker Instance User** (`roles/looker.instanceUser`): IAM role to
access Looker.
* **Gemini for Google Cloud User** (`roles/cloudaicompanion.user`): IAM
role to access Conversational Analytics.
* **Gemini Data Analytics Stateless Chat User (Beta)**
(`roles/geminidataanalytics.dataAgentStatelessUser`): IAM role to
access Conversational Analytics.
* **Tools:**
* `ask_data_insights`: Ask a question of the data.
* `get_models`: Retrieves the list of LookML models.
* `get_explores`: Retrieves the list of explores in a model.
## Microsoft SQL Server
* `--prebuilt` value: `mssql`
* **Environment Variables:**
* `MSSQL_HOST`: (Optional) The hostname or IP address of the SQL Server instance.
* `MSSQL_PORT`: (Optional) The port number for the SQL Server instance.
* `MSSQL_DATABASE`: The name of the database to connect to.
* `MSSQL_USER`: The database username.
* `MSSQL_PASSWORD`: The password for the database user.
* **Permissions:**
* Database-level permissions (e.g., `SELECT`, `INSERT`) are required to
execute queries.
* **Tools:**
* `execute_sql`: Executes a SQL query.
* `list_tables`: Lists tables in the database.
## MindsDB
* `--prebuilt` value: `mindsdb`
* **Environment Variables:**
* `MINDSDB_HOST`: The hostname or IP address of the MindsDB server.
* `MINDSDB_PORT`: The port number of the MindsDB server.
* `MINDSDB_DATABASE`: The name of the database to connect to.
* `MINDSDB_USER`: The database username.
* `MINDSDB_PASS`: The password for the database user.
* **Tools:**
* `mindsdb-execute-sql`: Execute SQL queries directly on MindsDB database.
* `mindsdb-sql`: Execute parameterized SQL queries on MindsDB database.
## MySQL
* `--prebuilt` value: `mysql`
* **Environment Variables:**
* `MYSQL_HOST`: The hostname or IP address of the MySQL server.
* `MYSQL_PORT`: The port number for the MySQL server.
* `MYSQL_DATABASE`: The name of the database to connect to.
* `MYSQL_USER`: The database username.
* `MYSQL_PASSWORD`: The password for the database user.
* **Permissions:**
* Database-level permissions (e.g., `SELECT`, `INSERT`) are required to
execute queries.
* **Tools:**
* `execute_sql`: Executes a SQL query.
* `list_tables`: Lists tables in the database.
* `get_query_plan`: Provides information about how MySQL executes a SQL
statement.
* `list_active_queries`: Lists ongoing queries.
* `list_tables_missing_unique_indexes`: Looks for tables that do not have a primary or unique key constraint.
* `list_table_fragmentation`: Displays table fragmentation in MySQL.
## OceanBase
* `--prebuilt` value: `oceanbase`
* **Environment Variables:**
* `OCEANBASE_HOST`: The hostname or IP address of the OceanBase server.
* `OCEANBASE_PORT`: The port number for the OceanBase server.
* `OCEANBASE_DATABASE`: The name of the database to connect to.
* `OCEANBASE_USER`: The database username.
* `OCEANBASE_PASSWORD`: The password for the database user.
* **Permissions:**
* Database-level permissions (e.g., `SELECT`, `INSERT`) are required to
execute queries.
* **Tools:**
* `execute_sql`: Executes a SQL query.
* `list_tables`: Lists tables in the database.
## PostgreSQL
* `--prebuilt` value: `postgres`
* **Environment Variables:**
* `POSTGRES_HOST`: (Optional) The hostname or IP address of the PostgreSQL server.
* `POSTGRES_PORT`: (Optional) The port number for the PostgreSQL server.
* `POSTGRES_DATABASE`: The name of the database to connect to.
* `POSTGRES_USER`: The database username.
* `POSTGRES_PASSWORD`: The password for the database user.
* `POSTGRES_QUERY_PARAMS`: (Optional) Raw query to be added to the db
connection string.
* **Permissions:**
* Database-level permissions (e.g., `SELECT`, `INSERT`) are required to
execute queries.
* **Tools:**
* `execute_sql`: Executes a SQL query.
* `list_tables`: Lists tables in the database.
* `list_active_queries`: Lists ongoing queries.
* `list_available_extensions`: Discover all PostgreSQL extensions available for installation.
* `list_installed_extensions`: List all installed PostgreSQL extensions.
* `long_running_transactions`: Identifies and lists database transactions that exceed a specified time limit.
* `list_locks`: Identifies all locks held by active processes.
* `replication_stats`: Lists each replica's process ID and sync state.
* `list_autovacuum_configurations`: Lists autovacuum configurations in the
database.
* `list_memory_configurations`: Lists memory-related configurations in the
database.
* `list_top_bloated_tables`: List top bloated tables in the database.
* `list_replication_slots`: Lists replication slots in the database.
* `list_invalid_indexes`: Lists invalid indexes in the database.
* `get_query_plan`: Generate the execution plan of a statement.
* `list_views`: Lists views in the database from pg_views with a default
limit of 50 rows. Returns schemaname, viewname and the ownername.
* `list_schemas`: Lists schemas in the database.
* `database_overview`: Fetches the current state of the PostgreSQL server.
* `list_triggers`: Lists triggers in the database.
* `list_indexes`: List available user indexes in a PostgreSQL database.
* `list_sequences`: List sequences in a PostgreSQL database.
* `list_query_stats`: Lists query statistics.
* `get_column_cardinality`: Gets column cardinality.
* `list_table_stats`: Lists table statistics.
* `list_publication_tables`: List publication tables in a PostgreSQL database.
* `list_tablespaces`: Lists tablespaces in the database.
* `list_pg_settings`: List configuration parameters for the PostgreSQL server.
* `list_database_stats`: Lists the key performance and activity statistics for
each database in the PostgreSQL server.
* `list_roles`: Lists all the user-created roles in PostgreSQL database.
* `list_stored_procedure`: Lists stored procedures.
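The prebuilt toolsets above are all launched the same way: export the listed environment variables, then start the server with the matching `--prebuilt` value. A minimal sketch for the PostgreSQL toolset, assuming the server binary is named `toolbox` and using placeholder connection details:

```shell
# Placeholder values; substitute your own connection details.
export POSTGRES_HOST=127.0.0.1
export POSTGRES_PORT=5432
export POSTGRES_DATABASE=my_db
export POSTGRES_USER=app_user
export POSTGRES_PASSWORD=secret

# Start the server with the prebuilt PostgreSQL toolset.
./toolbox --prebuilt postgres
```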
## Google Cloud Serverless for Apache Spark
* `--prebuilt` value: `serverless-spark`
* **Environment Variables:**
* `SERVERLESS_SPARK_PROJECT`: The GCP project ID.
* `SERVERLESS_SPARK_LOCATION`: The GCP location.
* **Permissions:**
* **Dataproc Serverless Viewer** (`roles/dataproc.serverlessViewer`) to
view serverless batches.
* **Dataproc Serverless Editor** (`roles/dataproc.serverlessEditor`) to
create and cancel serverless batches.
* **Tools:**
* `list_batches`: Lists Spark batches.
* `get_batch`: Gets information about a Spark batch.
* `cancel_batch`: Cancels a Spark batch.
* `create_pyspark_batch`: Creates a PySpark batch.
* `create_spark_batch`: Creates a Spark batch.
* `list_sessions`: Lists Spark sessions.
* `get_session`: Gets a Spark session.
## SingleStore
* `--prebuilt` value: `singlestore`
* **Environment Variables:**
* `SINGLESTORE_HOST`: The hostname or IP address of the SingleStore server.
* `SINGLESTORE_PORT`: The port number of the SingleStore server.
* `SINGLESTORE_DATABASE`: The name of the database to connect to.
* `SINGLESTORE_USER`: The database username.
* `SINGLESTORE_PASSWORD`: The password for the database user.
* **Tools:**
* `execute_sql`: Use this tool to execute SQL.
* `list_tables`: Lists detailed schema information for user-created tables.
## Snowflake
* `--prebuilt` value: `snowflake`
* **Environment Variables:**
* `SNOWFLAKE_ACCOUNT`: The Snowflake account.
* `SNOWFLAKE_USER`: The database username.
* `SNOWFLAKE_PASSWORD`: The password for the database user.
* `SNOWFLAKE_DATABASE`: The name of the database to connect to.
* `SNOWFLAKE_SCHEMA`: The schema name.
* `SNOWFLAKE_WAREHOUSE`: The warehouse name.
* `SNOWFLAKE_ROLE`: The role name.
* **Tools:**
* `execute_sql`: Use this tool to execute SQL.
* `list_tables`: Lists detailed schema information for user-created tables.
## Spanner (GoogleSQL dialect)
* `--prebuilt` value: `spanner`
* **Environment Variables:**
* `SPANNER_PROJECT`: The GCP project ID.
* `SPANNER_INSTANCE`: The Spanner instance ID.
* `SPANNER_DATABASE`: The Spanner database ID.
* **Permissions:**
* **Cloud Spanner Database Reader** (`roles/spanner.databaseReader`) to
execute DQL queries and list tables.
* **Cloud Spanner Database User** (`roles/spanner.databaseUser`) to
execute DML queries.
* **Tools:**
* `execute_sql`: Executes a DML SQL query.
* `execute_sql_dql`: Executes a DQL SQL query.
* `list_tables`: Lists tables in the database.
* `list_graphs`: Lists graphs in the database.
## Spanner (PostgreSQL dialect)
* `--prebuilt` value: `spanner-postgres`
* **Environment Variables:**
* `SPANNER_PROJECT`: The GCP project ID.
* `SPANNER_INSTANCE`: The Spanner instance ID.
* `SPANNER_DATABASE`: The Spanner database ID.
* **Permissions:**
* **Cloud Spanner Database Reader** (`roles/spanner.databaseReader`) to
execute DQL queries and list tables.
* **Cloud Spanner Database User** (`roles/spanner.databaseUser`) to
execute DML queries.
* **Tools:**
* `execute_sql`: Executes a DML SQL query using the PostgreSQL interface
for Spanner.
* `execute_sql_dql`: Executes a DQL SQL query using the PostgreSQL
interface for Spanner.
* `list_tables`: Lists tables in the database.
## SQLite
* `--prebuilt` value: `sqlite`
* **Environment Variables:**
* `SQLITE_DATABASE`: The path to the SQLite database file (e.g.,
`./sample.db`).
* **Permissions:**
* File system read/write permissions for the specified database file.
* **Tools:**
* `execute_sql`: Executes a SQL query.
* `list_tables`: Lists tables in the database.
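Since `SQLITE_DATABASE` simply points at a file on disk, you can prepare a database for the prebuilt toolset with Python's standard `sqlite3` module. A minimal sketch (the file name and `flights` table are illustrative, not required by the toolset):

```python
import sqlite3

# Create (or open) the database file that SQLITE_DATABASE will point to.
conn = sqlite3.connect("./sample.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS flights (id INTEGER PRIMARY KEY, airline TEXT)"
)
conn.execute("INSERT INTO flights (airline) VALUES ('CY')")
conn.commit()

# The same catalog query the `list_tables` tool conceptually answers.
tables = [
    row[0]
    for row in conn.execute("SELECT name FROM sqlite_master WHERE type='table'")
]
print(tables)
conn.close()
```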
## Neo4j
* `--prebuilt` value: `neo4j`
* **Environment Variables:**
* `NEO4J_URI`: The URI of the Neo4j instance (e.g.,
`bolt://localhost:7687`).
* `NEO4J_DATABASE`: The name of the Neo4j database to connect to.
* `NEO4J_USERNAME`: The username for the Neo4j instance.
* `NEO4J_PASSWORD`: The password for the Neo4j instance.
* **Permissions:**
* **Database-level permissions** are required to execute Cypher queries.
* **Tools:**
* `execute_cypher`: Executes a Cypher query.
* `get_schema`: Retrieves the schema of the Neo4j database.
## Google Cloud Healthcare API
* `--prebuilt` value: `cloud-healthcare`
* **Environment Variables:**
* `CLOUD_HEALTHCARE_PROJECT`: The GCP project ID.
* `CLOUD_HEALTHCARE_REGION`: The Cloud Healthcare API dataset region.
* `CLOUD_HEALTHCARE_DATASET`: The Cloud Healthcare API dataset ID.
* `CLOUD_HEALTHCARE_USE_CLIENT_OAUTH`: (Optional) If `true`, forwards the client's
OAuth access token for authentication. Defaults to `false`.
* **Permissions:**
* **Healthcare FHIR Resource Reader** (`roles/healthcare.fhirResourceReader`) to read and
search FHIR resources.
* **Healthcare DICOM Viewer** (`roles/healthcare.dicomViewer`) to retrieve DICOM images from a
DICOM store.
* **Tools:**
* `get_dataset`: Gets information about a Cloud Healthcare API dataset.
* `list_dicom_stores`: Lists DICOM stores in a Cloud Healthcare API dataset.
* `list_fhir_stores`: Lists FHIR stores in a Cloud Healthcare API dataset.
* `get_fhir_store`: Gets information about a FHIR store.
* `get_fhir_store_metrics`: Gets metrics for a FHIR store.
* `get_fhir_resource`: Gets a FHIR resource from a FHIR store.
* `fhir_patient_search`: Searches for patient resource(s) based on a set of criteria.
* `fhir_patient_everything`: Retrieves resources related to a given patient.
* `fhir_fetch_page`: Fetches a page of FHIR resources.
* `get_dicom_store`: Gets information about a DICOM store.
* `get_dicom_store_metrics`: Gets metrics for a DICOM store.
* `search_dicom_studies`: Searches for DICOM studies.
* `search_dicom_series`: Searches for DICOM series.
* `search_dicom_instances`: Searches for DICOM instances.
* `retrieve_rendered_dicom_instance`: Retrieves a rendered DICOM instance.
========================================================================
## Sources
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Configuration > Sources
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/configuration/sources/
**Description:** Sources represent your different data sources that a tool can interact with.
A Source represents a data source that a tool can interact with. You can define
Sources as a map in the `sources` section of your `tools.yaml` file. Typically,
a source configuration contains any information needed to connect to and
interact with the database.
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
```yaml
kind: sources
name: my-cloud-sql-source
type: cloud-sql-postgres
project: my-project-id
region: us-central1
instance: my-instance-name
database: my_db
user: ${USER_NAME}
password: ${PASSWORD}
```
In implementation, each source is a separate connection pool or client that is
used to connect to the database and execute the tool.
## Available Sources
To see all supported sources and the specific tools they unlock, explore the full list of our [Integrations](../../../integrations/_index.md).
========================================================================
## Tools
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Configuration > Tools
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/configuration/tools/
**Description:** Tools define actions an agent can take -- such as reading and writing to a source.
A tool represents an action your agent can take, such as running a SQL
statement. You can define Tools as a map in the `tools` section of your
`tools.yaml` file. Typically, a tool will require a source to act on:
```yaml
kind: tools
name: search_flights_by_number
type: postgres-sql
source: my-pg-instance
statement: |
SELECT * FROM flights
WHERE airline = $1
AND flight_number = $2
LIMIT 10
description: |
Use this tool to get information for a specific flight.
Takes an airline code and flight number and returns info on the flight.
Do NOT use this tool with a flight id. Do NOT guess an airline code or flight number.
An airline code is a code for an airline service consisting of a two-character
airline designator and followed by a flight number, which is a 1 to 4 digit number.
For example, if given CY 0123, the airline is "CY", and flight_number is "123".
Another example for this is DL 1234, the airline is "DL", and flight_number is "1234".
If the tool returns more than one option choose the date closest to today.
Example:
{{
"airline": "CY",
"flight_number": "888"
}}
Example:
{{
"airline": "DL",
"flight_number": "1234"
}}
parameters:
- name: airline
type: string
description: Airline unique 2 letter identifier
- name: flight_number
type: string
description: 1 to 4 digit number
```
## Specifying Parameters
Parameters for each Tool define what inputs the agent needs to provide to
invoke it. Parameters should be passed as a list of Parameter objects:
```yaml
parameters:
- name: airline
type: string
description: Airline unique 2 letter identifier
- name: flight_number
type: string
description: 1 to 4 digit number
```
### Basic Parameters
Basic parameter types include `string`, `integer`, `float`, and `boolean`. In
most cases, the description is provided to the LLM as context for specifying
the parameter.
```yaml
parameters:
- name: airline
type: string
description: Airline unique 2 letter identifier
```
| **field** | **type** | **required** | **description** |
|----------------|:--------------:|:------------:|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| name | string | true | Name of the parameter. |
| type | string | true | Must be one of "string", "integer", "float", "boolean", "array" |
| description | string | true | Natural language description of the parameter to describe it to the agent. |
| default | parameter type | false | Default value of the parameter. If provided, `required` will be `false`. |
| required | bool | false | Indicates whether the parameter is required. Defaults to `true`. |
| allowedValues | []string | false | Input value will be checked against this field. Regex is also supported. |
| excludedValues | []string | false | Input value will be checked against this field. Regex is also supported. |
| escape | string | false | Only available for type `string`. Indicate the escaping delimiters used for the parameter. This field is intended to be used with templateParameters. Must be one of "single-quotes", "double-quotes", "backticks", "square-brackets". |
| minValue | int or float | false | Only available for type `integer` and `float`. Indicate the minimum value allowed. |
| maxValue | int or float | false | Only available for type `integer` and `float`. Indicate the maximum value allowed. |
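To make the validation fields concrete, here is an illustrative sketch of how `allowedValues` (with regex support) and `minValue`/`maxValue` could be checked. This is not the server's actual implementation; the function names are invented for demonstration:

```python
import re

def check_string(value, allowed_values):
    """A value passes if it equals, or fully matches as a regex, any entry."""
    return any(
        value == pattern or re.fullmatch(pattern, value) is not None
        for pattern in allowed_values
    )

def check_range(value, min_value=None, max_value=None):
    """Numeric values must fall within the optional [minValue, maxValue] range."""
    if min_value is not None and value < min_value:
        return False
    if max_value is not None and value > max_value:
        return False
    return True

print(check_string("CY", ["CY", "DL"]))        # exact match -> True
print(check_string("AA12", [r"[A-Z]{2}\d+"]))  # regex match -> True
print(check_range(3, min_value=1, max_value=10))  # in range -> True
```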
### Array Parameters
The `array` type is a list of items passed in as a single parameter.
To use the `array` type, you must also specify what kind of items are
in the list using the `items` field:
```yaml
parameters:
- name: preferred_airlines
type: array
description: A list of airlines, ordered by preference.
items:
name: name
type: string
description: Name of the airline.
statement: |
SELECT * FROM airlines WHERE preferred_airlines = ANY($1);
```
| **field** | **type** | **required** | **description** |
|----------------|:----------------:|:------------:|----------------------------------------------------------------------------|
| name | string | true | Name of the parameter. |
| type | string | true | Must be "array" |
| description | string | true | Natural language description of the parameter to describe it to the agent. |
| default | parameter type | false | Default value of the parameter. If provided, `required` will be `false`. |
| required | bool | false | Indicates whether the parameter is required. Defaults to `true`. |
| allowedValues | []string | false | Input value will be checked against this field. Regex is also supported. |
| excludedValues | []string | false | Input value will be checked against this field. Regex is also supported. |
| items | parameter object | true | Specify a Parameter object for the type of the values in the array. |
{{< notice note >}}
Items in an array should not have a `default` or `required` value; if
provided, they will be ignored.
{{< /notice >}}
### Map Parameters
The map type is a collection of key-value pairs. It can be configured in two
ways:
- Generic Map: By default, it accepts values of any primitive type (string,
integer, float, boolean), allowing for mixed data.
- Typed Map: By setting the `valueType` field, you can enforce that all values
within the map must be of the same specified type.
#### Generic Map (Mixed Value Types)
This is the default behavior when `valueType` is omitted. It's useful for passing
a flexible group of settings.
```yaml
parameters:
- name: execution_context
type: map
description: A flexible set of key-value pairs for the execution environment.
```
#### Typed Map
Specify `valueType` to ensure all values in the map are of the same type. An
error will be thrown on a value type mismatch.
```yaml
parameters:
- name: user_scores
type: map
description: A map of user IDs to their scores. All scores must be integers.
valueType: integer # This enforces the value type for all entries.
```
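An illustrative sketch of the typed-map check described above (not the actual server code): a generic map accepts any primitive value, while a map with `valueType` set rejects mismatched entries. The `validate_map` helper is invented for demonstration:

```python
def validate_map(values, value_type=None):
    """Raise ValueError if any value violates the declared valueType."""
    type_map = {"string": str, "integer": int, "float": float, "boolean": bool}
    if value_type is None:
        return  # generic map: any primitive type is accepted
    expected = type_map[value_type]
    for key, value in values.items():
        # bool is a subclass of int in Python, so exclude it for "integer".
        if value_type == "integer" and isinstance(value, bool):
            raise ValueError(f"{key!r}: expected integer, got boolean")
        if not isinstance(value, expected):
            raise ValueError(
                f"{key!r}: expected {value_type}, got {type(value).__name__}"
            )

validate_map({"mode": "fast", "retries": 3})                 # generic map: OK
validate_map({"alice": 10, "bob": 7}, value_type="integer")  # typed map: OK
try:
    validate_map({"alice": "ten"}, value_type="integer")
except ValueError as err:
    print("rejected:", err)
```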
### Authenticated Parameters
Authenticated parameters are automatically populated with user
information decoded from [ID
tokens](../authentication/_index.md#specifying-id-tokens-from-clients) that are passed in
request headers. They do not take input values in request bodies like other
parameters. To use authenticated parameters, you must configure the tool to map
the required [authServices](../authentication/_index.md) to specific claims within the
user's ID token.
```yaml
kind: tools
name: search_flights_by_user_id
type: postgres-sql
source: my-pg-instance
statement: |
SELECT * FROM flights WHERE user_id = $1
parameters:
- name: user_id
type: string
description: Auto-populated from Google login
authServices:
# Refer to one of the `authServices` defined
- name: my-google-auth
# `sub` is the OIDC claim field for user ID
field: sub
```
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|----------------------------------------------------------------------------------|
| name | string | true | Name of the [authServices](../authentication/_index.md) used to verify the OIDC auth token. |
| field | string | true | Claim field decoded from the OIDC token used to auto-populate this parameter. |
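To illustrate the claim mapping: an ID token is a JWT whose payload segment is base64url-encoded JSON, and the configured `field` (here `sub`) is read from that payload. The sketch below decodes an unsigned, fabricated token for demonstration only; the real AuthService also verifies the token's signature:

```python
import base64
import json

def extract_claim(id_token, field):
    """Decode the JWT payload segment and return the requested claim."""
    payload_b64 = id_token.split(".")[1]
    # Restore stripped base64url padding before decoding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    payload = json.loads(base64.urlsafe_b64decode(payload_b64))
    return payload[field]

# A fabricated, unsigned token with a {"sub": "user-123"} payload.
header = base64.urlsafe_b64encode(b'{"alg":"none"}').decode().rstrip("=")
body = base64.urlsafe_b64encode(b'{"sub":"user-123"}').decode().rstrip("=")
token = f"{header}.{body}."

print(extract_claim(token, "sub"))  # -> user-123
```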
### Template Parameters
Template parameter types include `string`, `integer`, `float`, and `boolean`.
In most cases, the description will be provided to the LLM as context on
specifying the parameter. Template parameters are inserted into the SQL
statement before the prepared statement is executed. They are inserted without
quotes, so to insert a string using template parameters, quotes must be
explicitly added within the string.
Template parameter arrays can be used similarly to basic parameters; array
items must be strings. Once inserted into the SQL statement, the outer layer of
quotes is removed, so to insert strings into the SQL statement, a set of quotes
must be explicitly added within each string.
{{< notice warning >}}
Because template parameters can directly replace identifiers, column names, and
table names, they are prone to SQL injection. Basic parameters are preferred
for performance and safety reasons.
{{< /notice >}}
{{< notice tip >}}
To minimize SQL injection risk when using template parameters, always provide
the `allowedValues` field within the parameter to restrict inputs.
Alternatively, for `string` type parameters, you can use the `escape` field to
add delimiters to the identifier. For `integer` or `float` type parameters, you
can use `minValue` and `maxValue` to define the allowable range.
{{< /notice >}}
```yaml
kind: tools
name: select_columns_from_table
type: postgres-sql
source: my-pg-instance
statement: |
SELECT {{array .columnNames}} FROM {{.tableName}}
description: |
Use this tool to list all information from a specific table.
Example:
{{
"tableName": "flights",
"columnNames": ["id", "name"]
}}
templateParameters:
- name: tableName
type: string
description: Table to select from
- name: columnNames
type: array
description: The columns to select
items:
name: column
type: string
description: Name of a column to select
escape: double-quotes # with this, the statement will resolve to `SELECT "id", "name" FROM flights`
```
| **field** | **type** | **required** | **description** |
|----------------|:----------------:|:---------------:|-------------------------------------------------------------------------------------|
| name | string | true | Name of the template parameter. |
| type | string | true | Must be one of "string", "integer", "float", "boolean", "array" |
| description | string | true | Natural language description of the template parameter to describe it to the agent. |
| default | parameter type | false | Default value of the parameter. If provided, `required` will be `false`. |
| required | bool | false | Indicates whether the parameter is required. Defaults to `true`. |
| allowedValues | []string | false | Input value will be checked against this field. Regex is also supported. |
| excludedValues | []string | false | Input value will be checked against this field. Regex is also supported. |
| items | parameter object | true (if array) | Specify a Parameter object for the type of the values in the array (string only). |
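For instance, combining `allowedValues` on the table name with `escape` on the column names narrows the injection surface further; a hedged sketch (the table names listed are illustrative, not part of any shipped configuration):

```yaml
templateParameters:
  - name: tableName
    type: string
    description: Table to select from
    allowedValues: ["flights", "airports"] # any other identifier is rejected
  - name: columnNames
    type: array
    description: The columns to select
    items:
      name: column
      type: string
      description: Name of a column to select
      escape: double-quotes # identifiers are wrapped in double quotes
```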
## Authorized Invocations
You can require an authorization check for any Tool invocation request by
specifying an `authRequired` field. Specify a list of
[authServices](../authentication/_index.md) defined in the previous section.
```yaml
kind: tools
name: search_all_flight
type: postgres-sql
source: my-pg-instance
statement: |
  SELECT * FROM flights
# A list of `authServices` defined previously
authRequired:
  - my-google-auth
  - other-auth-service
```
## Tool Annotations
Tool annotations provide semantic metadata that helps MCP clients understand tool
behavior. These hints enable clients to make better decisions about tool usage
and provide appropriate user experiences.
### Available Annotations
| **annotation** | **type** | **default** | **description** |
|--------------------|:-----------:|:-----------:|------------------------------------------------------------------------|
| readOnlyHint | bool | false | Tool only reads data, no modifications to the environment. |
| destructiveHint | bool | true | Tool may create, update, or delete data. |
| idempotentHint | bool | false | Repeated calls with same arguments have no additional effect. |
| openWorldHint | bool | true | Tool interacts with external entities beyond its local environment. |
### Specifying Annotations
Annotations can be specified in YAML tool configuration:
```yaml
tools:
  my_query_tool:
    kind: mongodb-find-one
    source: my-mongodb
    description: Find a single document
    database: mydb
    collection: users
    annotations:
      readOnlyHint: true
      idempotentHint: true
```
### Default Annotations
If not specified, tools use sensible defaults based on their operation type:
- **Read operations** (find, aggregate, list): `readOnlyHint: true`
- **Write operations** (insert, update, delete): `destructiveHint: true`, `readOnlyHint: false`
### MCP Client Response
Annotations appear in the `tools/list` MCP response:
```json
{
  "name": "my_query_tool",
  "description": "Find a single document",
  "annotations": {
    "readOnlyHint": true
  }
}
```
## Using tools with MCP Toolbox Client SDKs
Once your tools are defined in your configuration, you can retrieve them directly from your application code.
Here is how to load and invoke your tools across our supported languages:
### Python
```python
# Loading a single tool
tool = await toolbox.load_tool("my-tool")
# Invoke the tool
result = await tool("foo", bar="baz")
```
### Javascript/Typescript
```javascript
// Loading a single tool
const tool = await client.loadTool("my-tool")
// Invoke the tool
const result = await tool({a: 5, b: 2})
```
### Go
```go
// Loading a single tool
tool, err := client.LoadTool("my-tool", ctx)
// Invoke the tool
inputs := map[string]any{"location": "London"}
result, err := tool.Invoke(ctx, inputs)
```
To see all supported sources and the specific tools they unlock, explore the full list of our [Integrations](../../../integrations/_index.md).
========================================================================
## Invoke Tools via CLI
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Configuration > Tools > Invoke Tools via CLI
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/configuration/tools/invoke_tool/
**Description:** Learn how to invoke your tools directly from the command line using the `invoke` command.
The `invoke` command allows you to invoke tools defined in your configuration directly from the CLI. This is useful for:
- **Ephemeral Invocation:** Executing a tool without spinning up a full MCP server/client.
- **Debugging:** Isolating tool execution logic and testing with various parameter combinations.
{{< notice tip >}}
**Keep configurations minimal:** The `invoke` command initializes *all* resources (sources, tools, etc.) defined in your configuration files during execution. To ensure fast response times, consider using a minimal configuration file containing only the tools you need for the specific invocation.
{{< /notice >}}
## Before you begin
1. Make sure you have the `toolbox` binary installed or built.
2. Make sure you have a valid tool configuration file (e.g., `tools.yaml`).
### Command Usage
The basic syntax for the command is:
```bash
toolbox <source-flags> invoke <tool-name> [params]
```
- `<source-flags>`: Can be `--tools-file`, `--tools-files`, `--tools-folder`, and `--prebuilt`. See the [CLI Reference](../../../reference/cli.md) for details.
- `<tool-name>`: The name of the tool you want to call. This must match the name defined in your `tools.yaml`.
- `[params]`: (Optional) A JSON string representing the arguments for the tool.
## Examples
### 1. Calling a Tool without Parameters
If your tool takes no parameters, simply provide the tool name:
```bash
toolbox --tools-file tools.yaml invoke my-simple-tool
```
### 2. Calling a Tool with Parameters
For tools that require arguments, pass them as a JSON string. Ensure you escape quotes correctly for your shell.
**Example: A tool that takes parameters**
Assuming a tool named `mytool` taking `a` and `b`:
```bash
toolbox --tools-file tools.yaml invoke mytool '{"a": 10, "b": 20}'
```
**Example: A tool that queries a database**
```bash
toolbox --tools-file tools.yaml invoke db-query '{"sql": "SELECT * FROM users LIMIT 5"}'
```
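When scripting invocations, building the `[params]` string with a JSON serializer avoids hand-rolled quoting mistakes. A minimal sketch in Python (the `db-query` tool name and its `sql` parameter are illustrative):

```python
import json
import shlex

# Build the parameter payload programmatically instead of hand-writing JSON;
# json.dumps handles embedded quotes and special characters.
params = {"sql": "SELECT * FROM users WHERE name = 'O''Brien' LIMIT 5"}
arg = json.dumps(params)

# shlex.quote wraps the JSON string safely for a POSIX shell.
cmd = f"toolbox --tools-file tools.yaml invoke db-query {shlex.quote(arg)}"
print(cmd)
```

Passing the resulting `cmd` to a shell hands the tool exactly the JSON you built, regardless of quotes or spaces inside the SQL.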
### 3. Using Prebuilt Configurations
You can also use the `--prebuilt` flag to load prebuilt toolsets.
```bash
toolbox --prebuilt cloudsql-postgres invoke cloudsql-postgres-list-instances
```
## Troubleshooting
- **Tool not found:** Ensure the `<tool-name>` matches exactly what is in your YAML file and that the file is correctly loaded via `--tools-file`.
- **Invalid parameters:** Double-check your JSON syntax. The error message will usually indicate if the JSON parsing failed or if the parameters didn't match the tool's schema.
- **Auth errors:** The `invoke` command currently does not support flows requiring client-side authorization (like OAuth flow initiation via the CLI). It works best for tools using service-side authentication (e.g., Application Default Credentials).
========================================================================
## Toolsets
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Configuration > Toolsets
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/configuration/toolsets/
**Description:** Toolsets allow you to define logical groups of tools to load together for specific agents or applications.
A Toolset allows you to logically group multiple tools together so they can be loaded and managed as a single unit. You can define Toolsets as documents in your `tools.yaml` file.
This is especially useful when you are building a system with multiple AI agents or applications, where each agent only needs access to a specific subset of tools to perform its specialized tasks safely and efficiently.
{{< notice tip >}}
Try organizing your toolsets by the agent's persona or app feature (e.g., `data_analyst_set` vs `customer_support_set`). This keeps your client-side code clean and ensures an agent isn't distracted by tools it doesn't need.
{{< /notice >}}
## Defining Toolsets
In your configuration file, define each toolset by providing a unique `name` and a list of `tools` that belong to that group.
```yaml
kind: toolsets
name: my_first_toolset
tools:
  - my_first_tool
  - my_second_tool
---
kind: toolsets
name: my_second_toolset
tools:
  - my_second_tool
  - my_third_tool
```
## Using toolsets with MCP Toolbox Client SDKs
Once your toolsets are defined in your configuration, you can retrieve them directly from your application code. If you request a toolset without specifying a name, the SDKs will default to loading every tool available on the server.
Here is how to load your toolsets across our supported languages:
### Python
```python
# Load all tools available on the server
all_tools = await client.load_toolset()
# Load only the tools defined in 'my_second_toolset'
my_second_toolset = await client.load_toolset("my_second_toolset")
```
### Javascript/Typescript
```javascript
// Load all tools available on the server
const allTools = await client.loadToolset()
// Load only the tools defined in 'my_second_toolset'
const mySecondToolset = await client.loadToolset("my_second_toolset")
```
### Go
```go
// Load all tools available on the server
allTools, err := client.LoadToolset("", ctx)
// Load only the tools defined in 'my_second_toolset'
mySecondToolset, err := client.LoadToolset("my_second_toolset", ctx)
```
========================================================================
## Generate Agent Skills
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Configuration > Toolsets > Generate Agent Skills
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/configuration/toolsets/generate_skill/
**Description:** How to generate agent skills from a toolset.
The `skills-generate` command allows you to convert a **toolset** into an **Agent Skill**. A toolset is a collection of tools, and the generated skill will contain metadata and execution scripts for all tools within that toolset, complying with the [Agent Skill specification](https://agentskills.io/specification).
## Before you begin
1. Make sure you have the `toolbox` executable in your PATH.
2. Make sure you have [Node.js](https://nodejs.org/) installed on your system.
## Generating a Skill from a Toolset
A skill package consists of a `SKILL.md` file (with required YAML frontmatter) and a set of Node.js scripts. Each tool defined in your toolset maps to a corresponding Node.js script (`.js`) that works across platforms (Linux, macOS, Windows).
### Command Usage
The basic syntax for the command is:
```bash
toolbox <source-flags> skills-generate \
  --name <skill-name> \
  --toolset <toolset-name> \
  --description <description> \
  --output-dir <output-dir>
```
- `<source-flags>`: Can be `--tools-file`, `--tools-files`, `--tools-folder`, and `--prebuilt`. See the [CLI Reference](../../../reference/cli.md) for details.
- `--name`: Name of the generated skill.
- `--description`: Description of the generated skill.
- `--toolset`: (Optional) Name of the toolset to convert into a skill. If not provided, all tools will be included.
- `--output-dir`: (Optional) Directory to output generated skills (default: "skills").
{{< notice note >}}
**Note:** The `<skill-name>` must follow the Agent Skill [naming convention](https://agentskills.io/specification): it must contain only lowercase alphanumeric characters and hyphens, cannot start or end with a hyphen, and cannot contain consecutive hyphens (e.g., `my-skill`, `data-processing`).
{{< /notice >}}
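The naming convention can be checked with a simple regular expression; a sketch (the pattern below is my own reading of the specification, not Toolbox code):

```python
import re

# Lowercase alphanumeric runs joined by single hyphens: this forbids leading,
# trailing, and consecutive hyphens, per the Agent Skill naming convention.
SKILL_NAME = re.compile(r"[a-z0-9]+(-[a-z0-9]+)*")

def is_valid_skill_name(name: str) -> bool:
    return SKILL_NAME.fullmatch(name) is not None

print(is_valid_skill_name("data-processing"))  # True
print(is_valid_skill_name("My-Skill"))         # False: uppercase letters
print(is_valid_skill_name("a--b"))             # False: consecutive hyphens
```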
### Example: Custom Tools File
1. Create a `tools.yaml` file with a toolset and some tools:
```yaml
tools:
  tool_a:
    description: "First tool"
    run:
      command: "echo 'Tool A'"
  tool_b:
    description: "Second tool"
    run:
      command: "echo 'Tool B'"
toolsets:
  my_toolset:
    tools:
      - tool_a
      - tool_b
```
2. Generate the skill:
```bash
toolbox --tools-file tools.yaml skills-generate \
--name "my-skill" \
--toolset "my_toolset" \
--description "A skill containing multiple tools" \
--output-dir "generated-skills"
```
3. The generated skill directory structure:
```text
generated-skills/
└── my-skill/
├── SKILL.md
├── assets/
│ ├── tool_a.yaml
│ └── tool_b.yaml
└── scripts/
├── tool_a.js
└── tool_b.js
```
In this example, the skill contains two Node.js scripts (`tool_a.js` and `tool_b.js`), each mapping to a tool in the original toolset.
### Example: Prebuilt Configuration
You can also generate skills from prebuilt toolsets:
```bash
toolbox --prebuilt alloydb-postgres-admin skills-generate \
--name "alloydb-postgres-admin" \
--description "skill for performing administrative operations on alloydb"
```
## Installing the Generated Skill in Gemini CLI
Once you have generated a skill, you can install it into the Gemini CLI using the `gemini skills install` command.
### Installation Command
Provide the path to the directory containing the generated skill:
```bash
gemini skills install /path/to/generated-skills/my-skill
```
Alternatively, use `~/.gemini/skills` as the `--output-dir` to generate the skill straight into the Gemini CLI's skills directory.
========================================================================
## EmbeddingModels
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Configuration > EmbeddingModels
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/configuration/embedding-models/
**Description:** EmbeddingModels represent services that transform text into vector embeddings for semantic search.
EmbeddingModels represent services that generate vector representations of text
data. In the MCP Toolbox, these models enable **Semantic Queries**, allowing
[Tools](../tools/_index.md) to automatically convert human-readable text into numerical
vectors before using them in a query.
This is primarily used in two scenarios:
- **Vector Ingestion**: Converting a text parameter into a vector string during
an `INSERT` operation.
- **Semantic Search**: Converting a natural language query into a vector to
perform similarity searches.
## Hidden Parameter Duplication (valueFromParam)
When building tools for vector ingestion, you often need the same input string
twice:
1. To store the original text in a TEXT column.
1. To generate the vector embedding for a VECTOR column.
Requesting an Agent (LLM) to output the exact same string twice is inefficient
and error-prone. The `valueFromParam` field solves this by allowing a parameter
to inherit its value from another parameter in the same tool.
### Key Behaviors
1. **Hidden from Manifest:** Parameters with `valueFromParam` set are excluded from
the tool definition sent to the Agent. The Agent does not know this parameter
exists.
1. **Auto-Filled:** When the tool is executed, the Toolbox automatically copies the
value from the referenced parameter before processing embeddings.
## Example
The following configuration defines an embedding model and applies it to
specific tool parameters.
{{< notice tip >}} Use environment variable replacement with the format
`${ENV_NAME}` instead of hardcoding your API keys into the configuration file.
{{< /notice >}}
### Step 1 - Define an Embedding Model
Define an embedding model in the `embeddingModels` section:
```yaml
kind: embeddingModels
name: gemini-model # Name of the embedding model
type: gemini
model: gemini-embedding-001
apiKey: ${GOOGLE_API_KEY}
dimension: 768
```
### Step 2 - Embed Tool Parameters
Using the embedding model defined above, embed your query parameters with the
`embeddedBy` field. Only string-typed parameters can be embedded:
```yaml
# Vector ingestion tool
kind: tools
name: insert_embedding
type: postgres-sql
source: my-pg-instance
statement: |
  INSERT INTO documents (content, embedding)
  VALUES ($1, $2);
parameters:
  - name: content
    type: string
    description: The raw text content to be stored in the database.
  - name: vector_string
    type: string
    # This parameter is hidden from the LLM.
    # It automatically copies the value from 'content' and embeds it.
    valueFromParam: content
    embeddedBy: gemini-model
---
# Semantic search tool
kind: tools
name: search_embedding
type: postgres-sql
source: my-pg-instance
statement: |
  SELECT id, content, embedding <-> $1 AS distance
  FROM documents
  ORDER BY distance LIMIT 1
parameters:
  - name: semantic_search_string
    type: string
    description: The search query that will be converted to a vector.
    embeddedBy: gemini-model # refers to the name of a defined embedding model
```
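To make the `valueFromParam` behavior concrete, here is a toy sketch (my own illustration, not the actual Toolbox implementation) of how hidden parameters are filtered out of the manifest sent to the Agent and then auto-filled at execution time:

```python
def visible_parameters(params):
    """Parameters with valueFromParam set are hidden from the Agent's manifest."""
    return [p for p in params if "valueFromParam" not in p]

def fill_hidden(params, args):
    """At execution time, copy each hidden parameter's value from its source."""
    filled = dict(args)
    for p in params:
        src = p.get("valueFromParam")
        if src is not None:
            filled[p["name"]] = args[src]
    return filled

params = [
    {"name": "content", "type": "string"},
    {"name": "vector_string", "type": "string", "valueFromParam": "content"},
]
print([p["name"] for p in visible_parameters(params)])  # ['content']
print(fill_hidden(params, {"content": "hello world"}))
# {'content': 'hello world', 'vector_string': 'hello world'}
```

The Agent only ever supplies `content`; the server-side copy feeds the embedding step.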
## Kinds of Embedding Models
========================================================================
## Gemini Embedding
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Configuration > EmbeddingModels > Gemini Embedding
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/configuration/embedding-models/gemini/
**Description:** Use Google's Gemini models to generate high-performance text embeddings for vector databases.
## About
Google Gemini provides state-of-the-art embedding models that convert text into
high-dimensional vectors.
### Authentication
Toolbox uses your [Application Default Credentials
(ADC)][adc] to authorize with the
Gemini API client.
Optionally, you can use an [API key][api-key]; you can obtain one
from [Google AI Studio][ai-studio].
We recommend using an API key for testing and using application default
credentials for production.
[adc]: https://cloud.google.com/docs/authentication#adc
[api-key]: https://ai.google.dev/gemini-api/docs/api-key#api-keys
[ai-studio]: https://aistudio.google.com/app/apikey
## Behavior
### Automatic Vectorization
When a tool parameter is configured with `embeddedBy: <model-name>`,
the Toolbox intercepts the raw text input from the client and sends it to the
Gemini API. The resulting numerical array is then formatted before being passed
to your database source.
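As a toy illustration of the bracketed literal format a PostgreSQL `vector` column accepts (Toolbox performs this formatting internally; the exact wire format it uses may differ):

```python
def to_vector_literal(embedding):
    """Format a list of floats as a pgvector-style literal, e.g. '[0.1,0.2]'."""
    return "[" + ",".join(str(x) for x in embedding) + "]"

print(to_vector_literal([0.1, 0.25, 0.5]))  # [0.1,0.25,0.5]
```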
### Dimension Matching
The `dimension` field must match the expected size of your database column
(e.g., a `vector(768)` column in PostgreSQL). This setting is only supported by
newer models (released since 2024); you cannot set it when using the earlier
model (`models/embedding-001`). Check out [available Gemini models][modellist]
for more information.
[modellist]:
https://docs.cloud.google.com/vertex-ai/generative-ai/docs/embeddings/get-text-embeddings#supported-models
## Example
```yaml
kind: embeddingModels
name: gemini-model
type: gemini
model: gemini-embedding-001
apiKey: ${GOOGLE_API_KEY}
dimension: 768
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
## Reference
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|--------------------------------------------------------------|
| type | string | true | Must be `gemini`. |
| model | string | true | The Gemini model ID to use (e.g., `gemini-embedding-001`). |
| apiKey | string | false | Your API Key from Google AI Studio. |
| dimension | integer | false | The number of dimensions in the output vector (e.g., `768`). |
========================================================================
## Prompts
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Configuration > Prompts
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/configuration/prompts/
**Description:** Prompts allow servers to provide structured messages and instructions for interacting with language models.
A `prompt` represents a reusable prompt template that can be retrieved and used
by MCP clients.
A Prompt is essentially a template for a message or a series of messages that
can be sent to a Large Language Model (LLM). The Toolbox server implements the
`prompts/list` and `prompts/get` methods from the [Model Context Protocol
(MCP)](https://modelcontextprotocol.io/docs/getting-started/intro)
specification, allowing clients to discover and retrieve these prompts.
```yaml
kind: prompts
name: code_review
description: "Asks the LLM to analyze code quality and suggest improvements."
messages:
  - content: "Please review the following code for quality, correctness, and potential improvements: \n\n{{.code}}"
arguments:
  - name: "code"
    description: "The code to review"
```
## Prompt Schema
| **field** | **type** | **required** | **description** |
|-------------|--------------------------------|--------------|--------------------------------------------------------------------------|
| description | string | No | A brief explanation of what the prompt does. |
| type | string | No | The type of prompt. Defaults to `"custom"`. |
| messages | [][Message](#message-schema) | Yes | A list of one or more message objects that make up the prompt's content. |
| arguments | [][Argument](#argument-schema) | No | A list of arguments that can be interpolated into the prompt's content. |
## Message Schema
| **field** | **type** | **required** | **description** |
|-----------|----------|--------------|--------------------------------------------------------------------------------------------------------|
| role | string | No | The role of the sender. Can be `"user"` or `"assistant"`. Defaults to `"user"`. |
| content | string | Yes | The text of the message. You can include placeholders for arguments using `{{.argument_name}}` syntax. |
## Argument Schema
An argument can be any [Parameter](../tools/_index.md#specifying-parameters)
type. If the `type` field is not specified, it will default to `string`.
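The `{{.argument_name}}` placeholders follow Go's text/template action syntax. As a toy illustration of the substitution step (not the server's implementation):

```python
import re

def interpolate(template: str, args: dict) -> str:
    """Replace {{.name}} placeholders with the corresponding argument values."""
    return re.sub(r"\{\{\.(\w+)\}\}", lambda m: str(args[m.group(1)]), template)

msg = "Please review the following code: {{.code}}"
print(interpolate(msg, {"code": "def hello(): pass"}))
# Please review the following code: def hello(): pass
```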
## Usage with Gemini CLI
Prompts defined in your `tools.yaml` can be seamlessly integrated with the
Gemini CLI to create [custom slash
commands](https://github.com/google-gemini/gemini-cli/blob/main/docs/tools/mcp-server.md#mcp-prompts-as-slash-commands).
The workflow is as follows:
1. **Discovery:** When the Gemini CLI connects to your Toolbox server, it
automatically calls `prompts/list` to discover all available prompts.
2. **Conversion:** Each discovered prompt is converted into a corresponding
slash command. For example, a prompt named `code_review` becomes the
`/code_review` command in the CLI.
3. **Execution:** You can execute the command as follows:
```bash
/code_review --code="def hello():\n print('world')"
```
4. **Interpolation:** Once all arguments are collected, the CLI calls `prompts/get`
with your provided values to retrieve the final, interpolated prompt, e.g.
```text
Please review the following code for quality, correctness, and potential improvements: \ndef hello():\n print('world')
```
5. **Response:** This completed prompt is then sent to the Gemini model, and the
model's response is displayed back to you in the CLI.
## Kinds of prompts
========================================================================
## Custom
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Configuration > Prompts > Custom
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/configuration/prompts/custom/
**Description:** Custom prompts defined by the user.
Custom prompts are defined by the user to be exposed through their MCP server.
They are the default type for prompts.
## Examples
### Basic Prompt
Here is an example of a simple prompt that takes a single argument, code, and
asks an LLM to review it.
```yaml
kind: prompts
name: code_review
description: "Asks the LLM to analyze code quality and suggest improvements."
messages:
  - content: "Please review the following code for quality, correctness, and potential improvements: \n\n{{.code}}"
arguments:
  - name: "code"
    description: "The code to review"
```
### Multi-message prompt
You can define prompts with multiple messages to set up more complex
conversational contexts, like a role-playing scenario.
```yaml
kind: prompts
name: roleplay_scenario
description: "Sets up a roleplaying scenario with initial messages."
arguments:
  - name: "character"
    description: "The character the AI should embody."
  - name: "situation"
    description: "The initial situation for the roleplay."
messages:
  - role: "user"
    content: "Let's roleplay. You are {{.character}}. The situation is: {{.situation}}"
  - role: "assistant"
    content: "Okay, I understand. I am ready. What happens next?"
```
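For reference, retrieving this prompt via MCP `prompts/get` yields a result shaped roughly as follows (shape per the Model Context Protocol specification; the argument values are illustrative):

```json
{
  "description": "Sets up a roleplaying scenario with initial messages.",
  "messages": [
    {
      "role": "user",
      "content": {
        "type": "text",
        "text": "Let's roleplay. You are a pirate. The situation is: lost at sea"
      }
    },
    {
      "role": "assistant",
      "content": {
        "type": "text",
        "text": "Okay, I understand. I am ready. What happens next?"
      }
    }
  ]
}
```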
## Reference
### Prompt Schema
| **field** | **type** | **required** | **description** |
|-------------|--------------------------------|--------------|--------------------------------------------------------------------------|
| type | string | No | The type of prompt. Must be `"custom"`. |
| description | string | No | A brief explanation of what the prompt does. |
| messages | [][Message](#message-schema) | Yes | A list of one or more message objects that make up the prompt's content. |
| arguments | [][Argument](#argument-schema) | No | A list of arguments that can be interpolated into the prompt's content. |
### Message Schema
Refer to the default prompt [Message Schema](../_index.md#message-schema).
### Argument Schema
Refer to the default prompt [Argument Schema](../_index.md#argument-schema).
========================================================================
## Toolbox UI
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Configuration > Toolbox UI
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/configuration/toolbox-ui/
**Description:** How to effectively use Toolbox UI.
Toolbox UI is a built-in web interface that allows users to visually inspect and
test out configured resources such as tools and toolsets.
## Launching Toolbox UI
To launch Toolbox's interactive UI, use the `--ui` flag.
```sh
./toolbox --ui
```
Toolbox UI will be served from the same host and port as the Toolbox Server,
with the `/ui` suffix. Once Toolbox is launched, the following INFO log with
Toolbox UI's URL will be shown:
```bash
INFO "Toolbox UI is up and running at: http://localhost:5000/ui"
```
## Navigating the Tools Page
The tools page shows all tools loaded from your configuration file. This
corresponds to the default toolset (represented by an empty string). Each tool's
name on this page will exactly match its name in the configuration file.
To view details for a specific tool, click on the tool name. The main content
area will be populated with the tool name, description, and available
parameters.

### Invoking a Tool
1. Click on a Tool
1. Enter appropriate parameters in each parameter field
1. Click "Run Tool"
1. Done! Your results will appear in the response field
1. (Optional) Uncheck "Prettify JSON" to format the response as plain text

### Optional Parameters
Toolbox allows users to add [optional parameters](../tools/_index.md#basic-parameters) with or without a default
value.
To exclude a parameter, uncheck the box to the right of the associated parameter,
and that parameter will not be included in the request body. If the parameter is
not sent, Toolbox will either treat it as a `nil` value or use the `default`
value, if configured. If the parameter is required, Toolbox will throw an error.
When the box is checked, the parameter will be sent exactly as entered in its
input field (e.g., an empty string).
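For a hypothetical tool with a required `query` parameter and an optional `limit` parameter, the two states produce different request bodies:

```text
Box unchecked -> {"query": "flights"}               (limit omitted; nil or default applies)
Box checked   -> {"query": "flights", "limit": ""}  (sent exactly as entered)
```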


### Editing Headers
To edit headers, press the "Edit Headers" button to display the header modal.
Within this modal, users can make direct edits by typing into the header's text
area.
Toolbox UI validates that the headers are in correct JSON format. Other
header-related errors (e.g., incorrect header names or values required by the
tool) will be reported in the Response section after running the tool.

#### Google OAuth
Currently, Toolbox supports Google OAuth 2.0 as an AuthService, which allows
tools to utilize authorized parameters. When a tool uses an authorized
parameter, the parameter will be displayed but not editable, as it will be
populated from the authentication token.
To provide the token, add your Google OAuth ID Token to the request header using
the "Edit Headers" button and modal described above. The key should be the name
of your AuthService as defined in your tool configuration file, suffixed with
`_token`. The value should be your ID token as a string.
1. Select a tool that requires authenticated parameters
1. The auth parameter's text field is greyed out. This is because it cannot be
entered manually and will be parsed from the resolved auth token
1. To update request headers with the token, select "Edit Headers"
1. (Optional) If you wish to manually edit the header, check out the dropdown
"How to extract Google OAuth ID Token manually" for guidance on retrieving the
ID token
1. To edit the header automatically, click the "Auto Setup" button that is
associated with your Auth Profile
1. Enter the Client ID defined in your tools configuration file
1. Click "Continue"
1. Click "Sign in With Google" and log in with your associated Google account.
This should automatically populate the header text area with your token
1. Click "Save"
1. Click "Run Tool"
```json
{
  "Content-Type": "application/json",
  "my-google-auth_token": "YOUR_ID_TOKEN_HERE"
}
```

## Navigating the Toolsets Page
Through the toolsets page, users can search for a specific toolset to retrieve
tools from. Simply enter the toolset name in the search bar, and press "Enter"
to retrieve the associated tools.
If the toolset name is not defined within the tools configuration file, an error
message will be displayed.

========================================================================
## Connect to Toolbox
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Connect to Toolbox
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/connect-to/
**Description:** Learn how to connect your applications, AI agents, CLIs, and IDEs to MCP Toolbox.
Once your MCP Toolbox server is configured and running, the next step is putting those tools to work. Because MCP Toolbox is built on the Model Context Protocol (MCP), it acts as a universal control plane that can be consumed by a wide variety of clients.
Choose your connection method below based on your use case:
## Client SDKs (Application Integration)
If you are building custom AI agents or orchestrating multi-step workflows in code, use our officially supported Client SDKs. These SDKs allow your application to fetch tool schemas and execute queries dynamically at runtime.
* **[Python SDKs](client-sdks/python-sdk/_index.md)**: Connect using our Core SDK, or leverage native integrations for LangChain, LlamaIndex, and the Agent Development Kit (ADK).
* **[JavaScript / TypeScript SDKs](client-sdks/javascript-sdk/_index.md)**: Build Node.js applications using our Core SDK or ADK integrations.
* **[Go SDKs](client-sdks/go-sdk/_index.md)**: Build highly concurrent agents with our Go Core SDK, or use our integrations for Genkit and ADK.
## MCP Clients & CLIs
You do not need to build a full application to use the Toolbox. You can interact with your configured databases and execute tools directly from your terminal using MCP-compatible command-line clients.
* **[MCP Client](mcp-client/_index.md)**: Connect to an MCP client.
* **[Gemini CLI](gemini-cli/_index.md)**: Explore how to use the Gemini CLI and its available datacloud extensions to manage and query your data using natural language commands right from your terminal.
## IDE Integrations
By connecting the Toolbox directly to an MCP-compatible IDE, your AI coding assistant gains real-time access to your database schemas, allowing it to write perfectly tailored queries and application code.
* **[IDEs](ides/_index.md)**: Guide for connecting your IDE to AlloyDB instances.
## Available Connection Methods
========================================================================
## Client SDKs
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Connect to Toolbox > Client SDKs
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/connect-to/client-sdks/
**Description:** Integrate the MCP Toolbox directly into your custom applications and AI agents using our official SDKs for Python, JavaScript/TypeScript, and Go.
Our Client SDKs provide the foundational building blocks for connecting your custom applications to the MCP Toolbox server.
Whether you are writing a simple script to execute a single query or building a complex, multi-agent orchestration system, these SDKs handle the underlying Model Context Protocol (MCP) communication so you can focus on your business logic.
By using our SDKs, your application can dynamically request tools, bind parameters, add authentication, and execute commands at runtime. We offer official support and deep framework integrations across three primary languages:
* **[Python](./python-sdk/_index.md)**: Includes the Core SDK, along with native integrations for popular orchestrators like LangChain, LlamaIndex, and the ADK.
* **[JavaScript / TypeScript](./javascript-sdk/_index.md)**: Includes the Node.js Core SDK and integrations for the Agent Development Kit (ADK).
* **[Go](./go-sdk/_index.md)**: Includes the Core SDK, plus dedicated packages for building agents with Genkit (`tbgenkit`) and the ADK.
Select your preferred language to explore installation instructions, quickstart guides, and framework-specific implementations.
========================================================================
## Python
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Connect to Toolbox > Client SDKs > Python
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/connect-to/client-sdks/python-sdk/
**Description:** Python SDKs to connect to the MCP Toolbox server.
## Overview
The MCP Toolbox service provides a centralized way to manage and expose tools
(like API connectors, database query tools, etc.) for use by GenAI applications.
These Python SDKs act as clients for that service. They handle the communication needed to:
* Fetch tool definitions from your running Toolbox instance.
* Provide convenient Python objects or functions representing those tools.
* Invoke the tools (calling the underlying APIs/services configured in Toolbox).
* Handle authentication and parameter binding as needed.
By using these SDKs, you can easily leverage your Toolbox-managed tools directly
within your Python applications or AI orchestration frameworks.
## Which Package Should I Use?
Choosing the right package depends on how you are building your application:
* [`toolbox-adk`](adk):
Use this package if you are building your application using Google ADK (Agent Development Kit).
It provides tools that are directly compatible with the
Google ADK ecosystem (`BaseTool` / `BaseToolset` interface), handling authentication propagation, header management, and tool wrapping automatically.
* [`toolbox-core`](core):
Use this package if you are not using LangChain/LangGraph or any other
orchestration framework, or if you need a framework-agnostic way to interact
with Toolbox tools (e.g., for custom orchestration logic or direct use in
Python scripts).
* [`toolbox-langchain`](langchain):
Use this package if you are building your application using the LangChain or
LangGraph frameworks. It provides tools that are directly compatible with the
LangChain ecosystem (`BaseTool` interface), simplifying integration.
* [`toolbox-llamaindex`](llamaindex):
Use this package if you are building your application using the LlamaIndex framework.
It provides tools that are directly compatible with the
LlamaIndex ecosystem (`BaseTool` interface), simplifying integration.
## Available Packages
This repository hosts the following Python packages. See the package-specific
README for detailed installation and usage instructions:
| Package | Target Use Case | Integration | Path | Details (README) |
| :------ | :-------------- | :---------- | :--- | :--------------- |
| `toolbox-adk` | Google ADK applications | Google ADK | `packages/toolbox-adk/` | 📄 [View README](https://github.com/googleapis/mcp-toolbox-sdk-python/blob/main/packages/toolbox-adk/README.md) |
| `toolbox-core` | Framework-agnostic / Custom applications | Use directly / Custom | `packages/toolbox-core/` | 📄 [View README](https://github.com/googleapis/mcp-toolbox-sdk-python/blob/main/packages/toolbox-core/README.md) |
| `toolbox-langchain` | LangChain / LangGraph applications | LangChain / LangGraph | `packages/toolbox-langchain/` | 📄 [View README](https://github.com/googleapis/mcp-toolbox-sdk-python/blob/main/packages/toolbox-langchain/README.md) |
| `toolbox-llamaindex` | LlamaIndex applications | LlamaIndex | `packages/toolbox-llamaindex/` | 📄 [View README](https://github.com/googleapis/mcp-toolbox-sdk-python/blob/main/packages/toolbox-llamaindex/README.md) |
## Getting Started
To get started using Toolbox tools with an application, follow these general steps:
1. **Set up and Run the Toolbox Service:**
Before using the SDKs, you need the main MCP Toolbox service running. Follow
the instructions here: [**Toolbox Getting Started
Guide**](https://github.com/googleapis/genai-toolbox?tab=readme-ov-file#getting-started)
2. **Install the Appropriate SDK:**
Choose the package based on your needs (see "[Which Package Should I Use?](#which-package-should-i-use)" above) and install it:
```bash
# For the Google ADK Integration
pip install google-adk[toolbox]
# OR
# For the core, framework-agnostic SDK
pip install toolbox-core
# OR
# For LangChain/LangGraph integration
pip install toolbox-langchain
# OR
# For the LlamaIndex integration
pip install toolbox-llamaindex
```
{{< notice note >}}
Source code for [python-sdk](https://github.com/googleapis/mcp-toolbox-sdk-python)
{{< /notice >}}
========================================================================
## ADK
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Connect to Toolbox > Client SDKs > Python > ADK
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/connect-to/client-sdks/python-sdk/adk/
**Description:** MCP Toolbox SDK for integrating functionalities of MCP Toolbox into your ADK apps.
## Overview
The `toolbox-adk` package provides a Python interface to the MCP Toolbox service, enabling you to load and invoke tools from your own applications.
## Installation
```bash
pip install google-adk[toolbox]
```
## Usage
The primary entry point is the `ToolboxToolset`, which loads tools from a remote Toolbox server and adapts them for use with ADK agents.
{{< notice note>}}
This package contains the core implementation of the `ToolboxToolset`. The `ToolboxToolset` provided in the [`google-adk`](https://github.com/google/adk-python/blob/758d337c76d877e3174c35f06551cc9beb1def06/src/google/adk/tools/toolbox_toolset.py#L35) package is a shim that simply delegates all functionality to this implementation.
{{< /notice >}}
```python
from google.adk.tools.toolbox_toolset import ToolboxToolset
from google.adk.agents import Agent

# Create the Toolset
toolset = ToolboxToolset(
    server_url="http://127.0.0.1:5000"
)

# Use in your ADK Agent
agent = Agent(tools=[toolset])
```
## Transport Protocols
The SDK supports multiple transport protocols for communicating with the Toolbox server. By default, the client uses the latest supported version of the **Model Context Protocol (MCP)**.
You can explicitly select a protocol using the `protocol` option during toolset initialization. This is useful if you need to use the native Toolbox HTTP protocol or pin the client to a specific legacy version of MCP.
{{< notice note>}}
* **Native Toolbox Transport**: This uses the service's native **REST over HTTP** API.
* **MCP Transports**: These options use the **Model Context Protocol over HTTP**.
{{< /notice >}}
### Supported Protocols
| Constant | Description |
| :--- | :--- |
| `Protocol.MCP` | **(Default)** Alias for the default MCP version (currently `2025-06-18`). |
| `Protocol.TOOLBOX` | **DEPRECATED**: The native Toolbox HTTP protocol. Will be removed on March 4, 2026. |
| `Protocol.MCP_v20251125` | MCP Protocol version 2025-11-25. |
| `Protocol.MCP_v20250618` | MCP Protocol version 2025-06-18. |
| `Protocol.MCP_v20250326` | MCP Protocol version 2025-03-26. |
| `Protocol.MCP_v20241105` | MCP Protocol version 2024-11-05. |
{{< notice note >}}
The **Native Toolbox Protocol** (`Protocol.TOOLBOX`) is deprecated and will be removed on **March 4, 2026**.
Please migrate to using the **MCP Protocol** (`Protocol.MCP`), which is the default.
{{< /notice >}}
### Example
If you wish to use the native Toolbox protocol:
```python
from toolbox_adk import ToolboxToolset
from toolbox_core.protocol import Protocol

toolset = ToolboxToolset(
    server_url="http://127.0.0.1:5000",
    protocol=Protocol.TOOLBOX
)
```
If you want to pin the MCP Version 2025-03-26:
```python
from toolbox_adk import ToolboxToolset
from toolbox_core.protocol import Protocol

toolset = ToolboxToolset(
    server_url="http://127.0.0.1:5000",
    protocol=Protocol.MCP_v20250326
)
```
{{< notice tip>}}
By default, it uses **Toolbox Identity** (no authentication), which is suitable for local development.
For production environments (Cloud Run, GKE) or accessing protected resources, see the [Authentication](#authentication) section for strategies like Workload Identity or OAuth2.
{{< /notice >}}
## Authentication
The `ToolboxToolset` requires credentials to authenticate with the Toolbox server. You can configure these credentials using the `CredentialStrategy` factory methods.
The strategies handle two main types of authentication:
* **Client-to-Server**: Securing the connection to the Toolbox server (e.g., Workload Identity, API keys).
* **User Identity**: Authenticating the end-user for specific tools (e.g., 3-legged OAuth).
### 1. Workload Identity (ADC)
*Recommended for Cloud Run, GKE, or local development with `gcloud auth login`.*
Uses the agent's Application Default Credentials (ADC) to generate an OIDC token. This is the standard way for one service to authenticate to another on Google Cloud.
```python
from toolbox_adk import CredentialStrategy, ToolboxToolset

# target_audience: The URL of your Toolbox server
creds = CredentialStrategy.workload_identity(target_audience="https://my-toolbox-service.run.app")

toolset = ToolboxToolset(
    server_url="https://my-toolbox-service.run.app",
    credentials=creds
)
```
### 2. User Identity (OAuth2)
*Recommended for tools that act on behalf of the user.*
Configures the ADK-native interactive 3-legged OAuth flow to get consent and credentials from the end-user at runtime. This strategy is passed to the `ToolboxToolset` just like any other credential strategy.
```python
from toolbox_adk import CredentialStrategy, ToolboxToolset

creds = CredentialStrategy.user_identity(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)

# The toolset will now initiate OAuth flows when required by tools
toolset = ToolboxToolset(
    server_url="...",
    credentials=creds
)
```
### 3. API Key
*Use a static API key passed in a specific header (default: `X-API-Key`).*
```python
from toolbox_adk import CredentialStrategy
# Default header: X-API-Key
creds = CredentialStrategy.api_key(key="my-secret-key")
# Custom header
creds = CredentialStrategy.api_key(key="my-secret-key", header_name="X-My-Header")
```
### 4. HTTP Bearer Token
*Manually supply a static bearer token.*
```python
from toolbox_adk import CredentialStrategy
creds = CredentialStrategy.manual_token(token="your-static-bearer-token")
```
### 5. Manual Google Credentials
*Use an existing `google.auth.credentials.Credentials` object.*
```python
from toolbox_adk import CredentialStrategy
import google.auth
creds_obj, _ = google.auth.default()
creds = CredentialStrategy.manual_credentials(credentials=creds_obj)
```
### 6. Toolbox Identity (No Auth)
*Use this if your Toolbox server does not require authentication (e.g., local development).*
```python
from toolbox_adk import CredentialStrategy
creds = CredentialStrategy.toolbox_identity()
```
### 7. Native ADK Integration
*Convert ADK-native `AuthConfig` or `AuthCredential` objects.*
```python
from toolbox_adk import CredentialStrategy
# From AuthConfig
creds = CredentialStrategy.from_adk_auth_config(auth_config)
# From AuthCredential + AuthScheme
creds = CredentialStrategy.from_adk_credentials(auth_credential, scheme)
```
### 8. Tool-Specific Authentication
*Resolve authentication tokens dynamically for specific tools.*
Some tools may define their own authentication requirements (e.g., Salesforce OAuth, GitHub PAT) via `authSources` in their schema. You can provide a mapping of getters to resolve these tokens at runtime.
```python
async def get_salesforce_token():
    # Fetch token from secret manager or reliable source
    return "sf-access-token"

toolset = ToolboxToolset(
    server_url="...",
    auth_token_getters={
        "salesforce-auth": get_salesforce_token,  # Async callable
        "github-pat": lambda: "my-pat-token"      # Sync callable or static lambda
    }
)
```
## Advanced Configuration
### Additional Headers
You can inject custom headers into every request made to the Toolbox server. This is useful for passing tracing IDs, API keys, or other metadata.
```python
toolset = ToolboxToolset(
    server_url="...",
    additional_headers={
        "X-Trace-ID": "12345",
        "X-My-Header": lambda: get_dynamic_header_value()  # Can be a callable
    }
)
```
### Parameter Binding
Bind values to tool parameters globally across all loaded tools. These values will be **fixed** and **hidden** from the LLM.
* **Schema Hiding**: The bound parameters are removed from the tool schema sent to the model, simplifying the context window.
* **Auto-Injection**: The values are automatically injected into the tool arguments during execution.
```python
toolset = ToolboxToolset(
    server_url="...",
    bound_params={
        # 'region' will be hidden from the LLM and injected automatically
        "region": "us-central1",
        "api_key": lambda: get_api_key()  # Can be a callable
    }
)
```
========================================================================
## Core
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Connect to Toolbox > Client SDKs > Python > Core
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/connect-to/client-sdks/python-sdk/core/
**Description:** MCP Toolbox Core SDK for integrating functionalities of MCP Toolbox into your Agentic apps.
## Overview
The `toolbox-core` package provides a Python interface to the MCP Toolbox service, enabling you to load and invoke tools from your own applications.
## Installation
```bash
pip install toolbox-core
```
{{< notice note >}}
* The primary `ToolboxClient` is asynchronous and requires using `await` for loading and invoking tools, as shown in most examples.
* Asynchronous code needs to run within an event loop (e.g., using `asyncio.run()` or in an async framework). See the [Python `asyncio` documentation](https://docs.python.org/3/library/asyncio-task.html) for more details.
* If you prefer synchronous execution, refer to the [Synchronous Usage](#synchronous-usage) section below.
{{< /notice >}}
{{< notice note>}}
The `ToolboxClient` (and its synchronous counterpart `ToolboxSyncClient`) interacts with network resources using an underlying HTTP client session. You should remember to use a context manager or explicitly call `close()` to clean up these resources. If you provide your own session, you'll need to close it in addition to calling `ToolboxClient.close()`.
{{< /notice >}}
## Quickstart
1. **Start the Toolbox Service**
- Make sure the MCP Toolbox service is running on port `5000` of your local machine. See the [Toolbox Getting Started Guide](../../../../introduction/_index.md#getting-started).
2. **Minimal Example**
```python
import asyncio
from toolbox_core import ToolboxClient

async def main():
    # Replace with the actual URL where your Toolbox service is running
    async with ToolboxClient("http://127.0.0.1:5000") as toolbox:
        weather_tool = await toolbox.load_tool("get_weather")
        result = await weather_tool(location="London")
        print(result)

if __name__ == "__main__":
    asyncio.run(main())
```
{{< notice tip>}}
For a complete, end-to-end example including setting up the service and using an SDK, see the full tutorial: [**Toolbox Quickstart Tutorial**](../../../../../build-with-mcp-toolbox/local_quickstart.md)
{{< /notice >}}
{{< notice note>}}
If you initialize `ToolboxClient` without providing an external session and cannot use `async with`, you must explicitly close the client using `await toolbox.close()` in a `finally` block. This ensures the internally created session is closed.
```py
toolbox = ToolboxClient("http://127.0.0.1:5000")
try:
    ...  # use toolbox
finally:
    await toolbox.close()
```
{{< /notice >}}
## Usage
Import and initialize an MCP Toolbox client, pointing it to the URL of your running
Toolbox service.
```py
from toolbox_core import ToolboxClient

# Replace with your Toolbox service's URL
async with ToolboxClient("http://127.0.0.1:5000") as toolbox:
    ...
```
All interactions for loading and invoking tools happen through this client.
{{< notice tip>}}
For advanced use cases, you can provide an external `aiohttp.ClientSession` during initialization (e.g., `ToolboxClient(url, session=my_session)`). If you provide your own session, you are responsible for managing its lifecycle; `ToolboxClient` *will not* close it.
{{< /notice >}}
{{< notice note>}}
Closing the `ToolboxClient` also closes the underlying network session shared by all tools loaded from that client. As a result, any tool instances you have loaded will cease to function and will raise an error if you attempt to invoke them after the client is closed.
{{< /notice >}}
## Transport Protocols
The SDK supports multiple transport protocols for communicating with the Toolbox server. By default, the client uses the latest supported version of the **Model Context Protocol (MCP)**.
You can explicitly select a protocol using the `protocol` option during client initialization. This is useful if you need to use the native Toolbox HTTP protocol or pin the client to a specific legacy version of MCP.
{{< notice note >}}
* **Native Toolbox Transport**: This uses the service's native **REST over HTTP** API.
* **MCP Transports**: These options use the **Model Context Protocol over HTTP**.
{{< /notice >}}
### Supported Protocols
| Constant | Description |
| :--- | :--- |
| `Protocol.MCP` | **(Default)** Alias for the default MCP version (currently `2025-06-18`). |
| `Protocol.TOOLBOX` | **DEPRECATED**: The native Toolbox HTTP protocol. Will be removed on March 4, 2026. |
| `Protocol.MCP_v20251125` | MCP Protocol version 2025-11-25. |
| `Protocol.MCP_v20250618` | MCP Protocol version 2025-06-18. |
| `Protocol.MCP_v20250326` | MCP Protocol version 2025-03-26. |
| `Protocol.MCP_v20241105` | MCP Protocol version 2024-11-05. |
{{< notice note >}}
The **Native Toolbox Protocol** (`Protocol.TOOLBOX`) is deprecated and will be removed on **March 4, 2026**.
Please migrate to using the **MCP Protocol** (`Protocol.MCP`), which is the default.
{{< /notice >}}
### Example
If you wish to use the native Toolbox protocol:
```py
from toolbox_core import ToolboxClient
from toolbox_core.protocol import Protocol

async with ToolboxClient("http://127.0.0.1:5000", protocol=Protocol.TOOLBOX) as toolbox:
    # Use client
    pass
```
If you want to pin the MCP Version 2025-03-26:
```py
from toolbox_core import ToolboxClient
from toolbox_core.protocol import Protocol

async with ToolboxClient("http://127.0.0.1:5000", protocol=Protocol.MCP_v20250326) as toolbox:
    # Use client
    pass
```
## Loading Tools
You can load tools individually or in groups (toolsets) as defined in your
Toolbox service configuration. Loading a toolset is convenient when working with
multiple related functions, while loading a single tool offers more granular
control.
### Load a toolset
A toolset is a collection of related tools. You can load all tools in a toolset
or a specific one:
```py
# Load all tools
tools = await toolbox.load_toolset()
# Load a specific toolset
tools = await toolbox.load_toolset("my-toolset")
```
### Load a single tool
Loads a specific tool by its unique name. This provides fine-grained control.
```py
tool = await toolbox.load_tool("my-tool")
```
## Invoking Tools
Once loaded, tools behave like awaitable Python functions. You invoke them using
`await` and pass arguments corresponding to the parameters defined in the tool's
configuration within the Toolbox service.
```py
tool = await toolbox.load_tool("my-tool")
result = await tool("foo", bar="baz")
```
{{< notice tip>}}
For a more comprehensive guide on setting up the Toolbox service itself, which you'll need running to use this SDK, please refer to the [Toolbox Quickstart Guide](../../../../../build-with-mcp-toolbox/local_quickstart.md).
{{< /notice >}}
## Synchronous Usage
By default, the `ToolboxClient` and the `ToolboxTool` objects it produces behave like asynchronous Python functions, requiring the use of `await`.
If your application primarily uses synchronous code, or you prefer not to manage an asyncio event loop, you can use the synchronous alternatives provided:
* `ToolboxSyncClient`: The synchronous counterpart to `ToolboxClient`.
* `ToolboxSyncTool`: The synchronous counterpart to `ToolboxTool`.
The `ToolboxSyncClient` handles communication with the Toolbox service synchronously and produces `ToolboxSyncTool` instances when you load tools. You do not use the `await` keyword when interacting with these synchronous versions.
```py
from toolbox_core import ToolboxSyncClient

with ToolboxSyncClient("http://127.0.0.1:5000") as toolbox:
    weather_tool = toolbox.load_tool("get_weather")
    result = weather_tool(location="Paris")
    print(result)
```
{{< notice tip>}}
While synchronous invocation is available for convenience, it's generally considered best practice to use asynchronous operations (like those provided by the default `ToolboxClient` and `ToolboxTool`) for an I/O-bound task like tool invocation. Asynchronous programming allows for cooperative multitasking, often leading to better performance and resource utilization, especially in applications handling concurrent requests.
{{< /notice >}}
## Use with LangGraph
The Toolbox Core SDK integrates smoothly with frameworks like LangGraph,
allowing you to incorporate tools managed by the Toolbox service into your
agentic workflows.
{{< notice tip>}}
The loaded tools (both async `ToolboxTool` and sync `ToolboxSyncTool`) are callable and can often be used directly. However, to ensure parameter descriptions from Google-style docstrings are accurately parsed and made available to the LLM (via `bind_tools()`) and LangGraph internals, it's recommended to wrap the loaded tools using LangChain's [`StructuredTool`](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.structured.StructuredTool.html).
{{< /notice >}}
Here's a conceptual example adapting the [official LangGraph tool calling
guide](https://langchain-ai.github.io/langgraph/how-tos/tool-calling):
```py
import asyncio
from typing import Annotated

from typing_extensions import TypedDict
from langchain_core.messages import HumanMessage, BaseMessage
from toolbox_core import ToolboxClient
from langchain_google_vertexai import ChatVertexAI
from langgraph.graph import StateGraph, START, END
from langgraph.prebuilt import ToolNode
from langchain.tools import StructuredTool
from langgraph.graph.message import add_messages


class State(TypedDict):
    messages: Annotated[list[BaseMessage], add_messages]


async def main():
    async with ToolboxClient("http://127.0.0.1:5000") as toolbox:
        tools = await toolbox.load_toolset()
        wrapped_tools = [StructuredTool.from_function(tool, parse_docstring=True) for tool in tools]

        model_with_tools = ChatVertexAI(model="gemini-3-flash-preview").bind_tools(wrapped_tools)
        tool_node = ToolNode(wrapped_tools)

        def call_agent(state: State):
            response = model_with_tools.invoke(state["messages"])
            return {"messages": [response]}

        def should_continue(state: State):
            last_message = state["messages"][-1]
            if last_message.tool_calls:
                return "tools"
            return END

        graph_builder = StateGraph(State)
        graph_builder.add_node("agent", call_agent)
        graph_builder.add_node("tools", tool_node)
        graph_builder.add_edge(START, "agent")
        graph_builder.add_conditional_edges(
            "agent",
            should_continue,
        )
        graph_builder.add_edge("tools", "agent")
        app = graph_builder.compile()

        prompt = "What is the weather in London?"
        inputs = {"messages": [HumanMessage(content=prompt)]}
        print(f"User: {prompt}\n")
        print("--- Streaming Agent Steps ---")
        events = app.stream(
            inputs,
            stream_mode="values",
        )
        for event in events:
            event["messages"][-1].pretty_print()
            print("\n---\n")


asyncio.run(main())
```
## Client to Server Authentication
This section describes how to authenticate the ToolboxClient itself when
connecting to an MCP Toolbox server instance that requires authentication. This is
crucial for securing your Toolbox server endpoint, especially when deployed on
platforms like Cloud Run, GKE, or any environment where unauthenticated access is restricted.
This client-to-server authentication ensures that the Toolbox server can verify
the identity of the client making the request before any tool is loaded or
called. It is different from [Authenticating Tools](#authenticating-tools),
which deals with providing credentials for specific tools within an already
connected Toolbox session.
### When is Client-to-Server Authentication Needed?
You'll need this type of authentication if your Toolbox server is configured to
deny unauthenticated requests. For example:
- Your Toolbox server is deployed on Cloud Run and configured to "Require authentication."
- Your server is behind an Identity-Aware Proxy (IAP) or a similar
authentication layer.
- You have custom authentication middleware on your self-hosted Toolbox server.
Without proper client authentication in these scenarios, attempts to connect or
make calls (like `load_tool`) will likely fail with `Unauthorized` errors.
### How it works
The `ToolboxClient` (and `ToolboxSyncClient`) allows you to specify functions
(or coroutines for the async client) that dynamically generate HTTP headers for
every request sent to the Toolbox server. The most common use case is to add an
Authorization header with a bearer token (e.g., a Google ID token).
These header-generating functions are called just before each request, ensuring
that fresh credentials or header values can be used.
### Configuration
You can configure these dynamic headers as seen below:
```python
from toolbox_core import ToolboxClient

async with ToolboxClient("toolbox-url", client_headers={"header1": header1_getter, "header2": header2_getter, ...}) as client:
    # Use client
    pass
```
### Authenticating with Google Cloud Servers
For Toolbox servers hosted on Google Cloud (e.g., Cloud Run) and requiring
`Google ID token` authentication, the helper module
[auth_methods](https://github.com/googleapis/mcp-toolbox-sdk-python/blob/main/packages/toolbox-core/src/toolbox_core/auth_methods.py) provides utility functions.
### Step by Step Guide for Cloud Run
1. **Configure Permissions**: [Grant](https://cloud.google.com/run/docs/securing/managing-access#service-add-principals) the `roles/run.invoker` IAM role on the Cloud
Run service to the principal. This could be your `user account email` or a
`service account`.
2. **Configure Credentials**
- Local Development: Set up
[ADC](https://cloud.google.com/docs/authentication/set-up-adc-local-dev-environment).
- Google Cloud Environments: When running within Google Cloud (e.g., Compute
Engine, GKE, another Cloud Run service, Cloud Functions), ADC is typically
configured automatically, using the environment's default service account.
3. **Connect to the Toolbox Server**
```python
from toolbox_core import ToolboxClient, auth_methods

auth_token_provider = auth_methods.aget_google_id_token(URL)  # can also use sync method

async with ToolboxClient(
    URL,
    client_headers={"Authorization": auth_token_provider},
) as client:
    tools = await client.load_toolset()
    # Now, you can use the client as usual.
```
## Authenticating Tools
{{< notice info >}}
**Always use HTTPS** to connect your application with the Toolbox service, especially in **production environments** or whenever the communication involves **sensitive data** (including scenarios where tools require authentication tokens). Using plain HTTP lacks encryption and exposes your application and data to significant security risks, such as eavesdropping and tampering.
{{< /notice >}}
Tools can be configured within the Toolbox service to require authentication,
ensuring only authorized users or applications can invoke them, especially when
accessing sensitive data.
### When is Authentication Needed?
Authentication is configured per-tool within the Toolbox service itself. If a
tool you intend to use is marked as requiring authentication in the service, you
must configure the SDK client to provide the necessary credentials (currently
OAuth2 tokens) when invoking that specific tool.
### Supported Authentication Mechanisms
The Toolbox service enables secure tool usage through **Authenticated Parameters**. For detailed information on how these mechanisms work within the Toolbox service and how to configure them, please refer to [Authenticated Parameters](../../../../configuration/tools/_index.md#authenticated-parameters)
### Step 1: Configure Tools in Toolbox Service
First, ensure the target tool(s) are configured correctly in the Toolbox service
to require authentication. Refer to the [Authenticated Parameters](../../../../configuration/tools/_index.md#authenticated-parameters)
for instructions.
### Step 2: Configure SDK Client
Your application needs a way to obtain the required OAuth2 token for the
authenticated user. The SDK requires you to provide a function capable of
retrieving this token *when the tool is invoked*.
#### Provide an ID Token Retriever Function
You must provide the SDK with a function (sync or async) that returns the
necessary token when called. The implementation depends on your application's
authentication flow (e.g., retrieving a stored token, initiating an OAuth flow).
{{< notice info>}}
The name used when registering the getter function with the SDK (e.g., `"my_api_token"`) must exactly match the `name` of the corresponding `authServices` entry defined in the tool's configuration within the Toolbox service.
{{< /notice >}}
```py
async def get_auth_token():
    # ... Logic to retrieve ID token (e.g., from local storage, OAuth flow)
    # This example just returns a placeholder. Replace with your actual token retrieval.
    return "YOUR_ID_TOKEN"  # Placeholder
```
{{< notice tip>}}
Your token retriever function is invoked every time an authenticated parameter requires a token for a tool call. Consider implementing caching logic within this function to avoid redundant token fetching or generation, especially for tokens with longer validity periods or if the retrieval process is resource-intensive.
{{< /notice >}}
#### Option A: Add Authentication to a Loaded Tool
You can add the token retriever function to a tool object *after* it has been
loaded. This modifies the specific tool instance.
```py
async with ToolboxClient("http://127.0.0.1:5000") as toolbox:
    tool = await toolbox.load_tool("my-tool")

    auth_tool = tool.add_auth_token_getter("my_auth", get_auth_token)  # Single token
    # OR
    multi_auth_tool = tool.add_auth_token_getters({
        "my_auth_1": get_auth_token_1,
        "my_auth_2": get_auth_token_2,
    })  # Multiple tokens
```
#### Option B: Add Authentication While Loading Tools
You can provide the token retriever(s) directly during the `load_tool` or
`load_toolset` calls. This applies the authentication configuration only to the
tools loaded in that specific call, without modifying the original tool objects
if they were loaded previously.
```py
auth_tool = await toolbox.load_tool("my-tool", auth_token_getters={"my_auth": get_auth_token})
# OR
auth_tools = await toolbox.load_toolset(auth_token_getters={"my_auth": get_auth_token})
```
{{< notice >}}
Adding auth tokens during loading only affects the tools loaded within that call.
{{< /notice >}}
### Complete Authentication Example
```py
import asyncio
from toolbox_core import ToolboxClient

async def get_auth_token():
    # ... Logic to retrieve ID token (e.g., from local storage, OAuth flow)
    # This example just returns a placeholder. Replace with your actual token retrieval.
    return "YOUR_ID_TOKEN"  # Placeholder

async def main():
    async with ToolboxClient("http://127.0.0.1:5000") as toolbox:
        tool = await toolbox.load_tool("my-tool")
        auth_tool = tool.add_auth_token_getters({"my_auth": get_auth_token})
        result = await auth_tool(input="some input")
        print(result)

asyncio.run(main())
```
{{< notice >}}
An auth token getter registered for a specific name (e.g., `GOOGLE_ID`) will replace any client header with the same name followed by `_token` (e.g., `GOOGLE_ID_token`).
{{< /notice >}}
## Parameter Binding
The SDK allows you to pre-set, or "bind", values for specific tool parameters
before the tool is invoked or even passed to an LLM. These bound values are
fixed and will not be requested or modified by the LLM during tool use.
### Why Bind Parameters?
- **Protecting sensitive information:** API keys, secrets, etc.
- **Enforcing consistency:** Ensuring specific values for certain parameters.
- **Pre-filling known data:** Providing defaults or context.
{{< notice info >}}
The parameter names used for binding (e.g., `"api_key"`) must exactly match the parameter names defined in the tool’s configuration within the Toolbox service.
{{< /notice >}}
{{< notice >}}
You do not need to modify the tool’s configuration in the Toolbox service to bind parameter values using the SDK.
{{< /notice >}}
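Conceptually, binding works like pre-filling keyword arguments. The sketch below is not the SDK's implementation, only an illustration of the semantics: the bound value is fixed, and any attempt to supply it again is rejected.

```python
def bind_params(func, bound):
    """Illustrative only: pre-fill `bound` kwargs; callers cannot override them."""
    def wrapper(**kwargs):
        overlap = bound.keys() & kwargs.keys()
        if overlap:
            raise ValueError(f"parameters already bound: {sorted(overlap)}")
        return func(**kwargs, **bound)
    return wrapper

def lookup(api_key, query):
    # A stand-in "tool" that would normally need the API key from its caller.
    return f"{api_key}:{query}"

safe_lookup = bind_params(lookup, {"api_key": "server-side-secret"})
print(safe_lookup(query="hotels"))  # the caller (or LLM) only ever supplies `query`
```

This is why binding is a good fit for secrets: the sensitive value never appears in the tool schema the LLM sees.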
### Option A: Binding Parameters to a Loaded Tool
Bind values to a tool object *after* it has been loaded. This modifies the
specific tool instance.
```py
async with ToolboxClient("http://127.0.0.1:5000") as toolbox:
    tool = await toolbox.load_tool("my-tool")

    bound_tool = tool.bind_param("param", "value")
    # OR
    bound_tool = tool.bind_params({"param": "value"})
```
### Option B: Binding Parameters While Loading Tools
Specify bound parameters directly when loading tools. This applies the binding
only to the tools loaded in that specific call.
```py
bound_tool = await toolbox.load_tool("my-tool", bound_params={"param": "value"})
# OR
bound_tools = await toolbox.load_toolset(bound_params={"param": "value"})
```
### Binding Dynamic Values
Instead of a static value, you can bind a parameter to a synchronous or
asynchronous function. This function will be called *each time* the tool is
invoked to dynamically determine the parameter's value at runtime.
{{< notice >}}
You don't need to modify tool configurations to bind parameter values.
{{< /notice >}}
```py
async def get_dynamic_value():
    # Logic to determine the value
    return "dynamic_value"

# Assuming `tool` is a loaded tool instance from a ToolboxClient
dynamic_bound_tool = tool.bind_param("param", get_dynamic_value)
```
========================================================================
## LangChain/LangGraph
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Connect to Toolbox > Client SDKs > Python > LangChain/LangGraph
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/connect-to/client-sdks/python-sdk/langchain/
**Description:** MCP Toolbox SDK for integrating functionalities of MCP Toolbox into your LangChain/LangGraph apps.
## Overview
The `toolbox-langchain` package provides a Python interface to the MCP Toolbox service, enabling you to load and invoke tools from your own applications.
## Installation
```bash
pip install toolbox-langchain
```
## Quickstart
Here's a minimal example to get you started using
[LangGraph](https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.chat_agent_executor.create_react_agent):
```py
import asyncio

from toolbox_langchain import ToolboxClient
from langchain_google_vertexai import ChatVertexAI
from langgraph.prebuilt import create_react_agent

async def main():
    async with ToolboxClient("http://127.0.0.1:5000") as toolbox:
        tools = toolbox.load_toolset()

        model = ChatVertexAI(model="gemini-3-flash-preview")
        agent = create_react_agent(model, tools)

        prompt = "How's the weather today?"

        for s in agent.stream({"messages": [("user", prompt)]}, stream_mode="values"):
            message = s["messages"][-1]
            if isinstance(message, tuple):
                print(message)
            else:
                message.pretty_print()

asyncio.run(main())
```
{{< notice tip >}}
For a complete, end-to-end example including setting up the service and using an SDK, see the full tutorial: [Toolbox Quickstart Tutorial](../../../../../build-with-mcp-toolbox/local_quickstart.md)
{{< /notice >}}
## Usage
Import and initialize the toolbox client.
```py
from toolbox_langchain import ToolboxClient

# Replace with your Toolbox service's URL
async with ToolboxClient("http://127.0.0.1:5000") as toolbox:
    ...  # use the client here
```
## Transport Protocols
The SDK supports multiple transport protocols for communicating with the Toolbox server. By default, the client uses the latest supported version of the **Model Context Protocol (MCP)**.
You can explicitly select a protocol using the `protocol` option during client initialization. This is useful if you need to use the native Toolbox HTTP protocol or pin the client to a specific legacy version of MCP.
{{< notice note >}}
* **Native Toolbox Transport**: This uses the service's native **REST over HTTP** API.
* **MCP Transports**: These options use the **Model Context Protocol over HTTP**.
{{< /notice >}}
### Supported Protocols
| Constant | Description |
| :--- | :--- |
| `Protocol.MCP` | **(Default)** Alias for the default MCP version (currently `2025-06-18`). |
| `Protocol.TOOLBOX` | **DEPRECATED**: The native Toolbox HTTP protocol. Will be removed on March 4, 2026. |
| `Protocol.MCP_v20251125` | MCP Protocol version 2025-11-25. |
| `Protocol.MCP_v20250618` | MCP Protocol version 2025-06-18. |
| `Protocol.MCP_v20250326` | MCP Protocol version 2025-03-26. |
| `Protocol.MCP_v20241105` | MCP Protocol version 2024-11-05. |
{{< notice note >}}
The **Native Toolbox Protocol** (`Protocol.TOOLBOX`) is deprecated and will be removed on **March 4, 2026**.
Please migrate to using the **MCP Protocol** (`Protocol.MCP`), which is the default.
{{< /notice >}}
### Example
If you wish to use the native Toolbox protocol:
```py
from toolbox_langchain import ToolboxClient
from toolbox_core.protocol import Protocol

async with ToolboxClient("http://127.0.0.1:5000", protocol=Protocol.TOOLBOX) as toolbox:
    # Use client
    pass
```
If you want to pin the MCP Version 2025-03-26:
```py
from toolbox_langchain import ToolboxClient
from toolbox_core.protocol import Protocol

async with ToolboxClient("http://127.0.0.1:5000", protocol=Protocol.MCP_v20250326) as toolbox:
    # Use client
    pass
```
## Loading Tools
### Load a toolset
A toolset is a collection of related tools. You can load all tools in a toolset
or a specific one:
```py
# Load all tools
tools = toolbox.load_toolset()
# Load a specific toolset
tools = toolbox.load_toolset("my-toolset")
```
### Load a single tool
```py
tool = toolbox.load_tool("my-tool")
```
Loading individual tools gives you finer-grained control over which tools are
available to your LLM agent.
## Use with LangChain
LangChain's agents can dynamically choose and execute tools based on the user
input. Include tools loaded from the Toolbox SDK in the agent's toolkit:
```py
from langchain_google_vertexai import ChatVertexAI
model = ChatVertexAI(model="gemini-3-flash-preview")
# Initialize agent with tools
agent = model.bind_tools(tools)
# Run the agent
result = agent.invoke("Do something with the tools")
```
## Use with LangGraph
Integrate the Toolbox SDK with LangGraph to use Toolbox service tools within a
graph-based workflow. Follow the [official
guide](https://langchain-ai.github.io/langgraph/) with minimal changes.
### Represent Tools as Nodes
Represent each tool as a LangGraph node, encapsulating the tool's execution within the node's functionality:
```py
from langchain_google_vertexai import ChatVertexAI
from langgraph.graph import StateGraph, MessagesState
from langgraph.prebuilt import ToolNode

# `tools` is the list loaded earlier from the ToolboxClient.
# Binding the tools to the model lets it emit tool calls.
model = ChatVertexAI(model="gemini-3-flash-preview").bind_tools(tools)

# Define the function that calls the model
def call_model(state: MessagesState):
    messages = state["messages"]
    response = model.invoke(messages)
    return {"messages": [response]}  # Return a list to add to existing messages

builder = StateGraph(MessagesState)
tool_node = ToolNode(tools)

builder.add_node("agent", call_model)
builder.add_node("tools", tool_node)
```
### Connect Tools with LLM
Connect tool nodes with LLM nodes. The LLM decides which tool to use based on
input or context. Tool output can be fed back into the LLM:
```py
from typing import Literal

from langgraph.graph import END, START
from langchain_core.messages import HumanMessage

# Define the function that determines whether to continue or not
def should_continue(state: MessagesState) -> Literal["tools", END]:
    messages = state["messages"]
    last_message = messages[-1]
    if last_message.tool_calls:
        return "tools"  # Route to "tools" node if LLM makes a tool call
    return END  # Otherwise, stop

builder.add_edge(START, "agent")
builder.add_conditional_edges("agent", should_continue)
builder.add_edge("tools", "agent")

graph = builder.compile()
graph.invoke({"messages": [HumanMessage(content="Do something with the tools")]})
```
## Manual usage
Execute a tool manually using the `invoke` method:
```py
result = tools[0].invoke({"name": "Alice", "age": 30})
```
This is useful for testing tools or when you need precise control over tool
execution outside of an agent framework.
## Client to Server Authentication
This section describes how to authenticate the ToolboxClient itself when
connecting to a Toolbox server instance that requires authentication. This is
crucial for securing your Toolbox server endpoint, especially when deployed on
platforms like Cloud Run, GKE, or any environment where unauthenticated access
is restricted.
This client-to-server authentication ensures that the Toolbox server can verify
the identity of the client making the request before any tool is loaded or
called. It is different from [Authenticating Tools](#authenticating-tools),
which deals with providing credentials for specific tools within an already
connected Toolbox session.
### When is Client-to-Server Authentication Needed?
You'll need this type of authentication if your Toolbox server is configured to
deny unauthenticated requests. For example:
- Your Toolbox server is deployed on Cloud Run and configured to "Require authentication."
- Your server is behind an Identity-Aware Proxy (IAP) or a similar
authentication layer.
- You have custom authentication middleware on your self-hosted Toolbox server.
Without proper client authentication in these scenarios, attempts to connect or
make calls (like `load_tool`) will likely fail with `Unauthorized` errors.
### How it works
The `ToolboxClient` allows you to specify functions (or coroutines for the async
client) that dynamically generate HTTP headers for every request sent to the
Toolbox server. The most common use case is to add an Authorization header with
a bearer token (e.g., a Google ID token).
These header-generating functions are called just before each request, ensuring
that fresh credentials or header values can be used.
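Concretely, a header getter is just a zero-argument function that returns the header value. The sketch below stubs out the token source (`fetch_id_token` is a hypothetical placeholder) so the expected shape is clear:

```python
def fetch_id_token():
    # Stand-in for your real token source (e.g., Google ADC, an OAuth flow).
    return "YOUR_ID_TOKEN"

def authorization_getter():
    # Called by the client just before each request, so the value can be fresh.
    return f"Bearer {fetch_id_token()}"

# Passed at client construction, e.g.:
# client_headers={"Authorization": authorization_getter}
```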
### Configuration
You can configure these dynamic headers as follows:
```python
from toolbox_langchain import ToolboxClient

async with ToolboxClient(
    "toolbox-url",
    client_headers={"header1": header1_getter, "header2": header2_getter},
) as client:
    ...  # use the client here
```
### Authenticating with Google Cloud Servers
For Toolbox servers hosted on Google Cloud (e.g., Cloud Run) and requiring
`Google ID token` authentication, the helper module
[auth_methods](https://github.com/googleapis/mcp-toolbox-sdk-python/blob/main/packages/toolbox-core/src/toolbox_core/auth_methods.py) provides utility functions.
### Step by Step Guide for Cloud Run
1. **Configure Permissions**:
[Grant](https://cloud.google.com/run/docs/securing/managing-access#service-add-principals)
the `roles/run.invoker` IAM role on the Cloud
Run service to the principal. This could be your `user account email` or a
`service account`.
2. **Configure Credentials**
- Local Development: Set up
[ADC](https://cloud.google.com/docs/authentication/set-up-adc-local-dev-environment).
- Google Cloud Environments: When running within Google Cloud (e.g., Compute
Engine, GKE, another Cloud Run service, Cloud Functions), ADC is typically
configured automatically, using the environment's default service account.
3. **Connect to the Toolbox Server**
```python
from toolbox_langchain import ToolboxClient
from toolbox_core import auth_methods

URL = "https://your-toolbox-service-url"  # Replace with your server's URL

auth_token_provider = auth_methods.aget_google_id_token(URL)  # can also use sync method

async with ToolboxClient(
    URL,
    client_headers={"Authorization": auth_token_provider},
) as client:
    tools = client.load_toolset()

    # Now, you can use the client as usual.
```
## Authenticating Tools
{{< notice info >}}
Always use HTTPS to connect your application with the Toolbox service, especially when using tools with authentication configured. Using HTTP exposes your application to serious security risks.
{{< /notice >}}
Some tools require user authentication to access sensitive data.
### Supported Authentication Mechanisms
Toolbox currently supports authentication using the [OIDC
protocol](https://openid.net/specs/openid-connect-core-1_0.html) with [ID
tokens](https://openid.net/specs/openid-connect-core-1_0.html#IDToken) (not
access tokens) for [Google OAuth
2.0](https://cloud.google.com/apigee/docs/api-platform/security/oauth/oauth-home).
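When debugging, it can help to confirm you are actually passing an ID token (a JWT carrying claims such as `aud`, `iss`, and `email`) rather than an opaque access token. The helper below is a local debugging aid only, not part of the SDK, and it does not verify the token's signature:

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode (WITHOUT verifying!) the payload segment of a JWT ID token."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```

An ID token's payload should include an `aud` claim; if the middle segment does not decode to JSON claims at all, you likely have an access token instead.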
### Configure Tools
Refer to [these
instructions](../../../../configuration/tools/_index.md#authenticated-parameters) on
configuring tools for authenticated parameters.
### Configure SDK
You need a method to retrieve an ID token from your authentication service:
```py
async def get_auth_token():
    # ... Logic to retrieve ID token (e.g., from local storage, OAuth flow)
    # This example just returns a placeholder. Replace with your actual token retrieval.
    return "YOUR_ID_TOKEN"  # Placeholder
```
#### Add Authentication to a Tool
```py
async with ToolboxClient("http://127.0.0.1:5000") as toolbox:
    tools = toolbox.load_toolset()

    auth_tool = tools[0].add_auth_token_getter("my_auth", get_auth_token)  # Single token
    multi_auth_tool = tools[0].add_auth_token_getters({"auth_1": get_auth_1, "auth_2": get_auth_2})  # Multiple tokens
    # OR
    auth_tools = [tool.add_auth_token_getter("my_auth", get_auth_token) for tool in tools]
```
#### Add Authentication While Loading
```py
auth_tool = toolbox.load_tool("my-tool", auth_token_getters={"my_auth": get_auth_token})
auth_tools = toolbox.load_toolset(auth_token_getters={"my_auth": get_auth_token})
```
{{< notice note >}}
Adding auth tokens during loading only affects the tools loaded within that call.
{{< /notice >}}
### Complete Example
```py
import asyncio

from toolbox_langchain import ToolboxClient

async def get_auth_token():
    # ... Logic to retrieve ID token (e.g., from local storage, OAuth flow)
    # This example just returns a placeholder. Replace with your actual token retrieval.
    return "YOUR_ID_TOKEN"  # Placeholder

async def main():
    async with ToolboxClient("http://127.0.0.1:5000") as toolbox:
        tool = toolbox.load_tool("my-tool")
        auth_tool = tool.add_auth_token_getter("my_auth", get_auth_token)
        result = auth_tool.invoke({"input": "some input"})
        print(result)

asyncio.run(main())
```
## Parameter Binding
Predetermine values for tool parameters using the SDK. These values won't be
modified by the LLM. This is useful for:
* **Protecting sensitive information:** API keys, secrets, etc.
* **Enforcing consistency:** Ensuring specific values for certain parameters.
* **Pre-filling known data:** Providing defaults or context.
### Binding Parameters to a Tool
```py
async with ToolboxClient("http://127.0.0.1:5000") as toolbox:
    tools = toolbox.load_toolset()

    bound_tool = tools[0].bind_param("param", "value")  # Single param
    multi_bound_tool = tools[0].bind_params({"param1": "value1", "param2": "value2"})  # Multiple params
    # OR
    bound_tools = [tool.bind_param("param", "value") for tool in tools]
```
### Binding Parameters While Loading
```py
bound_tool = toolbox.load_tool("my-tool", bound_params={"param": "value"})
bound_tools = toolbox.load_toolset(bound_params={"param": "value"})
```
{{< notice note >}}
Bound values during loading only affect the tools loaded in that call.
{{< /notice >}}
### Binding Dynamic Values
Use a function to bind dynamic values:
```py
def get_dynamic_value():
    # Logic to determine the value
    return "dynamic_value"

dynamic_bound_tool = tool.bind_param("param", get_dynamic_value)
```
{{< notice note >}}
You don’t need to modify tool configurations to bind parameter values.
{{< /notice >}}
## Asynchronous Usage
For better performance through [cooperative
multitasking](https://en.wikipedia.org/wiki/Cooperative_multitasking), you can
use the asynchronous interfaces of the `ToolboxClient`.
{{< notice note >}}
Asynchronous interfaces like `aload_tool` and `aload_toolset` require an asynchronous environment. For guidance on running asynchronous Python programs, see [asyncio documentation](https://docs.python.org/3/library/asyncio-runner.html#running-an-asyncio-program).
{{< /notice >}}
```py
import asyncio

from toolbox_langchain import ToolboxClient

async def main():
    async with ToolboxClient("http://127.0.0.1:5000") as toolbox:
        tool = await toolbox.aload_tool("my-tool")
        tools = await toolbox.aload_toolset()
        response = await tool.ainvoke({"input": "some input"})

if __name__ == "__main__":
    asyncio.run(main())
```
========================================================================
## LlamaIndex
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Connect to Toolbox > Client SDKs > Python > LlamaIndex
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/connect-to/client-sdks/python-sdk/llamaindex/
**Description:** MCP Toolbox LlamaIndex SDK for integrating functionalities of MCP Toolbox into your LlamaIndex apps.
## Overview
The `toolbox-llamaindex` package provides a Python interface to the MCP Toolbox service, enabling you to load and invoke tools from your own applications.
## Installation
```bash
pip install toolbox-llamaindex
```
## Quickstart
Here's a minimal example to get you started using
[LlamaIndex](https://docs.llamaindex.ai/en/stable/#getting-started):
```py
import asyncio

from llama_index.llms.google_genai import GoogleGenAI
from llama_index.core.agent.workflow import AgentWorkflow

from toolbox_llamaindex import ToolboxClient

async def run_agent():
    async with ToolboxClient("http://127.0.0.1:5000") as toolbox:
        tools = toolbox.load_toolset()

        vertex_model = GoogleGenAI(
            model="gemini-3-flash-preview",
            vertexai_config={"project": "project-id", "location": "us-central1"},
        )
        agent = AgentWorkflow.from_tools_or_functions(
            tools,
            llm=vertex_model,
            system_prompt="You are a helpful assistant.",
        )
        response = await agent.run(user_msg="Get some response from the agent.")
        print(response)

asyncio.run(run_agent())
```
{{< notice tip >}}
For a complete, end-to-end example including setting up the service and using an SDK, see the full tutorial: [Toolbox Quickstart Tutorial](../../../../../build-with-mcp-toolbox/local_quickstart.md)
{{< /notice >}}
## Usage
Import and initialize the toolbox client.
```py
from toolbox_llamaindex import ToolboxClient

# Replace with your Toolbox service's URL
async with ToolboxClient("http://127.0.0.1:5000") as toolbox:
    ...  # use the client here
```
## Transport Protocols
The SDK supports multiple transport protocols for communicating with the Toolbox server. By default, the client uses the latest supported version of the **Model Context Protocol (MCP)**.
You can explicitly select a protocol using the `protocol` option during client initialization. This is useful if you need to use the native Toolbox HTTP protocol or pin the client to a specific legacy version of MCP.
{{< notice note >}}
* **Native Toolbox Transport**: This uses the service's native **REST over HTTP** API.
* **MCP Transports**: These options use the **Model Context Protocol over HTTP**.
{{< /notice >}}
### Supported Protocols
| Constant | Description |
| :--- | :--- |
| `Protocol.MCP` | **(Default)** Alias for the default MCP version (currently `2025-06-18`). |
| `Protocol.TOOLBOX` | **DEPRECATED**: The native Toolbox HTTP protocol. Will be removed on March 4, 2026. |
| `Protocol.MCP_v20251125` | MCP Protocol version 2025-11-25. |
| `Protocol.MCP_v20250618` | MCP Protocol version 2025-06-18. |
| `Protocol.MCP_v20250326` | MCP Protocol version 2025-03-26. |
| `Protocol.MCP_v20241105` | MCP Protocol version 2024-11-05. |
{{< notice note >}}
The **Native Toolbox Protocol** (`Protocol.TOOLBOX`) is deprecated and will be removed on **March 4, 2026**.
Please migrate to using the **MCP Protocol** (`Protocol.MCP`), which is the default.
{{< /notice >}}
### Example
If you wish to use the native Toolbox protocol:
```py
from toolbox_llamaindex import ToolboxClient
from toolbox_core.protocol import Protocol

async with ToolboxClient("http://127.0.0.1:5000", protocol=Protocol.TOOLBOX) as toolbox:
    # Use client
    pass
```
If you want to pin the MCP Version 2025-03-26:
```py
from toolbox_llamaindex import ToolboxClient
from toolbox_core.protocol import Protocol

async with ToolboxClient("http://127.0.0.1:5000", protocol=Protocol.MCP_v20250326) as toolbox:
    # Use client
    pass
```
## Loading Tools
### Load a toolset
A toolset is a collection of related tools. You can load all tools in a toolset
or a specific one:
```py
# Load all tools
tools = toolbox.load_toolset()
# Load a specific toolset
tools = toolbox.load_toolset("my-toolset")
```
### Load a single tool
```py
tool = toolbox.load_tool("my-tool")
```
Loading individual tools gives you finer-grained control over which tools are
available to your LLM agent.
## Use with LlamaIndex
LlamaIndex's agents can dynamically choose and execute tools based on the user
input. Include tools loaded from the Toolbox SDK in the agent's toolkit:
```py
from llama_index.llms.google_genai import GoogleGenAI
from llama_index.core.agent.workflow import AgentWorkflow

vertex_model = GoogleGenAI(
    model="gemini-3-flash-preview",
    vertexai_config={"project": "project-id", "location": "us-central1"},
)

# Initialize agent with tools
agent = AgentWorkflow.from_tools_or_functions(
    tools,
    llm=vertex_model,
    system_prompt="You are a helpful assistant.",
)

# Query the agent
response = await agent.run(user_msg="Get some response from the agent.")
print(response)
```
### Maintain state
To maintain state for the agent, add context as follows:
```py
from llama_index.core.agent.workflow import AgentWorkflow
from llama_index.core.workflow import Context
from llama_index.llms.google_genai import GoogleGenAI

vertex_model = GoogleGenAI(
    model="gemini-3-flash-preview",
    vertexai_config={"project": "project-id", "location": "us-central1"},
)
agent = AgentWorkflow.from_tools_or_functions(
    tools,
    llm=vertex_model,
    system_prompt="You are a helpful assistant.",
)

# Save memory in agent context
ctx = Context(agent)

response = await agent.run(user_msg="Give me some response.", ctx=ctx)
print(response)
```
## Manual usage
Execute a tool manually using the `call` method:
```py
result = tools[0].call(name="Alice", age=30)
```
This is useful for testing tools or when you need precise control over tool
execution outside of an agent framework.
## Client to Server Authentication
This section describes how to authenticate the ToolboxClient itself when
connecting to a Toolbox server instance that requires authentication. This is
crucial for securing your Toolbox server endpoint, especially when deployed on
platforms like Cloud Run, GKE, or any environment where unauthenticated access is restricted.
This client-to-server authentication ensures that the Toolbox server can verify
the identity of the client making the request before any tool is loaded or
called. It is different from [Authenticating Tools](#authenticating-tools),
which deals with providing credentials for specific tools within an already
connected Toolbox session.
### When is Client-to-Server Authentication Needed?
You'll need this type of authentication if your Toolbox server is configured to
deny unauthenticated requests. For example:
- Your Toolbox server is deployed on Cloud Run and configured to "Require authentication."
- Your server is behind an Identity-Aware Proxy (IAP) or a similar
authentication layer.
- You have custom authentication middleware on your self-hosted Toolbox server.
Without proper client authentication in these scenarios, attempts to connect or
make calls (like `load_tool`) will likely fail with `Unauthorized` errors.
### How it works
The `ToolboxClient` allows you to specify functions (or coroutines for the async
client) that dynamically generate HTTP headers for every request sent to the
Toolbox server. The most common use case is to add an Authorization header with
a bearer token (e.g., a Google ID token).
These header-generating functions are called just before each request, ensuring
that fresh credentials or header values can be used.
### Configuration
You can configure these dynamic headers as follows:
```python
from toolbox_llamaindex import ToolboxClient

async with ToolboxClient(
    "toolbox-url",
    client_headers={"header1": header1_getter, "header2": header2_getter},
) as client:
    ...  # use the client here
```
### Authenticating with Google Cloud Servers
For Toolbox servers hosted on Google Cloud (e.g., Cloud Run) and requiring
`Google ID token` authentication, the helper module
[auth_methods](https://github.com/googleapis/mcp-toolbox-sdk-python/blob/main/packages/toolbox-core/src/toolbox_core/auth_methods.py) provides utility functions.
### Step by Step Guide for Cloud Run
1. **Configure Permissions**: [Grant](https://cloud.google.com/run/docs/securing/managing-access#service-add-principals) the `roles/run.invoker` IAM role on the Cloud
Run service to the principal. This could be your `user account email` or a
`service account`.
2. **Configure Credentials**
- Local Development: Set up
[ADC](https://cloud.google.com/docs/authentication/set-up-adc-local-dev-environment).
- Google Cloud Environments: When running within Google Cloud (e.g., Compute
Engine, GKE, another Cloud Run service, Cloud Functions), ADC is typically
configured automatically, using the environment's default service account.
3. **Connect to the Toolbox Server**
```python
from toolbox_llamaindex import ToolboxClient
from toolbox_core import auth_methods

URL = "https://your-toolbox-service-url"  # Replace with your server's URL

auth_token_provider = auth_methods.aget_google_id_token(URL)

async with ToolboxClient(
    URL,
    client_headers={"Authorization": auth_token_provider},
) as client:
    tools = await client.aload_toolset()

    # Now, you can use the client as usual.
```
## Authenticating Tools
{{< notice info >}}
Always use HTTPS to connect your application with the Toolbox service, especially when using tools with authentication configured. Using HTTP exposes your application to serious security risks.
{{< /notice >}}
Some tools require user authentication to access sensitive data.
### Supported Authentication Mechanisms
Toolbox currently supports authentication using the [OIDC
protocol](https://openid.net/specs/openid-connect-core-1_0.html) with [ID
tokens](https://openid.net/specs/openid-connect-core-1_0.html#IDToken) (not
access tokens) for [Google OAuth
2.0](https://cloud.google.com/apigee/docs/api-platform/security/oauth/oauth-home).
### Configure Tools
Refer to [these
instructions](../../../../configuration/tools/_index.md#authenticated-parameters) on
configuring tools for authenticated parameters.
### Configure SDK
You need a method to retrieve an ID token from your authentication service:
```py
async def get_auth_token():
    # ... Logic to retrieve ID token (e.g., from local storage, OAuth flow)
    # This example just returns a placeholder. Replace with your actual token retrieval.
    return "YOUR_ID_TOKEN"  # Placeholder
```
#### Add Authentication to a Tool
```py
async with ToolboxClient("http://127.0.0.1:5000") as toolbox:
    tools = toolbox.load_toolset()

    auth_tool = tools[0].add_auth_token_getter("my_auth", get_auth_token)  # Single token
    multi_auth_tool = tools[0].add_auth_token_getters({"auth_1": get_auth_1, "auth_2": get_auth_2})  # Multiple tokens
    # OR
    auth_tools = [tool.add_auth_token_getter("my_auth", get_auth_token) for tool in tools]
```
#### Add Authentication While Loading
```py
auth_tool = toolbox.load_tool("my-tool", auth_token_getters={"my_auth": get_auth_token})
auth_tools = toolbox.load_toolset(auth_token_getters={"my_auth": get_auth_token})
```
{{< notice note >}}
Adding auth tokens during loading only affects the tools loaded within that call.
{{< /notice >}}
### Complete Example
```py
import asyncio

from toolbox_llamaindex import ToolboxClient

async def get_auth_token():
    # ... Logic to retrieve ID token (e.g., from local storage, OAuth flow)
    # This example just returns a placeholder. Replace with your actual token retrieval.
    return "YOUR_ID_TOKEN"  # Placeholder

async def main():
    async with ToolboxClient("http://127.0.0.1:5000") as toolbox:
        tool = toolbox.load_tool("my-tool")
        auth_tool = tool.add_auth_token_getter("my_auth", get_auth_token)
        result = auth_tool.call(input="some input")
        print(result)

asyncio.run(main())
```
## Parameter Binding
Predetermine values for tool parameters using the SDK. These values won't be
modified by the LLM. This is useful for:
* **Protecting sensitive information:** API keys, secrets, etc.
* **Enforcing consistency:** Ensuring specific values for certain parameters.
* **Pre-filling known data:** Providing defaults or context.
### Binding Parameters to a Tool
```py
async with ToolboxClient("http://127.0.0.1:5000") as toolbox:
    tools = toolbox.load_toolset()

    bound_tool = tools[0].bind_param("param", "value")  # Single param
    multi_bound_tool = tools[0].bind_params({"param1": "value1", "param2": "value2"})  # Multiple params
    # OR
    bound_tools = [tool.bind_param("param", "value") for tool in tools]
```
### Binding Parameters While Loading
```py
bound_tool = toolbox.load_tool("my-tool", bound_params={"param": "value"})
bound_tools = toolbox.load_toolset(bound_params={"param": "value"})
```
{{< notice note >}}
Bound values during loading only affect the tools loaded in that call.
{{< /notice >}}
### Binding Dynamic Values
Use a function to bind dynamic values:
```py
def get_dynamic_value():
    # Logic to determine the value
    return "dynamic_value"

dynamic_bound_tool = tool.bind_param("param", get_dynamic_value)
```
{{< notice note >}}
You don't need to modify tool configurations to bind parameter values.
{{< /notice >}}
## Asynchronous Usage
For better performance through [cooperative
multitasking](https://en.wikipedia.org/wiki/Cooperative_multitasking), you can
use the asynchronous interfaces of the `ToolboxClient`.
{{< notice note >}}
Asynchronous interfaces like `aload_tool` and `aload_toolset` require an asynchronous environment. For guidance on running asynchronous Python programs, see [asyncio documentation](https://docs.python.org/3/library/asyncio-runner.html#running-an-asyncio-program).
{{< /notice >}}
```py
import asyncio

from toolbox_llamaindex import ToolboxClient

async def main():
    async with ToolboxClient("http://127.0.0.1:5000") as toolbox:
        tool = await toolbox.aload_tool("my-tool")
        tools = await toolbox.aload_toolset()
        response = await tool.acall(input="some input")

if __name__ == "__main__":
    asyncio.run(main())
```
========================================================================
## Javascript
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Connect to Toolbox > Client SDKs > Javascript
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/connect-to/client-sdks/javascript-sdk/
**Description:** Javascript SDKs to connect to the MCP Toolbox server.
## Overview
The MCP Toolbox service provides a centralized way to manage and expose tools
(like API connectors, database query tools, etc.) for use by GenAI applications.
These JS SDKs act as clients for that service. They handle the communication needed to:
* Fetch tool definitions from your running Toolbox instance.
* Provide convenient JS objects or functions representing those tools.
* Invoke the tools (calling the underlying APIs/services configured in Toolbox).
* Handle authentication and parameter binding as needed.
By using these SDKs, you can easily leverage your Toolbox-managed tools directly
within your JS applications or AI orchestration frameworks.
## Which Package Should I Use?
Choosing the right package depends on how you are building your application:
- [`@toolbox-sdk/core`](https://github.com/googleapis/mcp-toolbox-sdk-js/tree/main/packages/toolbox-core):
This is a framework agnostic way to connect the tools to popular frameworks
like Langchain, LlamaIndex and Genkit.
- [`@toolbox-sdk/adk`](https://github.com/googleapis/mcp-toolbox-sdk-js/tree/main/packages/toolbox-adk):
This package provides a seamless way to connect to [Google ADK TS](https://github.com/google/adk-js).
## Available Packages
This repository hosts the following TypeScript packages. See the package-specific
README for detailed installation and usage instructions:
| Package | Target Use Case | Integration | Path | Details (README) |
| :------ | :---------- | :---------- | :---------------------- | :---------- |
| `toolbox-core` | Framework-agnostic / Custom applications | Use directly / Custom | `packages/toolbox-core/` | 📄 [View README](https://github.com/googleapis/mcp-toolbox-sdk-js/blob/main/packages/toolbox-core/README.md) |
| `toolbox-adk` | ADK applications | ADK | `packages/toolbox-adk/` | 📄 [View README](https://github.com/googleapis/mcp-toolbox-sdk-js/blob/main/packages/toolbox-adk/README.md) |
## Getting Started
To get started using Toolbox tools with an application, follow these general steps:
1. **Set up and Run the Toolbox Service:**
Before using the SDKs, you need the main MCP Toolbox service running. Follow
the instructions here: [**Toolbox Getting Started
Guide**](https://github.com/googleapis/genai-toolbox?tab=readme-ov-file#getting-started)
2. **Install the Appropriate SDK:**
Choose the package based on your needs (see "[Which Package Should I Use?](#which-package-should-i-use)" above) and install it:
```bash
# For the core, framework-agnostic SDK
npm install @toolbox-sdk/core
# For ADK applications
npm install @toolbox-sdk/adk
```
{{< notice note >}}
Source code for [js-sdk](https://github.com/googleapis/mcp-toolbox-sdk-js)
{{< /notice >}}
========================================================================
## ADK
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Connect to Toolbox > Client SDKs > Javascript > ADK
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/connect-to/client-sdks/javascript-sdk/adk/
**Description:** MCP Toolbox SDK for integrating functionalities of MCP Toolbox into your ADK apps.
## Overview
The `@toolbox-sdk/adk` package provides a Javascript interface to the MCP Toolbox service, enabling you to load and invoke tools from your own applications.
## Supported Environments
This SDK is a standard Node.js package built with TypeScript, ensuring broad compatibility with the modern JavaScript ecosystem.
- Node.js: Actively supported on Node.js v18.x and higher. The package is compatible with both modern ES Module (import) and legacy CommonJS (require).
- TypeScript: The SDK is written in TypeScript and ships with its own type declarations, providing a first-class development experience with autocompletion and type-checking out of the box.
- JavaScript: Fully supports modern JavaScript in Node.js environments.
## Installation
```bash
npm install @toolbox-sdk/adk
```
## Quickstart
1. **Start the Toolbox Service**
- Make sure the MCP Toolbox service is running. See the [Toolbox Getting Started Guide](../../../../introduction/_index.md#getting-started).
2. **Minimal Example**
Here's a minimal example to get you started. Ensure your Toolbox service is running and accessible.
```javascript
import { ToolboxClient } from '@toolbox-sdk/adk';
const URL = 'http://127.0.0.1:5000'; // Replace with your Toolbox service URL
const client = new ToolboxClient(URL);
async function quickstart() {
try {
const tools = await client.loadToolset();
// Use tools
} catch (error) {
console.error("unable to load toolset:", error.message);
}
}
quickstart();
```
{{< notice note>}}
This guide uses modern ES Module (`import`) syntax. If your project uses CommonJS, you can import the library using `require`: `const { ToolboxClient } = require('@toolbox-sdk/adk');`.
{{< /notice >}}
## Usage
Import and initialize a Toolbox client, pointing it to the URL of your running Toolbox service.
```javascript
import { ToolboxClient } from '@toolbox-sdk/adk';
// Replace with the actual URL where your Toolbox service is running
const URL = 'http://127.0.0.1:5000';
let client = new ToolboxClient(URL);
const tools = await client.loadToolset();
// Use the client and tools as per requirement
```
All interactions for loading and invoking tools happen through this client.
{{< notice note>}}
Closing the `ToolboxClient` also closes the underlying network session shared by all tools loaded from that client. As a result, any tool instances you have loaded will cease to function and will raise an error if you attempt to invoke them after the client is closed.
{{< /notice >}}
{{< notice note>}}
For advanced use cases, you can provide an external `AxiosInstance` during initialization (e.g., `new ToolboxClient(url, myAxiosInstance)`).
{{< /notice >}}
## Transport Protocols
The SDK supports multiple transport protocols to communicate with the Toolbox server. You can specify the protocol version during client initialization.
### Available Protocols
{{< notice note >}}
The native Toolbox protocol (`Protocol.TOOLBOX`) is deprecated and will be removed on March 4, 2026. Please use `Protocol.MCP` or specific MCP versions.
{{< /notice >}}
- `Protocol.MCP`: The default protocol version (currently aliases to `MCP_v20250618`).
- `Protocol.MCP_v20241105`: Use this for compatibility with older MCP servers (November 2024 version).
- `Protocol.MCP_v20250326`: March 2025 version.
- `Protocol.MCP_v20250618`: June 2025 version.
- `Protocol.MCP_v20251125`: November 2025 version.
- `Protocol.TOOLBOX`: **Deprecated** Legacy Toolbox protocol.
### Specifying a Protocol
You can explicitly set the protocol by passing the `protocol` argument to the `ToolboxClient` constructor.
```javascript
import { ToolboxClient, Protocol } from '@toolbox-sdk/adk';
const URL = 'http://127.0.0.1:5000';
// Initialize with a specific protocol version
const client = new ToolboxClient(URL, null, null, Protocol.MCP_v20241105);
const tools = await client.loadToolset();
```
## Loading Tools
You can load tools individually or in groups (toolsets) as defined in your Toolbox service configuration. Loading a toolset is convenient when working with multiple related functions, while loading a single tool offers more granular control.
### Load a toolset
A toolset is a collection of related tools. You can load all tools in a toolset or a specific one:
```javascript
// Load all tools in the default toolset
const allTools = await client.loadToolset();
// Load a specific toolset
const myToolset = await client.loadToolset("my-toolset");
```
### Load a single tool
Loads a specific tool by its unique name. This provides fine-grained control.
```javascript
const tool = await client.loadTool("my-tool");
```
## Invoking Tools
Once loaded, tools behave like awaitable JS functions. You invoke them using `await` and pass arguments corresponding to the parameters defined in the tool's configuration within the Toolbox service.
```javascript
const tool = await client.loadTool("my-tool")
const result = await tool.runAsync({a: 5, b: 2});
```
{{< notice tip>}}
For a more comprehensive guide on setting up the Toolbox service itself, which you'll need running to use this SDK, please refer to the [Toolbox Quickstart Guide](../../../../../build-with-mcp-toolbox/local_quickstart_js.md).
{{< /notice >}}
## Client to Server Authentication
This section describes how to authenticate the ToolboxClient itself when
connecting to a Toolbox server instance that requires authentication. This is
crucial for securing your Toolbox server endpoint, especially when deployed on
platforms like Cloud Run, GKE, or any environment where unauthenticated access is restricted.
This client-to-server authentication ensures that the Toolbox server can verify
the identity of the client making the request before any tool is loaded or
called. It is different from [Authenticating Tools](#authenticating-tools),
which deals with providing credentials for specific tools within an already
connected Toolbox session.
### When is Client-to-Server Authentication Needed?
You'll need this type of authentication if your Toolbox server is configured to
deny unauthenticated requests. For example:
- Your Toolbox server is deployed on Cloud Run and configured to "Require authentication."
- Your server is behind an Identity-Aware Proxy (IAP) or a similar
authentication layer.
- You have custom authentication middleware on your self-hosted Toolbox server.
Without proper client authentication in these scenarios, attempts to connect or
make calls (like `loadTool`) will likely fail with `Unauthorized` errors.
### How it works
The `ToolboxClient` allows you to specify functions that dynamically generate
HTTP headers for every request sent to the Toolbox server. The most common use
case is to add an [Authorization
header](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Authorization)
with a bearer token (e.g., a Google ID token).
These header-generating functions are called just before each request, ensuring
that fresh credentials or header values can be used.
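To make the per-request behavior concrete, here is a hypothetical sketch of how such a header-getter map could be resolved before every request. This is illustrative only (the SDK performs this internally); `resolveHeaders` is not an SDK function:

```javascript
// Illustrative sketch (not SDK code): each entry in the header map is a
// function that is evaluated just before a request is sent, so every
// request can carry a freshly generated credential.
async function resolveHeaders(headerGetters) {
  const headers = {};
  for (const [name, getter] of Object.entries(headerGetters)) {
    headers[name] = await getter(); // called per request -> fresh value
  }
  return headers;
}
```

Because the getters run on every request, a token that expires mid-session is transparently replaced the next time its getter produces a new value.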
### Configuration
You can configure these dynamic headers as seen below:
```javascript
import { ToolboxClient } from '@toolbox-sdk/adk';
import {getGoogleIdToken} from '@toolbox-sdk/core/auth'
const URL = 'http://127.0.0.1:5000';
const getGoogleIdTokenGetter = () => getGoogleIdToken(URL);
const client = new ToolboxClient(URL, null, {"Authorization": getGoogleIdTokenGetter});
// Use the client as usual
```
### Authenticating with Google Cloud Servers
For Toolbox servers hosted on Google Cloud (e.g., Cloud Run) and requiring
`Google ID token` authentication, the helper module
[auth_methods](../core/index.md#authenticating-with-google-cloud-servers) provides utility functions.
### Step by Step Guide for Cloud Run
1. **Configure Permissions**: [Grant](https://cloud.google.com/run/docs/securing/managing-access#service-add-principals) the `roles/run.invoker` IAM role on the Cloud
Run service to the principal. This could be your `user account email` or a
`service account`.
2. **Configure Credentials**
- Local Development: Set up
[ADC](https://cloud.google.com/docs/authentication/set-up-adc-local-dev-environment).
- Google Cloud Environments: When running within Google Cloud (e.g., Compute
Engine, GKE, another Cloud Run service, Cloud Functions), ADC is typically
configured automatically, using the environment's default service account.
3. **Connect to the Toolbox Server**
```javascript
import { ToolboxClient } from '@toolbox-sdk/adk';
import {getGoogleIdToken} from '@toolbox-sdk/core/auth'
const URL = 'http://127.0.0.1:5000';
const getGoogleIdTokenGetter = () => getGoogleIdToken(URL);
const client = new ToolboxClient(URL, null, {"Authorization": getGoogleIdTokenGetter});
// Use the client as usual
```
## Authenticating Tools
{{< notice note>}}
**Always use HTTPS** to connect your application with the Toolbox service, especially in **production environments** or whenever the communication involves **sensitive data** (including scenarios where tools require authentication tokens). Using plain HTTP lacks encryption and exposes your application and data to significant security risks, such as eavesdropping and tampering.
{{< /notice >}}
Tools can be configured within the Toolbox service to require authentication,
ensuring only authorized users or applications can invoke them, especially when
accessing sensitive data.
### When is Authentication Needed?
Authentication is configured per-tool within the Toolbox service itself. If a
tool you intend to use is marked as requiring authentication in the service, you
must configure the SDK client to provide the necessary credentials (currently
Oauth2 tokens) when invoking that specific tool.
### Supported Authentication Mechanisms
The Toolbox service enables secure tool usage through **Authenticated Parameters**. For detailed information on how these mechanisms work within the Toolbox service and how to configure them, please refer to [Toolbox Service Documentation - Authenticated Parameters](../../../../configuration/tools/_index.md#authenticated-parameters)
### Step 1: Configure Tools in Toolbox Service
First, ensure the target tool(s) are configured correctly in the Toolbox service
to require authentication. Refer to the [Toolbox Service Documentation -
Authenticated
Parameters](../../../../configuration/tools/_index.md#authenticated-parameters)
for instructions.
### Step 2: Configure SDK Client
Your application needs a way to obtain the required Oauth2 token for the
authenticated user. The SDK requires you to provide a function capable of
retrieving this token *when the tool is invoked*.
#### Provide an ID Token Retriever Function
You must provide the SDK with a function (sync or async) that returns the
necessary token when called. The implementation depends on your application's
authentication flow (e.g., retrieving a stored token, initiating an OAuth flow).
{{< notice note>}}
The name used when registering the getter function with the SDK (e.g., `"my_api_token"`) must exactly match the `name` of the corresponding `authServices` defined in the tool's configuration within the Toolbox service.
{{< /notice >}}
```javascript
async function getAuthToken() {
// ... Logic to retrieve ID token (e.g., from local storage, OAuth flow)
// This example just returns a placeholder. Replace with your actual token retrieval.
return "YOUR_ID_TOKEN" // Placeholder
}
```
{{< notice tip>}}
Your token retriever function is invoked every time an authenticated parameter requires a token for a tool call. Consider implementing caching logic within this function to avoid redundant token fetching or generation, especially for tokens with longer validity periods or if the retrieval process is resource-intensive.
{{< /notice >}}
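The caching advice above can be sketched as a small wrapper around your token retriever. This is a hypothetical helper, not part of the SDK; the TTL value is an assumption you should match to your token's actual validity period:

```javascript
// Hypothetical caching wrapper (not part of the SDK): reuses the last token
// until a fixed TTL lapses, so repeated tool calls don't re-fetch it.
function cachedTokenGetter(fetchToken, ttlMs = 5 * 60 * 1000) {
  let cachedToken = null;
  let expiresAt = 0;
  return async () => {
    const now = Date.now();
    if (cachedToken === null || now >= expiresAt) {
      cachedToken = await fetchToken(); // only re-fetch after the TTL lapses
      expiresAt = now + ttlMs;
    }
    return cachedToken;
  };
}
```

You would then register the wrapped getter instead of the raw one, e.g. `tool.addAuthTokenGetter("my_auth", cachedTokenGetter(getAuthToken))`.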
#### Option A: Add Authentication to a Loaded Tool
You can add the token retriever function to a tool object *after* it has been
loaded. This modifies the specific tool instance.
```javascript
const URL = 'http://127.0.0.1:5000';
let client = new ToolboxClient(URL);
let tool = await client.loadTool("my-tool")
const authTool = tool.addAuthTokenGetter("my_auth", getAuthToken) // Single token
// OR
const multiAuthTool = tool.addAuthTokenGetters({
"my_auth_1": getAuthToken1,
"my_auth_2": getAuthToken2,
}) // Multiple tokens
```
#### Option B: Add Authentication While Loading Tools
You can provide the token retriever(s) directly during the `loadTool` or
`loadToolset` calls. This applies the authentication configuration only to the
tools loaded in that specific call, without modifying the original tool objects
if they were loaded previously.
```javascript
const authTool = await client.loadTool("toolName", {"myAuth": getAuthToken})
// OR
const authTools = await client.loadToolset({"myAuth": getAuthToken})
```
{{< notice note>}}
Adding auth tokens during loading only affects the tools loaded within that call.
{{< /notice >}}
### Complete Authentication Example
```javascript
import { ToolboxClient } from '@toolbox-sdk/adk';
async function getAuthToken() {
// ... Logic to retrieve ID token (e.g., from local storage, OAuth flow)
// This example just returns a placeholder. Replace with your actual token retrieval.
return "YOUR_ID_TOKEN" // Placeholder
}
const URL = 'http://127.0.0.1:5000';
let client = new ToolboxClient(URL);
const tool = await client.loadTool("my-tool");
const authTool = tool.addAuthTokenGetters({"my_auth": getAuthToken});
const result = await authTool.runAsync({input: "some input"});
console.log(result);
```
## Binding Parameter Values
The SDK allows you to pre-set, or "bind", values for specific tool parameters
before the tool is invoked or even passed to an LLM. These bound values are
fixed and will not be requested or modified by the LLM during tool use.
### Why Bind Parameters?
- **Protecting sensitive information:** API keys, secrets, etc.
- **Enforcing consistency:** Ensuring specific values for certain parameters.
- **Pre-filling known data:** Providing defaults or context.
{{< notice note>}}
The parameter names used for binding (e.g., `"api_key"`) must exactly match the parameter names defined in the tool's configuration within the Toolbox service.
{{< /notice >}}
{{< notice note>}}
You do not need to modify the tool's configuration in the Toolbox service to bind parameter values using the SDK.
{{< /notice >}}
### Option A: Binding Parameters to a Loaded Tool
Bind values to a tool object *after* it has been loaded. This modifies the
specific tool instance.
```javascript
import { ToolboxClient } from '@toolbox-sdk/adk';
const URL = 'http://127.0.0.1:5000';
let client = new ToolboxClient(URL);
const tool = await client.loadTool("my-tool");
const boundTool = tool.bindParam("param", "value");
// OR
const boundTool = tool.bindParams({"param": "value"});
```
### Option B: Binding Parameters While Loading Tools
Specify bound parameters directly when loading tools. This applies the binding
only to the tools loaded in that specific call.
```javascript
const boundTool = await client.loadTool("my-tool", null, {"param": "value"})
// OR
const boundTools = await client.loadToolset(null, {"param": "value"})
```
{{< notice note>}}
Bound values during loading only affect the tools loaded in that call.
{{< /notice >}}
### Binding Dynamic Values
Instead of a static value, you can bind a parameter to a synchronous or
asynchronous function. This function will be called *each time* the tool is
invoked to dynamically determine the parameter's value at runtime.
```javascript
async function getDynamicValue() {
// Logic to determine the value
return "dynamicValue";
}
const dynamicBoundTool = tool.bindParam("param", getDynamicValue)
```
{{< notice note>}}
You don't need to modify tool configurations to bind parameter values.
{{< /notice >}}
## Using with ADK
The following complete example wires tools loaded from Toolbox into an ADK JS agent:
```javascript
import {InMemoryRunner, LlmAgent} from '@google/adk';
import {Content} from '@google/genai';
import {ToolboxClient} from '@toolbox-sdk/core';
const toolboxClient = new ToolboxClient("http://127.0.0.1:5000");
const loadedTools = await toolboxClient.loadToolset();
export const rootAgent = new LlmAgent({
name: 'weather_time_agent',
model: 'gemini-3-flash-preview',
description:
'Agent to answer questions about the time and weather in a city.',
instruction:
'You are a helpful agent who can answer user questions about the time and weather in a city.',
tools: loadedTools,
});
async function main() {
const userId = 'test_user';
const appName = rootAgent.name;
const runner = new InMemoryRunner({agent: rootAgent, appName});
const session = await runner.sessionService.createSession({
appName,
userId,
});
const prompt = 'What is the weather in New York? And the time?';
const content: Content = {
role: 'user',
parts: [{text: prompt}],
};
console.log(content);
for await (const e of runner.runAsync({
userId,
sessionId: session.id,
newMessage: content,
})) {
if (e.content?.parts?.[0]?.text) {
console.log(`${e.author}: ${JSON.stringify(e.content, null, 2)}`);
}
}
}
main().catch(console.error);
```
========================================================================
## Core
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Connect to Toolbox > Client SDKs > Javascript > Core
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/connect-to/client-sdks/javascript-sdk/core/
**Description:** MCP Toolbox Core SDK for integrating functionalities of MCP Toolbox into your Agentic apps.
## Overview
The `@toolbox-sdk/core` package provides a Javascript interface to the MCP Toolbox service, enabling you to load and invoke tools from your own applications.
## Supported Environments
This SDK is a standard Node.js package built with TypeScript, ensuring broad compatibility with the modern JavaScript ecosystem.
- Node.js: Actively supported on Node.js v18.x and higher. The package is compatible with both modern ES Module (import) and legacy CommonJS (require).
- TypeScript: The SDK is written in TypeScript and ships with its own type declarations, providing a first-class development experience with autocompletion and type-checking out of the box.
- JavaScript: Fully supports modern JavaScript in Node.js environments.
## Installation
```bash
npm install @toolbox-sdk/core
```
## Quickstart
1. **Start the Toolbox Service**
- Make sure the MCP Toolbox service is running. See the [Toolbox Getting Started Guide](../../../../introduction/_index.md#getting-started).
2. **Minimal Example**
Here's a minimal example to get you started. Ensure your Toolbox service is running and accessible.
```javascript
import { ToolboxClient } from '@toolbox-sdk/core';
const URL = 'http://127.0.0.1:5000'; // Replace with your Toolbox service URL
const client = new ToolboxClient(URL);
async function quickstart() {
try {
const tools = await client.loadToolset();
// Use tools
} catch (error) {
console.error("unable to load toolset:", error.message);
}
}
quickstart();
```
{{< notice note>}}
This guide uses modern ES Module (`import`) syntax. If your project uses CommonJS, you can import the library using `require`: `const { ToolboxClient } = require('@toolbox-sdk/core');`.
{{< /notice >}}
## Usage
Import and initialize a Toolbox client, pointing it to the URL of your running Toolbox service.
```javascript
import { ToolboxClient } from '@toolbox-sdk/core';
// Replace with the actual URL where your Toolbox service is running
const URL = 'http://127.0.0.1:5000';
let client = new ToolboxClient(URL);
const tools = await client.loadToolset();
// Use the client and tools as per requirement
```
All interactions for loading and invoking tools happen through this client.
{{< notice note>}}
Closing the `ToolboxClient` also closes the underlying network session shared by all tools loaded from that client. As a result, any tool instances you have loaded will cease to function and will raise an error if you attempt to invoke them after the client is closed.
{{< /notice >}}
{{< notice note>}}
For advanced use cases, you can provide an external `AxiosInstance` during initialization (e.g., `new ToolboxClient(url, myAxiosInstance)`).
{{< /notice >}}
## Transport Protocols
The SDK supports multiple transport protocols to communicate with the Toolbox server. You can specify the protocol version during client initialization.
### Available Protocols
{{< notice note >}}
The native Toolbox protocol (`Protocol.TOOLBOX`) is deprecated and will be removed on March 4, 2026. Please use `Protocol.MCP` or specific MCP versions.
{{< /notice >}}
- `Protocol.MCP`: The default protocol version (currently aliases to `MCP_v20250618`).
- `Protocol.MCP_v20241105`: Use this for compatibility with older MCP servers (November 2024 version).
- `Protocol.MCP_v20250326`: March 2025 version.
- `Protocol.MCP_v20250618`: June 2025 version.
- `Protocol.MCP_v20251125`: November 2025 version.
- `Protocol.TOOLBOX`: **Deprecated** Legacy Toolbox protocol.
### Specifying a Protocol
You can explicitly set the protocol by passing the `protocol` argument to the `ToolboxClient` constructor.
```javascript
import { ToolboxClient, Protocol } from '@toolbox-sdk/core';
const URL = 'http://127.0.0.1:5000';
// Initialize with a specific protocol version
const client = new ToolboxClient(URL, null, null, Protocol.MCP_v20241105);
const tools = await client.loadToolset();
```
## Loading Tools
You can load tools individually or in groups (toolsets) as defined in your Toolbox service configuration. Loading a toolset is convenient when working with multiple related functions, while loading a single tool offers more granular control.
### Load a toolset
A toolset is a collection of related tools. You can load all tools in a toolset or a specific one:
```javascript
// Load all tools in the default toolset
const allTools = await client.loadToolset();
// Load a specific toolset
const myToolset = await client.loadToolset("my-toolset");
```
### Load a single tool
Loads a specific tool by its unique name. This provides fine-grained control.
```javascript
const tool = await client.loadTool("my-tool")
```
## Invoking Tools
Once loaded, tools behave like awaitable JS functions. You invoke them using `await` and pass arguments corresponding to the parameters defined in the tool's configuration within the Toolbox service.
```javascript
const tool = await client.loadTool("my-tool")
const result = await tool({a: 5, b: 2})
```
{{< notice tip>}}
For a more comprehensive guide on setting up the Toolbox service itself, which you'll need running to use this SDK, please refer to the [Toolbox Quickstart Guide](../../../../../build-with-mcp-toolbox/local_quickstart_js.md).
{{< /notice >}}
## Client to Server Authentication
This section describes how to authenticate the ToolboxClient itself when
connecting to a Toolbox server instance that requires authentication. This is
crucial for securing your Toolbox server endpoint, especially when deployed on
platforms like Cloud Run, GKE, or any environment where unauthenticated access is restricted.
This client-to-server authentication ensures that the Toolbox server can verify
the identity of the client making the request before any tool is loaded or
called. It is different from [Authenticating Tools](#authenticating-tools),
which deals with providing credentials for specific tools within an already
connected Toolbox session.
### When is Client-to-Server Authentication Needed?
You'll need this type of authentication if your Toolbox server is configured to
deny unauthenticated requests. For example:
- Your Toolbox server is deployed on Cloud Run and configured to "Require authentication."
- Your server is behind an Identity-Aware Proxy (IAP) or a similar authentication layer.
- You have custom authentication middleware on your self-hosted Toolbox server.
Without proper client authentication in these scenarios, attempts to connect or
make calls (like `loadTool`) will likely fail with `Unauthorized` errors.
### How it works
The `ToolboxClient` allows you to specify functions that dynamically generate
HTTP headers for every request sent to the Toolbox server. The most common use
case is to add an [Authorization
header](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Authorization)
with a bearer token (e.g., a Google ID token).
These header-generating functions are called just before each request, ensuring that fresh credentials or header values can be used.
### Configuration
You can configure these dynamic headers as seen below:
```javascript
import { ToolboxClient } from '@toolbox-sdk/core';
import {getGoogleIdToken} from '@toolbox-sdk/core/auth'
const URL = 'http://127.0.0.1:5000';
const getGoogleIdTokenGetter = () => getGoogleIdToken(URL);
const client = new ToolboxClient(URL, null, {"Authorization": getGoogleIdTokenGetter});
// Use the client as usual
```
### Authenticating with Google Cloud Servers
For Toolbox servers hosted on Google Cloud (e.g., Cloud Run) and requiring
`Google ID token` authentication, the helper module
[auth_methods](https://github.com/googleapis/mcp-toolbox-sdk-js/blob/main/packages/toolbox-core/src/toolbox_core/authMethods.ts) provides utility functions.
### Step by Step Guide for Cloud Run
1. **Configure Permissions**: [Grant](https://cloud.google.com/run/docs/securing/managing-access#service-add-principals) the `roles/run.invoker` IAM role on the Cloud
Run service to the principal. This could be your `user account email` or a
`service account`.
2. **Configure Credentials**
- Local Development: Set up
[ADC](https://cloud.google.com/docs/authentication/set-up-adc-local-dev-environment).
- Google Cloud Environments: When running within Google Cloud (e.g., Compute
Engine, GKE, another Cloud Run service, Cloud Functions), ADC is typically
configured automatically, using the environment's default service account.
3. **Connect to the Toolbox Server**
```javascript
import { ToolboxClient } from '@toolbox-sdk/core';
import {getGoogleIdToken} from '@toolbox-sdk/core/auth'
const URL = 'http://127.0.0.1:5000';
const getGoogleIdTokenGetter = () => getGoogleIdToken(URL);
const client = new ToolboxClient(URL, null, {"Authorization": getGoogleIdTokenGetter});
// Use the client as usual
```
## Authenticating Tools
{{< notice note>}}
**Always use HTTPS** to connect your application with the Toolbox service, especially in **production environments** or whenever the communication involves **sensitive data** (including scenarios where tools require authentication tokens). Using plain HTTP lacks encryption and exposes your application and data to significant security risks, such as eavesdropping and tampering.
{{< /notice >}}
Tools can be configured within the Toolbox service to require authentication,
ensuring only authorized users or applications can invoke them, especially when
accessing sensitive data.
### When is Authentication Needed?
Authentication is configured per-tool within the Toolbox service itself. If a
tool you intend to use is marked as requiring authentication in the service, you
must configure the SDK client to provide the necessary credentials (currently
Oauth2 tokens) when invoking that specific tool.
### Supported Authentication Mechanisms
The Toolbox service enables secure tool usage through **Authenticated Parameters**. For detailed information on how these mechanisms work within the Toolbox service and how to configure them, please refer to [Toolbox Service Documentation - Authenticated Parameters](../../../../configuration/tools/_index.md#authenticated-parameters)
### Step 1: Configure Tools in Toolbox Service
First, ensure the target tool(s) are configured correctly in the Toolbox service
to require authentication. Refer to the [Toolbox Service Documentation -
Authenticated
Parameters](../../../../configuration/tools/_index.md#authenticated-parameters)
for instructions.
### Step 2: Configure SDK Client
Your application needs a way to obtain the required Oauth2 token for the
authenticated user. The SDK requires you to provide a function capable of
retrieving this token *when the tool is invoked*.
#### Provide an ID Token Retriever Function
You must provide the SDK with a function (sync or async) that returns the
necessary token when called. The implementation depends on your application's
authentication flow (e.g., retrieving a stored token, initiating an OAuth flow).
{{< notice note>}}
The name used when registering the getter function with the SDK (e.g., `"my_api_token"`) must exactly match the `name` of the corresponding `authServices` defined in the tool's configuration within the Toolbox service.
{{< /notice >}}
```javascript
async function getAuthToken() {
// ... Logic to retrieve ID token (e.g., from local storage, OAuth flow)
// This example just returns a placeholder. Replace with your actual token retrieval.
return "YOUR_ID_TOKEN" // Placeholder
}
```
{{< notice tip>}}
Your token retriever function is invoked every time an authenticated parameter requires a token for a tool call. Consider implementing caching logic within this function to avoid redundant token fetching or generation, especially for tokens with longer validity periods or if the retrieval process is resource-intensive.
{{< /notice >}}
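One way to follow the caching advice above is a wrapper that refreshes the token slightly before its known expiry. This is a hypothetical helper, not part of the SDK; `ttlMs` and the refresh margin are assumptions you should align with your token's real lifetime:

```javascript
// Hypothetical caching wrapper (not SDK code): caches the token and
// refreshes it a safety margin before the known TTL elapses, so a tool
// call never goes out with a token that is about to expire.
function makeCachedGetter(fetchToken, ttlMs, refreshMarginMs = 30 * 1000) {
  let token = null;
  let refreshAt = 0;
  return async () => {
    if (token === null || Date.now() >= refreshAt) {
      token = await fetchToken();
      refreshAt = Date.now() + ttlMs - refreshMarginMs;
    }
    return token;
  };
}
```

Register the wrapped getter as usual, e.g. `tool.addAuthTokenGetters({"my_auth": makeCachedGetter(getAuthToken, 60 * 60 * 1000)})` for a token valid for one hour.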
#### Option A: Add Authentication to a Loaded Tool
You can add the token retriever function to a tool object *after* it has been
loaded. This modifies the specific tool instance.
```javascript
const URL = 'http://127.0.0.1:5000';
let client = new ToolboxClient(URL);
let tool = await client.loadTool("my-tool")
const authTool = tool.addAuthTokenGetter("my_auth", getAuthToken) // Single token
// OR
const multiAuthTool = tool.addAuthTokenGetters({
"my_auth_1": getAuthToken1,
"my_auth_2": getAuthToken2,
}) // Multiple tokens
```
#### Option B: Add Authentication While Loading Tools
You can provide the token retriever(s) directly during the `loadTool` or
`loadToolset` calls. This applies the authentication configuration only to the
tools loaded in that specific call, without modifying the original tool objects
if they were loaded previously.
```javascript
const authTool = await client.loadTool("toolName", {"myAuth": getAuthToken})
// OR
const authTools = await client.loadToolset({"myAuth": getAuthToken})
```
{{< notice note>}}
Adding auth tokens during loading only affects the tools loaded within that call.
{{< /notice >}}
### Complete Authentication Example
```javascript
import { ToolboxClient } from '@toolbox-sdk/core';

async function getAuthToken() {
  // ... Logic to retrieve ID token (e.g., from local storage, OAuth flow)
  // This example just returns a placeholder. Replace with your actual token retrieval.
  return "YOUR_ID_TOKEN"; // Placeholder
}

const URL = 'http://127.0.0.1:5000';
let client = new ToolboxClient(URL);

const tool = await client.loadTool("my-tool");
const authTool = tool.addAuthTokenGetters({"my_auth": getAuthToken});
const result = await authTool({ input: "some input" });
console.log(result);
```
## Binding Parameter Values
The SDK allows you to pre-set, or "bind", values for specific tool parameters
before the tool is invoked or even passed to an LLM. These bound values are
fixed and will not be requested or modified by the LLM during tool use.
### Why Bind Parameters?
- **Protecting sensitive information:** API keys, secrets, etc.
- **Enforcing consistency:** Ensuring specific values for certain parameters.
- **Pre-filling known data:** Providing defaults or context.
{{< notice note>}}
The parameter names used for binding (e.g., `"api_key"`) must exactly match the parameter names defined in the tool's configuration within the Toolbox service.
{{< /notice >}}
{{< notice note>}}
You do not need to modify the tool's configuration in the Toolbox service to bind parameter values using the SDK.
{{< /notice >}}
### Option A: Binding Parameters to a Loaded Tool
Bind values to a tool object *after* it has been loaded. This modifies the
specific tool instance.
```javascript
import { ToolboxClient } from '@toolbox-sdk/core';
const URL = 'http://127.0.0.1:5000';
let client = new ToolboxClient(URL);
const tool = await client.loadTool("my-tool");
const boundTool = tool.bindParam("param", "value");
// OR
const boundTool = tool.bindParams({"param": "value"});
```
### Option B: Binding Parameters While Loading Tools
Specify bound parameters directly when loading tools. This applies the binding
only to the tools loaded in that specific call.
```javascript
const boundTool = await client.loadTool("my-tool", null, {"param": "value"})
// OR
const boundTools = await client.loadToolset(null, {"param": "value"})
```
{{< notice note>}}
Bound values during loading only affect the tools loaded in that call.
{{< /notice >}}
### Binding Dynamic Values
Instead of a static value, you can bind a parameter to a synchronous or
asynchronous function. This function will be called *each time* the tool is
invoked to dynamically determine the parameter's value at runtime.
```javascript
async function getDynamicValue() {
  // Logic to determine the value
  return "dynamicValue";
}

const dynamicBoundTool = tool.bindParam("param", getDynamicValue);
```
{{< notice note>}}
You don't need to modify tool configurations to bind parameter values.
{{< /notice >}}
# Using with Orchestration Frameworks
## LangChain
[LangChainJS](https://js.langchain.com/docs/introduction/)
```javascript
import { ToolboxClient } from "@toolbox-sdk/core";
import { tool } from "@langchain/core/tools";

const URL = 'http://127.0.0.1:5000';
let client = new ToolboxClient(URL);
const multiplyTool = await client.loadTool("multiply");

const multiplyNumbers = tool(multiplyTool, {
  name: multiplyTool.getName(),
  description: multiplyTool.getDescription(),
  schema: multiplyTool.getParamSchema()
});

await multiplyNumbers.invoke({ a: 2, b: 3 });
```
The `multiplyNumbers` tool is compatible with [LangChain/LangGraph
agents](https://js.langchain.com/docs/concepts/agents/)
such as [React
Agents](https://langchain-ai.github.io/langgraphjs/reference/functions/langgraph_prebuilt.createReactAgent.html).
## LlamaIndex
[LlamaIndexTS](https://ts.llamaindex.ai/)
```javascript
import { ToolboxClient } from "@toolbox-sdk/core";
import { tool } from "llamaindex";

const URL = 'http://127.0.0.1:5000';
let client = new ToolboxClient(URL);
const multiplyTool = await client.loadTool("multiply");

const multiplyNumbers = tool({
  name: multiplyTool.getName(),
  description: multiplyTool.getDescription(),
  parameters: multiplyTool.getParamSchema(),
  execute: multiplyTool
});

await multiplyNumbers.call({ a: 2, b: 3 });
```
The `multiplyNumbers` tool is compatible with LlamaIndex
[agents](https://ts.llamaindex.ai/docs/llamaindex/migration/deprecated/agent)
and [agent
workflows](https://ts.llamaindex.ai/docs/llamaindex/modules/agents/agent_workflow).
## Genkit
[GenkitJS](https://genkit.dev/docs/get-started/#_top)
```javascript
import { ToolboxClient } from "@toolbox-sdk/core";
import { genkit, z } from 'genkit';
import { googleAI } from '@genkit-ai/googleai';

const URL = 'http://127.0.0.1:5000';
let client = new ToolboxClient(URL);
const multiplyTool = await client.loadTool("multiply");

const ai = genkit({
  plugins: [googleAI()],
  model: googleAI.model('gemini-3-flash-preview'),
});

const multiplyNumbers = ai.defineTool(
  {
    name: multiplyTool.getName(),
    description: multiplyTool.getDescription(),
    inputSchema: multiplyTool.getParamSchema(),
  },
  multiplyTool,
);

await ai.generate({
  prompt: 'Can you multiply 5 and 7?',
  tools: [multiplyNumbers],
});
```
========================================================================
## Go
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Connect to Toolbox > Client SDKs > Go
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/connect-to/client-sdks/go-sdk/
**Description:** Go SDKs to connect to the MCP Toolbox server.
## Overview
The MCP Toolbox service provides a centralized way to manage and expose tools
(like API connectors, database query tools, etc.) for use by GenAI applications.
The Go SDK acts as a client for that service, handling the communication needed to:
* Fetch tool definitions from your running Toolbox instance.
* Provide convenient Go structs representing those tools.
* Invoke the tools (calling the underlying APIs/services configured in Toolbox).
* Handle authentication and parameter binding as needed.
By using the SDK, you can easily leverage your Toolbox-managed tools directly
within your Go applications or AI orchestration frameworks.
## Which Package Should I Use?
Choosing the right package depends on how you are building your application:
- [**`core`**](core/):
This is a framework-agnostic package for loading and invoking tools. Use it
directly in custom applications, or wire it into popular frameworks like Google GenAI and LangChain.
- [**`tbadk`**](tbadk/):
This package provides a way to connect tools to ADK Go.
- [**`tbgenkit`**](tbgenkit/):
This package provides functionality to convert the Tool fetched using the core package
into a Genkit Go compatible tool.
## Available Packages
This repository hosts the following Go packages. See the package-specific
README for detailed installation and usage instructions:
| Package | Target Use Case | Integration | Path | Details (README) |
| :------ | :----------| :---------- | :---------------------- | :---------- |
| [`core`](core/) | Framework-agnostic / Custom applications | Use directly / Custom | `core/` | 📄 [View README](https://github.com/googleapis/mcp-toolbox-sdk-go/blob/main/core/README.md) |
| [`tbadk`](tbadk/) | ADK Go | Use directly | `tbadk/` | 📄 [View README](https://github.com/googleapis/mcp-toolbox-sdk-go/blob/main/tbadk/README.md) |
| [`tbgenkit`](tbgenkit/) | Genkit Go | Along with core | `tbgenkit/` | 📄 [View README](https://github.com/googleapis/mcp-toolbox-sdk-go/blob/main/tbgenkit/README.md) |
## Getting Started
To get started using Toolbox tools with an application, follow these general steps:
1. **Set up and Run the Toolbox Service:**
Before using the SDKs, you need the MCP Toolbox server running. Follow
the instructions here: [**Toolbox Getting Started
Guide**](https://github.com/googleapis/genai-toolbox?tab=readme-ov-file#getting-started)
2. **Install the Appropriate SDK:**
Choose the package based on your needs (see "[Which Package Should I Use?](#which-package-should-i-use)" above),
then install the SDK module:
```bash
# For the core, framework-agnostic SDK
go get github.com/googleapis/mcp-toolbox-sdk-go
```
{{< notice note >}}
Source code: [mcp-toolbox-sdk-go](https://github.com/googleapis/mcp-toolbox-sdk-go)
{{< /notice >}}
========================================================================
## ADK Package
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Connect to Toolbox > Client SDKs > Go > ADK Package
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/connect-to/client-sdks/go-sdk/tbadk/
**Description:** MCP Toolbox ADK for integrating functionalities of MCP Toolbox into your Agentic apps.
## Overview
The `tbadk` package provides a Go interface to the MCP Toolbox service, enabling you to load and invoke tools from your own applications.
## Installation
```bash
go get github.com/googleapis/mcp-toolbox-sdk-go/tbadk
```
This SDK is supported on Go version 1.24.4 and higher.
{{< notice note >}}
While the SDK itself is synchronous, you can execute its functions within goroutines to achieve asynchronous behavior.
{{< /notice >}}
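As a sketch of that pattern, the helper below fans several synchronous invocations out across goroutines and collects the results. The `invoke` function is a stand-in for a real `ToolboxTool` call, not part of the SDK:

```go
package main

import (
	"fmt"
	"sync"
)

// invoke is a stand-in for a synchronous ToolboxTool invocation.
func invoke(name string) string {
	return "result:" + name
}

// invokeAll runs one goroutine per tool and collects results by index,
// so the output order matches the input order.
func invokeAll(names []string) []string {
	results := make([]string, len(names))
	var wg sync.WaitGroup
	for i, name := range names {
		wg.Add(1)
		go func(i int, name string) {
			defer wg.Done()
			results[i] = invoke(name)
		}(i, name)
	}
	wg.Wait()
	return results
}

func main() {
	fmt.Println(invokeAll([]string{"tool-a", "tool-b", "tool-c"}))
}
```

Writing each result to its own slice index avoids any locking while keeping the output deterministic.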
{{< notice note >}}
**Breaking Change Notice**: As of version `0.6.0`, this repository has transitioned to a multi-module structure.
* **For new versions (`v0.6.0`+)**: You must import specific modules (e.g., `go get github.com/googleapis/mcp-toolbox-sdk-go/tbadk`).
* **For older versions (`v0.5.1` and below)**: The repository remains a single-module library (`go get github.com/googleapis/mcp-toolbox-sdk-go`).
* Please update your imports and `go.mod` accordingly when upgrading.
{{< /notice >}}
## Quickstart
Here's a minimal example to get you started. Ensure your Toolbox service is
running and accessible.
```go
package main

import (
	"context"
	"fmt"

	"github.com/googleapis/mcp-toolbox-sdk-go/tbadk"
)

func quickstart() string {
	ctx := context.Background()
	inputs := map[string]any{"location": "London"}
	client, err := tbadk.NewToolboxClient("http://localhost:5000")
	if err != nil {
		return fmt.Sprintln("Could not start Toolbox Client", err)
	}
	tool, err := client.LoadTool("get_weather", ctx)
	if err != nil {
		return fmt.Sprintln("Could not load Toolbox Tool", err)
	}
	result, err := tool.Run(ctx, inputs)
	if err != nil {
		return fmt.Sprintln("Could not invoke tool", err)
	}
	return fmt.Sprintln(result["output"])
}

func main() {
	fmt.Println(quickstart())
}
```
## Usage
Import and initialize a Toolbox client, pointing it to the URL of your running
Toolbox service.
```go
import "github.com/googleapis/mcp-toolbox-sdk-go/tbadk"
client, err := tbadk.NewToolboxClient("http://localhost:5000")
```
All interactions for loading and invoking tools happen through this client.
{{< notice note >}}
For advanced use cases, you can provide an external custom `http.Client`
during initialization (e.g., `tbadk.NewToolboxClient(URL, core.WithHTTPClient(myClient))`). If you provide your own `http.Client`, you are responsible for managing its lifecycle;
`ToolboxClient` *will not* close it.
{{< /notice >}}
## Transport Protocols
The SDK supports multiple transport protocols. By default, the client uses the latest supported version of the **Model Context Protocol (MCP)**.
You can explicitly select a protocol using the `core.WithProtocol` option during client initialization.
{{< notice note >}}
* **Native Toolbox Transport**: This uses the service's native **REST over HTTP** API.
* **MCP Transports**: These options use the **Model Context Protocol over HTTP**.
{{< /notice >}}
### Supported Protocols
{{< notice note >}}
The native Toolbox protocol (`core.Toolbox`) is deprecated and will be removed on March 4, 2026. Please use `core.MCP` or specific MCP versions.
{{< /notice >}}
| Constant | Description |
| :--- | :--- |
| `core.MCP` | **(Default)** Alias for the latest supported MCP version (currently `v2025-06-18`). |
| `core.Toolbox` | **Deprecated** The native Toolbox HTTP protocol. |
| `core.MCPv20251125` | MCP Protocol version 2025-11-25. |
| `core.MCPv20250618` | MCP Protocol version 2025-06-18. |
| `core.MCPv20250326` | MCP Protocol version 2025-03-26. |
| `core.MCPv20241105` | MCP Protocol version 2024-11-05. |
### Example
```go
import (
	"github.com/googleapis/mcp-toolbox-sdk-go/core"
	"github.com/googleapis/mcp-toolbox-sdk-go/tbadk"
)

// Initialize with the native Toolbox protocol
client, err := tbadk.NewToolboxClient(
	"http://localhost:5000",
	core.WithProtocol(core.Toolbox),
)

// Initialize with the MCP Protocol 2025-03-26
client, err = tbadk.NewToolboxClient(
	"http://localhost:5000",
	core.WithProtocol(core.MCPv20250326),
)
```
## Loading Tools
You can load tools individually or in groups (toolsets) as defined in your
Toolbox service configuration. Loading a toolset is convenient when working with
multiple related functions, while loading a single tool offers more granular
control.
### Load a toolset
A toolset is a collection of related tools. You can load all tools in a toolset
or a specific one:
```go
// Load default toolset by providing an empty string as the name
tools, err := client.LoadToolset("", ctx)
// Load a specific toolset
tools, err := client.LoadToolset("my-toolset", ctx)
```
`LoadToolset` returns a slice of `ToolboxTool` structs (`[]ToolboxTool`).
### Load a single tool
Loads a specific tool by its unique name. This provides fine-grained control.
```go
tool, err := client.LoadTool("my-tool", ctx)
```
## Invoking Tools
Once loaded, tools behave like Go structs. You invoke them using the `Run` method
by passing arguments corresponding to the parameters defined in the tool's
configuration within the Toolbox service.
```go
tool, err := client.LoadTool("my-tool", ctx)
inputs := map[string]any{"location": "London"}
// Pass the tool.Context as ctx to the Run() function
result, err := tool.Run(ctx, inputs)
```
{{< notice tip >}}For a more comprehensive guide on setting up the Toolbox service itself, which
you'll need running to use this SDK, please refer to the [Toolbox Quickstart
Guide](https://googleapis.github.io/genai-toolbox/getting-started/local_quickstart).
{{< /notice >}}
## Client to Server Authentication
This section describes how to authenticate the ToolboxClient itself when
connecting to a Toolbox server instance that requires authentication. This is
crucial for securing your Toolbox server endpoint, especially when deployed on
platforms like Cloud Run, GKE, or any environment where unauthenticated access is restricted.
This client-to-server authentication ensures that the Toolbox server can verify
the identity of the client making the request before any tool is loaded or
called. It is different from [Authenticating Tools](#authenticating-tools),
which deals with providing credentials for specific tools within an already
connected Toolbox session.
### When is Client-to-Server Authentication Needed?
You'll need this type of authentication if your Toolbox server is configured to
deny unauthenticated requests. For example:
- Your Toolbox server is deployed on Cloud Run and configured to "Require authentication."
- Your server is behind an Identity-Aware Proxy (IAP) or a similar
authentication layer.
- You have custom authentication middleware on your self-hosted Toolbox server.
Without proper client authentication in these scenarios, attempts to connect or
make calls (like `LoadTool`) will likely fail with `Unauthorized` errors.
### How it works
The `ToolboxClient` allows you to specify TokenSources that dynamically generate HTTP headers for
every request sent to the Toolbox server. The most common use case is to add an
Authorization header with a bearer token (e.g., a Google ID token).
These header-generating functions are called just before each request, ensuring
that fresh credentials or header values can be used.
### Configuration
You can configure these dynamic headers as seen below:
```go
import (
	"golang.org/x/oauth2"

	"github.com/googleapis/mcp-toolbox-sdk-go/core"
	"github.com/googleapis/mcp-toolbox-sdk-go/tbadk"
)

tokenProvider := func() string {
	return "header3_value"
}

staticTokenSource := oauth2.StaticTokenSource(&oauth2.Token{AccessToken: "header2_value"})
dynamicTokenSource := core.NewCustomTokenSource(tokenProvider)

client, err := tbadk.NewToolboxClient(
	"toolbox-url",
	core.WithClientHeaderString("header1", "header1_value"),
	core.WithClientHeaderTokenSource("header2", staticTokenSource),
	core.WithClientHeaderTokenSource("header3", dynamicTokenSource),
)
```
### Authenticating with Google Cloud Servers
For Toolbox servers hosted on Google Cloud (e.g., Cloud Run) and requiring
`Google ID token` authentication, the helper module
[auth_methods](../core/_index.md#authenticating-with-google-cloud-servers) provides utility functions.
### Step by Step Guide for Cloud Run
1. **Configure Permissions**: [Grant](https://cloud.google.com/run/docs/securing/managing-access#service-add-principals) the `roles/run.invoker` IAM role on the Cloud
Run service to the principal. This could be your `user account email` or a
`service account`.
2. **Configure Credentials**
- Local Development: Set up
[ADC](https://cloud.google.com/docs/authentication/set-up-adc-local-dev-environment).
- Google Cloud Environments: When running within Google Cloud (e.g., Compute
Engine, GKE, another Cloud Run service, Cloud Functions), ADC is typically
configured automatically, using the environment's default service account.
3. **Connect to the Toolbox Server**
```go
import (
	"context"

	"github.com/googleapis/mcp-toolbox-sdk-go/core"
	"github.com/googleapis/mcp-toolbox-sdk-go/tbadk"
)

ctx := context.Background()
token, err := core.GetGoogleIDToken(ctx, URL)

client, err := tbadk.NewToolboxClient(
	URL,
	core.WithClientHeaderString("Authorization", token),
)

// Now, you can use the client as usual.
```
## Authenticating Tools
{{< notice warning >}} **Always use HTTPS** to connect your application with the Toolbox service,
especially in **production environments** or whenever the communication
involves **sensitive data** (including scenarios where tools require
authentication tokens). Using plain HTTP lacks encryption and exposes your
application and data to significant security risks, such as eavesdropping and tampering.
{{< /notice >}}
Tools can be configured within the Toolbox service to require authentication,
ensuring only authorized users or applications can invoke them, especially when
accessing sensitive data.
### When is Authentication Needed?
Authentication is configured per-tool within the Toolbox service itself. If a
tool you intend to use is marked as requiring authentication in the service, you
must configure the SDK client to provide the necessary credentials (currently
Oauth2 tokens) when invoking that specific tool.
### Supported Authentication Mechanisms
The Toolbox service enables secure tool usage through **Authenticated Parameters**.
For detailed information on how these mechanisms work within the Toolbox service and how to configure them, please refer to [Toolbox Service Documentation - Authenticated Parameters](../../../../configuration/tools/_index.md#authenticated-parameters).
### Step 1: Configure Tools in Toolbox Service
First, ensure the target tool(s) are configured correctly in the Toolbox service
to require authentication. Refer to the [Toolbox Service Documentation -
Authenticated
Parameters](../../../../configuration/tools/_index.md#authenticated-parameters)
for instructions.
### Step 2: Configure SDK Client
Your application needs a way to obtain the required OAuth2 token for the
authenticated user. The SDK requires you to provide a function capable of
retrieving this token *when the tool is invoked*.
#### Provide an ID Token Retriever Function
You must provide the SDK with a function that returns the
necessary token when called. The implementation depends on your application's
authentication flow (e.g., retrieving a stored token, initiating an OAuth flow).
{{< notice info >}}
The name used when registering the getter function with the SDK (e.g.,
`"my_api_token"`) must exactly match the `name` of the corresponding
`authServices` defined in the tool's configuration within the Toolbox service.
{{< /notice >}}
```go
func getAuthToken() string {
	// ... Logic to retrieve ID token (e.g., from local storage, OAuth flow)
	// This example just returns a placeholder. Replace with your actual token retrieval.
	return "YOUR_ID_TOKEN" // Placeholder
}
```
{{< notice tip >}} Your token retriever function is invoked every time an authenticated parameter
requires a token for a tool call. Consider implementing caching logic within
this function to avoid redundant token fetching or generation, especially for
tokens with longer validity periods or if the retrieval process is resource-intensive.
{{< /notice >}}
#### Option A: Add Default Authentication to a Client
You can add default tool-level authentication to a client.
Every tool or toolset loaded by the client will include the auth token.
```go
ctx := context.Background()

client, err := tbadk.NewToolboxClient("http://127.0.0.1:5000",
	core.WithDefaultToolOptions(
		core.WithAuthTokenString("my-auth-1", "auth-value"),
	),
)

AuthTool, err := client.LoadTool("my-tool", ctx)
```
#### Option B: Add Authentication to a Loaded Tool
You can add the token retriever function to a tool object *after* it has been
loaded. This modifies the specific tool instance.
```go
ctx := context.Background()
client, err := tbadk.NewToolboxClient("http://127.0.0.1:5000")
tool, err := client.LoadTool("my-tool", ctx)

AuthTool, err := tool.ToolFrom(
	core.WithAuthTokenSource("my-auth", headerTokenSource),
	core.WithAuthTokenString("my-auth-1", "value"),
)
```
#### Option C: Add Authentication While Loading Tools
You can provide the token retriever(s) directly during the `LoadTool` or
`LoadToolset` calls. This applies the authentication configuration only to the
tools loaded in that specific call, without modifying the original tool objects
if they were loaded previously.
```go
AuthTool, err := client.LoadTool("my-tool", ctx, core.WithAuthTokenString("my-auth-1", "value"))

// OR

AuthTools, err := client.LoadToolset(
	"my-toolset",
	ctx,
	core.WithAuthTokenString("my-auth-1", "value"),
)
```
{{< notice note >}}
Adding auth tokens during loading only affects the tools loaded within that call.
{{< /notice >}}
### Complete Authentication Example
```go
package main

import (
	"context"
	"fmt"

	"github.com/googleapis/mcp-toolbox-sdk-go/core"
	"github.com/googleapis/mcp-toolbox-sdk-go/tbadk"
)

func getAuthToken() string {
	// ... Logic to retrieve ID token (e.g., from local storage, OAuth flow)
	// This example just returns a placeholder. Replace with your actual token retrieval.
	return "YOUR_ID_TOKEN" // Placeholder
}

func main() {
	ctx := context.Background()
	inputs := map[string]any{"input": "some input"}
	dynamicTokenSource := core.NewCustomTokenSource(getAuthToken)

	client, err := tbadk.NewToolboxClient("http://127.0.0.1:5000")
	tool, err := client.LoadTool("my-tool", ctx)
	AuthTool, err := tool.ToolFrom(core.WithAuthTokenSource("my_auth", dynamicTokenSource))

	result, err := AuthTool.Run(ctx, inputs)
	fmt.Println(result)
}
```
{{< notice note >}}An auth token getter for a specific name (e.g., "GOOGLE_ID") will replace any client header with the same name followed by "_token" (e.g.,"GOOGLE_ID_token").
{{< /notice >}}
## Binding Parameter Values
The SDK allows you to pre-set, or "bind", values for specific tool parameters
before the tool is invoked or even passed to an LLM. These bound values are
fixed and will not be requested or modified by the LLM during tool use.
### Why Bind Parameters?
- **Protecting sensitive information:** API keys, secrets, etc.
- **Enforcing consistency:** Ensuring specific values for certain parameters.
- **Pre-filling known data:** Providing defaults or context.
{{< notice info >}}
The parameter names used for binding (e.g., `"api_key"`) must exactly match the
parameter names defined in the tool's configuration within the Toolbox service.
{{< /notice >}}
{{< notice note >}}
You do not need to modify the tool's configuration in the Toolbox service to bind parameter values using the SDK.
{{< /notice >}}
### Option A: Add Default Bound Parameters to a Client
You can add default tool-level bound parameters to a client. Every tool or
toolset loaded by the client will have the bound parameter.
```go
ctx := context.Background()

client, err := tbadk.NewToolboxClient("http://127.0.0.1:5000",
	core.WithDefaultToolOptions(
		core.WithBindParamString("param1", "value"),
	),
)

boundTool, err := client.LoadTool("my-tool", ctx)
```
### Option B: Binding Parameters to a Loaded Tool
Bind values to a tool object *after* it has been loaded. This modifies the
specific tool instance.
```go
client, err := tbadk.NewToolboxClient("http://127.0.0.1:5000")
tool, err := client.LoadTool("my-tool", ctx)

boundTool, err := tool.ToolFrom(
	core.WithBindParamString("param1", "value"),
	core.WithBindParamString("param2", "value"),
)
```
### Option C: Binding Parameters While Loading Tools
Specify bound parameters directly when loading tools. This applies the binding
only to the tools loaded in that specific call.
```go
boundTool, err := client.LoadTool("my-tool", ctx, core.WithBindParamString("param", "value"))
// OR
boundTool, err := client.LoadToolset("", ctx, core.WithBindParamString("param", "value"))
```
{{< notice note >}}
Bound values during loading only affect the tools loaded in that call.
{{< /notice >}}
### Binding Dynamic Values
Instead of a static value, you can bind a parameter to a synchronous or
asynchronous function. This function will be called *each time* the tool is
invoked to dynamically determine the parameter's value at runtime.
Provide a function that returns `(T, error)`, where `T` matches the parameter's type.
```go
getDynamicValue := func() (string, error) { return "req-123", nil }
dynamicBoundTool, err := tool.ToolFrom(core.WithBindParamStringFunc("param", getDynamicValue))
```
{{< notice info >}}
You don't need to modify tool configurations to bind parameter values.
{{< /notice >}}
## Default Parameters
Tools defined in the MCP Toolbox server can specify default values for their optional parameters. When invoking a tool using the SDK, if an input for a parameter with a default value is not provided, the SDK will automatically populate the request with the default value.
```go
tool, err := client.LoadTool("my-tool", ctx)

// If 'my-tool' has a parameter 'param2' with a default value of "default-value",
// we can omit 'param2' from the inputs.
inputs := map[string]any{"param1": "value"}

// The invocation will automatically use param2="default-value" if not provided
result, err := tool.Run(ctx, inputs)
```
## Using with ADK Go
After configuring the tool to your needs, pass a pointer to the `ToolboxTool` to the LLM agent so that it satisfies the generic `tool.Tool` interface.
### For a single tool
```go
toolboxtool, err := client.LoadTool("my-tool", ctx)

llmagent, err := llmagent.New(llmagent.Config{
	Name:        "assistant",
	Model:       model,
	Description: "Agent to answer questions.",
	Tools:       []tool.Tool{&toolboxtool},
})
```
### For a toolset
```go
toolboxtools, err := client.LoadToolset("", ctx)

toolsList := make([]tool.Tool, len(toolboxtools))
for i := range toolboxtools {
	toolsList[i] = &toolboxtools[i]
}

llmagent, err := llmagent.New(llmagent.Config{
	Name:        "assistant",
	Model:       model,
	Description: "Agent to answer questions.",
	Tools:       toolsList,
})
```
We pass a pointer to the `ToolboxTool` because ADK Go expects the generic `tool.Tool` interface, which `*ToolboxTool` implements. You can always convert it back to a `ToolboxTool` to access the specialized methods.
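Go's method-set rules explain why the pointer is needed. The toy interface and struct below only mimic the real ADK Go and Toolbox types: when a method is defined on a pointer receiver, only `*T` (not `T`) satisfies the interface.

```go
package main

import "fmt"

// Tool mimics ADK Go's generic tool interface (illustrative, not the real type).
type Tool interface {
	Name() string
}

// ToolboxTool mimics the SDK's tool struct (illustrative, not the real type).
type ToolboxTool struct {
	name string
}

// The method is defined on the pointer receiver, so only *ToolboxTool
// (not ToolboxTool) satisfies the Tool interface.
func (t *ToolboxTool) Name() string { return t.name }

func main() {
	tb := ToolboxTool{name: "my-tool"}
	var generic Tool = &tb // &tb satisfies Tool; a plain tb would not compile
	fmt.Println(generic.Name())
}
```

This is the same mechanism at work in the `[]tool.Tool{&toolboxtool}` and `toolsList[i] = &toolboxtools[i]` lines above.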
# Using with Orchestration Frameworks
To see how the MCP Toolbox Go SDK works with orchestration frameworks, check out these end-to-end examples given below.
## ADK Go
```go
// This sample contains a complete example of how to integrate the MCP Toolbox Go SDK with ADK Go using the tbadk package.
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/googleapis/mcp-toolbox-sdk-go/tbadk"
	"google.golang.org/adk/agent"
	"google.golang.org/adk/agent/llmagent"
	"google.golang.org/adk/model/gemini"
	"google.golang.org/adk/runner"
	"google.golang.org/adk/session"
	"google.golang.org/adk/tool"
	"google.golang.org/genai"
)

func main() {
	genaiKey := os.Getenv("GEMINI_API_KEY")
	toolboxURL := "http://localhost:5000"
	ctx := context.Background()

	// Initialize MCP Toolbox client
	toolboxClient, err := tbadk.NewToolboxClient(toolboxURL)
	if err != nil {
		log.Fatalf("Failed to create MCP Toolbox client: %v", err)
	}

	toolsetName := "my-toolset"
	toolset, err := toolboxClient.LoadToolset(toolsetName, ctx)
	if err != nil {
		log.Fatalf("Failed to load MCP toolset '%s': %v\nMake sure your Toolbox server is running.", toolsetName, err)
	}

	// Create Gemini model
	model, err := gemini.NewModel(ctx, "gemini-2.5-flash", &genai.ClientConfig{
		APIKey: genaiKey,
	})
	if err != nil {
		log.Fatalf("Failed to create model: %v", err)
	}

	tools := make([]tool.Tool, len(toolset))
	for i := range toolset {
		tools[i] = &toolset[i]
	}

	llmagent, err := llmagent.New(llmagent.Config{
		Name:        "hotel_assistant",
		Model:       model,
		Description: "Agent to answer questions about hotels.",
		Tools:       tools,
	})
	if err != nil {
		log.Fatalf("Failed to create agent: %v", err)
	}

	appName := "hotel_assistant"
	userID := "user-123"

	sessionService := session.InMemoryService()
	resp, err := sessionService.Create(ctx, &session.CreateRequest{
		AppName: appName,
		UserID:  userID,
	})
	if err != nil {
		log.Fatalf("Failed to create the session service: %v", err)
	}
	session := resp.Session

	r, err := runner.New(runner.Config{
		AppName:        appName,
		Agent:          llmagent,
		SessionService: sessionService,
	})
	if err != nil {
		log.Fatalf("Failed to create runner: %v", err)
	}

	query := "Find hotels with Basel in its name."
	fmt.Println(query)
	userMsg := genai.NewContentFromText(query, genai.RoleUser)
	streamingMode := agent.StreamingModeSSE

	for event, err := range r.Run(ctx, userID, session.ID(), userMsg, agent.RunConfig{
		StreamingMode: streamingMode,
	}) {
		if err != nil {
			fmt.Printf("\nAGENT_ERROR: %v\n", err)
		} else if event.LLMResponse.Content != nil {
			for _, p := range event.LLMResponse.Content.Parts {
				if streamingMode != agent.StreamingModeSSE || event.LLMResponse.Partial {
					fmt.Print(p.Text)
				}
			}
		}
	}
	fmt.Println()
}
```
========================================================================
## Core Package
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Connect to Toolbox > Client SDKs > Go > Core Package
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/connect-to/client-sdks/go-sdk/core/
**Description:** MCP Toolbox Core SDK for integrating functionalities of MCP Toolbox into your Agentic apps.
## Overview
The `core` package provides a Go interface to the MCP Toolbox service, enabling you to load and invoke tools from your own applications.
## Installation
```bash
go get github.com/googleapis/mcp-toolbox-sdk-go/core
```
This SDK is supported on Go version 1.24.4 and higher.
{{< notice note >}}
While the SDK itself is synchronous, you can execute its functions within goroutines to achieve asynchronous behavior.
{{< /notice >}}
{{< notice note >}}
**Breaking Change Notice**: As of version `0.6.0`, this repository has transitioned to a multi-module structure.
* **For new versions (`v0.6.0`+)**: You must import specific modules (e.g., `go get github.com/googleapis/mcp-toolbox-sdk-go/core`).
* **For older versions (`v0.5.1` and below)**: The repository remains a single-module library (`go get github.com/googleapis/mcp-toolbox-sdk-go`).
* Please update your imports and `go.mod` accordingly when upgrading.
{{< /notice >}}
## Quickstart
Here's a minimal example to get you started. Ensure your Toolbox service is
running and accessible.
```go
package main
import (
"context"
"fmt"
"github.com/googleapis/mcp-toolbox-sdk-go/core"
)
func quickstart() string {
ctx := context.Background()
inputs := map[string]any{"location": "London"}
client, err := core.NewToolboxClient("http://localhost:5000")
if err != nil {
return fmt.Sprintln("Could not start Toolbox Client", err)
}
tool, err := client.LoadTool("get_weather", ctx)
if err != nil {
return fmt.Sprintln("Could not load Toolbox Tool", err)
}
result, err := tool.Invoke(ctx, inputs)
if err != nil {
return fmt.Sprintln("Could not invoke tool", err)
}
return fmt.Sprintln(result)
}
func main() {
fmt.Println(quickstart())
}
```
## Usage
Import and initialize a Toolbox client, pointing it to the URL of your running
Toolbox service.
```go
import "github.com/googleapis/mcp-toolbox-sdk-go/core"
client, err := core.NewToolboxClient("http://localhost:5000")
```
All interactions for loading and invoking tools happen through this client.
{{< notice note >}}
For advanced use cases, you can provide an external custom `http.Client` during initialization (e.g., `core.NewToolboxClient(URL, core.WithHTTPClient(myClient))`).
If you provide your own session, you are responsible for managing its lifecycle; `ToolboxClient` *will not* close it.
{{< /notice >}}
{{< notice info >}}
Closing the `ToolboxClient` also closes the underlying network session shared by all tools loaded from that client. As a result, any tool instances you have loaded will cease to function and will raise an error if you attempt to invoke them after the client is closed.
{{< /notice >}}
## Transport Protocols
The SDK supports multiple transport protocols for communicating with the Toolbox server. By default, the client uses the latest supported version of the **Model Context Protocol (MCP)**.
You can explicitly select a protocol using the `core.WithProtocol` option during client initialization. This is useful if you need to use the native Toolbox HTTP protocol or pin the client to a specific legacy version of MCP.
{{< notice note >}}
* **Native Toolbox Transport**: This uses the service's native **REST over HTTP** API.
* **MCP Transports**: These options use the **Model Context Protocol over HTTP**.
{{< /notice >}}
### Supported Protocols
{{< notice note >}}
The native Toolbox protocol (`core.Toolbox`) is deprecated and will be removed on March 4, 2026. Please use `core.MCP` or specific MCP versions.
{{< /notice >}}
| Constant | Description |
| :--- | :--- |
| `core.MCP` | **(Default)** Alias for the latest supported MCP version (currently `v2025-06-18`). |
| `core.Toolbox` | **Deprecated** The native Toolbox HTTP protocol. |
| `core.MCPv20251125` | MCP Protocol version 2025-11-25. |
| `core.MCPv20250618` | MCP Protocol version 2025-06-18. |
| `core.MCPv20250326` | MCP Protocol version 2025-03-26. |
| `core.MCPv20241105` | MCP Protocol version 2024-11-05. |
### Example
If you wish to use the native Toolbox protocol:
```go
import "github.com/googleapis/mcp-toolbox-sdk-go/core"
client, err := core.NewToolboxClient(
"http://localhost:5000",
core.WithProtocol(core.Toolbox),
)
```
If you want to pin the MCP Version 2025-03-26:
```go
import "github.com/googleapis/mcp-toolbox-sdk-go/core"
client, err := core.NewToolboxClient(
"http://localhost:5000",
core.WithProtocol(core.MCPv20250326),
)
```
## Loading Tools
You can load tools individually or in groups (toolsets) as defined in your
Toolbox service configuration. Loading a toolset is convenient when working with
multiple related functions, while loading a single tool offers more granular
control.
### Load a toolset
A toolset is a collection of related tools. You can load all tools in a toolset
or a specific one:
```go
// Load default toolset by providing an empty string as the name
tools, err := client.LoadToolset("", ctx)
// Load a specific toolset
tools, err := client.LoadToolset("my-toolset", ctx)
```
### Load a single tool
Loads a specific tool by its unique name. This provides fine-grained control.
```go
tool, err := client.LoadTool("my-tool", ctx)
```
## Invoking Tools
Once loaded, tools behave like Go structs. You invoke them using the `Invoke`
method, passing arguments corresponding to the parameters defined in the tool's
configuration within the Toolbox service.
```go
tool, err := client.LoadTool("my-tool", ctx)
inputs := map[string]any{"location": "London"}
result, err := tool.Invoke(ctx, inputs)
```
{{< notice tip >}}
For a more comprehensive guide on setting up the Toolbox service itself, which you'll need running to use this SDK, please refer to the [Toolbox Quickstart Guide](../../../../../build-with-mcp-toolbox/local_quickstart.md).
{{< /notice >}}
## Client to Server Authentication
This section describes how to authenticate the ToolboxClient itself when
connecting to a Toolbox server instance that requires authentication. This is
crucial for securing your Toolbox server endpoint, especially when deployed on
platforms like Cloud Run, GKE, or any environment where unauthenticated access is restricted.
This client-to-server authentication ensures that the Toolbox server can verify
the identity of the client making the request before any tool is loaded or
called. It is different from [Authenticating Tools](#authenticating-tools),
which deals with providing credentials for specific tools within an already
connected Toolbox session.
### When is Client-to-Server Authentication Needed?
You'll need this type of authentication if your Toolbox server is configured to
deny unauthenticated requests. For example:
- Your Toolbox server is deployed on Cloud Run and configured to "Require authentication."
- Your server is behind an Identity-Aware Proxy (IAP) or a similar
authentication layer.
- You have custom authentication middleware on your self-hosted Toolbox server.
Without proper client authentication in these scenarios, attempts to connect or
make calls (like `LoadTool`) will likely fail with `Unauthorized` errors.
### How it works
The `ToolboxClient` allows you to specify token sources that dynamically generate HTTP headers for
every request sent to the Toolbox server. The most common use case is to add an
`Authorization` header with a bearer token (e.g., a Google ID token).
These header-generating functions are called just before each request, ensuring
that fresh credentials or header values can be used.
### Configuration
You can configure these dynamic headers as seen below:
```go
import (
"github.com/googleapis/mcp-toolbox-sdk-go/core"
"golang.org/x/oauth2"
)
tokenProvider := func() string {
return "header3_value"
}
staticTokenSource := oauth2.StaticTokenSource(&oauth2.Token{AccessToken: "header2_value"})
dynamicTokenSource := core.NewCustomTokenSource(tokenProvider)
client, err := core.NewToolboxClient(
"toolbox-url",
core.WithClientHeaderString("header1", "header1_value"),
core.WithClientHeaderTokenSource("header2", staticTokenSource),
core.WithClientHeaderTokenSource("header3", dynamicTokenSource),
)
```
### Authenticating with Google Cloud Servers
For Toolbox servers hosted on Google Cloud (e.g., Cloud Run) and requiring
`Google ID token` authentication, the helper module
[auth_methods](https://github.com/googleapis/mcp-toolbox-sdk-go/blob/main/core/auth.go) provides utility functions.
### Step by Step Guide for Cloud Run
1. **Configure Permissions**: [Grant](https://cloud.google.com/run/docs/securing/managing-access#service-add-principals) the `roles/run.invoker` IAM role on the Cloud
Run service to the principal. This could be your `user account email` or a
`service account`.
2. **Configure Credentials**
- Local Development: Set up
[ADC](https://cloud.google.com/docs/authentication/set-up-adc-local-dev-environment).
- Google Cloud Environments: When running within Google Cloud (e.g., Compute
Engine, GKE, another Cloud Run service, Cloud Functions), ADC is typically
configured automatically, using the environment's default service account.
3. **Connect to the Toolbox Server**
```go
import "github.com/googleapis/mcp-toolbox-sdk-go/core"
import "context"
ctx := context.Background()
token, err := core.GetGoogleIDToken(ctx, URL)
client, err := core.NewToolboxClient(
URL,
core.WithClientHeaderString("Authorization", token),
)
// Now, you can use the client as usual.
```
## Authenticating Tools
{{< notice info >}}
**Always use HTTPS** to connect your application with the Toolbox service, especially in **production environments** or whenever the communication involves **sensitive data** (including scenarios where tools require authentication tokens). Using plain HTTP lacks encryption and exposes your application and data to significant security risks, such as eavesdropping and tampering.
{{< /notice >}}
Tools can be configured within the Toolbox service to require authentication,
ensuring only authorized users or applications can invoke them, especially when
accessing sensitive data.
### When is Authentication Needed?
Authentication is configured per-tool within the Toolbox service itself. If a
tool you intend to use is marked as requiring authentication in the service, you
must configure the SDK client to provide the necessary credentials (currently
OAuth2 tokens) when invoking that specific tool.
### Supported Authentication Mechanisms
The Toolbox service enables secure tool usage through **Authenticated Parameters**.
For detailed information on how these mechanisms work within the Toolbox service and how to configure them, please refer to [Toolbox Service Documentation - Authenticated Parameters](../../../../configuration/tools/_index.md#authenticated-parameters).
### Step 1: Configure Tools in Toolbox Service
First, ensure the target tool(s) are configured correctly in the Toolbox service
to require authentication. Refer to the [Toolbox Service Documentation -
Authenticated
Parameters](../../../../configuration/tools/_index.md#authenticated-parameters)
for instructions.
### Step 2: Configure SDK Client
Your application needs a way to obtain the required OAuth2 token for the
authenticated user. The SDK requires you to provide a function capable of
retrieving this token *when the tool is invoked*.
#### Provide an ID Token Retriever Function
You must provide the SDK with a function that returns the
necessary token when called. The implementation depends on your application's
authentication flow (e.g., retrieving a stored token, initiating an OAuth flow).
{{< notice info >}}
The name used when registering the getter function with the SDK (e.g., `"my_api_token"`) must exactly match the `name` of the corresponding `authServices` defined in the tool's configuration within the Toolbox service.
{{< /notice >}}
```go
func getAuthToken() string {
// ... Logic to retrieve ID token (e.g., from local storage, OAuth flow)
// This example just returns a placeholder. Replace with your actual token retrieval.
return "YOUR_ID_TOKEN" // Placeholder
}
```
{{< notice tip >}}
Your token retriever function is invoked every time an authenticated parameter requires a token for a tool call. Consider implementing caching logic within this function to avoid redundant token fetching or generation, especially for tokens with longer validity periods or if the retrieval process is resource-intensive.
{{< /notice >}}
#### Option A: Add Default Authentication to a Client
You can add default tool-level authentication to a client. Every tool or
toolset loaded by the client will include the auth token.
```go
ctx := context.Background()
client, err := core.NewToolboxClient("http://127.0.0.1:5000",
core.WithDefaultToolOptions(
core.WithAuthTokenString("my-auth-1", "auth-value"),
),
)
AuthTool, err := client.LoadTool("my-tool", ctx)
```
#### Option B: Add Authentication to a Loaded Tool
You can add the token retriever function to a tool object *after* it has been
loaded. This modifies the specific tool instance.
```go
ctx := context.Background()
client, err := core.NewToolboxClient("http://127.0.0.1:5000")
tool, err := client.LoadTool("my-tool", ctx)
AuthTool, err := tool.ToolFrom(
core.WithAuthTokenSource("my-auth", headerTokenSource),
core.WithAuthTokenString("my-auth-1", "value"),
)
```
#### Option C: Add Authentication While Loading Tools
You can provide the token retriever(s) directly during the `LoadTool` or
`LoadToolset` calls. This applies the authentication configuration only to the
tools loaded in that specific call, without modifying the original tool objects
if they were loaded previously.
```go
AuthTool, err := client.LoadTool("my-tool", ctx, core.WithAuthTokenString("my-auth-1", "value"))
// or
AuthTools, err := client.LoadToolset(
"my-toolset",
ctx,
core.WithAuthTokenString("my-auth-1", "value"),
)
```
{{< notice note >}}
Adding auth tokens during loading only affects the tools loaded within that call.
{{< /notice >}}
### Complete Authentication Example
```go
import (
"context"
"fmt"
"github.com/googleapis/mcp-toolbox-sdk-go/core"
)
func getAuthToken() string {
// ... Logic to retrieve ID token (e.g., from local storage, OAuth flow)
// This example just returns a placeholder. Replace with your actual token retrieval.
return "YOUR_ID_TOKEN" // Placeholder
}
func main() {
ctx := context.Background()
inputs := map[string]any{"input": "some input"}
dynamicTokenSource := core.NewCustomTokenSource(getAuthToken)
client, err := core.NewToolboxClient("http://127.0.0.1:5000")
tool, err := client.LoadTool("my-tool", ctx)
AuthTool, err := tool.ToolFrom(core.WithAuthTokenSource("my_auth", dynamicTokenSource))
result, err := AuthTool.Invoke(ctx, inputs)
fmt.Println(result)
}
```
{{< notice note >}}
An auth token getter for a specific name (e.g., "GOOGLE_ID") will replace any client header with the same name followed by "_token" (e.g., "GOOGLE_ID_token").
{{< /notice >}}
## Binding Parameter Values
The SDK allows you to pre-set, or "bind", values for specific tool parameters
before the tool is invoked or even passed to an LLM. These bound values are
fixed and will not be requested or modified by the LLM during tool use.
### Why Bind Parameters?
- **Protecting sensitive information:** API keys, secrets, etc.
- **Enforcing consistency:** Ensuring specific values for certain parameters.
- **Pre-filling known data:** Providing defaults or context.
{{< notice info >}}
The parameter names used for binding (e.g., `"api_key"`) must exactly match the parameter names defined in the tool's configuration within the Toolbox service.
{{< /notice >}}
{{< notice note >}}
You do not need to modify the tool's configuration in the Toolbox service to bind parameter values using the SDK.
{{< /notice >}}
#### Option A: Add Default Bound Parameters to a Client
You can add default tool-level bound parameters to a client. Every tool or
toolset loaded by the client will have the bound parameter.
```go
ctx := context.Background()
client, err := core.NewToolboxClient("http://127.0.0.1:5000",
core.WithDefaultToolOptions(
core.WithBindParamString("param1", "value"),
),
)
boundTool, err := client.LoadTool("my-tool", ctx)
```
#### Option B: Binding Parameters to a Loaded Tool
Bind values to a tool object *after* it has been loaded. This modifies the
specific tool instance.
```go
client, err := core.NewToolboxClient("http://127.0.0.1:5000")
tool, err := client.LoadTool("my-tool", ctx)
boundTool, err := tool.ToolFrom(
core.WithBindParamString("param1", "value"),
core.WithBindParamString("param2", "value"),
)
```
#### Option C: Binding Parameters While Loading Tools
Specify bound parameters directly when loading tools. This applies the binding
only to the tools loaded in that specific call.
```go
boundTool, err := client.LoadTool("my-tool", ctx, core.WithBindParamString("param", "value"))
// OR
boundTool, err := client.LoadToolset("", ctx, core.WithBindParamString("param", "value"))
```
{{< notice note >}} Bound values during loading only affect the tools loaded in that call. {{< /notice >}}
### Binding Dynamic Values
Instead of a static value, you can bind a parameter to a synchronous or
asynchronous function. This function will be called *each time* the tool is
invoked to dynamically determine the parameter's value at runtime.
Any function with a return signature of `(T, error)`, where `T` matches the parameter's data type, can be provided.
```go
getDynamicValue := func() (string, error) { return "req-123", nil }
dynamicBoundTool, err := tool.ToolFrom(core.WithBindParamStringFunc("param", getDynamicValue))
```
{{< notice info >}} You don't need to modify tool configurations to bind parameter values. {{< /notice >}}
## Default Parameters
Tools defined in the MCP Toolbox server can specify default values for their optional parameters. When invoking a tool using the SDK, if an input for a parameter with a default value is not provided, the SDK will automatically populate the request with the default value.
```go
tool, err := client.LoadTool("my-tool", ctx)
// If 'my-tool' has a parameter 'param2' with a default value of "default-value",
// we can omit 'param2' from the inputs.
inputs := map[string]any{"param1": "value"}
// The invocation will automatically use param2="default-value" if not provided
result, err := tool.Invoke(ctx, inputs)
```
# Using with Orchestration Frameworks
To see how the MCP Toolbox Go SDK works with orchestration frameworks, check out the end-to-end examples below.
Google GenAI
```go
// This sample demonstrates integration with the standard Google GenAI framework.
package main
import (
"context"
"encoding/json"
"fmt"
"log"
"os"
"github.com/googleapis/mcp-toolbox-sdk-go/core"
"google.golang.org/genai"
)
// ConvertToGenaiTool translates a ToolboxTool into the genai.FunctionDeclaration format.
func ConvertToGenaiTool(toolboxTool *core.ToolboxTool) *genai.Tool {
inputschema, err := toolboxTool.InputSchema()
if err != nil {
return &genai.Tool{}
}
var schema *genai.Schema
_ = json.Unmarshal(inputschema, &schema)
// First, create the function declaration.
funcDeclaration := &genai.FunctionDeclaration{
Name: toolboxTool.Name(),
Description: toolboxTool.Description(),
Parameters: schema,
}
// Then, wrap the function declaration in a genai.Tool struct.
return &genai.Tool{
FunctionDeclarations: []*genai.FunctionDeclaration{funcDeclaration},
}
}
// printResponse extracts and prints the relevant parts of the model's response.
func printResponse(resp *genai.GenerateContentResponse) {
for _, cand := range resp.Candidates {
if cand.Content != nil {
for _, part := range cand.Content.Parts {
fmt.Println(part.Text)
}
}
}
}
func main() {
// Setup
ctx := context.Background()
apiKey := os.Getenv("GOOGLE_API_KEY")
toolboxURL := "http://localhost:5000"
// Initialize the Google GenAI client using the explicit ClientConfig.
client, err := genai.NewClient(ctx, &genai.ClientConfig{
APIKey: apiKey,
})
if err != nil {
log.Fatalf("Failed to create Google GenAI client: %v", err)
}
// Initialize the MCP Toolbox client.
toolboxClient, err := core.NewToolboxClient(toolboxURL)
if err != nil {
log.Fatalf("Failed to create Toolbox client: %v", err)
}
// Load the tools using the MCP Toolbox SDK.
tools, err := toolboxClient.LoadToolset("my-toolset", ctx)
if err != nil {
log.Fatalf("Failed to load tools: %v\nMake sure your Toolbox server is running and the tool is configured.", err)
}
genAITools := make([]*genai.Tool, len(tools))
toolsMap := make(map[string]*core.ToolboxTool, len(tools))
for i, tool := range tools {
// Convert the tools into usable format
genAITools[i] = ConvertToGenaiTool(tool)
// Add tool to a map for lookup later
toolsMap[tool.Name()] = tool
}
// Set up the generative model with the available tool.
modelName := "gemini-2.0-flash"
query := "Find hotels in Basel with Basel in its name and share the names with me"
// Create the initial content prompt for the model.
contents := []*genai.Content{
genai.NewContentFromText(query, genai.RoleUser),
}
config := &genai.GenerateContentConfig{
Tools: genAITools,
ToolConfig: &genai.ToolConfig{
FunctionCallingConfig: &genai.FunctionCallingConfig{
Mode: genai.FunctionCallingConfigModeAny,
},
},
}
genContentResp, err := client.Models.GenerateContent(ctx, modelName, contents, config)
if err != nil {
log.Fatalf("Error calling GenerateContent: %v", err)
}
printResponse(genContentResp)
functionCalls := genContentResp.FunctionCalls()
if len(functionCalls) == 0 {
log.Println("No function call returned by the AI. The model likely answered directly.")
return
}
// Process the first function call (the example assumes one for simplicity).
fc := functionCalls[0]
log.Printf("--- Gemini requested function call: %s ---\n", fc.Name)
log.Printf("--- Arguments: %+v ---\n", fc.Args)
var toolResultString string
if fc.Name == "search-hotels-by-name" {
tool := toolsMap["search-hotels-by-name"]
toolResult, err := tool.Invoke(ctx, fc.Args)
if err != nil {
log.Fatalf("Failed to execute tool '%s': %v", fc.Name, err)
}
toolResultString = fmt.Sprintf("%v", toolResult)
} else {
log.Println("LLM did not request our tool")
}
resultContents := []*genai.Content{
genai.NewContentFromText("The tool returned this result, share it with the user based on their previous queries: "+toolResultString, genai.RoleUser),
}
finalResponse, err := client.Models.GenerateContent(ctx, modelName, resultContents, &genai.GenerateContentConfig{})
if err != nil {
log.Fatalf("Error calling GenerateContent (with function result): %v", err)
}
log.Println("=== Final Response from Model (after processing function result) ===")
printResponse(finalResponse)
}
```
LangChain
```go
// This sample demonstrates how to use Toolbox tools as function definitions in LangChain Go.
package main
import (
"context"
"encoding/json"
"fmt"
"log"
"os"
"github.com/googleapis/mcp-toolbox-sdk-go/core"
"github.com/tmc/langchaingo/llms"
"github.com/tmc/langchaingo/llms/googleai"
)
// ConvertToLangchainTool converts a generic core.ToolboxTool into a LangChainGo llms.Tool.
func ConvertToLangchainTool(toolboxTool *core.ToolboxTool) llms.Tool {
// Fetch the tool's input schema
inputschema, err := toolboxTool.InputSchema()
if err != nil {
return llms.Tool{}
}
var paramsSchema map[string]any
_ = json.Unmarshal(inputschema, &paramsSchema)
// Convert into LangChain's llms.Tool
return llms.Tool{
Type: "function",
Function: &llms.FunctionDefinition{
Name: toolboxTool.Name(),
Description: toolboxTool.Description(),
Parameters: paramsSchema,
},
}
}
func main() {
genaiKey := os.Getenv("GOOGLE_API_KEY")
toolboxURL := "http://localhost:5000"
ctx := context.Background()
// Initialize the Google AI client (LLM).
llm, err := googleai.New(ctx, googleai.WithAPIKey(genaiKey), googleai.WithDefaultModel("gemini-1.5-flash"))
if err != nil {
log.Fatalf("Failed to create Google AI client: %v", err)
}
// Initialize the MCP Toolbox client.
toolboxClient, err := core.NewToolboxClient(toolboxURL)
if err != nil {
log.Fatalf("Failed to create Toolbox client: %v", err)
}
// Load the tools using the MCP Toolbox SDK.
tools, err := toolboxClient.LoadToolset("my-toolset", ctx)
if err != nil {
log.Fatalf("Failed to load tools: %v\nMake sure your Toolbox server is running and the tool is configured.", err)
}
toolsMap := make(map[string]*core.ToolboxTool, len(tools))
langchainTools := make([]llms.Tool, len(tools))
for i, tool := range tools {
// Convert the loaded ToolboxTools into the format LangChainGo requires.
langchainTools[i] = ConvertToLangchainTool(tool)
// Add tool to a map for lookup later
toolsMap[tool.Name()] = tool
}
// Start the conversation history.
messageHistory := []llms.MessageContent{
llms.TextParts(llms.ChatMessageTypeHuman, "Find hotels in Basel with Basel in its name."),
}
// Make the first call to the LLM, making it aware of the tool.
resp, err := llm.GenerateContent(ctx, messageHistory, llms.WithTools(langchainTools))
if err != nil {
log.Fatalf("LLM call failed: %v", err)
}
// Add the model's response (which should be a tool call) to the history.
respChoice := resp.Choices[0]
assistantResponse := llms.TextParts(llms.ChatMessageTypeAI, respChoice.Content)
for _, tc := range respChoice.ToolCalls {
assistantResponse.Parts = append(assistantResponse.Parts, tc)
}
messageHistory = append(messageHistory, assistantResponse)
// Process each tool call requested by the model.
for _, tc := range respChoice.ToolCalls {
toolName := tc.FunctionCall.Name
switch tc.FunctionCall.Name {
case "search-hotels-by-name":
var args map[string]any
if err := json.Unmarshal([]byte(tc.FunctionCall.Arguments), &args); err != nil {
log.Fatalf("Failed to unmarshal arguments for tool '%s': %v", toolName, err)
}
tool := toolsMap["search-hotels-by-name"]
toolResult, err := tool.Invoke(ctx, args)
if err != nil {
log.Fatalf("Failed to execute tool '%s': %v", toolName, err)
}
// Create the tool call response message and add it to the history.
toolResponse := llms.MessageContent{
Role: llms.ChatMessageTypeTool,
Parts: []llms.ContentPart{
llms.ToolCallResponse{
Name: toolName,
Content: fmt.Sprintf("%v", toolResult),
},
},
}
messageHistory = append(messageHistory, toolResponse)
default:
log.Fatalf("got unexpected function call: %v", tc.FunctionCall.Name)
}
}
// Final LLM Call for Natural Language Response
log.Println("Sending tool response back to LLM for a final answer...")
// Call the LLM again with the updated history, which now includes the tool's result.
finalResp, err := llm.GenerateContent(ctx, messageHistory)
if err != nil {
log.Fatalf("Final LLM call failed: %v", err)
}
// Display the Result
fmt.Println("\n======================================")
fmt.Println("Final Response from LLM:")
fmt.Println(finalResp.Choices[0].Content)
fmt.Println("======================================")
}
```
OpenAI
```go
// This sample demonstrates integration with the OpenAI Go client.
package main
import (
"context"
"encoding/json"
"fmt"
"log"
"github.com/googleapis/mcp-toolbox-sdk-go/core"
openai "github.com/openai/openai-go"
)
// ConvertToOpenAITool converts a ToolboxTool into the openai-go library's Tool format.
func ConvertToOpenAITool(toolboxTool *core.ToolboxTool) openai.ChatCompletionToolParam {
// Get the input schema
jsonSchemaBytes, err := toolboxTool.InputSchema()
if err != nil {
return openai.ChatCompletionToolParam{}
}
// Unmarshal the JSON bytes into FunctionParameters
var paramsSchema openai.FunctionParameters
if err := json.Unmarshal(jsonSchemaBytes, &paramsSchema); err != nil {
return openai.ChatCompletionToolParam{}
}
// Create and return the final tool parameter struct.
return openai.ChatCompletionToolParam{
Function: openai.FunctionDefinitionParam{
Name: toolboxTool.Name(),
Description: openai.String(toolboxTool.Description()),
Parameters: paramsSchema,
},
}
}
func main() {
// Setup
ctx := context.Background()
toolboxURL := "http://localhost:5000"
openAIClient := openai.NewClient()
// Initialize the MCP Toolbox client.
toolboxClient, err := core.NewToolboxClient(toolboxURL)
if err != nil {
log.Fatalf("Failed to create Toolbox client: %v", err)
}
// Load the tools using the MCP Toolbox SDK.
tools, err := toolboxClient.LoadToolset("my-toolset", ctx)
if err != nil {
log.Fatalf("Failed to load tool : %v\nMake sure your Toolbox server is running and the tool is configured.", err)
}
openAITools := make([]openai.ChatCompletionToolParam, len(tools))
toolsMap := make(map[string]*core.ToolboxTool, len(tools))
for i, tool := range tools {
// Convert the Toolbox tool into the openAI FunctionDeclaration format.
openAITools[i] = ConvertToOpenAITool(tool)
// Add tool to a map for lookup later
toolsMap[tool.Name()] = tool
}
question := "Find hotels in Basel with Basel in its name"
params := openai.ChatCompletionNewParams{
Messages: []openai.ChatCompletionMessageParamUnion{
openai.UserMessage(question),
},
Tools: openAITools,
Seed: openai.Int(0),
Model: openai.ChatModelGPT4o,
}
// Make initial chat completion request
completion, err := openAIClient.Chat.Completions.New(ctx, params)
if err != nil {
panic(err)
}
toolCalls := completion.Choices[0].Message.ToolCalls
// Return early if there are no tool calls
if len(toolCalls) == 0 {
fmt.Printf("No function call")
return
}
// If there was a function call, continue the conversation
params.Messages = append(params.Messages, completion.Choices[0].Message.ToParam())
for _, toolCall := range toolCalls {
if toolCall.Function.Name == "search-hotels-by-name" {
// Extract the location from the function call arguments
var args map[string]interface{}
tool := toolsMap["search-hotels-by-name"]
err := json.Unmarshal([]byte(toolCall.Function.Arguments), &args)
if err != nil {
panic(err)
}
result, err := tool.Invoke(ctx, args)
if err != nil {
log.Fatal("Could not invoke tool", err)
}
params.Messages = append(params.Messages, openai.ToolMessage(result.(string), toolCall.ID))
}
}
completion, err = openAIClient.Chat.Completions.New(ctx, params)
if err != nil {
panic(err)
}
fmt.Println(completion.Choices[0].Message.Content)
}
```
========================================================================
## Genkit Package
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Connect to Toolbox > Client SDKs > Go > Genkit Package
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/connect-to/client-sdks/go-sdk/tbgenkit/
**Description:** MCP Toolbox Genkit for integrating functionalities of MCP Toolbox into your Agentic apps.
## Overview
The `tbgenkit` package provides a Go interface to the MCP Toolbox service, enabling you to load and invoke tools from your own applications.
## Installation
```bash
go get github.com/googleapis/mcp-toolbox-sdk-go/tbgenkit
```
This SDK is supported on Go version 1.24.4 and higher.
{{< notice note >}}
**Breaking Change Notice**: As of version `0.6.0`, this repository has transitioned to a multi-module structure.
* **For new versions (`v0.6.0`+)**: You must import specific modules (e.g., `go get github.com/googleapis/mcp-toolbox-sdk-go/tbgenkit`).
* **For older versions (`v0.5.1` and below)**: The repository remains a single-module library (`go get github.com/googleapis/mcp-toolbox-sdk-go`).
* Please update your imports and `go.mod` accordingly when upgrading.
{{< /notice >}}
## Quickstart
For more information on how to load a `ToolboxTool`, see [the core package](https://github.com/googleapis/mcp-toolbox-sdk-go/tree/main/core)
## Convert Toolbox Tool to a Genkit Tool
```go
import "github.com/googleapis/mcp-toolbox-sdk-go/tbgenkit"
func main() {
// Assuming the toolbox tool is loaded
// Make sure to add error checks for debugging
ctx := context.Background()
g, err := genkit.Init(ctx)
genkitTool, err := tbgenkit.ToGenkitTool(toolboxTool, g)
}
```
# Using with Orchestration Frameworks
To see how the MCP Toolbox Go SDK works with orchestration frameworks, check out the end-to-end examples below.
Genkit Go
```go
// This sample shows a complete example of integrating the MCP Toolbox Go SDK with Genkit Go using the tbgenkit package.
package main
import (
"context"
"fmt"
"log"
"github.com/googleapis/mcp-toolbox-sdk-go/core"
"github.com/googleapis/mcp-toolbox-sdk-go/tbgenkit"
"github.com/firebase/genkit/go/ai"
"github.com/firebase/genkit/go/genkit"
"github.com/firebase/genkit/go/plugins/googlegenai"
)
func main() {
ctx := context.Background()
toolboxClient, err := core.NewToolboxClient("http://127.0.0.1:5000")
if err != nil {
log.Fatalf("Failed to create Toolbox client: %v", err)
}
// Load the tools using the MCP Toolbox SDK.
tools, err := toolboxClient.LoadToolset("my-toolset", ctx)
if err != nil {
log.Fatalf("Failed to load tools: %v\nMake sure your Toolbox server is running and the tool is configured.", err)
}
// Initialize genkit
g := genkit.Init(ctx,
genkit.WithPlugins(&googlegenai.GoogleAI{}),
genkit.WithDefaultModel("googleai/gemini-1.5-flash"),
)
// Convert your tool to a Genkit tool.
genkitTools := make([]ai.Tool, len(tools))
for i, tool := range tools {
newTool, err := tbgenkit.ToGenkitTool(tool, g)
if err != nil {
log.Fatalf("Failed to convert tool: %v\n", err)
}
genkitTools[i] = newTool
}
toolRefs := make([]ai.ToolRef, len(genkitTools))
for i, tool := range genkitTools {
toolRefs[i] = tool
}
// Generate llm response using prompts and tools.
resp, err := genkit.Generate(ctx, g,
ai.WithPrompt("Find hotels in Basel with Basel in its name."),
ai.WithTools(toolRefs...),
)
if err != nil {
log.Fatalf("%v\n", err)
}
fmt.Println(resp.Text())
}
```
========================================================================
## MCP Client
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Connect to Toolbox > MCP Client
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/connect-to/mcp-client/
**Description:** How to connect to Toolbox from a MCP Client.
## Toolbox SDKs vs Model Context Protocol (MCP)
Toolbox now supports connections via both the native Toolbox SDKs and via [Model
Context Protocol (MCP)](https://modelcontextprotocol.io/). However, Toolbox has
several features which are not supported in the MCP specification (such as
Authenticated Parameters and Authorized invocation).
We recommend using the native SDKs over MCP clients to leverage these features.
The native SDKs can be combined with MCP clients in many cases.
### Protocol Versions
Toolbox currently supports the following versions of MCP specification:
* [2025-11-25](https://modelcontextprotocol.io/specification/2025-11-25)
* [2025-06-18](https://modelcontextprotocol.io/specification/2025-06-18)
* [2025-03-26](https://modelcontextprotocol.io/specification/2025-03-26)
* [2024-11-05](https://modelcontextprotocol.io/specification/2024-11-05)
### Toolbox AuthZ/AuthN Not Supported by MCP
The auth implementation in Toolbox is not supported in MCP's auth specification.
This includes:
* [Authenticated Parameters](../../configuration/tools/_index.md#authenticated-parameters)
* [Authorized Invocations](../../configuration/tools/_index.md#authorized-invocations)
## Connecting to Toolbox with an MCP client
### Before you begin
{{< notice note >}}
MCP is only compatible with Toolbox version 0.3.0 and above.
{{< /notice >}}
1. [Install](../../introduction/_index.md#installing-the-server)
Toolbox version 0.3.0+.
1. Make sure you've set up and initialized your database.
1. [Set up](../../configuration/_index.md) your `tools.yaml` file.
### Connecting via Standard Input/Output (stdio)
Toolbox supports the
[stdio](https://modelcontextprotocol.io/docs/concepts/transports#standard-input%2Foutput-stdio)
transport protocol. Users who wish to use stdio must include the
`--stdio` flag when running Toolbox.
```bash
./toolbox --stdio
```
When running with stdio, Toolbox will listen via stdio instead of acting as a
remote HTTP server. Logs will be set to the `warn` level by default. `debug` and
`info` logs are not supported with stdio.
{{< notice note >}}
Toolbox enables dynamic reloading by default. To disable, use the
`--disable-reload` flag.
{{< /notice >}}
### Connecting via HTTP
Toolbox supports the HTTP transport protocol with and without SSE.
{{< tabpane text=true >}} {{% tab header="HTTP with SSE (deprecated)" lang="en" %}}
Add the following configuration to your MCP client configuration:
```json
{
  "mcpServers": {
    "toolbox": {
      "type": "sse",
      "url": "http://127.0.0.1:5000/mcp/sse"
    }
  }
}
```
If you would like to connect to a specific toolset, replace `url` with
`"http://127.0.0.1:5000/mcp/{toolset_name}/sse"`.
HTTP with SSE is only supported in version `2024-11-05` and is currently
deprecated.
{{% /tab %}} {{% tab header="Streamable HTTP" lang="en" %}}
Add the following configuration to your MCP client configuration:
```json
{
  "mcpServers": {
    "toolbox": {
      "type": "http",
      "url": "http://127.0.0.1:5000/mcp"
    }
  }
}
```
If you would like to connect to a specific toolset, replace `url` with
`"http://127.0.0.1:5000/mcp/{toolset_name}"`.
{{% /tab %}} {{< /tabpane >}}
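The endpoint patterns above (`/mcp`, `/mcp/{toolset_name}`, and the deprecated `/sse` suffix) can be sketched as a small helper. This is purely illustrative: `mcpEndpoint` is not part of any Toolbox SDK, just a convenience for templating client configs.

```go
package main

import "fmt"

// mcpEndpoint builds a Toolbox MCP endpoint URL following the patterns
// described above. base is the server address (e.g. "http://127.0.0.1:5000"),
// toolset may be empty to expose all tools, and sse selects the deprecated
// HTTP-with-SSE variant.
func mcpEndpoint(base, toolset string, sse bool) string {
	url := base + "/mcp"
	if toolset != "" {
		url += "/" + toolset
	}
	if sse {
		url += "/sse"
	}
	return url
}

func main() {
	fmt.Println(mcpEndpoint("http://127.0.0.1:5000", "", false))
	fmt.Println(mcpEndpoint("http://127.0.0.1:5000", "my-toolset", false))
	fmt.Println(mcpEndpoint("http://127.0.0.1:5000", "my-toolset", true))
}
```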
### Using the MCP Inspector with Toolbox
Use the MCP [Inspector](https://github.com/modelcontextprotocol/inspector) for
testing and debugging the Toolbox server.
{{< tabpane text=true >}}
{{% tab header="STDIO" lang="en" %}}
1. Run Inspector with Toolbox as a subprocess:
```bash
npx @modelcontextprotocol/inspector ./toolbox --stdio
```
1. From the `Transport Type` dropdown menu, select `STDIO`.
1. In `Command`, make sure that it is set to `./toolbox` (or the correct path
   to where the Toolbox binary is installed).
1. In `Arguments`, make sure that it's filled with `--stdio`.
1. Click the `Connect` button. It might take a while for Toolbox to spin up.
   Voila! You should be able to inspect your Toolbox tools!
{{% /tab %}}
{{% tab header="HTTP with SSE (deprecated)" lang="en" %}}
1. [Run Toolbox](../../introduction/_index.md#running-the-server).
1. In a separate terminal, run Inspector directly through `npx`:
```bash
npx @modelcontextprotocol/inspector
```
1. From the `Transport Type` dropdown menu, select `SSE`.
1. For `URL`, type in `http://127.0.0.1:5000/mcp/sse` to use all tools, or
   `http://127.0.0.1:5000/mcp/{toolset_name}/sse` to use a specific toolset.
1. Click the `Connect` button. Voila! You should be able to inspect your Toolbox
   tools!
{{% /tab %}}
{{% tab header="Streamable HTTP" lang="en" %}}
1. [Run Toolbox](../../introduction/_index.md#running-the-server).
1. In a separate terminal, run Inspector directly through `npx`:
```bash
npx @modelcontextprotocol/inspector
```
1. From the `Transport Type` dropdown menu, select `Streamable HTTP`.
1. For `URL`, type in `http://127.0.0.1:5000/mcp` to use all tools, or
   `http://127.0.0.1:5000/mcp/{toolset_name}` to use a specific toolset.
1. Click the `Connect` button. Voila! You should be able to inspect your Toolbox
   tools!
{{% /tab %}} {{< /tabpane >}}
### Tested Clients
| Client | SSE Works | MCP Config Docs |
|--------------------|------------|---------------------------------------------------------------------------------|
| Claude Desktop | ✅ | |
| MCP Inspector | ✅ | |
| Cursor | ✅ | |
| Windsurf | ✅ | |
| VS Code (Insiders) | ✅ | |
========================================================================
## Gemini CLI Extensions
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Connect to Toolbox > Gemini CLI Extensions
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/connect-to/gemini-cli/
**Description:** Connect to Toolbox via Gemini CLI Extensions.
## Gemini CLI Extensions
[Gemini CLI][gemini-cli] is an open-source AI agent that assists with
development workflows such as coding, debugging, data exploration, and content
creation. Its mission is to provide an agentic interface for interacting with
database and analytics services and popular open-source databases.
### How extensions work
Gemini CLI is highly extensible, allowing for the addition of new tools and
capabilities through extensions. You can load the extensions from a GitHub URL,
a local directory, or a configurable registry. They provide new tools, slash
commands, and prompts to assist with your workflow.
Use the Gemini CLI Extensions to load prebuilt or custom tools to interact with
your databases.
[gemini-cli]: https://google-gemini.github.io/gemini-cli/
Below is a list of Gemini CLI Extensions powered by MCP Toolbox:
* [alloydb](https://github.com/gemini-cli-extensions/alloydb)
* [alloydb-observability](https://github.com/gemini-cli-extensions/alloydb-observability)
* [bigquery-conversational-analytics](https://github.com/gemini-cli-extensions/bigquery-conversational-analytics)
* [bigquery-data-analytics](https://github.com/gemini-cli-extensions/bigquery-data-analytics)
* [cloud-sql-mysql](https://github.com/gemini-cli-extensions/cloud-sql-mysql)
* [cloud-sql-mysql-observability](https://github.com/gemini-cli-extensions/cloud-sql-mysql-observability)
* [cloud-sql-postgresql](https://github.com/gemini-cli-extensions/cloud-sql-postgresql)
* [cloud-sql-postgresql-observability](https://github.com/gemini-cli-extensions/cloud-sql-postgresql-observability)
* [cloud-sql-sqlserver](https://github.com/gemini-cli-extensions/cloud-sql-sqlserver)
* [cloud-sql-sqlserver-observability](https://github.com/gemini-cli-extensions/cloud-sql-sqlserver-observability)
* [dataplex](https://github.com/gemini-cli-extensions/dataplex)
* [firestore-native](https://github.com/gemini-cli-extensions/firestore-native)
* [looker](https://github.com/gemini-cli-extensions/looker)
* [mcp-toolbox](https://github.com/gemini-cli-extensions/mcp-toolbox)
* [mysql](https://github.com/gemini-cli-extensions/mysql)
* [postgres](https://github.com/gemini-cli-extensions/postgres)
* [spanner](https://github.com/gemini-cli-extensions/spanner)
* [sql-server](https://github.com/gemini-cli-extensions/sql-server)
========================================================================
## IDEs
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Connect to Toolbox > IDEs
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/connect-to/ides/
**Description:** List of guides detailing how to connect your AI tools (IDEs) to Toolbox using MCP.
========================================================================
## AlloyDB Admin API using MCP
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Connect to Toolbox > IDEs > AlloyDB Admin API using MCP
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/connect-to/ides/alloydb_pg_admin_mcp/
**Description:** Create your AlloyDB database with MCP Toolbox.
========================================================================
## AlloyDB using MCP
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Connect to Toolbox > IDEs > AlloyDB using MCP
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/connect-to/ides/alloydb_pg_mcp/
**Description:** Connect your IDE to AlloyDB using Toolbox.
========================================================================
## BigQuery using MCP
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Connect to Toolbox > IDEs > BigQuery using MCP
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/connect-to/ides/bigquery_mcp/
**Description:** Connect your IDE to BigQuery using Toolbox.
========================================================================
## Cloud SQL for MySQL using MCP
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Connect to Toolbox > IDEs > Cloud SQL for MySQL using MCP
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/connect-to/ides/cloud_sql_mysql_mcp/
**Description:** Connect your IDE to Cloud SQL for MySQL using Toolbox.
========================================================================
## Cloud SQL for Postgres using MCP
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Connect to Toolbox > IDEs > Cloud SQL for Postgres using MCP
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/connect-to/ides/cloud_sql_pg_mcp/
**Description:** Connect your IDE to Cloud SQL for Postgres using Toolbox.
========================================================================
## Cloud SQL for SQL Server using MCP
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Connect to Toolbox > IDEs > Cloud SQL for SQL Server using MCP
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/connect-to/ides/cloud_sql_mssql_mcp/
**Description:** Connect your IDE to Cloud SQL for SQL Server using Toolbox.
========================================================================
## Firestore using MCP
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Connect to Toolbox > IDEs > Firestore using MCP
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/connect-to/ides/firestore_mcp/
**Description:** Connect your IDE to Firestore using Toolbox.
========================================================================
## Looker using MCP
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Connect to Toolbox > IDEs > Looker using MCP
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/connect-to/ides/looker_mcp/
**Description:** Connect your IDE to Looker using Toolbox.
[Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction) is
an open protocol for connecting Large Language Models (LLMs) to data sources
like Postgres. This guide covers how to use [MCP Toolbox for Databases][toolbox]
to expose your developer assistant tools to a Looker instance:
* [Gemini-CLI][gemini-cli]
* [Cursor][cursor]
* [Windsurf][windsurf] (Codeium)
* [Visual Studio Code][vscode] (Copilot)
* [Cline][cline] (VS Code extension)
* [Claude desktop][claudedesktop]
* [Claude code][claudecode]
* [Antigravity][antigravity]
[toolbox]: https://github.com/googleapis/genai-toolbox
[gemini-cli]: #configure-your-mcp-client
[cursor]: #configure-your-mcp-client
[windsurf]: #configure-your-mcp-client
[vscode]: #configure-your-mcp-client
[cline]: #configure-your-mcp-client
[claudedesktop]: #configure-your-mcp-client
[claudecode]: #configure-your-mcp-client
[antigravity]: #connect-with-antigravity
## Set up Looker
1. Get a Looker Client ID and Client Secret. Follow the directions
[here](https://cloud.google.com/looker/docs/api-auth#authentication_with_an_sdk).
1. Have the base URL of your Looker instance available. It is likely
something like `https://looker.example.com`. In some cases the API is
listening at a different port, and you will need to use
`https://looker.example.com:19999` instead.
## Connect with Antigravity
You can connect Looker to Antigravity in the following ways:
* Using the MCP Store
* Using a custom configuration
{{< notice note >}}
You don't need to download the MCP Toolbox binary to use these methods.
{{< /notice >}}
{{< tabpane text=true >}}
{{% tab header="MCP Store" lang="en" %}}
The most straightforward way to connect to Looker in Antigravity is by using the built-in MCP Store.
1. Open Antigravity and open the editor's agent panel.
1. Click the **"..."** icon at the top of the panel and select **MCP Servers**.
1. Locate **Looker** in the list of available servers and click Install.
1. Follow the on-screen prompts to securely link your accounts where applicable.
After you install Looker in the MCP Store, resources and tools from the server are automatically available to the editor.
{{% /tab %}}
{{% tab header="Custom config" lang="en" %}}
To connect to a custom MCP server, follow these steps:
1. Open Antigravity and navigate to the MCP store using the **"..."** drop-down at the top of the editor's agent panel.
1. To open the **mcp_config.json** file, click **MCP Servers** and then click **Manage MCP Servers > View raw config**.
1. Add the following configuration, replace the environment variables with your values, and save.
```json
{
"mcpServers": {
"looker": {
"command": "npx",
"args": ["-y", "@toolbox-sdk/server", "--prebuilt", "looker", "--stdio"],
"env": {
"LOOKER_BASE_URL": "https://looker.example.com",
"LOOKER_CLIENT_ID": "your-client-id",
"LOOKER_CLIENT_SECRET": "your-client-secret"
}
}
}
}
```
{{% /tab %}}
{{< /tabpane >}}
## Install MCP Toolbox
1. Download the latest version of Toolbox as a binary. Select the [correct
binary](https://github.com/googleapis/genai-toolbox/releases) corresponding
to your OS and CPU architecture. Toolbox version v0.10.0 or later is
required:
{{< tabpane persist=header >}}
{{< tab header="linux/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.28.0/linux/amd64/toolbox
{{< /tab >}}
{{< tab header="darwin/arm64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.28.0/darwin/arm64/toolbox
{{< /tab >}}
{{< tab header="darwin/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.28.0/darwin/amd64/toolbox
{{< /tab >}}
{{< tab header="windows/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.28.0/windows/amd64/toolbox.exe
{{< /tab >}}
{{< /tabpane >}}
1. Make the binary executable:
```bash
chmod +x toolbox
```
1. Verify the installation:
```bash
./toolbox --version
```
## Configure your MCP Client
{{< tabpane text=true >}}
{{% tab header="Gemini-CLI" lang="en" %}}
1. Install
[Gemini-CLI](https://github.com/google-gemini/gemini-cli#install-globally-with-npm).
1. Create a directory `.gemini` in your home directory if it doesn't exist.
1. Create the file `.gemini/settings.json` if it doesn't exist.
1. Add the following configuration, or add the mcpServers stanza if you already
have a `settings.json` with content. Replace the path to the toolbox
executable and the environment variables with your values, and save:
```json
{
"mcpServers": {
"looker-toolbox": {
"command": "./PATH/TO/toolbox",
"args": ["--stdio", "--prebuilt", "looker"],
"env": {
"LOOKER_BASE_URL": "https://looker.example.com",
"LOOKER_CLIENT_ID": "",
"LOOKER_CLIENT_SECRET": "",
"LOOKER_VERIFY_SSL": "true"
}
}
}
}
```
1. Start Gemini-CLI with the `gemini` command and use the command `/mcp` to see
the configured MCP tools.
{{% /tab %}}
{{% tab header="Claude code" lang="en" %}}
1. Install [Claude
Code](https://docs.anthropic.com/en/docs/agents-and-tools/claude-code/overview).
1. Create a `.mcp.json` file in your project root if it doesn't exist.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"looker-toolbox": {
"command": "./PATH/TO/toolbox",
"args": ["--stdio", "--prebuilt", "looker"],
"env": {
"LOOKER_BASE_URL": "https://looker.example.com",
"LOOKER_CLIENT_ID": "",
"LOOKER_CLIENT_SECRET": "",
"LOOKER_VERIFY_SSL": "true"
}
}
}
}
```
1. Restart Claude Code to apply the new configuration.
{{% /tab %}}
{{% tab header="Claude desktop" lang="en" %}}
1. Open [Claude desktop](https://claude.ai/download) and navigate to Settings.
1. Under the Developer tab, tap Edit Config to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"looker-toolbox": {
"command": "./PATH/TO/toolbox",
"args": ["--stdio", "--prebuilt", "looker"],
"env": {
"LOOKER_BASE_URL": "https://looker.example.com",
"LOOKER_CLIENT_ID": "",
"LOOKER_CLIENT_SECRET": "",
"LOOKER_VERIFY_SSL": "true"
}
}
}
}
```
1. Restart Claude desktop.
1. From the new chat screen, you should see a hammer (MCP) icon appear with the
new MCP server available.
{{% /tab %}}
{{% tab header="Cline" lang="en" %}}
1. Open the [Cline](https://github.com/cline/cline) extension in VS Code and tap
the **MCP Servers** icon.
1. Tap Configure MCP Servers to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"looker-toolbox": {
"command": "./PATH/TO/toolbox",
"args": ["--stdio", "--prebuilt", "looker"],
"env": {
"LOOKER_BASE_URL": "https://looker.example.com",
"LOOKER_CLIENT_ID": "",
"LOOKER_CLIENT_SECRET": "",
"LOOKER_VERIFY_SSL": "true"
}
}
}
}
```
1. You should see a green active status after the server is successfully
connected.
{{% /tab %}}
{{% tab header="Cursor" lang="en" %}}
1. Create a `.cursor` directory in your project root if it doesn't exist.
1. Create a `.cursor/mcp.json` file if it doesn't exist and open it.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"looker-toolbox": {
"command": "./PATH/TO/toolbox",
"args": ["--stdio", "--prebuilt", "looker"],
"env": {
"LOOKER_BASE_URL": "https://looker.example.com",
"LOOKER_CLIENT_ID": "",
"LOOKER_CLIENT_SECRET": "",
"LOOKER_VERIFY_SSL": "true"
}
}
}
}
```
1. Open [Cursor](https://www.cursor.com/) and navigate to **Settings > Cursor
Settings > MCP**. You should see a green active status after the server is
successfully connected.
{{% /tab %}}
{{% tab header="Visual Studio Code (Copilot)" lang="en" %}}
1. Open [VS Code](https://code.visualstudio.com/docs/copilot/overview) and
create a `.vscode` directory in your project root if it doesn't exist.
1. Create a `.vscode/mcp.json` file if it doesn't exist and open it.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"servers": {
"looker-toolbox": {
"command": "./PATH/TO/toolbox",
"args": ["--stdio", "--prebuilt", "looker"],
"env": {
"LOOKER_BASE_URL": "https://looker.example.com",
"LOOKER_CLIENT_ID": "",
"LOOKER_CLIENT_SECRET": "",
"LOOKER_VERIFY_SSL": "true"
}
}
}
}
```
{{% /tab %}}
{{% tab header="Windsurf" lang="en" %}}
1. Open [Windsurf](https://docs.codeium.com/windsurf) and navigate to the
Cascade assistant.
1. Tap on the hammer (MCP) icon, then Configure to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"looker-toolbox": {
"command": "./PATH/TO/toolbox",
"args": ["--stdio", "--prebuilt", "looker"],
"env": {
"LOOKER_BASE_URL": "https://looker.example.com",
"LOOKER_CLIENT_ID": "",
"LOOKER_CLIENT_SECRET": "",
"LOOKER_VERIFY_SSL": "true"
}
}
}
}
```
{{% /tab %}}
{{< /tabpane >}}
## Use Tools
Your AI tool is now connected to Looker using MCP. Try asking your AI
assistant to list models, explores, dimensions, and measures. Run a
query, retrieve the SQL for a query, and run a saved Look.
The full tool list is available in the [Prebuilt Tools Reference](../../configuration/prebuilt-configs/_index.md#looker).
The following tools are available to the LLM:
### Looker Model and Query Tools
These tools are used to get information about a Looker model
and execute queries against that model.
1. **get_models**: List the LookML models in Looker
1. **get_explores**: List the explores in a given model
1. **get_dimensions**: List the dimensions in a given explore
1. **get_measures**: List the measures in a given explore
1. **get_filters**: List the filters in a given explore
1. **get_parameters**: List the parameters in a given explore
1. **query**: Run a query and return the data
1. **query_sql**: Return the SQL generated by Looker for a query
1. **query_url**: Return a link to the query in Looker for further exploration
### Looker Content Tools
These tools get saved content (Looks and Dashboards) from a Looker
instance and create new saved content.
1. **get_looks**: Return the saved Looks that match a title or description
1. **run_look**: Run a saved Look and return the data
1. **make_look**: Create a saved Look in Looker and return the URL
1. **get_dashboards**: Return the saved dashboards that match a title or
description
1. **run_dashboard**: Run the queries associated with a dashboard and return the
data
1. **make_dashboard**: Create a saved dashboard in Looker and return the URL
1. **add_dashboard_element**: Add a tile to a dashboard
1. **add_dashboard_filter**: Add a filter to a dashboard
1. **generate_embed_url**: Generate an embed URL for content
### Looker Instance Health Tools
These tools offer the same health check algorithms that the popular
CLI [Henry](https://github.com/looker-open-source/henry) offers.
1. **health_pulse**: Check the health of a Looker instance
1. **health_analyze**: Analyze the usage of a Looker object
1. **health_vacuum**: Find LookML elements that might be unused
### LookML Authoring Tools
These tools enable the caller to write and modify LookML files, as well as
retrieve the database schema needed to write LookML effectively.
1. **dev_mode**: Activate dev mode.
1. **get_projects**: Get the list of LookML projects
1. **get_project_files**: Get the list of files in a project
1. **get_project_file**: Get the contents of a file in a project
1. **create_project_file**: Create a file in a project
1. **update_project_file**: Update the contents of a file in a project
1. **delete_project_file**: Delete a file in a project
1. **get_connections**: Get the list of connections
1. **get_connection_schemas**: Get the list of schemas for a connection
1. **get_connection_databases**: Get the list of databases for a connection
1. **get_connection_tables**: Get the list of tables for a connection
1. **get_connection_table_columns**: Get the list of columns for a table in a
connection
{{< notice note >}}
Prebuilt tools are pre-1.0, so expect some tool changes between versions. LLMs
will adapt to the tools available, so this shouldn't affect most users.
{{< /notice >}}
========================================================================
## MySQL using MCP
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Connect to Toolbox > IDEs > MySQL using MCP
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/connect-to/ides/mysql_mcp/
**Description:** Connect your IDE to MySQL using Toolbox.
[Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction) is
an open protocol for connecting Large Language Models (LLMs) to data sources
like MySQL. This guide covers how to use [MCP Toolbox for Databases][toolbox] to
expose your developer assistant tools to a MySQL instance:
* [Cursor][cursor]
* [Windsurf][windsurf] (Codeium)
* [Visual Studio Code][vscode] (Copilot)
* [Cline][cline] (VS Code extension)
* [Claude desktop][claudedesktop]
* [Claude code][claudecode]
* [Gemini CLI][geminicli]
* [Gemini Code Assist][geminicodeassist]
[toolbox]: https://github.com/googleapis/genai-toolbox
[cursor]: #configure-your-mcp-client
[windsurf]: #configure-your-mcp-client
[vscode]: #configure-your-mcp-client
[cline]: #configure-your-mcp-client
[claudedesktop]: #configure-your-mcp-client
[claudecode]: #configure-your-mcp-client
[geminicli]: #configure-your-mcp-client
[geminicodeassist]: #configure-your-mcp-client
## Set up the database
1. [Create or select a MySQL instance.](https://dev.mysql.com/downloads/installer/)
## Install MCP Toolbox
1. Download the latest version of Toolbox as a binary. Select the [correct
binary](https://github.com/googleapis/genai-toolbox/releases) corresponding
to your OS and CPU architecture. Toolbox version v0.10.0 or later is
required:
{{< tabpane persist=header >}}
{{< tab header="linux/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.28.0/linux/amd64/toolbox
{{< /tab >}}
{{< tab header="darwin/arm64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.28.0/darwin/arm64/toolbox
{{< /tab >}}
{{< tab header="darwin/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.28.0/darwin/amd64/toolbox
{{< /tab >}}
{{< tab header="windows/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.28.0/windows/amd64/toolbox.exe
{{< /tab >}}
{{< /tabpane >}}
1. Make the binary executable:
```bash
chmod +x toolbox
```
1. Verify the installation:
```bash
./toolbox --version
```
## Configure your MCP Client
{{< tabpane text=true >}}
{{% tab header="Claude code" lang="en" %}}
1. Install [Claude
Code](https://docs.anthropic.com/en/docs/agents-and-tools/claude-code/overview).
1. Create a `.mcp.json` file in your project root if it doesn't exist.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"mysql": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt", "mysql", "--stdio"],
"env": {
"MYSQL_HOST": "",
"MYSQL_PORT": "",
"MYSQL_DATABASE": "",
"MYSQL_USER": "",
"MYSQL_PASSWORD": ""
}
}
}
}
```
1. Restart Claude code to apply the new configuration.
{{% /tab %}}
{{% tab header="Claude desktop" lang="en" %}}
1. Open [Claude desktop](https://claude.ai/download) and navigate to Settings.
1. Under the Developer tab, tap Edit Config to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"mysql": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt", "mysql", "--stdio"],
"env": {
"MYSQL_HOST": "",
"MYSQL_PORT": "",
"MYSQL_DATABASE": "",
"MYSQL_USER": "",
"MYSQL_PASSWORD": ""
}
}
}
}
```
1. Restart Claude desktop.
1. From the new chat screen, you should see a hammer (MCP) icon appear with the
new MCP server available.
{{% /tab %}}
{{% tab header="Cline" lang="en" %}}
1. Open the [Cline](https://github.com/cline/cline) extension in VS Code and
tap the **MCP Servers** icon.
1. Tap Configure MCP Servers to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"mysql": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt", "mysql", "--stdio"],
"env": {
"MYSQL_HOST": "",
"MYSQL_PORT": "",
"MYSQL_DATABASE": "",
"MYSQL_USER": "",
"MYSQL_PASSWORD": ""
}
}
}
}
```
1. You should see a green active status after the server is successfully
connected.
{{% /tab %}}
{{% tab header="Cursor" lang="en" %}}
1. Create a `.cursor` directory in your project root if it doesn't exist.
1. Create a `.cursor/mcp.json` file if it doesn't exist and open it.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"mysql": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt", "mysql", "--stdio"],
"env": {
"MYSQL_HOST": "",
"MYSQL_PORT": "",
"MYSQL_DATABASE": "",
"MYSQL_USER": "",
"MYSQL_PASSWORD": ""
}
}
}
}
```
1. Open [Cursor](https://www.cursor.com/) and navigate to **Settings > Cursor
Settings > MCP**. You should see a green active status after the server is
successfully connected.
{{% /tab %}}
{{% tab header="Visual Studio Code (Copilot)" lang="en" %}}
1. Open [VS Code](https://code.visualstudio.com/docs/copilot/overview) and
create a `.vscode` directory in your project root if it doesn't exist.
1. Create a `.vscode/mcp.json` file if it doesn't exist and open it.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"servers": {
"mysql": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","mysql","--stdio"],
"env": {
"MYSQL_HOST": "",
"MYSQL_PORT": "",
"MYSQL_DATABASE": "",
"MYSQL_USER": "",
"MYSQL_PASSWORD": ""
}
}
}
}
```
{{% /tab %}}
{{% tab header="Windsurf" lang="en" %}}
1. Open [Windsurf](https://docs.codeium.com/windsurf) and navigate to the
Cascade assistant.
1. Tap on the hammer (MCP) icon, then Configure to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"mysql": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","mysql","--stdio"],
"env": {
"MYSQL_HOST": "",
"MYSQL_PORT": "",
"MYSQL_DATABASE": "",
"MYSQL_USER": "",
"MYSQL_PASSWORD": ""
}
}
}
}
```
{{% /tab %}}
{{% tab header="Gemini CLI" lang="en" %}}
1. Install the [Gemini
CLI](https://github.com/google-gemini/gemini-cli?tab=readme-ov-file#quickstart).
1. In your working directory, create a folder named `.gemini`. Within it,
create a `settings.json` file.
1. Add the following configuration, replace the environment variables with your
values, and then save:
```json
{
"mcpServers": {
"mysql": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","mysql","--stdio"],
"env": {
"MYSQL_HOST": "",
"MYSQL_PORT": "",
"MYSQL_DATABASE": "",
"MYSQL_USER": "",
"MYSQL_PASSWORD": ""
}
}
}
}
```
{{% /tab %}}
{{% tab header="Gemini Code Assist" lang="en" %}}
1. Install the [Gemini Code
Assist](https://marketplace.visualstudio.com/items?itemName=Google.geminicodeassist)
extension in Visual Studio Code.
1. Enable Agent Mode in Gemini Code Assist chat.
1. In your working directory, create a folder named `.gemini`. Within it,
create a `settings.json` file.
1. Add the following configuration, replace the environment variables with your
values, and then save:
```json
{
"mcpServers": {
"mysql": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","mysql","--stdio"],
"env": {
"MYSQL_HOST": "",
"MYSQL_PORT": "",
"MYSQL_DATABASE": "",
"MYSQL_USER": "",
"MYSQL_PASSWORD": ""
}
}
}
}
```
{{% /tab %}}
{{< /tabpane >}}
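For reference, a completed configuration for a MySQL server running locally on the default port might look like the following. The host, database name, and credentials are illustrative placeholders; substitute your own values. Note that client config files differ slightly in their top-level key (`mcpServers` vs `servers`), as shown in the tabs above:
```json
{
"mcpServers": {
"mysql": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt", "mysql", "--stdio"],
"env": {
"MYSQL_HOST": "127.0.0.1",
"MYSQL_PORT": "3306",
"MYSQL_DATABASE": "my_database",
"MYSQL_USER": "my_user",
"MYSQL_PASSWORD": "my-password"
}
}
}
}
```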
## Use Tools
Your AI tool is now connected to MySQL using MCP. Try asking your AI assistant
to list tables, create a table, or define and execute other SQL statements.
The following tools are available to the LLM:
1. **list_tables**: lists tables and descriptions
1. **execute_sql**: executes any SQL statement
{{< notice note >}}
Prebuilt tools are pre-1.0, so expect some tool changes between versions. LLMs
will adapt to the tools available, so this shouldn't affect most users.
{{< /notice >}}
========================================================================
## Neo4j using MCP
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Connect to Toolbox > IDEs > Neo4j using MCP
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/connect-to/ides/neo4j_mcp/
**Description:** Connect your IDE to Neo4j using Toolbox.
[Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction) is
an open protocol for connecting Large Language Models (LLMs) to data sources
like Neo4j. This guide covers how to use [MCP Toolbox for Databases][toolbox] to
connect the following developer assistants to a Neo4j instance:
* [Cursor][cursor]
* [Windsurf][windsurf] (Codeium)
* [Visual Studio Code][vscode] (Copilot)
* [Cline][cline] (VS Code extension)
* [Claude desktop][claudedesktop]
* [Claude code][claudecode]
* [Gemini CLI][geminicli]
* [Gemini Code Assist][geminicodeassist]
[toolbox]: https://github.com/googleapis/genai-toolbox
[cursor]: #configure-your-mcp-client
[windsurf]: #configure-your-mcp-client
[vscode]: #configure-your-mcp-client
[cline]: #configure-your-mcp-client
[claudedesktop]: #configure-your-mcp-client
[claudecode]: #configure-your-mcp-client
[geminicli]: #configure-your-mcp-client
[geminicodeassist]: #configure-your-mcp-client
## Set up the database
1. [Create or select a Neo4j
instance.](https://neo4j.com/cloud/platform/aura-graph-database/)
## Install MCP Toolbox
1. Download the latest version of Toolbox as a binary. Select the [correct
binary](https://github.com/googleapis/genai-toolbox/releases) corresponding
to your OS and CPU architecture. You are required to use Toolbox version
v0.15.0+:
{{< tabpane persist=header >}}
{{< tab header="linux/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.28.0/linux/amd64/toolbox
{{< /tab >}}
{{< tab header="darwin/arm64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.28.0/darwin/arm64/toolbox
{{< /tab >}}
{{< tab header="darwin/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.28.0/darwin/amd64/toolbox
{{< /tab >}}
{{< tab header="windows/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.28.0/windows/amd64/toolbox.exe
{{< /tab >}}
{{< /tabpane >}}
1. Make the binary executable:
```bash
chmod +x toolbox
```
1. Verify the installation:
```bash
./toolbox --version
```
## Configure your MCP Client
{{< tabpane text=true >}}
{{% tab header="Claude code" lang="en" %}}
1. Install [Claude
Code](https://docs.anthropic.com/en/docs/agents-and-tools/claude-code/overview).
1. Create a `.mcp.json` file in your project root if it doesn't exist.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"neo4j": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","neo4j","--stdio"],
"env": {
"NEO4J_URI": "",
"NEO4J_DATABASE": "",
"NEO4J_USERNAME": "",
"NEO4J_PASSWORD": ""
}
}
}
}
```
1. Restart Claude code to apply the new configuration.
{{% /tab %}}
{{% tab header="Claude desktop" lang="en" %}}
1. Open [Claude desktop](https://claude.ai/download) and navigate to Settings.
1. Under the Developer tab, tap Edit Config to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"neo4j": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","neo4j","--stdio"],
"env": {
"NEO4J_URI": "",
"NEO4J_DATABASE": "",
"NEO4J_USERNAME": "",
"NEO4J_PASSWORD": ""
}
}
}
}
```
1. Restart Claude desktop.
1. From the new chat screen, you should see a hammer (MCP) icon appear with the
new MCP server available.
{{% /tab %}}
{{% tab header="Cline" lang="en" %}}
1. Open the [Cline](https://github.com/cline/cline) extension in VS Code and
tap the **MCP Servers** icon.
1. Tap Configure MCP Servers to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"neo4j": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","neo4j","--stdio"],
"env": {
"NEO4J_URI": "",
"NEO4J_DATABASE": "",
"NEO4J_USERNAME": "",
"NEO4J_PASSWORD": ""
}
}
}
}
```
1. You should see a green active status after the server is successfully
connected.
{{% /tab %}}
{{% tab header="Cursor" lang="en" %}}
1. Create a `.cursor` directory in your project root if it doesn't exist.
1. Create a `.cursor/mcp.json` file if it doesn't exist and open it.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"neo4j": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","neo4j","--stdio"],
"env": {
"NEO4J_URI": "",
"NEO4J_DATABASE": "",
"NEO4J_USERNAME": "",
"NEO4J_PASSWORD": ""
}
}
}
}
```
1. Open [Cursor](https://www.cursor.com/) and navigate to **Settings > Cursor
Settings > MCP**. You should see a green active status after the server is
successfully connected.
{{% /tab %}}
{{% tab header="Visual Studio Code (Copilot)" lang="en" %}}
1. Open [VS Code](https://code.visualstudio.com/docs/copilot/overview) and
create a `.vscode` directory in your project root if it doesn't exist.
1. Create a `.vscode/mcp.json` file if it doesn't exist and open it.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcp" : {
"servers": {
"neo4j": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","neo4j","--stdio"],
"env": {
"NEO4J_URI": "",
"NEO4J_DATABASE": "",
"NEO4J_USERNAME": "",
"NEO4J_PASSWORD": ""
}
}
}
}
}
```
{{% /tab %}}
{{% tab header="Windsurf" lang="en" %}}
1. Open [Windsurf](https://docs.codeium.com/windsurf) and navigate to the
Cascade assistant.
1. Tap on the hammer (MCP) icon, then Configure to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"neo4j": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","neo4j","--stdio"],
"env": {
"NEO4J_URI": "",
"NEO4J_DATABASE": "",
"NEO4J_USERNAME": "",
"NEO4J_PASSWORD": ""
}
}
}
}
```
{{% /tab %}}
{{% tab header="Gemini CLI" lang="en" %}}
1. Install the [Gemini
CLI](https://github.com/google-gemini/gemini-cli?tab=readme-ov-file#quickstart).
1. In your working directory, create a folder named `.gemini`. Within it,
create a `settings.json` file.
1. Add the following configuration, replace the environment variables with your
values, and then save:
```json
{
"mcpServers": {
"neo4j": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","neo4j","--stdio"],
"env": {
"NEO4J_URI": "",
"NEO4J_DATABASE": "",
"NEO4J_USERNAME": "",
"NEO4J_PASSWORD": ""
}
}
}
}
```
{{% /tab %}}
{{% tab header="Gemini Code Assist" lang="en" %}}
1. Install the [Gemini Code
Assist](https://marketplace.visualstudio.com/items?itemName=Google.geminicodeassist)
extension in Visual Studio Code.
1. Enable Agent Mode in Gemini Code Assist chat.
1. In your working directory, create a folder named `.gemini`. Within it,
create a `settings.json` file.
1. Add the following configuration, replace the environment variables with your
values, and then save:
```json
{
"mcpServers": {
"neo4j": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","neo4j","--stdio"],
"env": {
"NEO4J_URI": "",
"NEO4J_DATABASE": "",
"NEO4J_USERNAME": "",
"NEO4J_PASSWORD": ""
}
}
}
}
```
{{% /tab %}}
{{< /tabpane >}}
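For reference, a completed configuration for a local Neo4j instance might look like the following. The URI uses Neo4j's default Bolt port (7687), and `neo4j` is the default database and username on a fresh install; the password is an illustrative placeholder. Substitute your own values, and note that client config files differ slightly in their top-level key (`mcpServers` vs `servers`), as shown in the tabs above:
```json
{
"mcpServers": {
"neo4j": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt", "neo4j", "--stdio"],
"env": {
"NEO4J_URI": "neo4j://localhost:7687",
"NEO4J_DATABASE": "neo4j",
"NEO4J_USERNAME": "neo4j",
"NEO4J_PASSWORD": "my-password"
}
}
}
}
```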
## Use Tools
Your AI tool is now connected to Neo4j using MCP. Try asking your AI assistant
to get the graph schema or execute Cypher statements.
The following tools are available to the LLM:
1. **get_schema**: extracts the complete database schema, including details
about node labels, relationships, properties, constraints, and indexes.
1. **execute_cypher**: executes any Cypher statement.
{{< notice note >}}
Prebuilt tools are pre-1.0, so expect some tool changes between versions. LLMs
will adapt to the tools available, so this shouldn't affect most users.
{{< /notice >}}
========================================================================
## PostgreSQL using MCP
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Connect to Toolbox > IDEs > PostgreSQL using MCP
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/connect-to/ides/postgres_mcp/
**Description:** Connect your IDE to PostgreSQL using Toolbox.
[Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction) is
an open protocol for connecting Large Language Models (LLMs) to data sources
like Postgres. This guide covers how to use [MCP Toolbox for Databases][toolbox]
to connect the following developer assistants to a Postgres instance:
* [Cursor][cursor]
* [Windsurf][windsurf] (Codeium)
* [Visual Studio Code][vscode] (Copilot)
* [Cline][cline] (VS Code extension)
* [Claude desktop][claudedesktop]
* [Claude code][claudecode]
* [Gemini CLI][geminicli]
* [Gemini Code Assist][geminicodeassist]
[toolbox]: https://github.com/googleapis/genai-toolbox
[cursor]: #configure-your-mcp-client
[windsurf]: #configure-your-mcp-client
[vscode]: #configure-your-mcp-client
[cline]: #configure-your-mcp-client
[claudedesktop]: #configure-your-mcp-client
[claudecode]: #configure-your-mcp-client
[geminicli]: #configure-your-mcp-client
[geminicodeassist]: #configure-your-mcp-client
{{< notice tip >}}
This guide can be used with [AlloyDB
Omni](https://cloud.google.com/alloydb/omni/docs/overview).
{{< /notice >}}
## Set up the database
1. Create or select a PostgreSQL instance.
* [Install PostgreSQL locally](https://www.postgresql.org/download/)
* [Install AlloyDB Omni](https://cloud.google.com/alloydb/omni/docs/quickstart)
1. Create or reuse [a database
user](https://docs.cloud.google.com/alloydb/omni/containers/current/docs/database-users/manage-users)
and have the username and password ready.
## Install MCP Toolbox
1. Download the latest version of Toolbox as a binary. Select the [correct
binary](https://github.com/googleapis/genai-toolbox/releases) corresponding
to your OS and CPU architecture. You are required to use Toolbox version
v0.6.0+:
{{< tabpane persist=header >}}
{{< tab header="linux/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.28.0/linux/amd64/toolbox
{{< /tab >}}
{{< tab header="darwin/arm64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.28.0/darwin/arm64/toolbox
{{< /tab >}}
{{< tab header="darwin/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.28.0/darwin/amd64/toolbox
{{< /tab >}}
{{< tab header="windows/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.28.0/windows/amd64/toolbox.exe
{{< /tab >}}
{{< /tabpane >}}
1. Make the binary executable:
```bash
chmod +x toolbox
```
1. Verify the installation:
```bash
./toolbox --version
```
## Configure your MCP Client
{{< tabpane text=true >}}
{{% tab header="Claude code" lang="en" %}}
1. Install [Claude
Code](https://docs.anthropic.com/en/docs/agents-and-tools/claude-code/overview).
1. Create a `.mcp.json` file in your project root if it doesn't exist.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"postgres": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","postgres","--stdio"],
"env": {
"POSTGRES_HOST": "",
"POSTGRES_PORT": "",
"POSTGRES_DATABASE": "",
"POSTGRES_USER": "",
"POSTGRES_PASSWORD": ""
}
}
}
}
```
1. Restart Claude code to apply the new configuration.
{{% /tab %}}
{{% tab header="Claude desktop" lang="en" %}}
1. Open [Claude desktop](https://claude.ai/download) and navigate to Settings.
1. Under the Developer tab, tap Edit Config to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"postgres": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","postgres","--stdio"],
"env": {
"POSTGRES_HOST": "",
"POSTGRES_PORT": "",
"POSTGRES_DATABASE": "",
"POSTGRES_USER": "",
"POSTGRES_PASSWORD": ""
}
}
}
}
```
1. Restart Claude desktop.
1. From the new chat screen, you should see a hammer (MCP) icon appear with the
new MCP server available.
{{% /tab %}}
{{% tab header="Cline" lang="en" %}}
1. Open the [Cline](https://github.com/cline/cline) extension in VS Code and tap
the **MCP Servers** icon.
1. Tap Configure MCP Servers to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"postgres": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","postgres","--stdio"],
"env": {
"POSTGRES_HOST": "",
"POSTGRES_PORT": "",
"POSTGRES_DATABASE": "",
"POSTGRES_USER": "",
"POSTGRES_PASSWORD": ""
}
}
}
}
```
1. You should see a green active status after the server is successfully
connected.
{{% /tab %}}
{{% tab header="Cursor" lang="en" %}}
1. Create a `.cursor` directory in your project root if it doesn't exist.
1. Create a `.cursor/mcp.json` file if it doesn't exist and open it.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"postgres": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","postgres","--stdio"],
"env": {
"POSTGRES_HOST": "",
"POSTGRES_PORT": "",
"POSTGRES_DATABASE": "",
"POSTGRES_USER": "",
"POSTGRES_PASSWORD": ""
}
}
}
}
```
1. Open [Cursor](https://www.cursor.com/) and navigate to **Settings > Cursor
Settings > MCP**. You should see a green active status after the server is
successfully connected.
{{% /tab %}}
{{% tab header="Visual Studio Code (Copilot)" lang="en" %}}
1. Open [VS Code](https://code.visualstudio.com/docs/copilot/overview) and
create a `.vscode` directory in your project root if it doesn't exist.
1. Create a `.vscode/mcp.json` file if it doesn't exist and open it.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"servers": {
"postgres": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","postgres","--stdio"],
"env": {
"POSTGRES_HOST": "",
"POSTGRES_PORT": "",
"POSTGRES_DATABASE": "",
"POSTGRES_USER": "",
"POSTGRES_PASSWORD": ""
}
}
}
}
```
{{% /tab %}}
{{% tab header="Windsurf" lang="en" %}}
1. Open [Windsurf](https://docs.codeium.com/windsurf) and navigate to the
Cascade assistant.
1. Tap on the hammer (MCP) icon, then Configure to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"postgres": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","postgres","--stdio"],
"env": {
"POSTGRES_HOST": "",
"POSTGRES_PORT": "",
"POSTGRES_DATABASE": "",
"POSTGRES_USER": "",
"POSTGRES_PASSWORD": ""
}
}
}
}
```
{{% /tab %}}
{{% tab header="Gemini CLI" lang="en" %}}
1. Install the [Gemini CLI](https://github.com/google-gemini/gemini-cli?tab=readme-ov-file#quickstart).
1. In your working directory, create a folder named `.gemini`. Within it, create a `settings.json` file.
1. Add the following configuration, replace the environment variables with your values, and then save:
```json
{
"mcpServers": {
"postgres": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","postgres","--stdio"],
"env": {
"POSTGRES_HOST": "",
"POSTGRES_PORT": "",
"POSTGRES_DATABASE": "",
"POSTGRES_USER": "",
"POSTGRES_PASSWORD": ""
}
}
}
}
```
{{% /tab %}}
{{% tab header="Gemini Code Assist" lang="en" %}}
1. Install the [Gemini Code Assist](https://marketplace.visualstudio.com/items?itemName=Google.geminicodeassist) extension in Visual Studio Code.
1. Enable Agent Mode in Gemini Code Assist chat.
1. In your working directory, create a folder named `.gemini`. Within it, create a `settings.json` file.
1. Add the following configuration, replace the environment variables with your values, and then save:
```json
{
"mcpServers": {
"postgres": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","postgres","--stdio"],
"env": {
"POSTGRES_HOST": "",
"POSTGRES_PORT": "",
"POSTGRES_DATABASE": "",
"POSTGRES_USER": "",
"POSTGRES_PASSWORD": ""
}
}
}
}
```
{{% /tab %}}
{{< /tabpane >}}
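For reference, a completed configuration for a PostgreSQL server running locally on the default port (5432) might look like the following. The database name and credentials are illustrative placeholders; substitute your own values, and note that client config files differ slightly in their top-level key (`mcpServers` vs `servers`), as shown in the tabs above:
```json
{
"mcpServers": {
"postgres": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt", "postgres", "--stdio"],
"env": {
"POSTGRES_HOST": "127.0.0.1",
"POSTGRES_PORT": "5432",
"POSTGRES_DATABASE": "postgres",
"POSTGRES_USER": "my_user",
"POSTGRES_PASSWORD": "my-password"
}
}
}
}
```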
## Use Tools
Your AI tool is now connected to Postgres using MCP. Try asking your AI
assistant to list tables, create a table, or define and execute other SQL
statements.
The following tools are available to the LLM:
1. **list_tables**: lists tables and descriptions
1. **execute_sql**: executes any SQL statement
{{< notice note >}}
Prebuilt tools are pre-1.0, so expect some tool changes between versions. LLMs
will adapt to the tools available, so this shouldn't affect most users.
{{< /notice >}}
========================================================================
## Spanner using MCP
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Connect to Toolbox > IDEs > Spanner using MCP
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/connect-to/ides/spanner_mcp/
**Description:** Connect your IDE to Spanner using Toolbox.
========================================================================
## SQL Server using MCP
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Connect to Toolbox > IDEs > SQL Server using MCP
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/connect-to/ides/mssql_mcp/
**Description:** Connect your IDE to SQL Server using Toolbox.
[Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction) is
an open protocol for connecting Large Language Models (LLMs) to data sources
like SQL Server. This guide covers how to use [MCP Toolbox for
Databases][toolbox] to connect the following developer assistants to a SQL
Server instance:
* [Cursor][cursor]
* [Windsurf][windsurf] (Codeium)
* [Visual Studio Code][vscode] (Copilot)
* [Cline][cline] (VS Code extension)
* [Claude desktop][claudedesktop]
* [Claude code][claudecode]
* [Gemini CLI][geminicli]
* [Gemini Code Assist][geminicodeassist]
[toolbox]: https://github.com/googleapis/genai-toolbox
[cursor]: #configure-your-mcp-client
[windsurf]: #configure-your-mcp-client
[vscode]: #configure-your-mcp-client
[cline]: #configure-your-mcp-client
[claudedesktop]: #configure-your-mcp-client
[claudecode]: #configure-your-mcp-client
[geminicli]: #configure-your-mcp-client
[geminicodeassist]: #configure-your-mcp-client
## Set up the database
1. [Create or select a SQL Server
instance.](https://www.microsoft.com/en-us/sql-server/sql-server-downloads)
## Install MCP Toolbox
1. Download the latest version of Toolbox as a binary. Select the [correct
binary](https://github.com/googleapis/genai-toolbox/releases) corresponding
to your OS and CPU architecture. You are required to use Toolbox version
v0.10.0+:
{{< tabpane persist=header >}}
{{< tab header="linux/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.28.0/linux/amd64/toolbox
{{< /tab >}}
{{< tab header="darwin/arm64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.28.0/darwin/arm64/toolbox
{{< /tab >}}
{{< tab header="darwin/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.28.0/darwin/amd64/toolbox
{{< /tab >}}
{{< tab header="windows/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.28.0/windows/amd64/toolbox.exe
{{< /tab >}}
{{< /tabpane >}}
1. Make the binary executable:
```bash
chmod +x toolbox
```
1. Verify the installation:
```bash
./toolbox --version
```
## Configure your MCP Client
{{< tabpane text=true >}}
{{% tab header="Claude code" lang="en" %}}
1. Install [Claude
Code](https://docs.anthropic.com/en/docs/agents-and-tools/claude-code/overview).
1. Create a `.mcp.json` file in your project root if it doesn't exist.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"sqlserver": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","mssql","--stdio"],
"env": {
"MSSQL_HOST": "",
"MSSQL_PORT": "",
"MSSQL_DATABASE": "",
"MSSQL_USER": "",
"MSSQL_PASSWORD": ""
}
}
}
}
```
1. Restart Claude code to apply the new configuration.
{{% /tab %}}
{{% tab header="Claude desktop" lang="en" %}}
1. Open [Claude desktop](https://claude.ai/download) and navigate to Settings.
1. Under the Developer tab, tap Edit Config to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"sqlserver": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","mssql","--stdio"],
"env": {
"MSSQL_HOST": "",
"MSSQL_PORT": "",
"MSSQL_DATABASE": "",
"MSSQL_USER": "",
"MSSQL_PASSWORD": ""
}
}
}
}
```
1. Restart Claude desktop.
1. From the new chat screen, you should see a hammer (MCP) icon appear with the
new MCP server available.
{{% /tab %}}
{{% tab header="Cline" lang="en" %}}
1. Open the [Cline](https://github.com/cline/cline) extension in VS Code and
tap the **MCP Servers** icon.
1. Tap Configure MCP Servers to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"sqlserver": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","mssql","--stdio"],
"env": {
"MSSQL_HOST": "",
"MSSQL_PORT": "",
"MSSQL_DATABASE": "",
"MSSQL_USER": "",
"MSSQL_PASSWORD": ""
}
}
}
}
```
1. You should see a green active status after the server is successfully
connected.
{{% /tab %}}
{{% tab header="Cursor" lang="en" %}}
1. Create a `.cursor` directory in your project root if it doesn't exist.
1. Create a `.cursor/mcp.json` file if it doesn't exist and open it.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"sqlserver": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","mssql","--stdio"],
"env": {
"MSSQL_HOST": "",
"MSSQL_PORT": "",
"MSSQL_DATABASE": "",
"MSSQL_USER": "",
"MSSQL_PASSWORD": ""
}
}
}
}
```
1. Open [Cursor](https://www.cursor.com/) and navigate to **Settings > Cursor
Settings > MCP**. You should see a green active status after the server is
successfully connected.
{{% /tab %}}
{{% tab header="Visual Studio Code (Copilot)" lang="en" %}}
1. Open [VS Code](https://code.visualstudio.com/docs/copilot/overview) and
create a `.vscode` directory in your project root if it doesn't exist.
1. Create a `.vscode/mcp.json` file if it doesn't exist and open it.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"servers": {
"mssql": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","mssql","--stdio"],
"env": {
"MSSQL_HOST": "",
"MSSQL_PORT": "",
"MSSQL_DATABASE": "",
"MSSQL_USER": "",
"MSSQL_PASSWORD": ""
}
}
}
}
```
{{% /tab %}}
{{% tab header="Windsurf" lang="en" %}}
1. Open [Windsurf](https://docs.codeium.com/windsurf) and navigate to the
Cascade assistant.
1. Tap on the hammer (MCP) icon, then Configure to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"sqlserver": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","mssql","--stdio"],
"env": {
"MSSQL_HOST": "",
"MSSQL_PORT": "",
"MSSQL_DATABASE": "",
"MSSQL_USER": "",
"MSSQL_PASSWORD": ""
}
}
}
}
```
{{% /tab %}}
{{% tab header="Gemini CLI" lang="en" %}}
1. Install the [Gemini
CLI](https://github.com/google-gemini/gemini-cli?tab=readme-ov-file#quickstart).
1. In your working directory, create a folder named `.gemini`. Within it,
create a `settings.json` file.
1. Add the following configuration, replace the environment variables with your
values, and then save:
```json
{
"mcpServers": {
"sqlserver": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","mssql","--stdio"],
"env": {
"MSSQL_HOST": "",
"MSSQL_PORT": "",
"MSSQL_DATABASE": "",
"MSSQL_USER": "",
"MSSQL_PASSWORD": ""
}
}
}
}
```
{{% /tab %}}
{{% tab header="Gemini Code Assist" lang="en" %}}
1. Install the [Gemini Code
Assist](https://marketplace.visualstudio.com/items?itemName=Google.geminicodeassist)
extension in Visual Studio Code.
1. Enable Agent Mode in Gemini Code Assist chat.
1. In your working directory, create a folder named `.gemini`. Within it,
create a `settings.json` file.
1. Add the following configuration, replace the environment variables with your
values, and then save:
```json
{
"mcpServers": {
"sqlserver": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","mssql","--stdio"],
"env": {
"MSSQL_HOST": "",
"MSSQL_PORT": "",
"MSSQL_DATABASE": "",
"MSSQL_USER": "",
"MSSQL_PASSWORD": ""
}
}
}
}
```
{{% /tab %}}
{{< /tabpane >}}
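For reference, a completed configuration for a SQL Server instance running locally on the default port (1433) might look like the following. The database name and credentials are illustrative placeholders; substitute your own values, and note that client config files differ slightly in their top-level key (`mcpServers` vs `servers`), as shown in the tabs above:
```json
{
"mcpServers": {
"sqlserver": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt", "mssql", "--stdio"],
"env": {
"MSSQL_HOST": "127.0.0.1",
"MSSQL_PORT": "1433",
"MSSQL_DATABASE": "my_database",
"MSSQL_USER": "my_user",
"MSSQL_PASSWORD": "my-password"
}
}
}
}
```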
## Use Tools
Your AI tool is now connected to SQL Server using MCP. Try asking your AI
assistant to list tables, create a table, or define and execute other SQL
statements.
The following tools are available to the LLM:
1. **list_tables**: lists tables and descriptions
1. **execute_sql**: executes any SQL statement
{{< notice note >}}
Prebuilt tools are pre-1.0, so expect some tool changes between versions. LLMs
will adapt to the tools available, so this shouldn't affect most users.
{{< /notice >}}
========================================================================
## SQLite using MCP
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Connect to Toolbox > IDEs > SQLite using MCP
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/connect-to/ides/sqlite_mcp/
**Description:** Connect your IDE to SQLite using Toolbox.
[Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction) is
an open protocol for connecting Large Language Models (LLMs) to data sources
like SQLite. This guide covers how to use [MCP Toolbox for Databases][toolbox]
to connect the following developer assistants to a SQLite instance:
* [Cursor][cursor]
* [Windsurf][windsurf] (Codeium)
* [Visual Studio Code][vscode] (Copilot)
* [Cline][cline] (VS Code extension)
* [Claude desktop][claudedesktop]
* [Claude code][claudecode]
* [Gemini CLI][geminicli]
* [Gemini Code Assist][geminicodeassist]
[toolbox]: https://github.com/googleapis/genai-toolbox
[cursor]: #configure-your-mcp-client
[windsurf]: #configure-your-mcp-client
[vscode]: #configure-your-mcp-client
[cline]: #configure-your-mcp-client
[claudedesktop]: #configure-your-mcp-client
[claudecode]: #configure-your-mcp-client
[geminicli]: #configure-your-mcp-client
[geminicodeassist]: #configure-your-mcp-client
## Set up the database
1. [Create or select a SQLite database file.](https://www.sqlite.org/download.html)
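If you don't already have a database file, you can create one with a short script. The sketch below uses Python's standard `sqlite3` module; the `sample.db` file name matches the configurations later in this guide, while the `hotels` table and its row are purely illustrative:

```python
import sqlite3

# Create (or open) the database file that the prebuilt SQLite tools will point at.
conn = sqlite3.connect("sample.db")

# Seed an illustrative table so the AI assistant has something to query.
conn.execute(
    "CREATE TABLE IF NOT EXISTS hotels (id INTEGER PRIMARY KEY, name TEXT)"
)
conn.execute("INSERT INTO hotels (name) VALUES (?)", ("Hotel Alpha",))
conn.commit()
conn.close()
```

Any SQLite file works here; set `SQLITE_DATABASE` in the client configuration below to its path.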
## Install MCP Toolbox
1. Download the latest version of Toolbox as a binary. Select the [correct
binary](https://github.com/googleapis/genai-toolbox/releases) corresponding
to your OS and CPU architecture. You are required to use Toolbox version
v0.10.0+:
{{< tabpane persist=header >}}
{{< tab header="linux/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.28.0/linux/amd64/toolbox
{{< /tab >}}
{{< tab header="darwin/arm64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.28.0/darwin/arm64/toolbox
{{< /tab >}}
{{< tab header="darwin/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.28.0/darwin/amd64/toolbox
{{< /tab >}}
{{< tab header="windows/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.28.0/windows/amd64/toolbox.exe
{{< /tab >}}
{{< /tabpane >}}
1. Make the binary executable:
```bash
chmod +x toolbox
```
1. Verify the installation:
```bash
./toolbox --version
```
## Configure your MCP Client
{{< tabpane text=true >}}
{{% tab header="Claude code" lang="en" %}}
1. Install [Claude
Code](https://docs.anthropic.com/en/docs/agents-and-tools/claude-code/overview).
1. Create a `.mcp.json` file in your project root if it doesn't exist.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"sqlite": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt", "sqlite", "--stdio"],
"env": {
"SQLITE_DATABASE": "./sample.db"
}
}
}
}
```
1. Restart Claude code to apply the new configuration.
{{% /tab %}}
{{% tab header="Claude desktop" lang="en" %}}
1. Open [Claude desktop](https://claude.ai/download) and navigate to Settings.
1. Under the Developer tab, tap Edit Config to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"sqlite": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt", "sqlite", "--stdio"],
"env": {
"SQLITE_DATABASE": "./sample.db"
}
}
}
}
```
1. Restart Claude desktop.
1. From the new chat screen, you should see a hammer (MCP) icon appear with the
new MCP server available.
{{% /tab %}}
{{% tab header="Cline" lang="en" %}}
1. Open the [Cline](https://github.com/cline/cline) extension in VS Code and
tap the **MCP Servers** icon.
1. Tap Configure MCP Servers to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"sqlite": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt", "sqlite", "--stdio"],
"env": {
"SQLITE_DATABASE": "./sample.db"
}
}
}
}
```
1. You should see a green active status after the server is successfully
connected.
{{% /tab %}}
{{% tab header="Cursor" lang="en" %}}
1. Create a `.cursor` directory in your project root if it doesn't exist.
1. Create a `.cursor/mcp.json` file if it doesn't exist and open it.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"sqlite": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt", "sqlite", "--stdio"],
"env": {
"SQLITE_DATABASE": "./sample.db"
}
}
}
}
```
1. Open [Cursor](https://www.cursor.com/) and navigate to **Settings > Cursor
Settings > MCP**. You should see a green active status after the server is
successfully connected.
{{% /tab %}}
{{% tab header="Visual Studio Code (Copilot)" lang="en" %}}
1. Open [VS Code](https://code.visualstudio.com/docs/copilot/overview) and
create a `.vscode` directory in your project root if it doesn't exist.
1. Create a `.vscode/mcp.json` file if it doesn't exist and open it.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"servers": {
"sqlite": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","sqlite","--stdio"],
"env": {
"SQLITE_DATABASE": "./sample.db"
}
}
}
}
```
{{% /tab %}}
{{% tab header="Windsurf" lang="en" %}}
1. Open [Windsurf](https://docs.codeium.com/windsurf) and navigate to the
Cascade assistant.
1. Tap on the hammer (MCP) icon, then Configure to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
"mcpServers": {
"sqlite": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","sqlite","--stdio"],
"env": {
"SQLITE_DATABASE": "./sample.db"
}
}
}
}
```
{{% /tab %}}
{{% tab header="Gemini CLI" lang="en" %}}
1. Install the [Gemini
CLI](https://github.com/google-gemini/gemini-cli?tab=readme-ov-file#quickstart).
1. In your working directory, create a folder named `.gemini`. Within it,
create a `settings.json` file.
1. Add the following configuration, replace the environment variables with your
values, and then save:
```json
{
"mcpServers": {
"sqlite": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","sqlite","--stdio"],
"env": {
"SQLITE_DATABASE": "./sample.db"
}
}
}
}
```
{{% /tab %}}
{{% tab header="Gemini Code Assist" lang="en" %}}
1. Install the [Gemini Code
Assist](https://marketplace.visualstudio.com/items?itemName=Google.geminicodeassist)
extension in Visual Studio Code.
1. Enable Agent Mode in Gemini Code Assist chat.
1. In your working directory, create a folder named `.gemini`. Within it,
create a `settings.json` file.
1. Add the following configuration, replace the environment variables with your
values, and then save:
```json
{
"mcpServers": {
"sqlite": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","sqlite","--stdio"],
"env": {
"SQLITE_DATABASE": "./sample.db"
}
}
}
}
```
{{% /tab %}}
{{< /tabpane >}}
## Use Tools
Your AI tool is now connected to SQLite using MCP. Try asking your AI assistant
to list tables, create a table, or define and execute other SQL statements.
The following tools are available to the LLM:
1. **list_tables**: lists tables and descriptions
1. **execute_sql**: execute any SQL statement
{{< notice note >}}
Prebuilt tools are pre-1.0, so expect some tool changes between versions. LLMs
will adapt to the tools available, so this shouldn't affect most users.
{{< /notice >}}
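The configurations above point `SQLITE_DATABASE` at `./sample.db`. If you don't already have a database file, the following sketch creates and seeds one, assuming the `sqlite3` CLI is installed; the `hotels` table and its rows are purely illustrative:

```shell
# Create and seed a sample database at the SQLITE_DATABASE path used above.
# The table name and rows are examples only -- use your own schema.
sqlite3 sample.db <<'SQL'
CREATE TABLE IF NOT EXISTS hotels (
  id INTEGER PRIMARY KEY,
  name TEXT NOT NULL,
  location TEXT
);
INSERT INTO hotels (name, location) VALUES ('Hotel Alpha', 'Zurich');
SQL

# Confirm the table exists and has data.
sqlite3 sample.db "SELECT count(*) FROM hotels;"
```

Once the file exists, the prebuilt server's `list_tables` and `execute_sql` tools operate against it.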
========================================================================
## Cloud SQL for PostgreSQL Admin using MCP
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Connect to Toolbox > IDEs > Cloud SQL for PostgreSQL Admin using MCP
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/connect-to/ides/cloud_sql_pg_admin_mcp/
**Description:** Create and manage Cloud SQL for PostgreSQL (Admin) using Toolbox.
This guide covers how to use [MCP Toolbox for Databases][toolbox] to give the
following developer assistants tools to create and manage Cloud SQL for
PostgreSQL instances, databases, and users:
* [Cursor][cursor]
* [Windsurf][windsurf] (Codeium)
* [Visual Studio Code][vscode] (Copilot)
* [Cline][cline] (VS Code extension)
* [Claude desktop][claudedesktop]
* [Claude code][claudecode]
* [Gemini CLI][geminicli]
* [Gemini Code Assist][geminicodeassist]
[toolbox]: https://github.com/googleapis/genai-toolbox
[cursor]: #configure-your-mcp-client
[windsurf]: #configure-your-mcp-client
[vscode]: #configure-your-mcp-client
[cline]: #configure-your-mcp-client
[claudedesktop]: #configure-your-mcp-client
[claudecode]: #configure-your-mcp-client
[geminicli]: #configure-your-mcp-client
[geminicodeassist]: #configure-your-mcp-client
## Before you begin
1. In the Google Cloud console, on the [project selector
page](https://console.cloud.google.com/projectselector2/home/dashboard),
select or create a Google Cloud project.
1. [Make sure that billing is enabled for your Google Cloud
project](https://cloud.google.com/billing/docs/how-to/verify-billing-enabled#confirm_billing_is_enabled_on_a_project).
1. Grant the necessary IAM roles to the user that will be running the MCP
server. The tools available will depend on the roles granted:
* `roles/cloudsql.viewer`: Provides read-only access to resources.
* `get_instance`
* `list_instances`
* `list_databases`
* `wait_for_operation`
* `roles/cloudsql.editor`: Provides permissions to manage existing resources.
* All `viewer` tools
* `create_database`
* `create_backup`
* `roles/cloudsql.admin`: Provides full control over all resources.
* All `editor` and `viewer` tools
* `create_instance`
* `create_user`
* `clone_instance`
* `restore_backup`
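The roles above can be granted with the gcloud CLI. In this sketch, `PROJECT_ID` and `USER_EMAIL` are placeholders for your own project and the account that will run the MCP server; pick the role (`viewer`, `editor`, or `admin`) that matches the tools you need:

```bash
# Grant a Cloud SQL role to the user running the MCP server.
# PROJECT_ID and USER_EMAIL are placeholders -- substitute your own values.
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="user:USER_EMAIL" \
  --role="roles/cloudsql.viewer"
```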
## Install MCP Toolbox
1. Download the latest version of Toolbox as a binary. Select the [correct
   binary](https://github.com/googleapis/genai-toolbox/releases) corresponding
   to your OS and CPU architecture. Toolbox version v0.15.0 or later is
   required:
{{< tabpane persist=header >}}
{{< tab header="linux/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/linux/amd64/toolbox
{{< /tab >}}
{{< tab header="darwin/arm64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/darwin/arm64/toolbox
{{< /tab >}}
{{< tab header="darwin/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/darwin/amd64/toolbox
{{< /tab >}}
{{< tab header="windows/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/windows/amd64/toolbox.exe
{{< /tab >}}
{{< /tabpane >}}
1. Make the binary executable:
```bash
chmod +x toolbox
```
1. Verify the installation:
```bash
./toolbox --version
```
## Configure your MCP Client
{{< tabpane text=true >}}
{{% tab header="Claude code" lang="en" %}}
1. Install [Claude
Code](https://docs.anthropic.com/en/docs/agents-and-tools/claude-code/overview).
1. Create a `.mcp.json` file in your project root if it doesn't exist.
1. Add the following configuration and save:
```json
{
"mcpServers": {
"cloud-sql-postgres-admin": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","cloud-sql-postgres-admin","--stdio"],
"env": {
}
}
}
}
```
1. Restart Claude code to apply the new configuration.
{{% /tab %}}
{{% tab header="Claude desktop" lang="en" %}}
1. Open [Claude desktop](https://claude.ai/download) and navigate to Settings.
1. Under the Developer tab, tap Edit Config to open the configuration file.
1. Add the following configuration and save:
```json
{
"mcpServers": {
"cloud-sql-postgres-admin": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","cloud-sql-postgres-admin","--stdio"],
"env": {
}
}
}
}
```
1. Restart Claude desktop.
1. From the new chat screen, you should see a hammer (MCP) icon appear with the
new MCP server available.
{{% /tab %}}
{{% tab header="Cline" lang="en" %}}
1. Open the [Cline](https://github.com/cline/cline) extension in VS Code and tap
the **MCP Servers** icon.
1. Tap Configure MCP Servers to open the configuration file.
1. Add the following configuration and save:
```json
{
"mcpServers": {
"cloud-sql-postgres-admin": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","cloud-sql-postgres-admin","--stdio"],
"env": {
}
}
}
}
```
1. You should see a green active status after the server is successfully
connected.
{{% /tab %}}
{{% tab header="Cursor" lang="en" %}}
1. Create a `.cursor` directory in your project root if it doesn't exist.
1. Create a `.cursor/mcp.json` file if it doesn't exist and open it.
1. Add the following configuration and save:
```json
{
"mcpServers": {
"cloud-sql-postgres-admin": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","cloud-sql-postgres-admin","--stdio"],
"env": {
}
}
}
}
```
1. Open [Cursor](https://www.cursor.com/) and navigate to **Settings > Cursor
   Settings > MCP**. You should see a green active status after the server is
   successfully connected.
{{% /tab %}}
{{% tab header="Visual Studio Code (Copilot)" lang="en" %}}
1. Open [VS Code](https://code.visualstudio.com/docs/copilot/overview) and
create a `.vscode` directory in your project root if it doesn't exist.
1. Create a `.vscode/mcp.json` file if it doesn't exist and open it.
1. Add the following configuration and save:
```json
{
"servers": {
"cloud-sql-postgres-admin": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","cloud-sql-postgres-admin","--stdio"],
"env": {
}
}
}
}
```
{{% /tab %}}
{{% tab header="Windsurf" lang="en" %}}
1. Open [Windsurf](https://docs.codeium.com/windsurf) and navigate to the
Cascade assistant.
1. Tap on the hammer (MCP) icon, then Configure to open the configuration file.
1. Add the following configuration and save:
```json
{
"mcpServers": {
"cloud-sql-postgres-admin": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","cloud-sql-postgres-admin","--stdio"],
"env": {
}
}
}
}
```
{{% /tab %}}
{{% tab header="Gemini CLI" lang="en" %}}
1. Install the [Gemini
CLI](https://github.com/google-gemini/gemini-cli?tab=readme-ov-file#quickstart).
1. In your working directory, create a folder named `.gemini`. Within it,
create a `settings.json` file.
1. Add the following configuration and save:
```json
{
"mcpServers": {
"cloud-sql-postgres-admin": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","cloud-sql-postgres-admin","--stdio"],
"env": {
}
}
}
}
```
{{% /tab %}}
{{% tab header="Gemini Code Assist" lang="en" %}}
1. Install the [Gemini Code
Assist](https://marketplace.visualstudio.com/items?itemName=Google.geminicodeassist)
extension in Visual Studio Code.
1. Enable Agent Mode in Gemini Code Assist chat.
1. In your working directory, create a folder named `.gemini`. Within it,
create a `settings.json` file.
1. Add the following configuration and save:
```json
{
"mcpServers": {
"cloud-sql-postgres-admin": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","cloud-sql-postgres-admin","--stdio"],
"env": {
}
}
}
}
```
{{% /tab %}}
{{< /tabpane >}}
## Use Tools
Your AI tool is now connected to Cloud SQL for PostgreSQL using MCP.
The `cloud-sql-postgres-admin` server provides tools for managing your Cloud SQL
instances and interacting with your database:
* **create_instance**: Creates a new Cloud SQL for PostgreSQL instance.
* **get_instance**: Gets information about a Cloud SQL instance.
* **list_instances**: Lists Cloud SQL instances in a project.
* **create_database**: Creates a new database in a Cloud SQL instance.
* **list_databases**: Lists all databases for a Cloud SQL instance.
* **create_user**: Creates a new user in a Cloud SQL instance.
* **wait_for_operation**: Waits for a Cloud SQL operation to complete.
* **clone_instance**: Creates a clone of an existing Cloud SQL for PostgreSQL instance.
* **create_backup**: Creates a backup on a Cloud SQL instance.
* **restore_backup**: Restores a backup of a Cloud SQL instance.
{{< notice note >}}
Prebuilt tools are pre-1.0, so expect some tool changes between versions. LLMs
will adapt to the tools available, so this shouldn't affect most users.
{{< /notice >}}
========================================================================
## Cloud SQL for MySQL Admin using MCP
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Connect to Toolbox > IDEs > Cloud SQL for MySQL Admin using MCP
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/connect-to/ides/cloud_sql_mysql_admin_mcp/
**Description:** Create and manage Cloud SQL for MySQL (Admin) using Toolbox.
This guide covers how to use [MCP Toolbox for Databases][toolbox] to give the
following developer assistants tools to create and manage Cloud SQL for MySQL
instances, databases, and users:
* [Cursor][cursor]
* [Windsurf][windsurf] (Codeium)
* [Visual Studio Code][vscode] (Copilot)
* [Cline][cline] (VS Code extension)
* [Claude desktop][claudedesktop]
* [Claude code][claudecode]
* [Gemini CLI][geminicli]
* [Gemini Code Assist][geminicodeassist]
[toolbox]: https://github.com/googleapis/genai-toolbox
[cursor]: #configure-your-mcp-client
[windsurf]: #configure-your-mcp-client
[vscode]: #configure-your-mcp-client
[cline]: #configure-your-mcp-client
[claudedesktop]: #configure-your-mcp-client
[claudecode]: #configure-your-mcp-client
[geminicli]: #configure-your-mcp-client
[geminicodeassist]: #configure-your-mcp-client
## Before you begin
1. In the Google Cloud console, on the [project selector
page](https://console.cloud.google.com/projectselector2/home/dashboard),
select or create a Google Cloud project.
1. [Make sure that billing is enabled for your Google Cloud
project](https://cloud.google.com/billing/docs/how-to/verify-billing-enabled#confirm_billing_is_enabled_on_a_project).
1. Grant the necessary IAM roles to the user that will be running the MCP
server. The tools available will depend on the roles granted:
* `roles/cloudsql.viewer`: Provides read-only access to resources.
* `get_instance`
* `list_instances`
* `list_databases`
* `wait_for_operation`
* `roles/cloudsql.editor`: Provides permissions to manage existing resources.
* All `viewer` tools
* `create_database`
* `create_backup`
* `roles/cloudsql.admin`: Provides full control over all resources.
* All `editor` and `viewer` tools
* `create_instance`
* `create_user`
* `clone_instance`
* `restore_backup`
## Install MCP Toolbox
1. Download the latest version of Toolbox as a binary. Select the [correct
   binary](https://github.com/googleapis/genai-toolbox/releases) corresponding
   to your OS and CPU architecture. Toolbox version v0.15.0 or later is
   required:
{{< tabpane persist=header >}}
{{< tab header="linux/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/linux/amd64/toolbox
{{< /tab >}}
{{< tab header="darwin/arm64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/darwin/arm64/toolbox
{{< /tab >}}
{{< tab header="darwin/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/darwin/amd64/toolbox
{{< /tab >}}
{{< tab header="windows/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/windows/amd64/toolbox.exe
{{< /tab >}}
{{< /tabpane >}}
1. Make the binary executable:
```bash
chmod +x toolbox
```
1. Verify the installation:
```bash
./toolbox --version
```
## Configure your MCP Client
{{< tabpane text=true >}}
{{% tab header="Claude code" lang="en" %}}
1. Install [Claude
Code](https://docs.anthropic.com/en/docs/agents-and-tools/claude-code/overview).
1. Create a `.mcp.json` file in your project root if it doesn't exist.
1. Add the following configuration and save:
```json
{
"mcpServers": {
"cloud-sql-mysql-admin": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","cloud-sql-mysql-admin","--stdio"],
"env": {
}
}
}
}
```
1. Restart Claude code to apply the new configuration.
{{% /tab %}}
{{% tab header="Claude desktop" lang="en" %}}
1. Open [Claude desktop](https://claude.ai/download) and navigate to Settings.
1. Under the Developer tab, tap Edit Config to open the configuration file.
1. Add the following configuration and save:
```json
{
"mcpServers": {
"cloud-sql-mysql-admin": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","cloud-sql-mysql-admin","--stdio"],
"env": {
}
}
}
}
```
1. Restart Claude desktop.
1. From the new chat screen, you should see a hammer (MCP) icon appear with the
new MCP server available.
{{% /tab %}}
{{% tab header="Cline" lang="en" %}}
1. Open the [Cline](https://github.com/cline/cline) extension in VS Code and tap
the **MCP Servers** icon.
1. Tap Configure MCP Servers to open the configuration file.
1. Add the following configuration and save:
```json
{
"mcpServers": {
"cloud-sql-mysql-admin": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","cloud-sql-mysql-admin","--stdio"],
"env": {
}
}
}
}
```
1. You should see a green active status after the server is successfully
connected.
{{% /tab %}}
{{% tab header="Cursor" lang="en" %}}
1. Create a `.cursor` directory in your project root if it doesn't exist.
1. Create a `.cursor/mcp.json` file if it doesn't exist and open it.
1. Add the following configuration and save:
```json
{
"mcpServers": {
"cloud-sql-mysql-admin": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","cloud-sql-mysql-admin","--stdio"],
"env": {
}
}
}
}
```
1. Open [Cursor](https://www.cursor.com/) and navigate to **Settings > Cursor
   Settings > MCP**. You should see a green active status after the server is
   successfully connected.
{{% /tab %}}
{{% tab header="Visual Studio Code (Copilot)" lang="en" %}}
1. Open [VS Code](https://code.visualstudio.com/docs/copilot/overview) and
create a `.vscode` directory in your project root if it doesn't exist.
1. Create a `.vscode/mcp.json` file if it doesn't exist and open it.
1. Add the following configuration and save:
```json
{
"servers": {
"cloud-sql-mysql-admin": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","cloud-sql-mysql-admin","--stdio"],
"env": {
}
}
}
}
```
{{% /tab %}}
{{% tab header="Windsurf" lang="en" %}}
1. Open [Windsurf](https://docs.codeium.com/windsurf) and navigate to the
Cascade assistant.
1. Tap on the hammer (MCP) icon, then Configure to open the configuration file.
1. Add the following configuration and save:
```json
{
"mcpServers": {
"cloud-sql-mysql-admin": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","cloud-sql-mysql-admin","--stdio"],
"env": {
}
}
}
}
```
{{% /tab %}}
{{% tab header="Gemini CLI" lang="en" %}}
1. Install the [Gemini
CLI](https://github.com/google-gemini/gemini-cli?tab=readme-ov-file#quickstart).
1. In your working directory, create a folder named `.gemini`. Within it,
create a `settings.json` file.
1. Add the following configuration and save:
```json
{
"mcpServers": {
"cloud-sql-mysql-admin": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","cloud-sql-mysql-admin","--stdio"],
"env": {
}
}
}
}
```
{{% /tab %}}
{{% tab header="Gemini Code Assist" lang="en" %}}
1. Install the [Gemini Code
Assist](https://marketplace.visualstudio.com/items?itemName=Google.geminicodeassist)
extension in Visual Studio Code.
1. Enable Agent Mode in Gemini Code Assist chat.
1. In your working directory, create a folder named `.gemini`. Within it,
create a `settings.json` file.
1. Add the following configuration and save:
```json
{
"mcpServers": {
"cloud-sql-mysql-admin": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","cloud-sql-mysql-admin","--stdio"],
"env": {
}
}
}
}
```
{{% /tab %}}
{{< /tabpane >}}
## Use Tools
Your AI tool is now connected to Cloud SQL for MySQL using MCP.
The `cloud-sql-mysql-admin` server provides tools for managing your Cloud SQL
instances and interacting with your database:
* **create_instance**: Creates a new Cloud SQL for MySQL instance.
* **get_instance**: Gets information about a Cloud SQL instance.
* **list_instances**: Lists Cloud SQL instances in a project.
* **create_database**: Creates a new database in a Cloud SQL instance.
* **list_databases**: Lists all databases for a Cloud SQL instance.
* **create_user**: Creates a new user in a Cloud SQL instance.
* **wait_for_operation**: Waits for a Cloud SQL operation to complete.
* **clone_instance**: Creates a clone of an existing Cloud SQL for MySQL instance.
* **create_backup**: Creates a backup on a Cloud SQL instance.
* **restore_backup**: Restores a backup of a Cloud SQL instance.
{{< notice note >}}
Prebuilt tools are pre-1.0, so expect some tool changes between versions. LLMs
will adapt to the tools available, so this shouldn't affect most users.
{{< /notice >}}
========================================================================
## Cloud SQL for SQL Server Admin using MCP
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Connect to Toolbox > IDEs > Cloud SQL for SQL Server Admin using MCP
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/connect-to/ides/cloud_sql_mssql_admin_mcp/
**Description:** Create and manage Cloud SQL for SQL Server (Admin) using Toolbox.
This guide covers how to use [MCP Toolbox for Databases][toolbox] to give the
following developer assistants tools to create and manage Cloud SQL for SQL
Server instances, databases, and users:
* [Cursor][cursor]
* [Windsurf][windsurf] (Codeium)
* [Visual Studio Code][vscode] (Copilot)
* [Cline][cline] (VS Code extension)
* [Claude desktop][claudedesktop]
* [Claude code][claudecode]
* [Gemini CLI][geminicli]
* [Gemini Code Assist][geminicodeassist]
[toolbox]: https://github.com/googleapis/genai-toolbox
[cursor]: #configure-your-mcp-client
[windsurf]: #configure-your-mcp-client
[vscode]: #configure-your-mcp-client
[cline]: #configure-your-mcp-client
[claudedesktop]: #configure-your-mcp-client
[claudecode]: #configure-your-mcp-client
[geminicli]: #configure-your-mcp-client
[geminicodeassist]: #configure-your-mcp-client
## Before you begin
1. In the Google Cloud console, on the [project selector
page](https://console.cloud.google.com/projectselector2/home/dashboard),
select or create a Google Cloud project.
1. [Make sure that billing is enabled for your Google Cloud
project](https://cloud.google.com/billing/docs/how-to/verify-billing-enabled#confirm_billing_is_enabled_on_a_project).
1. Grant the necessary IAM roles to the user that will be running the MCP
server. The tools available will depend on the roles granted:
* `roles/cloudsql.viewer`: Provides read-only access to resources.
* `get_instance`
* `list_instances`
* `list_databases`
* `wait_for_operation`
* `roles/cloudsql.editor`: Provides permissions to manage existing resources.
* All `viewer` tools
* `create_database`
* `create_backup`
* `roles/cloudsql.admin`: Provides full control over all resources.
* All `editor` and `viewer` tools
* `create_instance`
* `create_user`
* `clone_instance`
* `restore_backup`
## Install MCP Toolbox
1. Download the latest version of Toolbox as a binary. Select the [correct
   binary](https://github.com/googleapis/genai-toolbox/releases) corresponding
   to your OS and CPU architecture. Toolbox version v0.15.0 or later is
   required:
{{< tabpane persist=header >}}
{{< tab header="linux/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/linux/amd64/toolbox
{{< /tab >}}
{{< tab header="darwin/arm64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/darwin/arm64/toolbox
{{< /tab >}}
{{< tab header="darwin/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/darwin/amd64/toolbox
{{< /tab >}}
{{< tab header="windows/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/windows/amd64/toolbox.exe
{{< /tab >}}
{{< /tabpane >}}
1. Make the binary executable:
```bash
chmod +x toolbox
```
1. Verify the installation:
```bash
./toolbox --version
```
## Configure your MCP Client
{{< tabpane text=true >}}
{{% tab header="Claude code" lang="en" %}}
1. Install [Claude
Code](https://docs.anthropic.com/en/docs/agents-and-tools/claude-code/overview).
1. Create a `.mcp.json` file in your project root if it doesn't exist.
1. Add the following configuration and save:
```json
{
"mcpServers": {
"cloud-sql-mssql-admin": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","cloud-sql-mssql-admin","--stdio"],
"env": {
}
}
}
}
```
1. Restart Claude code to apply the new configuration.
{{% /tab %}}
{{% tab header="Claude desktop" lang="en" %}}
1. Open [Claude desktop](https://claude.ai/download) and navigate to Settings.
1. Under the Developer tab, tap Edit Config to open the configuration file.
1. Add the following configuration and save:
```json
{
"mcpServers": {
"cloud-sql-mssql-admin": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","cloud-sql-mssql-admin","--stdio"],
"env": {
}
}
}
}
```
1. Restart Claude desktop.
1. From the new chat screen, you should see a hammer (MCP) icon appear with the
new MCP server available.
{{% /tab %}}
{{% tab header="Cline" lang="en" %}}
1. Open the [Cline](https://github.com/cline/cline) extension in VS Code and tap
the **MCP Servers** icon.
1. Tap Configure MCP Servers to open the configuration file.
1. Add the following configuration and save:
```json
{
"mcpServers": {
"cloud-sql-mssql-admin": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","cloud-sql-mssql-admin","--stdio"],
"env": {
}
}
}
}
```
1. You should see a green active status after the server is successfully
connected.
{{% /tab %}}
{{% tab header="Cursor" lang="en" %}}
1. Create a `.cursor` directory in your project root if it doesn't exist.
1. Create a `.cursor/mcp.json` file if it doesn't exist and open it.
1. Add the following configuration and save:
```json
{
"mcpServers": {
"cloud-sql-mssql-admin": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","cloud-sql-mssql-admin","--stdio"],
"env": {
}
}
}
}
```
1. Open [Cursor](https://www.cursor.com/) and navigate to **Settings > Cursor
   Settings > MCP**. You should see a green active status after the server is
   successfully connected.
{{% /tab %}}
{{% tab header="Visual Studio Code (Copilot)" lang="en" %}}
1. Open [VS Code](https://code.visualstudio.com/docs/copilot/overview) and
create a `.vscode` directory in your project root if it doesn't exist.
1. Create a `.vscode/mcp.json` file if it doesn't exist and open it.
1. Add the following configuration and save:
```json
{
"servers": {
"cloud-sql-mssql-admin": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","cloud-sql-mssql-admin","--stdio"],
"env": {
}
}
}
}
```
{{% /tab %}}
{{% tab header="Windsurf" lang="en" %}}
1. Open [Windsurf](https://docs.codeium.com/windsurf) and navigate to the
Cascade assistant.
1. Tap on the hammer (MCP) icon, then Configure to open the configuration file.
1. Add the following configuration and save:
```json
{
"mcpServers": {
"cloud-sql-mssql-admin": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","cloud-sql-mssql-admin","--stdio"],
"env": {
}
}
}
}
```
{{% /tab %}}
{{% tab header="Gemini CLI" lang="en" %}}
1. Install the [Gemini
CLI](https://github.com/google-gemini/gemini-cli?tab=readme-ov-file#quickstart).
1. In your working directory, create a folder named `.gemini`. Within it,
create a `settings.json` file.
1. Add the following configuration and save:
```json
{
"mcpServers": {
"cloud-sql-mssql-admin": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","cloud-sql-mssql-admin","--stdio"],
"env": {
}
}
}
}
```
{{% /tab %}}
{{% tab header="Gemini Code Assist" lang="en" %}}
1. Install the [Gemini Code
Assist](https://marketplace.visualstudio.com/items?itemName=Google.geminicodeassist)
extension in Visual Studio Code.
1. Enable Agent Mode in Gemini Code Assist chat.
1. In your working directory, create a folder named `.gemini`. Within it,
create a `settings.json` file.
1. Add the following configuration and save:
```json
{
"mcpServers": {
"cloud-sql-mssql-admin": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","cloud-sql-mssql-admin","--stdio"],
"env": {
}
}
}
}
```
{{% /tab %}}
{{< /tabpane >}}
## Use Tools
Your AI tool is now connected to Cloud SQL for SQL Server using MCP.
The `cloud-sql-mssql-admin` server provides tools for managing your Cloud SQL
instances and interacting with your database:
* **create_instance**: Creates a new Cloud SQL for SQL Server instance.
* **get_instance**: Gets information about a Cloud SQL instance.
* **list_instances**: Lists Cloud SQL instances in a project.
* **create_database**: Creates a new database in a Cloud SQL instance.
* **list_databases**: Lists all databases for a Cloud SQL instance.
* **create_user**: Creates a new user in a Cloud SQL instance.
* **wait_for_operation**: Waits for a Cloud SQL operation to complete.
* **clone_instance**: Creates a clone of an existing Cloud SQL for SQL Server instance.
* **create_backup**: Creates a backup on a Cloud SQL instance.
* **restore_backup**: Restores a backup of a Cloud SQL instance.
{{< notice note >}}
Prebuilt tools are pre-1.0, so expect some tool changes between versions. LLMs
will adapt to the tools available, so this shouldn't affect most users.
{{< /notice >}}
========================================================================
## Deploy Toolbox
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Deploy Toolbox
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/deploy-to/
**Description:** Learn how to deploy the MCP Toolbox server to production environments.
Once you have tested your MCP Toolbox configuration locally, you can deploy the server to a highly available, production-ready environment.
Choose your preferred deployment platform below to get started:
* **[Docker](./docker/)**: Run the official Toolbox container image on any Docker-compatible host.
* **[Google Cloud Run](./cloud-run/)**: Deploy the Toolbox as a fully managed, scalable, and secure Cloud Run service.
* **[Kubernetes](./kubernetes/)**: Deploy the Toolbox as a microservice using GKE.
{{< notice tip >}}
**Production Security:** When moving to production, never hardcode passwords or API keys directly into your `tools.yaml`. Always use environment variable substitution and inject those values securely through your deployment platform's secret manager.
{{< /notice >}}
========================================================================
## Cloud Run
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Deploy Toolbox > Cloud Run
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/deploy-to/cloud-run/
**Description:** How to set up and configure Toolbox to run on Cloud Run.
## Before you begin
1. [Install](https://cloud.google.com/sdk/docs/install) the Google Cloud CLI.
1. Set the PROJECT_ID environment variable:
```bash
export PROJECT_ID="my-project-id"
```
1. Initialize gcloud CLI:
```bash
gcloud init
gcloud config set project $PROJECT_ID
```
1. Make sure you've set up and initialized your database.
1. You must have the following APIs enabled:
```bash
gcloud services enable run.googleapis.com \
cloudbuild.googleapis.com \
artifactregistry.googleapis.com \
iam.googleapis.com \
secretmanager.googleapis.com
```
1. To create a service account, you must have the following IAM role:
   - Service Account Creator role (roles/iam.serviceAccountCreator)
1. To create a secret, you must have the following roles:
- Secret Manager Admin role (roles/secretmanager.admin)
1. To deploy to Cloud Run, you must have the following set of roles:
- Cloud Run Developer (roles/run.developer)
- Service Account User role (roles/iam.serviceAccountUser)
{{< notice note >}}
If you are using sources that require VPC-access (such as
AlloyDB or Cloud SQL over private IP), make sure your Cloud Run service and the
database are in the same VPC network.
{{< /notice >}}
## Create a service account
1. Create a backend service account if you don't already have one:
```bash
gcloud iam service-accounts create toolbox-identity
```
1. Grant permissions to use secret manager:
```bash
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member serviceAccount:toolbox-identity@$PROJECT_ID.iam.gserviceaccount.com \
--role roles/secretmanager.secretAccessor
```
1. Grant additional permissions to the service account that are specific to the
source, e.g.:
- [AlloyDB for PostgreSQL](../../../integrations/alloydb/_index.md#iam-permissions)
- [Cloud SQL for PostgreSQL](../../../integrations/cloud-sql-pg/_index.md#iam-permissions)
## Configure `tools.yaml` file
Create a `tools.yaml` file that contains your configuration for Toolbox. For
details, see the
[configuration](../../configuration/_index.md)
section.
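As an illustration, a minimal `tools.yaml` might look like the sketch below. The source name, tool name, table, and statement are examples only; the `${DB_USER}` and `${DB_PASSWORD}` placeholders use environment variable substitution so that credentials can be injected from Secret Manager rather than hardcoded, as the production security tip recommends. Consult the configuration section for the full schema of your source kind:

```yaml
# Illustrative only -- names, fields, and the SQL statement are examples.
sources:
  my-pg-source:
    kind: cloud-sql-postgres
    project: my-project-id
    region: us-central1
    instance: my-instance
    database: my_db
    user: ${DB_USER}          # injected via env vars / Secret Manager
    password: ${DB_PASSWORD}
tools:
  search-hotels:
    kind: postgres-sql
    source: my-pg-source
    description: Search hotels by name.
    parameters:
      - name: name
        type: string
        description: Hotel name to search for.
    statement: SELECT * FROM hotels WHERE name ILIKE '%' || $1 || '%';
toolsets:
  my-toolset:
    - search-hotels
```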
## Deploy to Cloud Run
1. Upload `tools.yaml` as a secret:
```bash
gcloud secrets create tools --data-file=tools.yaml
```
If you already have a secret and want to update the secret version, execute
the following:
```bash
gcloud secrets versions add tools --data-file=tools.yaml
```
1. Set an environment variable to the container image that you want to use for
   Cloud Run:
```bash
export IMAGE=us-central1-docker.pkg.dev/database-toolbox/toolbox/toolbox:latest
```
{{< notice note >}}
**The `$PORT` Environment Variable**
Google Cloud Run dictates the port your application must listen on by setting
the `$PORT` environment variable inside your container. This value defaults to
**8080**. Your application's `--port` argument **must** be set to listen on this
port. If there is a mismatch, the container will fail to start and the
deployment will time out.
{{< /notice >}}
1. Deploy Toolbox to Cloud Run using the following command:
```bash
gcloud run deploy toolbox \
--image $IMAGE \
--service-account toolbox-identity \
--region us-central1 \
--set-secrets "/app/tools.yaml=tools:latest" \
--args="--tools-file=/app/tools.yaml","--address=0.0.0.0","--port=8080"
# --allow-unauthenticated # https://cloud.google.com/run/docs/authenticating/public#gcloud
```
If you are using a VPC network, use the command below:
```bash
# TODO(dev): update --network and --subnet below to match your VPC if necessary
gcloud run deploy toolbox \
--image $IMAGE \
--service-account toolbox-identity \
--region us-central1 \
--set-secrets "/app/tools.yaml=tools:latest" \
--args="--tools-file=/app/tools.yaml","--address=0.0.0.0","--port=8080" \
--network default \
--subnet default
# --allow-unauthenticated # https://cloud.google.com/run/docs/authenticating/public#gcloud
```
### Update the deployed server for production
{{< production-security-warning >}}
To prevent DNS rebinding attacks, use the `--allowed-hosts` flag to specify the
hosts permitted to reach the server. Applying the flag requires redeploying the
Cloud Run service.
To enforce CORS checks, use the `--allowed-origins` flag to specify the
origins permitted to access the server.
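As background, a DNS rebinding defence boils down to checking each request's `Host` header against the allow-list before serving it. The sketch below illustrates the idea only; it is not Toolbox's implementation, and the function name is hypothetical:

```python
from urllib.parse import urlsplit

def host_allowed(host_header: str, allowed_hosts: list[str]) -> bool:
    """Return True if the Host header matches an entry in the allow-list.

    Port numbers are ignored, so "example.com:8080" matches "example.com".
    Illustration of the DNS-rebinding check, not Toolbox's actual code.
    """
    hostname = urlsplit("//" + host_header).hostname  # strips any :port
    return hostname in {h.lower() for h in allowed_hosts}

print(host_allowed("toolbox.example.com:8080", ["toolbox.example.com"]))  # True
print(host_allowed("attacker.test", ["toolbox.example.com"]))             # False
```

A request whose `Host` header does not match the allow-list is rejected, which defeats rebinding attacks that point an attacker-controlled DNS name at your service.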
1. Set environment variables for the Cloud Run service URL and allowed host:
```bash
export URL=
export HOST=
```
2. Redeploy Toolbox:
```bash
gcloud run deploy toolbox \
--image $IMAGE \
--service-account toolbox-identity \
--region us-central1 \
--set-secrets "/app/tools.yaml=tools:latest" \
--args="--tools-file=/app/tools.yaml","--address=0.0.0.0","--port=8080","--allowed-origins=$URL","--allowed-hosts=$HOST"
# --allow-unauthenticated # https://cloud.google.com/run/docs/authenticating/public#gcloud
```
If you are using a VPC network, use the command below:
```bash
# TODO(dev): update --network and --subnet below to match your VPC if necessary
gcloud run deploy toolbox \
--image $IMAGE \
--service-account toolbox-identity \
--region us-central1 \
--set-secrets "/app/tools.yaml=tools:latest" \
--args="--tools-file=/app/tools.yaml","--address=0.0.0.0","--port=8080","--allowed-origins=$URL","--allowed-hosts=$HOST" \
--network default \
--subnet default
# --allow-unauthenticated # https://cloud.google.com/run/docs/authenticating/public#gcloud
```
## Connecting with Toolbox Client SDK
You can connect to Toolbox Cloud Run instances directly through the SDK.
1. [Set up `Cloud Run Invoker` role
access](https://cloud.google.com/run/docs/securing/managing-access#service-add-principals)
to your Cloud Run service.
1. (Only for local runs) Set up [Application Default
Credentials](https://cloud.google.com/docs/authentication/set-up-adc-local-dev-environment)
for the principal you set up the `Cloud Run Invoker` role access to.
1. Run the following to retrieve the auto-generated URL for the Cloud Run service:
```bash
gcloud run services describe toolbox --format 'value(status.url)'
```
1. Import and initialize the toolbox client with the URL retrieved above:
{{< tabpane persist=header >}}
{{< tab header="Python" lang="python" >}}
import asyncio
from toolbox_core import ToolboxClient, auth_methods
# Replace with the Cloud Run service URL generated in the previous step
URL = "https://cloud-run-url.app"
auth_token_provider = auth_methods.aget_google_id_token(URL) # can also use sync method
async def main():
async with ToolboxClient(
URL,
client_headers={"Authorization": auth_token_provider},
) as toolbox:
toolset = await toolbox.load_toolset()
# ...
asyncio.run(main())
{{< /tab >}}
{{< tab header="Javascript" lang="javascript" >}}
import { ToolboxClient } from '@toolbox-sdk/core';
import {getGoogleIdToken} from '@toolbox-sdk/core/auth'
// Replace with the Cloud Run service URL generated in the previous step.
const URL = 'https://cloud-run-url.app';
const authTokenProvider = () => getGoogleIdToken(URL);
const client = new ToolboxClient(URL, null, {"Authorization": authTokenProvider});
{{< /tab >}}
{{< tab header="Go" lang="go" >}}
import (
	"context"
	"log"

	"github.com/googleapis/mcp-toolbox-sdk-go/core"
)

func main() {
	ctx := context.Background()
	// Replace with the Cloud Run service URL generated in the previous step.
	URL := "https://cloud-run-url.app"
	authTokenProvider, err := core.GetGoogleIDToken(ctx, URL)
	if err != nil {
		log.Fatalf("Failed to fetch token: %v", err)
	}
	toolboxClient, err := core.NewToolboxClient(
		URL,
		core.WithClientHeaderString("Authorization", authTokenProvider))
	if err != nil {
		log.Fatalf("Failed to create Toolbox client: %v", err)
	}
	_ = toolboxClient
}
{{< /tab >}}
{{< /tabpane >}}
Now, you can use this client to connect to the deployed Cloud Run instance!
## Troubleshooting
{{< notice note >}}
For any deployment or runtime error, the best first step is to check the logs
for your service in the Google Cloud Console's Cloud Run section. They often
contain the specific error message needed to diagnose the problem.
{{< /notice >}}
- **Deployment Fails with "Container failed to start":** This is almost always
caused by a port mismatch. Ensure your container's `--port` argument is set to
`8080` to match the `$PORT` environment variable provided by Cloud Run.
- **Client Receives Permission Denied Error (401 or 403):** If your client
application (e.g., your local SDK) gets a `401 Unauthorized` or `403
Forbidden` error when trying to call your Cloud Run service, it means the
client is not properly authenticated as an invoker.
- Ensure the user or service account calling the service has the **Cloud Run
Invoker** (`roles/run.invoker`) IAM role.
- If running locally, make sure your Application Default Credentials are set
up correctly by running `gcloud auth application-default login`.
- **Service Fails to Access Secrets (in logs):** If your application starts but
the logs show errors like "permission denied" when trying to access Secret
Manager, it means the Toolbox service account is missing permissions.
- Ensure the `toolbox-identity` service account has the **Secret Manager
Secret Accessor** (`roles/secretmanager.secretAccessor`) IAM role.
- **Cloud Run Connections via IAP:** Currently we do not support Cloud Run connections via [IAP](https://docs.cloud.google.com/iap/docs/concepts-overview). Please disable IAP if you are using it.
========================================================================
## Docker Compose
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Deploy Toolbox > Docker Compose
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/deploy-to/docker/
**Description:** How to deploy Toolbox using Docker Compose.
## Before you begin
1. [Install Docker Compose.](https://docs.docker.com/compose/install/)
## Configure `tools.yaml` file
Create a `tools.yaml` file that contains your configuration for Toolbox. For
details, see the
[configuration](https://github.com/googleapis/genai-toolbox/blob/main/README.md#configuration)
section.
## Deploy using Docker Compose
1. Create a `docker-compose.yml` file, customizing as needed:
```yaml
services:
toolbox:
# TODO: It is recommended to pin to a specific image version instead of latest.
image: us-central1-docker.pkg.dev/database-toolbox/toolbox/toolbox:latest
hostname: toolbox
platform: linux/amd64
ports:
- "5000:5000"
volumes:
- ./config:/config
command: [ "toolbox", "--tools-file", "/config/tools.yaml", "--address", "0.0.0.0"]
depends_on:
db:
condition: service_healthy
networks:
- tool-network
db:
# TODO: It is recommended to pin to a specific image version instead of latest.
image: postgres
hostname: db
environment:
POSTGRES_USER: toolbox_user
POSTGRES_PASSWORD: my-password
POSTGRES_DB: toolbox_db
ports:
- "5432:5432"
volumes:
- ./db:/var/lib/postgresql/data
# This file can be used to bootstrap your schema if needed.
# See "initialization scripts" on https://hub.docker.com/_/postgres/ for more info
- ./config/init.sql:/docker-entrypoint-initdb.d/init.sql
healthcheck:
test: ["CMD-SHELL", "pg_isready -U toolbox_user -d toolbox_db"]
interval: 10s
timeout: 5s
retries: 5
networks:
- tool-network
networks:
tool-network:
```
{{< production-security-warning >}}
1. Run the following command to bring up the Toolbox and Postgres instances:
```bash
docker-compose up -d
```
{{< notice tip >}}
You can use this setup to quickly set up Toolbox + Postgres to follow along in our
[Quickstart](../../../build-with-mcp-toolbox/local_quickstart.md)
{{< /notice >}}
## Connecting with Toolbox Client SDK
Next, we will use Toolbox with the Client SDKs:
1. The URL for the Toolbox server running via Docker Compose will be:
```
http://localhost:5000
```
1. Import and initialize the client with the URL:
{{< tabpane persist=header >}}
{{< tab header="LangChain" lang="python" >}}
from toolbox_langchain import ToolboxClient
# Replace with your Toolbox server URL, e.g. http://localhost:5000
async with ToolboxClient("http://localhost:5000") as toolbox:
{{< /tab >}}
{{< tab header="Llamaindex" lang="python" >}}
from toolbox_llamaindex import ToolboxClient
# Replace with your Toolbox server URL, e.g. http://localhost:5000
async with ToolboxClient("http://localhost:5000") as toolbox:
{{< /tab >}}
{{< /tabpane >}}
========================================================================
## Kubernetes
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Deploy Toolbox > Kubernetes
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/deploy-to/kubernetes/
**Description:** How to set up and configure Toolbox to deploy on Kubernetes with Google Kubernetes Engine (GKE).
## Before you begin
1. Set the PROJECT_ID environment variable:
```bash
export PROJECT_ID="my-project-id"
```
1. [Install the `gcloud` CLI](https://cloud.google.com/sdk/docs/install).
1. Initialize gcloud CLI:
```bash
gcloud init
gcloud config set project $PROJECT_ID
```
1. You must have the following APIs enabled:
```bash
gcloud services enable artifactregistry.googleapis.com \
cloudbuild.googleapis.com \
container.googleapis.com \
iam.googleapis.com
```
1. `kubectl` is used to manage Kubernetes, the cluster orchestration system used
by GKE. Verify if you have `kubectl` installed:
```bash
kubectl version --client
```
1. If needed, install `kubectl` component using the Google Cloud CLI:
```bash
gcloud components install kubectl
```
## Create a service account
1. Specify a name for your service account with an environment variable:
```bash
export SA_NAME=toolbox
```
1. Create a backend service account:
```bash
gcloud iam service-accounts create $SA_NAME
```
1. Grant any IAM roles necessary to the IAM service account. Each source has a
list of necessary IAM permissions listed on its page. The example below is
for the Cloud SQL for PostgreSQL source:
```bash
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member serviceAccount:$SA_NAME@$PROJECT_ID.iam.gserviceaccount.com \
--role roles/cloudsql.client
```
- [AlloyDB IAM Identity](../../../integrations/alloydb/_index.md#iam-permissions)
- [CloudSQL IAM Identity](../../../integrations/cloud-sql-pg/_index.md#iam-permissions)
- [Spanner IAM Identity](../../../integrations/spanner/_index.md#iam-permissions)
## Deploy to Kubernetes
1. Set environment variables:
```bash
export CLUSTER_NAME=toolbox-cluster
export DEPLOYMENT_NAME=toolbox
export SERVICE_NAME=toolbox-service
export REGION=us-central1
export NAMESPACE=toolbox-namespace
export SECRET_NAME=toolbox-config
export KSA_NAME=toolbox-service-account
```
1. Create a [GKE cluster](https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture).
```bash
gcloud container clusters create-auto $CLUSTER_NAME \
--location=$REGION
```
1. Get authentication credentials to interact with the cluster. This also
configures `kubectl` to use the cluster.
```bash
gcloud container clusters get-credentials $CLUSTER_NAME \
--region=$REGION \
--project=$PROJECT_ID
```
1. View the current context for `kubectl`.
```bash
kubectl config current-context
```
1. Create namespace for the deployment.
```bash
kubectl create namespace $NAMESPACE
```
1. Create a Kubernetes Service Account (KSA).
```bash
kubectl create serviceaccount $KSA_NAME --namespace $NAMESPACE
```
1. Enable the IAM binding between Google Service Account (GSA) and Kubernetes
Service Account (KSA).
```bash
gcloud iam service-accounts add-iam-policy-binding \
--role="roles/iam.workloadIdentityUser" \
--member="serviceAccount:$PROJECT_ID.svc.id.goog[$NAMESPACE/$KSA_NAME]" \
$SA_NAME@$PROJECT_ID.iam.gserviceaccount.com
```
1. Add annotation to KSA to complete binding:
```bash
kubectl annotate serviceaccount \
$KSA_NAME \
iam.gke.io/gcp-service-account=$SA_NAME@$PROJECT_ID.iam.gserviceaccount.com \
--namespace $NAMESPACE
```
1. Prepare the Kubernetes secret for your `tools.yaml` file.
```bash
kubectl create secret generic $SECRET_NAME \
--from-file=./tools.yaml \
--namespace=$NAMESPACE
```
1. Create a Kubernetes manifest file (`k8s_deployment.yaml`) to build deployment.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: toolbox
namespace: toolbox-namespace
spec:
selector:
matchLabels:
app: toolbox
template:
metadata:
labels:
app: toolbox
spec:
serviceAccountName: toolbox-service-account
containers:
- name: toolbox
# TODO: It is recommended to pin to a specific image version instead of latest.
image: us-central1-docker.pkg.dev/database-toolbox/toolbox/toolbox:latest
args: ["--address", "0.0.0.0"]
ports:
- containerPort: 5000
volumeMounts:
- name: toolbox-config
mountPath: "/app/tools.yaml"
subPath: tools.yaml
readOnly: true
volumes:
- name: toolbox-config
secret:
secretName: toolbox-config
items:
- key: tools.yaml
path: tools.yaml
```
{{< production-security-warning >}}
1. Create the deployment.
```bash
kubectl apply -f k8s_deployment.yaml --namespace $NAMESPACE
```
1. Check the status of deployment.
```bash
kubectl get deployments --namespace $NAMESPACE
```
1. Create a Kubernetes manifest file (`k8s_service.yaml`) to build service.
```yaml
apiVersion: v1
kind: Service
metadata:
name: toolbox-service
namespace: toolbox-namespace
annotations:
cloud.google.com/l4-rbs: "enabled"
spec:
selector:
app: toolbox
ports:
- port: 5000
targetPort: 5000
type: LoadBalancer
```
1. Create the service.
```bash
kubectl apply -f k8s_service.yaml --namespace $NAMESPACE
```
1. You can find the external IP address created for your service by describing
it with the following command.
```bash
kubectl describe services $SERVICE_NAME --namespace $NAMESPACE
```
1. To look at logs, run the following.
```bash
kubectl logs -f deploy/$DEPLOYMENT_NAME --namespace $NAMESPACE
```
1. You might have to wait a couple of minutes. The service is ready when an
`EXTERNAL-IP` value appears in the output of the following command:
```bash
kubectl get svc -n $NAMESPACE
```
1. Access Toolbox using the service's external IP (substitute the
`EXTERNAL-IP` value from the previous step):
```bash
curl http://EXTERNAL-IP:5000
```
## Clean up resources
1. Delete secret.
```bash
kubectl delete secret $SECRET_NAME --namespace $NAMESPACE
```
1. Delete deployment.
```bash
kubectl delete deployment $DEPLOYMENT_NAME --namespace $NAMESPACE
```
1. Delete the application's service.
```bash
kubectl delete service $SERVICE_NAME --namespace $NAMESPACE
```
1. Delete the Kubernetes cluster.
```bash
gcloud container clusters delete $CLUSTER_NAME \
--location=$REGION
```
========================================================================
## Monitoring & Observability
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Monitoring & Observability
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/monitoring/
**Description:** Learn how to monitor, log, and trace the internal state of the MCP Toolbox.
Understanding the internal state of your system is critical when deploying AI agents. Explore the sections below to configure your telemetry signals and route them to your preferred observability backends:
* **[Telemetry](telemetry/index.md)**: Learn how to configure logging levels and understand the core metrics and traces emitted by the Toolbox server.
* **[Export Telemetry](export_telemetry.md)**: Discover how to deploy and configure an OpenTelemetry (OTel) Collector.
========================================================================
## Telemetry
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Monitoring & Observability > Telemetry
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/monitoring/telemetry/
**Description:** An overview of telemetry and observability in Toolbox.
## About
Telemetry data such as logs, metrics, and traces will help developers understand
the internal state of the system. This page walks through the different types of
telemetry and observability available in Toolbox.
Toolbox exports telemetry data of logs via standard out/err, and traces/metrics
through [OpenTelemetry](https://opentelemetry.io/). Additional flags can be
passed to Toolbox to enable different logging behavior, or to export metrics
through a specific [exporter](#exporter).
## Logging
The following flags can be used to customize Toolbox logging:
| **Flag** | **Description** |
|--------------------|-----------------------------------------------------------------------------------------|
| `--log-level` | Preferred log level, allowed values: `debug`, `info`, `warn`, `error`. Default: `info`. |
| `--logging-format` | Preferred logging format, allowed values: `standard`, `json`. Default: `standard`. |
**Example:**
```bash
./toolbox --tools-file "tools.yaml" --log-level warn --logging-format json
```
### Level
Toolbox supports the following log levels, in order of increasing severity:
| **Log level** | **Description** |
|---------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Debug | Debug logs typically contain information that is only useful during the debugging phase and may be of little value during production. |
| Info | Info logs include information about successful operations within the application, such as a successful start, pause, or exit of the application. |
| Warn | Warning logs are slightly less severe than error conditions. They do not indicate an error, but they signal that an operation might fail in the future if action is not taken now. |
| Error | Error log is assigned to event logs that contain an application error message. |
Toolbox only outputs logs that are equal to or more severe than the configured
level.
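The severity threshold behaves like standard leveled logging: each level has a rank, and a record is emitted only when its rank meets or exceeds the configured one. A minimal sketch of that rule (an illustration, not Toolbox code; the function name is hypothetical):

```python
# Numeric ranks for Toolbox's log levels, least to most severe.
LOG_LEVELS = {"debug": 0, "info": 1, "warn": 2, "error": 3}

def should_emit(record_level: str, configured_level: str) -> bool:
    """Emit a record only if it is at or above the configured level."""
    return LOG_LEVELS[record_level] >= LOG_LEVELS[configured_level]

# With --log-level warn, info records are suppressed:
print(should_emit("info", "warn"))   # False
print(should_emit("error", "warn"))  # True
```

For example, running with `--log-level warn` suppresses `debug` and `info` entries while still emitting `warn` and `error`.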
### Format
Toolbox supports both standard and structured logging formats.
Standard logging outputs each log entry as a string:
```
2024-11-12T15:08:11.451377-08:00 INFO "Initialized 0 sources.\n"
```
Structured logging outputs each log entry as JSON:
```
{
"timestamp":"2024-11-04T16:45:11.987299-08:00",
"severity":"ERROR",
"logging.googleapis.com/sourceLocation":{...},
"message":"unable to parse tool file at \"tools.yaml\": \"cloud-sql-postgres1\" is not a valid type of data source"
}
```
{{< notice tip >}}
`logging.googleapis.com/sourceLocation` shows the source code
location information associated with the log entry, if any.
{{< /notice >}}
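Because structured logs are emitted as one JSON object per line, downstream tooling can filter them without regular expressions. A minimal sketch (field names follow the example above; the filtering logic is an assumption for illustration):

```python
import json

# One structured log entry, as Toolbox would emit it on stderr/stdout.
line = (
    '{"timestamp":"2024-11-04T16:45:11.987299-08:00",'
    '"severity":"ERROR",'
    '"message":"unable to parse tool file"}'
)

entry = json.loads(line)
# Surface only warnings and errors, e.g. for alerting.
if entry["severity"] in ("WARN", "ERROR"):
    print(f'{entry["timestamp"]} {entry["severity"]}: {entry["message"]}')
```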
## Telemetry
Toolbox supports exporting metrics and traces to any OpenTelemetry-compatible
exporter.
### Metrics
A metric is a measurement of a service captured at runtime. The collected data
can be used to provide important insights into the service. Toolbox provides the
following custom metrics:
| **Metric Name** | **Description** |
|------------------------------------|---------------------------------------------------------|
| `toolbox.server.toolset.get.count` | Counts the number of toolset manifest requests served |
| `toolbox.server.tool.get.count` | Counts the number of tool manifest requests served |
| `toolbox.server.tool.invoke.count` | Counts the number of tool invocation requests served |
| `toolbox.server.mcp.sse.count` | Counts the number of mcp sse connection requests served |
| `toolbox.server.mcp.post.count` | Counts the number of mcp post requests served |
All custom metrics have the following attributes/labels:
| **Metric Attributes** | **Description** |
|----------------------------|-----------------------------------------------------------|
| `toolbox.name` | Name of the toolset or tool, if applicable. |
| `toolbox.operation.status` | Operation status code, for example: `success`, `failure`. |
| `toolbox.sse.sessionId` | Session id for sse connection, if applicable. |
| `toolbox.method` | Method of JSON-RPC request, if applicable. |
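Conceptually, each metric is a counter keyed by its attribute values, so the same metric name accumulates separate counts per tool and per status. The sketch below illustrates that behavior only; it is not Toolbox's internals, and the function name is hypothetical:

```python
from collections import Counter

# Illustration: how a labeled counter such as
# toolbox.server.tool.invoke.count accumulates per-attribute counts.
invocations = Counter()

def record_invocation(tool_name: str, status: str) -> None:
    # Attributes mirror toolbox.name and toolbox.operation.status above.
    invocations[(tool_name, status)] += 1

record_invocation("bigquery-sql-query", "success")
record_invocation("bigquery-sql-query", "success")
record_invocation("bigquery-sql-query", "failure")

print(invocations[("bigquery-sql-query", "success")])  # 2
```

This per-attribute breakdown is what lets a dashboard chart, say, failure rates per tool from a single metric name.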
### Traces
A trace is a tree of spans that shows the path that a request makes through an
application.
Spans generated by the Toolbox server are prefixed with `toolbox/server/`. For
example, when a user runs Toolbox, spans are generated for startup, with
`toolbox/server/init` as the root span.

### Resource Attributes
All metrics and traces generated within Toolbox will be associated with a
unified [resource][resource]. The list of resource attributes included are:
| **Resource Name** | **Description** |
|-------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [TelemetrySDK](https://pkg.go.dev/go.opentelemetry.io/otel/sdk/resource#WithTelemetrySDK) | TelemetrySDK version info. |
| [OS](https://pkg.go.dev/go.opentelemetry.io/otel/sdk/resource#WithOS) | OS attributes including OS description and OS type. |
| [Container](https://pkg.go.dev/go.opentelemetry.io/otel/sdk/resource#WithContainer) | Container attributes including container ID, if applicable. |
| [Host](https://pkg.go.dev/go.opentelemetry.io/otel/sdk/resource#WithHost) | Host attributes including host name. |
| [SchemaURL](https://pkg.go.dev/go.opentelemetry.io/otel/sdk/resource#WithSchemaURL) | Sets the schema URL for the configured resource. |
| `service.name` | OpenTelemetry service name. Defaults to `toolbox`. Set it via the `--telemetry-service-name` flag to distinguish between different Toolbox services. |
| `service.version` | The version of Toolbox used. |
[resource]: https://opentelemetry.io/docs/languages/go/resources/
### Exporter
An exporter is responsible for processing and exporting telemetry data. Toolbox
generates telemetry data in the OpenTelemetry Protocol (OTLP), and you can
choose any exporter designed to support the OpenTelemetry Protocol. Toolbox
provides two exporter implementations to choose from: the Google Cloud
Exporter, which sends data directly to the backend, or the OTLP Exporter paired
with a Collector, which acts as a proxy that collects and exports data to the
telemetry backend of your choice.

#### Google Cloud Exporter
The Google Cloud Exporter directly exports telemetry to Google Cloud Monitoring.
It utilizes the [GCP Metric Exporter][gcp-metric-exporter] and [GCP Trace
Exporter][gcp-trace-exporter].
[gcp-metric-exporter]:
https://github.com/GoogleCloudPlatform/opentelemetry-operations-go/tree/main/exporter/metric
[gcp-trace-exporter]:
https://github.com/GoogleCloudPlatform/opentelemetry-operations-go/tree/main/exporter/trace
{{< notice note >}}
If you're using Google Cloud Monitoring, the following APIs will need to be
enabled:
- [Cloud Logging API](https://cloud.google.com/logging/docs/api/enable-api)
- [Cloud Monitoring API](https://cloud.google.com/monitoring/api/enable-api)
- [Cloud Trace API](https://console.cloud.google.com/apis/enableflow?apiid=cloudtrace.googleapis.com)
{{< /notice >}}
#### OTLP Exporter
This implementation uses the default OTLP Exporter over HTTP for
[metrics][otlp-metric-exporter] and [traces][otlp-trace-exporter]. You can use
this exporter if you choose to export your telemetry data to a Collector.
[otlp-metric-exporter]: https://opentelemetry.io/docs/languages/go/exporters/#otlp-traces-over-http
[otlp-trace-exporter]: https://opentelemetry.io/docs/languages/go/exporters/#otlp-traces-over-http
### Collector
A collector acts as a proxy between the application and the telemetry backend.
It receives telemetry data, transforms it, and then exports it to backends that
can store it permanently. Toolbox provides an option to export telemetry data
to your choice of backend(s) compatible with the OpenTelemetry Protocol (OTLP).
If you would like to use a collector, refer to
[Export Telemetry using the OTel Collector](../export_telemetry.md).
### Flags
The following flags are used to determine Toolbox's telemetry configuration:
| **flag** | **type** | **description** |
|----------------------------|----------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `--telemetry-gcp` | bool | Enable exporting directly to Google Cloud Monitoring. Default is `false`. |
| `--telemetry-otlp` | string | Enable exporting using OpenTelemetry Protocol (OTLP) to the specified endpoint (e.g. "127.0.0.1:4318"). To pass an insecure endpoint here, set environment variable `OTEL_EXPORTER_OTLP_INSECURE=true`. |
| `--telemetry-service-name` | string | Sets the value of the `service.name` resource attribute. Default is `toolbox`. |
In addition to the flags noted above, you can also make additional configuration
for OpenTelemetry via the [General SDK Configuration][sdk-configuration] through
environmental variables.
[sdk-configuration]:
https://opentelemetry.io/docs/languages/sdk-configuration/general/
**Examples:**
To enable Google Cloud Exporter:
```bash
./toolbox --telemetry-gcp
```
To enable OTLP Exporter, provide Collector endpoint:
```bash
./toolbox --telemetry-otlp="127.0.0.1:4553"
```
========================================================================
## Export Telemetry
========================================================================
**Hierarchy:** MCP Toolbox for Databases > User Guide > Monitoring & Observability > Export Telemetry
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/user-guide/monitoring/export_telemetry/
**Description:** How to set up and configure Toolbox to use the Otel Collector.
## About
The [OpenTelemetry Collector][about-collector] offers a vendor-agnostic
implementation of how to receive, process and export telemetry data. It removes
the need to run, operate, and maintain multiple agents/collectors.
[about-collector]: https://opentelemetry.io/docs/collector/
## Configure the Collector
To configure the collector, you will have to provide a configuration file. The
configuration file consists of four classes of pipeline components that access
telemetry data.
- `Receivers`
- `Processors`
- `Exporters`
- `Connectors`
Example of setting up the classes of pipeline components (in this example, we
don't use connectors):
```yaml
receivers:
otlp:
protocols:
http:
endpoint: "127.0.0.1:4553"
exporters:
googlecloud:
project:
processors:
batch:
send_batch_size: 200
```
After each pipeline component is configured, you will enable it within the
`service` section of the configuration file.
```yaml
service:
pipelines:
traces:
receivers: ["otlp"]
processors: ["batch"]
exporters: ["googlecloud"]
```
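Combining the fragments above, a complete `collector-config.yaml` could look like the following. This is a sketch: the `googlecloud` exporter's `project` value (`my-project-id`) is a placeholder you must replace with your own project ID.

```yaml
receivers:
  otlp:
    protocols:
      http:
        endpoint: "127.0.0.1:4553"

processors:
  batch:
    send_batch_size: 200

exporters:
  googlecloud:
    project: my-project-id  # placeholder: replace with your GCP project ID

service:
  pipelines:
    traces:
      receivers: ["otlp"]
      processors: ["batch"]
      exporters: ["googlecloud"]
```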
## Running the Collector
There are a couple of steps to run and use a Collector.
1. [Install the
Collector](https://opentelemetry.io/docs/collector/installation/) binary.
Pull a binary or Docker image for the OpenTelemetry contrib collector.
1. Set up credentials for telemetry backend.
1. Set up the Collector config. Below are some examples for setting up the
Collector config:
- [Google Cloud Exporter][google-cloud-exporter]
- [Google Managed Service for Prometheus Exporter][google-prometheus-exporter]
1. Run the Collector with the configuration file.
```bash
./otelcol-contrib --config=collector-config.yaml
```
1. Run Toolbox with the `--telemetry-otlp` flag, pointing it at the Collector's
endpoint (e.g. `127.0.0.1:4553` for HTTP):
```bash
./toolbox --telemetry-otlp=127.0.0.1:4553
```
{{< notice tip >}}
To pass an insecure endpoint, set environment variable `OTEL_EXPORTER_OTLP_INSECURE=true`.
{{< /notice >}}
1. Once telemetry data is collected, you can view it in your telemetry
backend. If you are using the GCP exporters, telemetry will be visible in the
Google Cloud console at [Metrics Explorer][metrics-explorer] and [Trace
Explorer][trace-explorer].
{{< notice note >}}
If you are exporting to Google Cloud monitoring, we recommend that you use
the Google Cloud Exporter for traces and the Google Managed Service for
Prometheus Exporter for metrics.
{{< /notice >}}
[google-cloud-exporter]:
https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/googlecloudexporter
[google-prometheus-exporter]:
https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/googlemanagedprometheusexporter#example-configuration
[metrics-explorer]: https://console.cloud.google.com/monitoring/metrics-explorer
[trace-explorer]: https://console.cloud.google.com/traces
========================================================================
## Integrations
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/
**Description:** Integrations connect the MCP Toolbox to your external data sources, unlocking specific sets of tools for your agents.
An **Integration** represents a connection to a database or an HTTP server.
You define the connection, the **Source**, just once in your `tools.yaml` file. Once connected, that integration unlocks a suite of specialized **Tools** (like querying data, listing tables, or analyzing schemas) that your clients can immediately use.
## Exploring Integrations & Tools
Select an integration below to view its configuration requirements. Depending on the integration, the documentation will provide the `tools.yaml` snippets needed to establish a source connection, detail any specific tools available to your agents, or both.
========================================================================
## AlloyDB for PostgreSQL Source
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > AlloyDB for PostgreSQL Source
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/alloydb/
**Description:** AlloyDB for PostgreSQL is a fully-managed, PostgreSQL-compatible database for demanding transactional workloads.
## About
[AlloyDB for PostgreSQL][alloydb-docs] is a fully-managed, PostgreSQL-compatible
database for demanding transactional workloads. It provides enterprise-grade
performance and availability while maintaining 100% compatibility with
open-source PostgreSQL.
If you are new to AlloyDB for PostgreSQL, you can [create a free trial
cluster][alloydb-free-trial].
[alloydb-docs]: https://cloud.google.com/alloydb/docs
[alloydb-free-trial]: https://cloud.google.com/alloydb/docs/create-free-trial-cluster
## Available Tools
{{< list-tools dirs="/integrations/postgres" >}}
### Pre-built Configurations
- [AlloyDB using MCP](../../user-guide/connect-to/ides/alloydb_pg_mcp.md)
Connect your IDE to AlloyDB using Toolbox.
- [AlloyDB Admin API using MCP](../../user-guide/connect-to/ides/alloydb_pg_admin_mcp.md)
Create your AlloyDB database with MCP Toolbox.
## Requirements
### IAM Permissions
By default, the AlloyDB for PostgreSQL source uses the [AlloyDB Go
Connector][alloydb-go-conn] to authorize and establish mTLS connections to your
AlloyDB instance. The Go connector uses your [Application Default Credentials
(ADC)][adc] to authorize your connection to AlloyDB.
In addition to [setting the ADC for your server][set-adc], you need to ensure
the IAM identity has been given the following IAM roles (or corresponding
permissions):
- `roles/alloydb.client`
- `roles/serviceusage.serviceUsageConsumer`
[alloydb-go-conn]: https://github.com/GoogleCloudPlatform/alloydb-go-connector
[adc]: https://cloud.google.com/docs/authentication#adc
[set-adc]: https://cloud.google.com/docs/authentication/provide-credentials-adc
### Networking
AlloyDB supports connections both from external networks via the internet
([public IP][public-ip]) and from internal networks ([private IP][private-ip]).
For more information on choosing between the two options, see the AlloyDB page
[Connection overview][conn-overview].
You can configure the `ipType` parameter in your source configuration to
`public` or `private` to match your cluster's configuration. Regardless of which
you choose, all connections use IAM-based authorization and are encrypted with
mTLS.
[private-ip]: https://cloud.google.com/alloydb/docs/private-ip
[public-ip]: https://cloud.google.com/alloydb/docs/connect-public-ip
[conn-overview]: https://cloud.google.com/alloydb/docs/connection-overview
### Authentication
This source supports both password-based authentication and IAM
authentication (using your [Application Default Credentials][adc]).
#### Standard Authentication
To connect using a username and password, [create
a PostgreSQL user][alloydb-users] and provide the credentials in the `user` and
`password` fields.
```yaml
user: ${USER_NAME}
password: ${PASSWORD}
```
#### IAM Authentication
To connect using IAM authentication:
1. Prepare your database instance and user following this [guide][iam-guide].
2. Choose one of two ways to log in:
- Specify your IAM email as the `user`.
- Leave your `user` field blank. Toolbox will fetch the [ADC][adc]
automatically and log in using the email associated with it.
3. Leave the `password` field blank.
[iam-guide]: https://cloud.google.com/alloydb/docs/database-users/manage-iam-auth
[alloydb-users]: https://cloud.google.com/alloydb/docs/database-users/about
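For example, the second IAM login option from step 2 can be configured by omitting both credential fields entirely (a minimal sketch; project, cluster, and instance names are placeholders):

```yaml
kind: sources
name: my-alloydb-iam-source
type: alloydb-postgres
project: my-project-id
region: us-central1
cluster: my-cluster
instance: my-instance
database: my_db
# user and password omitted: Toolbox fetches the ADC automatically and
# logs in using the email associated with those credentials.
```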
## Example
```yaml
kind: sources
name: my-alloydb-pg-source
type: alloydb-postgres
project: my-project-id
region: us-central1
cluster: my-cluster
instance: my-instance
database: my_db
user: ${USER_NAME}
password: ${PASSWORD}
# ipType: "public"
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
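Tools from the compatible PostgreSQL tool set can then reference this source by name. A minimal sketch, assuming a `postgres-sql` tool and an illustrative `flights` table (the tool name, statement, and parameter are hypothetical):

```yaml
kind: tools
name: search_flights_by_number
type: postgres-sql
source: my-alloydb-pg-source
description: Search for a flight by its flight number.
parameters:
  - name: flight_number
    type: string
    description: Flight number, e.g. "CY922".
statement: SELECT * FROM flights WHERE flight_number = $1;
```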
### Managed Connection Pooling
Toolbox automatically supports [Managed Connection Pooling][alloydb-mcp]. If your AlloyDB instance has Managed Connection Pooling enabled, the connection will immediately benefit from increased throughput and reduced latency.
The interface is identical, so there's no additional configuration required on the client. For more information on configuring your instance, see the [AlloyDB Managed Connection Pooling documentation][alloydb-mcp-docs].
[alloydb-mcp]: https://cloud.google.com/blog/products/databases/alloydb-managed-connection-pooling
[alloydb-mcp-docs]: https://cloud.google.com/alloydb/docs/configure-managed-connection-pooling
## Reference
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|--------------------------------------------------------------------------------------------------------------------------|
| type | string | true | Must be "alloydb-postgres". |
| project | string | true | Id of the GCP project that the cluster was created in (e.g. "my-project-id"). |
| region | string | true | Name of the GCP region that the cluster was created in (e.g. "us-central1"). |
| cluster | string | true | Name of the AlloyDB cluster (e.g. "my-cluster"). |
| instance | string | true | Name of the AlloyDB instance within the cluster (e.g. "my-instance"). |
| database | string | true | Name of the Postgres database to connect to (e.g. "my_db"). |
| user | string | false | Name of the Postgres user to connect as (e.g. "my-pg-user"). Defaults to IAM auth using [ADC][adc] email if unspecified. |
| password | string | false | Password of the Postgres user (e.g. "my-password"). Defaults to attempting IAM authentication if unspecified. |
| ipType | string | false | IP Type of the AlloyDB instance; must be one of `public` or `private`. Default: `public`. |
========================================================================
## alloydb-ai-nl Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > AlloyDB for PostgreSQL Source > alloydb-ai-nl Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/alloydb/alloydb-ai-nl/
**Description:** The "alloydb-ai-nl" tool leverages [AlloyDB AI](https://cloud.google.com/alloydb/ai) next-generation Natural Language support to provide the ability to query the database directly using natural language.
## About
The `alloydb-ai-nl` tool leverages [AlloyDB AI next-generation natural
language][alloydb-ai-nl-overview] support to let an agent query the database
directly using natural language. AlloyDB AI natural language delivers secure
and accurate responses to application end users' natural language questions,
streamlining the development of generative AI applications by moving the
complexity of converting natural language to SQL from the application layer to
the database layer.
## Compatible Sources
{{< compatible-sources >}}
## Requirements
{{< notice tip >}} AlloyDB AI natural language is currently in gated public
preview. For more information on availability and limitations, see the
[AlloyDB AI natural language
overview](https://cloud.google.com/alloydb/docs/ai/natural-language-overview).
{{< /notice >}}
To enable AlloyDB AI natural language for your AlloyDB cluster, please follow
the steps listed in the [Generate SQL queries that answer natural language
questions][alloydb-ai-gen-nl], including enabling the extension and configuring
context for your application.
{{< notice note >}}
As of AlloyDB AI NL v1.0.3+, the signature of `execute_nl_query` has been
updated. Run `SELECT extversion FROM pg_extension WHERE extname =
'alloydb_ai_nl';` to check which version your instance is using.
AlloyDB AI NL v1.0.3+ is required for Toolbox v0.19.0+. Starting with Toolbox
v0.19.0, users who previously used the create_configuration operation for the
natural language configuration must update it. To do so, please drop the
existing configuration and redefine it using the instructions
[here](https://docs.cloud.google.com/alloydb/docs/ai/use-natural-language-generate-sql-queries#create-config).
{{< /notice >}}
[alloydb-ai-nl-overview]:
https://cloud.google.com/alloydb/docs/ai/natural-language-overview
[alloydb-ai-gen-nl]:
https://cloud.google.com/alloydb/docs/ai/generate-sql-queries-natural-language
### Configuration
#### Specifying an `nl_config`
A `nl_config` is a configuration that associates an application with schema
objects, examples, and other context. A large application can also use
different configurations for different parts of the app, as long as the
correct configuration is specified when a question is sent from that part of
the application.
Once you've followed the steps for configuring context, set the `nlConfig`
field when configuring an `alloydb-ai-nl` tool. When the tool is invoked, the
SQL is generated and executed using this context.
#### Specifying Parameters for PSVs
[Parameterized Secure Views (PSVs)][alloydb-psv] are a feature unique to AlloyDB
that lets you require one or more named parameter values to be passed
to the view when querying it, somewhat like bind variables in ordinary
database queries.
Use `nlConfigParameters` to list the parameters required for your
`nl_config`. You **must** supply all parameters required by all PSVs in the
context. It's strongly recommended to use features like [Authenticated Parameters](../../user-guide/configuration/tools/_index.md#authenticated-parameters) or Bound Parameters to provide secure
access to queries generated using natural language, as these parameters are not
visible to the LLM.
[alloydb-psv]:
https://cloud.google.com/alloydb/docs/parameterized-secure-views-overview
{{< notice tip >}} Make sure to enable the `parameterized_views` extension
before using the PSV feature (`nlConfigParameters`) with this tool. You can do
so by running this command in AlloyDB Studio:
```sql
CREATE EXTENSION IF NOT EXISTS parameterized_views;
```
{{< /notice >}}
## Example
```yaml
kind: tools
name: ask_questions
type: alloydb-ai-nl
source: my-alloydb-source
description: "Ask questions to check information about flights"
nlConfig: "cymbal_air_nl_config"
nlConfigParameters:
- name: user_email
type: string
description: User ID of the logged in user.
# note: we strongly recommend using features like Authenticated or
# Bound parameters to prevent the LLM from seeing these params and
# specifying values it shouldn't in the tool input
authServices:
- name: my_google_service
field: email
```
## Reference
| **field** | **type** | **required** | **description** |
|--------------------|:---------------------------------------:|:------------:|--------------------------------------------------------------------------|
| type | string | true | Must be "alloydb-ai-nl". |
| source | string | true | Name of the AlloyDB source the natural language query should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| nlConfig           | string                                  | true         | The name of the `nl_config` in AlloyDB.                                   |
| nlConfigParameters | parameters                              | true         | List of PSV parameters defined in the `nl_config`.                        |
========================================================================
## AlloyDB Admin Source
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > AlloyDB Admin Source
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/alloydb-admin/
**Description:** The "alloydb-admin" source provides a client for the AlloyDB API.
## About
The `alloydb-admin` source provides a client to interact with the [Google
AlloyDB API](https://cloud.google.com/alloydb/docs/reference/rest). This allows
tools to perform administrative tasks on AlloyDB resources, such as managing
clusters, instances, and users.
Authentication can be handled in two ways:
1. **Application Default Credentials (ADC):** By default, the source uses ADC
to authenticate with the API.
2. **Client-side OAuth:** If `useClientOAuth` is set to `true`, the source will
expect an OAuth 2.0 access token to be provided by the client (e.g., a web
browser) for each request.
## Available Tools
{{< list-tools >}}
## Example
```yaml
kind: sources
name: my-alloydb-admin
type: alloydb-admin
---
kind: sources
name: my-oauth-alloydb-admin
type: alloydb-admin
useClientOAuth: true
```
## Reference
| **field** | **type** | **required** | **description** |
| -------------- | :------: | :----------: | ---------------------------------------------------------------------------------------------------------------------------------------------- |
| type | string | true | Must be "alloydb-admin". |
| defaultProject | string | false | The Google Cloud project ID to use for AlloyDB infrastructure tools. |
| useClientOAuth | boolean | false | If true, the source will use client-side OAuth for authorization. Otherwise, it will use Application Default Credentials. Defaults to `false`. |
========================================================================
## alloydb-create-cluster Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > AlloyDB Admin Source > alloydb-create-cluster Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/alloydb-admin/alloydb-create-cluster/
**Description:** The "alloydb-create-cluster" tool creates a new AlloyDB for PostgreSQL cluster in a specified project and location.
## About
The `alloydb-create-cluster` tool creates a new AlloyDB for PostgreSQL cluster
in a specified project and location.
This tool provisions a cluster with a **private IP address** within the specified VPC network.
**Permissions & APIs Required:**
Before using, ensure the following on your GCP project:
1. The [AlloyDB
API](https://console.cloud.google.com/apis/library/alloydb.googleapis.com) is
enabled.
2. The user or service account executing the tool has one of the following IAM
roles:
- `roles/alloydb.admin` (the AlloyDB Admin predefined IAM role)
- `roles/owner` (the Owner basic IAM role)
- `roles/editor` (the Editor basic IAM role)
The tool takes the following input parameters:
| Parameter | Type | Description | Required |
|:-----------|:-------|:--------------------------------------------------------------------------------------------------------------------------|:---------|
| `project` | string | The GCP project ID where the cluster will be created. | Yes |
| `cluster` | string | A unique identifier for the new AlloyDB cluster. | Yes |
| `password` | string | A secure password for the initial user. | Yes |
| `location` | string | The GCP location where the cluster will be created. Default: `us-central1`. If the region's quota is exhausted, use a different region. | No       |
| `network` | string | The name of the VPC network to connect the cluster to. Default: `default`. | No |
| `user` | string | The name for the initial superuser. Default: `postgres`. | No |
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: create_cluster
type: alloydb-create-cluster
source: alloydb-admin-source
description: Use this tool to create a new AlloyDB cluster in a given project and location.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|------------------------------------------------------|
| type | string | true | Must be alloydb-create-cluster. |
| source | string | true | The name of an `alloydb-admin` source. |
| description | string | false | Description of the tool that is passed to the agent. |
========================================================================
## alloydb-create-instance Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > AlloyDB Admin Source > alloydb-create-instance Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/alloydb-admin/alloydb-create-instance/
**Description:** The "alloydb-create-instance" tool creates a new AlloyDB instance within a specified cluster.
## About
The `alloydb-create-instance` tool creates a new AlloyDB instance (PRIMARY or
READ_POOL) within a specified cluster.
This tool provisions a new instance with a **public IP address**.
**Permissions & APIs Required:**
Before using, ensure the following on your GCP project:
1. The [AlloyDB
API](https://console.cloud.google.com/apis/library/alloydb.googleapis.com)
is enabled.
2. The user or service account executing the tool has one of the following IAM
roles:
- `roles/alloydb.admin` (the AlloyDB Admin predefined IAM role)
- `roles/owner` (the Owner basic IAM role)
- `roles/editor` (the Editor basic IAM role)
The tool takes the following input parameters:
| Parameter | Type | Description | Required |
| :------------- | :----- | :------------------------------------------------------------------------------------------------ | :------- |
| `project` | string | The GCP project ID where the cluster exists. | Yes |
| `location` | string | The GCP location where the cluster exists (e.g., `us-central1`). | Yes |
| `cluster` | string | The ID of the existing cluster to add this instance to. | Yes |
| `instance` | string | A unique identifier for the new AlloyDB instance. | Yes |
| `instanceType` | string | The type of instance. Valid values are: `PRIMARY` and `READ_POOL`. Default: `PRIMARY` | No |
| `displayName` | string | An optional, user-friendly name for the instance. | No |
| `nodeCount` | int | The number of nodes for a read pool. Required only if `instanceType` is `READ_POOL`. Default: `1` | No |
{{< notice note >}}
The tool sets the `password.enforce_complexity` database flag to `on`,
requiring new database passwords to meet complexity rules.
{{< /notice >}}
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: create_instance
type: alloydb-create-instance
source: alloydb-admin-source
description: Use this tool to create a new AlloyDB instance within a specified cluster.
```
## Reference
| **field** | **type** | **required** | **description** |
| ----------- | :------: | :----------: | ---------------------------------------------------- |
| type | string | true | Must be alloydb-create-instance. |
| source | string | true | The name of an `alloydb-admin` source. |
| description | string | false | Description of the tool that is passed to the agent. |
========================================================================
## alloydb-get-cluster Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > AlloyDB Admin Source > alloydb-get-cluster Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/alloydb-admin/alloydb-get-cluster/
**Description:** The "alloydb-get-cluster" tool retrieves details for a specific AlloyDB cluster.
## About
The `alloydb-get-cluster` tool retrieves detailed information for a single,
specified AlloyDB cluster.
| Parameter | Type | Description | Required |
| :--------- | :----- | :------------------------------------------------- | :------- |
| `project`  | string | The GCP project ID that contains the cluster.      | Yes      |
| `location` | string | The location of the cluster (e.g., 'us-central1'). | Yes |
| `cluster` | string | The ID of the cluster to retrieve. | Yes |
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: get_specific_cluster
type: alloydb-get-cluster
source: my-alloydb-admin-source
description: Use this tool to retrieve details for a specific AlloyDB cluster.
```
## Reference
| **field** | **type** | **required** | **description** |
| ----------- | :------: | :----------: | ---------------------------------------------------- |
| type | string | true | Must be alloydb-get-cluster. |
| source | string | true | The name of an `alloydb-admin` source. |
| description | string | false | Description of the tool that is passed to the agent. |
========================================================================
## alloydb-get-instance Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > AlloyDB Admin Source > alloydb-get-instance Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/alloydb-admin/alloydb-get-instance/
**Description:** The "alloydb-get-instance" tool retrieves details for a specific AlloyDB instance.
## About
The `alloydb-get-instance` tool retrieves detailed information for a single,
specified AlloyDB instance.
| Parameter | Type | Description | Required |
|:-----------|:-------|:----------------------------------------------------|:---------|
| `project`  | string | The GCP project ID that contains the instance.      | Yes      |
| `location` | string | The location of the instance (e.g., 'us-central1'). | Yes |
| `cluster` | string | The ID of the cluster. | Yes |
| `instance` | string | The ID of the instance to retrieve. | Yes |
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: get_specific_instance
type: alloydb-get-instance
source: my-alloydb-admin-source
description: Use this tool to retrieve details for a specific AlloyDB instance.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|------------------------------------------------------|
| type | string | true | Must be alloydb-get-instance. |
| source | string | true | The name of an `alloydb-admin` source. |
| description | string | false | Description of the tool that is passed to the agent. |
========================================================================
## alloydb-get-user Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > AlloyDB Admin Source > alloydb-get-user Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/alloydb-admin/alloydb-get-user/
**Description:** The "alloydb-get-user" tool retrieves details for a specific AlloyDB user.
## About
The `alloydb-get-user` tool retrieves detailed information for a single,
specified AlloyDB user.
| Parameter | Type | Description | Required |
| :--------- | :----- | :------------------------------------------------- | :------- |
| `project`  | string | The GCP project ID that contains the user.         | Yes      |
| `location` | string | The location of the cluster (e.g., 'us-central1'). | Yes |
| `cluster` | string | The ID of the cluster to retrieve the user from. | Yes |
| `user` | string | The ID of the user to retrieve. | Yes |
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: get_specific_user
type: alloydb-get-user
source: my-alloydb-admin-source
description: Use this tool to retrieve details for a specific AlloyDB user.
```
## Reference
| **field** | **type** | **required** | **description** |
| ----------- | :------: | :----------: | ---------------------------------------------------- |
| type | string | true | Must be alloydb-get-user. |
| source | string | true | The name of an `alloydb-admin` source. |
| description | string | false | Description of the tool that is passed to the agent. |
========================================================================
## alloydb-list-clusters Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > AlloyDB Admin Source > alloydb-list-clusters Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/alloydb-admin/alloydb-list-clusters/
**Description:** The "alloydb-list-clusters" tool lists the AlloyDB clusters in a given project and location.
## About
The `alloydb-list-clusters` tool lists detailed information about AlloyDB
clusters (cluster name, state, configuration, etc.) for a given project and
location, or for all locations in the project. The tool takes the following
input parameters:
| Parameter | Type | Description | Required |
| :--------- | :----- | :----------------------------------------------------------------------------------------------- | :------- |
| `project` | string | The GCP project ID to list clusters for. | Yes |
| `location` | string | The location to list clusters in (e.g., 'us-central1'). Use `-` for all locations. Default: `-`. | No |
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: list_clusters
type: alloydb-list-clusters
source: alloydb-admin-source
description: Use this tool to list all AlloyDB clusters in a given project and location.
```
## Reference
| **field** | **type** | **required** | **description** |
| ----------- | :------: | :----------: | ---------------------------------------------------- |
| type | string | true | Must be alloydb-list-clusters. |
| source | string | true | The name of an `alloydb-admin` source. |
| description | string | false | Description of the tool that is passed to the agent. |
========================================================================
## alloydb-list-instances Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > AlloyDB Admin Source > alloydb-list-instances Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/alloydb-admin/alloydb-list-instances/
**Description:** The "alloydb-list-instances" tool lists the AlloyDB instances for a given project, cluster and location.
## About
The `alloydb-list-instances` tool lists detailed information about AlloyDB
instances (instance name, type, IP address, state, configuration, etc.) for a
given project, cluster, and location, or across all clusters and locations in
the project. The tool takes the following input parameters:
| Parameter | Type | Description | Required |
| :--------- | :----- | :--------------------------------------------------------------------------------------------------------- | :------- |
| `project` | string | The GCP project ID to list instances for. | Yes |
| `cluster` | string | The ID of the cluster to list instances from. Use '-' to get results for all clusters. Default: `-`. | No |
| `location` | string | The location of the cluster (e.g., 'us-central1'). Use '-' to get results for all locations. Default: `-`. | No |
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: list_instances
type: alloydb-list-instances
source: alloydb-admin-source
description: Use this tool to list all AlloyDB instances for a given project, cluster and location.
```
## Reference
| **field** | **type** | **required** | **description** |
| ----------- | :------: | :----------: | ---------------------------------------------------- |
| type | string | true | Must be alloydb-list-instances. |
| source | string | true | The name of an `alloydb-admin` source. |
| description | string | false | Description of the tool that is passed to the agent. |
========================================================================
## alloydb-list-users Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > AlloyDB Admin Source > alloydb-list-users Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/alloydb-admin/alloydb-list-users/
**Description:** The "alloydb-list-users" tool lists all database users within an AlloyDB cluster.
## About
The `alloydb-list-users` tool lists all database users within an AlloyDB
cluster.
The tool takes the following input parameters:
| Parameter | Type | Description | Required |
| :--------- | :----- | :------------------------------------------------- | :------- |
| `project` | string | The GCP project ID to list users for. | Yes |
| `cluster` | string | The ID of the cluster to list users from. | Yes |
| `location` | string | The location of the cluster (e.g., 'us-central1'). | Yes |
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: list_users
type: alloydb-list-users
source: alloydb-admin-source
description: Use this tool to list all database users within an AlloyDB cluster
```
## Reference
| **field** | **type** | **required** | **description** |
| ----------- | :------: | :----------: | ---------------------------------------------------- |
| type | string | true | Must be alloydb-list-users. |
| source | string | true | The name of an `alloydb-admin` source. |
| description | string | false | Description of the tool that is passed to the agent. |
========================================================================
## alloydb-create-user Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > AlloyDB Admin Source > alloydb-create-user Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/alloydb-admin/alloydb-create-user/
**Description:** The "alloydb-create-user" tool creates a new database user within a specified AlloyDB cluster.
## About
The `alloydb-create-user` tool creates a new database user (`ALLOYDB_BUILT_IN`
or `ALLOYDB_IAM_USER`) within a specified cluster.
**Permissions & APIs Required:**
Before using, ensure the following on your GCP project:
1. The [AlloyDB
API](https://console.cloud.google.com/apis/library/alloydb.googleapis.com)
is enabled.
2. The user or service account executing the tool has one of the following IAM
roles:
- `roles/alloydb.admin` (the AlloyDB Admin predefined IAM role)
- `roles/owner` (the Owner basic IAM role)
- `roles/editor` (the Editor basic IAM role)
The tool takes the following input parameters:
| Parameter | Type | Description | Required |
| :-------------- | :------------ | :------------------------------------------------------------------------------------------------------------ | :------- |
| `project` | string | The GCP project ID where the cluster exists. | Yes |
| `cluster` | string | The ID of the existing cluster where the user will be created. | Yes |
| `location` | string | The GCP location where the cluster exists (e.g., `us-central1`). | Yes |
| `user` | string | The name for the new user. Must be unique within the cluster. | Yes |
| `userType` | string | The type of user. Valid values: `ALLOYDB_BUILT_IN` and `ALLOYDB_IAM_USER`. `ALLOYDB_IAM_USER` is recommended. | Yes |
| `password` | string | A secure password for the user. Required only if `userType` is `ALLOYDB_BUILT_IN`. | No |
| `databaseRoles` | array(string) | Optional. A list of database roles to grant to the new user (e.g., `pg_read_all_data`). | No |
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: create_user
type: alloydb-create-user
source: alloydb-admin-source
description: Use this tool to create a new database user for an AlloyDB cluster.
```
## Reference
| **field** | **type** | **required** | **description** |
| ----------- | :------: | :----------: | ---------------------------------------------------- |
| type | string | true | Must be alloydb-create-user. |
| source | string | true | The name of an `alloydb-admin` source. |
| description | string | false | Description of the tool that is passed to the agent. |
========================================================================
## alloydb-wait-for-operation Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > AlloyDB Admin Source > alloydb-wait-for-operation Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/alloydb-admin/alloydb-wait-for-operation/
**Description:** Wait for a long-running AlloyDB operation to complete.
## About
The `alloydb-wait-for-operation` tool is a utility tool that waits for a
long-running AlloyDB operation to complete. It does this by polling the AlloyDB
Admin API operation status endpoint until the operation is finished, using
exponential backoff.
| Parameter | Type | Description | Required |
| :---------- | :----- | :--------------------------------------------------- | :------- |
| `project` | string | The GCP project ID. | Yes |
| `location` | string | The location of the operation (e.g., 'us-central1'). | Yes |
| `operation` | string | The ID of the operation to wait for. | Yes |
{{< notice info >}}
This tool is intended for developer assistant workflows with human-in-the-loop
and shouldn't be used for production agents.
{{< /notice >}}
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: wait_for_operation
type: alloydb-wait-for-operation
source: my-alloydb-admin-source
description: "Polls the operations API until the operation is done. Checking operation status requires the project ID, location, and operation ID. Once the instance is created, give follow-up steps on how to use the variables to bring the data plane MCP server up in local and remote setups."
delay: 1s
maxDelay: 4m
multiplier: 2
maxRetries: 10
```
## Reference
| **field** | **type** | **required** | **description** |
| ----------- | :------: | :----------: | ---------------------------------------------------------------------------------------------------------------- |
| type | string | true | Must be "alloydb-wait-for-operation". |
| source | string | true | The name of an `alloydb-admin` source to use for authentication. |
| description | string | false | A description of the tool. |
| delay | duration | false | The initial delay between polling requests (e.g., `3s`). Defaults to 3 seconds. |
| maxDelay | duration | false | The maximum delay between polling requests (e.g., `4m`). Defaults to 4 minutes. |
| multiplier | float | false | The multiplier for the polling delay. The delay is multiplied by this value after each request. Defaults to 2.0. |
| maxRetries | int | false | The maximum number of polling attempts before giving up. Defaults to 10. |
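The polling cadence implied by `delay`, `multiplier`, `maxDelay`, and `maxRetries` can be sketched in a few lines. This is a minimal illustration of the backoff arithmetic only, not the tool's actual implementation (for instance, whether the delay is capped before or after multiplying is an implementation detail):

```python
def backoff_schedule(delay, multiplier, max_delay, max_retries):
    """Return the wait (in seconds) before each polling attempt."""
    waits = []
    current = delay
    for _ in range(max_retries):
        waits.append(min(current, max_delay))  # cap each wait at max_delay
        current *= multiplier
    return waits

# Schedule for the example config above:
# delay 1s, multiplier 2, maxDelay 4m (240s), maxRetries 10.
print(backoff_schedule(1, 2, 240, 10))
# [1, 2, 4, 8, 16, 32, 64, 128, 240, 240]
```

With the example configuration, the tool stops polling after roughly twelve minutes (the sum of the waits) if the operation has not completed by then.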
========================================================================
## BigQuery Source
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > BigQuery Source
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/bigquery/
**Description:** BigQuery is Google Cloud's fully managed, petabyte-scale, and cost-effective analytics data warehouse that lets you run analytics over vast amounts of data in near real time. With BigQuery, there's no infrastructure to set up or manage, letting you focus on finding meaningful insights using GoogleSQL and taking advantage of flexible pricing models across on-demand and flat-rate options.
## About
[BigQuery][bigquery-docs] is Google Cloud's fully managed, petabyte-scale,
and cost-effective analytics data warehouse that lets you run analytics
over vast amounts of data in near real time. With BigQuery, there's no
infrastructure to set up or manage, letting you focus on finding meaningful
insights using GoogleSQL and taking advantage of flexible pricing models
across on-demand and flat-rate options.
If you are new to BigQuery, you can try to
[load and query data with the bq tool][bigquery-quickstart-cli].
BigQuery uses [GoogleSQL][bigquery-googlesql] for querying data. GoogleSQL
is an ANSI-compliant structured query language (SQL) that is also implemented
for other Google Cloud services. The usual SQL best practices apply when
writing queries to run against your BigQuery data, such as avoiding full
table scans and overly complex filters.
[bigquery-docs]: https://cloud.google.com/bigquery/docs
[bigquery-quickstart-cli]:
https://cloud.google.com/bigquery/docs/quickstarts/quickstart-command-line
[bigquery-googlesql]:
https://cloud.google.com/bigquery/docs/reference/standard-sql/
## Available Tools
{{< list-tools >}}
### Pre-built Configurations
- [BigQuery using
MCP](https://googleapis.github.io/genai-toolbox/how-to/connect-ide/bigquery_mcp/)
Connect your IDE to BigQuery using Toolbox.
## Requirements
### IAM Permissions
BigQuery uses [Identity and Access Management (IAM)][iam-overview] to control
user and group access to BigQuery resources like projects, datasets, and tables.
### Authentication via Application Default Credentials (ADC)
By **default**, Toolbox will use your [Application Default Credentials
(ADC)][adc] to authorize and authenticate when interacting with
[BigQuery][bigquery-docs].
When using this method, you need to ensure the IAM identity associated with your
ADC (such as a service account) has the correct permissions for the queries you
intend to run. Common roles include `roles/bigquery.user` (which includes
permissions to run jobs and read data) or `roles/bigquery.dataViewer`.
Follow this [guide][set-adc] to set up your ADC.
If you are running on Google Compute Engine (GCE) or Google Kubernetes Engine
(GKE), you might need to explicitly set the access scopes for the service
account. While you can configure scopes when creating the VM or node pool, you
can also specify them in the source configuration using the `scopes` field.
Common scopes include `https://www.googleapis.com/auth/bigquery` or
`https://www.googleapis.com/auth/cloud-platform`.
### Authentication via User's OAuth Access Token
If the `useClientOAuth` parameter is set to `true`, Toolbox will instead use the
OAuth access token for authentication. This token is parsed from the
`Authorization` header passed in with the tool invocation request. This method
allows Toolbox to make queries to [BigQuery][bigquery-docs] on behalf of the
client or the end-user.
When using this on-behalf-of authentication, you must ensure that the
identity used has been granted the correct IAM permissions.
[iam-overview]:
[adc]:
[set-adc]:
## Example
Initialize a BigQuery source that uses ADC:
```yaml
kind: sources
name: my-bigquery-source
type: "bigquery"
project: "my-project-id"
# location: "US" # Optional: Specifies the location for query jobs.
# writeMode: "allowed" # One of: allowed, blocked, protected. Defaults to "allowed".
# allowedDatasets: # Optional: Restricts tool access to a specific list of datasets.
# - "my_dataset_1"
# - "other_project.my_dataset_2"
# impersonateServiceAccount: "service-account@project-id.iam.gserviceaccount.com" # Optional: Service account to impersonate
# scopes: # Optional: List of OAuth scopes to request.
# - "https://www.googleapis.com/auth/bigquery"
# - "https://www.googleapis.com/auth/drive.readonly"
# maxQueryResultRows: 50 # Optional: Limits the number of rows returned by queries. Defaults to 50.
```
Initialize a BigQuery source that uses the client's access token:
```yaml
kind: sources
name: my-bigquery-client-auth-source
type: "bigquery"
project: "my-project-id"
useClientOAuth: true
# location: "US" # Optional: Specifies the location for query jobs.
# writeMode: "allowed" # One of: allowed, blocked, protected. Defaults to "allowed".
# allowedDatasets: # Optional: Restricts tool access to a specific list of datasets.
# - "my_dataset_1"
# - "other_project.my_dataset_2"
# impersonateServiceAccount: "service-account@project-id.iam.gserviceaccount.com" # Optional: Service account to impersonate
# scopes: # Optional: List of OAuth scopes to request.
# - "https://www.googleapis.com/auth/bigquery"
# - "https://www.googleapis.com/auth/drive.readonly"
# maxQueryResultRows: 50 # Optional: Limits the number of rows returned by queries. Defaults to 50.
```
## Reference
| **field** | **type** | **required** | **description** |
|---------------------------|:--------:|:------------:|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| type | string | true | Must be "bigquery". |
| project | string | true | ID of the Google Cloud project to use for billing and as the default project for BigQuery resources. |
| location | string | false | Specifies the location (e.g., 'us', 'asia-northeast1') in which to run the query job. This location must match the location of any tables referenced in the query. Defaults to the table's location or 'US' if the location cannot be determined. [Learn More](https://cloud.google.com/bigquery/docs/locations) |
| writeMode | string | false | Controls the write behavior for tools. `allowed` (default): All queries are permitted. `blocked`: Only `SELECT` statements are allowed for the `bigquery-execute-sql` tool. `protected`: Enables session-based execution where all tools associated with this source instance share the same [BigQuery session](https://cloud.google.com/bigquery/docs/sessions-intro). This allows for stateful operations using temporary tables (e.g., `CREATE TEMP TABLE`). For `bigquery-execute-sql`, `SELECT` statements can be used on all tables, but write operations are restricted to the session's temporary dataset. For tools like `bigquery-sql`, `bigquery-forecast`, and `bigquery-analyze-contribution`, the `writeMode` restrictions do not apply, but they will operate within the shared session. **Note:** The `protected` mode cannot be used with `useClientOAuth: true`. It is also not recommended for multi-user server environments, as all users would share the same session. A session is terminated automatically after 24 hours of inactivity or after 7 days, whichever comes first. A new session is created on the next request, and any temporary data from the previous session will be lost. |
| allowedDatasets | []string | false | An optional list of dataset IDs that tools using this source are allowed to access. If provided, any tool operation attempting to access a dataset not in this list will be rejected. To enforce this, two types of operations are also disallowed: 1) Dataset-level operations (e.g., `CREATE SCHEMA`), and 2) operations where table access cannot be statically analyzed (e.g., `EXECUTE IMMEDIATE`, `CREATE PROCEDURE`). If a single dataset is provided, it will be treated as the default for prebuilt tools. |
| useClientOAuth | bool | false | If true, forwards the client's OAuth access token from the "Authorization" header to downstream queries. **Note:** This cannot be used with `writeMode: protected`. |
| scopes | []string | false | A list of OAuth 2.0 scopes to use for the credentials. If not provided, default scopes are used. |
| impersonateServiceAccount | string | false | Service account email to impersonate when making BigQuery and Dataplex API calls. The authenticated principal must have the `roles/iam.serviceAccountTokenCreator` role on the target service account. [Learn More](https://cloud.google.com/iam/docs/service-account-impersonation) |
| maxQueryResultRows | int | false | The maximum number of rows to return from a query. Defaults to 50. |
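As a mental model for the `allowedDatasets` check, the dataset of a fully qualified table ID can be compared against the allow-list roughly as follows. This is an illustrative sketch, not the server's actual logic; it assumes that allow-list entries without a project prefix refer to the source's default project:

```python
def dataset_allowed(table_id, default_project, allowed):
    """Check whether a `project.dataset.table` ID falls inside the allow-list.

    Allow-list entries may be `dataset` (default project) or `project.dataset`.
    """
    parts = table_id.split(".")
    if len(parts) == 3:
        project, dataset = parts[0], parts[1]
    elif len(parts) == 2:
        project, dataset = default_project, parts[0]
    else:
        return False
    # Normalize allow-list entries to `project.dataset` form before comparing.
    normalized = {
        entry if "." in entry else f"{default_project}.{entry}" for entry in allowed
    }
    return f"{project}.{dataset}" in normalized

allowed = ["my_dataset_1", "other_project.my_dataset_2"]
print(dataset_allowed("my-project-id.my_dataset_1.sales", "my-project-id", allowed))    # True
print(dataset_allowed("my-project-id.secret_dataset.sales", "my-project-id", allowed))  # False
```

Queries (as opposed to plain table IDs) cannot be checked this way, which is why the server falls back to a dry run to discover the tables a query touches.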
========================================================================
## bigquery-analyze-contribution Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > BigQuery Source > bigquery-analyze-contribution Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/bigquery/bigquery-analyze-contribution/
**Description:** A "bigquery-analyze-contribution" tool performs contribution analysis in BigQuery.
## About
A `bigquery-analyze-contribution` tool performs contribution analysis in
BigQuery by creating a temporary `CONTRIBUTION_ANALYSIS` model and then querying
it with `ML.GET_INSIGHTS` to find top contributors for a given metric.
`bigquery-analyze-contribution` takes the following parameters:
- **input_data** (string, required): The data that contain the test and control
data to analyze. This can be a fully qualified BigQuery table ID (e.g.,
`my-project.my_dataset.my_table`) or a SQL query that returns the data.
- **contribution_metric** (string, required): The name of the column that
contains the metric to analyze. This can be SUM(metric_column_name),
SUM(numerator_metric_column_name)/SUM(denominator_metric_column_name) or
  SUM(metric_sum_column_name)/COUNT(DISTINCT categorical_column_name), depending
  on the type of metric to analyze.
- **is_test_col** (string, required): The name of the column that identifies
whether a row is in the test or control group. The column must contain boolean
values.
- **dimension_id_cols** (array of strings, optional): An array of column names
that uniquely identify each dimension.
- **top_k_insights_by_apriori_support** (integer, optional): The number of top
  insights to return, ranked by apriori support. Defaults to `30`.
- **pruning_method** (string, optional): The method to use for pruning redundant
insights. Can be `'NO_PRUNING'` or `'PRUNE_REDUNDANT_INSIGHTS'`. Defaults to
`'PRUNE_REDUNDANT_INSIGHTS'`.
The behavior of this tool is influenced by the `writeMode` setting on its
`bigquery` source:
- **`allowed` (default) and `blocked`:** These modes do not impose any special
restrictions on the `bigquery-analyze-contribution` tool.
- **`protected`:** This mode enables session-based execution. The tool will
operate within the same BigQuery session as other tools using the same source.
This allows the `input_data` parameter to be a query that references temporary
resources (e.g., `TEMP` tables) created within that session.
The tool's behavior is also influenced by the `allowedDatasets` restriction on
the `bigquery` source:
- **Without `allowedDatasets` restriction:** The tool can use any table or query
for the `input_data` parameter.
- **With `allowedDatasets` restriction:** The tool verifies that the
`input_data` parameter only accesses tables within the allowed datasets.
- If `input_data` is a table ID, the tool checks if the table's dataset is in
the allowed list.
- If `input_data` is a query, the tool performs a dry run to analyze the query
and rejects it if it accesses any table outside the allowed list.
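The parameters described above map onto BigQuery ML statements roughly like the following sketch. The model name is illustrative and the exact SQL the tool issues may differ; this sketch also assumes `input_data` is a table ID (a query would be inlined instead of wrapped in backticks):

```python
def contribution_sql(input_data, contribution_metric, is_test_col,
                     dimension_id_cols, model="_tmp_ca_model"):
    """Assemble (approximately) the model-create and insight queries."""
    dims = ", ".join(f"'{c}'" for c in dimension_id_cols)
    # Step 1: train a temporary contribution-analysis model over the input data.
    create = (
        f"CREATE TEMP MODEL {model} "
        "OPTIONS (model_type = 'CONTRIBUTION_ANALYSIS', "
        f"contribution_metric = '{contribution_metric}', "
        f"is_test_col = '{is_test_col}', "
        f"dimension_id_cols = [{dims}]) "
        f"AS SELECT * FROM `{input_data}`"
    )
    # Step 2: query the model for the top contributors.
    insights = f"SELECT * FROM ML.GET_INSIGHTS(MODEL {model})"
    return create, insights

create, insights = contribution_sql(
    "bqml_tutorial.iowa_liquor_sales_sum_data",
    "SUM(total_sales)", "is_test", ["store_name", "city"])
print(insights)
```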
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: contribution_analyzer
type: bigquery-analyze-contribution
source: my-bigquery-source
description: Use this tool to run contribution analysis on a dataset in BigQuery.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "bigquery-analyze-contribution". |
| source | string | true | Name of the source the tool should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
## Advanced Usage
### Sample Prompt
You can prepare a sample table following
https://cloud.google.com/bigquery/docs/get-contribution-analysis-insights.
And use the following sample prompts to call this tool:
- What drives the changes in sales in the table
`bqml_tutorial.iowa_liquor_sales_sum_data`? Use the project id myproject.
- Analyze the contribution for the `total_sales` metric in the table
`bqml_tutorial.iowa_liquor_sales_sum_data`. The test group is identified by
the `is_test` column. The dimensions are `store_name`, `city`, `vendor_name`,
`category_name` and `item_description`.
========================================================================
## bigquery-conversational-analytics Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > BigQuery Source > bigquery-conversational-analytics Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/bigquery/bigquery-conversational-analytics/
**Description:** A "bigquery-conversational-analytics" tool allows conversational interaction with a BigQuery source.
## About
A `bigquery-conversational-analytics` tool allows you to ask questions about
your data in natural language.
This function takes a user's question (which can include conversational history
for context) and references to specific BigQuery tables, and sends them to a
stateless conversational API.
The API uses a GenAI agent to understand the question, generate and execute SQL
queries and Python code, and formulate an answer. This function returns a
detailed, sequential log of this entire process, which includes any generated
SQL or Python code, the data retrieved, and the final text answer.
**Note**: This tool requires additional setup in your project. Please refer to
the official [Conversational Analytics API
documentation](https://cloud.google.com/gemini/docs/conversational-analytics-api/overview)
for instructions.
`bigquery-conversational-analytics` accepts the following parameters:
- **`user_query_with_context`:** The user's question, potentially including
conversation history and system instructions for context.
- **`table_references`:** A JSON string of a list of BigQuery tables to use as
context. Each object in the list must contain `projectId`, `datasetId`, and
`tableId`. Example: `'[{"projectId": "my-gcp-project", "datasetId":
"my_dataset", "tableId": "my_table"}]'`
The tool's behavior regarding these parameters is influenced by the
`allowedDatasets` restriction on the `bigquery` source:
- **Without `allowedDatasets` restriction:** The tool can use tables from any
dataset specified in the `table_references` parameter.
- **With `allowedDatasets` restriction:** Before processing the request, the
tool verifies that every table in `table_references` belongs to a dataset in
the allowed list. If any table is from a dataset that is not in the list, the
request is denied.
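Since `table_references` is a JSON *string* rather than a structured object, it is easiest to build it programmatically and serialize it before passing it as the parameter value. A small sketch (project, dataset, and table names are placeholders):

```python
import json

tables = [
    {"projectId": "my-gcp-project", "datasetId": "my_dataset", "tableId": "my_table"},
    {"projectId": "my-gcp-project", "datasetId": "my_dataset", "tableId": "orders"},
]
# The tool expects the string form, so serialize the list before sending it.
table_references = json.dumps(tables)
print(table_references)
```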
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: ask_data_insights
type: bigquery-conversational-analytics
source: my-bigquery-source
description: |
Use this tool to perform data analysis, get insights, or answer complex
questions about the contents of specific BigQuery tables.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "bigquery-conversational-analytics". |
| source | string | true | Name of the source for chat. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## bigquery-execute-sql Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > BigQuery Source > bigquery-execute-sql Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/bigquery/bigquery-execute-sql/
**Description:** A "bigquery-execute-sql" tool executes a SQL statement against BigQuery.
## About
A `bigquery-execute-sql` tool executes a SQL statement against BigQuery.
`bigquery-execute-sql` accepts the following parameters:
- **`sql`** (required): The GoogleSQL statement to execute.
- **`dry_run`** (optional): If set to `true`, the query is validated but not
run, returning information about the execution instead. Defaults to `false`.
The behavior of this tool is influenced by the `writeMode` setting on its
`bigquery` source:
- **`allowed` (default):** All SQL statements are permitted.
- **`blocked`:** Only `SELECT` statements are allowed. Any other type of
statement (e.g., `INSERT`, `UPDATE`, `CREATE`) will be rejected.
- **`protected`:** This mode enables session-based execution. `SELECT`
statements can be used on all tables, while write operations are allowed only
for the session's temporary dataset (e.g., `CREATE TEMP TABLE ...`). This
prevents modifications to permanent datasets while allowing stateful,
multi-step operations within a secure session.
The tool's behavior is influenced by the `allowedDatasets` restriction on the
`bigquery` source. Similar to `writeMode`, this setting provides an additional
layer of security by controlling which datasets can be accessed:
- **Without `allowedDatasets` restriction:** The tool can execute any valid
GoogleSQL query.
- **With `allowedDatasets` restriction:** Before execution, the tool performs a
dry run to analyze the query.
It will reject the query if it attempts to access any table outside the
allowed `datasets` list. To enforce this restriction, the following operations
are also disallowed:
- **Dataset-level operations** (e.g., `CREATE SCHEMA`, `ALTER SCHEMA`).
- **Unanalyzable operations** where the accessed tables cannot be determined
statically (e.g., `EXECUTE IMMEDIATE`, `CREATE PROCEDURE`, `CALL`).
> **Note:** This tool is intended for developer assistant workflows with
> human-in-the-loop and shouldn't be used for production agents.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: execute_sql_tool
type: bigquery-execute-sql
source: my-bigquery-source
description: Use this tool to execute SQL statements.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "bigquery-execute-sql". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## bigquery-forecast Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > BigQuery Source > bigquery-forecast Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/bigquery/bigquery-forecast/
**Description:** A "bigquery-forecast" tool forecasts time series data in BigQuery.
## About
A `bigquery-forecast` tool forecasts time series data in BigQuery.
`bigquery-forecast` constructs and executes a `SELECT * FROM AI.FORECAST(...)`
query based on the provided parameters:
- **history_data** (string, required): This specifies the source of the
historical time series data. It can be either a fully qualified BigQuery table
ID (e.g., my-project.my_dataset.my_table) or a SQL query that returns the
data.
- **timestamp_col** (string, required): The name of the column in your
history_data that contains the timestamps.
- **data_col** (string, required): The name of the column in your history_data
that contains the numeric values to be forecasted.
- **id_cols** (array of strings, optional): If you are forecasting multiple time
series at once (e.g., sales for different products), this parameter takes an
array of column names that uniquely identify each series. It defaults to an
empty array if not provided.
- **horizon** (integer, optional): The number of future time steps you want to
predict. It defaults to 10 if not specified.
The behavior of this tool is influenced by the `writeMode` setting on its
`bigquery` source:
- **`allowed` (default) and `blocked`:** These modes do not impose any special
restrictions on the `bigquery-forecast` tool.
- **`protected`:** This mode enables session-based execution. The tool will
operate within the same BigQuery session as other tools using the same source.
This allows the `history_data` parameter to be a query that references
temporary resources (e.g., `TEMP` tables) created within that session.
The tool's behavior is also influenced by the `allowedDatasets` restriction on
the `bigquery` source:
- **Without `allowedDatasets` restriction:** The tool can use any table or query
for the `history_data` parameter.
- **With `allowedDatasets` restriction:** The tool verifies that the
`history_data` parameter only accesses tables within the allowed datasets.
- If `history_data` is a table ID, the tool checks if the table's dataset is
in the allowed list.
- If `history_data` is a query, the tool performs a dry run to analyze the
query and rejects it if it accesses any table outside the allowed list.
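As a rough illustration of how these parameters feed into the query the tool constructs (argument names follow BigQuery's `AI.FORECAST` table-valued function, but the exact query the tool builds may differ; this sketch also assumes `history_data` is a table ID rather than a query):

```python
def forecast_sql(history_data, timestamp_col, data_col, id_cols=None, horizon=10):
    """Assemble (approximately) the AI.FORECAST query the tool runs."""
    args = [
        f"TABLE `{history_data}`",
        f"data_col => '{data_col}'",
        f"timestamp_col => '{timestamp_col}'",
        f"horizon => {horizon}",
    ]
    if id_cols:  # only included when forecasting multiple series at once
        ids = ", ".join(f"'{c}'" for c in id_cols)
        args.append(f"id_cols => [{ids}]")
    return f"SELECT * FROM AI.FORECAST({', '.join(args)})"

print(forecast_sql("bqml_tutorial.google_analytic", "date", "total_visits"))
```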
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: forecast_tool
type: bigquery-forecast
source: my-bigquery-source
description: Use this tool to forecast time series data in BigQuery.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|---------------------------------------------------------|
| type | string | true | Must be "bigquery-forecast". |
| source | string | true | Name of the source the forecast tool should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
## Advanced Usage
### Sample Prompt
You can use the following sample prompts to call this tool:
- Can you forecast the historical time series data in BigQuery table
  `bqml_tutorial.google_analytic`? Use project_id `myproject`.
- What are the future `total_visits` in BigQuery table
  `bqml_tutorial.google_analytic`?
========================================================================
## bigquery-get-dataset-info Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > BigQuery Source > bigquery-get-dataset-info Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/bigquery/bigquery-get-dataset-info/
**Description:** A "bigquery-get-dataset-info" tool retrieves metadata for a BigQuery dataset.
## About
A `bigquery-get-dataset-info` tool retrieves metadata for a BigQuery dataset.
`bigquery-get-dataset-info` accepts the following parameters:
- **`dataset`** (required): Specifies the dataset for which to retrieve metadata.
- **`project`** (optional): Defines the Google Cloud project ID. If not provided,
the tool defaults to the project from the source configuration.
The tool's behavior regarding these parameters is influenced by the
`allowedDatasets` restriction on the `bigquery` source:
- **Without `allowedDatasets` restriction:** The tool can retrieve metadata for
any dataset specified by the `dataset` and `project` parameters.
- **With `allowedDatasets` restriction:** Before retrieving metadata, the tool
verifies that the requested dataset is in the allowed list. If it is not, the
request is denied. If only one dataset is specified in the `allowedDatasets`
list, it will be used as the default value for the `dataset` parameter.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: bigquery_get_dataset_info
type: bigquery-get-dataset-info
source: my-bigquery-source
description: Use this tool to get dataset metadata.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:------------------------------------------:|:------------:|--------------------------------------------------------------------------------------------------|
| type | string | true | Must be "bigquery-get-dataset-info". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## bigquery-get-table-info Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > BigQuery Source > bigquery-get-table-info Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/bigquery/bigquery-get-table-info/
**Description:** A "bigquery-get-table-info" tool retrieves metadata for a BigQuery table.
## About
A `bigquery-get-table-info` tool retrieves metadata for a BigQuery table.
`bigquery-get-table-info` accepts the following parameters:
- **`table`** (required): The name of the table for which to retrieve metadata.
- **`dataset`** (required): The dataset containing the specified table.
- **`project`** (optional): The Google Cloud project ID. If not provided, the
tool defaults to the project from the source configuration.
The tool's behavior regarding these parameters is influenced by the
`allowedDatasets` restriction on the `bigquery` source:
- **Without `allowedDatasets` restriction:** The tool can retrieve metadata for
any table specified by the `table`, `dataset`, and `project` parameters.
- **With `allowedDatasets` restriction:** Before retrieving metadata, the tool
verifies that the requested dataset is in the allowed list. If it is not, the
request is denied. If only one dataset is specified in the `allowedDatasets`
list, it will be used as the default value for the `dataset` parameter.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: bigquery_get_table_info
type: bigquery-get-table-info
source: my-bigquery-source
description: Use this tool to get table metadata.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:------------------------------------------:|:------------:|--------------------------------------------------------------------------------------------------|
| type | string | true | Must be "bigquery-get-table-info". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## bigquery-list-dataset-ids Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > BigQuery Source > bigquery-list-dataset-ids Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/bigquery/bigquery-list-dataset-ids/
**Description:** A "bigquery-list-dataset-ids" tool returns all dataset IDs from the source.
## About
A `bigquery-list-dataset-ids` tool returns all dataset IDs from the source.
`bigquery-list-dataset-ids` accepts the following parameter:
- **`project`** (optional): Defines the Google Cloud project ID. If not provided,
the tool defaults to the project from the source configuration.
The tool's behavior regarding this parameter is influenced by the
`allowedDatasets` restriction on the `bigquery` source:
- **Without `allowedDatasets` restriction:** The tool can list datasets from any
project specified by the `project` parameter.
- **With `allowedDatasets` restriction:** The tool directly returns the
pre-configured list of dataset IDs from the source, and the `project`
parameter is ignored.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: bigquery_list_dataset_ids
type: bigquery-list-dataset-ids
source: my-bigquery-source
description: Use this tool to list dataset IDs.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:------------------------------------------:|:------------:|--------------------------------------------------------------------------------------------------|
| type | string | true | Must be "bigquery-list-dataset-ids". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## bigquery-list-table-ids Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > BigQuery Source > bigquery-list-table-ids Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/bigquery/bigquery-list-table-ids/
**Description:** A "bigquery-list-table-ids" tool returns table IDs in a given BigQuery dataset.
## About
A `bigquery-list-table-ids` tool returns table IDs in a given BigQuery dataset.
`bigquery-list-table-ids` accepts the following parameters:
- **`dataset`** (required): Specifies the dataset from which to list table IDs.
- **`project`** (optional): Defines the Google Cloud project ID. If not provided,
the tool defaults to the project from the source configuration.
The tool's behavior regarding these parameters is influenced by the
`allowedDatasets` restriction on the `bigquery` source:
- **Without `allowedDatasets` restriction:** The tool can list tables from any
dataset specified by the `dataset` and `project` parameters.
- **With `allowedDatasets` restriction:** Before listing tables, the tool verifies
that the requested dataset is in the allowed list. If it is not, the request is
denied. If only one dataset is specified in the `allowedDatasets` list, it
will be used as the default value for the `dataset` parameter.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: bigquery_list_table_ids
type: bigquery-list-table-ids
source: my-bigquery-source
description: Use this tool to get table metadata.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:------------------------------------------:|:------------:|--------------------------------------------------------------------------------------------------|
| type | string | true | Must be "bigquery-list-table-ids". |
| source | string | true | Name of the source to list table IDs from. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## bigquery-search-catalog Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > BigQuery Source > bigquery-search-catalog Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/bigquery/bigquery-search-catalog/
**Description:** A "bigquery-search-catalog" tool searches for catalog entries matching the provided query.
## About
A `bigquery-search-catalog` tool returns all entries in the Dataplex Catalog
(e.g. tables, views, models) with `system=bigquery` that match the given user
query.
`bigquery-search-catalog` takes a required `query` parameter, which is used to
filter the entries returned to the user. It also optionally accepts the
following parameters:
- `datasetIds` - The IDs of the BigQuery datasets.
- `projectIds` - The IDs of the BigQuery projects.
- `types` - The type of the data. Accepted values are: `CONNECTION`, `POLICY`,
  `DATASET`, `MODEL`, `ROUTINE`, `TABLE`, `VIEW`.
- `pageSize` - Number of results per search page. Defaults to `5`.
## Compatible Sources
{{< compatible-sources >}}
## Requirements
### IAM Permissions
BigQuery uses [Identity and Access Management (IAM)][iam-overview] to control
user and group access to Dataplex resources. Toolbox will use your
[Application Default Credentials (ADC)][adc] to authorize and authenticate when
interacting with [Dataplex][dataplex-docs].
In addition to [setting the ADC for your server][set-adc], you need to ensure
the IAM identity has been given the correct IAM permissions for the tasks you
intend to perform. See [Dataplex Universal Catalog IAM permissions][iam-permissions]
and [Dataplex Universal Catalog IAM roles][iam-roles] for more information on
applying IAM permissions and roles to an identity.
[iam-overview]: https://cloud.google.com/dataplex/docs/iam-and-access-control
[adc]: https://cloud.google.com/docs/authentication#adc
[set-adc]: https://cloud.google.com/docs/authentication/provide-credentials-adc
[iam-permissions]: https://cloud.google.com/dataplex/docs/iam-permissions
[iam-roles]: https://cloud.google.com/dataplex/docs/iam-roles
## Example
```yaml
kind: tools
name: search_catalog
type: bigquery-search-catalog
source: bigquery-source
description: Use this tool to find tables, views, models, routines or connections.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:------------------------------------------:|:------------:|--------------------------------------------------------------------------------------------------|
| type | string | true | Must be "bigquery-search-catalog". |
| source | string | true | Name of the source the tool should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## bigquery-sql Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > BigQuery Source > bigquery-sql Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/bigquery/bigquery-sql/
**Description:** A "bigquery-sql" tool executes a pre-defined SQL statement.
## About
A `bigquery-sql` tool executes a pre-defined SQL statement.
The behavior of this tool is influenced by the `writeMode` setting on its
`bigquery` source:
- **`allowed` (default) and `blocked`:** These modes do not impose any
restrictions on the `bigquery-sql` tool. The pre-defined SQL statement will be
executed as-is.
- **`protected`:** This mode enables session-based execution. The tool will
operate within the same BigQuery session as other tools using the same source,
allowing it to interact with temporary resources like `TEMP` tables created
within that session.
## Compatible Sources
{{< compatible-sources >}}
### GoogleSQL
BigQuery uses [GoogleSQL][bigquery-googlesql] for querying data. The integration
with Toolbox supports this dialect. The specified SQL statement is executed, and
parameters can be inserted into the query. BigQuery supports both named parameters
(e.g., `@name`) and positional parameters (`?`), but they cannot be mixed in the
same query.
[bigquery-googlesql]:
https://cloud.google.com/bigquery/docs/reference/standard-sql/
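Because named (`@name`) and positional (`?`) parameters cannot be mixed, a quick sanity check on a statement might look like the following. This is an illustrative sketch (it ignores `@` or `?` inside string literals), not part of Toolbox:

```python
import re

def parameter_style(statement: str) -> str:
    """Classify a GoogleSQL statement's parameter style; reject mixed usage."""
    named = bool(re.search(r"@\w+", statement))
    positional = "?" in statement
    if named and positional:
        raise ValueError("named and positional parameters cannot be mixed")
    if named:
        return "named"
    if positional:
        return "positional"
    return "none"

print(parameter_style("SELECT * FROM t WHERE id = @id"))  # named
```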
## Example
> **Note:** This tool uses [parameterized
> queries](https://cloud.google.com/bigquery/docs/parameterized-queries) to
> prevent SQL injections. Query parameters can be used as substitutes for
> arbitrary expressions. Parameters cannot be used as substitutes for
> identifiers, column names, table names, or other parts of the query.
```yaml
# Example: Querying a user table in BigQuery
kind: tools
name: search_users_bq
type: bigquery-sql
source: my-bigquery-source
statement: |
  SELECT
    id,
    name,
    email
  FROM
    `my-project.my-dataset.users`
  WHERE
    id = @id OR email = @email;
description: |
  Use this tool to get information for a specific user.
  Takes an id number or an email and returns info on the user.
  Example:
  { "id": 123, "email": "alice@example.com" }
parameters:
  - name: id
    type: integer
    description: User ID
  - name: email
    type: string
    description: Email address of the user
```
### Example with Template Parameters
> **Note:** This tool allows direct modifications to the SQL statement,
> including identifiers, column names, and table names. **This makes it more
> vulnerable to SQL injections**. Using basic parameters only (see above) is
> recommended for performance and safety reasons. For more details, please check
> [templateParameters](../#template-parameters).
```yaml
kind: tools
name: list_table
type: bigquery-sql
source: my-bigquery-source
statement: |
  SELECT * FROM {{.tableName}};
description: |
  Use this tool to list all information from a specific table.
  Example:
  { "tableName": "flights" }
templateParameters:
  - name: tableName
    type: string
    description: Table to select from
```
## Reference
| **field** | **type** | **required** | **description** |
|--------------------|:---------------------------------------------:|:------------:|-----------------------------------------------------------------------------------------------------------------------------------------|
| type | string | true | Must be "bigquery-sql". |
| source | string | true | Name of the source the GoogleSQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| statement | string | true | The GoogleSQL statement to execute. |
| parameters | [parameters](../#specifying-parameters) | false | List of [parameters](../#specifying-parameters) that will be inserted into the SQL statement. |
| templateParameters | [templateParameters](../#template-parameters) | false | List of [templateParameters](../#template-parameters) that will be inserted into the SQL statement before executing prepared statement. |
========================================================================
## Bigtable Source
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Bigtable Source
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/bigtable/
**Description:** Bigtable is a low-latency NoSQL database service for machine learning, operational analytics, and user-facing operations. It's a wide-column, key-value store that can scale to billions of rows and thousands of columns. With Bigtable, you can replicate your data to regions across the world for high availability and data resiliency.
## About
[Bigtable][bigtable-docs] is a low-latency NoSQL database service for machine
learning, operational analytics, and user-facing operations. It's a wide-column,
key-value store that can scale to billions of rows and thousands of columns.
With Bigtable, you can replicate your data to regions across the world for high
availability and data resiliency.
If you are new to Bigtable, you can try to [create an instance and write data
with the cbt CLI][bigtable-quickstart-with-cli].
You can use [GoogleSQL statements][bigtable-googlesql] to query your Bigtable
data. GoogleSQL is an ANSI-compliant structured query language (SQL) that is
also implemented for other Google Cloud services. SQL queries are handled by
cluster nodes in the same way as NoSQL data requests. Therefore, the same best
practices apply when creating SQL queries to run against your Bigtable data,
such as avoiding full table scans or complex filters.
[bigtable-docs]: https://cloud.google.com/bigtable/docs
[bigtable-quickstart-with-cli]:
https://cloud.google.com/bigtable/docs/create-instance-write-data-cbt-cli
[bigtable-googlesql]:
https://cloud.google.com/bigtable/docs/googlesql-overview
## Available Tools
{{< list-tools >}}
## Requirements
### IAM Permissions
Bigtable uses [Identity and Access Management (IAM)][iam-overview] to control
user and group access to Bigtable resources at the project, instance, table, and
backup level. Toolbox will use your [Application Default Credentials (ADC)][adc]
to authorize and authenticate when interacting with [Bigtable][bigtable-docs].
In addition to [setting the ADC for your server][set-adc], you need to ensure
the IAM identity has been given the correct IAM permissions for the query
provided. See [Apply IAM roles][grant-permissions] for more information on
applying IAM permissions and roles to an identity.
[iam-overview]: https://cloud.google.com/bigtable/docs/access-control
[adc]: https://cloud.google.com/docs/authentication#adc
[set-adc]: https://cloud.google.com/docs/authentication/provide-credentials-adc
[grant-permissions]: https://cloud.google.com/bigtable/docs/access-control#iam-management-instance
## Example
```yaml
kind: sources
name: my-bigtable-source
type: "bigtable"
project: "my-project-id"
instance: "test-instance"
```
## Reference
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|-------------------------------------------------------------------------------|
| type | string | true | Must be "bigtable". |
| project | string | true | ID of the GCP project that the cluster was created in (e.g. "my-project-id"). |
| instance | string | true | Name of the Bigtable instance. |
========================================================================
## bigtable-sql Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Bigtable Source > bigtable-sql Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/bigtable/bigtable-sql/
**Description:** A "bigtable-sql" tool executes a pre-defined SQL statement against a Google Cloud Bigtable instance.
## About
A `bigtable-sql` tool executes a pre-defined SQL statement against a Bigtable
instance.
### GoogleSQL
Bigtable supports SQL queries. The integration with Toolbox supports the
`googlesql` dialect: the specified SQL statement is executed as a [data
manipulation language (DML)][bigtable-googlesql] statement, and specified
parameters are inserted according to their name, e.g. `@name`.
{{< notice note >}}
Bigtable's GoogleSQL support for DML statements might be limited to certain
query types. For detailed information on supported DML statements and use
cases, refer to the [Bigtable GoogleSQL use
cases](https://cloud.google.com/bigtable/docs/googlesql-overview#use-cases).
{{< /notice >}}
[bigtable-googlesql]: https://cloud.google.com/bigtable/docs/googlesql-overview
## Compatible Sources
{{< compatible-sources >}}
## Example
> **Note:** This tool uses parameterized queries to prevent SQL injections.
> Query parameters can be used as substitutes for arbitrary expressions.
> Parameters cannot be used as substitutes for identifiers, column names, table
> names, or other parts of the query.
```yaml
kind: tools
name: search_user_by_id_or_name
type: bigtable-sql
source: my-bigtable-instance
statement: |
  SELECT
    TO_INT64(cf[ 'id' ]) as id,
    CAST(cf[ 'name' ] AS string) as name,
  FROM
    mytable
  WHERE
    TO_INT64(cf[ 'id' ]) = @id
    OR CAST(cf[ 'name' ] AS string) = @name;
description: |
  Use this tool to get information for a specific user.
  Takes an id number or a name and returns info on the user.
  Example:
  { "id": 123, "name": "Alice" }
parameters:
  - name: id
    type: integer
    description: User ID
  - name: name
    type: string
    description: Name of the user
```
### Example with Template Parameters
> **Note:** This tool allows direct modifications to the SQL statement,
> including identifiers, column names, and table names. **This makes it more
> vulnerable to SQL injections**. Using basic parameters only (see above) is
> recommended for performance and safety reasons. For more details, please check
> [templateParameters](../#template-parameters).
```yaml
kind: tools
name: list_table
type: bigtable-sql
source: my-bigtable-instance
statement: |
  SELECT * FROM {{.tableName}};
description: |
  Use this tool to list all information from a specific table.
  Example:
  { "tableName": "flights" }
templateParameters:
  - name: tableName
    type: string
    description: Table to select from
```
## Reference
| **field** | **type** | **required** | **description** |
|--------------------|:--------------------------------------------:|:------------:|----------------------------------------------------------------------------------------------------------------------------------------|
| type | string | true | Must be "bigtable-sql". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| statement | string | true | SQL statement to execute. |
| parameters | [parameters](../#specifying-parameters) | false | List of [parameters](../#specifying-parameters) that will be inserted into the SQL statement. |
| templateParameters | [templateParameters](../#template-parameters) | false | List of [templateParameters](../#template-parameters) that will be inserted into the SQL statement before executing prepared statement. |
## Advanced Usage
- [Bigtable Studio][bigtable-studio] is a useful tool to explore and manage your
Bigtable data. If you're unfamiliar with the query syntax, [Query
Builder][bigtable-querybuilder] lets you build a query, run it against a
table, and then view the results in the console.
- Some Python libraries limit the use of underscore columns such as `_key`. A
workaround would be to leverage Bigtable [Logical
Views][bigtable-logical-view] to rename the columns.
[bigtable-studio]:
https://cloud.google.com/bigtable/docs/manage-data-using-console
[bigtable-logical-view]:
https://cloud.google.com/bigtable/docs/create-manage-logical-views
[bigtable-querybuilder]: https://cloud.google.com/bigtable/docs/query-builder
========================================================================
## Cassandra Source
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Cassandra Source
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cassandra/
**Description:** Apache Cassandra is a NoSQL distributed database known for its horizontal scalability, distributed architecture, and flexible schema definition.
## About
[Apache Cassandra][cassandra-docs] is a NoSQL distributed database. By design,
NoSQL databases are lightweight, open-source, non-relational, and largely
distributed. Counted among their strengths are horizontal scalability,
distributed architectures, and a flexible approach to schema definition.
[cassandra-docs]: https://cassandra.apache.org/
## Available Tools
{{< list-tools >}}
## Example
```yaml
kind: sources
name: my-cassandra-source
type: cassandra
hosts:
- 127.0.0.1
keyspace: my_keyspace
protoVersion: 4
username: ${USER_NAME}
password: ${PASSWORD}
caPath: /path/to/ca.crt # Optional: path to CA certificate
certPath: /path/to/client.crt # Optional: path to client certificate
keyPath: /path/to/client.key # Optional: path to client key
enableHostVerification: true # Optional: enable host verification
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
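As a rough model of this substitution (not the Toolbox implementation), `${ENV_NAME}` references resolve from the process environment when the configuration is loaded:

```python
import os
import re

def substitute_env(text: str) -> str:
    """Replace ${NAME} placeholders with environment variable values (sketch).

    Unset variables resolve to an empty string here; the real server may
    handle missing variables differently.
    """
    return re.sub(r"\$\{(\w+)\}", lambda m: os.environ.get(m.group(1), ""), text)

os.environ["PASSWORD"] = "s3cret"
print(substitute_env("password: ${PASSWORD}"))  # password: s3cret
```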
## Reference
| **field** | **type** | **required** | **description** |
|------------------------|:--------:|:------------:|----------------------------------------------------------------------------------------------------------------------------------------------------|
| type | string | true | Must be "cassandra". |
| hosts | string[] | true | List of IP addresses to connect to (e.g., ["192.168.1.1:9042", "192.168.1.2:9042","192.168.1.3:9042"]). The default port is 9042 if not specified. |
| keyspace | string | true | Name of the Cassandra keyspace to connect to (e.g., "my_keyspace"). |
| protoVersion | integer | false | Protocol version for the Cassandra connection (e.g., 4). |
| username | string | false | Name of the Cassandra user to connect as (e.g., "my-cassandra-user"). |
| password | string | false | Password of the Cassandra user (e.g., "my-password"). |
| caPath | string | false | Path to the CA certificate for SSL/TLS (e.g., "/path/to/ca.crt"). |
| certPath | string | false | Path to the client certificate for SSL/TLS (e.g., "/path/to/client.crt"). |
| keyPath | string | false | Path to the client key for SSL/TLS (e.g., "/path/to/client.key"). |
| enableHostVerification | boolean | false | Enable host verification for SSL/TLS (e.g., true). By default, host verification is disabled. |
========================================================================
## cassandra-cql Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Cassandra Source > cassandra-cql Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cassandra/cassandra-cql/
**Description:** A "cassandra-cql" tool executes a pre-defined CQL statement against a Cassandra database.
## About
A `cassandra-cql` tool executes a pre-defined CQL statement against a Cassandra
database.
The specified CQL statement is executed as a [prepared
statement][cassandra-prepare], and expects parameters in the CQL query to be in
the form of placeholders `?`.
[cassandra-prepare]:
https://docs.datastax.com/en/datastax-drivers/developing/prepared-statements.html
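Since CQL placeholders are positional, the values supplied for the named `parameters` must line up with the `?` markers; the sketch below assumes they bind in declaration order (an assumption for illustration, not Toolbox source code):

```python
def bind_positional(statement: str, values_by_name: dict, declared_order: list):
    """Return parameter values in the order the `?` placeholders expect."""
    if statement.count("?") != len(declared_order):
        raise ValueError("placeholder count does not match declared parameters")
    return [values_by_name[name] for name in declared_order]

stmt = "SELECT user_id FROM users WHERE email = ?"
print(bind_positional(stmt, {"email": "user@example.com"}, ["email"]))
```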
## Compatible Sources
{{< compatible-sources >}}
## Example
> **Note:** This tool uses parameterized queries to prevent CQL injections.
> Query parameters can be used as substitutes for arbitrary expressions.
> Parameters cannot be used as substitutes for keyspaces, table names, column
> names, or other parts of the query.
```yaml
kind: tools
name: search_users_by_email
type: cassandra-cql
source: my-cassandra-cluster
statement: |
  SELECT user_id, email, first_name, last_name, created_at
  FROM users
  WHERE email = ?
description: |
  Use this tool to retrieve specific user information by their email address.
  Takes an email address and returns user details including user ID, email,
  first name, last name, and account creation timestamp.
  Do NOT use this tool with a user ID or other identifiers.
  Example:
  { "email": "user@example.com" }
parameters:
  - name: email
    type: string
    description: User's email address
```
### Example with Template Parameters
> **Note:** This tool allows direct modifications to the CQL statement,
> including keyspaces, table names, and column names. **This makes it more
> vulnerable to CQL injections**. Using basic parameters only (see above) is
> recommended for performance and safety reasons. For more details, please check
> [templateParameters](../#template-parameters).
```yaml
kind: tools
name: list_keyspace_table
type: cassandra-cql
source: my-cassandra-cluster
statement: |
  SELECT * FROM {{.keyspace}}.{{.tableName}};
description: |
  Use this tool to list all information from a specific table in a keyspace.
  Example:
  { "keyspace": "my_keyspace", "tableName": "users" }
templateParameters:
  - name: keyspace
    type: string
    description: Keyspace containing the table
  - name: tableName
    type: string
    description: Table to select from
```
## Reference
| **field** | **type** | **required** | **description** |
|--------------------|:---------------------------------------------:|:------------:|-----------------------------------------------------------------------------------------------------------------------------------------|
| type | string | true | Must be "cassandra-cql". |
| source | string | true | Name of the source the CQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| statement | string | true | CQL statement to execute. |
| authRequired | []string | false | List of authentication requirements for the source. |
| parameters | [parameters](../#specifying-parameters) | false | List of [parameters](../#specifying-parameters) that will be inserted into the CQL statement. |
| templateParameters | [templateParameters](../#template-parameters) | false | List of [templateParameters](../#template-parameters) that will be inserted into the CQL statement before executing prepared statement. |
========================================================================
## ClickHouse Source
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > ClickHouse Source
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/clickhouse/
**Description:** ClickHouse is a fast, open-source, column-oriented OLAP database.
## About
[ClickHouse][clickhouse-docs] is a fast, open-source, column-oriented database management system for online analytical processing (OLAP).
[clickhouse-docs]: https://clickhouse.com/docs
## Available Tools
{{< list-tools >}}
## Requirements
### Database User
This source uses standard ClickHouse authentication. You will need to [create a
ClickHouse user][clickhouse-users] (or use [ClickHouse
Cloud][clickhouse-cloud]) to connect to the database. The user should have
appropriate permissions for the operations you plan to perform.
[clickhouse-cloud]:
https://clickhouse.com/docs/getting-started/quick-start/cloud#connect-with-your-app
[clickhouse-users]: https://clickhouse.com/docs/en/sql-reference/statements/create/user
### Network Access
ClickHouse supports multiple protocols:
- **HTTPS protocol** (default, port 8443) - TLS-secured HTTP access
- **HTTP protocol** (port 8123) - Unencrypted access, suitable for local development
## Example
### Secure Connection Example
```yaml
kind: sources
name: secure-clickhouse-source
type: clickhouse
host: clickhouse.example.com
port: "8443"
database: analytics
user: ${CLICKHOUSE_USER}
password: ${CLICKHOUSE_PASSWORD}
protocol: https
secure: true
```
### HTTP Protocol Example
```yaml
kind: sources
name: http-clickhouse-source
type: clickhouse
host: localhost
port: "8123"
database: logs
user: ${CLICKHOUSE_USER}
password: ${CLICKHOUSE_PASSWORD}
protocol: http
secure: false
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
## Reference
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|-------------------------------------------------------------------------------------|
| type | string | true | Must be "clickhouse". |
| host | string | true | IP address or hostname to connect to (e.g. "127.0.0.1" or "clickhouse.example.com") |
| port | string | true | Port to connect to (e.g. "8443" for HTTPS, "8123" for HTTP) |
| database | string | true | Name of the ClickHouse database to connect to (e.g. "my_database"). |
| user | string | true | Name of the ClickHouse user to connect as (e.g. "analytics_user"). |
| password | string | false | Password of the ClickHouse user (e.g. "my-password"). |
| protocol | string | false | Connection protocol: "https" (default) or "http". |
| secure | boolean | false | Whether to use a secure connection (TLS). Default: false. |
========================================================================
## clickhouse-execute-sql Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > ClickHouse Source > clickhouse-execute-sql Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/clickhouse/clickhouse-execute-sql/
**Description:** A "clickhouse-execute-sql" tool executes a SQL statement against a ClickHouse database.
## About
A `clickhouse-execute-sql` tool executes a SQL statement against a ClickHouse
database.
`clickhouse-execute-sql` takes one input parameter `sql` and runs the SQL
statement against the specified `source`. This tool includes query logging
capabilities for monitoring and debugging purposes.
> **Note:** This tool is intended for developer assistant workflows with
> human-in-the-loop and shouldn't be used for production agents.
## Compatible Sources
{{< compatible-sources >}}
## Parameters
| **parameter** | **type** | **required** | **description** |
|---------------|:--------:|:------------:|---------------------------------------------------|
| sql | string | true | The SQL statement to execute against the database |
## Example
```yaml
kind: tools
name: execute_sql_tool
type: clickhouse-execute-sql
source: my-clickhouse-instance
description: Use this tool to execute SQL statements against ClickHouse.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|-------------------------------------------------------|
| type | string | true | Must be "clickhouse-execute-sql". |
| source | string | true | Name of the ClickHouse source to execute SQL against. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## clickhouse-sql Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > ClickHouse Source > clickhouse-sql Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/clickhouse/clickhouse-sql/
**Description:** A "clickhouse-sql" tool executes SQL queries as prepared statements in ClickHouse.
## About
A `clickhouse-sql` tool executes SQL queries as prepared statements against a
ClickHouse database.
This tool supports both template parameters (for SQL statement customization)
and regular parameters (for prepared statement values), providing flexible
query execution capabilities.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: my_analytics_query
type: clickhouse-sql
source: my-clickhouse-instance
description: Get user analytics for a specific date range
statement: |
  SELECT
    user_id,
    count(*) as event_count,
    max(timestamp) as last_event
  FROM events
  WHERE date >= ? AND date <= ?
  GROUP BY user_id
  ORDER BY event_count DESC
  LIMIT ?
parameters:
  - name: start_date
    type: string
    description: Start date for the query (YYYY-MM-DD format)
  - name: end_date
    type: string
    description: End date for the query (YYYY-MM-DD format)
  - name: limit
    type: integer
    description: Maximum number of results to return
```
### Template Parameters Example
```yaml
kind: tools
name: flexible_table_query
type: clickhouse-sql
source: my-clickhouse-instance
description: Query any table with flexible columns
statement: |
  SELECT {{.columns}}
  FROM {{.table_name}}
  WHERE created_date >= ?
  LIMIT ?
templateParameters:
  - name: columns
    type: string
    description: Comma-separated list of columns to select
  - name: table_name
    type: string
    description: Name of the table to query
parameters:
  - name: start_date
    type: string
    description: Start date filter
  - name: limit
    type: integer
    description: Maximum number of results
```
## Reference
| **field** | **type** | **required** | **description** |
|--------------------|:------------------:|:------------:|-------------------------------------------------------|
| type | string | true | Must be "clickhouse-sql". |
| source | string | true | Name of the ClickHouse source to execute SQL against. |
| description | string | true | Description of the tool that is passed to the LLM. |
| statement | string | true | The SQL statement template to execute. |
| parameters | array of Parameter | false | Parameters for prepared statement values. |
| templateParameters | array of Parameter | false | Parameters for SQL statement template customization. |
========================================================================
## clickhouse-list-databases Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > ClickHouse Source > clickhouse-list-databases Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/clickhouse/clickhouse-list-databases/
**Description:** A "clickhouse-list-databases" tool lists all databases in a ClickHouse instance.
## About
A `clickhouse-list-databases` tool lists all available databases in a ClickHouse
instance.
This tool executes the `SHOW DATABASES` command and returns a list of all
databases accessible to the configured user, making it useful for database
discovery and exploration tasks.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: list_clickhouse_databases
type: clickhouse-list-databases
source: my-clickhouse-instance
description: List all available databases in the ClickHouse instance
```
## Output Format
The tool returns an array of objects, where each object contains:
- `name`: The name of the database
Example response:
```json
[
{"name": "default"},
{"name": "system"},
{"name": "analytics"},
{"name": "user_data"}
]
```
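A client receiving this JSON array can extract the database names directly. For example, with a small stdlib-only helper (hypothetical, not part of the SDK):

```python
import json

def database_names(response_json: str) -> list:
    """Pull database names out of the tool's JSON array output."""
    return [entry["name"] for entry in json.loads(response_json)]

sample = '[{"name": "default"}, {"name": "system"}, {"name": "analytics"}]'
print(database_names(sample))  # ['default', 'system', 'analytics']
```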
## Reference
| **field** | **type** | **required** | **description** |
|--------------|:------------------:|:------------:|-------------------------------------------------------|
| type | string | true | Must be "clickhouse-list-databases". |
| source | string | true | Name of the ClickHouse source to list databases from. |
| description | string | true | Description of the tool that is passed to the LLM. |
| authRequired | array of string | false | Authentication services required to use this tool. |
| parameters | array of Parameter | false | Parameters for the tool (typically not used). |
========================================================================
## clickhouse-list-tables Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > ClickHouse Source > clickhouse-list-tables Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/clickhouse/clickhouse-list-tables/
**Description:** A "clickhouse-list-tables" tool lists all tables in a specific ClickHouse database.
## About
A `clickhouse-list-tables` tool lists all available tables in a specified
ClickHouse database.
This tool executes the `SHOW TABLES FROM <database>` command and returns a list
of all tables in the specified database that are accessible to the configured
user, making it useful for schema exploration and table discovery tasks.
## Compatible Sources
{{< compatible-sources >}}
## Parameters
| **parameter** | **type** | **required** | **description** |
|---------------|:--------:|:------------:|-----------------------------------|
| database | string | true | The database to list tables from. |
## Example
```yaml
kind: tools
name: list_clickhouse_tables
type: clickhouse-list-tables
source: my-clickhouse-instance
description: List all tables in a specific ClickHouse database
```
## Output Format
The tool returns an array of objects, where each object contains:
- `name`: The name of the table
- `database`: The database the table belongs to
Example response:
```json
[
{"name": "users", "database": "analytics"},
{"name": "events", "database": "analytics"},
{"name": "products", "database": "analytics"},
{"name": "orders", "database": "analytics"}
]
```
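Since each object in the response carries both `name` and `database`, results from several calls can be merged and grouped by database. A short sketch, using the example response above as input:

```python
import json
from collections import defaultdict

# Example response from the clickhouse-list-tables tool, as documented above.
response = json.loads('''
[
  {"name": "users", "database": "analytics"},
  {"name": "events", "database": "analytics"},
  {"name": "products", "database": "analytics"},
  {"name": "orders", "database": "analytics"}
]
''')

# Group table names by the database they belong to.
tables_by_db = defaultdict(list)
for table in response:
    tables_by_db[table["database"]].append(table["name"])

print(dict(tables_by_db))  # → {'analytics': ['users', 'events', 'products', 'orders']}
```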
## Reference
| **field** | **type** | **required** | **description** |
|--------------|:------------------:|:------------:|---------------------------------------------------------|
| type | string | true | Must be "clickhouse-list-tables". |
| source | string | true | Name of the ClickHouse source to list tables from. |
| description | string | true | Description of the tool that is passed to the LLM. |
| authRequired | array of string | false | Authentication services required to use this tool. |
| parameters | array of Parameter | false | Parameters for the tool (see Parameters section above). |
========================================================================
## Cloud Healthcare API Source
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Cloud Healthcare API Source
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cloudhealthcare/
**Description:** The Cloud Healthcare API provides a managed solution for storing and accessing healthcare data in Google Cloud, providing a critical bridge between existing care systems and applications hosted on Google Cloud.
## About
The [Cloud Healthcare API][healthcare-docs] provides a managed solution
for storing and accessing healthcare data in Google Cloud, providing a
critical bridge between existing care systems and applications hosted on
Google Cloud. It supports healthcare data standards such as HL7® FHIR®,
HL7® v2, and DICOM®. It provides a fully managed, highly scalable,
enterprise-grade development environment for building clinical and analytics
solutions securely on Google Cloud.
A dataset is a container in your Google Cloud project that holds modality-specific
healthcare data. Datasets contain other data stores, such as FHIR stores and DICOM
stores, which in turn hold their own types of healthcare data.
A single dataset can contain one or more data stores, and those stores can all
serve the same modality or different modalities, as application needs dictate.
Using multiple stores in the same dataset can be appropriate in various
situations, such as keeping data from different source systems separate.
If you are new to the Cloud Healthcare API, you can start by
[creating and viewing datasets and stores using curl][healthcare-quickstart-curl].
[healthcare-docs]: https://cloud.google.com/healthcare/docs
[healthcare-quickstart-curl]:
https://cloud.google.com/healthcare-api/docs/store-healthcare-data-rest
## Available Tools
{{< list-tools >}}
## Requirements
### IAM Permissions
The Cloud Healthcare API uses [Identity and Access Management
(IAM)][iam-overview] to control user and group access to Cloud Healthcare
resources like projects, datasets, and stores.
### Authentication via Application Default Credentials (ADC)
By **default**, Toolbox will use your [Application Default Credentials
(ADC)][adc] to authorize and authenticate when interacting with the
[Cloud Healthcare API][healthcare-docs].
When using this method, you need to ensure the IAM identity associated with your
ADC (such as a service account) has the correct permissions for the queries you
intend to run. Common roles include `roles/healthcare.fhirResourceReader` (which
includes permissions to read and search for FHIR resources) or
`roles/healthcare.dicomViewer` (for retrieving DICOM images).
Follow this [guide][set-adc] to set up your ADC.
### Authentication via User's OAuth Access Token
If the `useClientOAuth` parameter is set to `true`, Toolbox will instead use the
OAuth access token for authentication. This token is parsed from the
`Authorization` header passed in with the tool invocation request. This method
allows Toolbox to make queries to the [Cloud Healthcare API][healthcare-docs] on
behalf of the client or the end-user.
When using this on-behalf-of authentication, you must ensure that the
identity used has been granted the correct IAM permissions.
[iam-overview]: https://cloud.google.com/iam/docs/overview
[adc]: https://cloud.google.com/docs/authentication/application-default-credentials
[set-adc]: https://cloud.google.com/docs/authentication/provide-credentials-adc
## Example
Initialize a Cloud Healthcare API source that uses ADC:
```yaml
kind: sources
name: my-healthcare-source
type: "cloud-healthcare"
project: "my-project-id"
region: "us-central1"
dataset: "my-healthcare-dataset-id"
# allowedFhirStores: # Optional: Restricts tool access to a specific list of FHIR store IDs.
# - "my_fhir_store_1"
# allowedDicomStores: # Optional: Restricts tool access to a specific list of DICOM store IDs.
# - "my_dicom_store_1"
# - "my_dicom_store_2"
```
Initialize a Cloud Healthcare API source that uses the client's access token:
```yaml
kind: sources
name: my-healthcare-client-auth-source
type: "cloud-healthcare"
project: "my-project-id"
region: "us-central1"
dataset: "my-healthcare-dataset-id"
useClientOAuth: true
# allowedFhirStores: # Optional: Restricts tool access to a specific list of FHIR store IDs.
# - "my_fhir_store_1"
# allowedDicomStores: # Optional: Restricts tool access to a specific list of DICOM store IDs.
# - "my_dicom_store_1"
# - "my_dicom_store_2"
```
## Reference
| **field** | **type** | **required** | **description** |
|--------------------|:--------:|:------------:|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| type | string | true | Must be "cloud-healthcare". |
| project | string | true | ID of the GCP project that the dataset lives in. |
| region | string | true | Specifies the region (e.g., 'us', 'asia-northeast1') of the healthcare dataset. [Learn More](https://cloud.google.com/healthcare-api/docs/regions) |
| dataset | string | true | ID of the healthcare dataset. |
| allowedFhirStores | []string | false | An optional list of FHIR store IDs that tools using this source are allowed to access. If provided, any tool operation attempting to access a store not in this list will be rejected. If a single store is provided, it will be treated as the default for prebuilt tools. |
| allowedDicomStores | []string | false | An optional list of DICOM store IDs that tools using this source are allowed to access. If provided, any tool operation attempting to access a store not in this list will be rejected. If a single store is provided, it will be treated as the default for prebuilt tools. |
| useClientOAuth | bool | false | If true, forwards the client's OAuth access token from the "Authorization" header to downstream queries. |
========================================================================
## cloud-healthcare-fhir-fetch-page Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Cloud Healthcare API Source > cloud-healthcare-fhir-fetch-page Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cloudhealthcare/cloud-healthcare-fhir-fetch-page/
**Description:** A "cloud-healthcare-fhir-fetch-page" tool fetches a page of FHIR resources from a given URL.
## About
A `cloud-healthcare-fhir-fetch-page` tool fetches a page of FHIR resources from
a given URL.
`cloud-healthcare-fhir-fetch-page` can be used for pagination when a previous
tool call (like `cloud-healthcare-fhir-patient-search` or
`cloud-healthcare-fhir-patient-everything`) returns a 'next' link in the
response bundle.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: fetch_fhir_page
type: cloud-healthcare-fhir-fetch-page
source: my-healthcare-source
description: Use this tool to fetch a page of FHIR resources from a FHIR Bundle's entry.link.url
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "cloud-healthcare-fhir-fetch-page". |
| source | string | true | Name of the healthcare source. |
| description | string | true | Description of the tool that is passed to the LLM. |
### Parameters
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| pageURL | string | true | The full URL of the FHIR page to fetch. This would usually be the value of `Bundle.entry.link.url` field within the response returned from FHIR search or FHIR patient everything operations. |
========================================================================
## cloud-healthcare-fhir-patient-everything Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Cloud Healthcare API Source > cloud-healthcare-fhir-patient-everything Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cloudhealthcare/cloud-healthcare-fhir-patient-everything/
**Description:** A "cloud-healthcare-fhir-patient-everything" tool retrieves all information for a given patient.
## About
A `cloud-healthcare-fhir-patient-everything` tool retrieves resources related to
a given patient from a FHIR store.
`cloud-healthcare-fhir-patient-everything` returns all the information available
for a given patient ID. It can be configured to only return certain resource
types, or only resources that have been updated after a given time.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: fhir_patient_everything
type: cloud-healthcare-fhir-patient-everything
source: my-healthcare-source
description: Use this tool to retrieve all the information about a given patient.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|-----------------------------------------------------|
| type | string | true | Must be "cloud-healthcare-fhir-patient-everything". |
| source | string | true | Name of the healthcare source. |
| description | string | true | Description of the tool that is passed to the LLM. |
### Parameters
| **field** | **type** | **required** | **description** |
|---------------------|:--------:|:------------:|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| patientID | string | true | The ID of the patient FHIR resource for which the information is required. |
| resourceTypesFilter | string | false | String of comma-delimited FHIR resource types. If provided, only resources of the specified resource type(s) are returned. |
| sinceFilter | string | false | If provided, only resources updated after this time are returned. The time uses the format YYYY-MM-DDThh:mm:ss.sss+zz:zz. The time must be specified to the second and include a time zone. For example, 2015-02-07T13:28:17.239+02:00 or 2017-01-01T00:00:00Z. |
| storeID | string | true* | The FHIR store ID to search in. |
*If `allowedFhirStores` in the source contains exactly one store, the `storeID`
parameter is not needed.
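The `sinceFilter` format (second precision or better, with an explicit time zone) matches what Python's `datetime.isoformat` produces for a timezone-aware datetime, so a valid filter value can be built like this (a sketch; the cutoff instant is arbitrary):

```python
from datetime import datetime, timezone

# Build a sinceFilter value in the required YYYY-MM-DDThh:mm:ss.sss+zz:zz form.
cutoff = datetime(2015, 2, 7, 13, 28, 17, 239000, tzinfo=timezone.utc)
since_filter = cutoff.isoformat(timespec="milliseconds")
print(since_filter)  # → 2015-02-07T13:28:17.239+00:00
```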
========================================================================
## cloud-healthcare-fhir-patient-search Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Cloud Healthcare API Source > cloud-healthcare-fhir-patient-search Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cloudhealthcare/cloud-healthcare-fhir-patient-search/
**Description:** A "cloud-healthcare-fhir-patient-search" tool searches for patients in a FHIR store.
## About
A `cloud-healthcare-fhir-patient-search` tool searches for patients in a FHIR
store based on a set of criteria.
`cloud-healthcare-fhir-patient-search` returns a list of patients that match the
given criteria.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: fhir_patient_search
type: cloud-healthcare-fhir-patient-search
source: my-healthcare-source
description: Use this tool to search for patients in the FHIR store.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "cloud-healthcare-fhir-patient-search". |
| source | string | true | Name of the healthcare source. |
| description | string | true | Description of the tool that is passed to the LLM. |
### Parameters
| **field** | **type** | **required** | **description** |
|------------------|:--------:|:------------:|--------------------------------------------------------------------------------|
| active | string | false | Whether the patient record is active. |
| city | string | false | The city of the patient's address. |
| country | string | false | The country of the patient's address. |
| postalcode | string | false | The postal code of the patient's address. |
| state | string | false | The state of the patient's address. |
| addressSubstring | string | false | A substring to search for in any address field. |
| birthDateRange | string | false | A date range for the patient's birth date in the format YYYY-MM-DD/YYYY-MM-DD. |
| deathDateRange | string | false | A date range for the patient's death date in the format YYYY-MM-DD/YYYY-MM-DD. |
| deceased | string | false | Whether the patient is deceased. |
| email | string | false | The patient's email address. |
| gender | string | false | The patient's gender. |
| addressUse | string | false | The use of the patient's address. |
| name | string | false | The patient's name. |
| givenName | string | false | A portion of the given name of the patient. |
| familyName | string | false | A portion of the family name of the patient. |
| phone | string | false | The patient's phone number. |
| language | string | false | The patient's preferred language. |
| identifier | string | false | An identifier for the patient. |
| summary | boolean | false | Requests the server to return a subset of the resource. True by default. |
| storeID | string | true* | The FHIR store ID to search in. |
*If `allowedFhirStores` in the source contains exactly one store, the `storeID`
parameter is not needed.
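`birthDateRange` and `deathDateRange` use the same slash-separated form, so a small helper keeps call sites consistent (a sketch; the helper name and sample dates are illustrative):

```python
from datetime import date

def date_range(start: date, end: date) -> str:
    """Format two dates as the YYYY-MM-DD/YYYY-MM-DD range string."""
    return f"{start.isoformat()}/{end.isoformat()}"

birth_date_range = date_range(date(1980, 1, 1), date(1989, 12, 31))
print(birth_date_range)  # → 1980-01-01/1989-12-31
```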
========================================================================
## cloud-healthcare-get-dataset Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Cloud Healthcare API Source > cloud-healthcare-get-dataset Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cloudhealthcare/cloud-healthcare-get-dataset/
**Description:** A "cloud-healthcare-get-dataset" tool retrieves metadata for the Healthcare dataset in the source.
## About
A `cloud-healthcare-get-dataset` tool retrieves metadata for a Healthcare dataset.
`cloud-healthcare-get-dataset` returns the metadata of the healthcare dataset
configured in the source. It takes no extra parameters.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: get_dataset
type: cloud-healthcare-get-dataset
source: my-healthcare-source
description: Use this tool to get healthcare dataset metadata.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:------------------------------------------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "cloud-healthcare-get-dataset". |
| source | string | true | Name of the healthcare source. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## cloud-healthcare-get-dicom-store Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Cloud Healthcare API Source > cloud-healthcare-get-dicom-store Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cloudhealthcare/cloud-healthcare-get-dicom-store/
**Description:** A "cloud-healthcare-get-dicom-store" tool retrieves information about a DICOM store.
## About
A `cloud-healthcare-get-dicom-store` tool retrieves information about a DICOM store.
`cloud-healthcare-get-dicom-store` returns the details of a DICOM store.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: get_dicom_store
type: cloud-healthcare-get-dicom-store
source: my-healthcare-source
description: Use this tool to get information about a DICOM store.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "cloud-healthcare-get-dicom-store". |
| source | string | true | Name of the healthcare source. |
| description | string | true | Description of the tool that is passed to the LLM. |
### Parameters
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|----------------------------------------|
| storeID | string | true* | The DICOM store ID to get details for. |
*If `allowedDicomStores` in the source contains exactly one store, the `storeID` parameter is not needed.
========================================================================
## cloud-healthcare-get-dicom-store-metrics Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Cloud Healthcare API Source > cloud-healthcare-get-dicom-store-metrics Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cloudhealthcare/cloud-healthcare-get-dicom-store-metrics/
**Description:** A "cloud-healthcare-get-dicom-store-metrics" tool retrieves metrics for a DICOM store.
## About
A `cloud-healthcare-get-dicom-store-metrics` tool retrieves metrics for a DICOM
store.
`cloud-healthcare-get-dicom-store-metrics` returns the metrics of a DICOM store.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: get_dicom_store_metrics
type: cloud-healthcare-get-dicom-store-metrics
source: my-healthcare-source
description: Use this tool to get metrics for a DICOM store.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|-----------------------------------------------------|
| type | string | true | Must be "cloud-healthcare-get-dicom-store-metrics". |
| source | string | true | Name of the healthcare source. |
| description | string | true | Description of the tool that is passed to the LLM. |
### Parameters
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|----------------------------------------|
| storeID | string | true* | The DICOM store ID to get metrics for. |
*If `allowedDicomStores` in the source contains exactly one store, the `storeID`
parameter is not needed.
========================================================================
## cloud-healthcare-get-fhir-resource Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Cloud Healthcare API Source > cloud-healthcare-get-fhir-resource Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cloudhealthcare/cloud-healthcare-get-fhir-resource/
**Description:** A "cloud-healthcare-get-fhir-resource" tool retrieves a specific FHIR resource.
## About
A `cloud-healthcare-get-fhir-resource` tool retrieves a specific FHIR resource
from a FHIR store.
`cloud-healthcare-get-fhir-resource` returns a single FHIR resource, identified
by its type and ID.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: get_fhir_resource
type: cloud-healthcare-get-fhir-resource
source: my-healthcare-source
description: Use this tool to retrieve a specific FHIR resource.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "cloud-healthcare-get-fhir-resource". |
| source | string | true | Name of the healthcare source. |
| description | string | true | Description of the tool that is passed to the LLM. |
### Parameters
| **field** | **type** | **required** | **description** |
|--------------|:--------:|:------------:|------------------------------------------------------------------|
| resourceType | string | true | The FHIR resource type to retrieve (e.g., Patient, Observation). |
| resourceID | string | true | The ID of the FHIR resource to retrieve. |
| storeID | string | true* | The FHIR store ID to retrieve the resource from. |
*If `allowedFhirStores` in the source contains exactly one store, the `storeID`
parameter is not needed.
========================================================================
## cloud-healthcare-get-fhir-store Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Cloud Healthcare API Source > cloud-healthcare-get-fhir-store Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cloudhealthcare/cloud-healthcare-get-fhir-store/
**Description:** A "cloud-healthcare-get-fhir-store" tool retrieves information about a FHIR store.
## About
A `cloud-healthcare-get-fhir-store` tool retrieves information about a FHIR store.
`cloud-healthcare-get-fhir-store` returns the details of a FHIR store.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: get_fhir_store
type: cloud-healthcare-get-fhir-store
source: my-healthcare-source
description: Use this tool to get information about a FHIR store.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "cloud-healthcare-get-fhir-store". |
| source | string | true | Name of the healthcare source. |
| description | string | true | Description of the tool that is passed to the LLM. |
### Parameters
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|---------------------------------------|
| storeID | string | true* | The FHIR store ID to get details for. |
*If `allowedFhirStores` in the source contains exactly one store, the `storeID` parameter is not needed.
========================================================================
## cloud-healthcare-get-fhir-store-metrics Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Cloud Healthcare API Source > cloud-healthcare-get-fhir-store-metrics Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cloudhealthcare/cloud-healthcare-get-fhir-store-metrics/
**Description:** A "cloud-healthcare-get-fhir-store-metrics" tool retrieves metrics for a FHIR store.
## About
A `cloud-healthcare-get-fhir-store-metrics` tool retrieves metrics for a FHIR store.
`cloud-healthcare-get-fhir-store-metrics` returns the metrics of a FHIR store.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: get_fhir_store_metrics
type: cloud-healthcare-get-fhir-store-metrics
source: my-healthcare-source
description: Use this tool to get metrics for a FHIR store.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "cloud-healthcare-get-fhir-store-metrics". |
| source | string | true | Name of the healthcare source. |
| description | string | true | Description of the tool that is passed to the LLM. |
### Parameters
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|---------------------------------------|
| storeID | string | true* | The FHIR store ID to get metrics for. |
*If `allowedFhirStores` in the source contains exactly one store, the `storeID` parameter is not needed.
========================================================================
## cloud-healthcare-list-dicom-stores Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Cloud Healthcare API Source > cloud-healthcare-list-dicom-stores Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cloudhealthcare/cloud-healthcare-list-dicom-stores/
**Description:** A "cloud-healthcare-list-dicom-stores" tool lists the available DICOM stores in the healthcare dataset.
## About
A `cloud-healthcare-list-dicom-stores` tool lists the available DICOM stores in
the healthcare dataset.
`cloud-healthcare-list-dicom-stores` returns the details of the available DICOM
stores in the dataset of the healthcare source. It takes no extra parameters.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: list_dicom_stores
type: cloud-healthcare-list-dicom-stores
source: my-healthcare-source
description: Use this tool to list DICOM stores in the healthcare dataset.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "cloud-healthcare-list-dicom-stores". |
| source | string | true | Name of the healthcare source. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## cloud-healthcare-list-fhir-stores Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Cloud Healthcare API Source > cloud-healthcare-list-fhir-stores Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cloudhealthcare/cloud-healthcare-list-fhir-stores/
**Description:** A "cloud-healthcare-list-fhir-stores" tool lists the available FHIR stores in the healthcare dataset.
## About
A `cloud-healthcare-list-fhir-stores` tool lists the available FHIR stores in
the healthcare dataset.
`cloud-healthcare-list-fhir-stores` returns the details of the available FHIR
stores in the dataset of the healthcare source. It takes no extra parameters.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: list_fhir_stores
type: cloud-healthcare-list-fhir-stores
source: my-healthcare-source
description: Use this tool to list FHIR stores in the healthcare dataset.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "cloud-healthcare-list-fhir-stores". |
| source | string | true | Name of the healthcare source. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## cloud-healthcare-retrieve-rendered-dicom-instance Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Cloud Healthcare API Source > cloud-healthcare-retrieve-rendered-dicom-instance Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cloudhealthcare/cloud-healthcare-retrieve-rendered-dicom-instance/
**Description:** A "cloud-healthcare-retrieve-rendered-dicom-instance" tool retrieves a rendered DICOM instance from a DICOM store.
## About
A `cloud-healthcare-retrieve-rendered-dicom-instance` tool retrieves a rendered
DICOM instance from a DICOM store.
`cloud-healthcare-retrieve-rendered-dicom-instance` returns a base64 encoded
string of the image in JPEG format.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: retrieve_rendered_dicom_instance
type: cloud-healthcare-retrieve-rendered-dicom-instance
source: my-healthcare-source
description: Use this tool to retrieve a rendered DICOM instance from the DICOM store.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|--------------------------------------------------------------|
| type | string | true | Must be "cloud-healthcare-retrieve-rendered-dicom-instance". |
| source | string | true | Name of the healthcare source. |
| description | string | true | Description of the tool that is passed to the LLM. |
### Parameters
| **field** | **type** | **required** | **description** |
|-------------------|:--------:|:------------:|--------------------------------------------------------------------------------------------------|
| StudyInstanceUID | string | true | The UID of the DICOM study. |
| SeriesInstanceUID | string | true | The UID of the DICOM series. |
| SOPInstanceUID | string | true | The UID of the SOP instance. |
| FrameNumber | integer | false | The frame number to retrieve (1-based). Only applicable to multi-frame instances. Defaults to 1. |
| storeID | string | true* | The DICOM store ID to retrieve from. |
*If `allowedDicomStores` in the source contains exactly one store, the `storeID`
parameter is not needed.
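Because the tool returns the image as a base64-encoded string rather than raw bytes, a client must decode it before writing a `.jpg` file. A minimal sketch, where the `rendered` value stands in for the tool's actual output (the byte string is just the JPEG start/end markers, for illustration):

```python
import base64

# Stand-in for the tool's response: a base64-encoded JPEG. Real output
# would come from invoking the tool.
fake_jpeg = b"\xff\xd8\xff\xe0" + b"\x00" * 8 + b"\xff\xd9"
rendered = base64.b64encode(fake_jpeg).decode("ascii")

# Decode and write the image to disk.
image_bytes = base64.b64decode(rendered)
assert image_bytes[:2] == b"\xff\xd8"  # JPEG files start with the SOI marker
with open("instance.jpg", "wb") as f:
    f.write(image_bytes)
```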
========================================================================
## cloud-healthcare-search-dicom-instances Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Cloud Healthcare API Source > cloud-healthcare-search-dicom-instances Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cloudhealthcare/cloud-healthcare-search-dicom-instances/
**Description:** A "cloud-healthcare-search-dicom-instances" tool searches for DICOM instances in a DICOM store.
## About
A `cloud-healthcare-search-dicom-instances` tool searches for DICOM instances in
a DICOM store based on a set of criteria.
`cloud-healthcare-search-dicom-instances` returns a list of DICOM instances
that match the given criteria.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: search_dicom_instances
type: cloud-healthcare-search-dicom-instances
source: my-healthcare-source
description: Use this tool to search for DICOM instances in the DICOM store.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "cloud-healthcare-search-dicom-instances". |
| source | string | true | Name of the healthcare source. |
| description | string | true | Description of the tool that is passed to the LLM. |
### Parameters
| **field** | **type** | **required** | **description** |
|------------------------|:--------:|:------------:|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| StudyInstanceUID | string | false | The UID of the DICOM study. |
| PatientName | string | false | The name of the patient. |
| PatientID | string | false | The ID of the patient. |
| AccessionNumber | string | false | The accession number of the study. |
| ReferringPhysicianName | string | false | The name of the referring physician. |
| StudyDate | string | false | The date of the study in the format `YYYYMMDD`. You can also specify a date range in the format `YYYYMMDD-YYYYMMDD`. |
| SeriesInstanceUID | string | false | The UID of the DICOM series. |
| Modality | string | false | The modality of the series. |
| SOPInstanceUID | string | false | The UID of the SOP instance. |
| fuzzymatching | boolean | false | Whether to enable fuzzy matching for patient names. Fuzzy matching will perform tokenization and normalization of both the value of PatientName in the query and the stored value. It will match if any search token is a prefix of any stored token. For example, if PatientName is "John^Doe", then "jo", "Do" and "John Doe" will all match. However "ohn" will not match. |
| includefield | []string | false | List of attributeIDs to include in the output, such as DICOM tag IDs or keywords. Set to `["all"]` to return all available tags. |
| storeID | string | true* | The DICOM store ID to search in. |
*If the `allowedDICOMStores` in the source has length 1, then the `storeID`
parameter is not needed.
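The prefix-matching behavior described for `fuzzymatching` can be illustrated with a short Python sketch. The helper names here are illustrative only and are not part of the Toolbox API; the actual matching is performed server-side by the Cloud Healthcare API.

```python
import re

def _tokens(name: str) -> list[str]:
    """Normalize to lower case and split on the DICOM caret, whitespace, and commas."""
    return [t for t in re.split(r"[\^\s,]+", name.lower()) if t]

def fuzzy_match(query: str, stored: str) -> bool:
    """True if any search token is a prefix of any stored token."""
    return any(
        s.startswith(q)
        for q in _tokens(query)
        for s in _tokens(stored)
    )

# For a stored PatientName of "John^Doe": "jo", "Do", and "John Doe" match,
# while "ohn" does not, because "ohn" is not a prefix of "john" or "doe".
```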
========================================================================
## cloud-healthcare-search-dicom-series Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Cloud Healthcare API Source > cloud-healthcare-search-dicom-series Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cloudhealthcare/cloud-healthcare-search-dicom-series/
**Description:** A "cloud-healthcare-search-dicom-series" tool searches for DICOM series in a DICOM store.
## About
A `cloud-healthcare-search-dicom-series` tool searches for DICOM series in a DICOM store based on a
set of criteria.
`cloud-healthcare-search-dicom-series` returns a list of DICOM series that match the given criteria.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: search_dicom_series
type: cloud-healthcare-search-dicom-series
source: my-healthcare-source
description: Use this tool to search for DICOM series in the DICOM store.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "cloud-healthcare-search-dicom-series". |
| source | string | true | Name of the healthcare source. |
| description | string | true | Description of the tool that is passed to the LLM. |
### Parameters
| **field** | **type** | **required** | **description** |
|------------------------|:--------:|:------------:|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| StudyInstanceUID | string | false | The UID of the DICOM study. |
| PatientName | string | false | The name of the patient. |
| PatientID | string | false | The ID of the patient. |
| AccessionNumber | string | false | The accession number of the study. |
| ReferringPhysicianName | string | false | The name of the referring physician. |
| StudyDate | string | false | The date of the study in the format `YYYYMMDD`. You can also specify a date range in the format `YYYYMMDD-YYYYMMDD`. |
| SeriesInstanceUID | string | false | The UID of the DICOM series. |
| Modality | string | false | The modality of the series. |
| fuzzymatching | boolean | false | Whether to enable fuzzy matching for patient names. Fuzzy matching will perform tokenization and normalization of both the value of PatientName in the query and the stored value. It will match if any search token is a prefix of any stored token. For example, if PatientName is "John^Doe", then "jo", "Do" and "John Doe" will all match. However "ohn" will not match. |
| includefield | []string | false | List of attributeIDs to include in the output, such as DICOM tag IDs or keywords. Set to `["all"]` to return all available tags. |
| storeID | string | true* | The DICOM store ID to search in. |
*If the `allowedDICOMStores` in the source has length 1, then the `storeID` parameter is not needed.
========================================================================
## cloud-healthcare-search-dicom-studies Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Cloud Healthcare API Source > cloud-healthcare-search-dicom-studies Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cloudhealthcare/cloud-healthcare-search-dicom-studies/
**Description:** A "cloud-healthcare-search-dicom-studies" tool searches for DICOM studies in a DICOM store.
## About
A `cloud-healthcare-search-dicom-studies` tool searches for DICOM studies in a DICOM store based on a
set of criteria.
`cloud-healthcare-search-dicom-studies` returns a list of DICOM studies that match the given criteria.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: search_dicom_studies
type: cloud-healthcare-search-dicom-studies
source: my-healthcare-source
description: Use this tool to search for DICOM studies in the DICOM store.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "cloud-healthcare-search-dicom-studies". |
| source | string | true | Name of the healthcare source. |
| description | string | true | Description of the tool that is passed to the LLM. |
### Parameters
| **field** | **type** | **required** | **description** |
|------------------------|:--------:|:------------:|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| StudyInstanceUID | string | false | The UID of the DICOM study. |
| PatientName | string | false | The name of the patient. |
| PatientID | string | false | The ID of the patient. |
| AccessionNumber | string | false | The accession number of the study. |
| ReferringPhysicianName | string | false | The name of the referring physician. |
| StudyDate | string | false | The date of the study in the format `YYYYMMDD`. You can also specify a date range in the format `YYYYMMDD-YYYYMMDD`. |
| fuzzymatching | boolean | false | Whether to enable fuzzy matching for patient names. Fuzzy matching will perform tokenization and normalization of both the value of PatientName in the query and the stored value. It will match if any search token is a prefix of any stored token. For example, if PatientName is "John^Doe", then "jo", "Do" and "John Doe" will all match. However "ohn" will not match. |
| includefield | []string | false | List of attributeIDs to include in the output, such as DICOM tag IDs or keywords. Set to `["all"]` to return all available tags. |
| storeID | string | true* | The DICOM store ID to search in. |
*If the `allowedDICOMStores` in the source has length 1, then the `storeID` parameter is not needed.
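The `StudyDate` formats accepted above (`YYYYMMDD`, or a `YYYYMMDD-YYYYMMDD` range) can be validated client-side before invoking the tool. This is an illustrative sketch with hypothetical helper names, not part of the Toolbox API:

```python
import re
from datetime import datetime

# A single date, or a date range joined by a hyphen.
_RANGE = re.compile(r"^\d{8}(-\d{8})?$")

def _is_date(part: str) -> bool:
    """True if an 8-digit string is a real calendar date."""
    try:
        datetime.strptime(part, "%Y%m%d")
        return True
    except ValueError:
        return False

def valid_study_date(value: str) -> bool:
    """Accept YYYYMMDD, or a YYYYMMDD-YYYYMMDD range, with real calendar dates."""
    if not _RANGE.match(value):
        return False
    return all(_is_date(part) for part in value.split("-"))
```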
========================================================================
## Cloud Logging Admin Source
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Cloud Logging Admin Source
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cloudloggingadmin/
**Description:** The Cloud Logging Admin source enables tools to interact with the Cloud Logging API, allowing for the retrieval of log names, monitored resource types, and the querying of log data.
## About
The Cloud Logging Admin source provides a client to interact with the [Google
Cloud Logging API](https://cloud.google.com/logging/docs). This allows tools to list log names, monitored resource types, and query log entries.
Authentication can be handled in two ways:
1. **Application Default Credentials (ADC):** By default, the source uses ADC
to authenticate with the API.
2. **Client-side OAuth:** If `useClientOAuth` is set to `true`, the source will
expect an OAuth 2.0 access token to be provided by the client (e.g., a web
browser) for each request.
## Available Tools
{{< list-tools >}}
## Example
Initialize a Cloud Logging Admin source that uses ADC:
```yaml
kind: sources
name: my-cloud-logging
type: cloud-logging-admin
project: my-project-id
```
Initialize a Cloud Logging Admin source that uses client-side OAuth:
```yaml
kind: sources
name: my-oauth-cloud-logging
type: cloud-logging-admin
project: my-project-id
useClientOAuth: true
```
Initialize a Cloud Logging Admin source that uses service account impersonation:
```yaml
kind: sources
name: my-impersonated-cloud-logging
type: cloud-logging-admin
project: my-project-id
impersonateServiceAccount: "my-service-account@my-project.iam.gserviceaccount.com"
```
## Reference
| **field** | **type** | **required** | **description** |
|-----------------------------|:--------:|:------------:|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| type | string | true | Must be "cloud-logging-admin". |
| project | string | true | ID of the GCP project. |
| useClientOAuth | boolean | false | If true, the source will use client-side OAuth for authorization. Otherwise, it will use Application Default Credentials. Defaults to `false`. Cannot be used with `impersonateServiceAccount`. |
| impersonateServiceAccount | string | false | The service account to impersonate for API calls. Cannot be used with `useClientOAuth`. |
========================================================================
## cloud-logging-admin-list-log-names Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Cloud Logging Admin Source > cloud-logging-admin-list-log-names Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cloudloggingadmin/cloud-logging-admin-list-log-names/
**Description:** A "cloud-logging-admin-list-log-names" tool lists the log names in the project.
## About
The `cloud-logging-admin-list-log-names` tool lists the log names available in the Google Cloud project.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: list_log_names
type: cloud-logging-admin-list-log-names
source: my-cloud-logging
description: Lists all log names in the project.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "cloud-logging-admin-list-log-names". |
| source | string | true | Name of the cloud-logging-admin source. |
| description | string | true | Description of the tool that is passed to the LLM. |
### Parameters
| **parameter** | **type** | **required** | **description** |
|:--------------|:--------:|:------------:|:----------------|
| limit | integer | false | Maximum number of log names to return (default: 200). |
========================================================================
## cloud-logging-admin-list-resource-types Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Cloud Logging Admin Source > cloud-logging-admin-list-resource-types Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cloudloggingadmin/cloud-logging-admin-list-resource-types/
**Description:** A "cloud-logging-admin-list-resource-types" tool lists the monitored resource types.
## About
The `cloud-logging-admin-list-resource-types` tool lists the monitored resource types available in Google Cloud Logging.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: list_resource_types
type: cloud-logging-admin-list-resource-types
source: my-cloud-logging
description: Lists monitored resource types.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type        | string   |     true     | Must be "cloud-logging-admin-list-resource-types". |
| source | string | true | Name of the cloud-logging-admin source. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## cloud-logging-admin-query-logs Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Cloud Logging Admin Source > cloud-logging-admin-query-logs Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cloudloggingadmin/cloud-logging-admin-query-logs/
**Description:** A "cloud-logging-admin-query-logs" tool queries log entries.
## About
The `cloud-logging-admin-query-logs` tool allows you to query log entries from Google Cloud Logging using the advanced logs filter syntax.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: query_logs
type: cloud-logging-admin-query-logs
source: my-cloud-logging
description: Queries log entries from Cloud Logging.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "cloud-logging-admin-query-logs". |
| source | string | true | Name of the cloud-logging-admin source. |
| description | string | true | Description of the tool that is passed to the LLM. |
### Parameters
| **parameter** | **type** | **required** | **description** |
|:--------------|:--------:|:------------:|:----------------|
| filter | string | false | Cloud Logging filter query. Common fields: resource.type, resource.labels.*, logName, severity, textPayload, jsonPayload.*, protoPayload.*, labels.*, httpRequest.*. Operators: =, !=, <, <=, >, >=, :, =~, AND, OR, NOT. |
| newestFirst | boolean | false | Set to true for newest logs first. Defaults to oldest first. |
| startTime | string | false | Start time in RFC3339 format (e.g., 2025-12-09T00:00:00Z). Defaults to 30 days ago. |
| endTime | string | false | End time in RFC3339 format (e.g., 2025-12-09T23:59:59Z). Defaults to now. |
| verbose | boolean | false | Include additional fields (insertId, trace, spanId, httpRequest, labels, operation, sourceLocation). Defaults to false. |
| limit | integer | false | Maximum number of log entries to return. Default: `200`. |
========================================================================
## Cloud Monitoring Source
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Cloud Monitoring Source
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cloudmonitoring/
**Description:** A "cloud-monitoring" source provides a client for the Cloud Monitoring API.
## About
The `cloud-monitoring` source provides a client to interact with the [Google
Cloud Monitoring API](https://cloud.google.com/monitoring/api). This allows
tools to access Cloud Monitoring metrics and run PromQL queries.
Authentication can be handled in two ways:
1. **Application Default Credentials (ADC):** By default, the source uses ADC
to authenticate with the API.
2. **Client-side OAuth:** If `useClientOAuth` is set to `true`, the source will
expect an OAuth 2.0 access token to be provided by the client (e.g., a web
browser) for each request.
## Available Tools
{{< list-tools >}}
## Example
```yaml
kind: sources
name: my-cloud-monitoring
type: cloud-monitoring
---
kind: sources
name: my-oauth-cloud-monitoring
type: cloud-monitoring
useClientOAuth: true
```
## Reference
| **field** | **type** | **required** | **description** |
|----------------|:--------:|:------------:|------------------------------------------------------------------------------------------------------------------------------------------------|
| type | string | true | Must be "cloud-monitoring". |
| useClientOAuth | boolean | false | If true, the source will use client-side OAuth for authorization. Otherwise, it will use Application Default Credentials. Defaults to `false`. |
========================================================================
## cloud-monitoring-query-prometheus Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Cloud Monitoring Source > cloud-monitoring-query-prometheus Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cloudmonitoring/cloud-monitoring-query-prometheus/
**Description:** The "cloud-monitoring-query-prometheus" tool fetches time series metrics for a project using a given prometheus query.
The `cloud-monitoring-query-prometheus` tool fetches time series metrics
from Google Cloud Monitoring for a project using a given Prometheus query.
## About
The `cloud-monitoring-query-prometheus` tool allows you to query all metrics
available in Google Cloud Monitoring using the Prometheus Query Language
(PromQL).
### Use Cases
- **Ad-hoc analysis:** Quickly investigate performance issues by executing
  direct PromQL queries against a database instance.
- **Prebuilt configs:** Use the [prebuilt
  tools](../../user-guide/configuration/prebuilt-configs/_index.md) to query
  system- and query-level database metrics.
Here are some common use cases for the `cloud-monitoring-query-prometheus` tool:
- **Monitoring resource utilization:** Track CPU, memory, and disk usage for
  your database instance (covered by the [prebuilt
  tools](../../user-guide/configuration/prebuilt-configs/_index.md)).
- **Monitoring query performance:** Monitor latency, execution time, and wait
  time for a database instance, or for individual queries (also covered by the
  prebuilt tools).
- **System health:** Get the overall system health of a database instance
  (also covered by the prebuilt tools).
## Compatible Sources
{{< compatible-sources >}}
## Requirements
To use this tool, you need to have the following IAM role on your Google Cloud
project:
- `roles/monitoring.viewer`
## Parameters
| Name | Type | Description |
|-------------|--------|----------------------------------|
| `projectId` | string | The Google Cloud project ID. |
| `query` | string | The Prometheus query to execute. |
## Example
Here are some examples of how to use the `cloud-monitoring-query-prometheus`
tool.
```yaml
kind: tools
name: get_wait_time_metrics
type: cloud-monitoring-query-prometheus
source: cloud-monitoring-source
description: |
This tool fetches system wait time information for AlloyDB cluster, instance. Get the `projectID`, `clusterID` and `instanceID` from the user intent. To use this tool, you must provide the Google Cloud `projectId` and a PromQL `query`.
Generate `query` using these metric details:
metric: `alloydb.googleapis.com/instance/postgresql/wait_time`, monitored_resource: `alloydb.googleapis.com/Instance`. labels: `cluster_id`, `instance_id`, `wait_event_type`, `wait_event_name`.
Basic time series example promql query: `avg_over_time({"__name__"="alloydb.googleapis.com/instance/postgresql/wait_time","monitored_resource"="alloydb.googleapis.com/Instance","instance_id"="alloydb-instance"}[5m])`
```
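The PromQL query in the description above uses Cloud Monitoring's quoted-label selector form, since the metric and resource names contain `/` and `.` characters that are not legal in bare Prometheus identifiers. Assembling such a query can be sketched as follows; the helper is illustrative, not part of the Toolbox API:

```python
def promql_avg_over_time(metric: str, resource: str,
                         labels: dict[str, str], window: str = "5m") -> str:
    """Build an avg_over_time PromQL query using quoted label matchers,
    as required for Cloud Monitoring metric names containing '/' and '.'."""
    pairs = {"__name__": metric, "monitored_resource": resource, **labels}
    selector = ",".join(f'"{k}"="{v}"' for k, v in pairs.items())
    return f"avg_over_time({{{selector}}}[{window}])"
```

Applied to the AlloyDB wait-time example above, this reproduces the documented query string.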
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|------------------------------------------------------|
| type        | string   |     true     | Must be "cloud-monitoring-query-prometheus".         |
| source      | string   |     true     | The name of a `cloud-monitoring` source.             |
| description | string | true | Description of the tool that is passed to the agent. |
========================================================================
## Cloud SQL for MySQL Source
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Cloud SQL for MySQL Source
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cloud-sql-mysql/
**Description:** Cloud SQL for MySQL is a fully-managed database service for MySQL.
## About
[Cloud SQL for MySQL][csql-mysql-docs] is a fully-managed database service
that helps you set up, maintain, manage, and administer your MySQL
relational databases on Google Cloud Platform.
If you are new to Cloud SQL for MySQL, you can try [creating and connecting
to a database by following these instructions][csql-mysql-quickstart].
[csql-mysql-docs]: https://cloud.google.com/sql/docs/mysql
[csql-mysql-quickstart]: https://cloud.google.com/sql/docs/mysql/connect-instance-local-computer
## Available Tools
{{< list-tools dirs="/integrations/mysql" >}}
### Pre-built Configurations
- [Cloud SQL for MySQL using
MCP](https://googleapis.github.io/genai-toolbox/how-to/connect-ide/cloud_sql_mysql_mcp/)
Connect your IDE to Cloud SQL for MySQL using Toolbox.
## Requirements
### IAM Permissions
By default, this source uses the [Cloud SQL Go Connector][csql-go-conn] to
authorize and establish mTLS connections to your Cloud SQL instance. The Go
connector uses your [Application Default Credentials (ADC)][adc] to authorize
your connection to Cloud SQL.
In addition to [setting the ADC for your server][set-adc], you need to ensure
the IAM identity has been given the following IAM roles (or corresponding
permissions):
- `roles/cloudsql.client`
{{< notice tip >}}
If you are connecting from Compute Engine, make sure your VM
also has the [proper
scope](https://cloud.google.com/compute/docs/access/service-accounts#accesscopesiam)
to connect using the Cloud SQL Admin API.
{{< /notice >}}
[csql-go-conn]: https://github.com/GoogleCloudPlatform/cloud-sql-go-connector
[adc]: https://cloud.google.com/docs/authentication#adc
[set-adc]: https://cloud.google.com/docs/authentication/provide-credentials-adc
### Networking
Cloud SQL supports connecting both from external networks via the internet
([public IP][public-ip]) and from internal networks ([private IP][private-ip]).
For more information on choosing between the two options, see the Cloud SQL page
[Connection overview][conn-overview].
You can configure the `ipType` parameter in your source configuration to
`public` or `private` to match your instance's configuration. Regardless of
which you choose, all connections use IAM-based authorization and are encrypted
with mTLS.
[private-ip]: https://cloud.google.com/sql/docs/mysql/configure-private-ip
[public-ip]: https://cloud.google.com/sql/docs/mysql/configure-ip
[conn-overview]: https://cloud.google.com/sql/docs/mysql/connect-overview
### Authentication
This source supports both password-based authentication and IAM
authentication (using your [Application Default Credentials][adc]).
#### Standard Authentication
To connect using user/password, [create
a MySQL user][cloud-sql-users] and input your credentials in the `user` and
`password` fields.
```yaml
user: ${USER_NAME}
password: ${PASSWORD}
```
[cloud-sql-users]: https://cloud.google.com/sql/docs/mysql/create-manage-users
#### IAM Authentication
To connect using IAM authentication:
1. Prepare your database instance and user following this [guide][iam-guide].
2. You can choose one of two ways to log in:
- Specify your IAM email as the `user`.
- Leave your `user` field blank. Toolbox will fetch the [ADC][adc]
automatically and log in using the email associated with it.
3. Leave the `password` field blank.
[iam-guide]: https://cloud.google.com/sql/docs/mysql/iam-logins
[cloudsql-users]: https://cloud.google.com/sql/docs/mysql/create-manage-users
## Example
```yaml
kind: sources
name: my-cloud-sql-mysql-source
type: cloud-sql-mysql
project: my-project-id
region: us-central1
instance: my-instance
database: my_db
user: ${USER_NAME}
password: ${PASSWORD}
# ipType: "private"
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
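The `${ENV_NAME}` replacement format from the tip above can be emulated in a few lines of Python. This is an illustration of the format only, not Toolbox's actual configuration loader:

```python
import os
import re

def expand_env(text: str) -> str:
    """Replace ${ENV_NAME} placeholders with values from the environment.
    Unset variables expand to an empty string in this sketch."""
    return re.sub(r"\$\{(\w+)\}",
                  lambda m: os.environ.get(m.group(1), ""),
                  text)
```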
## Reference
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|------------------------------------------------------------------------------------------------------|
| type | string | true | Must be "cloud-sql-mysql". |
| project   | string   |     true     | ID of the GCP project that the instance was created in (e.g. "my-project-id").                        |
| region    | string   |     true     | Name of the GCP region that the instance was created in (e.g. "us-central1").                         |
| instance  | string   |     true     | Name of the Cloud SQL instance (e.g. "my-instance").                                                  |
| database  | string   |     true     | Name of the MySQL database to connect to (e.g. "my_db").                                              |
| user      | string   |    false     | Name of the MySQL user to connect as (e.g. "my-mysql-user"). Defaults to IAM auth using [ADC][adc] email if unspecified. |
| password  | string   |    false     | Password of the MySQL user (e.g. "my-password"). Defaults to attempting IAM authentication if unspecified. |
| ipType    | string   |    false     | IP Type of the Cloud SQL instance; must be one of `public`, `private`, or `psc`. Default: `public`.   |
========================================================================
## Cloud SQL for PostgreSQL Source
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Cloud SQL for PostgreSQL Source
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cloud-sql-pg/
**Description:** Cloud SQL for PostgreSQL is a fully-managed database service for Postgres.
## About
[Cloud SQL for PostgreSQL][csql-pg-docs] is a fully-managed database service
that helps you set up, maintain, manage, and administer your PostgreSQL
relational databases on Google Cloud Platform.
If you are new to Cloud SQL for PostgreSQL, you can try [creating and connecting
to a database by following these instructions][csql-pg-quickstart].
[csql-pg-docs]: https://cloud.google.com/sql/docs/postgres
[csql-pg-quickstart]:
https://cloud.google.com/sql/docs/postgres/connect-instance-local-computer
## Available Tools
{{< list-tools dirs="/integrations/postgres" >}}
### Pre-built Configurations
- [Cloud SQL for Postgres using
MCP](https://googleapis.github.io/genai-toolbox/how-to/connect-ide/cloud_sql_pg_mcp/)
Connect your IDE to Cloud SQL for Postgres using Toolbox.
## Requirements
### IAM Permissions
By default, this source uses the [Cloud SQL Go Connector][csql-go-conn] to
authorize and establish mTLS connections to your Cloud SQL instance. The Go
connector uses your [Application Default Credentials (ADC)][adc] to authorize
your connection to Cloud SQL.
In addition to [setting the ADC for your server][set-adc], you need to ensure
the IAM identity has been given the following IAM roles (or corresponding
permissions):
- `roles/cloudsql.client`
{{< notice tip >}}
If you are connecting from Compute Engine, make sure your VM
also has the [proper
scope](https://cloud.google.com/compute/docs/access/service-accounts#accesscopesiam)
to connect using the Cloud SQL Admin API.
{{< /notice >}}
[csql-go-conn]: https://github.com/GoogleCloudPlatform/cloud-sql-go-connector
[adc]: https://cloud.google.com/docs/authentication#adc
[set-adc]: https://cloud.google.com/docs/authentication/provide-credentials-adc
### Networking
Cloud SQL supports connecting both from external networks via the internet
([public IP][public-ip]) and from internal networks ([private IP][private-ip]).
For more information on choosing between the two options, see the Cloud SQL page
[Connection overview][conn-overview].
You can configure the `ipType` parameter in your source configuration to
`public` or `private` to match your instance's configuration. Regardless of
which you choose, all connections use IAM-based authorization and are encrypted
with mTLS.
[private-ip]: https://cloud.google.com/sql/docs/postgres/configure-private-ip
[public-ip]: https://cloud.google.com/sql/docs/postgres/configure-ip
[conn-overview]: https://cloud.google.com/sql/docs/postgres/connect-overview
### Authentication
This source supports both password-based authentication and IAM
authentication (using your [Application Default Credentials][adc]).
#### Standard Authentication
To connect using user/password, [create
a PostgreSQL user][cloudsql-users] and input your credentials in the `user` and
`password` fields.
```yaml
user: ${USER_NAME}
password: ${PASSWORD}
```
#### IAM Authentication
To connect using IAM authentication:
1. Prepare your database instance and user following this [guide][iam-guide].
2. You can choose one of two ways to log in:
- Specify your IAM email as the `user`.
- Leave your `user` field blank. Toolbox will fetch the [ADC][adc]
automatically and log in using the email associated with it.
3. Leave the `password` field blank.
[iam-guide]: https://cloud.google.com/sql/docs/postgres/iam-logins
[cloudsql-users]: https://cloud.google.com/sql/docs/postgres/create-manage-users
## Example
```yaml
kind: sources
name: my-cloud-sql-pg-source
type: cloud-sql-postgres
project: my-project-id
region: us-central1
instance: my-instance
database: my_db
user: ${USER_NAME}
password: ${PASSWORD}
# ipType: "private"
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
### Managed Connection Pooling
Toolbox automatically supports [Managed Connection Pooling][csql-mcp]. If your Cloud SQL for PostgreSQL instance has Managed Connection Pooling enabled, the connection will immediately benefit from increased throughput and reduced latency.
The interface is identical, so there's no additional configuration required on the client. For more information on configuring your instance, see the [Cloud SQL Managed Connection Pooling documentation][csql-mcp-docs].
[csql-mcp]: https://docs.cloud.google.com/sql/docs/postgres/managed-connection-pooling
[csql-mcp-docs]: https://docs.cloud.google.com/sql/docs/postgres/configure-mcp
## Reference
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|--------------------------------------------------------------------------------------------------------------------------|
| type | string | true | Must be "cloud-sql-postgres". |
| project   | string   |     true     | ID of the GCP project that the instance was created in (e.g. "my-project-id").                                            |
| region    | string   |     true     | Name of the GCP region that the instance was created in (e.g. "us-central1").                                             |
| instance  | string   |     true     | Name of the Cloud SQL instance (e.g. "my-instance").                                                                      |
| database | string | true | Name of the Postgres database to connect to (e.g. "my_db"). |
| user | string | false | Name of the Postgres user to connect as (e.g. "my-pg-user"). Defaults to IAM auth using [ADC][adc] email if unspecified. |
| password | string | false | Password of the Postgres user (e.g. "my-password"). Defaults to attempting IAM authentication if unspecified. |
| ipType | string | false | IP Type of the Cloud SQL instance; must be one of `public`, `private`, or `psc`. Default: `public`. |
========================================================================
## Cloud SQL for SQL Server Source
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Cloud SQL for SQL Server Source
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cloud-sql-mssql/
**Description:** Cloud SQL for SQL Server is a fully-managed database service for SQL Server.
## About
[Cloud SQL for SQL Server][csql-mssql-docs] is a managed database service that
helps you set up, maintain, manage, and administer your SQL Server databases on
Google Cloud.
If you are new to Cloud SQL for SQL Server, you can try [creating and connecting
to a database by following these instructions][csql-mssql-connect].
[csql-mssql-docs]: https://cloud.google.com/sql/docs/sqlserver
[csql-mssql-connect]: https://cloud.google.com/sql/docs/sqlserver/connect-overview
## Available Tools
{{< list-tools dirs="/integrations/mssql" >}}
### Pre-built Configurations
- [Cloud SQL for SQL Server using MCP](../../user-guide/connect-to/ides/cloud_sql_mssql_mcp.md)
Connect your IDE to Cloud SQL for SQL Server using Toolbox.
## Requirements
### IAM Permissions
By default, this source uses the [Cloud SQL Go Connector][csql-go-conn] to
authorize and establish mTLS connections to your Cloud SQL instance. The Go
connector uses your [Application Default Credentials (ADC)][adc] to authorize
your connection to Cloud SQL.
In addition to [setting the ADC for your server][set-adc], you need to ensure
the IAM identity has been given the following IAM roles (or corresponding
permissions):
- `roles/cloudsql.client`
{{< notice tip >}}
If you are connecting from Compute Engine, make sure your VM
also has the [proper
scope](https://cloud.google.com/compute/docs/access/service-accounts#accesscopesiam)
to connect using the Cloud SQL Admin API.
{{< /notice >}}
[csql-go-conn]: https://github.com/GoogleCloudPlatform/cloud-sql-go-connector
[adc]: https://cloud.google.com/docs/authentication#adc
[set-adc]: https://cloud.google.com/docs/authentication/provide-credentials-adc
### Networking
Cloud SQL supports connections from both external networks over the internet
([public IP][public-ip]) and internal networks ([private IP][private-ip]).
For more information on choosing between the two options, see the Cloud SQL page
[Connection overview][conn-overview].
You can configure the `ipType` parameter in your source configuration to
`public` or `private` to match your instance's configuration. Regardless of which
you choose, all connections use IAM-based authorization and are encrypted with
mTLS.
[private-ip]: https://cloud.google.com/sql/docs/sqlserver/configure-private-ip
[public-ip]: https://cloud.google.com/sql/docs/sqlserver/configure-ip
[conn-overview]: https://cloud.google.com/sql/docs/sqlserver/connect-overview
### Database User
Currently, this source only supports standard authentication. You will need to
[create a SQL Server user][cloud-sql-users] to log in to the database.
[cloud-sql-users]: https://cloud.google.com/sql/docs/sqlserver/create-manage-users
## Example
```yaml
kind: sources
name: my-cloud-sql-mssql-instance
type: cloud-sql-mssql
project: my-project
region: my-region
instance: my-instance
database: my_db
user: ${USER_NAME}
password: ${PASSWORD}
# ipType: private
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
## Reference
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|------------------------------------------------------------------------------------------------------|
| type | string | true | Must be "cloud-sql-mssql". |
| project | string | true | ID of the GCP project that the instance was created in (e.g. "my-project-id"). |
| region | string | true | Name of the GCP region that the instance was created in (e.g. "us-central1"). |
| instance | string | true | Name of the Cloud SQL instance (e.g. "my-instance"). |
| database | string | true | Name of the Cloud SQL database to connect to (e.g. "my_db"). |
| user | string | true | Name of the SQL Server user to connect as (e.g. "my-mssql-user"). |
| password | string | true | Password of the SQL Server user (e.g. "my-password"). |
| ipType | string | false | IP Type of the Cloud SQL instance; must be one of `public`, `private`, or `psc`. Default: `public`. |
========================================================================
## Cloud SQL Admin Source
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Cloud SQL Admin Source
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cloud-sql-admin/
**Description:** A "cloud-sql-admin" source provides a client for the Cloud SQL Admin API.
## About
The `cloud-sql-admin` source provides a client to interact with the [Google
Cloud SQL Admin API](https://cloud.google.com/sql/docs/mysql/admin-api). This
allows tools to perform administrative tasks on Cloud SQL instances, such as
creating users and databases.
Authentication can be handled in two ways:
1. **Application Default Credentials (ADC):** By default, the source uses ADC
to authenticate with the API.
2. **Client-side OAuth:** If `useClientOAuth` is set to `true`, the source will
expect an OAuth 2.0 access token to be provided by the client (e.g., a web
browser) for each request.
## Available Tools
{{< list-tools >}}
## Example
```yaml
kind: sources
name: my-cloud-sql-admin
type: cloud-sql-admin
---
kind: sources
name: my-oauth-cloud-sql-admin
type: cloud-sql-admin
useClientOAuth: true
```
## Reference
| **field** | **type** | **required** | **description** |
| -------------- | :------: | :----------: | ---------------------------------------------------------------------------------------------------------------------------------------------- |
| type | string | true | Must be "cloud-sql-admin". |
| defaultProject | string | false | The Google Cloud project ID to use for Cloud SQL infrastructure tools. |
| useClientOAuth | boolean | false | If true, the source will use client-side OAuth for authorization. Otherwise, it will use Application Default Credentials. Defaults to `false`. |
========================================================================
## cloud-sql-list-databases Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Cloud SQL Admin Source > cloud-sql-list-databases Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cloud-sql-admin/cloudsqllistdatabases/
**Description:** List Cloud SQL databases in an instance.
## About
The `cloud-sql-list-databases` tool lists all Cloud SQL databases in a specified
Google Cloud project and instance.
## Compatible Sources
{{< compatible-sources >}}
## Parameters
The `cloud-sql-list-databases` tool has two required parameters:
| **field** | **type** | **required** | **description** |
| --------- | :------: | :----------: | ---------------------------- |
| project | string | true | The Google Cloud project ID. |
| instance | string | true | The Cloud SQL instance ID. |
## Example
Here is an example of how to configure the `cloud-sql-list-databases` tool in your
`tools.yaml` file:
```yaml
kind: sources
name: my-cloud-sql-admin-source
type: cloud-sql-admin
---
kind: tools
name: list_my_databases
type: cloud-sql-list-databases
source: my-cloud-sql-admin-source
description: Use this tool to list all Cloud SQL databases in an instance.
```
## Reference
| **field** | **type** | **required** | **description** |
| ----------- | :------: | :----------: | -------------------------------------------------------------- |
| type | string | true | Must be "cloud-sql-list-databases". |
| source | string | true | The name of the `cloud-sql-admin` source to use for this tool. |
| description | string | false | Description of the tool that is passed to the agent. |
========================================================================
## cloud-sql-list-instances Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Cloud SQL Admin Source > cloud-sql-list-instances Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cloud-sql-admin/cloudsqllistinstances/
**Description:** List Cloud SQL instances in a project.
## About
The `cloud-sql-list-instances` tool lists all Cloud SQL instances in a specified
Google Cloud project.
## Compatible Sources
{{< compatible-sources >}}
## Parameters
The `cloud-sql-list-instances` tool has one required parameter:
| **field** | **type** | **required** | **description** |
| --------- | :------: | :----------: | ---------------------------- |
| project | string | true | The Google Cloud project ID. |
## Example
Here is an example of how to configure the `cloud-sql-list-instances` tool in
your `tools.yaml` file:
```yaml
kind: sources
name: my-cloud-sql-admin-source
type: cloud-sql-admin
---
kind: tools
name: list_my_instances
type: cloud-sql-list-instances
source: my-cloud-sql-admin-source
description: Use this tool to list all Cloud SQL instances in a project.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------------------|
| type | string | true | Must be "cloud-sql-list-instances". |
| description | string | false | Description of the tool that is passed to the agent. |
| source | string | true | The name of the `cloud-sql-admin` source to use for this tool. |
========================================================================
## cloud-sql-mysql-create-instance Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Cloud SQL Admin Source > cloud-sql-mysql-create-instance Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cloud-sql-admin/cloudsqlmysqlcreateinstance/
**Description:** Create a Cloud SQL for MySQL instance.
## About
The `cloud-sql-mysql-create-instance` tool creates a new Cloud SQL for MySQL
instance in a specified Google Cloud project.
## Compatible Sources
{{< compatible-sources >}}
## Parameters
The `cloud-sql-mysql-create-instance` tool has the following parameters:
| **field** | **type** | **required** | **description** |
| --------------- | :------: | :----------: | --------------------------------------------------------------------------------------------------------------- |
| project | string | true | The Google Cloud project ID. |
| name | string | true | The name of the instance to create. |
| databaseVersion | string | false | The database version for MySQL. If not specified, defaults to the latest available version (e.g., `MYSQL_8_4`). |
| rootPassword | string | true | The root password for the instance. |
| editionPreset | string | false | The edition of the instance. Can be `Production` or `Development`. Defaults to `Development`. |
## Example
Here is an example of how to configure the `cloud-sql-mysql-create-instance`
tool in your `tools.yaml` file:
```yaml
kind: sources
name: my-cloud-sql-admin-source
type: cloud-sql-admin
---
kind: tools
name: create_my_mysql_instance
type: cloud-sql-mysql-create-instance
source: my-cloud-sql-admin-source
description: "Creates a MySQL instance using `Production` and `Development` presets. For the `Development` template, it chooses a 2 vCPU, 16 GiB RAM, 100 GiB SSD configuration with Non-HA/zonal availability. For the `Production` template, it chooses an 8 vCPU, 64 GiB RAM, 250 GiB SSD configuration with HA/regional availability. The Enterprise Plus edition is used in both cases. The default database version is `MYSQL_8_4`. The agent should ask the user if they want to use a different version."
```
## Reference
| **field** | **type** | **required** | **description** |
| ----------- | :------: | :----------: | -------------------------------------------------------------- |
| type | string | true | Must be `cloud-sql-mysql-create-instance`. |
| source | string | true | The name of the `cloud-sql-admin` source to use for this tool. |
| description | string | false | A description of the tool that is passed to the agent. |
========================================================================
## cloud-sql-clone-instance Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Cloud SQL Admin Source > cloud-sql-clone-instance Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cloud-sql-admin/cloudsqlcloneinstance/
**Description:** Clone a Cloud SQL instance.
## About
The `cloud-sql-clone-instance` tool clones a Cloud SQL instance using the Cloud SQL Admin API.
## Compatible Sources
{{< compatible-sources >}}
## Example
Basic clone (current state)
```yaml
kind: tools
name: clone-instance-basic
type: cloud-sql-clone-instance
source: cloud-sql-admin-source
description: "Creates an exact copy of a Cloud SQL instance. Supports configuring instance zones and high-availability setup through zone preferences."
```
Point-in-time recovery (PITR) clone
```yaml
kind: tools
name: clone-instance-pitr
type: cloud-sql-clone-instance
source: cloud-sql-admin-source
description: "Creates an exact copy of a Cloud SQL instance at a specific point in time (PITR). Supports configuring instance zones and high-availability setup through zone preferences."
```
## Reference
### Tool Configuration
| **field** | **type** | **required** | **description** |
| -------------- | :------: | :----------: | ------------------------------------------------------------- |
| type | string | true | Must be "cloud-sql-clone-instance". |
| source | string | true | The name of the `cloud-sql-admin` source to use. |
| description | string | false | A description of the tool. |
### Tool Inputs
| **parameter** | **type** | **required** | **description** |
| -------------------------- | :------: | :----------: | ------------------------------------------------------------------------------- |
| project | string | true | The project ID. |
| sourceInstanceName | string | true | The name of the source instance to clone. |
| destinationInstanceName | string | true | The name of the new (cloned) instance. |
| pointInTime | string | false | (Optional) The point in time for a PITR (Point-In-Time Recovery) clone. |
| preferredZone | string | false | (Optional) The preferred zone for the cloned instance. If not specified, defaults to the source instance's zone. |
| preferredSecondaryZone | string | false | (Optional) The preferred secondary zone for the cloned instance (for HA). |
## Advanced Usage
- The tool supports both basic clone and point-in-time recovery (PITR) clone operations.
- For PITR, specify the `pointInTime` parameter in RFC3339 format (e.g., `2024-01-01T00:00:00Z`).
- The source must be a valid Cloud SQL Admin API source.
- You can optionally specify the `preferredZone` parameter to set the zone for the cloned instance. If omitted, the zone of the source instance will be used.
- For high availability (HA) configurations, you can also specify the `preferredSecondaryZone` (REGIONAL instances only). If omitted, defaults based on the source instance will be used.
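For PITR clones, the `pointInTime` value must be an RFC3339 timestamp. A small helper sketch for producing one (the function name is illustrative, not part of any Toolbox API):

```python
from datetime import datetime, timezone

def to_rfc3339(dt: datetime) -> str:
    """Format a datetime as an RFC3339 UTC timestamp, e.g. 2024-01-01T00:00:00Z."""
    return dt.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

print(to_rfc3339(datetime(2024, 1, 1, tzinfo=timezone.utc)))  # 2024-01-01T00:00:00Z
```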
## Additional Resources
- [Cloud SQL Admin API documentation](https://cloud.google.com/sql/docs/mysql/admin-api)
- [Toolbox Cloud SQL tools documentation](_index.md)
- [Cloud SQL Clone API documentation](https://cloud.google.com/sql/docs/mysql/clone-instance)
========================================================================
## cloud-sql-create-backup Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Cloud SQL Admin Source > cloud-sql-create-backup Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cloud-sql-admin/cloudsqlcreatebackup/
**Description:** Creates a backup on a Cloud SQL instance.
## About
The `cloud-sql-create-backup` tool creates an on-demand backup on a Cloud SQL instance using the Cloud SQL Admin API.
## Compatible Sources
{{< compatible-sources >}}
## Parameters
| **parameter** | **type** | **required** | **description** |
| -------------------------- | :------: | :----------: | ------------------------------------------------------------------------------- |
| project | string | true | The project ID. |
| instance | string | true | The name of the instance to take a backup on. Does not include the project ID. |
| location | string | false | (Optional) Location of the backup run. |
| backup_description | string | false | (Optional) The description of this backup run. |
## Example
Basic backup creation (current state)
```yaml
kind: tools
name: backup-creation-basic
type: cloud-sql-create-backup
source: cloud-sql-admin-source
description: "Creates a backup on the given Cloud SQL instance."
```
## Reference
| **field** | **type** | **required** | **description** |
| -------------- | :------: | :----------: | ------------------------------------------------------------- |
| type | string | true | Must be "cloud-sql-create-backup". |
| source | string | true | The name of the `cloud-sql-admin` source to use. |
| description | string | false | A description of the tool. |
## Additional Resources
- [Cloud SQL Admin API documentation](https://cloud.google.com/sql/docs/mysql/admin-api)
- [Toolbox Cloud SQL tools documentation](_index.md)
- [Cloud SQL Backup API documentation](https://cloud.google.com/sql/docs/mysql/backup-recovery/backups)
========================================================================
## cloud-sql-create-database Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Cloud SQL Admin Source > cloud-sql-create-database Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cloud-sql-admin/cloudsqlcreatedatabase/
**Description:** Create a new database in a Cloud SQL instance.
## About
The `cloud-sql-create-database` tool creates a new database in a specified Cloud
SQL instance.
## Compatible Sources
{{< compatible-sources >}}
## Parameters
| **parameter** | **type** | **required** | **description** |
| ------------- | :------: | :----------: | ------------------------------------------------------------------ |
| project | string | true | The project ID. |
| instance | string | true | The ID of the instance where the database will be created. |
| name | string | true | The name for the new database. Must be unique within the instance. |
## Example
```yaml
kind: tools
name: create-cloud-sql-database
type: cloud-sql-create-database
source: my-cloud-sql-admin-source
description: "Creates a new database in a Cloud SQL instance."
```
## Reference
| **field** | **type** | **required** | **description** |
| ----------- | :------: | :----------: | ------------------------------------------------ |
| type | string | true | Must be "cloud-sql-create-database". |
| source | string | true | The name of the `cloud-sql-admin` source to use. |
| description | string | false | A description of the tool. |
========================================================================
## cloud-sql-create-users Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Cloud SQL Admin Source > cloud-sql-create-users Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cloud-sql-admin/cloudsqlcreateusers/
**Description:** Create a new user in a Cloud SQL instance.
## About
The `cloud-sql-create-users` tool creates a new user in a specified Cloud SQL
instance. It can create both built-in and IAM users.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: create-cloud-sql-user
type: cloud-sql-create-users
source: my-cloud-sql-admin-source
description: "Creates a new user in a Cloud SQL instance. Both built-in and IAM users are supported. IAM users require an email account as the user name. IAM is the more secure and recommended way to manage users. The agent should always ask the user what type of user they want to create. For more information, see https://cloud.google.com/sql/docs/postgres/add-manage-iam-users"
```
## Reference
| **field** | **type** | **required** | **description** |
| ------------ | :-------: | :----------: | ------------------------------------------------ |
| type | string | true | Must be "cloud-sql-create-users". |
| description | string | false | A description of the tool. |
| source | string | true | The name of the `cloud-sql-admin` source to use. |
========================================================================
## cloud-sql-get-instance Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Cloud SQL Admin Source > cloud-sql-get-instance Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cloud-sql-admin/cloudsqlgetinstances/
**Description:** Get a Cloud SQL instance resource.
## About
The `cloud-sql-get-instance` tool retrieves a Cloud SQL instance resource using
the Cloud SQL Admin API.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: get-sql-instance
type: cloud-sql-get-instance
source: my-cloud-sql-admin-source
description: "Gets a particular cloud sql instance."
```
## Reference
| **field** | **type** | **required** | **description** |
| ----------- | :------: | :----------: | ------------------------------------------------ |
| type | string | true | Must be "cloud-sql-get-instance". |
| source | string | true | The name of the `cloud-sql-admin` source to use. |
| description | string | false | A description of the tool. |
========================================================================
## cloud-sql-mssql-create-instance Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Cloud SQL Admin Source > cloud-sql-mssql-create-instance Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cloud-sql-admin/cloudsqlmssqlcreateinstance/
**Description:** Create a Cloud SQL for SQL Server instance.
## About
The `cloud-sql-mssql-create-instance` tool creates a Cloud SQL for SQL Server
instance using the Cloud SQL Admin API.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: create-sql-instance
type: cloud-sql-mssql-create-instance
source: cloud-sql-admin-source
description: "Creates a SQL Server instance using `Production` and `Development` presets. For the `Development` template, it chooses a 2 vCPU, 8 GiB RAM (`db-custom-2-8192`) configuration with Non-HA/zonal availability. For the `Production` template, it chooses a 4 vCPU, 26 GiB RAM (`db-custom-4-26624`) configuration with HA/regional availability. The Enterprise edition is used in both cases. The default database version is `SQLSERVER_2022_STANDARD`. The agent should ask the user if they want to use a different version."
```
## Reference
### Tool Configuration
| **field** | **type** | **required** | **description** |
| ----------- | :------: | :----------: | ------------------------------------------------ |
| type | string | true | Must be "cloud-sql-mssql-create-instance". |
| source | string | true | The name of the `cloud-sql-admin` source to use. |
| description | string | false | A description of the tool. |
### Tool Inputs
| **parameter** | **type** | **required** | **description** |
|-----------------|:--------:|:------------:|----------------------------------------------------------------------------------------------------------------------------------------------------------|
| project | string | true | The project ID. |
| name | string | true | The name of the instance. |
| databaseVersion | string | false | The database version for SQL Server. If not specified, defaults to the latest available version (e.g., SQLSERVER_2022_STANDARD). |
| rootPassword | string | true | The root password for the instance. |
| editionPreset | string | false | The edition of the instance. Can be `Production` or `Development`. This determines the default machine type and availability. Defaults to `Development`. |
========================================================================
## cloud-sql-postgres-create-instance Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Cloud SQL Admin Source > cloud-sql-postgres-create-instance Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cloud-sql-admin/cloudsqlpgcreateinstances/
**Description:** Create a Cloud SQL for PostgreSQL instance.
## About
The `cloud-sql-postgres-create-instance` tool creates a Cloud SQL for PostgreSQL
instance using the Cloud SQL Admin API.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: create-sql-instance
type: cloud-sql-postgres-create-instance
source: cloud-sql-admin-source
description: "Creates a Postgres instance using `Production` and `Development` presets. For the `Development` template, it chooses a 2 vCPU, 16 GiB RAM, 100 GiB SSD configuration with Non-HA/zonal availability. For the `Production` template, it chooses an 8 vCPU, 64 GiB RAM, 250 GiB SSD configuration with HA/regional availability. The Enterprise Plus edition is used in both cases. The default database version is `POSTGRES_17`. The agent should ask the user if they want to use a different version."
```
## Reference
### Tool Configuration
| **field** | **type** | **required** | **description** |
| ----------- | :------: | :----------: | ------------------------------------------------ |
| type | string | true | Must be "cloud-sql-postgres-create-instance". |
| source | string | true | The name of the `cloud-sql-admin` source to use. |
| description | string | false | A description of the tool. |
### Tool Inputs
| **parameter** | **type** | **required** | **description** |
|-----------------|:--------:|:------------:|----------------------------------------------------------------------------------------------------------------------------------------------------------|
| project | string | true | The project ID. |
| name | string | true | The name of the instance. |
| databaseVersion | string | false | The database version for Postgres. If not specified, defaults to the latest available version (e.g., POSTGRES_17). |
| rootPassword | string | true | The root password for the instance. |
| editionPreset | string | false | The edition of the instance. Can be `Production` or `Development`. This determines the default machine type and availability. Defaults to `Development`. |
========================================================================
## cloud-sql-restore-backup Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Cloud SQL Admin Source > cloud-sql-restore-backup Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cloud-sql-admin/cloudsqlrestorebackup/
**Description:** Restores a backup of a Cloud SQL instance.
## About
The `cloud-sql-restore-backup` tool restores a backup on a Cloud SQL instance using the Cloud SQL Admin API.
## Compatible Sources
{{< compatible-sources >}}
## Parameters
| **parameter** | **type** | **required** | **description** |
| ------------------| :------: | :----------: | -----------------------------------------------------------------------------|
| target_project | string | true | The project ID of the instance to restore the backup onto. |
| target_instance | string | true | The instance to restore the backup onto. Does not include the project ID. |
| backup_id | string | true | The identifier of the backup being restored. |
| source_project | string | false | (Optional) The project ID of the instance that the backup belongs to. |
| source_instance | string | false | (Optional) Cloud SQL instance ID of the instance that the backup belongs to. |
## Example
Basic backup restore
```yaml
kind: tools
name: backup-restore-basic
type: cloud-sql-restore-backup
source: cloud-sql-admin-source
description: "Restores a backup onto the given Cloud SQL instance."
```
## Reference
| **field** | **type** | **required** | **description** |
| -------------- | :------: | :----------: | ------------------------------------------------ |
| type | string | true | Must be "cloud-sql-restore-backup". |
| source | string | true | The name of the `cloud-sql-admin` source to use. |
| description | string | false | A description of the tool. |
## Advanced Usage
- The `backup_id` field can be a BackupRun ID (which will be an int64), backup name, or BackupDR backup name.
- If the `backup_id` field contains a BackupRun ID (i.e. an int64), the optional fields `source_project` and `source_instance` must also be provided.
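The rule above can be sketched as a simple check, assuming a hypothetical helper (not part of the Toolbox API): a BackupRun ID is an int64, so a purely numeric `backup_id` also requires `source_project` and `source_instance`, while backup names do not.

```python
def requires_source_fields(backup_id: str) -> bool:
    """Return True if backup_id looks like a BackupRun ID (an int64),
    in which case source_project and source_instance must also be set.
    Hypothetical helper illustrating the rule above; not part of the
    Toolbox API."""
    return backup_id.isdigit()

print(requires_source_fields("1234567890"))      # True: a BackupRun ID
print(requires_source_fields("my-backup-name"))  # False: a backup name
```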
## Additional Resources
- [Cloud SQL Admin API documentation](https://cloud.google.com/sql/docs/mysql/admin-api)
- [Toolbox Cloud SQL tools documentation](_index.md)
- [Cloud SQL Restore API documentation](https://cloud.google.com/sql/docs/mysql/backup-recovery/restoring)
========================================================================
## cloud-sql-wait-for-operation Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Cloud SQL Admin Source > cloud-sql-wait-for-operation Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cloud-sql-admin/cloudsqlwaitforoperation/
**Description:** Wait for a long-running Cloud SQL operation to complete.
## About
The `cloud-sql-wait-for-operation` tool is a utility tool that waits for a
long-running Cloud SQL operation to complete. It does this by polling the Cloud
SQL Admin API operation status endpoint until the operation is finished, using
exponential backoff.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: cloudsql-operations-get
type: cloud-sql-wait-for-operation
source: my-cloud-sql-source
description: "Polls the operations API until the operation is done. Checking an operation's status requires a projectId and an operationId. Once the instance is created, give follow-up steps on how to use the variables to bring the data plane MCP server up in local and remote setups."
delay: 1s
maxDelay: 4m
multiplier: 2
maxRetries: 10
```
## Reference
| **field** | **type** | **required** | **description** |
| ----------- | :------: | :----------: | ---------------------------------------------------------------------------------------------------------------- |
| type | string | true | Must be "cloud-sql-wait-for-operation". |
| source | string | true | The name of a `cloud-sql-admin` source to use for authentication. |
| description | string | false | A description of the tool. |
| delay | duration | false | The initial delay between polling requests (e.g., `3s`). Defaults to 3 seconds. |
| maxDelay | duration | false | The maximum delay between polling requests (e.g., `4m`). Defaults to 4 minutes. |
| multiplier | float | false | The multiplier for the polling delay. The delay is multiplied by this value after each request. Defaults to 2.0. |
| maxRetries | int | false | The maximum number of polling attempts before giving up. Defaults to 10. |
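The exponential backoff schedule these fields describe can be sketched as follows. This is an illustration of the backoff math using the defaults from the table above, not the server's actual polling code:

```python
def poll_delays(delay: float, max_delay: float, multiplier: float, max_retries: int):
    """Yield the delay before each polling attempt: start at `delay`,
    multiply by `multiplier` after each attempt, cap at `max_delay`,
    and stop after `max_retries` attempts."""
    d = delay
    for _ in range(max_retries):
        yield d
        d = min(d * multiplier, max_delay)

# Defaults from the reference table: 3s initial delay, 4m cap, 2.0x multiplier, 10 retries.
print(list(poll_delays(3.0, 240.0, 2.0, 10)))
# [3.0, 6.0, 12.0, 24.0, 48.0, 96.0, 192.0, 240.0, 240.0, 240.0]
```

With these defaults the total wait before giving up is roughly 18 minutes.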
========================================================================
## postgres-upgrade-precheck Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Cloud SQL Admin Source > postgres-upgrade-precheck Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cloud-sql-admin/cloudsqlpgupgradeprecheck/
**Description:** Perform a pre-check for a Cloud SQL for PostgreSQL major version upgrade.
## About
The `postgres-upgrade-precheck` tool initiates a pre-check on a Cloud SQL for PostgreSQL
instance to assess its readiness for a major version upgrade using the Cloud SQL Admin API.
It helps identify potential incompatibilities or issues before starting the actual upgrade process.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: postgres-upgrade-precheck
type: postgres-upgrade-precheck
source: cloud-sql-admin-source
description: "Checks if a Cloud SQL PostgreSQL instance is ready for a major version upgrade to the specified target version."
```
## Reference
| **field** | **type** | **required** | **description** |
| ------------ | :------: | :----------: | --------------------------------------------------------- |
| type | string | true | Must be "postgres-upgrade-precheck". |
| source | string | true | The name of the `cloud-sql-admin` source to use. |
| description | string | false | A description of the tool. |
### Parameters
| **parameter** | **type** | **required** | **description** |
| ----------------------- | :------: | :----------: | ------------------------------------------------------------------------------- |
| project | string | true | The project ID containing the instance. |
| instance | string | true | The name of the Cloud SQL instance to check. |
| targetDatabaseVersion | string | false | The target PostgreSQL major version for the upgrade (e.g., `POSTGRES_18`). Defaults to PostgreSQL 18 if not specified. |
========================================================================
## CockroachDB Source
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > CockroachDB Source
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cockroachdb/
**Description:** CockroachDB is a distributed SQL database built for cloud applications.
## About
[CockroachDB][crdb-docs] is a distributed SQL database designed for cloud-native applications. It provides strong consistency, horizontal scalability, and built-in resilience with automatic failover and recovery. CockroachDB uses the PostgreSQL wire protocol, making it compatible with many PostgreSQL tools and drivers while providing unique features like multi-region deployments and distributed transactions.
**Minimum Version:** CockroachDB v25.1 or later is recommended for full tool compatibility.
[crdb-docs]: https://www.cockroachlabs.com/docs/
## Available Tools
{{< list-tools >}}
## Requirements
### Database User
This source uses standard authentication. You will need to [create a CockroachDB user][crdb-users] to login to the database with. For CockroachDB Cloud deployments, SSL/TLS is required.
[crdb-users]: https://www.cockroachlabs.com/docs/stable/create-user.html
### SSL/TLS Configuration
CockroachDB Cloud clusters require SSL/TLS connections. Use the `queryParams` section to configure SSL settings:
- **For CockroachDB Cloud**: Use `sslmode: require` at minimum
- **For self-hosted with certificates**: Use `sslmode: verify-full` with certificate paths
- **For local development only**: Use `sslmode: disable` (not recommended for production)
## Example
```yaml
sources:
my_cockroachdb:
type: cockroachdb
host: your-cluster.cockroachlabs.cloud
port: "26257"
user: myuser
password: mypassword
database: defaultdb
maxRetries: 5
retryBaseDelay: 500ms
queryParams:
sslmode: require
application_name: my-app
# MCP Security Settings (recommended for production)
readOnlyMode: true # Read-only by default (MCP best practice)
enableWriteMode: false # Set to true to allow write operations
maxRowLimit: 1000 # Limit query results
queryTimeoutSec: 30 # Prevent long-running queries
enableTelemetry: true # Enable observability
telemetryVerbose: false # Set true for detailed logs
clusterID: "my-cluster" # Optional identifier
tools:
list_expenses:
type: cockroachdb-sql
source: my_cockroachdb
description: List all expenses
statement: SELECT id, description, amount, category FROM expenses WHERE user_id = $1
parameters:
- name: user_id
type: string
description: The user's ID
describe_expenses:
type: cockroachdb-describe-table
source: my_cockroachdb
description: Describe the expenses table schema
list_expenses_indexes:
type: cockroachdb-list-indexes
source: my_cockroachdb
description: List indexes on the expenses table
```
## Reference
### Required Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| `type` | string | Must be `cockroachdb` |
| `host` | string | The hostname or IP address of the CockroachDB cluster |
| `port` | string | The port number (typically "26257") |
| `user` | string | The database user name |
| `database` | string | The database name to connect to |
### Optional Parameters
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `password` | string | "" | The database password (can be empty for certificate-based auth) |
| `maxRetries` | integer | 5 | Maximum number of connection retry attempts |
| `retryBaseDelay` | string | "500ms" | Base delay between retry attempts (exponential backoff) |
| `queryParams` | map | {} | Additional connection parameters (e.g., SSL configuration) |
### MCP Security Parameters
CockroachDB integration includes security features following the [Model Context Protocol (MCP)](https://modelcontextprotocol.io/) specification:
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `readOnlyMode` | boolean | true | Enables read-only mode by default (MCP best practice) |
| `enableWriteMode` | boolean | false | Explicitly enable write operations (INSERT/UPDATE/DELETE/CREATE/DROP) |
| `maxRowLimit` | integer | 1000 | Maximum rows returned per SELECT query (auto-adds LIMIT clause) |
| `queryTimeoutSec` | integer | 30 | Query timeout in seconds to prevent long-running queries |
| `enableTelemetry` | boolean | true | Enable structured logging of tool invocations |
| `telemetryVerbose` | boolean | false | Enable detailed JSON telemetry output |
| `clusterID` | string | "" | Optional cluster identifier for telemetry |
### Query Parameters
Common query parameters for CockroachDB connections:
| Parameter | Values | Description |
|-----------|--------|-------------|
| `sslmode` | `disable`, `require`, `verify-ca`, `verify-full` | SSL/TLS mode (CockroachDB Cloud requires `require` or higher) |
| `sslrootcert` | file path | Path to root certificate for SSL verification |
| `sslcert` | file path | Path to client certificate |
| `sslkey` | file path | Path to client key |
| `application_name` | string | Application name for connection tracking |
## Advanced Usage
### Security and MCP Compliance
**Read-Only by Default**: The integration follows MCP best practices by defaulting to read-only mode. This prevents accidental data modifications:
```yaml
sources:
my_cockroachdb:
readOnlyMode: true # Default behavior
enableWriteMode: false # Explicit write opt-in required
```
To enable write operations:
```yaml
sources:
my_cockroachdb:
readOnlyMode: false # Disable read-only protection
enableWriteMode: true # Explicitly allow writes
```
**Query Limits**: Automatic row limits prevent excessive data retrieval:
- SELECT queries automatically get `LIMIT 1000` appended (configurable via `maxRowLimit`)
- Queries are terminated after 30 seconds (configurable via `queryTimeoutSec`)
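The effect of `maxRowLimit` can be illustrated with a simplified sketch. This is not the integration's actual code; real SQL rewriting must also handle subqueries, comments, and semicolon-separated statements:

```python
def enforce_row_limit(sql: str, max_rows: int = 1000) -> str:
    """Append a LIMIT clause to a SELECT that lacks one (simplified sketch)."""
    stripped = sql.strip().rstrip(";")
    upper = stripped.upper()
    # Only SELECT statements without an existing LIMIT get one appended.
    if upper.startswith("SELECT") and " LIMIT " not in upper:
        return f"{stripped} LIMIT {max_rows}"
    return stripped
```

Queries that already carry their own `LIMIT`, and non-SELECT statements, pass through unchanged.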
**Observability**: Structured telemetry provides visibility into tool usage:
- Tool invocations are logged with status, latency, and row counts
- SQL queries are redacted to protect sensitive values
- Set `telemetryVerbose: true` for detailed JSON logs
### Use UUID Primary Keys
CockroachDB performs best with UUID primary keys rather than sequential integers to avoid transaction hotspots:
```sql
CREATE TABLE expenses (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
description TEXT,
amount DECIMAL(10,2)
);
```
### Automatic Transaction Retry
This source uses the official `cockroach-go/v2` library which provides automatic transaction retry for serialization conflicts. For write operations requiring explicit transaction control, tools can use the `ExecuteTxWithRetry` method.
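The retry behavior the library provides follows the standard CockroachDB client-side retry pattern, sketched here in Python purely for illustration (the actual library is Go; `40001` is the SQLSTATE CockroachDB uses for serialization failures):

```python
import random
import time

class SerializationError(Exception):
    """Stand-in for a database error with SQLSTATE 40001 (serialization_failure)."""

def execute_tx_with_retry(txn_fn, max_retries=5, base_delay=0.5):
    """Run txn_fn, retrying on serialization conflicts with exponential backoff."""
    for attempt in range(max_retries):
        try:
            return txn_fn()
        except SerializationError:
            if attempt == max_retries - 1:
                raise
            # Jittered exponential backoff before retrying the whole transaction.
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.0))
```

The key point is that the entire transaction function is re-executed on conflict, so transaction bodies must be idempotent from the application's point of view.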
### Multi-Region Deployments
CockroachDB supports multi-region deployments with automatic data distribution. Configure your cluster's regions and survival goals separately from the Toolbox configuration. The source will connect to any node in the cluster.
### Connection Pooling
The source maintains a connection pool to the CockroachDB cluster. The pool automatically handles:
- Load balancing across cluster nodes
- Connection retry with exponential backoff
- Health checking of connections
## Troubleshooting
### SSL/TLS Errors
If you encounter "server requires encryption" errors:
1. For CockroachDB Cloud, ensure `sslmode` is set to `require` or higher:
```yaml
queryParams:
sslmode: require
```
2. For certificate verification, download your cluster's root certificate and configure:
```yaml
queryParams:
sslmode: verify-full
sslrootcert: /path/to/ca.crt
```
### Connection Timeouts
If experiencing connection timeouts:
1. Check network connectivity to the CockroachDB cluster
2. Verify firewall rules allow connections on port 26257
3. For CockroachDB Cloud, ensure IP allowlisting is configured
4. Increase `maxRetries` or `retryBaseDelay` if needed
### Transaction Retry Errors
CockroachDB may encounter serializable transaction conflicts. The integration automatically retries these using the cockroach-go library. If you still see retry-related errors, check:
1. Database load and contention
2. Query patterns that might cause conflicts
3. Whether explicit locking with `SELECT FOR UPDATE` would reduce conflicts
## Additional Resources
- [CockroachDB Documentation](https://www.cockroachlabs.com/docs/)
- [CockroachDB Best Practices](https://www.cockroachlabs.com/docs/stable/performance-best-practices-overview.html)
- [Multi-Region Capabilities](https://www.cockroachlabs.com/docs/stable/multiregion-overview.html)
- [Connection Parameters](https://www.cockroachlabs.com/docs/stable/connection-parameters.html)
========================================================================
## cockroachdb-execute-sql Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > CockroachDB Source > cockroachdb-execute-sql Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cockroachdb/cockroachdb-execute-sql/
**Description:** Execute ad-hoc SQL statements against a CockroachDB database.
## About
A `cockroachdb-execute-sql` tool executes ad-hoc SQL statements against a CockroachDB database. This tool is designed for interactive workflows where the SQL query is provided dynamically at runtime, making it ideal for developer assistance and exploratory data analysis.
The tool takes a single `sql` parameter containing the SQL statement to execute and returns the query results.
> **Note:** This tool is intended for developer assistant workflows with human-in-the-loop and shouldn't be used for production agents. For production use cases with predefined queries, use [cockroachdb-sql](cockroachdb-sql.md) instead.
## Compatible Sources
{{< compatible-sources >}}
## Parameters
The tool accepts a single runtime parameter:
| Parameter | Type | Description |
|-----------|------|-------------|
| `sql` | string | The SQL statement to execute |
## Example
```yaml
sources:
my_cockroachdb:
type: cockroachdb
host: your-cluster.cockroachlabs.cloud
port: "26257"
user: myuser
password: mypassword
database: defaultdb
queryParams:
sslmode: require
tools:
execute_sql:
type: cockroachdb-execute-sql
source: my_cockroachdb
description: Execute any SQL statement against the CockroachDB database
```
### Usage Examples
#### Simple SELECT Query
```json
{
"sql": "SELECT * FROM users LIMIT 10"
}
```
#### Query with Aggregations
```json
{
"sql": "SELECT category, COUNT(*) as count, SUM(amount) as total FROM expenses GROUP BY category ORDER BY total DESC"
}
```
#### Database Introspection
```json
{
"sql": "SHOW TABLES"
}
```
```json
{
"sql": "SHOW COLUMNS FROM expenses"
}
```
#### Multi-Region Information
```json
{
"sql": "SHOW REGIONS FROM DATABASE defaultdb"
}
```
```json
{
"sql": "SHOW ZONE CONFIGURATIONS"
}
```
### CockroachDB-Specific Features
#### Check Cluster Version
```json
{
"sql": "SELECT version()"
}
```
#### View Node Status
```json
{
"sql": "SELECT node_id, address, locality, is_live FROM crdb_internal.gossip_nodes"
}
```
#### Check Replication Status
```json
{
"sql": "SELECT range_id, start_key, end_key, replicas, lease_holder FROM crdb_internal.ranges LIMIT 10"
}
```
#### View Table Regions
```json
{
"sql": "SHOW REGIONS FROM TABLE expenses"
}
```
## Reference
### Required Fields
| Field | Type | Description |
|-------|------|-------------|
| `type` | string | Must be `cockroachdb-execute-sql` |
| `source` | string | Name of the CockroachDB source to use |
| `description` | string | Human-readable description for the LLM |
### Optional Fields
| Field | Type | Description |
|-------|------|-------------|
| `authRequired` | array | List of authentication services required |
## Advanced Usage
### Best Practices
#### Use for Exploration, Not Production
This tool is ideal for:
- Interactive database exploration
- Ad-hoc analysis and reporting
- Debugging and troubleshooting
- Schema inspection
For production use cases, use [cockroachdb-sql](./cockroachdb-sql.md) with parameterized queries.
#### Be Cautious with Data Modification
While this tool can execute any SQL statement, be careful with:
- `INSERT`, `UPDATE`, `DELETE` statements
- `DROP` or `ALTER` statements
- Schema changes in production
#### Use LIMIT for Large Results
Always use `LIMIT` clauses when exploring data:
```sql
SELECT * FROM large_table LIMIT 100
```
#### Leverage CockroachDB's SQL Extensions
CockroachDB supports PostgreSQL syntax plus extensions:
```sql
-- Show database survival goal
SHOW SURVIVAL GOAL FROM DATABASE defaultdb;
-- View zone configurations
SHOW ZONE CONFIGURATION FOR TABLE expenses;
-- Check table localities
SHOW CREATE TABLE expenses;
```
### Security Considerations
#### SQL Injection Risk
Since this tool executes arbitrary SQL, it should only be used with:
- Trusted users in interactive sessions
- Human-in-the-loop workflows
- Development and testing environments
Never expose this tool directly to end users without proper authorization controls.
#### Use Authentication
Configure the `authRequired` field to restrict access:
```yaml
tools:
execute_sql:
type: cockroachdb-execute-sql
source: my_cockroachdb
description: Execute SQL statements
authRequired:
- my-auth-service
```
#### Read-Only Users
For safer exploration, create read-only database users:
```sql
CREATE USER readonly_user;
GRANT SELECT ON DATABASE defaultdb TO readonly_user;
```
### Common Use Cases
#### Database Administration
```sql
-- View database size
SELECT
table_name,
pg_size_pretty(pg_total_relation_size(table_name::regclass)) AS size
FROM information_schema.tables
WHERE table_schema = 'public'
ORDER BY pg_total_relation_size(table_name::regclass) DESC;
```
#### Performance Analysis
```sql
-- Find slow queries
SELECT query, count, mean_latency
FROM crdb_internal.statement_statistics
WHERE mean_latency > INTERVAL '1 second'
ORDER BY mean_latency DESC
LIMIT 10;
```
#### Data Quality Checks
```sql
-- Find NULL values
SELECT COUNT(*) as null_count
FROM expenses
WHERE description IS NULL OR amount IS NULL;
-- Find duplicates
SELECT user_id, email, COUNT(*) as count
FROM users
GROUP BY user_id, email
HAVING COUNT(*) > 1;
```
## Troubleshooting
The tool will return descriptive errors for:
- **Syntax errors**: Invalid SQL syntax
- **Permission errors**: Insufficient user privileges
- **Connection errors**: Network or authentication issues
- **Runtime errors**: Constraint violations, type mismatches, etc.
## Additional Resources
- [cockroachdb-sql](./cockroachdb-sql.md) - For parameterized, production-ready queries
- [cockroachdb-list-tables](./cockroachdb-list-tables.md) - List tables in the database
- [cockroachdb-list-schemas](./cockroachdb-list-schemas.md) - List database schemas
- [CockroachDB Source](_index.md) - Source configuration reference
- [CockroachDB SQL Reference](https://www.cockroachlabs.com/docs/stable/sql-statements.html) - Official SQL documentation
========================================================================
## cockroachdb-list-schemas Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > CockroachDB Source > cockroachdb-list-schemas Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cockroachdb/cockroachdb-list-schemas/
**Description:** List schemas in a CockroachDB database.
## About
The `cockroachdb-list-schemas` tool retrieves a list of schemas (namespaces) in a CockroachDB database. Schemas are used to organize database objects such as tables, views, and functions into logical groups.
This tool is useful for:
- Understanding database organization
- Discovering available schemas
- Multi-tenant application analysis
- Schema-level access control planning
## Compatible Sources
{{< compatible-sources >}}
### Requirements
To list schemas, the user needs:
- `CONNECT` privilege on the database
- No specific schema privileges required for listing
To query objects within schemas, the user needs:
- `USAGE` privilege on the schema
- Appropriate object privileges (SELECT, INSERT, etc.)
## Example
```yaml
sources:
my_cockroachdb:
type: cockroachdb
host: your-cluster.cockroachlabs.cloud
port: "26257"
user: myuser
password: mypassword
database: defaultdb
queryParams:
sslmode: require
tools:
list_schemas:
type: cockroachdb-list-schemas
source: my_cockroachdb
description: List all schemas in the database
```
### Usage Example
```json
{}
```
No parameters are required. The tool automatically lists all user-defined schemas.
## Output Format
The tool returns a list of schemas with the following information:
```json
[
{
"catalog_name": "defaultdb",
"schema_name": "public",
"is_user_defined": true
},
{
"catalog_name": "defaultdb",
"schema_name": "analytics",
"is_user_defined": true
}
]
```
### Fields
| Field | Type | Description |
|-------|------|-------------|
| `catalog_name` | string | The database (catalog) name |
| `schema_name` | string | The schema name |
| `is_user_defined` | boolean | Whether this is a user-created schema (excludes system schemas) |
## Reference
### Required Fields
| Field | Type | Description |
|-------|------|-------------|
| `type` | string | Must be `cockroachdb-list-schemas` |
| `source` | string | Name of the CockroachDB source to use |
| `description` | string | Human-readable description for the LLM |
### Optional Fields
| Field | Type | Description |
|-------|------|-------------|
| `authRequired` | array | List of authentication services required |
## Advanced Usage
### Default Schemas
CockroachDB includes several standard schemas:
- **`public`**: The default schema for user objects
- **`pg_catalog`**: PostgreSQL system catalog (excluded from results)
- **`information_schema`**: SQL standard metadata views (excluded from results)
- **`crdb_internal`**: CockroachDB internal metadata (excluded from results)
- **`pg_extension`**: PostgreSQL extension objects (excluded from results)
The tool filters out system schemas and only returns user-defined schemas.
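A minimal sketch of that filtering logic (illustrative only; the real tool queries the database catalog directly rather than filtering names in client code):

```python
# System schemas the tool excludes from results, per the list above.
SYSTEM_SCHEMAS = {"pg_catalog", "information_schema", "crdb_internal", "pg_extension"}

def user_defined_schemas(schema_names):
    """Return only user-created schemas, excluding CockroachDB system schemas."""
    return [name for name in schema_names if name not in SYSTEM_SCHEMAS]
```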
### Schema Management in CockroachDB
#### Creating Schemas
```sql
CREATE SCHEMA analytics;
```
#### Using Schemas
```sql
-- Create table in specific schema
CREATE TABLE analytics.revenue (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
amount DECIMAL(10,2),
date DATE
);
-- Query from specific schema
SELECT * FROM analytics.revenue;
```
#### Schema Search Path
The search path determines which schemas are searched for unqualified object names:
```sql
-- Show current search path
SHOW search_path;
-- Set search path
SET search_path = analytics, public;
```
### Multi-Tenant Applications
Schemas are commonly used for multi-tenant applications:
```sql
-- Create schema per tenant
CREATE SCHEMA tenant_acme;
CREATE SCHEMA tenant_globex;
-- Create same table structure in each schema
CREATE TABLE tenant_acme.orders (...);
CREATE TABLE tenant_globex.orders (...);
```
The `cockroachdb-list-schemas` tool helps discover all tenant schemas:
```yaml
tools:
list_tenants:
type: cockroachdb-list-schemas
source: my_cockroachdb
description: |
List all tenant schemas in the database.
Each schema represents a separate tenant's data namespace.
```
### Best Practices
#### Use Schemas for Organization
Group related tables into schemas:
```sql
CREATE SCHEMA sales;
CREATE SCHEMA inventory;
CREATE SCHEMA hr;
CREATE TABLE sales.orders (...);
CREATE TABLE inventory.products (...);
CREATE TABLE hr.employees (...);
```
#### Schema Naming Conventions
Use clear, descriptive schema names:
- Lowercase names
- Use underscores for multi-word names
- Avoid reserved keywords
- Use prefixes for grouped schemas (e.g., `tenant_`, `app_`)
#### Schema-Level Permissions
Schemas enable fine-grained access control:
```sql
-- Grant access to specific schema
GRANT USAGE ON SCHEMA analytics TO analyst_role;
GRANT SELECT ON ALL TABLES IN SCHEMA analytics TO analyst_role;
-- Revoke access
REVOKE ALL ON SCHEMA hr FROM public;
```
### Integration with Other Tools
#### Combined with List Tables
```yaml
tools:
list_schemas:
type: cockroachdb-list-schemas
source: my_cockroachdb
description: List all schemas first
list_tables:
type: cockroachdb-list-tables
source: my_cockroachdb
description: |
List tables in the database.
Use list_schemas first to understand schema organization.
```
#### Schema Discovery Workflow
1. Call `cockroachdb-list-schemas` to discover schemas
2. Call `cockroachdb-list-tables` to see tables in each schema
3. Generate queries using fully qualified names: `schema.table`
### Common Use Cases
#### Discover Database Structure
```yaml
tools:
discover_schemas:
type: cockroachdb-list-schemas
source: my_cockroachdb
description: |
Discover how the database is organized into schemas.
Use this to understand the logical grouping of tables.
```
#### Multi-Tenant Analysis
```yaml
tools:
list_tenant_schemas:
type: cockroachdb-list-schemas
source: my_cockroachdb
description: |
List all tenant schemas (each tenant has their own schema).
Schema names follow the pattern: tenant_
```
#### Schema Migration Planning
```yaml
tools:
audit_schemas:
type: cockroachdb-list-schemas
source: my_cockroachdb
description: |
Audit existing schemas before migration.
Identifies all schemas that need to be migrated.
```
### CockroachDB-Specific Features
#### System Schemas
CockroachDB includes PostgreSQL-compatible system schemas plus CockroachDB-specific ones:
- `crdb_internal.*`: CockroachDB internal metadata and statistics
- `pg_catalog.*`: PostgreSQL system catalog
- `information_schema.*`: SQL standard information schema
These are automatically filtered from the results.
#### User-Defined Flag
The `is_user_defined` field helps distinguish:
- `true`: User-created schemas
- `false`: System schemas (already filtered out)
## Troubleshooting
The tool handles common errors:
- **Connection errors**: Returns connection failure details
- **Permission errors**: Returns error if user lacks USAGE privilege
- **Empty results**: Returns empty array if no user schemas exist
## Additional Resources
- [cockroachdb-sql](./cockroachdb-sql.md) - Execute parameterized queries
- [cockroachdb-execute-sql](./cockroachdb-execute-sql.md) - Execute ad-hoc SQL
- [cockroachdb-list-tables](./cockroachdb-list-tables.md) - List tables in the database
- [CockroachDB Source](_index.md) - Source configuration reference
- [CockroachDB Schema Design](https://www.cockroachlabs.com/docs/stable/schema-design-overview.html) - Official documentation
========================================================================
## cockroachdb-list-tables Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > CockroachDB Source > cockroachdb-list-tables Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cockroachdb/cockroachdb-list-tables/
**Description:** List tables in a CockroachDB database with schema details.
## About
The `cockroachdb-list-tables` tool retrieves a list of tables from a CockroachDB database. It provides detailed information about table structure, including columns, constraints, indexes, and foreign key relationships.
This tool is useful for:
- Database schema discovery
- Understanding table relationships
- Generating context for AI-powered database queries
- Documentation and analysis
## Compatible Sources
{{< compatible-sources >}}
## Parameters
The tool accepts optional runtime parameters:
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `table_names` | array | all tables | List of specific table names to retrieve |
| `output_format` | string | "detailed" | Output format: "simple" or "detailed" |
## Example
```yaml
sources:
my_cockroachdb:
type: cockroachdb
host: your-cluster.cockroachlabs.cloud
port: "26257"
user: myuser
password: mypassword
database: defaultdb
queryParams:
sslmode: require
tools:
list_all_tables:
type: cockroachdb-list-tables
source: my_cockroachdb
description: List all user tables in the database with their structure
```
### Usage Examples
#### List All Tables
```json
{}
```
#### List Specific Tables
```json
{
"table_names": ["users", "orders", "expenses"]
}
```
#### Simple Output
```json
{
"output_format": "simple"
}
```
#### Output Structure
##### Simple Format Output
```json
{
"table_name": "users",
"estimated_rows": 1000,
"size": "128 KB"
}
```
##### Detailed Format Output
```json
{
"table_name": "users",
"schema": "public",
"columns": [
{
"name": "id",
"type": "UUID",
"nullable": false,
"default": "gen_random_uuid()"
},
{
"name": "email",
"type": "STRING",
"nullable": false,
"default": null
},
{
"name": "created_at",
"type": "TIMESTAMP",
"nullable": false,
"default": "now()"
}
],
"primary_key": ["id"],
"indexes": [
{
"name": "users_pkey",
"columns": ["id"],
"unique": true,
"primary": true
},
{
"name": "users_email_idx",
"columns": ["email"],
"unique": true,
"primary": false
}
],
"foreign_keys": [],
"constraints": [
{
"name": "users_email_check",
"type": "CHECK",
"definition": "email ~* '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Z|a-z]{2,}$'"
}
]
}
```
#### CockroachDB-Specific Information
##### UUID Primary Keys
The tool recognizes CockroachDB's recommended UUID primary key pattern:
```sql
CREATE TABLE users (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
...
);
```
##### Multi-Region Tables
For multi-region tables, the output includes locality information:
```json
{
"table_name": "users",
"locality": "REGIONAL BY ROW",
"regions": ["us-east-1", "us-west-2", "eu-west-1"]
}
```
##### Interleaved Tables
The tool shows parent-child relationships for interleaved tables (legacy feature):
```json
{
"table_name": "order_items",
"interleaved_in": "orders"
}
```
## Output Format
### Simple Format
Returns basic table information:
- Table name
- Row count estimate
- Size information
```json
{
"table_names": ["users"],
"output_format": "simple"
}
```
### Detailed Format (Default)
Returns comprehensive table information:
- Table name and schema
- All columns with types and constraints
- Primary keys
- Foreign keys and relationships
- Indexes
- Check constraints
- Table size and row counts
```json
{
"table_names": ["users", "orders"],
"output_format": "detailed"
}
```
## Reference
### Required Fields
| Field | Type | Description |
|-------|------|-------------|
| `type` | string | Must be `cockroachdb-list-tables` |
| `source` | string | Name of the CockroachDB source to use |
| `description` | string | Human-readable description for the LLM |
### Optional Fields
| Field | Type | Description |
|-------|------|-------------|
| `authRequired` | array | List of authentication services required |
## Advanced Usage
### Best Practices
#### Use for Schema Discovery
The tool is ideal for helping AI assistants understand your database structure:
```yaml
tools:
discover_schema:
type: cockroachdb-list-tables
source: my_cockroachdb
description: |
Use this tool first to understand the database schema before generating queries.
It shows all tables, their columns, data types, and relationships.
```
#### Filter Large Schemas
For databases with many tables, specify relevant tables:
```json
{
"table_names": ["users", "orders", "products"],
"output_format": "detailed"
}
```
#### Use Simple Format for Overviews
When you need just table names and sizes:
```json
{
"output_format": "simple"
}
```
### Excluded Tables
The tool automatically excludes system tables and schemas:
- `pg_catalog.*` - PostgreSQL system catalog
- `information_schema.*` - SQL standard information schema
- `crdb_internal.*` - CockroachDB internal tables
- `pg_extension.*` - PostgreSQL extension tables
Only user-created tables in the public schema (and other user schemas) are returned.
### Integration with AI Assistants
#### Prompt Example
```yaml
tools:
list_tables:
type: cockroachdb-list-tables
source: my_cockroachdb
description: |
Lists all tables in the database with detailed schema information.
Use this tool to understand:
- What tables exist
- What columns each table has
- Data types and constraints
- Relationships between tables (foreign keys)
- Available indexes
Always call this tool before generating SQL queries to ensure
you use correct table and column names.
```
### Common Use Cases
#### Generate Context for Queries
```json
{}
```
This provides comprehensive schema information that helps AI assistants generate accurate SQL queries.
#### Analyze Table Structure
```json
{
"table_names": ["users"],
"output_format": "detailed"
}
```
Perfect for understanding a specific table's structure, constraints, and relationships.
#### Quick Schema Overview
```json
{
"output_format": "simple"
}
```
Gets a quick list of tables with basic statistics.
### Performance Considerations
- **Simple format** is faster for large databases
- **Detailed format** queries system tables extensively
- Specifying `table_names` reduces query time
- Results are fetched in a single query for efficiency
## Troubleshooting
The tool handles common errors:
- **Table not found**: Returns empty result for non-existent tables
- **Permission errors**: Returns error if user lacks SELECT privileges
- **Connection errors**: Returns connection failure details
## Additional Resources
- [cockroachdb-sql](./cockroachdb-sql.md) - Execute parameterized queries
- [cockroachdb-execute-sql](./cockroachdb-execute-sql.md) - Execute ad-hoc SQL
- [cockroachdb-list-schemas](./cockroachdb-list-schemas.md) - List database schemas
- [CockroachDB Source](_index.md) - Source configuration reference
- [CockroachDB Schema Design](https://www.cockroachlabs.com/docs/stable/schema-design-overview.html) - Best practices
========================================================================
## cockroachdb-sql Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > CockroachDB Source > cockroachdb-sql Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cockroachdb/cockroachdb-sql/
**Description:** Execute parameterized SQL queries in CockroachDB.
## About
The `cockroachdb-sql` tool allows you to execute parameterized SQL queries against a CockroachDB database. This tool supports prepared statements with parameter binding, template parameters for dynamic query construction, and automatic transaction retry for resilience against serialization conflicts.
## Compatible Sources
{{< compatible-sources >}}
## Parameters
Parameters allow you to safely pass values into your SQL queries using prepared statements. CockroachDB uses PostgreSQL-style parameter placeholders: `$1`, `$2`, etc.
### Parameter Types
- `string`: Text values
- `number`: Numeric values (integers or decimals)
- `boolean`: True/false values
- `array`: Array of values
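A tool call's JSON arguments must match these declared types. The sketch below shows the kind of check involved; `validate_params` and `type_map` are illustrative helpers, not part of the Toolbox server or SDKs, which perform equivalent validation internally.

```python
def validate_params(declared, values):
    """Check a tool call's JSON arguments against the declared types.

    `declared` mirrors a `parameters` list from tools.yaml. Illustrative
    only; the Toolbox server performs its own validation.
    """
    type_map = {"string": str, "number": (int, float), "boolean": bool, "array": list}
    errors = []
    for param in declared:
        value = values.get(param["name"])
        if value is None:
            errors.append(f"missing parameter: {param['name']}")
        elif param["type"] == "number" and isinstance(value, bool):
            # bool is a subclass of int in Python, so reject it explicitly.
            errors.append(f"{param['name']}: expected number")
        elif not isinstance(value, type_map[param["type"]]):
            errors.append(f"{param['name']}: expected {param['type']}")
    return errors

declared = [
    {"name": "user_id", "type": "string"},
    {"name": "max_price", "type": "number"},
]
print(validate_params(declared, {"user_id": "abc", "max_price": 500}))  # []
```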
### Example with Multiple Parameters
```yaml
tools:
  filter_expenses:
    type: cockroachdb-sql
    source: my_cockroachdb
    description: Filter expenses by category and date range
    statement: |
      SELECT id, description, amount, category, expense_date
      FROM expenses
      WHERE user_id = $1
        AND category = $2
        AND expense_date >= $3
        AND expense_date <= $4
      ORDER BY expense_date DESC
    parameters:
      - name: user_id
        type: string
        description: The user's UUID
      - name: category
        type: string
        description: Expense category (e.g., "Food", "Transport")
      - name: start_date
        type: string
        description: Start date in YYYY-MM-DD format
      - name: end_date
        type: string
        description: End date in YYYY-MM-DD format
```
### Template Parameters
Template parameters enable dynamic query construction by replacing placeholders in the SQL statement before parameter binding. This is useful for dynamic table names, column names, or query structure.
#### Example with Template Parameters
```yaml
tools:
  get_column_data:
    type: cockroachdb-sql
    source: my_cockroachdb
    description: Get data from a specific column
    statement: |
      SELECT {{column_name}}
      FROM {{table_name}}
      WHERE user_id = $1
      LIMIT 100
    templateParameters:
      - name: table_name
        type: string
        description: The table to query
      - name: column_name
        type: string
        description: The column to retrieve
    parameters:
      - name: user_id
        type: string
        description: The user's UUID
```
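Template substitution happens in a separate phase before parameter binding: `{{...}}` placeholders are spliced directly into the SQL text, while `$1`-style parameters remain for the prepared statement. A minimal sketch of that first phase (the `render_template` helper is hypothetical, for illustration only):

```python
import re

def render_template(statement, template_params):
    """Substitute {{name}} placeholders before the statement is prepared.

    Template values are spliced directly into the SQL text, so they must
    come from a trusted allow-list; unlike $1-style parameters, they are
    not protected by prepared-statement binding.
    """
    def replace(match):
        name = match.group(1)
        if name not in template_params:
            raise KeyError(f"missing template parameter: {name}")
        return str(template_params[name])

    return re.sub(r"\{\{\s*(\w+)\s*\}\}", replace, statement)

sql = render_template(
    "SELECT {{column_name}} FROM {{table_name}} WHERE user_id = $1 LIMIT 100",
    {"table_name": "expenses", "column_name": "amount"},
)
print(sql)  # SELECT amount FROM expenses WHERE user_id = $1 LIMIT 100
```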
## Example
```yaml
sources:
  my_cockroachdb:
    type: cockroachdb
    host: your-cluster.cockroachlabs.cloud
    port: "26257"
    user: myuser
    password: mypassword
    database: defaultdb
    queryParams:
      sslmode: require
tools:
  get_user_orders:
    type: cockroachdb-sql
    source: my_cockroachdb
    description: Get all orders for a specific user
    statement: |
      SELECT o.id, o.order_date, o.total_amount, o.status
      FROM orders o
      WHERE o.user_id = $1
      ORDER BY o.order_date DESC
    parameters:
      - name: user_id
        type: string
        description: The UUID of the user
```
## Reference
### Required Fields
| Field | Type | Description |
|-------|------|-------------|
| `type` | string | Must be `cockroachdb-sql` |
| `source` | string | Name of the CockroachDB source to use |
| `description` | string | Human-readable description of what the tool does |
| `statement` | string | The SQL query to execute |
### Optional Fields
| Field | Type | Description |
|-------|------|-------------|
| `parameters` | array | List of parameter definitions for the query |
| `templateParameters` | array | List of template parameters for dynamic query construction |
| `authRequired` | array | List of authentication services required |
## Advanced Usage
### Use UUID Primary Keys
CockroachDB performs best with UUID primary keys to avoid transaction hotspots:
```sql
CREATE TABLE orders (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  user_id UUID NOT NULL,
  order_date TIMESTAMP DEFAULT now(),
  total_amount DECIMAL(10,2)
);
```
### Use Indexes for Performance
Create indexes on frequently queried columns:
```sql
CREATE INDEX idx_orders_user_id ON orders(user_id);
CREATE INDEX idx_orders_date ON orders(order_date DESC);
```
### Use JOINs Efficiently
CockroachDB supports standard SQL JOINs. Keep joins efficient by:
- Adding appropriate indexes
- Using UUIDs for foreign keys
- Limiting result sets with WHERE clauses
```yaml
tools:
  get_user_with_orders:
    type: cockroachdb-sql
    source: my_cockroachdb
    description: Get user details with their recent orders
    statement: |
      SELECT u.name, u.email, o.id as order_id, o.order_date, o.total_amount
      FROM users u
      LEFT JOIN orders o ON u.id = o.user_id
      WHERE u.id = $1
      ORDER BY o.order_date DESC
      LIMIT 10
    parameters:
      - name: user_id
        type: string
        description: The user's UUID
```
### Handle NULL Values
Use COALESCE or NULL checks when dealing with nullable columns:
```sql
SELECT id, description, COALESCE(notes, 'No notes') as notes
FROM expenses
WHERE user_id = $1
```
### Aggregations
```yaml
tools:
  expense_summary:
    type: cockroachdb-sql
    source: my_cockroachdb
    description: Get expense summary by category for a user
    statement: |
      SELECT
        category,
        COUNT(*) as count,
        SUM(amount) as total_amount,
        AVG(amount) as avg_amount
      FROM expenses
      WHERE user_id = $1
        AND expense_date >= $2
      GROUP BY category
      ORDER BY total_amount DESC
    parameters:
      - name: user_id
        type: string
        description: The user's UUID
      - name: start_date
        type: string
        description: Start date in YYYY-MM-DD format
```
### Window Functions
```yaml
tools:
  running_total:
    type: cockroachdb-sql
    source: my_cockroachdb
    description: Get running total of expenses
    statement: |
      SELECT
        expense_date,
        amount,
        SUM(amount) OVER (ORDER BY expense_date) as running_total
      FROM expenses
      WHERE user_id = $1
      ORDER BY expense_date
    parameters:
      - name: user_id
        type: string
        description: The user's UUID
```
### Common Table Expressions (CTEs)
```yaml
tools:
  top_spenders:
    type: cockroachdb-sql
    source: my_cockroachdb
    description: Find top spending users
    statement: |
      WITH user_totals AS (
        SELECT
          user_id,
          SUM(amount) as total_spent
        FROM expenses
        WHERE expense_date >= $1
        GROUP BY user_id
      )
      SELECT
        u.name,
        u.email,
        ut.total_spent
      FROM user_totals ut
      JOIN users u ON ut.user_id = u.id
      ORDER BY ut.total_spent DESC
      LIMIT 10
    parameters:
      - name: start_date
        type: string
        description: Start date in YYYY-MM-DD format
```
## Troubleshooting
The tool automatically handles:
- **Connection errors**: Retried with exponential backoff
- **Serialization conflicts**: Automatically retried using cockroach-go library
- **Invalid parameters**: Returns descriptive error messages
- **SQL syntax errors**: Returns database error details
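The serialization-conflict retry is handled server-side via the cockroach-go library. Its shape can be sketched roughly as follows; this is a simplified illustration in Python, not the actual implementation:

```python
import time

def run_with_retry(op, max_retries=3, base_delay=0.01):
    """Retry an operation with exponential backoff, in the spirit of the
    automatic serialization-conflict retries (CockroachDB error 40001)."""
    attempt = 0
    while True:
        try:
            return op()
        except Exception:
            attempt += 1
            if attempt > max_retries:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))

# Simulate an operation that hits two serialization conflicts, then commits.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("restart transaction: 40001")
    return "committed"

print(run_with_retry(flaky))  # committed
```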
## Additional Resources
- [cockroachdb-execute-sql](./cockroachdb-execute-sql.md) - For ad-hoc SQL execution
- [cockroachdb-list-tables](./cockroachdb-list-tables.md) - List tables in the database
- [cockroachdb-list-schemas](./cockroachdb-list-schemas.md) - List database schemas
- [CockroachDB Source](_index.md) - Source configuration reference
========================================================================
## Couchbase Source
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Couchbase Source
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/couchbase/
**Description:** A "couchbase" source connects to a Couchbase database.
## About
A `couchbase` source establishes a connection to a Couchbase database cluster,
allowing tools to execute SQL queries against it.
## Available Tools
{{< list-tools >}}
## Example
```yaml
kind: sources
name: my-couchbase-instance
type: couchbase
connectionString: couchbase://localhost
bucket: travel-sample
scope: inventory
username: Administrator
password: password
```
{{< notice note >}}
For more details about alternate addresses and custom ports, refer to [Managing
Connections](https://docs.couchbase.com/java-sdk/current/howtos/managing-connections.html).
{{< /notice >}}
## Reference
| **field** | **type** | **required** | **description** |
|----------------------|:--------:|:------------:|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| type | string | true | Must be "couchbase". |
| connectionString | string | true | Connection string for the Couchbase cluster. |
| bucket | string | true | Name of the bucket to connect to. |
| scope | string | true | Name of the scope within the bucket. |
| username | string | false | Username for authentication. |
| password | string | false | Password for authentication. |
| clientCert | string | false | Path to client certificate file for TLS authentication. |
| clientCertPassword | string | false | Password for the client certificate. |
| clientKey | string | false | Path to client key file for TLS authentication. |
| clientKeyPassword | string | false | Password for the client key. |
| caCert | string | false | Path to CA certificate file. |
| noSslVerify | boolean | false | If true, skip server certificate verification. **Warning:** This option should only be used in development or testing environments. Disabling SSL verification poses significant security risks in production as it makes your connection vulnerable to man-in-the-middle attacks. |
| profile | string | false | Name of the connection profile to apply. |
| queryScanConsistency | integer | false | Query scan consistency. Controls the consistency guarantee for index scanning. Values: 1 for "not_bounded" (fastest option, but results may not include the most recent operations), 2 for "request_plus" (highest consistency level, includes all operations up until the query started, but incurs a performance penalty). If not specified, defaults to the Couchbase Go SDK default. |
========================================================================
## couchbase-sql Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Couchbase Source > couchbase-sql Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/couchbase/couchbase-sql/
**Description:** A "couchbase-sql" tool executes a pre-defined SQL statement against a Couchbase database.
## About
A `couchbase-sql` tool executes a pre-defined SQL statement against a Couchbase
database.
The specified SQL statement is executed as a parameterized statement, and the
specified parameters are bound by name: e.g. `$id`.
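Because binding is by name, every `$name` placeholder in the statement needs a matching parameter definition. A small sketch of how the placeholders can be extracted and checked (the `named_placeholders` helper is hypothetical, for illustration only):

```python
import re

def named_placeholders(statement):
    """Collect the $name placeholders used in a statement."""
    return set(re.findall(r"\$(\w+)", statement))

stmt = (
    "SELECT p.name, p.price FROM products p "
    "WHERE p.category = $category AND p.price < $max_price"
)
declared = {"category", "max_price"}
missing = named_placeholders(stmt) - declared
print(sorted(missing))  # []
```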
## Compatible Sources
{{< compatible-sources >}}
## Example
> **Note:** This tool uses parameterized queries to prevent SQL injections.
> Query parameters can be used as substitutes for arbitrary expressions.
> Parameters cannot be used as substitutes for identifiers, column names, table
> names, or other parts of the query.
```yaml
kind: tools
name: search_products_by_category
type: couchbase-sql
source: my-couchbase-instance
statement: |
  SELECT p.name, p.price, p.description
  FROM products p
  WHERE p.category = $category AND p.price < $max_price
  ORDER BY p.price DESC
  LIMIT 10
description: |
  Use this tool to get a list of products for a specific category under a maximum price.
  Takes a category name, e.g. "Electronics", and a maximum price, e.g. 500, and returns a list of product names, prices, and descriptions.
  Do NOT use this tool with invalid category names. Do NOT guess a category name or a price.
  Example:
  {{
    "category": "Electronics",
    "max_price": 500
  }}
  Example:
  {{
    "category": "Furniture",
    "max_price": 1000
  }}
parameters:
  - name: category
    type: string
    description: Product category name
  - name: max_price
    type: integer
    description: Maximum price (positive integer)
```
### Example with Template Parameters
> **Note:** This tool allows direct modifications to the SQL statement,
> including identifiers, column names, and table names. **This makes it more
> vulnerable to SQL injections**. Using basic parameters only (see above) is
> recommended for performance and safety reasons. For more details, please check
> [templateParameters](..#template-parameters).
```yaml
kind: tools
name: list_table
type: couchbase-sql
source: my-couchbase-instance
statement: |
  SELECT * FROM {{.tableName}};
description: |
  Use this tool to list all information from a specific table.
  Example:
  {{
    "tableName": "flights"
  }}
templateParameters:
  - name: tableName
    type: string
    description: Table to select from
```
## Reference
| **field** | **type** | **required** | **description** |
|--------------------|:--------------------------------------------:|:------------:|----------------------------------------------------------------------------------------------------------------------------------------|
| type | string | true | Must be "couchbase-sql". |
| source | string | true | Name of the source the SQL query should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| statement | string | true | SQL statement to execute |
| parameters | [parameters](../#specifying-parameters) | false | List of [parameters](../#specifying-parameters) that will be used with the SQL statement. |
| templateParameters | [templateParameters](..#template-parameters) | false | List of [templateParameters](..#template-parameters) that will be inserted into the SQL statement before executing prepared statement. |
| authRequired | array[string] | false | List of auth services that are required to use this tool. |
========================================================================
## Dataform
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Dataform
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/dataform/
**Description:** Tools that work with Dataform.
========================================================================
## dataform-compile-local Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Dataform > dataform-compile-local Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/dataform/dataform-compile-local/
**Description:** A "dataform-compile-local" tool runs the `dataform compile` CLI command on a local project directory.
## About
A `dataform-compile-local` tool runs the `dataform compile` command on a local
Dataform project.
It is a standalone tool and **is not** compatible with any sources.
At invocation time, the tool executes `dataform compile --json` in the specified
project directory and returns the resulting JSON object from the CLI.
`dataform-compile-local` takes the following parameter:
- `project_dir` (string): The absolute or relative path to the local Dataform
project directory. The server process must have read access to this path.
## Requirements
### Dataform CLI
This tool executes the `dataform` command-line interface (CLI) via a system
call. You must have the **`dataform` CLI** installed and available in the
server's system `PATH`.
You can typically install the CLI via `npm`:
```bash
npm install -g @dataform/cli
```
See the [official Dataform
documentation](https://cloud.google.com/dataform/docs/install-dataform-cli)
for more details.
## Example
```yaml
kind: tools
name: my_dataform_compiler
type: dataform-compile-local
description: Use this tool to compile a local Dataform project.
```
## Reference
| **field** | **type** | **required** | **description** |
|:------------|:---------|:-------------|:---------------------------------------------------|
| type | string | true | Must be "dataform-compile-local". |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## Dataplex Source
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Dataplex Source
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/dataplex/
**Description:** Dataplex Universal Catalog is a unified, intelligent governance solution for data and AI assets in Google Cloud. Dataplex Universal Catalog powers AI, analytics, and business intelligence at scale.
## About
[Dataplex][dataplex-docs] Universal Catalog is a unified, intelligent governance
solution for data and AI assets in Google Cloud. Dataplex Universal Catalog
powers AI, analytics, and business intelligence at scale.
At the heart of these governance capabilities is a catalog that contains a
centralized inventory of the data assets in your organization. Dataplex
Universal Catalog holds business, technical, and runtime metadata for all of
your data. It helps you discover relationships and semantics in the metadata by
applying artificial intelligence and machine learning.
[dataplex-docs]: https://cloud.google.com/dataplex/docs
## Available Tools
{{< list-tools >}}
## Example
```yaml
kind: sources
name: my-dataplex-source
type: "dataplex"
project: "my-project-id"
```
## Reference
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|----------------------------------------------------------------------------------|
| type | string | true | Must be "dataplex". |
| project | string | true | ID of the GCP project used for quota and billing purposes (e.g. "my-project-id"). |
## Advanced Usage
### Sample System Prompt
You can use the following system prompt as "Custom Instructions" in your client
application.
```
# Objective
Your primary objective is to help discover, organize and manage metadata related to data assets.
# Tone and Style
1. Adopt the persona of a senior subject matter expert
2. Your communication style must be:
1. Concise: Always favor brevity.
2. Direct: Avoid greetings (e.g., "Hi there!", "Certainly!"). Get straight to the point.
Example (Incorrect): Hi there! I see that you are looking for...
Example (Correct): This problem likely stems from...
3. Do not reiterate or summarize the question in the answer.
4. Crucially, always convey a tone of uncertainty and caution. Since you are interpreting metadata and have no way to externally verify your answers, never express complete confidence. Frame your responses as interpretations based solely on the provided metadata. Use a suggestive tone, not a prescriptive one:
Example (Correct): "The entry describes..."
Example (Correct): "According to the catalog,..."
Example (Correct): "Based on the metadata,..."
Example (Correct): "Based on the search results,..."
5. Do not make assumptions
# Data Model
## Entries
Entry represents a specific data asset. Entry acts as a metadata record for something that is managed by Catalog, such as:
- A BigQuery table or dataset
- A Cloud Storage bucket or folder
- An on-premises SQL table
## Aspects
While the Entry itself is a container, the rich descriptive information about the asset (e.g., schema, data types, business descriptions, classifications) is stored in associated components called Aspects. Aspects are created based on pre-defined blueprints known as Aspect Types.
## Aspect Types
Aspect Type is a reusable template that defines the schema for a set of metadata fields. Think of an Aspect Type as a structure for the kind of metadata that is organized in the catalog within the Entry.
Examples:
- projects/dataplex-types/locations/global/aspectTypes/analytics-hub-exchange
- projects/dataplex-types/locations/global/aspectTypes/analytics-hub
- projects/dataplex-types/locations/global/aspectTypes/analytics-hub-listing
- projects/dataplex-types/locations/global/aspectTypes/bigquery-connection
- projects/dataplex-types/locations/global/aspectTypes/bigquery-data-policy
- projects/dataplex-types/locations/global/aspectTypes/bigquery-dataset
- projects/dataplex-types/locations/global/aspectTypes/bigquery-model
- projects/dataplex-types/locations/global/aspectTypes/bigquery-policy
- projects/dataplex-types/locations/global/aspectTypes/bigquery-routine
- projects/dataplex-types/locations/global/aspectTypes/bigquery-row-access-policy
- projects/dataplex-types/locations/global/aspectTypes/bigquery-table
- projects/dataplex-types/locations/global/aspectTypes/bigquery-view
- projects/dataplex-types/locations/global/aspectTypes/cloud-bigtable-instance
- projects/dataplex-types/locations/global/aspectTypes/cloud-bigtable-table
- projects/dataplex-types/locations/global/aspectTypes/cloud-spanner-database
- projects/dataplex-types/locations/global/aspectTypes/cloud-spanner-instance
- projects/dataplex-types/locations/global/aspectTypes/cloud-spanner-table
- projects/dataplex-types/locations/global/aspectTypes/cloud-spanner-view
- projects/dataplex-types/locations/global/aspectTypes/cloudsql-database
- projects/dataplex-types/locations/global/aspectTypes/cloudsql-instance
- projects/dataplex-types/locations/global/aspectTypes/cloudsql-schema
- projects/dataplex-types/locations/global/aspectTypes/cloudsql-table
- projects/dataplex-types/locations/global/aspectTypes/cloudsql-view
- projects/dataplex-types/locations/global/aspectTypes/contacts
- projects/dataplex-types/locations/global/aspectTypes/dataform-code-asset
- projects/dataplex-types/locations/global/aspectTypes/dataform-repository
- projects/dataplex-types/locations/global/aspectTypes/dataform-workspace
- projects/dataplex-types/locations/global/aspectTypes/dataproc-metastore-database
- projects/dataplex-types/locations/global/aspectTypes/dataproc-metastore-service
- projects/dataplex-types/locations/global/aspectTypes/dataproc-metastore-table
- projects/dataplex-types/locations/global/aspectTypes/data-product
- projects/dataplex-types/locations/global/aspectTypes/data-quality-scorecard
- projects/dataplex-types/locations/global/aspectTypes/external-connection
- projects/dataplex-types/locations/global/aspectTypes/overview
- projects/dataplex-types/locations/global/aspectTypes/pubsub-topic
- projects/dataplex-types/locations/global/aspectTypes/schema
- projects/dataplex-types/locations/global/aspectTypes/sensitive-data-protection-job-result
- projects/dataplex-types/locations/global/aspectTypes/sensitive-data-protection-profile
- projects/dataplex-types/locations/global/aspectTypes/sql-access
- projects/dataplex-types/locations/global/aspectTypes/storage-bucket
- projects/dataplex-types/locations/global/aspectTypes/storage-folder
- projects/dataplex-types/locations/global/aspectTypes/storage
- projects/dataplex-types/locations/global/aspectTypes/usage
## Entry Types
Every Entry must conform to an Entry Type. The Entry Type acts as a template, defining the structure, required aspects, and constraints for Entries of that type.
Examples:
- projects/dataplex-types/locations/global/entryTypes/analytics-hub-exchange
- projects/dataplex-types/locations/global/entryTypes/analytics-hub-listing
- projects/dataplex-types/locations/global/entryTypes/bigquery-connection
- projects/dataplex-types/locations/global/entryTypes/bigquery-data-policy
- projects/dataplex-types/locations/global/entryTypes/bigquery-dataset
- projects/dataplex-types/locations/global/entryTypes/bigquery-model
- projects/dataplex-types/locations/global/entryTypes/bigquery-routine
- projects/dataplex-types/locations/global/entryTypes/bigquery-row-access-policy
- projects/dataplex-types/locations/global/entryTypes/bigquery-table
- projects/dataplex-types/locations/global/entryTypes/bigquery-view
- projects/dataplex-types/locations/global/entryTypes/cloud-bigtable-instance
- projects/dataplex-types/locations/global/entryTypes/cloud-bigtable-table
- projects/dataplex-types/locations/global/entryTypes/cloud-spanner-database
- projects/dataplex-types/locations/global/entryTypes/cloud-spanner-instance
- projects/dataplex-types/locations/global/entryTypes/cloud-spanner-table
- projects/dataplex-types/locations/global/entryTypes/cloud-spanner-view
- projects/dataplex-types/locations/global/entryTypes/cloudsql-mysql-database
- projects/dataplex-types/locations/global/entryTypes/cloudsql-mysql-instance
- projects/dataplex-types/locations/global/entryTypes/cloudsql-mysql-table
- projects/dataplex-types/locations/global/entryTypes/cloudsql-mysql-view
- projects/dataplex-types/locations/global/entryTypes/cloudsql-postgresql-database
- projects/dataplex-types/locations/global/entryTypes/cloudsql-postgresql-instance
- projects/dataplex-types/locations/global/entryTypes/cloudsql-postgresql-schema
- projects/dataplex-types/locations/global/entryTypes/cloudsql-postgresql-table
- projects/dataplex-types/locations/global/entryTypes/cloudsql-postgresql-view
- projects/dataplex-types/locations/global/entryTypes/cloudsql-sqlserver-database
- projects/dataplex-types/locations/global/entryTypes/cloudsql-sqlserver-instance
- projects/dataplex-types/locations/global/entryTypes/cloudsql-sqlserver-schema
- projects/dataplex-types/locations/global/entryTypes/cloudsql-sqlserver-table
- projects/dataplex-types/locations/global/entryTypes/cloudsql-sqlserver-view
- projects/dataplex-types/locations/global/entryTypes/dataform-code-asset
- projects/dataplex-types/locations/global/entryTypes/dataform-repository
- projects/dataplex-types/locations/global/entryTypes/dataform-workspace
- projects/dataplex-types/locations/global/entryTypes/dataproc-metastore-database
- projects/dataplex-types/locations/global/entryTypes/dataproc-metastore-service
- projects/dataplex-types/locations/global/entryTypes/dataproc-metastore-table
- projects/dataplex-types/locations/global/entryTypes/pubsub-topic
- projects/dataplex-types/locations/global/entryTypes/storage-bucket
- projects/dataplex-types/locations/global/entryTypes/storage-folder
- projects/dataplex-types/locations/global/entryTypes/vertexai-dataset
- projects/dataplex-types/locations/global/entryTypes/vertexai-feature-group
- projects/dataplex-types/locations/global/entryTypes/vertexai-feature-online-store
## Entry Groups
Entries are organized within Entry Groups, which are logical groupings of Entries. An Entry Group acts as a namespace for its Entries.
## Entry Links
Entries can be linked together using EntryLinks to represent relationships between data assets (e.g. foreign keys).
# Tool instructions
## Tool: dataplex_search_entries
## General
- Do not try to search within search results on your own.
- Do not fetch multiple pages of results unless explicitly asked.
## Search syntax
### Simple search
In its simplest form, a search query consists of a single predicate. Such a predicate can match several pieces of metadata:
- A substring of a name, display name, or description of a resource
- A substring of the type of a resource
- A substring of a column name (or nested column name) in the schema of a resource
- A substring of a project ID
- A string from an overview description
For example, the predicate foo matches the following resources:
- Resource with the name foo.bar
- Resource with the display name Foo Bar
- Resource with the description This is the foo script
- Resource with the exact type foo
- Column foo_bar in the schema of a resource
- Nested column foo_bar in the schema of a resource
- Project prod-foo-bar
- Resource with an overview containing the word foo
### Qualified predicates
You can qualify a predicate by prefixing it with a key that restricts the matching to a specific piece of metadata:
- An equal sign (=) restricts the search to an exact match.
- A colon (:) after the key matches the predicate to either a substring or a token within the value in the search results.
Tokenization splits the stream of text into a series of tokens, with each token usually corresponding to a single word. For example:
- name:foo selects resources with names that contain the foo substring, like foo1 and barfoo.
- description:foo selects resources with the foo token in the description, like bar and foo.
- location=foo matches resources in a specified location with foo as the location name.
The predicate keys type, system, location, and orgid support only the exact match (=) qualifier, not the substring qualifier (:). For example, type=foo or orgid=number.
Search syntax supports the following qualifiers:
- "name:x" - Matches x as a substring of the resource ID.
- "displayname:x" - Matches x as a substring of the resource display name.
- "column:x" - Matches x as a substring of the column name (or nested column name) in the schema of the resource.
- "description:x" - Matches x as a token in the resource description.
- "label:bar" - Matches BigQuery resources that have a label (with some value) and the label key has bar as a substring.
- "label=bar" - Matches BigQuery resources that have a label (with some value) and the label key equals bar as a string.
- "label:bar:x" - Matches x as a substring in the value of a label with a key bar attached to a BigQuery resource.
- "label=foo:bar" - Matches BigQuery resources where the key equals foo and the key value equals bar.
- "label.foo=bar" - Matches BigQuery resources where the key equals foo and the key value equals bar.
- "label.foo" - Matches BigQuery resources that have a label whose key equals foo as a string.
- "type=TYPE" - Matches resources of a specific entry type or its type alias.
- "projectid:bar" - Matches resources within Google Cloud projects that match bar as a substring in the ID.
- "parent:x" - Matches x as a substring of the hierarchical path of a resource. It supports the same syntax as the `name` predicate.
- "orgid=number" - Matches resources within a Google Cloud organization with the exact ID value of the number.
- "system=SYSTEM" - Matches resources from a specified system. For example, system=bigquery matches BigQuery resources.
- "location=LOCATION" - Matches resources in a specified location with an exact name. For example, location=us-central1 matches assets hosted in Iowa. BigQuery Omni assets support this qualifier by using the BigQuery Omni location name. For example, location=aws-us-east-1 matches BigQuery Omni assets in Northern Virginia.
- "createtime" - Finds resources that were created within, before, or after a given date or time. For example, "createtime:2019-01-01" matches resources created on 2019-01-01.
- "updatetime" - Finds resources that were updated within, before, or after a given date or time. For example, "updatetime>2019-01-01" matches resources updated after 2019-01-01.
### Aspect Search
To search for entries based on their attached aspects, use the following query syntax.
`has:x`
Matches `x` as a substring of the full path to the aspect type of an aspect that is attached to the entry, in the format `projectid.location.ASPECT_TYPE_ID`
`has=x`
Matches `x` as the full path to the aspect type of an aspect that is attached to the entry, in the format `projectid.location.ASPECT_TYPE_ID`
`xOPERATORvalue`
Searches for aspect field values. Matches x as a substring of the full path to the aspect type and field name of an aspect that is attached to the entry, in the format `projectid.location.ASPECT_TYPE_ID.FIELD_NAME`
The list of supported operators depends on the type of field in the aspect, as follows:
* **String**: `=` (exact match)
* **All number types**: `=`, `:`, `<`, `>`, `<=`, `>=`, `=>`, `=<`
* **Enum**: `=` (exact match only)
* **Datetime**: same as for numbers, but the values to compare are treated as datetimes instead of numbers
* **Boolean**: `=`
Only top-level fields of the aspect are searchable.
* Syntax for system aspect types:
* `ASPECT_TYPE_ID.FIELD_NAME`
* `dataplex-types.ASPECT_TYPE_ID.FIELD_NAME`
* `dataplex-types.LOCATION.ASPECT_TYPE_ID.FIELD_NAME`
For example, the following queries match entries where the value of the `type` field in the `bigquery-dataset` aspect is `default`:
* `bigquery-dataset.type=default`
* `dataplex-types.bigquery-dataset.type=default`
* `dataplex-types.global.bigquery-dataset.type=default`
* Syntax for custom aspect types:
* If the aspect is created in the global region: `PROJECT_ID.ASPECT_TYPE_ID.FIELD_NAME`
* If the aspect is created in a specific region: `PROJECT_ID.REGION.ASPECT_TYPE_ID.FIELD_NAME`
For example, the following queries match entries where the value of the `is-enrolled` field in the `employee-info` aspect is `true`.
* `example-project.us-central1.employee-info.is-enrolled=true`
* `example-project.employee-info.is-enrolled=true`
Example:
You can use the following filters:
- dataplex-types.global.bigquery-table.type={BIGLAKE_TABLE, BIGLAKE_OBJECT_TABLE, EXTERNAL_TABLE, TABLE}
- dataplex-types.global.storage.type={STRUCTURED, UNSTRUCTURED}
### Logical operators
A query can consist of several predicates joined by logical operators. If you don't specify an operator, logical AND is implied. For example, `foo bar` returns resources that match both predicate `foo` and predicate `bar`.
Logical AND and logical OR are supported. For example, `foo OR bar`.
You can negate a predicate with a `-` (hyphen) or `NOT` prefix. For example, `-name:foo` returns resources with names that don't match the predicate `foo`.
Logical operators are case-sensitive. `OR` and `AND` are acceptable whereas `or` and `and` are not.
### Abbreviated syntax
An abbreviated search syntax is also available, using `|` (vertical bar) for `OR` operators and `,` (comma) for `AND` operators.
For example, to search for entries inside one of many projects using the `OR` operator, you can use the following abbreviated syntax:
`projectid:(id1|id2|id3|id4)`
The same search without using abbreviated syntax looks like the following:
`projectid:id1 OR projectid:id2 OR projectid:id3 OR projectid:id4`
To search for entries with matching column names, use the following:
* **AND**: `column:(name1,name2,name3)`
* **OR**: `column:(name1|name2|name3)`
This abbreviated syntax works for qualified predicates except `label` in keyword search.
### Request
1. Always try to rewrite the prompt using search syntax.
### Response
1. If there are multiple search results found
1. Present the list of search results
2. Format the output in nested ordered list, for example:
Given
```
{
  "results": [
    {
      "name": "projects/test-project/locations/us/entryGroups/@bigquery-aws-us-east-1/entries/users",
      "entrySource": {
        "displayName": "Users",
        "description": "Table contains list of users.",
        "location": "aws-us-east-1",
        "system": "BigQuery"
      }
    },
    {
      "name": "projects/another_project/locations/us-central1/entryGroups/@bigquery/entries/top_customers",
      "entrySource": {
        "displayName": "Top customers",
        "description": "Table contains list of best customers.",
        "location": "us-central1",
        "system": "BigQuery"
      }
    }
  ]
}
```
Return output formatted as markdown nested list:
```
* Users:
  - projectId: test-project
  - location: aws-us-east-1
  - description: Table contains list of users.
* Top customers:
  - projectId: another_project
  - location: us-central1
  - description: Table contains list of best customers.
```
3. Ask to select one of the presented search results
2. If there is only one search result found
1. Present the search result immediately.
3. If no search results are found
1. Explain that no search results were found
2. Suggest providing a more specific search query.
## Tool: dataplex_lookup_entry
### Request
1. Always try to limit the size of the response by specifying the `aspect_types` parameter. Make sure to select `view=CUSTOM` when using the `aspect_types` parameter. If you do not know the name of the aspect type, use the `dataplex_search_aspect_types` tool.
2. If you do not know the name of the entry, use the `dataplex_search_entries` tool.
### Response
1. Unless asked for a specific aspect, respond with all aspects attached to the entry.
```
========================================================================
## dataplex-lookup-entry Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Dataplex Source > dataplex-lookup-entry Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/dataplex/dataplex-lookup-entry/
**Description:** A "dataplex-lookup-entry" tool returns details of a particular entry in Dataplex Catalog.
## About
A `dataplex-lookup-entry` tool returns details of a particular entry in Dataplex
Catalog.
`dataplex-lookup-entry` takes a required `name` parameter, which contains the
project and location to which the request should be attributed, in the form
`projects/{project}/locations/{location}`, and a required `entry` parameter,
which is the resource name of the entry, in the form
`projects/{project}/locations/{location}/entryGroups/{entryGroup}/entries/{entry}`.
It also optionally accepts the following parameters:
- `view` - Controls which parts of the entry the service should return. It
  takes integer values from 1 to 4, corresponding to the BASIC, FULL, CUSTOM,
  and ALL views.
- `aspectTypes` - Limits the aspects returned to the provided aspect types in
the format
`projects/{project}/locations/{location}/aspectTypes/{aspectType}`. It only
works for CUSTOM view.
- `paths` - Limits the aspects returned to those associated with the provided
paths within the Entry. It only works for CUSTOM view.
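As a sketch of how a client might assemble and sanity-check these arguments before invoking the tool, the helper below maps the view names to their integer codes in the order listed above (BASIC=1, FULL=2, CUSTOM=3, ALL=4) and enforces the rule that `aspectTypes` and `paths` only work with the CUSTOM view. The function itself is illustrative, not part of the Toolbox API:

```python
# Hypothetical helper: builds the argument dict for a dataplex-lookup-entry
# call. View codes follow the order listed in the docs above; aspectTypes and
# paths are rejected unless the CUSTOM view is selected.
VIEWS = {"BASIC": 1, "FULL": 2, "CUSTOM": 3, "ALL": 4}

def build_lookup_args(name, entry, view="BASIC", aspect_types=None, paths=None):
    if view not in VIEWS:
        raise ValueError(f"view must be one of {sorted(VIEWS)}")
    if (aspect_types or paths) and view != "CUSTOM":
        raise ValueError("aspectTypes/paths only work with view=CUSTOM")
    args = {"name": name, "entry": entry, "view": VIEWS[view]}
    if aspect_types:
        args["aspectTypes"] = aspect_types
    if paths:
        args["paths"] = paths
    return args

args = build_lookup_args(
    "projects/p/locations/us",
    "projects/p/locations/us/entryGroups/g/entries/e",
    view="CUSTOM",
    aspect_types=["projects/p/locations/us/aspectTypes/a"],
)
```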
## Compatible Sources
{{< compatible-sources >}}
## Requirements
### IAM Permissions
Dataplex uses [Identity and Access Management (IAM)][iam-overview] to control
user and group access to Dataplex resources. Toolbox will use your
[Application Default Credentials (ADC)][adc] to authorize and authenticate when
interacting with [Dataplex][dataplex-docs].
In addition to [setting the ADC for your server][set-adc], you need to ensure
the IAM identity has been given the correct IAM permissions for the tasks you
intend to perform. See [Dataplex Universal Catalog IAM permissions][iam-permissions]
and [Dataplex Universal Catalog IAM roles][iam-roles] for more information on
applying IAM permissions and roles to an identity.
[iam-overview]: https://cloud.google.com/dataplex/docs/iam-and-access-control
[adc]: https://cloud.google.com/docs/authentication#adc
[set-adc]: https://cloud.google.com/docs/authentication/provide-credentials-adc
[iam-permissions]: https://cloud.google.com/dataplex/docs/iam-permissions
[iam-roles]: https://cloud.google.com/dataplex/docs/iam-roles
[dataplex-docs]: https://cloud.google.com/dataplex
## Example
```yaml
kind: tools
name: lookup_entry
type: dataplex-lookup-entry
source: my-dataplex-source
description: Use this tool to retrieve a specific entry in Dataplex Catalog.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "dataplex-lookup-entry". |
| source | string | true | Name of the source the tool should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## dataplex-search-aspect-types Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Dataplex Source > dataplex-search-aspect-types Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/dataplex/dataplex-search-aspect-types/
**Description:** A "dataplex-search-aspect-types" tool finds aspect types relevant to the query.
## About
A `dataplex-search-aspect-types` tool fetches the metadata templates of aspect
types based on a search query.
`dataplex-search-aspect-types` optionally accepts the following parameters:
- `query` - Narrows the search down to aspect types matching the value of this
  parameter. If not provided, it fetches all aspect types available to the user.
- `pageSize` - Number of aspect types returned per search page. Defaults to `5`.
- `orderBy` - Specifies the ordering of results. Supported values are
  `relevance` (default), `last_modified_timestamp`, and
  `last_modified_timestamp asc`.
## Compatible Sources
{{< compatible-sources >}}
## Requirements
### IAM Permissions
Dataplex uses [Identity and Access Management (IAM)][iam-overview] to control
user and group access to Dataplex resources. Toolbox will use your
[Application Default Credentials (ADC)][adc] to authorize and authenticate when
interacting with [Dataplex][dataplex-docs].
In addition to [setting the ADC for your server][set-adc], you need to ensure
the IAM identity has been given the correct IAM permissions for the tasks you
intend to perform. See [Dataplex Universal Catalog IAM permissions][iam-permissions]
and [Dataplex Universal Catalog IAM roles][iam-roles] for more information on
applying IAM permissions and roles to an identity.
[iam-overview]: https://cloud.google.com/dataplex/docs/iam-and-access-control
[adc]: https://cloud.google.com/docs/authentication#adc
[set-adc]: https://cloud.google.com/docs/authentication/provide-credentials-adc
[iam-permissions]: https://cloud.google.com/dataplex/docs/iam-permissions
[iam-roles]: https://cloud.google.com/dataplex/docs/iam-roles
[dataplex-docs]: https://cloud.google.com/dataplex
## Example
```yaml
kind: tools
name: dataplex-search-aspect-types
type: dataplex-search-aspect-types
source: my-dataplex-source
description: Use this tool to find aspect types relevant to the query.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "dataplex-search-aspect-types". |
| source | string | true | Name of the source the tool should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## dataplex-search-entries Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Dataplex Source > dataplex-search-entries Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/dataplex/dataplex-search-entries/
**Description:** A "dataplex-search-entries" tool searches for entries based on the provided query.
## About
A `dataplex-search-entries` tool returns all entries in Dataplex Catalog (e.g.
tables, views, models) that match the given user query.
`dataplex-search-entries` takes a required `query` parameter, which is used to
filter the entries returned to the user. It also optionally accepts the
following parameters:
- `pageSize` - Number of results per search page. Defaults to `5`.
- `orderBy` - Specifies the ordering of results. Supported values are
  `relevance` (default), `last_modified_timestamp`, and
  `last_modified_timestamp asc`.
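The defaults and allowed `orderBy` values above can be enforced on the client side before the tool is called. A minimal sketch, assuming the parameter names and defaults listed in this section (the helper itself is hypothetical, not part of the Toolbox API):

```python
# Hypothetical helper: assembles arguments for dataplex-search-entries.
# The pageSize default (5) and the orderBy values mirror the docs above.
ORDER_BY = {"relevance", "last_modified_timestamp", "last_modified_timestamp asc"}

def build_search_args(query, page_size=5, order_by="relevance"):
    if not query:
        raise ValueError("query is required")
    if order_by not in ORDER_BY:
        raise ValueError(f"orderBy must be one of {sorted(ORDER_BY)}")
    return {"query": query, "pageSize": page_size, "orderBy": order_by}

args = build_search_args("system=bigquery", page_size=10)
```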
## Compatible Sources
{{< compatible-sources >}}
## Requirements
### IAM Permissions
Dataplex uses [Identity and Access Management (IAM)][iam-overview] to control
user and group access to Dataplex resources. Toolbox will use your
[Application Default Credentials (ADC)][adc] to authorize and authenticate when
interacting with [Dataplex][dataplex-docs].
In addition to [setting the ADC for your server][set-adc], you need to ensure
the IAM identity has been given the correct IAM permissions for the tasks you
intend to perform. See [Dataplex Universal Catalog IAM permissions][iam-permissions]
and [Dataplex Universal Catalog IAM roles][iam-roles] for more information on
applying IAM permissions and roles to an identity.
[iam-overview]: https://cloud.google.com/dataplex/docs/iam-and-access-control
[adc]: https://cloud.google.com/docs/authentication#adc
[set-adc]: https://cloud.google.com/docs/authentication/provide-credentials-adc
[iam-permissions]: https://cloud.google.com/dataplex/docs/iam-permissions
[iam-roles]: https://cloud.google.com/dataplex/docs/iam-roles
[dataplex-docs]: https://cloud.google.com/dataplex
## Example
```yaml
kind: tools
name: dataplex-search-entries
type: dataplex-search-entries
source: my-dataplex-source
description: Use this tool to get all the entries based on the provided query.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "dataplex-search-entries". |
| source | string | true | Name of the source the tool should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## Dgraph Source
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Dgraph Source
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/dgraph/
**Description:** Dgraph is a fully open-source, built-for-scale graph database for Gen AI workloads.
{{< notice note >}}
**⚠️ Best Effort Maintenance**
This integration is maintained on a best-effort basis by the project
team/community. While we strive to address issues and provide workarounds when
resources are available, there are no guaranteed response times or code fixes.
The automated integration tests for this module are currently non-functional or
failing.
{{< /notice >}}
## About
[Dgraph][dgraph-docs] is an open-source graph database. It is designed for
real-time workloads, horizontal scalability, and data flexibility. Implemented
as a distributed system, Dgraph processes queries in parallel to deliver the
fastest result.
This source can connect to either a self-managed Dgraph cluster or one hosted on
Dgraph Cloud. If you're new to Dgraph, the fastest way to get started is to
[sign up for Dgraph Cloud][dgraph-login].
[dgraph-docs]: https://dgraph.io/docs
[dgraph-login]: https://cloud.dgraph.io/login
## Available Tools
{{< list-tools >}}
## Requirements
### Database User
When **connecting to a hosted Dgraph database**, this source uses the API key
for access. If you are using a dedicated environment, you will additionally need
the namespace and user credentials for that namespace.
For **connecting to a local or self-hosted Dgraph database**, use the namespace
and user credentials for that namespace.
## Example
```yaml
kind: sources
name: my-dgraph-source
type: dgraph
dgraphUrl: https://xxxx.cloud.dgraph.io
user: ${USER_NAME}
password: ${PASSWORD}
apiKey: ${API_KEY}
namespace: 0
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
## Reference
| **Field** | **Type** | **Required** | **Description** |
|-------------|:--------:|:------------:|--------------------------------------------------------------------------------------------------|
| type | string | true | Must be "dgraph". |
| dgraphUrl   |  string  |     true     | Connection URI (e.g. "https://xxxx.cloud.dgraph.io").                                             |
| user | string | false | Name of the Dgraph user to connect as (e.g., "groot"). |
| password | string | false | Password of the Dgraph user (e.g., "password"). |
| apiKey | string | false | API key to connect to a Dgraph Cloud instance. |
| namespace | uint64 | false | Dgraph namespace (not required for Dgraph Cloud Shared Clusters). |
========================================================================
## dgraph-dql Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Dgraph Source > dgraph-dql Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/dgraph/dgraph-dql/
**Description:** A "dgraph-dql" tool executes a pre-defined DQL statement against a Dgraph database.
{{< notice note >}}
**⚠️ Best Effort Maintenance**
This integration is maintained on a best-effort basis by the project
team/community. While we strive to address issues and provide workarounds when
resources are available, there are no guaranteed response times or code fixes.
The automated integration tests for this module are currently non-functional or
failing.
{{< /notice >}}
## About
A `dgraph-dql` tool executes a pre-defined DQL statement against a Dgraph
database.
To run a statement as a query, you need to set the config `isQuery=true`. For
upserts or mutations, set `isQuery=false`. You can also configure timeout for a
query.
> **Note:** This tool uses parameterized queries to prevent injection attacks.
> Query parameters can be used as substitutes for arbitrary expressions.
> Parameters cannot be used as substitutes for identifiers, column names, table
> names, or other parts of the query.
## Compatible Sources
{{< compatible-sources >}}
## Example
{{< tabpane persist="header" >}}
{{< tab header="Query" lang="yaml" >}}
kind: tools
name: search_user
type: dgraph-dql
source: my-dgraph-source
statement: |
query all($role: string){
users(func: has(name)) @filter(eq(role, $role) AND ge(age, 30) AND le(age, 50)) {
uid
name
email
role
age
}
}
isQuery: true
timeout: 20s
description: |
Use this tool to retrieve the details of users who are admins and are between 30 and 50 years old.
The query returns the user's name, email, role, and age.
This can be helpful when you want to fetch admin users within a specific age range.
Example: Fetch admins aged between 30 and 50:
[
{
"name": "Alice",
"role": "admin",
"age": 35
},
{
"name": "Bob",
"role": "admin",
"age": 45
}
]
parameters:
- name: $role
type: string
description: admin
{{< /tab >}}
{{< tab header="Mutation" lang="yaml" >}}
kind: tools
name: dgraph-manage-user-instance
type: dgraph-dql
source: my-dgraph-source
isQuery: false
statement: |
{
set {
_:user1 <name> $user1 .
_:user1 <email> $email1 .
_:user1 <role> "admin" .
_:user1 <age> "35" .
_:user2 <name> $user2 .
_:user2 <email> $email2 .
_:user2 <role> "admin" .
_:user2 <age> "45" .
}
}
description: |
Use this tool to insert or update user data into the Dgraph database.
The mutation adds or updates user details like name, email, role, and age.
Example: Add users Alice and Bob as admins with specific ages.
parameters:
- name: user1
type: string
description: Alice
- name: email1
type: string
description: alice@email.com
- name: user2
type: string
description: Bob
- name: email2
type: string
description: bob@email.com
{{< /tab >}}
{{< /tabpane >}}
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:---------------------------------------:|:------------:|-------------------------------------------------------------------------------------------|
| type | string | true | Must be "dgraph-dql". |
| source      | string                                  | true         | Name of the source the DQL query should execute on.                                        |
| description | string                                  | true         | Description of the tool that is passed to the LLM.                                         |
| statement   | string                                  | true         | DQL statement to execute.                                                                  |
| isQuery     | boolean                                 | false        | Set to `true` to run the statement as a query; set to `false` for mutations and upserts.   |
| timeout     | string                                  | false        | Timeout for the query (e.g. "20s").                                                        |
| parameters | [parameters](../#specifying-parameters) | false | List of [parameters](../#specifying-parameters) that will be used with the dql statement. |
========================================================================
## Elasticsearch Source
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Elasticsearch Source
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/elasticsearch/
**Description:** Elasticsearch is a distributed, free and open search and analytics engine for all types of data, including textual, numerical, geospatial, structured, and unstructured.
## About
[Elasticsearch][elasticsearch-docs] is a distributed, free and open search and
analytics engine for all types of data, including textual, numerical,
geospatial, structured, and unstructured.
If you are new to Elasticsearch, you can learn how to
[set up a cluster and start indexing data][elasticsearch-quickstart].
Elasticsearch uses [ES|QL][elasticsearch-esql] for querying data. ES|QL
is a powerful query language that allows you to search and aggregate data in
Elasticsearch.
See the [official
documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html)
for more information.
[elasticsearch-docs]:
https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html
[elasticsearch-quickstart]:
https://www.elastic.co/guide/en/elasticsearch/reference/current/getting-started.html
[elasticsearch-esql]:
https://www.elastic.co/guide/en/elasticsearch/reference/current/esql.html
## Available Tools
{{< list-tools >}}
## Requirements
### API Key
Toolbox uses an [API key][api-key] to authorize and authenticate when
interacting with [Elasticsearch][elasticsearch-docs].
In addition to [setting the API key for your server][set-api-key], you need to
ensure the API key has the correct permissions for the queries you intend to
run. See [API key management][api-key-management] for more information on
applying permissions to an API key.
[api-key]:
https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-create-api-key.html
[set-api-key]:
https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-create-api-key.html
[api-key-management]:
https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-get-api-key.html
## Example
```yaml
kind: sources
name: my-elasticsearch-source
type: "elasticsearch"
addresses:
- "http://localhost:9200"
apikey: "my-api-key"
```
## Reference
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|--------------------------------------------|
| type | string | true | Must be "elasticsearch". |
| addresses | []string | true | List of Elasticsearch hosts to connect to. |
| apikey | string | true | The API key to use for authentication. |
========================================================================
## elasticsearch-esql Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Elasticsearch Source > elasticsearch-esql Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/elasticsearch/elasticsearch-esql/
**Description:** Execute ES|QL queries.
## About
Execute ES|QL queries.
This tool allows you to execute ES|QL queries against your Elasticsearch
cluster. You can use this to perform complex searches and aggregations.
See the [official
documentation](https://www.elastic.co/docs/reference/query-languages/esql/esql-getting-started)
for more information.
## Compatible Sources
{{< compatible-sources >}}
## Parameters
| **name** | **type** | **required** | **description** |
|------------|:---------------------------------------:|:------------:|-----------------------------------------------------------------------------------------------------------------------------------------------------|
| query      | string                                  | false        | The ES\|QL query to run. It can also be supplied via `parameters`.                                                                                   |
| format     | string                                  | false        | The format of the query results. Default is json. Valid values are csv, json, tsv, txt, yaml, cbor, smile, or arrow.                                 |
| timeout    | integer                                 | false        | The timeout for the query in seconds. Default is 60 (1 minute).                                                                                      |
| parameters | [parameters](../#specifying-parameters) | false        | List of [parameters](../#specifying-parameters) that will be used with the ES\|QL query. Only supports "string", "integer", "float", "boolean".      |
## Example
```yaml
kind: tools
name: query_my_index
type: elasticsearch-esql
source: elasticsearch-source
description: Use this tool to execute ES|QL queries.
query: |
FROM my-index
| KEEP *
| LIMIT ?limit
parameters:
- name: limit
type: integer
description: Limit the number of results.
required: true
```
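A client can validate the `format` and `timeout` arguments against the values in the parameter table above before sending a request. A minimal sketch (the allowed formats and the 60-second default come from the table; the helper itself is illustrative):

```python
# Hypothetical client-side check for elasticsearch-esql arguments. The valid
# formats and the default 60-second timeout mirror the parameter table above.
VALID_FORMATS = {"csv", "json", "tsv", "txt", "yaml", "cbor", "smile", "arrow"}

def build_esql_args(query, fmt="json", timeout=60):
    if fmt not in VALID_FORMATS:
        raise ValueError(f"format must be one of {sorted(VALID_FORMATS)}")
    return {"query": query, "format": fmt, "timeout": timeout}

args = build_esql_args("FROM my-index | LIMIT 10", fmt="csv")
```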
========================================================================
## Firebird Source
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Firebird Source
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/firebird/
**Description:** Firebird is a powerful, cross-platform, and open-source relational database.
## About
[Firebird][fb-docs] is a relational database management system offering many
ANSI SQL standard features that runs on Linux, Windows, and a variety of Unix
platforms. It is known for its small footprint, powerful features, and easy
maintenance.
[fb-docs]: https://firebirdsql.org/
## Available Tools
{{< list-tools >}}
## Requirements
### Database User
This source uses standard authentication. You will need to [create a Firebird
user][fb-users] to login to the database with.
[fb-users]: https://www.firebirdsql.org/refdocs/langrefupd25-security-sql-user-mgmt.html#langrefupd25-security-create-user
## Example
```yaml
kind: sources
name: my_firebird_db
type: firebird
host: "localhost"
port: 3050
database: "/path/to/your/database.fdb"
user: ${FIREBIRD_USER}
password: ${FIREBIRD_PASS}
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
## Reference
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|------------------------------------------------------------------------------|
| type | string | true | Must be "firebird". |
| host      |  string  |     true     | IP address to connect to (e.g. "127.0.0.1").                                  |
| port | string | true | Port to connect to (e.g. "3050") |
| database | string | true | Path to the Firebird database file (e.g. "/var/lib/firebird/data/test.fdb"). |
| user | string | true | Name of the Firebird user to connect as (e.g. "SYSDBA"). |
| password | string | true | Password of the Firebird user (e.g. "masterkey"). |
========================================================================
## firebird-execute-sql Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Firebird Source > firebird-execute-sql Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/firebird/firebird-execute-sql/
**Description:** A "firebird-execute-sql" tool executes a SQL statement against a Firebird database.
## About
A `firebird-execute-sql` tool executes a SQL statement against a Firebird
database.
`firebird-execute-sql` takes one input parameter `sql` and runs the sql
statement against the `source`.
> **Note:** This tool is intended for developer assistant workflows with
> human-in-the-loop and shouldn't be used for production agents.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: execute_sql_tool
type: firebird-execute-sql
source: my_firebird_db
description: Use this tool to execute a SQL statement against the Firebird database.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "firebird-execute-sql". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## firebird-sql Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Firebird Source > firebird-sql Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/firebird/firebird-sql/
**Description:** A "firebird-sql" tool executes a pre-defined SQL statement against a Firebird database.
## About
A `firebird-sql` tool executes a pre-defined SQL statement against a Firebird
database.
The specified SQL statement is executed as a [prepared statement][fb-prepare],
and supports both positional parameters (`?`) and named parameters (`:param_name`).
Parameters will be inserted according to their position or name. If template
parameters are included, they will be resolved before the execution of the
prepared statement.
[fb-prepare]: https://firebirdsql.org/refdocs/langrefupd25-psql-execstat.html
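Because the prepared statement accepts both positional (`?`) and named (`:param_name`) placeholders, a client that only supports positional binding can rewrite named placeholders itself and keep track of the parameter order. A minimal sketch, assuming simple identifiers and no `:` characters inside string literals (the helper is illustrative, not part of Toolbox):

```python
import re

# Rewrites named placeholders (:param_name) into positional ones (?) and
# returns the parameter names in the order they appear, so values can be
# bound positionally.
def to_positional(statement):
    order = []
    def repl(match):
        order.append(match.group(1))
        return "?"
    sql = re.sub(r":(\w+)", repl, statement)
    return sql, order

sql, order = to_positional(
    "SELECT * FROM flights WHERE airline = :airline AND flight_number = :flight_number"
)
```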
## Compatible Sources
{{< compatible-sources >}}
## Example
> **Note:** This tool uses parameterized queries to prevent SQL injections.
> Query parameters can be used as substitutes for arbitrary expressions.
> Parameters cannot be used as substitutes for identifiers, column names, table
> names, or other parts of the query.
```yaml
kind: tools
name: search_flights_by_number
type: firebird-sql
source: my_firebird_db
statement: |
SELECT * FROM flights
WHERE airline = ?
AND flight_number = ?
ROWS 10
description: |
Use this tool to get information for a specific flight.
Takes an airline code and flight number and returns info on the flight.
Do NOT use this tool with a flight id. Do NOT guess an airline code or flight number.
An airline code is a two-character airline designator, followed by a flight
number, which is a 1 to 4 digit number.
For example, given CY 0123, the airline is "CY" and the flight_number is "123".
Another example is DL 1234, where the airline is "DL" and the flight_number is "1234".
If the tool returns more than one option, choose the date closest to today.
Example:
{{
"airline": "CY",
"flight_number": "888",
}}
Example:
{{
"airline": "DL",
"flight_number": "1234",
}}
parameters:
- name: airline
type: string
description: Airline unique 2 letter identifier
- name: flight_number
type: string
description: 1 to 4 digit number
```
### Example with Named Parameters
```yaml
kind: tools
name: search_flights_by_airline
type: firebird-sql
source: my_firebird_db
statement: |
SELECT * FROM flights
WHERE airline = :airline
AND departure_date >= :start_date
AND departure_date <= :end_date
ORDER BY departure_date
description: |
Search for flights by airline within a date range using named parameters.
parameters:
- name: airline
type: string
description: Airline unique 2 letter identifier
- name: start_date
type: string
description: Start date in YYYY-MM-DD format
- name: end_date
type: string
description: End date in YYYY-MM-DD format
```
### Example with Template Parameters
> **Note:** This tool allows direct modifications to the SQL statement,
> including identifiers, column names, and table names. **This makes it more
> vulnerable to SQL injections**. Using basic parameters only (see above) is
> recommended for performance and safety reasons. For more details, please check
> [templateParameters](../#template-parameters).
```yaml
kind: tools
name: list_table
type: firebird-sql
source: my_firebird_db
statement: |
SELECT * FROM {{.tableName}}
description: |
Use this tool to list all information from a specific table.
Example:
{{
"tableName": "flights",
}}
templateParameters:
- name: tableName
type: string
description: Table to select from
```
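Because template parameters like `tableName` are substituted directly into the SQL text, an agent-facing deployment can reduce injection risk by checking the supplied identifier against an allow-list before the tool is invoked. A hypothetical client-side guard (the table names and function are illustrative):

```python
# Hypothetical guard: only lets known table names through to a tool that
# substitutes them via templateParameters. The allow-list is illustrative.
ALLOWED_TABLES = {"flights", "airports", "tickets"}

def safe_table_name(table_name):
    if table_name not in ALLOWED_TABLES:
        raise ValueError(f"table {table_name!r} is not in the allow-list")
    return table_name
```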
## Reference
| **field** | **type** | **required** | **description** |
|--------------------|:---------------------------------------------:|:------------:|-----------------------------------------------------------------------------------------------------------------------------------------|
| type | string | true | Must be "firebird-sql". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| statement | string | true | SQL statement to execute on. |
| parameters | [parameters](../#specifying-parameters) | false | List of [parameters](../#specifying-parameters) that will be inserted into the SQL statement. |
| templateParameters | [templateParameters](../#template-parameters) | false | List of [templateParameters](../#template-parameters) that will be inserted into the SQL statement before executing prepared statement. |
========================================================================
## Firestore Source
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Firestore Source
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/firestore/
**Description:** Firestore is a NoSQL document database built for automatic scaling, high performance, and ease of application development. It's a fully managed, serverless database that supports mobile, web, and server development.
## About
[Firestore][firestore-docs] is a NoSQL document database built for automatic
scaling, high performance, and ease of application development. While the
Firestore interface has many of the same features as traditional databases,
as a NoSQL database it differs from them in the way it describes relationships
between data objects.
If you are new to Firestore, you can [create a database and learn the
basics][firestore-quickstart].
[firestore-docs]: https://cloud.google.com/firestore/docs
[firestore-quickstart]: https://cloud.google.com/firestore/docs/quickstart-servers
## Available Tools
{{< list-tools >}}
## Requirements
### IAM Permissions
Firestore uses [Identity and Access Management (IAM)][iam-overview] to control
user and group access to Firestore resources. Toolbox will use your [Application
Default Credentials (ADC)][adc] to authorize and authenticate when interacting
with [Firestore][firestore-docs].
In addition to [setting the ADC for your server][set-adc], you need to ensure
the IAM identity has been given the correct IAM permissions for accessing
Firestore. Common roles include:
- `roles/datastore.user` - Read and write access to Firestore
- `roles/datastore.viewer` - Read-only access to Firestore
- `roles/firebaserules.admin` - Full management of Firebase Security Rules for
Firestore. This role is required for operations that involve creating,
updating, or managing Firestore security rules (see [Firebase Security Rules
roles][firebaserules-roles])
See [Firestore access control][firestore-iam] for more information on
applying IAM permissions and roles to an identity.
[iam-overview]: https://cloud.google.com/firestore/docs/security/iam
[adc]: https://cloud.google.com/docs/authentication#adc
[set-adc]: https://cloud.google.com/docs/authentication/provide-credentials-adc
[firestore-iam]: https://cloud.google.com/firestore/docs/security/iam
[firebaserules-roles]:
https://cloud.google.com/iam/docs/roles-permissions/firebaserules
### Database Selection
Firestore allows you to create multiple databases within a single project. Each
database is isolated from the others and has its own set of documents and
collections. If you don't specify a database in your configuration, the default
database named `(default)` will be used.
## Example
```yaml
kind: sources
name: my-firestore-source
type: "firestore"
project: "my-project-id"
# database: "my-database" # Optional, defaults to "(default)"
```
## Reference
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|----------------------------------------------------------------------------------------------------------|
| type | string | true | Must be "firestore". |
| project | string | true | Id of the GCP project that contains the Firestore database (e.g. "my-project-id"). |
| database | string | false | Name of the Firestore database to connect to. Defaults to "(default)" if not specified. |
========================================================================
## firestore-add-documents Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Firestore Source > firestore-add-documents Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/firestore/firestore-add-documents/
**Description:** A "firestore-add-documents" tool adds a document to a given collection path.
## About
The `firestore-add-documents` tool allows you to add new documents to a
Firestore collection. It supports all Firestore data types using Firestore's
native JSON format. The tool automatically generates a unique document ID for
each new document.
## Compatible Sources
{{< compatible-sources >}}
## Parameters
| Parameter | Type | Required | Description |
|------------------|---------|----------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `collectionPath` | string | Yes | The path of the collection where the document will be added |
| `documentData` | map | Yes | The data to be added as a document to the given collection. Must use [Firestore's native JSON format](https://cloud.google.com/firestore/docs/reference/rest/Shared.Types/ArrayValue#Value) with typed values |
| `returnData` | boolean | No | If set to true, the output will include the data of the created document. Defaults to false to help avoid overloading the context |
### Data Type Format
The tool requires Firestore's native JSON format for document data. Each field
must be wrapped with its type indicator:
#### Basic Types
- **String**: `{"stringValue": "your string"}`
- **Integer**: `{"integerValue": "123"}` or `{"integerValue": 123}`
- **Double**: `{"doubleValue": 123.45}`
- **Boolean**: `{"booleanValue": true}`
- **Null**: `{"nullValue": null}`
- **Bytes**: `{"bytesValue": "base64EncodedString"}`
- **Timestamp**: `{"timestampValue": "2025-01-07T10:00:00Z"}` (RFC3339 format)
#### Complex Types
- **GeoPoint**: `{"geoPointValue": {"latitude": 34.052235, "longitude": -118.243683}}`
- **Array**: `{"arrayValue": {"values": [{"stringValue": "item1"}, {"integerValue": "2"}]}}`
- **Map**: `{"mapValue": {"fields": {"key1": {"stringValue": "value1"}, "key2": {"booleanValue": true}}}}`
- **Reference**: `{"referenceValue": "collection/document"}`
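Producing this typed format by hand is error-prone. A small helper that wraps plain values in their type indicators can reduce mistakes; the following Python sketch is illustrative only (the `wrap` helper is not part of any Toolbox SDK):

```python
import base64
from datetime import datetime, timezone

def wrap(value):
    """Wrap a plain Python value in Firestore's native JSON type indicator."""
    if value is None:
        return {"nullValue": None}
    if isinstance(value, bool):  # must check bool before int: bool subclasses int
        return {"booleanValue": value}
    if isinstance(value, int):
        return {"integerValue": str(value)}  # string form avoids precision loss
    if isinstance(value, float):
        return {"doubleValue": value}
    if isinstance(value, str):
        return {"stringValue": value}
    if isinstance(value, bytes):
        return {"bytesValue": base64.b64encode(value).decode("ascii")}
    if isinstance(value, datetime):
        return {"timestampValue": value.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")}
    if isinstance(value, list):
        return {"arrayValue": {"values": [wrap(v) for v in value]}}
    if isinstance(value, dict):
        return {"mapValue": {"fields": {k: wrap(v) for k, v in value.items()}}}
    raise TypeError(f"unsupported type: {type(value).__name__}")
```

For example, `wrap({"name": "Acme", "employeeCount": 1500})` yields the nested `mapValue`/`stringValue`/`integerValue` structure shown in the examples below.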
## Example
### Basic Document Creation
```yaml
kind: tools
name: add-company-doc
type: firestore-add-documents
source: my-firestore
description: Add a new company document
```
Usage:
```json
{
"collectionPath": "companies",
"documentData": {
"name": {
"stringValue": "Acme Corporation"
},
"establishmentDate": {
"timestampValue": "2000-01-15T10:30:00Z"
},
"location": {
"geoPointValue": {
"latitude": 34.052235,
"longitude": -118.243683
}
},
"active": {
"booleanValue": true
},
"employeeCount": {
"integerValue": "1500"
},
"annualRevenue": {
"doubleValue": 1234567.89
}
}
}
```
### With Nested Maps and Arrays
```json
{
"collectionPath": "companies",
"documentData": {
"name": {
"stringValue": "Tech Innovations Inc"
},
"contactInfo": {
"mapValue": {
"fields": {
"email": {
"stringValue": "info@techinnovations.com"
},
"phone": {
"stringValue": "+1-555-123-4567"
},
"address": {
"mapValue": {
"fields": {
"street": {
"stringValue": "123 Innovation Drive"
},
"city": {
"stringValue": "San Francisco"
},
"state": {
"stringValue": "CA"
},
"zipCode": {
"stringValue": "94105"
}
}
}
}
}
}
},
"products": {
"arrayValue": {
"values": [
{
"stringValue": "Product A"
},
{
"stringValue": "Product B"
},
{
"mapValue": {
"fields": {
"productName": {
"stringValue": "Product C Premium"
},
"version": {
"integerValue": "3"
},
"features": {
"arrayValue": {
"values": [
{
"stringValue": "Advanced Analytics"
},
{
"stringValue": "Real-time Sync"
}
]
}
}
}
}
}
]
}
}
},
"returnData": true
}
```
### Complete Example with All Data Types
```json
{
"collectionPath": "test-documents",
"documentData": {
"stringField": {
"stringValue": "Hello World"
},
"integerField": {
"integerValue": "42"
},
"doubleField": {
"doubleValue": 3.14159
},
"booleanField": {
"booleanValue": true
},
"nullField": {
"nullValue": null
},
"timestampField": {
"timestampValue": "2025-01-07T15:30:00Z"
},
"geoPointField": {
"geoPointValue": {
"latitude": 37.7749,
"longitude": -122.4194
}
},
"bytesField": {
"bytesValue": "SGVsbG8gV29ybGQh"
},
"arrayField": {
"arrayValue": {
"values": [
{
"stringValue": "item1"
},
{
"integerValue": "2"
},
{
"booleanValue": false
}
]
}
},
"mapField": {
"mapValue": {
"fields": {
"nestedString": {
"stringValue": "nested value"
},
"nestedNumber": {
"doubleValue": 99.99
}
}
}
}
}
}
```
## Output Format
The tool returns a map containing:
| Field | Type | Description |
|----------------|--------|--------------------------------------------------------------------------------------------------------------------------------|
| `documentPath` | string | The full resource name of the created document (e.g., `projects/{projectId}/databases/{databaseId}/documents/{document_path}`) |
| `createTime` | string | The timestamp when the document was created |
| `documentData` | map | The data that was added (only included when `returnData` is true) |
## Advanced Usage
### Authentication
The tool can be configured to require authentication:
```yaml
kind: tools
name: secure-add-docs
type: firestore-add-documents
source: prod-firestore
description: Add documents with authentication required
authRequired:
- google-oauth
- api-key
```
### Best Practices
1. **Always use typed values**: Every field must be wrapped with its appropriate
type indicator (e.g., `{"stringValue": "text"}`)
2. **Integer values can be strings**: The tool accepts integer values as strings
(e.g., `{"integerValue": "1500"}`)
3. **Use returnData sparingly**: Only set to true when you need to verify the
exact data that was written
4. **Validate data before sending**: Ensure your data matches Firestore's native
JSON format
5. **Handle timestamps properly**: Use RFC3339 format for timestamp strings
6. **Base64 encode binary data**: Binary data must be base64 encoded in the
`bytesValue` field
7. **Consider security rules**: Ensure your Firestore security rules allow
document creation in the target collection
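For practices 5 and 6, standard library calls produce the required encodings; for example, in Python:

```python
import base64
from datetime import datetime, timezone

# Base64-encode binary data for a bytesValue field
payload = base64.b64encode(b"Hello World!").decode("ascii")  # "SGVsbG8gV29ybGQh"

# Format an RFC3339 UTC timestamp for a timestampValue field
ts = datetime(2025, 1, 7, 15, 30, tzinfo=timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
# "2025-01-07T15:30:00Z"
```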
## Troubleshooting
Common errors include:
- Invalid collection path
- Missing or invalid document data
- Permission denied (if Firestore security rules block the operation)
- Invalid data type conversions
## Additional Resources
- [`firestore-get-documents`](firestore-get-documents.md) - Retrieve documents
by their paths
- [`firestore-query-collection`](firestore-query-collection.md) - Query
documents in a collection
- [`firestore-delete-documents`](firestore-delete-documents.md) - Delete
documents from Firestore
========================================================================
## firestore-delete-documents Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Firestore Source > firestore-delete-documents Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/firestore/firestore-delete-documents/
**Description:** A "firestore-delete-documents" tool deletes multiple documents from Firestore by their paths.
## About
A `firestore-delete-documents` tool deletes multiple documents from Firestore by
their paths.
`firestore-delete-documents` takes one input parameter `documentPaths` which is
an array of document paths to delete. The tool uses Firestore's BulkWriter for
efficient batch deletion and returns the success status for each document.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: delete_user_documents
type: firestore-delete-documents
source: my-firestore-source
description: Use this tool to delete multiple documents from Firestore.
```
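A sample invocation payload (the document paths are illustrative):

```json
{
  "documentPaths": [
    "users/user123",
    "users/user456/orders/order789"
  ]
}
```

The tool returns a success status per path, so partially failing batches can be detected.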
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------------:|:------------:|----------------------------------------------------------|
| type | string | true | Must be "firestore-delete-documents". |
| source | string | true | Name of the Firestore source to delete documents from. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## firestore-get-documents Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Firestore Source > firestore-get-documents Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/firestore/firestore-get-documents/
**Description:** A "firestore-get-documents" tool retrieves multiple documents from Firestore by their paths.
## About
A `firestore-get-documents` tool retrieves multiple documents from Firestore by
their paths.
`firestore-get-documents` takes one input parameter `documentPaths` which is an
array of document paths, and returns the documents' data along with metadata
such as existence status, creation time, update time, and read time.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: get_user_documents
type: firestore-get-documents
source: my-firestore-source
description: Use this tool to retrieve multiple documents from Firestore.
```
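A sample invocation payload (the document paths are illustrative):

```json
{
  "documentPaths": [
    "users/user123",
    "users/user456"
  ]
}
```

Each returned entry includes the document's data along with its existence status, creation time, update time, and read time.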
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------------:|:------------:|------------------------------------------------------------|
| type | string | true | Must be "firestore-get-documents". |
| source | string | true | Name of the Firestore source to retrieve documents from. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## firestore-get-rules Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Firestore Source > firestore-get-rules Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/firestore/firestore-get-rules/
**Description:** A "firestore-get-rules" tool retrieves the active Firestore security rules for the current project.
## About
A `firestore-get-rules` tool retrieves the active [Firestore security
rules](https://firebase.google.com/docs/firestore/security/get-started) for the
current project.
`firestore-get-rules` takes no input parameters and returns the security rules
content along with metadata such as the ruleset name and timestamps.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: get_firestore_rules
type: firestore-get-rules
source: my-firestore-source
description: Use this tool to retrieve the active Firestore security rules.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:-------------:|:------------:|-------------------------------------------------------|
| type | string | true | Must be "firestore-get-rules". |
| source | string | true | Name of the Firestore source to retrieve rules from. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## firestore-list-collections Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Firestore Source > firestore-list-collections Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/firestore/firestore-list-collections/
**Description:** A "firestore-list-collections" tool lists collections in Firestore, either at the root level or as subcollections of a document.
## About
A `firestore-list-collections` tool lists
[collections](https://firebase.google.com/docs/firestore/data-model#collections)
in Firestore, either at the root level or as
[subcollections](https://firebase.google.com/docs/firestore/data-model#subcollections)
of a specific document.
`firestore-list-collections` takes an optional `parentPath` parameter to specify
a document path. If provided, it lists all subcollections of that document. If
not provided, it lists all root-level collections in the database.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: list_firestore_collections
type: firestore-list-collections
source: my-firestore-source
description: Use this tool to list collections in Firestore.
```
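A sample invocation payload listing the subcollections of a document (the path is illustrative):

```json
{
  "parentPath": "users/user123"
}
```

Omit `parentPath` (send an empty object `{}`) to list root-level collections instead.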
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:----------------:|:------------:|--------------------------------------------------------|
| type | string | true | Must be "firestore-list-collections". |
| source | string | true | Name of the Firestore source to list collections from. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## firestore-query Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Firestore Source > firestore-query Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/firestore/firestore-query/
**Description:** Query a Firestore collection with parameterizable filters and Firestore native JSON value types
## About
The `firestore-query` tool allows you to query Firestore collections with
dynamic, parameterizable filters that support Firestore's native JSON value
types. This tool is designed for querying a single collection, which is the
standard pattern in Firestore. The collection path itself can be parameterized,
making it flexible for various use cases. This tool is particularly useful when
you need to create reusable query templates with parameters that can be
substituted at runtime.
### Key Features
- **Parameterizable Queries**: Use Go template syntax to create dynamic queries
- **Dynamic Collection Paths**: The collection path can be parameterized for
flexibility
- **Native JSON Value Types**: Support for Firestore's typed values
(stringValue, integerValue, doubleValue, etc.)
- **Complex Filter Logic**: Support for AND/OR logical operators in filters
- **Template Substitution**: Dynamic collection paths, filters, and ordering
- **Query Analysis**: Optional query performance analysis with explain metrics
(non-parameterizable)
**Developer Note**: This tool serves as the general querying foundation that
developers can use to create custom tools with specific query patterns.
## Compatible Sources
{{< compatible-sources >}}
## Parameters
### Configuration Parameters
| Parameter | Type | Required | Description |
|------------------|---------|----------|-------------------------------------------------------------------------------------------------------------|
| `type` | string | Yes | Must be `firestore-query` |
| `source` | string | Yes | Name of the Firestore source to use |
| `description` | string | Yes | Description of what this tool does |
| `collectionPath` | string | Yes | Path to the collection to query (supports templates) |
| `filters` | string | No | JSON string defining query filters (supports templates) |
| `select`         | array   | No       | Fields to select from documents (supports templates; string or array)                                        |
| `orderBy`        | object  | No       | Ordering configuration with `field` and `direction` (supports templates for the field and direction values)  |
| `limit` | integer | No | Maximum number of documents to return (default: 100) (supports templates) |
| `analyzeQuery` | boolean | No | Whether to analyze query performance (default: false) |
| `parameters` | array | Yes | Parameter definitions for template substitution |
### Runtime Parameters
Runtime parameters are defined in the `parameters` array and can be used in
templates throughout the configuration.
### Filter Format
#### Simple Filter
```json
{
"field": "age",
"op": ">",
"value": {"integerValue": "25"}
}
```
#### AND Filter
```json
{
"and": [
{"field": "status", "op": "==", "value": {"stringValue": "active"}},
{"field": "age", "op": ">=", "value": {"integerValue": "18"}}
]
}
```
#### OR Filter
```json
{
"or": [
{"field": "role", "op": "==", "value": {"stringValue": "admin"}},
{"field": "role", "op": "==", "value": {"stringValue": "moderator"}}
]
}
```
#### Nested Filters
```json
{
"or": [
{"field": "type", "op": "==", "value": {"stringValue": "premium"}},
{
"and": [
{"field": "type", "op": "==", "value": {"stringValue": "standard"}},
{"field": "credits", "op": ">", "value": {"integerValue": "1000"}}
]
}
]
}
```
## Example
### Basic Configuration
```yaml
kind: tools
name: query_countries
type: firestore-query
source: my-firestore-source
description: Query countries with dynamic filters
collectionPath: "countries"
filters: |
  {
    "field": "continent",
    "op": "==",
    "value": {"stringValue": "{{.continent}}"}
  }
parameters:
  - name: continent
    type: string
    description: Continent to filter by
    required: true
```
### Advanced Configuration with Complex Filters
```yaml
kind: tools
name: advanced_query
type: firestore-query
source: my-firestore-source
description: Advanced query with complex filters
collectionPath: "{{.collection}}"
filters: |
  {
    "or": [
      {"field": "status", "op": "==", "value": {"stringValue": "{{.status}}"}},
      {
        "and": [
          {"field": "priority", "op": ">", "value": {"integerValue": "{{.priority}}"}},
          {"field": "area", "op": "<", "value": {"doubleValue": {{.maxArea}}}},
          {"field": "active", "op": "==", "value": {"booleanValue": {{.isActive}}}}
        ]
      }
    ]
  }
select:
  - name
  - status
  - priority
orderBy:
  field: "{{.sortField}}"
  direction: "{{.sortDirection}}"
limit: 100
analyzeQuery: true
parameters:
  - name: collection
    type: string
    description: Collection to query
    required: true
  - name: status
    type: string
    description: Status to filter by
    required: true
  - name: priority
    type: string
    description: Minimum priority value
    required: true
  - name: maxArea
    type: float
    description: Maximum area value
    required: true
  - name: isActive
    type: boolean
    description: Filter by active status
    required: true
  - name: sortField
    type: string
    description: Field to sort by
    required: false
    default: "createdAt"
  - name: sortDirection
    type: string
    description: Sort direction (ASCENDING or DESCENDING)
    required: false
    default: "DESCENDING"
```
### Firestore Native Value Types
The tool supports all Firestore native JSON value types:
| Type | Format | Example |
|-----------|------------------------------------------------------|----------------------------------------------------------------|
| String | `{"stringValue": "text"}` | `{"stringValue": "{{.name}}"}` |
| Integer | `{"integerValue": "123"}` or `{"integerValue": 123}` | `{"integerValue": "{{.age}}"}` or `{"integerValue": {{.age}}}` |
| Double | `{"doubleValue": 45.67}` | `{"doubleValue": {{.price}}}` |
| Boolean | `{"booleanValue": true}` | `{"booleanValue": {{.active}}}` |
| Null | `{"nullValue": null}` | `{"nullValue": null}` |
| Timestamp | `{"timestampValue": "RFC3339"}` | `{"timestampValue": "{{.date}}"}` |
| GeoPoint | `{"geoPointValue": {"latitude": 0, "longitude": 0}}` | See below |
| Array | `{"arrayValue": {"values": [...]}}` | See below |
| Map | `{"mapValue": {"fields": {...}}}` | See below |
#### Complex Type Examples
**GeoPoint:**
```json
{
"field": "location",
"op": "==",
"value": {
"geoPointValue": {
"latitude": 37.7749,
"longitude": -122.4194
}
}
}
```
**Array:**
```json
{
"field": "tags",
"op": "array-contains",
"value": {"stringValue": "{{.tag}}"}
}
```
### Supported Operators
- `<` - Less than
- `<=` - Less than or equal
- `>` - Greater than
- `>=` - Greater than or equal
- `==` - Equal
- `!=` - Not equal
- `array-contains` - Array contains value
- `array-contains-any` - Array contains any of the values
- `in` - Value is in array
- `not-in` - Value is not in array
### Example 1: Query with Dynamic Collection Path
```yaml
kind: tools
name: user_documents
type: firestore-query
source: my-firestore
description: Query user-specific documents
collectionPath: "users/{{.userId}}/documents"
filters: |
  {
    "field": "type",
    "op": "==",
    "value": {"stringValue": "{{.docType}}"}
  }
parameters:
  - name: userId
    type: string
    description: User ID
    required: true
  - name: docType
    type: string
    description: Document type to filter
    required: true
```
### Example 2: Complex Geographic Query
```yaml
kind: tools
name: location_search
type: firestore-query
source: my-firestore
description: Search locations by area and population
collectionPath: "cities"
filters: |
  {
    "and": [
      {"field": "country", "op": "==", "value": {"stringValue": "{{.country}}"}},
      {"field": "population", "op": ">", "value": {"integerValue": "{{.minPopulation}}"}},
      {"field": "area", "op": "<", "value": {"doubleValue": {{.maxArea}}}}
    ]
  }
orderBy:
  field: "population"
  direction: "DESCENDING"
limit: 50
parameters:
  - name: country
    type: string
    description: Country code
    required: true
  - name: minPopulation
    type: string
    description: Minimum population (as string for large numbers)
    required: true
  - name: maxArea
    type: float
    description: Maximum area in square kilometers
    required: true
```
### Example 3: Time-based Query with Analysis
```yaml
kind: tools
name: activity_log
type: firestore-query
source: my-firestore
description: Query activity logs within time range
collectionPath: "logs"
filters: |
  {
    "and": [
      {"field": "timestamp", "op": ">=", "value": {"timestampValue": "{{.startTime}}"}},
      {"field": "timestamp", "op": "<=", "value": {"timestampValue": "{{.endTime}}"}},
      {"field": "severity", "op": "in", "value": {"arrayValue": {"values": [
        {"stringValue": "ERROR"},
        {"stringValue": "CRITICAL"}
      ]}}}
    ]
  }
select:
  - timestamp
  - message
  - severity
  - userId
orderBy:
  field: "timestamp"
  direction: "DESCENDING"
analyzeQuery: true
parameters:
  - name: startTime
    type: string
    description: Start time in RFC3339 format
    required: true
  - name: endTime
    type: string
    description: End time in RFC3339 format
    required: true
```
## Advanced Usage
### Invoking the Tool
```bash
# Using curl
curl -X POST http://localhost:5000/api/tool/your-tool-name/invoke \
-H "Content-Type: application/json" \
-d '{
"continent": "Europe",
"minPopulation": "1000000",
"maxArea": 500000.5,
"isActive": true
}'
```
### Response Format
**Without analyzeQuery:**
```json
[
{
"id": "doc1",
"path": "countries/doc1",
"data": {
"name": "France",
"continent": "Europe",
"population": 67000000,
"area": 551695
},
"createTime": "2024-01-01T00:00:00Z",
"updateTime": "2024-01-15T10:30:00Z"
}
]
```
**With analyzeQuery:**
```json
{
"documents": [...],
"explainMetrics": {
"planSummary": {
"indexesUsed": [...]
},
"executionStats": {
"resultsReturned": 10,
"executionDuration": "15ms",
"readOperations": 10
}
}
}
```
### Best Practices
1. **Use Typed Values**: Always use Firestore's native JSON value types for
proper type handling
2. **String Numbers for Large Integers**: Use string representation for large
integers to avoid precision loss
3. **Template Security**: Validate all template parameters to prevent injection
attacks
4. **Index Optimization**: Use `analyzeQuery` to identify missing indexes
5. **Limit Results**: Always set a reasonable `limit` to prevent excessive data
retrieval
6. **Field Selection**: Use `select` to retrieve only necessary fields
### Technical Notes
- Queries operate on a single collection (the standard Firestore pattern)
- Maximum of 100 filters per query (configurable)
- Template parameters must be properly escaped in JSON contexts
- Complex nested queries may require composite indexes
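On the escaping note above: a string parameter substituted into the JSON `filters` template must be valid JSON string content. Serializing the raw value with a JSON library before substitution is one way to guarantee this (a general illustration, not Toolbox-specific):

```python
import json

raw = 'Côte d\'Ivoire, a "quoted" name'
# json.dumps yields a quoted, escaped JSON string literal
escaped = json.dumps(raw)
filter_json = '{"field": "country", "op": "==", "value": {"stringValue": %s}}' % escaped
parsed = json.loads(filter_json)  # parses cleanly; naive interpolation would not
```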
## Additional Resources
- [firestore-query-collection](firestore-query-collection.md) -
Non-parameterizable query tool
- [Firestore Source Configuration](_index.md)
- [Firestore Query
Documentation](https://firebase.google.com/docs/firestore/query-data/queries)
========================================================================
## firestore-query-collection Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Firestore Source > firestore-query-collection Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/firestore/firestore-query-collection/
**Description:** A "firestore-query-collection" tool allows you to query collections in Firestore.
## About
The `firestore-query-collection` tool allows you to query Firestore collections
with filters, ordering, and limit capabilities.
## Compatible Sources
{{< compatible-sources >}}
## Parameters
| **parameters** | **type** | **required** | **default** | **description** |
|------------------|:------------:|:------------:|:-----------:|-----------------------------------------------------------------------|
| `collectionPath` | string       | true         | -           | The path of the collection to query                                    |
| `filters` | array | false | - | Array of filter objects (as JSON strings) to apply to the query |
| `orderBy` | string | false | - | JSON string specifying field and direction to order results |
| `limit` | integer | false | 100 | Maximum number of documents to return |
| `analyzeQuery` | boolean | false | false | If true, returns query explain metrics including execution statistics |
## Example
To use this tool, you need to configure it in your YAML configuration file:
```yaml
kind: sources
name: my-firestore
type: firestore
project: my-gcp-project
database: "(default)"
---
kind: tools
name: query_collection
type: firestore-query-collection
source: my-firestore
description: Query Firestore collections with advanced filtering
```
### Filter Format
Each filter in the `filters` array should be a JSON string with the following
structure:
```json
{
"field": "fieldName",
"op": "operator",
"value": "compareValue"
}
```
Supported operators:
- `<` - Less than
- `<=` - Less than or equal to
- `>` - Greater than
- `>=` - Greater than or equal to
- `==` - Equal to
- `!=` - Not equal to
- `array-contains` - Array contains a specific value
- `array-contains-any` - Array contains any of the specified values
- `in` - Field value is in the specified array
- `not-in` - Field value is not in the specified array
Value types supported:
- String: `"value": "text"`
- Number: `"value": 123` or `"value": 45.67`
- Boolean: `"value": true` or `"value": false`
- Array: `"value": ["item1", "item2"]` (for `in`, `not-in`, `array-contains-any`
operators)
### OrderBy Format
The `orderBy` parameter should be a JSON string with the following structure:
```json
{
"field": "fieldName",
"direction": "ASCENDING"
}
```
Direction values:
- `ASCENDING`
- `DESCENDING`
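Because each filter, and the `orderBy` value, is itself a JSON string, building them with a JSON serializer avoids hand-escaping quotes; a Python sketch:

```python
import json

# Each filter is serialized to a JSON string, as the tool expects
filters = [
    json.dumps({"field": "age", "op": ">", "value": 18}),
    json.dumps({"field": "status", "op": "==", "value": "active"}),
]
payload = {
    "collectionPath": "users",
    "filters": filters,
    "orderBy": json.dumps({"field": "createdAt", "direction": "DESCENDING"}),
    "limit": 50,
}
```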
### Example Usage
#### Query with filters
```json
{
"collectionPath": "users",
"filters": [
"{\"field\": \"age\", \"op\": \">\", \"value\": 18}",
"{\"field\": \"status\", \"op\": \"==\", \"value\": \"active\"}"
],
"orderBy": "{\"field\": \"createdAt\", \"direction\": \"DESCENDING\"}",
"limit": 50
}
```
#### Query with array contains filter
```json
{
"collectionPath": "products",
"filters": [
"{\"field\": \"categories\", \"op\": \"array-contains\", \"value\": \"electronics\"}",
"{\"field\": \"price\", \"op\": \"<\", \"value\": 1000}"
],
"orderBy": "{\"field\": \"price\", \"direction\": \"ASCENDING\"}",
"limit": 20
}
```
#### Query with IN operator
```json
{
"collectionPath": "orders",
"filters": [
"{\"field\": \"status\", \"op\": \"in\", \"value\": [\"pending\", \"processing\"]}"
],
"limit": 100
}
```
#### Query with explain metrics
```json
{
"collectionPath": "users",
"filters": [
"{\"field\": \"age\", \"op\": \">=\", \"value\": 21}",
"{\"field\": \"active\", \"op\": \"==\", \"value\": true}"
],
"orderBy": "{\"field\": \"lastLogin\", \"direction\": \"DESCENDING\"}",
"limit": 25,
"analyzeQuery": true
}
```
## Output Format
### Standard Response (analyzeQuery = false)
The tool returns an array of documents, where each document includes:
```json
{
"id": "documentId",
"path": "collection/documentId",
"data": {
// Document fields
},
"createTime": "2025-01-07T12:00:00Z",
"updateTime": "2025-01-07T12:00:00Z",
"readTime": "2025-01-07T12:00:00Z"
}
```
### Response with Query Analysis (analyzeQuery = true)
When `analyzeQuery` is set to true, the tool returns a single object containing
documents and explain metrics:
```json
{
"documents": [
// Array of document objects as shown above
],
"explainMetrics": {
"planSummary": {
"indexesUsed": [
{
"query_scope": "Collection",
"properties": "(field ASC, __name__ ASC)"
}
]
},
"executionStats": {
"resultsReturned": 50,
"readOperations": 50,
"executionDuration": "120ms",
"debugStats": {
"indexes_entries_scanned": "1000",
"documents_scanned": "50",
"billing_details": {
"documents_billable": "50",
"index_entries_billable": "1000",
"min_query_cost": "0"
}
}
}
}
}
```
## Troubleshooting
The tool will return errors for:
- Invalid collection path
- Malformed filter JSON
- Unsupported operators
- Query execution failures
- Invalid orderBy format
========================================================================
## firestore-update-document Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Firestore Source > firestore-update-document Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/firestore/firestore-update-document/
**Description:** A "firestore-update-document" tool updates an existing document in Firestore.
## About
The `firestore-update-document` tool allows you to update existing documents in
Firestore. It supports all Firestore data types using Firestore's native JSON
format. The tool can perform either a full document update (a merge of all
fields provided in `documentData`) or a selective field update using an update
mask. When using an update mask,
fields referenced in the mask but not present in the document data will be
deleted from the document, following Firestore's native behavior.
## Compatible Sources
{{< compatible-sources >}}
## Parameters
| Parameter | Type | Required | Description |
|----------------|---------|----------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `documentPath` | string | Yes | The path of the document which needs to be updated |
| `documentData` | map | Yes | The data to update in the document. Must use [Firestore's native JSON format](https://cloud.google.com/firestore/docs/reference/rest/Shared.Types/ArrayValue#Value) with typed values |
| `updateMask` | array | No | The selective fields to update. If not provided, all fields in documentData will be updated. When provided, only the specified fields will be updated. Fields referenced in the mask but not present in documentData will be deleted from the document |
| `returnData` | boolean | No | If set to true, the output will include the data of the updated document. Defaults to false to help avoid overloading the context |
### Data Type Format
The tool requires Firestore's native JSON format for document data. Each field
must be wrapped with its type indicator:
#### Basic Types
- **String**: `{"stringValue": "your string"}`
- **Integer**: `{"integerValue": "123"}` or `{"integerValue": 123}`
- **Double**: `{"doubleValue": 123.45}`
- **Boolean**: `{"booleanValue": true}`
- **Null**: `{"nullValue": null}`
- **Bytes**: `{"bytesValue": "base64EncodedString"}`
- **Timestamp**: `{"timestampValue": "2025-01-07T10:00:00Z"}` (RFC3339 format)
#### Complex Types
- **GeoPoint**: `{"geoPointValue": {"latitude": 34.052235, "longitude": -118.243683}}`
- **Array**: `{"arrayValue": {"values": [{"stringValue": "item1"}, {"integerValue": "2"}]}}`
- **Map**: `{"mapValue": {"fields": {"key1": {"stringValue": "value1"}, "key2": {"booleanValue": true}}}}`
- **Reference**: `{"referenceValue": "collection/document"}`
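Wrapping every field by hand is error-prone, so a small encoder can help. This is an illustrative sketch (not part of the Toolbox) that maps plain Python values onto the typed format above; GeoPoints and references have no native Python type here and would still need explicit wrapping:

```python
import base64
from datetime import datetime, timezone

def to_firestore_value(v):
    """Wrap a plain Python value in Firestore's native JSON typed format."""
    if v is None:
        return {"nullValue": None}
    if isinstance(v, bool):  # must precede the int check: bool is a subclass of int
        return {"booleanValue": v}
    if isinstance(v, int):
        return {"integerValue": str(v)}
    if isinstance(v, float):
        return {"doubleValue": v}
    if isinstance(v, str):
        return {"stringValue": v}
    if isinstance(v, bytes):
        return {"bytesValue": base64.b64encode(v).decode("ascii")}
    if isinstance(v, datetime):
        return {"timestampValue": v.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")}
    if isinstance(v, list):
        return {"arrayValue": {"values": [to_firestore_value(x) for x in v]}}
    if isinstance(v, dict):
        return {"mapValue": {"fields": {k: to_firestore_value(x) for k, x in v.items()}}}
    raise TypeError(f"unsupported type: {type(v).__name__}")
```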
### Update Modes
#### Full Document Update (Merge All)
When `updateMask` is not provided, the tool performs a merge operation that
updates all fields specified in `documentData` while preserving other existing
fields in the document.
#### Selective Field Update
When `updateMask` is provided, only the fields listed in the mask are updated.
This allows for precise control over which fields are modified, added, or
deleted. To delete a field, include it in the `updateMask` but omit it from
`documentData`.
## Example
### Basic Document Update (Full Merge)
```yaml
kind: tools
name: update-user-doc
type: firestore-update-document
source: my-firestore
description: Update a user document
```
Usage:
```json
{
"documentPath": "users/user123",
"documentData": {
"name": {
"stringValue": "Jane Doe"
},
"lastUpdated": {
"timestampValue": "2025-01-15T10:30:00Z"
},
"status": {
"stringValue": "active"
},
"score": {
"integerValue": "150"
}
}
}
```
### Selective Field Update with Update Mask
```json
{
"documentPath": "users/user123",
"documentData": {
"email": {
"stringValue": "newemail@example.com"
},
"profile": {
"mapValue": {
"fields": {
"bio": {
"stringValue": "Updated bio text"
},
"avatar": {
"stringValue": "https://example.com/new-avatar.jpg"
}
}
}
}
},
"updateMask": ["email", "profile.bio", "profile.avatar"]
}
```
### Update with Field Deletion
To delete fields, include them in the `updateMask` but omit them from
`documentData`:
```json
{
"documentPath": "users/user123",
"documentData": {
"name": {
"stringValue": "John Smith"
}
},
"updateMask": ["name", "temporaryField", "obsoleteData"],
"returnData": true
}
```
In this example:
- `name` will be updated to "John Smith"
- `temporaryField` and `obsoleteData` will be deleted from the document (they
are in the mask but not in the data)
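The write-versus-delete rule can be stated as a set difference over top-level field names. A minimal sketch (illustrative only; it does not handle dotted paths like `profile.bio`):

```python
def classify_update(document_data, update_mask):
    """Split a mask into fields that will be written vs. deleted:
    fields in the mask but absent from documentData are deleted."""
    written = [f for f in update_mask if f in document_data]
    deleted = [f for f in update_mask if f not in document_data]
    return written, deleted

written, deleted = classify_update(
    {"name": {"stringValue": "John Smith"}},
    ["name", "temporaryField", "obsoleteData"],
)
```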
### Complex Update with Nested Data
```json
{
"documentPath": "companies/company456",
"documentData": {
"metadata": {
"mapValue": {
"fields": {
"lastModified": {
"timestampValue": "2025-01-15T14:30:00Z"
},
"modifiedBy": {
"stringValue": "admin@company.com"
}
}
}
},
"locations": {
"arrayValue": {
"values": [
{
"mapValue": {
"fields": {
"city": {
"stringValue": "San Francisco"
},
"coordinates": {
"geoPointValue": {
"latitude": 37.7749,
"longitude": -122.4194
}
}
}
}
},
{
"mapValue": {
"fields": {
"city": {
"stringValue": "New York"
},
"coordinates": {
"geoPointValue": {
"latitude": 40.7128,
"longitude": -74.0060
}
}
}
}
}
]
}
},
"revenue": {
"doubleValue": 5678901.23
}
},
"updateMask": ["metadata", "locations", "revenue"]
}
```
### Update with All Data Types
```json
{
"documentPath": "test-documents/doc789",
"documentData": {
"stringField": {
"stringValue": "Updated string"
},
"integerField": {
"integerValue": "999"
},
"doubleField": {
"doubleValue": 2.71828
},
"booleanField": {
"booleanValue": false
},
"nullField": {
"nullValue": null
},
"timestampField": {
"timestampValue": "2025-01-15T16:45:00Z"
},
"geoPointField": {
"geoPointValue": {
"latitude": 51.5074,
"longitude": -0.1278
}
},
"bytesField": {
"bytesValue": "VXBkYXRlZCBkYXRh"
},
"arrayField": {
"arrayValue": {
"values": [
{
"stringValue": "updated1"
},
{
"integerValue": "200"
},
{
"booleanValue": true
}
]
}
},
"mapField": {
"mapValue": {
"fields": {
"nestedString": {
"stringValue": "updated nested value"
},
"nestedNumber": {
"doubleValue": 88.88
}
}
}
},
"referenceField": {
"referenceValue": "users/updatedUser"
}
},
"returnData": true
}
```
## Output Format
The tool returns a map containing:
| Field | Type | Description |
|----------------|--------|---------------------------------------------------------------------------------------------|
| `documentPath` | string | The full path of the updated document |
| `updateTime` | string | The timestamp when the document was updated |
| `documentData` | map | The current data of the document after the update (only included when `returnData` is true) |
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|------------------------------------------------------|
| type | string | true | Must be "firestore-update-document". |
| source | string | true | Name of the Firestore source to update documents in. |
| description | string | true | Description of the tool that is passed to the LLM. |
## Advanced Usage
### Authentication
The tool can be configured to require authentication:
```yaml
kind: tools
name: secure-update-doc
type: firestore-update-document
source: prod-firestore
description: Update documents with authentication required
authRequired:
- google-oauth
- api-key
```
### Best Practices
1. **Use update masks for precision**: When you only need to update specific
fields, use the `updateMask` parameter to avoid unintended changes
2. **Always use typed values**: Every field must be wrapped with its appropriate
type indicator (e.g., `{"stringValue": "text"}`)
3. **Integer values can be strings**: The tool accepts integer values as strings
(e.g., `{"integerValue": "1500"}`)
4. **Use returnData sparingly**: Only set to true when you need to verify the
exact data after the update
5. **Validate data before sending**: Ensure your data matches Firestore's native
JSON format
6. **Handle timestamps properly**: Use RFC3339 format for timestamp strings
7. **Base64 encode binary data**: Binary data must be base64 encoded in the
`bytesValue` field
8. **Consider security rules**: Ensure your Firestore security rules allow
document updates
9. **Delete fields using update mask**: To delete fields, include them in the
`updateMask` but omit them from `documentData`
10. **Test with non-production data first**: Always test your updates on
non-critical documents first
### Differences from Add Documents
- **Purpose**: Updates existing documents vs. creating new ones
- **Document must exist**: Standard updates expect an existing document
  (although an update without an `updateMask` will create a document at the
  given path if none exists)
- **Update mask support**: Allows selective field updates
- **Field deletion**: Supports removing specific fields by including them in the
mask but not in the data
- **Returns updateTime**: Instead of createTime
## Troubleshooting
Common errors include:
- Document not found (when using update with a non-existent document)
- Invalid document path
- Missing or invalid document data
- Permission denied (if Firestore security rules block the operation)
- Invalid data type conversions
## Additional Resources
- [`firestore-add-documents`](firestore-add-documents.md) - Add new documents to
Firestore
- [`firestore-get-documents`](firestore-get-documents.md) - Retrieve documents
by their paths
- [`firestore-query-collection`](firestore-query-collection.md) - Query
documents in a collection
- [`firestore-delete-documents`](firestore-delete-documents.md) - Delete
documents from Firestore
========================================================================
## firestore-validate-rules Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Firestore Source > firestore-validate-rules Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/firestore/firestore-validate-rules/
**Description:** A "firestore-validate-rules" tool validates Firestore security rules syntax and semantic correctness without deploying them. It provides detailed error reporting with source positions and code snippets.
## About
The `firestore-validate-rules` tool validates Firestore security rules syntax
and semantic correctness without deploying them. It provides detailed error
reporting with source positions and code snippets.
### Use Cases
1. **Pre-deployment validation**: Validate rules before deploying to production
2. **CI/CD integration**: Integrate rules validation into your build pipeline
3. **Development workflow**: Quickly check rules syntax while developing
4. **Error debugging**: Get detailed error locations with code snippets
## Compatible Sources
{{< compatible-sources >}}
## Parameters
| **parameters** | **type** | **required** | **description** |
|-----------------|:------------:|:------------:|----------------------------------------------|
| source | string | true | The Firestore Rules source code to validate |
## Example
```yaml
kind: tools
name: firestore-validate-rules
type: firestore-validate-rules
source:
description: "Checks the provided Firestore Rules source for syntax and validation errors"
```
## Output Format
The tool returns a `ValidationResult` object containing:
```json
{
"valid": "boolean",
"issueCount": "number",
"formattedIssues": "string",
"rawIssues": [
{
"sourcePosition": {
"fileName": "string",
"line": "number",
"column": "number",
"currentOffset": "number",
"endOffset": "number"
},
"description": "string",
"severity": "string"
}
]
}
```
## Advanced Usage
### Authentication
This tool requires authentication if the source requires authentication.
### Example Usage
#### Validate simple rules
```json
{
"source": "rules_version = '2';\nservice cloud.firestore {\n match /databases/{database}/documents {\n match /{document=**} {\n allow read, write: if true;\n }\n }\n}"
}
```
#### Example response for valid rules
```json
{
"valid": true,
"issueCount": 0,
"formattedIssues": "✓ No errors detected. Rules are valid."
}
```
#### Example response with errors
```json
{
"valid": false,
"issueCount": 1,
"formattedIssues": "Found 1 issue(s) in rules source:\n\nERROR: Unexpected token ';' [Ln 4, Col 32]\n```\n allow read, write: if true;;\n ^\n```",
"rawIssues": [
{
"sourcePosition": {
"line": 4,
"column": 32,
"currentOffset": 105,
"endOffset": 106
},
"description": "Unexpected token ';'",
"severity": "ERROR"
}
]
}
```
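For CI pipelines it is often easier to work from `rawIssues` than from the pre-rendered `formattedIssues` string. A sketch of turning them into one-line messages (the helper name is illustrative):

```python
def format_issues(result):
    """Render rawIssues into one-line messages, one per issue."""
    lines = []
    for issue in result.get("rawIssues", []):
        pos = issue["sourcePosition"]
        lines.append(
            f"{issue['severity']}: {issue['description']} "
            f"[Ln {pos['line']}, Col {pos['column']}]"
        )
    return lines

# Sample following the error-response shape documented above
result = {
    "valid": False,
    "issueCount": 1,
    "rawIssues": [{
        "sourcePosition": {"line": 4, "column": 32},
        "description": "Unexpected token ';'",
        "severity": "ERROR",
    }],
}
```

A pipeline step could then fail the build whenever `result["valid"]` is false and print these lines.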
## Troubleshooting
The tool will return errors for:
- Missing or empty `source` parameter
- API errors when calling the Firebase Rules service
- Network connectivity issues
## Additional Resources
- [firestore-get-rules]({{< ref "firestore-get-rules" >}}): Retrieve current
active rules
- [firestore-query-collection]({{< ref "firestore-query-collection" >}}): Test
rules by querying collections
========================================================================
## Gemini Data Analytics Source
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Gemini Data Analytics Source
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cloudgda/
**Description:** A "cloud-gemini-data-analytics" source provides a client for the Gemini Data Analytics API.
## About
The `cloud-gemini-data-analytics` source provides a client to interact with the [Gemini Data Analytics API](https://docs.cloud.google.com/gemini/docs/conversational-analytics-api/reference/rest). This allows tools to send natural language queries to the API.
Authentication can be handled in two ways:
1. **Application Default Credentials (ADC) (Recommended):** By default, the source uses ADC to authenticate with the API. The Toolbox server will fetch the credentials from its running environment (server-side authentication). This is the recommended method.
2. **Client-side OAuth:** If `useClientOAuth` is set to `true`, the source expects the authentication token to be provided by the caller when making a request to the Toolbox server (typically via an HTTP Bearer token). The Toolbox server will then forward this token to the underlying Gemini Data Analytics API calls.
## Available Tools
{{< list-tools >}}
## Example
```yaml
kind: sources
name: my-gda-source
type: cloud-gemini-data-analytics
projectId: my-project-id
---
kind: sources
name: my-oauth-gda-source
type: cloud-gemini-data-analytics
projectId: my-project-id
useClientOAuth: true
```
## Reference
| **field** | **type** | **required** | **description** |
| -------------- | :------: | :----------: | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| type | string | true | Must be "cloud-gemini-data-analytics". |
| projectId | string | true | The Google Cloud Project ID where the API is enabled. |
| useClientOAuth | boolean | false | If true, the source uses the token provided by the caller (forwarded to the API). Otherwise, it uses server-side Application Default Credentials (ADC). Defaults to `false`. |
========================================================================
## cloud-gemini-data-analytics-query Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Gemini Data Analytics Source > cloud-gemini-data-analytics-query Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/cloudgda/cloud-gda-query/
**Description:** A tool to convert natural language queries into SQL statements using the Gemini Data Analytics QueryData API.
## About
The `cloud-gemini-data-analytics-query` tool allows you to send natural language questions to the Gemini Data Analytics API and receive structured responses containing SQL queries, natural language answers, and explanations. For details on defining data agent context for database data sources, see the official [documentation](https://docs.cloud.google.com/gemini/docs/conversational-analytics-api/data-agent-authored-context-databases).
> [!NOTE]
> Only `alloydb`, `spannerReference`, and `cloudSqlReference` are supported as [datasource references](https://docs.cloud.google.com/gemini/docs/conversational-analytics-api/reference/rest/v1beta/projects.locations.dataAgents#DatasourceReferences).
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: my-gda-query-tool
type: cloud-gemini-data-analytics-query
source: my-gda-source
description: "Use this tool to send natural language queries to the Gemini Data Analytics API and receive SQL, natural language answers, and explanations."
location: ${your_database_location}
context:
datasourceReferences:
cloudSqlReference:
databaseReference:
projectId: "${your_project_id}"
region: "${your_database_instance_region}"
instanceId: "${your_database_instance_id}"
databaseId: "${your_database_name}"
engine: "POSTGRESQL"
agentContextReference:
contextSetId: "${your_context_set_id}" # E.g. projects/${project_id}/locations/${context_set_location}/contextSets/${context_set_id}
generationOptions:
generateQueryResult: true
generateNaturalLanguageAnswer: true
generateExplanation: true
generateDisambiguationQuestion: true
```
### Usage Flow
When using this tool, a `query` parameter containing a natural language query is provided to the tool (typically by an agent). The tool then interacts with the Gemini Data Analytics API using the context defined in your configuration.
The structure of the response depends on the `generationOptions` configured in your tool definition (e.g., enabling `generateQueryResult` will include the SQL query results).
See [Data Analytics API REST documentation](https://docs.cloud.google.com/gemini/docs/conversational-analytics-api/reference/rest/v1alpha/projects.locations/queryData) for details.
**Example Input Query:**
```text
How many accounts who have region in Prague are eligible for loans? A3 contains the data of region.
```
**Example API Response:**
```json
{
"generatedQuery": "SELECT COUNT(T1.account_id) FROM account AS T1 INNER JOIN loan AS T2 ON T1.account_id = T2.account_id INNER JOIN district AS T3 ON T1.district_id = T3.district_id WHERE T3.A3 = 'Prague'",
"intentExplanation": "I found a template that matches the user's question. The template asks about the number of accounts who have region in a given city and are eligible for loans. The question asks about the number of accounts who have region in Prague and are eligible for loans. The template's parameterized SQL is 'SELECT COUNT(T1.account_id) FROM account AS T1 INNER JOIN loan AS T2 ON T1.account_id = T2.account_id INNER JOIN district AS T3 ON T1.district_id = T3.district_id WHERE T3.A3 = ?'. I will replace the named parameter '?' with 'Prague'.",
"naturalLanguageAnswer": "There are 84 accounts from the Prague region that are eligible for loans.",
"queryResult": {
"columns": [
{
"type": "INT64"
}
],
"rows": [
{
"values": [
{
"value": "84"
}
]
}
],
"totalRowCount": "1"
}
}
```
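Which fields appear in the response depends on `generationOptions`, so client code should treat each one as optional. A sketch of unpacking the response shape shown above (the helper name and the abbreviated sample are illustrative):

```python
def extract_answer(response):
    """Pull the SQL, answer, and result rows out of a QueryData-style
    response; fields only appear when enabled in generationOptions,
    hence the .get() defaults."""
    return {
        "sql": response.get("generatedQuery"),
        "answer": response.get("naturalLanguageAnswer"),
        "rows": [
            [cell["value"] for cell in row["values"]]
            for row in response.get("queryResult", {}).get("rows", [])
        ],
    }

# Abbreviated sample following the response shape documented above
response = {
    "generatedQuery": "SELECT COUNT(account_id) FROM account",
    "naturalLanguageAnswer": "There are 84 accounts.",
    "queryResult": {
        "columns": [{"type": "INT64"}],
        "rows": [{"values": [{"value": "84"}]}],
        "totalRowCount": "1",
    },
}
```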
## Reference
| **field** | **type** | **required** | **description** |
| ----------------- | :------: | :----------: | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| type | string | true | Must be "cloud-gemini-data-analytics-query". |
| source | string | true | The name of the `cloud-gemini-data-analytics` source to use. |
| description | string | true | A description of the tool's purpose. |
| location | string | true | The Google Cloud location of the target database resource (e.g., "us-central1"). This is used to construct the parent resource name in the API call. |
| context | object | true | The context for the query, including datasource references. See [QueryDataContext](https://github.com/googleapis/googleapis/blob/b32495a713a68dd0dff90cf0b24021debfca048a/google/cloud/geminidataanalytics/v1beta/data_chat_service.proto#L156) for details. |
| generationOptions | object | false | Options for generating the response. See [GenerationOptions](https://github.com/googleapis/googleapis/blob/b32495a713a68dd0dff90cf0b24021debfca048a/google/cloud/geminidataanalytics/v1beta/data_chat_service.proto#L135) for details. |
========================================================================
## HTTP Source
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > HTTP Source
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/http/
**Description:** The HTTP source enables the Toolbox to retrieve data from a remote server using HTTP requests.
## About
The HTTP Source allows Toolbox to retrieve data from arbitrary HTTP
endpoints. This enables Generative AI applications to access data from web APIs
and other HTTP-accessible resources.
## Available Tools
{{< list-tools >}}
## Example
```yaml
kind: sources
name: my-http-source
type: http
baseUrl: https://api.example.com/data
timeout: 10s # default to 30s
headers:
Authorization: Bearer ${API_KEY}
Content-Type: application/json
queryParams:
param1: value1
param2: value2
# disableSslVerification: false
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
## Reference
| **field** | **type** | **required** | **description** |
|------------------------|:-----------------:|:------------:|------------------------------------------------------------------------------------------------------------------------------------|
| type | string | true | Must be "http". |
| baseUrl | string | true | The base URL for the HTTP requests (e.g., `https://api.example.com`). |
| timeout | string | false | The timeout for HTTP requests (e.g., "5s", "1m", refer to [ParseDuration][parse-duration-doc] for more examples). Defaults to 30s. |
| headers | map[string]string | false | Default headers to include in the HTTP requests. |
| queryParams | map[string]string | false | Default query parameters to include in the HTTP requests. |
| disableSslVerification | bool | false | Disable SSL certificate verification. This should only be used for local development. Defaults to `false`. |
[parse-duration-doc]: https://pkg.go.dev/time#ParseDuration
========================================================================
## http Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > HTTP Source > http Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/http/http-tool/
**Description:** A "http" tool sends out an HTTP request to an HTTP endpoint.
## About
The `http` tool allows you to make HTTP requests to APIs to retrieve data.
An HTTP request is the method by which a client communicates with a server to
retrieve or manipulate resources.
Toolbox allows you to configure the request URL, method, headers, query
parameters, and the request body for an HTTP Tool.
## Compatible Sources
{{< compatible-sources >}}
### URL
An HTTP request URL identifies the target the client wants to access.
Toolbox composes the request URL from 3 places:
1. The HTTP Source's `baseUrl`.
2. The HTTP Tool's `path` field.
3. The HTTP Tool's `pathParams` for dynamic path composed during Tool
invocation.
For example, the following config allows you to reach different paths of the
same server using multiple Tools:
```yaml
kind: sources
name: my-http-source
type: http
baseUrl: https://api.example.com
---
kind: tools
name: my-post-tool
type: http
source: my-http-source
method: POST
path: /update
description: Tool to update information to the example API
---
kind: tools
name: my-get-tool
type: http
source: my-http-source
method: GET
path: /search
description: Tool to search information from the example API
---
kind: tools
name: my-dynamic-path-tool
type: http
source: my-http-source
method: GET
path: /{{.myPathParam}}/search
description: Tool to reach endpoint based on the input to `myPathParam`
pathParams:
- name: myPathParam
type: string
description: The dynamic path parameter
```
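The three-part URL composition above can be sketched as a simple substitution over the `path` template. This is an illustrative model of the behavior, not the Toolbox's actual implementation:

```python
def compose_url(base_url, path, path_params=None):
    """Model of request-URL assembly: baseUrl + path, with {{.param}}
    placeholders filled from pathParams at invocation time."""
    for name, value in (path_params or {}).items():
        path = path.replace("{{." + name + "}}", str(value))
    return base_url.rstrip("/") + path
```

For example, `my-dynamic-path-tool` invoked with `myPathParam = "books"` would target `https://api.example.com/books/search`.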
### Headers
An HTTP request header is a key-value pair sent by a client to a server,
providing additional information about the request, such as the client's
preferences, the request body content type, and other metadata.
Headers specified by the HTTP Tool are combined with its HTTP Source headers for
the resulting HTTP request, and override the Source headers in case of conflict.
The HTTP Tool allows you to specify headers in two different ways:
- Static headers can be specified using the `headers` field, and will be the
same for every invocation:
```yaml
kind: tools
name: my-http-tool
type: http
source: my-http-source
method: GET
path: /search
description: Tool to search data from API
headers:
Authorization: API_KEY
Content-Type: application/json
```
- Dynamic headers can be specified as parameters in the `headerParams` field.
The `name` of the `headerParams` will be used as the header key, and the value
is determined by the LLM input upon Tool invocation:
```yaml
kind: tools
name: my-http-tool
type: http
source: my-http-source
method: GET
path: /search
description: some description
headerParams:
- name: Content-Type # Example LLM input: "application/json"
description: request content type
type: string
```
### Query parameters
Query parameters are key-value pairs appended to a URL after a question mark (?)
to provide additional information to the server for processing the request, like
filtering or sorting data.
- Static request query parameters should be specified in the `path` as part of
the URL itself:
```yaml
kind: tools
name: my-http-tool
type: http
source: my-http-source
method: GET
path: /search?language=en&id=1
description: Tool to search for item with ID 1 in English
```
- Dynamic request query parameters should be specified as parameters in the
`queryParams` section:
```yaml
kind: tools
name: my-http-tool
type: http
source: my-http-source
method: GET
path: /search
description: Tool to search for item with ID
queryParams:
- name: id
description: item ID
type: integer
```
### Request body
The request body payload is a string that supports parameter replacement
following [Go template][go-template-doc]'s annotations.
The parameter names in the `requestBody` should be preceded by "." and enclosed
by double curly brackets "{{}}". The values will be populated into the request
body payload upon Tool invocation.
Example:
```yaml
kind: tools
name: my-http-tool
type: http
source: my-http-source
method: GET
path: /search
description: Tool to search for person with name and age
requestBody: |
{
"age": {{.age}},
"name": "{{.name}}"
}
bodyParams:
- name: age
description: age number
type: integer
- name: name
description: name string
type: string
```
#### Formatting Parameters
Some complex parameters (such as arrays) may require additional formatting to
match the expected output. For convenience, you can specify one of the following
pre-defined functions before the parameter name to format it:
##### JSON
The `json` keyword converts a parameter into a JSON format.
{{< notice note >}}
Using `json` may add quotes around the substituted value for certain types
(such as strings).
{{< /notice >}}
Example:
```yaml
requestBody: |
{
"age": {{json .age}},
"name": {{json .name}},
"nickname": "{{json .nickname}}",
"nameArray": {{json .nameArray}}
}
```
will send the following output:
```yaml
{
"age": 18,
"name": "Katherine",
"nickname": ""Kat"", # Duplicate quotes
"nameArray": ["A", "B", "C"]
}
```
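The duplicate-quote pitfall comes from JSON serialization itself: serializing a string already produces a quoted literal, so adding your own quotes around `{{json .nickname}}` doubles them. Python's stdlib `json` module behaves the same way for these types:

```python
import json

# A serialized string is already quoted, so wrapping {{json .x}} in
# extra quotes in the template yields ""Kat"".
assert json.dumps("Kat") == '"Kat"'
# Arrays and numbers need no surrounding quotes in the template.
assert json.dumps(["A", "B", "C"]) == '["A", "B", "C"]'
assert json.dumps(18) == '18'
```

The rule of thumb: never put your own quotes around a `{{json ...}}` placeholder.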
## Example
```yaml
kind: tools
name: my-http-tool
type: http
source: my-http-source
method: GET
path: /search
description: some description
authRequired:
- my-google-auth-service
- other-auth-service
queryParams:
- name: country
description: some description
type: string
requestBody: |
{
"age": {{.age}},
"city": "{{.city}}"
}
bodyParams:
- name: age
description: age number
type: integer
- name: city
description: city string
type: string
headers:
Authorization: API_KEY
Content-Type: application/json
headerParams:
- name: Language
description: language string
type: string
```
## Reference
| **field** | **type** | **required** | **description** |
|--------------|:---------------------------------------:|:------------:|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| type | string | true | Must be "http". |
| source | string | true | Name of the source the HTTP request should be sent to. |
| description | string | true | Description of the tool that is passed to the LLM. |
| path | string | true | The path of the HTTP request. You can include static query parameters in the path string. |
| method | string | true | The HTTP method to use (e.g., GET, POST, PUT, DELETE). |
| headers | map[string]string | false | A map of headers to include in the HTTP request (overrides source headers). |
| requestBody | string | false | The request body payload. Use [go template][go-template-doc] with the parameter name as the placeholder (e.g., `{{.id}}` will be replaced with the value of the parameter that has name `id` in the `bodyParams` section). |
| queryParams | [parameters](../#specifying-parameters) | false | List of [parameters](../#specifying-parameters) that will be inserted into the query string. |
| bodyParams | [parameters](../#specifying-parameters) | false | List of [parameters](../#specifying-parameters) that will be inserted into the request body payload. |
| headerParams | [parameters](../#specifying-parameters) | false | List of [parameters](../#specifying-parameters) that will be inserted as the request headers. |
[go-template-doc]: https://pkg.go.dev/text/template
========================================================================
## Looker Source
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Looker Source
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/looker/
**Description:** Looker is a business intelligence tool that also provides a semantic layer.
## About
[Looker][looker-docs] is a web based business intelligence and data management
tool that provides a semantic layer to facilitate querying. It can be deployed
in the cloud, on GCP, or on premises.
[looker-docs]: https://cloud.google.com/looker/docs
## Available Tools
{{< list-tools >}}
## Requirements
### Looker User
This source only uses API authentication. You will need to
[create an API user][looker-user] to login to Looker.
[looker-user]:
https://cloud.google.com/looker/docs/api-auth#authentication_with_an_sdk
{{< notice note >}}
To use the Conversational Analytics API, you will need the following Google
Cloud APIs enabled and IAM permissions granted in your project.
{{< /notice >}}
### API Enablement in GCP
Enable the following APIs in your Google Cloud Project:
```
gcloud services enable geminidataanalytics.googleapis.com --project=$PROJECT_ID
gcloud services enable cloudaicompanion.googleapis.com --project=$PROJECT_ID
```
### IAM Permissions in GCP
In addition to [setting the ADC for your server][set-adc], you need to ensure
the IAM identity has been given the following IAM roles (or corresponding
permissions):
- `roles/looker.instanceUser`
- `roles/cloudaicompanion.user`
- `roles/geminidataanalytics.dataAgentStatelessUser`
To initialize Application Default Credentials, run `gcloud auth login
--update-adc` in your environment before starting MCP Toolbox.
[set-adc]: https://cloud.google.com/docs/authentication/provide-credentials-adc
## Example
```yaml
kind: sources
name: my-looker-source
type: looker
base_url: http://looker.example.com
client_id: ${LOOKER_CLIENT_ID}
client_secret: ${LOOKER_CLIENT_SECRET}
project: ${LOOKER_PROJECT}
location: ${LOOKER_LOCATION}
verify_ssl: true
timeout: 600s
```
The Looker base URL will look like "https://looker.example.com"; don't include
a trailing "/". In some cases, especially if your Looker instance is deployed
on-premises, you may need to add the API port number, as in
"https://looker.example.com:19999".
`verify_ssl` should almost always be "true" (all lower case) unless you are
using a self-signed SSL certificate for the Looker server. Anything other than
"true" is interpreted as false.
The client ID and client secret are seemingly random character sequences
assigned by the Looker server. If you are using Looker OAuth, you don't need
these settings.
The `project` and `location` fields are utilized **only** when using the
conversational analytics tool.
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
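The base-URL and `verify_ssl` conventions above can be expressed as a small helper. This is an illustrative sketch only, not part of the Toolbox itself:

```python
def normalize_looker_base_url(url: str) -> str:
    """Strip any trailing slash; Looker base URLs must not end with '/'."""
    return url.rstrip("/")


def parse_verify_ssl(value: str) -> bool:
    """Only the exact lowercase string "true" enables certificate checks;
    any other value is treated as false."""
    return value == "true"
```

For example, `normalize_looker_base_url("https://looker.example.com/")` yields `"https://looker.example.com"`, and `parse_verify_ssl("True")` is false because only the all-lowercase "true" counts.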
## Reference
| **field** | **type** | **required** | **description** |
|----------------------|:--------:|:------------:|-----------------------------------------------------------------------------------------------------------------------------------------------------|
| type | string | true | Must be "looker". |
| base_url | string | true | The URL of your Looker server with no trailing /. |
| client_id | string | false | The client id assigned by Looker. |
| client_secret | string | false | The client secret assigned by Looker. |
| verify_ssl | string | false | Whether to check the ssl certificate of the server. |
| project | string | false | The project id to use in Google Cloud. |
| location | string | false | The location to use in Google Cloud. (default: us) |
| timeout | string | false | Maximum time to wait for query execution (e.g. "30s", "2m"). By default, 120s is applied. |
| use_client_oauth | string | false | Use OAuth tokens instead of client_id and client_secret. (default: false) If a header name is provided, it will be used instead of "Authorization". |
| show_hidden_models | string | false | Show or hide hidden models. (default: true) |
| show_hidden_explores | string | false | Show or hide hidden explores. (default: true) |
| show_hidden_fields | string | false | Show or hide hidden fields. (default: true) |
========================================================================
## looker-add-dashboard-element Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Looker Source > looker-add-dashboard-element Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/looker/looker-add-dashboard-element/
**Description:** "looker-add-dashboard-element" creates a dashboard element in the given dashboard.
## About
The `looker-add-dashboard-element` tool creates a new tile (element) within an existing Looker dashboard.
Tiles are added in the order this tool is called for a given `dashboard_id`.
CRITICAL ORDER OF OPERATIONS:
1. Create the dashboard using `make_dashboard`.
2. Add any dashboard-level filters using `add_dashboard_filter`.
3. Then, add elements (tiles) using this tool.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: add_dashboard_element
type: looker-add-dashboard-element
source: looker-source
description: |
This tool creates a new tile (element) within an existing Looker dashboard.
Tiles are added in the order this tool is called for a given `dashboard_id`.
CRITICAL ORDER OF OPERATIONS:
1. Create the dashboard using `make_dashboard`.
2. Add any dashboard-level filters using `add_dashboard_filter`.
3. Then, add elements (tiles) using this tool.
Required Parameters:
- dashboard_id: The ID of the target dashboard, obtained from `make_dashboard`.
- model_name, explore_name, fields: These query parameters are inherited
from the `query` tool and are required to define the data for the tile.
Optional Parameters:
- title: An optional title for the dashboard tile.
- pivots, filters, sorts, limit, query_timezone: These query parameters are
inherited from the `query` tool and can be used to customize the tile's query.
- vis_config: A JSON object defining the visualization settings for this tile.
The structure and options are the same as for the `query_url` tool's `vis_config`.
Connecting to Dashboard Filters:
A dashboard element can be connected to one or more dashboard filters (created with
`add_dashboard_filter`). To do this, specify the `name` of the dashboard filter
and the `field` from the element's query that the filter should apply to.
The format for specifying the field is `view_name.field_name`.
```
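The `view_name.field_name` format mentioned above for binding dashboard filters to element fields can be checked with a small validator. This is a hypothetical helper for illustration, not a Toolbox API:

```python
import re

# Matches the `view_name.field_name` format used when connecting a
# dashboard element's query field to a dashboard filter.
FIELD_REF = re.compile(r"^\w+\.\w+$")


def is_field_reference(value: str) -> bool:
    """Return True if `value` looks like `view_name.field_name`."""
    return bool(FIELD_REF.match(value))
```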
## Reference
| **field** | **type** | **required** | **description** |
|:------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "looker-add-dashboard-element". |
| source      | string   |     true     | Name of the source Looker instance.                |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## looker-add-dashboard-filter Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Looker Source > looker-add-dashboard-filter Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/looker/looker-add-dashboard-filter/
**Description:** The "looker-add-dashboard-filter" tool adds a filter to a specified dashboard.
## About
The `looker-add-dashboard-filter` tool adds a filter to a specified Looker dashboard.
CRITICAL ORDER OF OPERATIONS:
1. Create a dashboard using `make_dashboard`.
2. Add all desired filters using this tool (`add_dashboard_filter`).
3. Finally, add dashboard elements (tiles) using `add_dashboard_element`.
## Compatible Sources
{{< compatible-sources >}}
## Parameters
| **parameter** | **type** | **required** | **default** | **description** |
|:----------------------|:--------:|:-----------------:|:--------------:|-------------------------------------------------------------------------------------------------------------------------------|
| dashboard_id | string | true | none | The ID of the dashboard to add the filter to, obtained from `make_dashboard`. |
| name | string | true | none | A unique internal identifier for the filter. This name is used later in `add_dashboard_element` to bind tiles to this filter. |
| title | string | true | none | The label displayed to users in the Looker UI. |
| filter_type           |  string  |       true        | `field_filter` | The type of filter. Can be `date_filter`, `number_filter`, `string_filter`, or `field_filter`.                                  |
| default_value | string | false | none | The initial value for the filter. |
| model | string | if `field_filter` | none | The name of the LookML model, obtained from `get_models`. |
| explore | string | if `field_filter` | none | The name of the explore within the model, obtained from `get_explores`. |
| dimension | string | if `field_filter` | none | The name of the field (e.g., `view_name.field_name`) to base the filter on, obtained from `get_dimensions`. |
| allow_multiple_values | boolean  |       false       |      true      | Whether the dashboard filter allows multiple values.                                                                            |
| required              | boolean  |       false       |     false      | Whether the filter must be set in order to run the dashboard.                                                                   |
## Example
```yaml
kind: tools
name: add_dashboard_filter
type: looker-add-dashboard-filter
source: looker-source
description: |
This tool adds a filter to a Looker dashboard.
CRITICAL ORDER OF OPERATIONS:
1. Create a dashboard using `make_dashboard`.
2. Add all desired filters using this tool (`add_dashboard_filter`).
3. Finally, add dashboard elements (tiles) using `add_dashboard_element`.
Parameters:
- dashboard_id (required): The ID from `make_dashboard`.
- name (required): A unique internal identifier for the filter. You will use this `name` later in `add_dashboard_element` to bind tiles to this filter.
- title (required): The label displayed to users in the UI.
- filter_type (required): One of `date_filter`, `number_filter`, `string_filter`, or `field_filter`.
- default_value (optional): The initial value for the filter.
  Field Filters (`filter_type: field_filter`):
If creating a field filter, you must also provide:
- model
- explore
- dimension
The filter will inherit suggestions and type information from this LookML field.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "looker-add-dashboard-filter". |
| source      | string   |     true     | Name of the source Looker instance.                |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## looker-conversational-analytics Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Looker Source > looker-conversational-analytics Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/looker/looker-conversational-analytics/
**Description:** The "looker-conversational-analytics" tool will use the Conversational Analaytics API to analyze data from Looker
## About
A `looker-conversational-analytics` tool allows you to ask questions about your
Looker data.
`looker-conversational-analytics` accepts two parameters:
1. `user_query_with_context`: The question asked of the Conversational Analytics
system.
2. `explore_references`: A list of one to five explores that can be queried to
answer the question. The form of the entry is `[{"model": "model name",
"explore": "explore name"}, ...]`
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: ask_data_insights
type: looker-conversational-analytics
source: looker-source
description: |
Use this tool to ask questions about your data using the Looker Conversational
Analytics API. You must provide a natural language query and a list of
1 to 5 model and explore combinations (e.g. [{'model': 'the_model', 'explore': 'the_explore'}]).
Use the 'get_models' and 'get_explores' tools to discover available models and explores.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "lookerca-conversational-analytics". |
| source      | string   |     true     | Name of the source Looker instance.                |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## looker-create-project-directory Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Looker Source > looker-create-project-directory Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/looker/looker-create-project-directory/
**Description:** A "looker-create-project-directory" tool creates a new directory in a LookML project.
## About
A `looker-create-project-directory` tool creates a new directory within a specified LookML project.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: looker-create-project-directory
type: looker-create-project-directory
source: looker-source
description: |
This tool creates a new directory within a specific LookML project.
It is useful for organizing project files.
Parameters:
- project_id (string): The ID of the LookML project.
- directory_path (string): The path of the directory to create.
Output:
A string confirming the creation of the directory.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "looker-create-project-directory". |
| source | string | true | Name of the source Looker instance. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## looker-create-project-file Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Looker Source > looker-create-project-file Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/looker/looker-create-project-file/
**Description:** A "looker-create-project-file" tool creates a new LookML file in a project.
## About
A `looker-create-project-file` tool creates a new LookML file in a project.
`looker-create-project-file` accepts a `project_id` parameter and a `file_path`
parameter, as well as the file content.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: create_project_file
type: looker-create-project-file
source: looker-source
description: |
This tool creates a new LookML file within a specified project, populating
it with the provided content.
Prerequisite: The Looker session must be in Development Mode. Use `dev_mode: true` first.
Parameters:
- project_id (required): The unique ID of the LookML project.
- file_path (required): The desired path and filename for the new file within the project.
- content (required): The full LookML content to write into the new file.
Output:
A confirmation message upon successful file creation.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "looker-create-project-file". |
| source | string | true | Name of the source Looker instance. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## looker-delete-project-directory Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Looker Source > looker-delete-project-directory Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/looker/looker-delete-project-directory/
**Description:** A "looker-delete-project-directory" tool deletes a directory from a LookML project.
## About
A `looker-delete-project-directory` tool deletes a directory from a specified LookML project.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: looker-delete-project-directory
type: looker-delete-project-directory
source: looker-source
description: |
This tool deletes a directory from a specific LookML project.
It is useful for removing unnecessary or obsolete directories.
Parameters:
- project_id (string): The ID of the LookML project.
- directory_path (string): The path of the directory to delete.
Output:
A string confirming the deletion of the directory.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "looker-delete-project-directory". |
| source | string | true | Name of the source Looker instance. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## looker-delete-project-file Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Looker Source > looker-delete-project-file Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/looker/looker-delete-project-file/
**Description:** A "looker-delete-project-file" tool deletes a LookML file in a project.
## About
A `looker-delete-project-file` tool deletes a LookML file in a project.
`looker-delete-project-file` accepts a `project_id` parameter and a `file_path` parameter.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: delete_project_file
type: looker-delete-project-file
source: looker-source
description: |
This tool permanently deletes a specified LookML file from within a project.
Use with caution, as this action cannot be undone through the API.
Prerequisite: The Looker session must be in Development Mode. Use `dev_mode: true` first.
Parameters:
- project_id (required): The unique ID of the LookML project.
- file_path (required): The exact path to the LookML file to delete within the project.
Output:
A confirmation message upon successful file deletion.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "looker-delete-project-file". |
| source | string | true | Name of the source Looker instance. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## looker-dev-mode Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Looker Source > looker-dev-mode Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/looker/looker-dev-mode/
**Description:** A "looker-dev-mode" tool changes the current session into and out of dev mode
## About
A `looker-dev-mode` tool changes the session into and out of dev mode.
`looker-dev-mode` accepts a boolean parameter, true to enter dev mode and false
to exit dev mode.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: dev_mode
type: looker-dev-mode
source: looker-source
description: |
This tool allows toggling the Looker IDE session between Development Mode and Production Mode.
Development Mode enables making and testing changes to LookML projects.
Parameters:
- enable (required): A boolean value.
- `true`: Switches the current session to Development Mode.
- `false`: Switches the current session to Production Mode.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "looker-dev-mode". |
| source | string | true | Name of the source Looker instance. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## looker-generate-embed-url Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Looker Source > looker-generate-embed-url Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/looker/looker-generate-embed-url/
**Description:** "looker-generate-embed-url" generates an embeddable URL for Looker content.
## About
The `looker-generate-embed-url` tool generates an embeddable URL for a given
piece of Looker content. The generated URL is created for the user authenticated
to the Looker source. When opened in a browser, it creates a Looker Embed
session.
`looker-generate-embed-url` takes two parameters:
1. the `type` of content (e.g., "dashboards", "looks", "query-visualization")
2. the `id` of the content
It's recommended to use other tools from the Looker MCP Toolbox alongside this
tool, for example to fetch dashboard IDs or generate a query whose results can
be supplied to this tool.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: generate_embed_url
type: looker-generate-embed-url
source: looker-source
description: |
This tool generates a signed, private embed URL for specific Looker content,
allowing users to access it directly.
Parameters:
- type (required): The type of content to embed. Common values include:
- `dashboards`
- `looks`
- `explore`
- id (required): The unique identifier for the content.
- For dashboards and looks, use the numeric ID (e.g., "123").
- For explores, use the format "model_name/explore_name".
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "looker-generate-embed-url" |
| source      | string   |     true     | Name of the source Looker instance.                |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## looker-get-connection-databases Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Looker Source > looker-get-connection-databases Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/looker/looker-get-connection-databases/
**Description:** A "looker-get-connection-databases" tool returns all the databases in a connection.
## About
A `looker-get-connection-databases` tool returns all the databases in a connection.
`looker-get-connection-databases` accepts a `conn` parameter.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: get_connection_databases
type: looker-get-connection-databases
source: looker-source
description: |
This tool retrieves a list of databases available through a specified Looker connection.
This is only applicable for connections that support multiple databases.
Use `get_connections` to check if a connection supports multiple databases.
Parameters:
- connection_name (required): The name of the database connection, obtained from `get_connections`.
Output:
A JSON array of strings, where each string is the name of an available database.
If the connection does not support multiple databases, an empty list or an error will be returned.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "looker-get-connection-databases". |
| source | string | true | Name of the source Looker instance. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## looker-get-connection-schemas Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Looker Source > looker-get-connection-schemas Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/looker/looker-get-connection-schemas/
**Description:** A "looker-get-connection-schemas" tool returns all the schemas in a connection.
## About
A `looker-get-connection-schemas` tool returns all the schemas in a connection.
`looker-get-connection-schemas` accepts a `conn` parameter and an optional `db` parameter.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: get_connection_schemas
type: looker-get-connection-schemas
source: looker-source
description: |
This tool retrieves a list of database schemas available through a specified
Looker connection.
Parameters:
- connection_name (required): The name of the database connection, obtained from `get_connections`.
- database (optional): An optional database name to filter the schemas.
Only applicable for connections that support multiple databases.
Output:
A JSON array of strings, where each string is the name of an available schema.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "looker-get-connection-schemas". |
| source | string | true | Name of the source Looker instance. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## looker-get-connection-table-columns Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Looker Source > looker-get-connection-table-columns Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/looker/looker-get-connection-table-columns/
**Description:** A "looker-get-connection-table-columns" tool returns all the columns for each table specified.
## About
A `looker-get-connection-table-columns` tool returns all the columns for each table specified.
`looker-get-connection-table-columns` accepts a `conn` parameter, a `schema` parameter, a `tables` parameter with a comma-separated list of tables, and an optional `db` parameter.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: get_connection_table_columns
type: looker-get-connection-table-columns
source: looker-source
description: |
This tool retrieves a list of columns for one or more specified tables within a
given database schema and connection.
Parameters:
- connection_name (required): The name of the database connection, obtained from `get_connections`.
- schema (required): The name of the schema where the tables reside, obtained from `get_connection_schemas`.
- tables (required): A comma-separated string of table names for which to retrieve columns
(e.g., "users,orders,products"), obtained from `get_connection_tables`.
- database (optional): The name of the database to filter by. Only applicable for connections
that support multiple databases (check with `get_connections`).
Output:
A JSON array of objects, where each object represents a column and contains details
such as `table_name`, `column_name`, `data_type`, and `is_nullable`.
```
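Since the `tables` parameter is a single comma-separated string rather than a list, a caller assembling it from discovered table names might use a helper like this. It is a hypothetical convenience, not a Toolbox API:

```python
def tables_param(names):
    """Join table names into the comma-separated string expected by the
    `tables` parameter (e.g., "users,orders,products")."""
    return ",".join(name.strip() for name in names)
```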
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "looker-get-connection-table-columns". |
| source | string | true | Name of the source Looker instance. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## looker-get-connection-tables Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Looker Source > looker-get-connection-tables Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/looker/looker-get-connection-tables/
**Description:** A "looker-get-connection-tables" tool returns all the tables in a connection.
## About
A `looker-get-connection-tables` tool returns all the tables in a connection.
`looker-get-connection-tables` accepts a `conn` parameter, a `schema` parameter,
and an optional `db` parameter.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: get_connection_tables
type: looker-get-connection-tables
source: looker-source
description: |
This tool retrieves a list of tables available within a specified database schema
through a Looker connection.
Parameters:
- connection_name (required): The name of the database connection, obtained from `get_connections`.
- schema (required): The name of the schema to list tables from, obtained from `get_connection_schemas`.
- database (optional): The name of the database to filter by. Only applicable for connections
that support multiple databases (check with `get_connections`).
Output:
A JSON array of strings, where each string is the name of an available table.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "looker-get-connection-tables". |
| source | string | true | Name of the source Looker instance. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## looker-get-connections Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Looker Source > looker-get-connections Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/looker/looker-get-connections/
**Description:** A "looker-get-connections" tool returns all the connections in the source.
## About
A `looker-get-connections` tool returns all the connections in the source.
`looker-get-connections` accepts no parameters.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: get_connections
type: looker-get-connections
source: looker-source
description: |
This tool retrieves a list of all database connections configured in the Looker system.
Parameters:
This tool takes no parameters.
Output:
A JSON array of objects, each representing a database connection and including details such as:
- `name`: The connection's unique identifier.
- `dialect`: The database dialect (e.g., "mysql", "postgresql", "bigquery").
- `default_schema`: The default schema for the connection.
- `database`: The associated database name (if applicable).
- `supports_multiple_databases`: A boolean indicating if the connection can access multiple databases.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "looker-get-connections". |
| source | string | true | Name of the source Looker instance. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## looker-get-dashboards Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Looker Source > looker-get-dashboards Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/looker/looker-get-dashboards/
**Description:** "looker-get-dashboards" tool searches for a saved Dashboard by name or description.
## About
The `looker-get-dashboards` tool searches for a saved Dashboard by
name or description.
`looker-get-dashboards` takes four parameters: `title`, `desc`, `limit`,
and `offset`.
Title and description use SQL-style wildcards and are case-insensitive.
Limit and offset page through a larger set of matches and default to 100
and 0, respectively.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: get_dashboards
type: looker-get-dashboards
source: looker-source
description: |
  This tool searches for saved dashboards in a Looker instance. It returns a list of JSON objects, each representing a dashboard.
  Search Parameters:
  - title (optional): Filter by dashboard title (supports wildcards).
  - folder_id (optional): Filter by the ID of the folder where the dashboard is saved.
  - user_id (optional): Filter by the ID of the user who created the dashboard.
  - description (optional): Filter by description content (supports wildcards).
  - id (optional): Filter by specific dashboard ID.
  - limit (optional): Maximum number of results to return. Defaults to a system limit.
  - offset (optional): Starting point for pagination.
  String Search Behavior:
  - Case-insensitive matching.
  - Supports SQL LIKE pattern match wildcards:
    - `%`: Matches any sequence of zero or more characters. (e.g., `"finan%"` matches "financial", "finance")
    - `_`: Matches any single character. (e.g., `"s_les"` matches "sales")
  - Special expressions for null checks:
    - `"IS NULL"`: Matches dashboards where the field is null.
    - `"NOT NULL"`: Excludes dashboards where the field is null.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "looker-get-dashboards". |
| source | string | true | Name of the source Looker instance. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## looker-get-dimensions Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Looker Source > looker-get-dimensions Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/looker/looker-get-dimensions/
**Description:** A "looker-get-dimensions" tool returns all the dimensions from a given explore in a given model in the source.
## About
A `looker-get-dimensions` tool returns all the dimensions from a given explore
in a given model in the source.
`looker-get-dimensions` accepts two parameters, the `model` and the `explore`.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: get_dimensions
type: looker-get-dimensions
source: looker-source
description: |
  This tool retrieves a list of dimensions defined within a specific Looker explore.
  Dimensions are non-aggregatable attributes or characteristics of your data
  (e.g., product name, order date, customer city) that can be used for grouping,
  filtering, or segmenting query results.
  Parameters:
  - model_name (required): The name of the LookML model, obtained from `get_models`.
  - explore_name (required): The name of the explore within the model, obtained from `get_explores`.
  Output Details:
  - If a dimension includes a `suggestions` field, its contents are valid values
    that can be used directly as filters for that dimension.
  - If a `suggest_explore` and `suggest_dimension` are provided, you can query
    that specified explore and dimension to retrieve a list of valid filter values.
```
The response is a JSON array whose elements have the following form:
```json
{
"name": "field name",
"description": "field description",
"type": "field type",
"label": "field label",
"label_short": "field short label",
"tags": ["tags", ...],
"synonyms": ["synonyms", ...],
"suggestions": ["suggestion", ...],
"suggest_explore": "explore",
"suggest_dimension": "dimension"
}
```
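One way a client might act on the `suggestions` and `suggest_explore`/`suggest_dimension` fields above. This is a minimal sketch over sample metadata; the dictionaries and the `filter_value_source` helper are illustrative, not real API output:

```python
def filter_value_source(dim: dict):
    """Decide where valid filter values for a dimension come from."""
    if dim.get("suggestions"):
        # The listed values can be used directly as filters.
        return ("inline", dim["suggestions"])
    if dim.get("suggest_explore") and dim.get("suggest_dimension"):
        # Query the referenced explore/dimension to list valid values.
        return ("lookup", (dim["suggest_explore"], dim["suggest_dimension"]))
    # No hints: accept free-form filter input.
    return ("free-form", None)

status = {"name": "orders.status", "suggestions": ["complete", "pending"]}
city = {"name": "users.city", "suggest_explore": "users",
        "suggest_dimension": "users.city"}
```

A client would follow the `"lookup"` branch with a query against the named explore and dimension before offering filter values to the user.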
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "looker-get-dimensions". |
| source | string | true | Name of the source Looker instance. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## looker-get-explores Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Looker Source > looker-get-explores Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/looker/looker-get-explores/
**Description:** A "looker-get-explores" tool returns all explores for the given model from the source.
## About
A `looker-get-explores` tool returns all explores
for a given model from the source.
`looker-get-explores` accepts one parameter, the
`model` id.
The return type is an array of maps; each map is formatted like:
```json
{
"name": "explore name",
"description": "explore description",
"label": "explore label",
"group_label": "group label"
}
```
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: get_explores
type: looker-get-explores
source: looker-source
description: |
  This tool retrieves a list of explores defined within a specific LookML model.
  Explores represent a curated view of your data, typically joining several
  tables together to allow for focused analysis on a particular subject area.
  The output provides details like the explore's `name` and `label`.
  Parameters:
  - model_name (required): The name of the LookML model, obtained from `get_models`.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "looker-get-explores". |
| source | string | true | Name of the source Looker instance. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## looker-get-filters Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Looker Source > looker-get-filters Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/looker/looker-get-filters/
**Description:** A "looker-get-filters" tool returns all the filters from a given explore in a given model in the source.
## About
A `looker-get-filters` tool returns all the filters from a given explore
in a given model in the source.
`looker-get-filters` accepts two parameters, the `model` and the `explore`.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: get_filters
type: looker-get-filters
source: looker-source
description: |
  This tool retrieves a list of "filter-only fields" defined within a specific
  Looker explore. These are special fields defined in LookML specifically to
  create user-facing filter controls that do not directly affect the `GROUP BY`
  clause of the SQL query. They are often used in conjunction with liquid templating
  to create dynamic queries.
  Note: Regular dimensions and measures can also be used as filters in a query.
  This tool *only* returns fields explicitly defined as `filter:` in LookML.
  Parameters:
  - model_name (required): The name of the LookML model, obtained from `get_models`.
  - explore_name (required): The name of the explore within the model, obtained from `get_explores`.
```
The response is a JSON array whose elements have the following form:
```json
{
"name": "field name",
"description": "field description",
"type": "field type",
"label": "field label",
"label_short": "field short label",
"tags": ["tags", ...],
"synonyms": ["synonyms", ...],
"suggestions": ["suggestion", ...],
"suggest_explore": "explore",
"suggest_dimension": "dimension"
}
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "looker-get-filters". |
| source | string | true | Name of the source Looker instance. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## looker-get-looks Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Looker Source > looker-get-looks Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/looker/looker-get-looks/
**Description:** "looker-get-looks" searches for saved Looks in a Looker source.
## About
The `looker-get-looks` tool searches for a saved Look by
name or description.
`looker-get-looks` takes four parameters: `title`, `desc`, `limit`,
and `offset`.
`title` and `desc` use SQL-style wildcards and are case-insensitive.
`limit` and `offset` page through a larger set of matches and
default to 100 and 0, respectively.
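Paging through a large set of matches with `limit` and `offset` might look like the following sketch. `search_looks` is a stand-in for a call to this tool and is purely illustrative:

```python
def page_all(search, page_size=100):
    """Collect every match by advancing offset until a short page returns."""
    results, offset = [], 0
    while True:
        page = search(limit=page_size, offset=offset)
        results.extend(page)
        if len(page) < page_size:
            # A short (or empty) page means there are no further matches.
            return results
        offset += page_size

# Stand-in for the tool call: slices a fixed list of 250 fake Looks.
looks = [{"id": i} for i in range(250)]
def search_looks(limit, offset):
    return looks[offset:offset + limit]
```

With the default page size of 100, three calls (offsets 0, 100, 200) retrieve all 250 stand-in Looks.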
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: get_looks
type: looker-get-looks
source: looker-source
description: |
  This tool searches for saved Looks (pre-defined queries and visualizations)
  in a Looker instance. It returns a list of JSON objects, each representing a Look.
  Search Parameters:
  - title (optional): Filter by Look title (supports wildcards).
  - folder_id (optional): Filter by the ID of the folder where the Look is saved.
  - user_id (optional): Filter by the ID of the user who created the Look.
  - description (optional): Filter by description content (supports wildcards).
  - id (optional): Filter by specific Look ID.
  - limit (optional): Maximum number of results to return. Defaults to a system limit.
  - offset (optional): Starting point for pagination.
  String Search Behavior:
  - Case-insensitive matching.
  - Supports SQL LIKE pattern match wildcards:
    - `%`: Matches any sequence of zero or more characters. (e.g., `"dan%"` matches "danger", "Danzig")
    - `_`: Matches any single character. (e.g., `"D_m%"` matches "Damage", "dump")
  - Special expressions for null checks:
    - `"IS NULL"`: Matches Looks where the field is null.
    - `"NOT NULL"`: Excludes Looks where the field is null.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "looker-get-looks". |
| source | string | true | Name of the source Looker instance. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## looker-get-measures Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Looker Source > looker-get-measures Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/looker/looker-get-measures/
**Description:** A "looker-get-measures" tool returns all the measures from a given explore in a given model in the source.
## About
A `looker-get-measures` tool returns all the measures from a given explore
in a given model in the source.
`looker-get-measures` accepts two parameters, the `model` and the `explore`.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: get_measures
type: looker-get-measures
source: looker-source
description: |
  This tool retrieves a list of measures defined within a specific Looker explore.
  Measures are aggregatable metrics (e.g., total sales, average price, count of users)
  that are used for calculations and quantitative analysis in your queries.
  Parameters:
  - model_name (required): The name of the LookML model, obtained from `get_models`.
  - explore_name (required): The name of the explore within the model, obtained from `get_explores`.
  Output Details:
  - If a measure includes a `suggestions` field, its contents are valid values
    that can be used directly as filters for that measure.
  - If a `suggest_explore` and `suggest_dimension` are provided, you can query
    that specified explore and dimension to retrieve a list of valid filter values.
```
The response is a JSON array whose elements have the following form:
```json
{
"name": "field name",
"description": "field description",
"type": "field type",
"label": "field label",
"label_short": "field short label",
"tags": ["tags", ...],
"synonyms": ["synonyms", ...],
"suggestions": ["suggestion", ...],
"suggest_explore": "explore",
"suggest_dimension": "dimension"
}
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "looker-get-measures". |
| source | string | true | Name of the source Looker instance. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## looker-get-models Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Looker Source > looker-get-models Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/looker/looker-get-models/
**Description:** A "looker-get-models" tool returns all the models in the source.
## About
A `looker-get-models` tool returns all the models in the source.
`looker-get-models` accepts no parameters.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: get_models
type: looker-get-models
source: looker-source
description: |
  This tool retrieves a list of available LookML models in the Looker instance.
  LookML models define the data structure and relationships that users can query.
  The output includes details like the model's `name` and `label`, which are
  essential for subsequent calls to tools like `get_explores` or `query`.
  This tool takes no parameters.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "looker-get-models". |
| source | string | true | Name of the source Looker instance. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## looker-get-parameters Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Looker Source > looker-get-parameters Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/looker/looker-get-parameters/
**Description:** A "looker-get-parameters" tool returns all the parameters from a given explore in a given model in the source.
## About
A `looker-get-parameters` tool returns all the parameters from a given explore
in a given model in the source.
`looker-get-parameters` accepts two parameters, the `model` and the `explore`.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: get_parameters
type: looker-get-parameters
source: looker-source
description: |
  This tool retrieves a list of parameters defined within a specific Looker explore.
  LookML parameters are dynamic input fields that allow users to influence query
  behavior without directly modifying the underlying LookML. They are often used
  with `liquid` templating to create flexible dashboards and reports, enabling
  users to choose dimensions, measures, or other query components at runtime.
  Parameters:
  - model_name (required): The name of the LookML model, obtained from `get_models`.
  - explore_name (required): The name of the explore within the model, obtained from `get_explores`.
```
The response is a JSON array whose elements have the following form:
```json
{
"name": "field name",
"description": "field description",
"type": "field type",
"label": "field label",
"label_short": "field short label",
"tags": ["tags", ...],
"synonyms": ["synonyms", ...],
"suggestions": ["suggestion", ...],
"suggest_explore": "explore",
"suggest_dimension": "dimension"
}
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "looker-get-parameters". |
| source | string | true | Name of the source Looker instance. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## looker-get-project-directories Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Looker Source > looker-get-project-directories Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/looker/looker-get-project-directories/
**Description:** A "looker-get-project-directories" tool returns the directories within a specific LookML project.
## About
A `looker-get-project-directories` tool retrieves the directories within a specified LookML project.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: looker-get-project-directories
type: looker-get-project-directories
source: looker-source
description: |
  This tool retrieves a list of directories within a specific LookML project.
  It is useful for exploring the project structure.
  Parameters:
  - project_id (string): The ID of the LookML project.
  Output:
  A JSON array of strings, representing the directories within the project.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "looker-get-project-directories". |
| source | string | true | Name of the source Looker instance. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## looker-get-project-file Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Looker Source > looker-get-project-file Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/looker/looker-get-project-file/
**Description:** A "looker-get-project-file" tool returns the contents of a LookML file.
## About
A `looker-get-project-file` tool returns the contents of a LookML file.
`looker-get-project-file` accepts two parameters: `project_id` and `file_path`.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: get_project_file
type: looker-get-project-file
source: looker-source
description: |
  This tool retrieves the raw content of a specific LookML file from within a project.
  Parameters:
  - project_id (required): The unique ID of the LookML project, obtained from `get_projects`.
  - file_path (required): The path to the LookML file within the project,
    typically obtained from `get_project_files`.
  Output:
  The raw text content of the specified LookML file.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "looker-get-project-file". |
| source | string | true | Name of the source Looker instance. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## looker-get-project-files Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Looker Source > looker-get-project-files Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/looker/looker-get-project-files/
**Description:** A "looker-get-project-files" tool returns all the LookML files in a project in the source.
## About
A `looker-get-project-files` tool returns all the LookML files in a project in the source.
`looker-get-project-files` accepts a single `project_id` parameter.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: get_project_files
type: looker-get-project-files
source: looker-source
description: |
  This tool retrieves a list of all LookML files within a specified project,
  providing details about each file.
  Parameters:
  - project_id (required): The unique ID of the LookML project, obtained from `get_projects`.
  Output:
  A JSON array of objects, each representing a LookML file and containing
  details such as `path`, `id`, `type`, and `git_status`.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "looker-get-project-files". |
| source | string | true | Name of the source Looker instance. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## looker-get-projects Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Looker Source > looker-get-projects Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/looker/looker-get-projects/
**Description:** A "looker-get-projects" tool returns all the LookML projects in the source.
## About
A `looker-get-projects` tool returns all the projects in the source.
`looker-get-projects` accepts no parameters.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: get_projects
type: looker-get-projects
source: looker-source
description: |
  This tool retrieves a list of all LookML projects available on the Looker instance.
  It is useful for identifying projects before performing actions like retrieving
  project files or making modifications.
  Parameters:
  This tool takes no parameters.
  Output:
  A JSON array of objects, each containing the `project_id` and `project_name`
  for a LookML project.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "looker-get-projects". |
| source | string | true | Name of the source Looker instance. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## looker-health-analyze Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Looker Source > looker-health-analyze Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/looker/looker-health-analyze/
**Description:** "looker-health-analyze" provides a set of analytical commands for a Looker instance, allowing users to analyze projects, models, and explores.
## About
The `looker-health-analyze` tool performs various analysis tasks on a Looker
instance. The `action` parameter selects the type of analysis to perform:
- `projects`: Analyzes all projects or a specified project, reporting on the
number of models and view files, as well as Git connection and validation
status.
- `models`: Analyzes all models or a specified model, providing a count of
explores, unused explores, and total query counts.
- `explores`: Analyzes all explores or a specified explore, reporting on the
number of joins, unused joins, fields, unused fields, and query counts. Being
classified as **Unused** is determined by whether a field has been used as a
field or filter within the past 90 days in production.
## Compatible Sources
{{< compatible-sources >}}
## Parameters
| **field** | **type** | **required** | **description** |
|:------------|:---------|:-------------|:-------------------------------------------------------------------------------------------|
| action | string | true | The analysis to perform: `projects`, `models`, or `explores`. |
| project | string | false | The name of the Looker project to analyze. |
| model | string | false | The name of the Looker model to analyze. Required for the `explores` action. |
| explore | string | false | The name of the Looker explore to analyze. Required for the `explores` action. |
| timeframe | int | false | The timeframe in days to analyze. Defaults to 90. |
| min_queries | int | false | The minimum number of queries for a model or explore to be considered used. Defaults to 1. |
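The `timeframe`/`min_queries` classification described above amounts to a threshold test. A hypothetical sketch, assuming query counts have already been restricted to the analysis window; the object shape is illustrative:

```python
def classify(objects, min_queries=1):
    """Split objects into used/unused by query count within the
    analysis timeframe (counts assumed pre-filtered to that window)."""
    used = [o for o in objects if o["queries"] >= min_queries]
    unused = [o for o in objects if o["queries"] < min_queries]
    return used, unused

# Sample explores with production query counts over the lookback period.
explores = [
    {"name": "order_items", "queries": 42},
    {"name": "legacy_orders", "queries": 0},
]
```

With the defaults (`timeframe` of 90 days, `min_queries` of 1), any explore with zero queries in the window is reported as unused.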
## Example
```yaml
kind: tools
name: health_analyze
type: looker-health-analyze
source: looker-source
description: |
  This tool calculates the usage statistics for Looker projects, models, and explores.
  Parameters:
  - action (required): The type of resource to analyze. Can be `"projects"`, `"models"`, or `"explores"`.
  - project (optional): The specific project ID to analyze.
  - model (optional): The specific model name to analyze. Requires `project` if used without `explore`.
  - explore (optional): The specific explore name to analyze. Requires `model` if used.
  - timeframe (optional): The lookback period in days for usage data. Defaults to `90` days.
  - min_queries (optional): The minimum number of queries for a resource to be considered active. Defaults to `1`.
  Output:
  The result is a JSON object containing usage metrics for the specified resources.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "looker-health-analyze". |
| source | string | true | Looker source name |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## looker-health-pulse Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Looker Source > looker-health-pulse Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/looker/looker-health-pulse/
**Description:** "looker-health-pulse" performs health checks on a Looker instance, with multiple actions available (e.g., checking database connections, dashboard performance, etc).
## About
The `looker-health-pulse` tool performs health checks on a Looker instance. The
`action` parameter selects the type of check to perform:
- `check_db_connections`: Checks all database connections, runs supported tests,
and reports query counts.
- `check_dashboard_performance`: Finds dashboards with slow running queries in
the last 7 days.
- `check_dashboard_errors`: Lists dashboards with erroring queries in the last 7
days.
- `check_explore_performance`: Lists the slowest explores in the last 7 days and
reports average query runtime.
- `check_schedule_failures`: Lists schedules that have failed in the last 7
days.
- `check_legacy_features`: Lists enabled legacy features. (*Note: this check
  is not available in Looker Core.*)
## Compatible Sources
{{< compatible-sources >}}
## Parameters
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|-----------------------------|
| action | string | true | The health check to perform |
The supported actions are:
| **action** | **description** |
|-----------------------------|---------------------------------------------------------------------|
| check_db_connections | Checks all database connections and reports query counts and errors |
| check_dashboard_performance | Finds dashboards with slow queries (>30s) in the last 7 days |
| check_dashboard_errors | Lists dashboards with erroring queries in the last 7 days |
| check_explore_performance | Lists slowest explores and average query runtime |
| check_schedule_failures | Lists failed schedules in the last 7 days |
| check_legacy_features | Lists enabled legacy features |
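The `check_dashboard_performance` criterion (queries slower than 30 seconds within the last 7 days) can be sketched as a filter over query history records. The record shape and the `slow_dashboards` helper are illustrative, not the tool's actual implementation:

```python
from datetime import datetime, timedelta

def slow_dashboards(history, threshold_s=30.0, lookback_days=7, now=None):
    """Return dashboard ids with at least one query slower than
    threshold_s seconds within the lookback window."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=lookback_days)
    return sorted({
        rec["dashboard_id"]
        for rec in history
        if rec["completed_at"] >= cutoff and rec["runtime_s"] > threshold_s
    })

# Sample query history records.
now = datetime(2026, 3, 12)
history = [
    {"dashboard_id": "d1", "runtime_s": 45.0, "completed_at": now - timedelta(days=1)},
    {"dashboard_id": "d2", "runtime_s": 12.0, "completed_at": now - timedelta(days=1)},
    {"dashboard_id": "d3", "runtime_s": 90.0, "completed_at": now - timedelta(days=30)},
]
```

Here only `d1` is flagged: `d2` is fast enough, and `d3`'s slow query falls outside the 7-day window.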
## Example
```yaml
kind: tools
name: health_pulse
type: looker-health-pulse
source: looker-source
description: |
  This tool performs various health checks on a Looker instance.
  Parameters:
  - action (required): Specifies the type of health check to perform.
    Choose one of the following:
    - `check_db_connections`: Verifies database connectivity.
    - `check_dashboard_performance`: Assesses dashboard loading performance.
    - `check_dashboard_errors`: Identifies errors within dashboards.
    - `check_explore_performance`: Evaluates explore query performance.
    - `check_schedule_failures`: Reports on failed scheduled deliveries.
    - `check_legacy_features`: Checks for the usage of legacy features.
  Note on `check_legacy_features`:
  This action is not available in Looker Core instances. If invoked
  on a Looker Core instance, it will return a notice rather than an error.
  This notice should be considered normal behavior and not an indication of an issue.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "looker-health-pulse". |
| source | string | true | Looker source name |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## looker-health-vacuum Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Looker Source > looker-health-vacuum Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/looker/looker-health-vacuum/
**Description:** "looker-health-vacuum" provides a set of commands to audit and identify unused LookML objects in a Looker instance.
## About
The `looker-health-vacuum` tool helps you identify unused LookML objects such as
models, explores, joins, and fields. The `action` parameter selects the type of
vacuum to perform:
- `models`: Identifies unused explores within a model.
- `explores`: Identifies unused joins and fields within an explore.
## Compatible Sources
{{< compatible-sources >}}
## Parameters
| **field** | **type** | **required** | **description** |
|:------------|:---------|:-------------|:----------------------------------------------------------------------------------|
| action | string | true | The vacuum to perform: `models`, or `explores`. |
| project | string | false | The name of the Looker project to vacuum. |
| model | string | false | The name of the Looker model to vacuum. |
| explore | string | false | The name of the Looker explore to vacuum. |
| timeframe | int | false | The timeframe in days to analyze for usage. Defaults to 90. |
| min_queries | int | false | The minimum number of queries for an object to be considered used. Defaults to 1. |
## Example
Identify unused fields (*in this case, fewer than 1 query in the last 20 days*)
and joins in the `order_items` explore and `thelook` model:
```yaml
kind: tools
name: health_vacuum
type: looker-health-vacuum
source: looker-source
description: |
  This tool identifies and suggests LookML models or explores that can be
  safely removed due to inactivity or low usage.
  Parameters:
  - action (required): The type of resource to analyze for removal candidates. Can be `"models"` or `"explores"`.
  - project (optional): The specific project ID to consider.
  - model (optional): The specific model name to consider. Requires `project` if used without `explore`.
  - explore (optional): The specific explore name to consider. Requires `model` if used.
  - timeframe (optional): The lookback period in days to assess usage. Defaults to `90` days.
  - min_queries (optional): The minimum number of queries for a resource to be considered active. Defaults to `1`.
  Output:
  A JSON array of objects, each representing a model or explore that is a candidate for deletion due to low usage.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "looker-health-vacuum". |
| source | string | true | Looker source name |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## looker-make-dashboard Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Looker Source > looker-make-dashboard Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/looker/looker-make-dashboard/
**Description:** "looker-make-dashboard" generates a Looker dashboard in the user's personal folder in Looker
## About
The `looker-make-dashboard` tool creates a dashboard in the user's
personal Looker folder.
`looker-make-dashboard` takes three parameters:
1. the `title`
2. the `description`
3. an optional `folder` id. If not provided, the user's default folder will be used.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: make_dashboard
type: looker-make-dashboard
source: looker-source
description: |
This tool creates a new, empty dashboard in Looker. Dashboards are stored
in the user's personal folder, and the dashboard name must be unique.
After creation, use `add_dashboard_filter` to add filters and
`add_dashboard_element` to add content tiles.
Required Parameters:
- title (required): A unique title for the new dashboard.
- description (required): A brief description of the dashboard's purpose.
Output:
A JSON object containing a link (`url`) to the newly created dashboard and
its unique `id`. This `dashboard_id` is crucial for subsequent calls to
`add_dashboard_filter` and `add_dashboard_element`.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "looker-make-dashboard" |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## looker-make-look Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Looker Source > looker-make-look Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/looker/looker-make-look/
**Description:** "looker-make-look" generates a Looker look in the user's personal folder in Looker
## About
The `looker-make-look` tool creates a saved Look in the user's
personal Looker folder.
`looker-make-look` takes twelve parameters:
1. the `model`
2. the `explore`
3. the `fields` list
4. an optional set of `filters`
5. an optional set of `pivots`
6. an optional set of `sorts`
7. an optional `limit`
8. an optional `tz`
9. an optional `vis_config`
10. the `title`
11. an optional `description`
12. an optional `folder` id. If not provided, the user's default folder will be used.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: make_look
type: looker-make-look
source: looker-source
description: |
This tool creates a new Look (saved query with visualization) in Looker.
The Look will be saved in the user's personal folder, and its name must be unique.
Required Parameters:
- title: A unique title for the new Look.
- description: A brief description of the Look's purpose.
- model_name: The name of the LookML model (from `get_models`).
- explore_name: The name of the explore (from `get_explores`).
- fields: A list of field names (dimensions, measures, filters, or parameters) to include in the query.
Optional Parameters:
- pivots, filters, sorts, limit, query_timezone: These parameters are identical
to those described for the `query` tool.
- vis_config: A JSON object defining the visualization settings for the Look.
The structure and options are the same as for the `query_url` tool's `vis_config`.
Output:
A JSON object containing a link (`url`) to the newly created Look, along with its `id` and `slug`.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "looker-make-look" |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## looker-query Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Looker Source > looker-query Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/looker/looker-query/
**Description:** "looker-query" runs an inline query using the Looker semantic model.
## About
The `looker-query` tool runs a query using the Looker
semantic model.
`looker-query` takes eight parameters:
1. the `model`
2. the `explore`
3. the `fields` list
4. an optional set of `filters`
5. an optional set of `pivots`
6. an optional set of `sorts`
7. an optional `limit`
8. an optional `tz`
Starting in Looker v25.18, these queries can be identified in Looker's
System Activity. In the History explore, use the field API Client Name
to find MCP Toolbox queries.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: query
type: looker-query
source: looker-source
description: |
This tool runs a query against a LookML model and returns the results in JSON format.
Required Parameters:
- model_name: The name of the LookML model (from `get_models`).
- explore_name: The name of the explore (from `get_explores`).
- fields: A list of field names (dimensions, measures, filters, or parameters) to include in the query.
Optional Parameters:
- pivots: A list of fields to pivot the results by. These fields must also be included in the `fields` list.
- filters: A map of filter expressions, e.g., `{"view.field": "value", "view.date": "7 days"}`.
- Do not quote field names.
- Use `not null` instead of `-NULL`.
- If a value contains a comma, enclose it in single quotes (e.g., "'New York, NY'").
- sorts: A list of fields to sort by, optionally including direction (e.g., `["view.field desc"]`).
- limit: Row limit (default 500). Use "-1" for unlimited.
- query_timezone: specific timezone for the query (e.g. `America/Los_Angeles`).
Note: Use `get_dimensions`, `get_measures`, `get_filters`, and `get_parameters` to find valid fields.
The result of the query tool is JSON.
```
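The filter syntax rules described above can be illustrated with a small filter map. This is a sketch only: the view and field names are invented for illustration and do not refer to any real model.

```python
# Hypothetical Looker filter map following the rules above:
# field names are left unquoted, relative-date expressions are plain strings,
# and a value containing a comma is wrapped in single quotes.
filters = {
    "order_items.status": "complete",   # simple value match
    "users.created_date": "90 days",    # relative date filter
    "users.state": "not null",          # use `not null`, not `-NULL`
    "users.city": "'New York, NY'",     # comma in value -> single quotes
}

for field, expression in filters.items():
    print(f"{field} = {expression}")
```

Each key is a fully qualified `view.field` name, and each value is a Looker filter expression string.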
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "looker-query" |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## looker-query-sql Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Looker Source > looker-query-sql Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/looker/looker-query-sql/
**Description:** "looker-query-sql" generates a SQL query using the Looker semantic model.
## About
The `looker-query-sql` tool generates a SQL query using the Looker
semantic model.
`looker-query-sql` takes eight parameters:
1. the `model`
2. the `explore`
3. the `fields` list
4. an optional set of `filters`
5. an optional set of `pivots`
6. an optional set of `sorts`
7. an optional `limit`
8. an optional `tz`
Starting in Looker v25.18, these queries can be identified in Looker's
System Activity. In the History explore, use the field API Client Name
to find MCP Toolbox queries.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: query_sql
type: looker-query-sql
source: looker-source
description: |
This tool generates the underlying SQL query that Looker would execute
against the database for a given set of parameters. It is useful for
understanding how Looker translates a request into SQL.
Parameters:
All parameters for this tool are identical to those of the `query` tool.
This includes `model_name`, `explore_name`, `fields` (required),
and optional parameters like `pivots`, `filters`, `sorts`, `limit`, and `query_timezone`.
Output:
The result of this tool is the raw SQL text.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "looker-query-sql" |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## looker-query-url Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Looker Source > looker-query-url Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/looker/looker-query-url/
**Description:** "looker-query-url" generates a URL link to a Looker explore.
## About
The `looker-query-url` tool generates a URL link to an explore in
Looker so the query can be investigated further.
`looker-query-url` takes nine parameters:
1. the `model`
2. the `explore`
3. the `fields` list
4. an optional set of `filters`
5. an optional set of `pivots`
6. an optional set of `sorts`
7. an optional `limit`
8. an optional `tz`
9. an optional `vis_config`
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: query_url
type: looker-query-url
source: looker-source
description: |
This tool generates a shareable URL for a Looker query, allowing users to
explore the query further within the Looker UI. It returns the generated URL,
along with the `query_id` and `slug`.
Parameters:
All query parameters (e.g., `model_name`, `explore_name`, `fields`, `pivots`,
`filters`, `sorts`, `limit`, `query_timezone`) are the same as the `query` tool.
Additionally, it accepts an optional `vis_config` parameter:
- vis_config (optional): A JSON object that controls the default visualization
settings for the generated query.
vis_config Details:
The `vis_config` object supports a wide range of properties for various chart types.
Here are some notes on making visualizations.
### Cartesian Charts (Area, Bar, Column, Line, Scatter)
These chart types share a large number of configuration options.
**General**
* `type`: The type of visualization (`looker_area`, `looker_bar`, `looker_column`, `looker_line`, `looker_scatter`).
* `series_types`: Override the chart type for individual series.
* `show_view_names`: Display view names in labels and tooltips (`true`/`false`).
* `series_labels`: Provide custom names for series.
**Styling & Colors**
* `colors`: An array of color values to be used for the chart series.
* `series_colors`: A mapping of series names to specific color values.
* `color_application`: Advanced controls for color palette application (collection, palette, reverse, etc.).
* `font_size`: Font size for labels (e.g., '12px').
**Legend**
* `hide_legend`: Show or hide the chart legend (`true`/`false`).
* `legend_position`: Placement of the legend (`'center'`, `'left'`, `'right'`).
**Axes**
* `swap_axes`: Swap the X and Y axes (`true`/`false`).
* `x_axis_scale`: Scale of the x-axis (`'auto'`, `'ordinal'`, `'linear'`, `'time'`).
* `x_axis_reversed`, `y_axis_reversed`: Reverse the direction of an axis (`true`/`false`).
* `x_axis_gridlines`, `y_axis_gridlines`: Display gridlines for an axis (`true`/`false`).
* `show_x_axis_label`, `show_y_axis_label`: Show or hide the axis title (`true`/`false`).
* `show_x_axis_ticks`, `show_y_axis_ticks`: Show or hide axis tick marks (`true`/`false`).
* `x_axis_label`, `y_axis_label`: Set a custom title for an axis.
* `x_axis_datetime_label`: A format string for datetime labels on the x-axis (e.g., `'%Y-%m'`).
* `x_padding_left`, `x_padding_right`: Adjust padding on the ends of the x-axis.
* `x_axis_label_rotation`, `x_axis_label_rotation_bar`: Set rotation for x-axis labels.
* `x_axis_zoom`, `y_axis_zoom`: Enable zooming on an axis (`true`/`false`).
* `y_axes`: An array of configuration objects for multiple y-axes.
**Data & Series**
* `stacking`: How to stack series (`''` for none, `'normal'`, `'percent'`).
* `ordering`: Order of series in a stack (`'none'`, etc.).
* `limit_displayed_rows`: Enable or disable limiting the number of rows displayed (`true`/`false`).
* `limit_displayed_rows_values`: Configuration for the row limit (e.g., `{ "first_last": "first", "show_hide": "show", "num_rows": 10 }`).
* `discontinuous_nulls`: How to render null values in line charts (`true`/`false`).
* `point_style`: Style for points on line and area charts (`'none'`, `'circle'`, `'circle_outline'`).
* `series_point_styles`: Override point styles for individual series.
* `interpolation`: Line interpolation style (`'linear'`, `'monotone'`, `'step'`, etc.).
* `show_value_labels`: Display values on data points (`true`/`false`).
* `label_value_format`: A format string for value labels.
* `show_totals_labels`: Display total labels on stacked charts (`true`/`false`).
* `totals_color`: Color for total labels.
* `show_silhouette`: Display a "silhouette" of hidden series in stacked charts (`true`/`false`).
* `hidden_series`: An array of series names to hide from the visualization.
**Scatter/Bubble Specific**
* `size_by_field`: The field used to determine the size of bubbles.
* `color_by_field`: The field used to determine the color of bubbles.
* `plot_size_by_field`: Whether to display the size-by field in the legend.
* `cluster_points`: Group nearby points into clusters (`true`/`false`).
* `quadrants_enabled`: Display quadrants on the chart (`true`/`false`).
* `quadrant_properties`: Configuration for quadrant labels and colors.
* `custom_quadrant_value_x`, `custom_quadrant_value_y`: Set quadrant boundaries as a percentage.
* `custom_quadrant_point_x`, `custom_quadrant_point_y`: Set quadrant boundaries to a specific value.
**Miscellaneous**
* `reference_lines`: Configuration for displaying reference lines.
* `trend_lines`: Configuration for displaying trend lines.
* `trellis`: Configuration for creating trellis (small multiple) charts.
* `crossfilterEnabled`, `crossfilters`: Configuration for cross-filtering interactions.
### Boxplot
* Inherits most of the Cartesian chart options.
* `type`: Must be `looker_boxplot`.
### Funnel
* `type`: Must be `looker_funnel`.
* `orientation`: How data is read (`'automatic'`, `'dataInRows'`, `'dataInColumns'`).
* `percentType`: How percentages are calculated (`'percentOfMaxValue'`, `'percentOfPriorRow'`).
* `labelPosition`, `valuePosition`, `percentPosition`: Placement of labels (`'left'`, `'right'`, `'inline'`, `'hidden'`).
* `labelColor`, `labelColorEnabled`: Set a custom color for labels.
* `labelOverlap`: Allow labels to overlap (`true`/`false`).
* `barColors`: An array of colors for the funnel steps.
* `color_application`: Advanced color palette controls.
* `crossfilterEnabled`, `crossfilters`: Configuration for cross-filtering.
### Pie / Donut
* `type`: Must be `looker_pie`.
* `value_labels`: Where to display values (`'legend'`, `'labels'`).
* `label_type`: The format of data labels (`'labPer'`, `'labVal'`, `'lab'`, `'val'`, `'per'`).
* `start_angle`, `end_angle`: The start and end angles of the pie chart.
* `inner_radius`: The inner radius, used to create a donut chart.
* `series_colors`, `series_labels`: Override colors and labels for specific slices.
* `color_application`: Advanced color palette controls.
* `crossfilterEnabled`, `crossfilters`: Configuration for cross-filtering.
* `advanced_vis_config`: A string containing JSON for advanced Highcharts configuration.
### Waterfall
* Inherits most of the Cartesian chart options.
* `type`: Must be `looker_waterfall`.
* `up_color`: Color for positive (increasing) values.
* `down_color`: Color for negative (decreasing) values.
* `total_color`: Color for the total bar.
### Word Cloud
* `type`: Must be `looker_wordcloud`.
* `rotation`: Enable random word rotation (`true`/`false`).
* `colors`: An array of colors for the words.
* `color_application`: Advanced color palette controls.
* `crossfilterEnabled`, `crossfilters`: Configuration for cross-filtering.
These are some sample vis_config settings.
A bar chart -
{{
"defaults_version": 1,
"label_density": 25,
"legend_position": "center",
"limit_displayed_rows": false,
"ordering": "none",
"plot_size_by_field": false,
"point_style": "none",
"show_null_labels": false,
"show_silhouette": false,
"show_totals_labels": false,
"show_value_labels": false,
"show_view_names": false,
"show_x_axis_label": true,
"show_x_axis_ticks": true,
"show_y_axis_labels": true,
"show_y_axis_ticks": true,
"stacking": "normal",
"totals_color": "#808080",
"trellis": "",
"type": "looker_bar",
"x_axis_gridlines": false,
"x_axis_reversed": false,
"x_axis_scale": "auto",
"x_axis_zoom": true,
"y_axis_combined": true,
"y_axis_gridlines": true,
"y_axis_reversed": false,
"y_axis_scale_mode": "linear",
"y_axis_tick_density": "default",
"y_axis_tick_density_custom": 5,
"y_axis_zoom": true
}}
A column chart with an optional advanced_vis_config -
{{
"advanced_vis_config": "{ chart: { type: 'pie', spacingBottom: 50, spacingLeft: 50, spacingRight: 50, spacingTop: 50, }, legend: { enabled: false, }, plotOptions: { pie: { dataLabels: { enabled: true, format: '\u003cb\u003e{key}\u003c/b\u003e\u003cspan style=\"font-weight: normal\"\u003e - {percentage:.2f}%\u003c/span\u003e', }, showInLegend: false, }, }, series: [], }",
"colors": [
"grey"
],
"defaults_version": 1,
"hidden_fields": [],
"label_density": 25,
"legend_position": "center",
"limit_displayed_rows": false,
"note_display": "below",
"note_state": "collapsed",
"note_text": "Unsold inventory only",
"ordering": "none",
"plot_size_by_field": false,
"point_style": "none",
"series_colors": {},
"show_null_labels": false,
"show_silhouette": false,
"show_totals_labels": false,
"show_value_labels": true,
"show_view_names": false,
"show_x_axis_label": true,
"show_x_axis_ticks": true,
"show_y_axis_labels": true,
"show_y_axis_ticks": true,
"stacking": "normal",
"totals_color": "#808080",
"trellis": "",
"type": "looker_column",
"x_axis_gridlines": false,
"x_axis_reversed": false,
"x_axis_scale": "auto",
"x_axis_zoom": true,
"y_axes": [],
"y_axis_combined": true,
"y_axis_gridlines": true,
"y_axis_reversed": false,
"y_axis_scale_mode": "linear",
"y_axis_tick_density": "default",
"y_axis_tick_density_custom": 5,
"y_axis_zoom": true
}}
A line chart -
{{
"defaults_version": 1,
"hidden_pivots": {},
"hidden_series": [],
"interpolation": "linear",
"label_density": 25,
"legend_position": "center",
"limit_displayed_rows": false,
"plot_size_by_field": false,
"point_style": "none",
"series_types": {},
"show_null_points": true,
"show_value_labels": false,
"show_view_names": false,
"show_x_axis_label": true,
"show_x_axis_ticks": true,
"show_y_axis_labels": true,
"show_y_axis_ticks": true,
"stacking": "",
"trellis": "",
"type": "looker_line",
"x_axis_gridlines": false,
"x_axis_reversed": false,
"x_axis_scale": "auto",
"y_axis_combined": true,
"y_axis_gridlines": true,
"y_axis_reversed": false,
"y_axis_scale_mode": "linear",
"y_axis_tick_density": "default",
"y_axis_tick_density_custom": 5
}}
An area chart -
{{
"defaults_version": 1,
"interpolation": "linear",
"label_density": 25,
"legend_position": "center",
"limit_displayed_rows": false,
"plot_size_by_field": false,
"point_style": "none",
"series_types": {},
"show_null_points": true,
"show_silhouette": false,
"show_totals_labels": false,
"show_value_labels": false,
"show_view_names": false,
"show_x_axis_label": true,
"show_x_axis_ticks": true,
"show_y_axis_labels": true,
"show_y_axis_ticks": true,
"stacking": "normal",
"totals_color": "#808080",
"trellis": "",
"type": "looker_area",
"x_axis_gridlines": false,
"x_axis_reversed": false,
"x_axis_scale": "auto",
"x_axis_zoom": true,
"y_axis_combined": true,
"y_axis_gridlines": true,
"y_axis_reversed": false,
"y_axis_scale_mode": "linear",
"y_axis_tick_density": "default",
"y_axis_tick_density_custom": 5,
"y_axis_zoom": true
}}
A scatter plot -
{{
"cluster_points": false,
"custom_quadrant_point_x": 5,
"custom_quadrant_point_y": 5,
"custom_value_label_column": "",
"custom_x_column": "",
"custom_y_column": "",
"defaults_version": 1,
"hidden_fields": [],
"hidden_pivots": {},
"hidden_points_if_no": [],
"hidden_series": [],
"interpolation": "linear",
"label_density": 25,
"legend_position": "center",
"limit_displayed_rows": false,
"limit_displayed_rows_values": {
"first_last": "first",
"num_rows": 0,
"show_hide": "hide"
},
"plot_size_by_field": false,
"point_style": "circle",
"quadrant_properties": {
"0": {
"color": "",
"label": "Quadrant 1"
},
"1": {
"color": "",
"label": "Quadrant 2"
},
"2": {
"color": "",
"label": "Quadrant 3"
},
"3": {
"color": "",
"label": "Quadrant 4"
}
},
"quadrants_enabled": false,
"series_labels": {},
"series_types": {},
"show_null_points": false,
"show_value_labels": false,
"show_view_names": true,
"show_x_axis_label": true,
"show_x_axis_ticks": true,
"show_y_axis_labels": true,
"show_y_axis_ticks": true,
"size_by_field": "roi",
"stacking": "normal",
"swap_axes": true,
"trellis": "",
"type": "looker_scatter",
"x_axis_gridlines": false,
"x_axis_reversed": false,
"x_axis_scale": "auto",
"x_axis_zoom": true,
"y_axes": [
{
"label": "",
"orientation": "bottom",
"series": [
{
"axisId": "Channel_0 - average_of_roi_first",
"id": "Channel_0 - average_of_roi_first",
"name": "Channel_0"
},
{
"axisId": "Channel_1 - average_of_roi_first",
"id": "Channel_1 - average_of_roi_first",
"name": "Channel_1"
},
{
"axisId": "Channel_2 - average_of_roi_first",
"id": "Channel_2 - average_of_roi_first",
"name": "Channel_2"
},
{
"axisId": "Channel_3 - average_of_roi_first",
"id": "Channel_3 - average_of_roi_first",
"name": "Channel_3"
},
{
"axisId": "Channel_4 - average_of_roi_first",
"id": "Channel_4 - average_of_roi_first",
"name": "Channel_4"
}
],
"showLabels": true,
"showValues": true,
"tickDensity": "custom",
"tickDensityCustom": 100,
"type": "linear",
"unpinAxis": false
}
],
"y_axis_combined": true,
"y_axis_gridlines": true,
"y_axis_reversed": false,
"y_axis_scale_mode": "linear",
"y_axis_tick_density": "default",
"y_axis_tick_density_custom": 5,
"y_axis_zoom": true
}}
A single record visualization -
{{
"defaults_version": 1,
"show_view_names": false,
"type": "looker_single_record"
}}
A single value visualization -
{{
"comparison_reverse_colors": false,
"comparison_type": "value",
"conditional_formatting_include_nulls": false,
"conditional_formatting_include_totals": false,
"custom_color": "#1A73E8",
"custom_color_enabled": true,
"defaults_version": 1,
"enable_conditional_formatting": false,
"series_types": {},
"show_comparison": false,
"show_comparison_label": true,
"show_single_value_title": true,
"single_value_title": "Total Clicks",
"type": "single_value"
}}
A pie chart -
{{
"defaults_version": 1,
"label_density": 25,
"label_type": "labPer",
"legend_position": "center",
"limit_displayed_rows": false,
"ordering": "none",
"plot_size_by_field": false,
"point_style": "none",
"series_types": {},
"show_null_labels": false,
"show_silhouette": false,
"show_totals_labels": false,
"show_value_labels": false,
"show_view_names": false,
"show_x_axis_label": true,
"show_x_axis_ticks": true,
"show_y_axis_labels": true,
"show_y_axis_ticks": true,
"stacking": "",
"totals_color": "#808080",
"trellis": "",
"type": "looker_pie",
"value_labels": "legend",
"x_axis_gridlines": false,
"x_axis_reversed": false,
"x_axis_scale": "auto",
"y_axis_combined": true,
"y_axis_gridlines": true,
"y_axis_reversed": false,
"y_axis_scale_mode": "linear",
"y_axis_tick_density": "default",
"y_axis_tick_density_custom": 5
}}
The result is a JSON object with the `id`, `slug`, `url`, and `long_url`.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "looker-query-url" |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
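When calling this tool from a client, `vis_config` is passed as a JSON object. A minimal sketch, using a subset of the bar-chart settings shown in the sample above (only the fields you want to override need to be supplied):

```python
import json

# A minimal vis_config for a bar chart, drawn from the bar-chart sample above.
vis_config = {
    "type": "looker_bar",
    "stacking": "normal",
    "legend_position": "center",
    "show_value_labels": False,
    "x_axis_gridlines": False,
    "y_axis_gridlines": True,
}

# The tool expects a JSON object; round-trip through json to confirm the
# structure serializes cleanly before including it in a tool call.
payload = json.dumps(vis_config)
print(payload)
```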
========================================================================
## looker-run-dashboard Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Looker Source > looker-run-dashboard Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/looker/looker-run-dashboard/
**Description:** "looker-run-dashboard" runs the queries associated with a dashboard.
## About
The `looker-run-dashboard` tool runs the queries associated with a
dashboard.
`looker-run-dashboard` takes one parameter, the `dashboard_id`.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: run_dashboard
type: looker-run-dashboard
source: looker-source
description: |
This tool executes the queries associated with each tile in a specified dashboard
and returns the aggregated data in a JSON structure.
Parameters:
- dashboard_id (required): The unique identifier of the dashboard to run,
typically obtained from the `get_dashboards` tool.
Output:
The data from all dashboard tiles is returned as a JSON object.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "looker-run-dashboard" |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## looker-run-look Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Looker Source > looker-run-look Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/looker/looker-run-look/
**Description:** "looker-run-look" runs the query associated with a saved Look.
## About
The `looker-run-look` tool runs the query associated with a
saved Look.
`looker-run-look` takes one parameter, the `look_id`.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: run_look
type: looker-run-look
source: looker-source
description: |
This tool executes the query associated with a saved Look and
returns the resulting data in a JSON structure.
Parameters:
- look_id (required): The unique identifier of the Look to run,
typically obtained from the `get_looks` tool.
Output:
The query results are returned as a JSON object.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "looker-run-look" |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## looker-update-project-file Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Looker Source > looker-update-project-file Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/looker/looker-update-project-file/
**Description:** A "looker-update-project-file" tool updates the content of a LookML file in a project.
## About
A `looker-update-project-file` tool updates the content of a LookML file.
`looker-update-project-file` accepts a `project_id` parameter and a `file_path` parameter,
as well as the new file `content`.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: update_project_file
type: looker-update-project-file
source: looker-source
description: |
This tool modifies the content of an existing LookML file within a specified project.
Prerequisite: The Looker session must be in Development Mode. Use `dev_mode: true` first.
Parameters:
- project_id (required): The unique ID of the LookML project.
- file_path (required): The exact path to the LookML file to modify within the project.
- content (required): The new, complete LookML content to overwrite the existing file.
Output:
A confirmation message upon successful file modification.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "looker-update-project-file". |
| source | string | true | Name of the source Looker instance. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## looker-validate-project Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Looker Source > looker-validate-project Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/looker/looker-validate-project/
**Description:** A "looker-validate-project" tool checks the syntax of a LookML project and reports any errors
## About
The `looker-validate-project` tool checks the syntax of a LookML project and reports any errors.
`looker-validate-project` accepts a `project_id` parameter.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: validate_project
type: looker-validate-project
source: looker-source
description: |
This tool checks a LookML project for syntax errors.
Prerequisite: The Looker session must be in Development Mode. Use `dev_mode: true` first.
Parameters:
- project_id (required): The unique ID of the LookML project.
Output:
A list of error details including the file path and line number, and also a list of models
that are not currently valid due to LookML errors.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type        | string   |     true     | Must be "looker-validate-project".                 |
| source | string | true | Name of the source Looker instance. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## MariaDB Source
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > MariaDB Source
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/mariadb/
**Description:** MariaDB is an open-source relational database compatible with MySQL.
## About
MariaDB is a relational database management system derived from MySQL. It
implements the MySQL protocol and client libraries and supports modern SQL
features with a focus on performance and reliability.
**Note**: MariaDB is supported using the MySQL source.
## Available Tools
{{< list-tools dirs="/integrations/mysql" >}}
## Requirements
### Database User
This source only uses standard authentication. You will need to [create a
MariaDB user][mariadb-users] to log in to the database.
[mariadb-users]: https://mariadb.com/kb/en/create-user/
## Example
```yaml
kind: sources
name: my_mariadb_db
type: mysql
host: 127.0.0.1
port: 3306
database: my_db
user: ${MARIADB_USER}
password: ${MARIADB_PASS}
# Optional TLS and other driver parameters. For example, enable preferred TLS:
# queryParams:
# tls: preferred
queryTimeout: 30s # Optional: query timeout duration
```
{{< notice tip >}}
Use environment variables instead of committing credentials to source files.
{{< /notice >}}
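Because MariaDB is served through the MySQL source, the MySQL tool types listed above work against it unchanged. A sketch of a parameterized `mysql-sql` tool wired to the source in the example (the table and column names are hypothetical; see the MySQL tool pages for the full `statement` and `parameters` reference):

```yaml
kind: tools
name: search_orders
type: mysql-sql
source: my_mariadb_db
description: Look up orders for a customer by email address.
parameters:
  - name: email
    type: string
    description: Customer email address to search for.
statement: |
  SELECT id, status, total
  FROM orders
  WHERE customer_email = ?;
```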
## Reference
| **field** | **type** | **required** | **description** |
| ------------ | :------: | :----------: | ----------------------------------------------------------------------------------------------- |
| type | string | true | Must be `mysql`. |
| host | string | true | IP address to connect to (e.g. "127.0.0.1"). |
| port         | string   | true         | Port to connect to (e.g. "3306").                                                                |
| database     | string   | true         | Name of the MariaDB database to connect to (e.g. "my_db").                                       |
| user         | string   | true         | Name of the MariaDB user to connect as (e.g. "my-mariadb-user").                                 |
| password | string | true | Password of the MariaDB user (e.g. "my-password"). |
| queryTimeout | string | false | Maximum time to wait for query execution (e.g. "30s", "2m"). By default, no timeout is applied. |
| queryParams | map | false | Arbitrary DSN parameters passed to the driver (e.g. `tls: preferred`, `charset: utf8mb4`). Useful for enabling TLS or other connection options. |
========================================================================
## MindsDB Source
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > MindsDB Source
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/mindsdb/
**Description:** MindsDB is an AI federated database that enables SQL queries across hundreds of datasources and ML models.
## About
[MindsDB][mindsdb-docs] is an AI federated database. It allows you to combine
information from hundreds of datasources through a single SQL interface,
supporting joins across datasources and enabling you to query unstructured
data as if it were structured.
MindsDB translates MySQL queries into whatever API is needed - whether it's REST
APIs, GraphQL, or native database protocols. This means you can write standard
SQL queries and MindsDB automatically handles the translation to APIs like
Salesforce, Jira, GitHub, email systems, MongoDB, and hundreds of other
datasources.
MindsDB also enables you to use ML frameworks to train and use models as virtual
tables from the data in those datasources. With MindsDB, the GenAI Toolbox can
now expand to hundreds of datasources and leverage all of MindsDB's capabilities
on ML and unstructured data.
**Key Features:**
- **Federated Database**: Connect and query hundreds of datasources through a
single SQL interface
- **Cross-Datasource Joins**: Perform joins across different datasources
seamlessly
- **API Translation**: Automatically translates MySQL queries into REST APIs,
GraphQL, and native protocols
- **Unstructured Data Support**: Query unstructured data as if it were
structured
- **ML as Virtual Tables**: Train and use ML models as virtual tables
- **MySQL Wire Protocol**: Compatible with standard MySQL clients and tools
[mindsdb-docs]: https://docs.mindsdb.com/
[mindsdb-github]: https://github.com/mindsdb/mindsdb
### Supported Datasources
MindsDB supports hundreds of datasources, including:
#### **Business Applications**
- **Salesforce**: Query leads, opportunities, accounts, and custom objects
- **Jira**: Access issues, projects, workflows, and team data
- **GitHub**: Query repositories, commits, pull requests, and issues
- **Slack**: Access channels, messages, and team communications
- **HubSpot**: Query contacts, companies, deals, and marketing data
#### **Databases & Storage**
- **MongoDB**: Query NoSQL collections as structured tables
- **Redis**: Key-value stores and caching layers
- **Elasticsearch**: Search and analytics data
- **S3/Google Cloud Storage**: File storage and data lakes
#### **Communication & Email**
- **Gmail/Outlook**: Query emails, attachments, and metadata
- **Slack**: Access workspace data and conversations
- **Microsoft Teams**: Team communications and files
- **Discord**: Server data and message history
### Example Queries
#### Cross-Datasource Analytics
```sql
-- Join Salesforce opportunities with GitHub activity
SELECT
s.opportunity_name,
s.amount,
g.repository_name,
COUNT(g.commits) as commit_count
FROM salesforce.opportunities s
JOIN github.repositories g ON s.account_id = g.owner_id
WHERE s.stage = 'Closed Won'
GROUP BY s.opportunity_name, s.amount, g.repository_name;
```
#### Email & Communication Analysis
```sql
-- Analyze email patterns with Slack activity
SELECT
e.sender,
e.subject,
s.channel_name,
COUNT(s.messages) as message_count
FROM gmail.emails e
JOIN slack.messages s ON e.sender = s.user_name
WHERE e.date >= '2024-01-01'
GROUP BY e.sender, e.subject, s.channel_name;
```
#### ML Model Predictions
```sql
-- Use ML model to predict customer churn
SELECT
customer_id,
customer_name,
predicted_churn_probability,
recommended_action
FROM customer_churn_model
WHERE predicted_churn_probability > 0.8;
```
### Use Cases
With MindsDB integration, you can:
- **Query Multiple Datasources**: Connect to databases, APIs, file systems, and
more through a single SQL interface
- **Cross-Datasource Analytics**: Perform joins and analytics across different
data sources
- **ML Model Integration**: Use trained ML models as virtual tables for
predictions and insights
- **Unstructured Data Processing**: Query documents, images, and other
unstructured data as structured tables
- **Real-time Predictions**: Get real-time predictions from ML models through
SQL queries
- **API Abstraction**: Write SQL queries that automatically translate to REST
APIs, GraphQL, and native protocols
## Available Tools
{{< list-tools >}}
## Requirements
### Database User
This source uses standard MySQL authentication, since MindsDB implements the
MySQL wire protocol. You will need to [create a MindsDB user][mindsdb-users] to
log in to the database. If MindsDB is configured without authentication, you
can omit the password field.
[mindsdb-users]: https://docs.mindsdb.com/
## Example
```yaml
kind: sources
name: my-mindsdb-source
type: mindsdb
host: 127.0.0.1
port: 3306
database: my_db
user: ${USER_NAME}
password: ${PASSWORD} # Optional: omit if MindsDB is configured without authentication
queryTimeout: 30s # Optional: query timeout duration
```
### Working Configuration Example
Here's a working configuration that has been tested:
```yaml
kind: sources
name: my-pg-source
type: mindsdb
host: 127.0.0.1
port: 47335
database: files
user: mindsdb
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
## Reference
| **field** | **type** | **required** | **description** |
|--------------|:--------:|:------------:|--------------------------------------------------------------------------------------------------------------|
| type | string | true | Must be "mindsdb". |
| host | string | true | IP address to connect to (e.g. "127.0.0.1"). |
| port | string | true | Port to connect to (e.g. "3306"). |
| database | string | true | Name of the MindsDB database to connect to (e.g. "my_db"). |
| user | string | true | Name of the MindsDB user to connect as (e.g. "my-mindsdb-user"). |
| password | string | false | Password of the MindsDB user (e.g. "my-password"). Optional if MindsDB is configured without authentication. |
| queryTimeout | string | false | Maximum time to wait for query execution (e.g. "30s", "2m"). By default, no timeout is applied. |
## Additional Resources
- [MindsDB Documentation][mindsdb-docs] - Official documentation and guides
- [MindsDB GitHub][mindsdb-github] - Source code and community
========================================================================
## mindsdb-execute-sql Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > MindsDB Source > mindsdb-execute-sql Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/mindsdb/mindsdb-execute-sql/
**Description:** A "mindsdb-execute-sql" tool executes a SQL statement against a MindsDB federated database.
## About
A `mindsdb-execute-sql` tool executes a SQL statement against a MindsDB
federated database.
`mindsdb-execute-sql` takes one input parameter `sql` and runs the SQL
statement against the `source`. This tool enables you to:
- **Query Multiple Datasources**: Execute SQL across hundreds of connected
datasources
- **Cross-Datasource Joins**: Perform joins between different databases, APIs,
and file systems
- **ML Model Predictions**: Query ML models as virtual tables for real-time
predictions
- **Unstructured Data**: Query documents, images, and other unstructured data as
structured tables
- **Federated Analytics**: Perform analytics across multiple datasources
simultaneously
- **API Translation**: Automatically translate SQL queries into REST APIs,
GraphQL, and native protocols
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: execute_sql_tool
type: mindsdb-execute-sql
source: my-mindsdb-instance
description: Use this tool to execute SQL statements across multiple datasources and ML models.
```
### Example Queries
#### Cross-Datasource Analytics
```sql
-- Join Salesforce opportunities with GitHub activity
SELECT
s.opportunity_name,
s.amount,
g.repository_name,
COUNT(g.commits) as commit_count
FROM salesforce.opportunities s
JOIN github.repositories g ON s.account_id = g.owner_id
WHERE s.stage = 'Closed Won'
GROUP BY s.opportunity_name, s.amount, g.repository_name;
```
#### Email & Communication Analysis
```sql
-- Analyze email patterns with Slack activity
SELECT
e.sender,
e.subject,
s.channel_name,
COUNT(s.messages) as message_count
FROM gmail.emails e
JOIN slack.messages s ON e.sender = s.user_name
WHERE e.date >= '2024-01-01'
GROUP BY e.sender, e.subject, s.channel_name;
```
#### ML Model Predictions
```sql
-- Use ML model to predict customer churn
SELECT
customer_id,
customer_name,
predicted_churn_probability,
recommended_action
FROM customer_churn_model
WHERE predicted_churn_probability > 0.8;
```
#### MongoDB Query
```sql
-- Query MongoDB collections as structured tables
SELECT
name,
email,
department,
created_at
FROM mongodb.users
WHERE department = 'Engineering'
ORDER BY created_at DESC;
```
> **Note:** This tool is intended for developer assistant workflows with
> human-in-the-loop and shouldn't be used for production agents.
### Working Configuration Example
Here's a working configuration that has been tested:
```yaml
kind: sources
name: my-pg-source
type: mindsdb
host: 127.0.0.1
port: 47335
database: files
user: mindsdb
---
kind: tools
name: mindsdb-execute-sql
type: mindsdb-execute-sql
source: my-pg-source
description: |
Execute SQL queries directly on MindsDB database.
Use this tool to run any SQL statement against your MindsDB instance.
Example: SELECT * FROM my_table LIMIT 10
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "mindsdb-execute-sql". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## mindsdb-sql Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > MindsDB Source > mindsdb-sql Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/mindsdb/mindsdb-sql/
**Description:** A "mindsdb-sql" tool executes a pre-defined SQL statement against a MindsDB federated database.
## About
A `mindsdb-sql` tool executes a pre-defined SQL statement against a MindsDB
federated database.
The specified SQL statement is executed as a [prepared statement][mysql-prepare],
and expects parameters in the SQL query to be in the form of placeholders `?`.
This tool enables you to:
- **Query Multiple Datasources**: Execute parameterized SQL across hundreds of connected datasources
- **Cross-Datasource Joins**: Perform joins between different databases, APIs, and file systems
- **ML Model Predictions**: Query ML models as virtual tables for real-time predictions
- **Unstructured Data**: Query documents, images, and other unstructured data as structured tables
- **Federated Analytics**: Perform analytics across multiple datasources simultaneously
- **API Translation**: Automatically translate SQL queries into REST APIs, GraphQL, and native protocols
[mysql-prepare]: https://dev.mysql.com/doc/refman/8.4/en/sql-prepared-statements.html
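To visualize how positional `?` placeholders pair up with parameter values, here is a small Python sketch. Real prepared statements bind values safely at the protocol level on the server; this function only simulates the positional substitution so the ordering is easy to see, and is not part of the toolbox.

```python
def bind_placeholders(statement: str, params: list) -> str:
    """Simulate positional '?' binding for illustration.

    Server-side prepared statements do this safely at the protocol
    level; this is only a visualization of parameter ordering.
    """
    parts = statement.split("?")
    if len(parts) - 1 != len(params):
        raise ValueError("parameter count does not match placeholder count")
    out = []
    for prefix, value in zip(parts, params):
        out.append(prefix)
        if isinstance(value, str):
            # Naive quoting, adequate only for display purposes.
            out.append("'" + value.replace("'", "''") + "'")
        else:
            out.append(str(value))
    out.append(parts[-1])
    return "".join(out)

sql = "SELECT * FROM flights WHERE airline = ? AND flight_number = ? LIMIT 10"
print(bind_placeholders(sql, ["CY", "123"]))
```

The first `?` receives the first declared parameter (`airline`), the second receives `flight_number`, matching the order of the `parameters` list in the tool configuration.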
## Compatible Sources
{{< compatible-sources >}}
## Example
> **Note:** This tool uses parameterized queries to prevent SQL injections.
> Query parameters can be used as substitutes for arbitrary expressions.
> Parameters cannot be used as substitutes for identifiers, column names, table
> names, or other parts of the query.
```yaml
kind: tools
name: search_flights_by_number
type: mindsdb-sql
source: my-mindsdb-instance
statement: |
SELECT * FROM flights
WHERE airline = ?
AND flight_number = ?
LIMIT 10
description: |
Use this tool to get information for a specific flight.
Takes an airline code and flight number and returns info on the flight.
Do NOT use this tool with a flight id. Do NOT guess an airline code or flight number.
An airline code is a two-character airline designator, followed by a flight
number, which is a 1 to 4 digit number.
For example, given CY 0123, the airline is "CY" and flight_number is "123".
Another example: given DL 1234, the airline is "DL" and flight_number is "1234".
If the tool returns more than one option, choose the date closest to today.
Example:
{{
"airline": "CY",
"flight_number": "888"
}}
Example:
{{
"airline": "DL",
"flight_number": "1234"
}}
parameters:
- name: airline
type: string
description: Airline unique 2 letter identifier
- name: flight_number
type: string
description: 1 to 4 digit number
```
### Example with Template Parameters
> **Note:** This tool allows direct modifications to the SQL statement,
> including identifiers, column names, and table names. **This makes it more
> vulnerable to SQL injections**. Using basic parameters only (see above) is
> recommended for performance and safety reasons. For more details, please check
> [templateParameters](../#template-parameters).
```yaml
kind: tools
name: list_table
type: mindsdb-sql
source: my-mindsdb-instance
statement: |
SELECT * FROM {{.tableName}};
description: |
Use this tool to list all information from a specific table.
Example:
{{
"tableName": "flights"
}}
templateParameters:
- name: tableName
type: string
description: Table to select from
```
### Example Queries
#### Cross-Datasource Analytics
```sql
-- Join Salesforce opportunities with GitHub activity
SELECT
s.opportunity_name,
s.amount,
g.repository_name,
COUNT(g.commits) as commit_count
FROM salesforce.opportunities s
JOIN github.repositories g ON s.account_id = g.owner_id
WHERE s.stage = ?
GROUP BY s.opportunity_name, s.amount, g.repository_name;
```
#### Email & Communication Analysis
```sql
-- Analyze email patterns with Slack activity
SELECT
e.sender,
e.subject,
s.channel_name,
COUNT(s.messages) as message_count
FROM gmail.emails e
JOIN slack.messages s ON e.sender = s.user_name
WHERE e.date >= ?
GROUP BY e.sender, e.subject, s.channel_name;
```
#### ML Model Predictions
```sql
-- Use ML model to predict customer churn
SELECT
customer_id,
customer_name,
predicted_churn_probability,
recommended_action
FROM customer_churn_model
WHERE predicted_churn_probability > ?;
```
#### MongoDB Query
```sql
-- Query MongoDB collections as structured tables
SELECT
name,
email,
department,
created_at
FROM mongodb.users
WHERE department = ?
ORDER BY created_at DESC;
```
## Reference
| **field** | **type** | **required** | **description** |
|--------------------|:------------------------------------------------:|:------------:|--------------------------------------------------------------------------------------------------------------------------------------------|
| type | string | true | Must be "mindsdb-sql". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| statement          | string                                           | true         | SQL statement to execute.                                                                                                                   |
| parameters | [parameters](../#specifying-parameters) | false | List of [parameters](../#specifying-parameters) that will be inserted into the SQL statement. |
| templateParameters | [templateParameters](../#template-parameters) | false | List of [templateParameters](../#template-parameters) that will be inserted into the SQL statement before executing prepared statement. |
========================================================================
## MongoDB Source
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > MongoDB Source
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/mongodb/
**Description:** MongoDB is a NoSQL data platform that not only serves general-purpose data requirements but also performs vector search, where both operational data and the embeddings used for search can reside in the same document.
## About
[MongoDB][mongodb-docs] is a popular NoSQL database that stores data in
flexible, JSON-like documents, making it easy to develop and scale applications.
[mongodb-docs]: https://www.mongodb.com/docs/atlas/getting-started/
## Available Tools
{{< list-tools >}}
## Example
```yaml
kind: sources
name: my-mongodb
type: mongodb
uri: "mongodb+srv://username:password@host.mongodb.net"
```
## Reference
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|-------------------------------------------------------------------|
| type | string | true | Must be "mongodb". |
| uri       | string   | true         | Connection string used to connect to MongoDB.                      |
========================================================================
## mongodb-aggregate Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > MongoDB Source > mongodb-aggregate Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/mongodb/mongodb-aggregate/
**Description:** A "mongodb-aggregate" tool executes a multi-stage aggregation pipeline against a MongoDB collection.
## About
The `mongodb-aggregate` tool is the most powerful query tool for MongoDB,
allowing you to process data through a multi-stage pipeline. Each stage
transforms the documents as they pass through, enabling complex operations like
grouping, filtering, reshaping documents, and performing calculations.
The core of this tool is the `pipelinePayload`, which must be a string
containing a **JSON array of pipeline stage documents**. The tool returns a JSON
array of documents produced by the final stage of the pipeline.
A `readOnly` flag can be set to `true` as a safety measure to ensure the
pipeline does not contain any write stages (like `$out` or `$merge`).
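The `readOnly` safety check can be pictured with a short Python sketch that parses a pipeline payload and rejects write stages. This mirrors the intent of the flag, not the tool's actual implementation, and the stage list here covers only `$out` and `$merge` as named above.

```python
import json

# Stages that a read-only pipeline must not contain (per the readOnly flag).
WRITE_STAGES = {"$out", "$merge"}

def assert_read_only(pipeline_payload: str) -> list:
    """Parse a pipeline payload string and reject write stages.

    Illustrative only: mirrors the intent of the readOnly flag,
    not the tool's actual validation code.
    """
    pipeline = json.loads(pipeline_payload)
    for stage in pipeline:
        for key in stage:
            if key in WRITE_STAGES:
                raise ValueError(
                    f"write stage {key} not allowed in read-only pipeline")
    return pipeline

payload = '[{"$match": {"status": "active"}}, {"$group": {"_id": "$category"}}]'
print(len(assert_read_only(payload)))  # two read-only stages pass the check
```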
## Compatible Sources
{{< compatible-sources >}}
## Example
Here is an example that calculates the average price and total count of products
for each category, but only for products with an "active" status.
```yaml
kind: tools
name: get_category_stats
type: mongodb-aggregate
source: my-mongo-source
description: Calculates average price and count of products, grouped by category.
database: ecommerce
collection: products
readOnly: true
pipelinePayload: |
[
{
"$match": {
"status": {{json .status_filter}}
}
},
{
"$group": {
"_id": "$category",
"average_price": { "$avg": "$price" },
"item_count": { "$sum": 1 }
}
},
{
"$sort": {
"average_price": -1
}
}
]
pipelineParams:
- name: status_filter
type: string
description: The product status to filter by (e.g., "active").
```
## Reference
| **field** | **type** | **required** | **description** |
|:----------------|:---------|:-------------|:---------------------------------------------------------------------------------------------------------------|
| type | string | true | Must be `mongodb-aggregate`. |
| source | string | true | The name of the `mongodb` source to use. |
| description | string | true | A description of the tool that is passed to the LLM. |
| database | string | true | The name of the MongoDB database containing the collection. |
| collection | string | true | The name of the MongoDB collection to run the aggregation on. |
| pipelinePayload | string | true | A JSON array of aggregation stage documents, provided as a string. Uses `{{json .param_name}}` for templating. |
| pipelineParams | list | true | A list of parameter objects that define the variables used in the `pipelinePayload`. |
| canonical | bool | false | Determines if the pipeline string is parsed using MongoDB's Canonical or Relaxed Extended JSON format. |
| readOnly | bool | false | If `true`, the tool will fail if the pipeline contains write stages (`$out` or `$merge`). Defaults to `false`. |
========================================================================
## mongodb-delete-many Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > MongoDB Source > mongodb-delete-many Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/mongodb/mongodb-delete-many/
**Description:** A "mongodb-delete-many" tool deletes all documents from a MongoDB collection that match a filter.
## About
The `mongodb-delete-many` tool performs a **bulk destructive operation**,
deleting **ALL** documents from a collection that match a specified filter.
The tool returns the total count of documents that were deleted. If the filter
does not match any documents (i.e., the deleted count is 0), the tool will
return an error.
## Compatible Sources
{{< compatible-sources >}}
---
## Example
Here is an example that performs a cleanup task by deleting all products from
the `inventory` collection that belong to a discontinued brand.
```yaml
kind: tools
name: retire_brand_products
type: mongodb-delete-many
source: my-mongo-source
description: Deletes all products from a specified discontinued brand.
database: ecommerce
collection: inventory
filterPayload: |
{ "brand_name": {{json .brand_to_delete}} }
filterParams:
- name: brand_to_delete
type: string
description: The name of the discontinued brand whose products should be deleted.
```
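The `{{json .param_name}}` placeholders in `filterPayload` are rendered by encoding each parameter value as JSON before the payload is parsed. The Python sketch below approximates that rendering so you can see why string values arrive correctly quoted; the real tool uses Go templating, so this is a stand-in for explanation, not the actual mechanism.

```python
import json
import re

def render_payload(template: str, params: dict) -> dict:
    """Approximate '{{json .name}}' templating in Python.

    Each placeholder is replaced by the JSON encoding of the matching
    parameter value, then the whole payload is parsed as JSON.
    (Illustrative stand-in for the Go template rendering.)
    """
    def repl(match: re.Match) -> str:
        return json.dumps(params[match.group(1)])
    rendered = re.sub(r"\{\{json \.(\w+)\}\}", repl, template)
    return json.loads(rendered)

filter_doc = render_payload('{ "brand_name": {{json .brand_to_delete}} }',
                            {"brand_to_delete": "Acme"})
print(filter_doc)
```

Because the value is JSON-encoded rather than spliced in as raw text, quotes and special characters inside a parameter cannot break the structure of the filter document.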
## Reference
| **field** | **type** | **required** | **description** |
|:--------------|:---------|:-------------|:--------------------------------------------------------------------------------------------------------------------|
| type | string | true | Must be `mongodb-delete-many`. |
| source | string | true | The name of the `mongodb` source to use. |
| description | string | true | A description of the tool that is passed to the LLM. |
| database | string | true | The name of the MongoDB database containing the collection. |
| collection | string | true | The name of the MongoDB collection from which to delete documents. |
| filterPayload | string | true | The MongoDB query filter document to select the documents for deletion. Uses `{{json .param_name}}` for templating. |
| filterParams | list | false | A list of parameter objects that define the variables used in the `filterPayload`. |
========================================================================
## mongodb-delete-one Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > MongoDB Source > mongodb-delete-one Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/mongodb/mongodb-delete-one/
**Description:** A "mongodb-delete-one" tool deletes a single document from a MongoDB collection.
## About
The `mongodb-delete-one` tool performs a destructive operation, deleting the
**first single document** that matches a specified filter from a MongoDB
collection.
If the filter matches multiple documents, only the first one found by the
database will be deleted. This tool is useful for removing specific entries,
such as a user account or a single item from an inventory based on a unique ID.
The tool returns the number of documents deleted, which will be either `1` if a
document was found and deleted, or `0` if no matching document was found.
## Compatible Sources
{{< compatible-sources >}}
---
## Example
Here is an example that deletes a specific user account from the `users`
collection by matching their unique email address. This is a permanent action.
```yaml
kind: tools
name: delete_user_account
type: mongodb-delete-one
source: my-mongo-source
description: Permanently deletes a user account by their email address.
database: user_data
collection: users
filterPayload: |
{ "email": {{json .email_address}} }
filterParams:
- name: email_address
type: string
description: The email of the user account to delete.
```
## Reference
| **field** | **type** | **required** | **description** |
|:--------------|:---------|:-------------|:-------------------------------------------------------------------------------------------------------------------|
| type | string | true | Must be `mongodb-delete-one`. |
| source | string | true | The name of the `mongodb` source to use. |
| description | string | true | A description of the tool that is passed to the LLM. |
| database | string | true | The name of the MongoDB database containing the collection. |
| collection | string | true | The name of the MongoDB collection from which to delete a document. |
| filterPayload | string | true | The MongoDB query filter document to select the document for deletion. Uses `{{json .param_name}}` for templating. |
| filterParams | list | false | A list of parameter objects that define the variables used in the `filterPayload`. |
========================================================================
## mongodb-find Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > MongoDB Source > mongodb-find Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/mongodb/mongodb-find/
**Description:** A "mongodb-find" tool finds and retrieves documents from a MongoDB collection.
## About
A `mongodb-find` tool is used to query a MongoDB collection and retrieve
documents that match a specified filter. It's a flexible tool that allows you to
shape the output by selecting specific fields (**projection**), ordering the
results (**sorting**), and restricting the number of documents returned
(**limiting**).
The tool returns a JSON array of the documents found.
## Compatible Sources
{{< compatible-sources >}}
## Example
Here's an example that finds up to 10 users from the `customers` collection who
live in a specific city. The results are sorted by their last name, and only
their first name, last name, and email are returned.
```yaml
kind: tools
name: find_local_customers
type: mongodb-find
source: my-mongo-source
description: Finds customers by city, sorted by last name.
database: crm
collection: customers
limit: 10
filterPayload: |
{ "address.city": {{json .city}} }
filterParams:
- name: city
type: string
description: The city to search for customers in.
projectPayload: |
{
"first_name": 1,
"last_name": 1,
"email": 1,
"_id": 0
}
sortPayload: |
{ "last_name": {{json .sort_order}} }
sortParams:
- name: sort_order
type: integer
description: The sort order (1 for ascending, -1 for descending).
```
## Reference
| **field** | **type** | **required** | **description** |
|:---------------|:---------|:-------------|:----------------------------------------------------------------------------------------------------------------------------|
| type | string | true | Must be `mongodb-find`. |
| source | string | true | The name of the `mongodb` source to use. |
| description | string | true | A description of the tool that is passed to the LLM. |
| database | string | true | The name of the MongoDB database to query. |
| collection | string | true | The name of the MongoDB collection to query. |
| filterPayload | string | true | The MongoDB query filter document to select which documents to return. Uses `{{json .param_name}}` for templating. |
| filterParams | list | false | A list of parameter objects that define the variables used in the `filterPayload`. |
| projectPayload | string | false | An optional MongoDB projection document to specify which fields to include (1) or exclude (0) in the results. |
| projectParams | list | false | A list of parameter objects for the `projectPayload`. |
| sortPayload | string | false | An optional MongoDB sort document to define the order of the returned documents. Use 1 for ascending and -1 for descending. |
| sortParams | list | false | A list of parameter objects for the `sortPayload`. |
| limit | integer | false | An optional integer specifying the maximum number of documents to return. |
========================================================================
## mongodb-find-one Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > MongoDB Source > mongodb-find-one Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/mongodb/mongodb-find-one/
**Description:** A "mongodb-find-one" tool finds and retrieves a single document from a MongoDB collection.
## About
A `mongodb-find-one` tool is used to retrieve the **first single document** that
matches a specified filter from a MongoDB collection. If multiple documents
match the filter, you can use `sort` options to control which document is
returned. Otherwise, the selection is not guaranteed.
The tool returns a single JSON object representing the document, wrapped in a
JSON array.
## Compatible Sources
{{< compatible-sources >}}
---
## Example
Here's a common use case: finding a specific user by their unique email address
and returning their profile information, while excluding sensitive fields like
the password hash.
```yaml
kind: tools
name: get_user_profile
type: mongodb-find-one
source: my-mongo-source
description: Retrieves a user's profile by their email address.
database: user_data
collection: profiles
filterPayload: |
{ "email": {{json .email}} }
filterParams:
- name: email
type: string
description: The email address of the user to find.
projectPayload: |
{
"password_hash": 0,
"login_history": 0
}
```
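The effect of an exclusion projection like the one above can be sketched in a few lines of Python. This is a simplified model for explanation only: the MongoDB server applies the projection for the real tool, and this sketch handles only all-include or all-exclude projections (it also ignores MongoDB's default inclusion of `_id`).

```python
def apply_projection(document: dict, projection: dict) -> dict:
    """Apply a simplified MongoDB-style projection to one document.

    Supports all-0 (exclude) or all-1 (include) projections only.
    Illustrative: the server performs this for the actual tool.
    """
    if projection and all(v == 0 for v in projection.values()):
        # Exclusion projection: drop the listed fields, keep the rest.
        return {k: v for k, v in document.items() if k not in projection}
    # Inclusion projection: keep only the listed fields.
    keep = {k for k, v in projection.items() if v == 1}
    return {k: v for k, v in document.items() if k in keep}

user = {"email": "a@example.com", "name": "Ada",
        "password_hash": "xxxx", "login_history": []}
print(apply_projection(user, {"password_hash": 0, "login_history": 0}))
```

With the exclusion projection, the sensitive `password_hash` and `login_history` fields are stripped while every other field passes through unchanged.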
## Reference
| **field** | **type** | **required** | **description** |
|:---------------|:---------|:-------------|:---------------------------------------------------------------------------------------------------------------------------------------------|
| type | string | true | Must be `mongodb-find-one`. |
| source | string | true | The name of the `mongodb` source to use. |
| description | string | true | A description of the tool that is passed to the LLM. |
| database | string | true | The name of the MongoDB database to query. |
| collection | string | true | The name of the MongoDB collection to query. |
| filterPayload | string | true | The MongoDB query filter document to select the document. Uses `{{json .param_name}}` for templating. |
| filterParams | list | false | A list of parameter objects that define the variables used in the `filterPayload`. |
| projectPayload | string | false | An optional MongoDB projection document to specify which fields to include (1) or exclude (0) in the result. |
| projectParams | list | false | A list of parameter objects for the `projectPayload`. |
========================================================================
## mongodb-insert-many Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > MongoDB Source > mongodb-insert-many Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/mongodb/mongodb-insert-many/
**Description:** A "mongodb-insert-many" tool inserts multiple new documents into a MongoDB collection.
## About
The `mongodb-insert-many` tool inserts **multiple new documents** into a
specified MongoDB collection in a single bulk operation. This is highly
efficient for adding large amounts of data at once.
This tool takes one required parameter named `data`. This `data` parameter must
be a string containing a **JSON array of document objects**. Upon successful
insertion, the tool returns a JSON array containing the unique `_id` of **each**
new document that was created.
## Compatible Sources
{{< compatible-sources >}}
---
## Example
Here is an example configuration for a tool that logs multiple events at once.
```yaml
kind: tools
name: log_batch_events
type: mongodb-insert-many
source: my-mongo-source
description: Inserts a batch of event logs into the database.
database: logging
collection: events
canonical: true
```
An LLM would call this tool by providing an array of documents as a JSON string
in the `data` parameter, like this:
`tool_code: log_batch_events(data='[{"event": "login", "user": "user1"}, {"event": "click", "user": "user2"}, {"event": "logout", "user": "user1"}]')`
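On the client side, the `data` string must parse as a JSON array of document objects. A quick pre-flight check in Python (`validate_insert_many` is a hypothetical helper, not part of any SDK):

```python
import json

def validate_insert_many(data: str) -> list:
    # The data parameter must decode to a list of JSON objects.
    docs = json.loads(data)
    if not isinstance(docs, list) or not all(isinstance(d, dict) for d in docs):
        raise ValueError("data must be a JSON array of document objects")
    return docs

docs = validate_insert_many(
    '[{"event": "login", "user": "user1"}, {"event": "click", "user": "user2"}]'
)
print(len(docs))  # 2
```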
---
## Reference
| **field** | **type** | **required** | **description** |
|:------------|:---------|:-------------|:------------------------------------------------------------------------------------------------------------------------|
| type | string | true | Must be `mongodb-insert-many`. |
| source | string | true | The name of the `mongodb` source to use. |
| description | string | true | A description of the tool that is passed to the LLM. |
| database | string | true | The name of the MongoDB database containing the collection. |
| collection | string | true | The name of the MongoDB collection into which the documents will be inserted. |
| canonical | bool | false | Determines if the data string is parsed using MongoDB's Canonical or Relaxed Extended JSON format. Defaults to `false`. |
========================================================================
## mongodb-insert-one Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > MongoDB Source > mongodb-insert-one Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/mongodb/mongodb-insert-one/
**Description:** A "mongodb-insert-one" tool inserts a single new document into a MongoDB collection.
## About
The `mongodb-insert-one` tool inserts a **single new document** into a specified
MongoDB collection.
This tool takes one required parameter named `data`, which must be a string
containing the JSON object you want to insert. Upon successful insertion, the
tool returns the unique `_id` of the newly created document.
## Compatible Sources
{{< compatible-sources >}}
## Example
Here is an example configuration for a tool that adds a new user to a `users`
collection.
```yaml
kind: tools
name: create_new_user
type: mongodb-insert-one
source: my-mongo-source
description: Creates a new user record in the database.
database: user_data
collection: users
canonical: false
```
An LLM would call this tool by providing the document as a JSON string in the
`data` parameter, like this:
`tool_code: create_new_user(data='{"email": "new.user@example.com", "name": "Jane Doe", "status": "active"}')`
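The `canonical` flag selects which Extended JSON dialect the `data` string is parsed as: Canonical spells out BSON types explicitly, while Relaxed uses plain JSON literals. The snippet below illustrates the textual difference with plain Python (the actual parsing happens server-side via the MongoDB driver):

```python
import json

# Relaxed Extended JSON: native JSON literals.
relaxed = '{"age": 42}'

# Canonical Extended JSON: explicit type wrappers preserve the exact BSON type.
canonical = '{"age": {"$numberInt": "42"}}'

print(json.loads(relaxed)["age"])    # 42
print(json.loads(canonical)["age"])  # {'$numberInt': '42'}
```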
## Reference
| **field** | **type** | **required** | **description** |
|:------------|:---------|:-------------|:------------------------------------------------------------------------------------------------------------------------|
| type | string | true | Must be `mongodb-insert-one`. |
| source | string | true | The name of the `mongodb` source to use. |
| description | string | true | A description of the tool that is passed to the LLM. |
| database | string | true | The name of the MongoDB database containing the collection. |
| collection | string | true | The name of the MongoDB collection into which the document will be inserted. |
| canonical | bool | false | Determines if the data string is parsed using MongoDB's Canonical or Relaxed Extended JSON format. Defaults to `false`. |
========================================================================
## mongodb-update-many Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > MongoDB Source > mongodb-update-many Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/mongodb/mongodb-update-many/
**Description:** A "mongodb-update-many" tool updates all documents in a MongoDB collection that match a filter.
## About
A `mongodb-update-many` tool updates **all** documents within a specified
MongoDB collection that match a given filter. It locates the documents using a
`filterPayload` and applies the modifications defined in an `updatePayload`.
The tool returns an array of three integers: `[ModifiedCount, UpsertedCount,
MatchedCount]`.
## Compatible Sources
{{< compatible-sources >}}
---
## Example
Here's an example configuration. This tool applies a discount to all items
within a specific category and also marks them as being on sale.
```yaml
kind: tools
name: apply_category_discount
type: mongodb-update-many
source: my-mongo-source
description: Use this tool to apply a discount to all items in a given category.
database: products
collection: inventory
filterPayload: |
{ "category": {{json .category_name}} }
filterParams:
- name: category_name
type: string
description: The category of items to update.
updatePayload: |
{
"$mul": { "price": {{json .discount_multiplier}} },
"$set": { "on_sale": true }
}
updateParams:
- name: discount_multiplier
type: number
description: The multiplier to apply to the price (e.g., 0.8 for a 20% discount).
canonical: false
upsert: false
```
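With `discount_multiplier = 0.8`, the `updatePayload` above renders to a standard MongoDB update document combining `$mul` and `$set`. A Python sketch of the substitution (illustrative only — the server uses Go's template engine):

```python
import json

payload_template = ('{ "$mul": { "price": {{json .discount_multiplier}} }, '
                    '"$set": { "on_sale": true } }')
# Replace the placeholder with the JSON-encoded parameter value.
rendered = payload_template.replace("{{json .discount_multiplier}}",
                                    json.dumps(0.8))
doc = json.loads(rendered)
print(doc)  # {'$mul': {'price': 0.8}, '$set': {'on_sale': True}}
```

The tool then executes this update against every matching document and returns `[ModifiedCount, UpsertedCount, MatchedCount]`.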
## Reference
| **field** | **type** | **required** | **description** |
|:--------------|:---------|:-------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| type | string | true | Must be `mongodb-update-many`. |
| source | string | true | The name of the `mongodb` source to use. |
| description | string | true | A description of the tool that is passed to the LLM. |
| database | string | true | The name of the MongoDB database containing the collection. |
| collection | string | true | The name of the MongoDB collection in which to update documents. |
| filterPayload | string | true | The MongoDB query filter document to select the documents for updating. It's written as a Go template, using `{{json .param_name}}` to insert parameters. |
| filterParams | list | false | A list of parameter objects that define the variables used in the `filterPayload`. |
| updatePayload | string | true | The MongoDB update document. It's written as a Go template, using `{{json .param_name}}` to insert parameters. |
| updateParams | list | true | A list of parameter objects that define the variables used in the `updatePayload`. |
| canonical | bool | false | Determines if the `filterPayload` and `updatePayload` strings are parsed using MongoDB's Canonical or Relaxed Extended JSON format. **Canonical** is stricter about type representation, while **Relaxed** is more lenient. Defaults to `false`. |
| upsert | bool | false | If `true`, a new document is created if no document matches the `filterPayload`. Defaults to `false`. |
========================================================================
## mongodb-update-one Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > MongoDB Source > mongodb-update-one Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/mongodb/mongodb-update-one/
**Description:** A "mongodb-update-one" tool updates a single document in a MongoDB collection.
## About
A `mongodb-update-one` tool updates a single document within a specified MongoDB
collection. It locates the document to be updated using a `filterPayload` and
applies modifications defined in an `updatePayload`. If the filter matches
multiple documents, only the first one found will be updated.
## Compatible Sources
{{< compatible-sources >}}
---
## Example
Here's an example of a `mongodb-update-one` tool configuration. This tool
updates the `stock` and `status` fields of a document in the `inventory`
collection where the `item` field matches a provided value. If no matching
document is found, the `upsert: true` option will create a new one.
```yaml
kind: tools
name: update_inventory_item
type: mongodb-update-one
source: my-mongo-source
description: Use this tool to update an item's stock and status in the inventory.
database: products
collection: inventory
filterPayload: |
{ "item": {{json .item_name}} }
filterParams:
- name: item_name
type: string
description: The name of the item to update.
updatePayload: |
{ "$set": { "stock": {{json .new_stock}}, "status": {{json .new_status}} } }
updateParams:
- name: new_stock
type: integer
description: The new stock quantity.
- name: new_status
type: string
description: The new status of the item (e.g., "In Stock", "Backordered").
canonical: false
upsert: true
```
## Reference
| **field** | **type** | **required** | **description** |
|:--------------|:---------|:-------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| type | string | true | Must be `mongodb-update-one`. |
| source | string | true | The name of the `mongodb` source to use. |
| description | string | true | A description of the tool that is passed to the LLM. |
| database | string | true | The name of the MongoDB database containing the collection. |
| collection | string | true | The name of the MongoDB collection to update a document in. |
| filterPayload | string | true | The MongoDB query filter document to select the document for updating. It's written as a Go template, using `{{json .param_name}}` to insert parameters. |
| filterParams | list | false | A list of parameter objects that define the variables used in the `filterPayload`. |
| updatePayload | string | true | The MongoDB update document, which specifies the modifications. This often uses update operators like `$set`. It's written as a Go template, using `{{json .param_name}}` to insert parameters. |
| updateParams | list | true | A list of parameter objects that define the variables used in the `updatePayload`. |
| canonical | bool | false | Determines if the `updatePayload` string is parsed using MongoDB's Canonical or Relaxed Extended JSON format. **Canonical** is stricter about type representation (e.g., `{"$numberInt": "42"}`), while **Relaxed** is more lenient (e.g., `42`). Defaults to `false`. |
| upsert | bool | false | If `true`, a new document is created if no document matches the `filterPayload`. Defaults to `false`. |
========================================================================
## MySQL Source
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > MySQL Source
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/mysql/
**Description:** MySQL is a relational database management system that stores and manages data.
## About
[MySQL][mysql-docs] is a relational database management system (RDBMS) that
stores and manages data. It's a popular choice for developers because of its
reliability, performance, and ease of use.
[mysql-docs]: https://www.mysql.com/
## Available Tools
{{< list-tools >}}
## Requirements
### Database User
This source only uses standard authentication. You will need to [create a
MySQL user][mysql-users] to login to the database with.
[mysql-users]: https://dev.mysql.com/doc/refman/8.4/en/user-names.html
## Example
```yaml
kind: sources
name: my-mysql-source
type: mysql
host: 127.0.0.1
port: 3306
database: my_db
user: ${USER_NAME}
password: ${PASSWORD}
# Optional TLS and other driver parameters. For example, enable preferred TLS:
# queryParams:
# tls: preferred
queryTimeout: 30s # Optional: query timeout duration
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
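The effect of `${ENV_NAME}` replacement can be sketched in Python (illustrative only — Toolbox performs this substitution itself when it loads the configuration file, and `expand_env` is a hypothetical helper):

```python
import os
import re

os.environ["USER_NAME"] = "my-mysql-user"  # normally set outside the process

def expand_env(text: str) -> str:
    # Replace each ${NAME} with the value of the NAME environment variable.
    return re.sub(r"\$\{(\w+)\}", lambda m: os.environ[m.group(1)], text)

print(expand_env("user: ${USER_NAME}"))  # user: my-mysql-user
```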
## Reference
| **field** | **type** | **required** | **description** |
| ------------ | :------: | :----------: | ----------------------------------------------------------------------------------------------- |
| type | string | true | Must be "mysql". |
| host | string | true | IP address to connect to (e.g. "127.0.0.1"). |
| port | string | true | Port to connect to (e.g. "3306"). |
| database | string | true | Name of the MySQL database to connect to (e.g. "my_db"). |
| user | string | true | Name of the MySQL user to connect as (e.g. "my-mysql-user"). |
| password | string | true | Password of the MySQL user (e.g. "my-password"). |
| queryTimeout | string | false | Maximum time to wait for query execution (e.g. "30s", "2m"). By default, no timeout is applied. |
| queryParams | map | false | Arbitrary DSN parameters passed to the driver (e.g. `tls: preferred`, `charset: utf8mb4`). Useful for enabling TLS or other connection options. |
========================================================================
## mysql-execute-sql Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > MySQL Source > mysql-execute-sql Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/mysql/mysql-execute-sql/
**Description:** A "mysql-execute-sql" tool executes a SQL statement against a MySQL database.
## About
A `mysql-execute-sql` tool executes a SQL statement against a MySQL
database.
`mysql-execute-sql` takes one input parameter, `sql`, and runs the SQL
statement against the `source`.
> **Note:** This tool is intended for developer assistant workflows with
> human-in-the-loop and shouldn't be used for production agents.
## Compatible Sources
{{< compatible-sources others="integrations/cloud-sql-mysql">}}
## Example
```yaml
kind: tools
name: execute_sql_tool
type: mysql-execute-sql
source: my-mysql-instance
description: Use this tool to execute a SQL statement.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:------------------------------------------:|:------------:|--------------------------------------------------------------------------------------------------|
| type | string | true | Must be "mysql-execute-sql". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## mysql-get-query-plan Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > MySQL Source > mysql-get-query-plan Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/mysql/mysql-get-query-plan/
**Description:** A "mysql-get-query-plan" tool gets the execution plan for a SQL statement against a MySQL database.
## About
A `mysql-get-query-plan` tool gets the execution plan for a SQL statement against a MySQL
database.
`mysql-get-query-plan` takes one input parameter `sql_statement` and gets the execution plan for the SQL
statement against the `source`.
## Compatible Sources
{{< compatible-sources others="integrations/cloud-sql-mysql">}}
## Example
```yaml
kind: tools
name: get_query_plan_tool
type: mysql-get-query-plan
source: my-mysql-instance
description: Use this tool to get the execution plan for a SQL statement.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:------------------------------------------:|:------------:|--------------------------------------------------------------------------------------------------|
| type | string | true | Must be "mysql-get-query-plan". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## mysql-list-active-queries Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > MySQL Source > mysql-list-active-queries Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/mysql/mysql-list-active-queries/
**Description:** A "mysql-list-active-queries" tool lists active queries in a MySQL database.
## About
A `mysql-list-active-queries` tool retrieves information about active queries in
a MySQL database.
`mysql-list-active-queries` outputs detailed information as JSON for current
active queries, ordered by execution time in descending order.
This tool takes 2 optional input parameters:
- `min_duration_secs` (optional): Only show queries running for at least this
long in seconds, default `0`.
- `limit` (optional): max number of queries to return, default `10`.
## Compatible Sources
{{< compatible-sources others="integrations/cloud-sql-mysql">}}
## Example
```yaml
kind: tools
name: list_active_queries
type: mysql-list-active-queries
source: my-mysql-instance
description: Lists top N (default 10) ongoing queries from processlist and innodb_trx, ordered by execution time in descending order. Returns detailed information of those queries in json format, including process id, query, transaction duration, transaction wait duration, process time, transaction state, process state, username with host, transaction rows locked, transaction rows modified, and db schema.
```
The response is a JSON array with the following fields:
```json
{
"proccess_id": "id of the MySQL process/connection this query belongs to",
"query": "query text",
"trx_started": "the time when the transaction (this query belongs to) started",
"trx_duration_seconds": "the total elapsed time (in seconds) of the owning transaction so far",
"trx_wait_duration_seconds": "the total wait time (in seconds) of the owning transaction so far",
"query_time": "the time (in seconds) that the owning connection has been in its current state",
"trx_state": "the transaction execution state",
"proces_state": "the current state of the owning connection",
"user": "the user who issued this query",
"trx_rows_locked": "the approximate number of rows locked by the owning transaction",
"trx_rows_modified": "the approximate number of rows modified by the owning transaction",
"db": "the default database for the owning connection"
}
```
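The two optional parameters map naturally onto a filter-and-limit over the returned rows. A client-side sketch of the equivalent selection on sample data (field names follow the response documented above; the values are invented for the demo):

```python
sample = [
    {"proccess_id": 11, "trx_duration_seconds": 120.0},
    {"proccess_id": 12, "trx_duration_seconds": 4.0},
    {"proccess_id": 13, "trx_duration_seconds": 45.0},
]

def select_queries(rows, min_duration_secs=0, limit=10):
    # Keep rows running at least min_duration_secs, longest first, capped at limit.
    kept = [r for r in rows if r["trx_duration_seconds"] >= min_duration_secs]
    kept.sort(key=lambda r: r["trx_duration_seconds"], reverse=True)
    return kept[:limit]

top = select_queries(sample, min_duration_secs=10, limit=2)
print([r["proccess_id"] for r in top])  # [11, 13]
```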
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "mysql-list-active-queries". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## mysql-list-table-fragmentation Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > MySQL Source > mysql-list-table-fragmentation Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/mysql/mysql-list-table-fragmentation/
**Description:** A "mysql-list-table-fragmentation" tool lists the top N fragmented tables in MySQL.
## About
A `mysql-list-table-fragmentation` tool checks table fragmentation of MySQL
tables by calculating the size of the data and index files in bytes and
comparing with free space allocated to each table. This tool calculates
`fragmentation_percentage` which represents the proportion of free space
relative to the total data and index size.
`mysql-list-table-fragmentation` outputs detailed information as JSON, ordered
by the fragmentation percentage in descending order.
This tool takes 4 optional input parameters:
- `table_schema` (optional): The database where fragmentation check is to be
executed. Check all tables visible to the current user if not specified.
- `table_name` (optional): Name of the table to be checked. Check all tables
visible to the current user if not specified.
- `data_free_threshold_bytes` (optional): Only show tables with at least this
much free space in bytes. Default 1.
- `limit` (optional): Max rows to return, default 10.
## Compatible Sources
{{< compatible-sources others="integrations/cloud-sql-mysql">}}
## Example
```yaml
kind: tools
name: list_table_fragmentation
type: mysql-list-table-fragmentation
source: my-mysql-instance
description: List table fragmentation in MySQL, by calculating the size of the data and index files and free space allocated to each table. The query calculates fragmentation percentage which represents the proportion of free space relative to the total data and index size. Storage can be reclaimed for tables with high fragmentation using OPTIMIZE TABLE.
```
The response is a JSON array with the following fields:
```json
{
"table_schema": "The schema/database this table belongs to",
"table_name": "Name of this table",
"data_size": "Size of the table data in bytes",
"index_size": "Size of the table's indexes in bytes",
"data_free": "Free space (bytes) available in the table's data file",
"fragmentation_percentage": "How much fragmentation this table has"
}
```
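One plausible reading of the description above is that `fragmentation_percentage` is the free space as a share of the combined data and index size. A sketch of that calculation (illustrative — the exact SQL the tool runs may differ):

```python
def fragmentation_percentage(data_size: int, index_size: int, data_free: int) -> float:
    # Free space relative to the total data + index size, as a percentage.
    return 100.0 * data_free / (data_size + index_size)

# A table with 3 MB of data, 1 MB of indexes, and 1 MB of free space.
print(fragmentation_percentage(3_000_000, 1_000_000, 1_000_000))  # 25.0
```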
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "mysql-list-table-fragmentation". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## mysql-list-tables Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > MySQL Source > mysql-list-tables Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/mysql/mysql-list-tables/
**Description:** The "mysql-list-tables" tool lists schema information for all or specified tables in a MySQL database.
## About
The `mysql-list-tables` tool retrieves schema information for all or specified
tables in a MySQL database.
`mysql-list-tables` lists detailed schema information (object type, columns,
constraints, indexes, triggers, owner, comment) as JSON for user-created tables
(ordinary or partitioned). Filters by a comma-separated list of names. If names
are omitted, it lists all tables in user schemas. The output format can be set
to `simple`, which returns only the table names, or `detailed`, which is the
default.
The tool takes the following input parameters:
| Parameter | Type | Description | Required |
|:----------------|:-------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------|
| `table_names` | string | Filters by a comma-separated list of names. By default, it lists all tables in user schemas. Default: `""` | No |
| `output_format` | string | Indicate the output format of table schema. `simple` will return only the table names, `detailed` will return the full table information. Default: `detailed`. | No |
## Compatible Sources
{{< compatible-sources others="integrations/cloud-sql-mysql">}}
## Example
```yaml
kind: tools
name: mysql_list_tables
type: mysql-list-tables
source: mysql-source
description: Use this tool to retrieve schema information for all or specified tables. Output format can be simple (only table names) or detailed.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|------------------------------------------------------|
| type | string | true | Must be "mysql-list-tables". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the agent. |
========================================================================
## mysql-list-tables-missing-unique-indexes Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > MySQL Source > mysql-list-tables-missing-unique-indexes Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/mysql/mysql-list-tables-missing-unique-indexes/
**Description:** A "mysql-list-tables-missing-unique-indexes" tool lists tables that do not have primary or unique indices in a MySQL instance.
## About
A `mysql-list-tables-missing-unique-indexes` tool searches tables that do not
have primary or unique indices in a MySQL database.
`mysql-list-tables-missing-unique-indexes` outputs table names, including
`table_schema` and `table_name` in JSON format. It takes 2 optional input
parameters:
- `table_schema` (optional): Only check tables in this specific schema/database.
Search all visible tables in all visible databases if not specified.
- `limit` (optional): max number of tables to return, default `50`.
## Compatible Sources
{{< compatible-sources others="integrations/cloud-sql-mysql">}}
## Example
```yaml
kind: tools
name: list_tables_missing_unique_indexes
type: mysql-list-tables-missing-unique-indexes
source: my-mysql-instance
description: Find tables that do not have a primary or unique key constraint. A primary key or unique key is the only mechanism that guarantees a row is unique. Without them, database-level protection against data integrity issues is missing.
```
The response is a JSON array with the following fields:
```json
{
"table_schema": "the schema/database this table belongs to",
"table_name": "name of the table"
}
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type        | string   | true         | Must be "mysql-list-tables-missing-unique-indexes". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## mysql-sql Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > MySQL Source > mysql-sql Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/mysql/mysql-sql/
**Description:** A "mysql-sql" tool executes a pre-defined SQL statement against a MySQL database.
## About
A `mysql-sql` tool executes a pre-defined SQL statement against a MySQL
database.
The specified SQL statement is executed as a [prepared statement][mysql-prepare],
and expects parameters in the SQL query to be in the form of placeholders `?`.
[mysql-prepare]: https://dev.mysql.com/doc/refman/8.4/en/sql-prepared-statements.html
## Compatible Sources
{{< compatible-sources others="integrations/cloud-sql-mysql">}}
## Example
> **Note:** This tool uses parameterized queries to prevent SQL injections.
> Query parameters can be used as substitutes for arbitrary expressions.
> Parameters cannot be used as substitutes for identifiers, column names, table
> names, or other parts of the query.
```yaml
kind: tools
name: search_flights_by_number
type: mysql-sql
source: my-mysql-instance
statement: |
SELECT * FROM flights
WHERE airline = ?
AND flight_number = ?
LIMIT 10
description: |
Use this tool to get information for a specific flight.
Takes an airline code and flight number and returns info on the flight.
Do NOT use this tool with a flight id. Do NOT guess an airline code or flight number.
An airline code is a two-character airline designator followed by a flight
number, which is a 1 to 4 digit number.
For example, if given CY 0123, the airline is "CY", and flight_number is "123".
Another example for this is DL 1234, the airline is "DL", and flight_number is "1234".
If the tool returns more than one option, choose the date closest to today.
Example:
{{
"airline": "CY",
"flight_number": "888"
}}
Example:
{{
"airline": "DL",
"flight_number": "1234"
}}
parameters:
- name: airline
type: string
description: Airline unique 2 letter identifier
- name: flight_number
type: string
description: 1 to 4 digit number
```
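The `?` placeholders are bound positionally from the `parameters` list at execution time. The same mechanism can be seen with Python's `sqlite3` module, used here as a stand-in for a MySQL driver (the table and rows are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE flights (airline TEXT, flight_number TEXT)")
conn.execute("INSERT INTO flights VALUES ('CY', '123'), ('DL', '1234')")

# Placeholders are bound to values in order; identifiers (table and
# column names) cannot be parameterized this way.
rows = conn.execute(
    "SELECT * FROM flights WHERE airline = ? AND flight_number = ? LIMIT 10",
    ("CY", "123"),
).fetchall()
print(rows)  # [('CY', '123')]
```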
### Example with Template Parameters
> **Note:** This tool allows direct modifications to the SQL statement,
> including identifiers, column names, and table names. **This makes it more
> vulnerable to SQL injections**. Using basic parameters only (see above) is
> recommended for performance and safety reasons. For more details, please check
> [templateParameters](..#template-parameters).
```yaml
kind: tools
name: list_table
type: mysql-sql
source: my-mysql-instance
statement: |
SELECT * FROM {{.tableName}};
description: |
Use this tool to list all information from a specific table.
Example:
{{
"tableName": "flights"
}}
templateParameters:
- name: tableName
type: string
description: Table to select from
```
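Template parameters are spliced into the statement as raw text before the prepared statement is built, which is why they can name tables but also why they are injection-prone. A Python sketch of the substitution and its hazard (`render` is a hypothetical helper mimicking Go's `text/template` expansion):

```python
def render(statement: str, template_params: dict) -> str:
    # Raw text substitution: no quoting or escaping is applied.
    for name, value in template_params.items():
        statement = statement.replace("{{." + name + "}}", str(value))
    return statement

safe = render("SELECT * FROM {{.tableName}};", {"tableName": "flights"})
print(safe)  # SELECT * FROM flights;

# A hostile value passes straight through -- restrict who can invoke such tools.
unsafe = render("SELECT * FROM {{.tableName}};",
                {"tableName": "flights; DROP TABLE flights"})
print(unsafe)  # SELECT * FROM flights; DROP TABLE flights;
```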
## Reference
| **field** | **type** | **required** | **description** |
|--------------------|:------------------------------------------------:|:------------:|--------------------------------------------------------------------------------------------------------------------------------------------|
| type | string | true | Must be "mysql-sql". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| statement          | string                                           | true         | SQL statement to execute.                                                                                                                   |
| parameters | [parameters](../#specifying-parameters) | false | List of [parameters](../#specifying-parameters) that will be inserted into the SQL statement. |
| templateParameters | [templateParameters](..#template-parameters) | false | List of [templateParameters](..#template-parameters) that will be inserted into the SQL statement before executing prepared statement. |
========================================================================
## Neo4j Source
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Neo4j Source
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/neo4j/
**Description:** Neo4j is a powerful, open source graph database system
## About
[Neo4j][neo4j-docs] is a powerful, open source graph database system with over
15 years of active development that has earned it a strong reputation for
reliability, feature robustness, and performance.
[neo4j-docs]: https://neo4j.com/docs
## Available Tools
{{< list-tools >}}
## Requirements
### Database User
This source only uses standard authentication. You will need to [create a Neo4j
user][neo4j-users] to log in to the database with, or use the default `neo4j`
user if available.
[neo4j-users]: https://neo4j.com/docs/operations-manual/current/authentication-authorization/manage-users/
## Example
```yaml
kind: sources
name: my-neo4j-source
type: neo4j
uri: neo4j+s://xxxx.databases.neo4j.io:7687
user: ${USER_NAME}
password: ${PASSWORD}
database: "neo4j"
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
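The substitution the tip describes can be sketched in a few lines. This is a minimal illustration of the `${ENV_NAME}` format, not the Toolbox's own implementation:

```python
import os
import re

def expand_env_vars(text: str) -> str:
    """Replace ${ENV_NAME} placeholders with values from the environment.

    Missing variables raise KeyError so that an unset secret fails fast
    instead of silently producing an empty credential.
    """
    def replace(match: re.Match) -> str:
        name = match.group(1)
        if name not in os.environ:
            raise KeyError(f"environment variable {name!r} is not set")
        return os.environ[name]

    return re.sub(r"\$\{(\w+)\}", replace, text)

os.environ["USER_NAME"] = "neo4j"
config = expand_env_vars("user: ${USER_NAME}")
```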
## Reference
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|----------------------------------------------------------------------|
| type | string | true | Must be "neo4j". |
| uri       | string   | true         | Connection URI (e.g. "bolt://localhost", "neo4j+s://xxx.databases.neo4j.io"). |
| user | string | true | Name of the Neo4j user to connect as (e.g. "neo4j"). |
| password | string | true | Password of the Neo4j user (e.g. "my-password"). |
| database | string | true | Name of the Neo4j database to connect to (e.g. "neo4j"). |
========================================================================
## neo4j-cypher Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Neo4j Source > neo4j-cypher Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/neo4j/neo4j-cypher/
**Description:** A "neo4j-cypher" tool executes a pre-defined cypher statement against a Neo4j database.
## About
A `neo4j-cypher` tool executes a pre-defined Cypher statement against a Neo4j
database.
The specified Cypher statement is executed as a [parameterized
statement][neo4j-parameters], and specified parameters will be used according to
their name: e.g. `$id`.
> **Note:** This tool uses parameterized queries to prevent injection attacks.
> Query parameters can be used as substitutes for arbitrary expressions.
> Parameters cannot be used as substitutes for identifiers such as labels,
> relationship types, or property names, or for other structural parts of the query.
[neo4j-parameters]:
https://neo4j.com/docs/cypher-manual/current/syntax/parameters/
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: search_movies_by_actor
type: neo4j-cypher
source: my-neo4j-movies-instance
statement: |
MATCH (m:Movie)<-[:ACTED_IN]-(p:Person)
WHERE p.name = $name AND m.year > $year
RETURN m.title, m.year
LIMIT 10
description: |
Use this tool to get a list of movies for a specific actor and a given minimum release year.
Takes a full actor name, e.g. "Tom Hanks", and a year, e.g. 1993, and returns a list of movie titles and release years.
Do NOT use this tool with a movie title. Do NOT guess an actor name. Do NOT guess a year.
An actor name is a fully qualified name with first and last name separated by a space.
For example, if given "Hanks, Tom" the actor name is "Tom Hanks".
If the tool returns more than one option choose the most recent movies.
Example:
{{
"name": "Meg Ryan",
"year": 1993
}}
Example:
{{
"name": "Clint Eastwood",
"year": 2000
}}
parameters:
- name: name
type: string
description: Full actor name, "firstname lastname"
- name: year
type: integer
description: 4 digit number starting in 1900 up to the current year
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:---------------------------------------:|:------------:|----------------------------------------------------------------------------------------------|
| type | string | true | Must be "neo4j-cypher". |
| source | string | true | Name of the source the Cypher query should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| statement   | string                                  | true         | Cypher statement to execute.                                                                  |
| parameters | [parameters](../#specifying-parameters) | false | List of [parameters](../#specifying-parameters) that will be used with the Cypher statement. |
========================================================================
## neo4j-execute-cypher Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Neo4j Source > neo4j-execute-cypher Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/neo4j/neo4j-execute-cypher/
**Description:** A "neo4j-execute-cypher" tool executes any arbitrary Cypher statement against a Neo4j database.
## About
A `neo4j-execute-cypher` tool executes an arbitrary Cypher query provided as a
string parameter against a Neo4j database. It's designed to be a flexible tool
for interacting with the database when a pre-defined query is not sufficient.
For security, the tool can be configured to be read-only. If the `readOnly` flag
is set to `true`, the tool will analyze the incoming Cypher query and reject any
write operations (like `CREATE`, `MERGE`, `DELETE`, etc.) before execution.
The Cypher query uses standard [Neo4j
Cypher](https://neo4j.com/docs/cypher-manual/current/queries/) syntax and
supports all Cypher features, including pattern matching, filtering, and
aggregation.
`neo4j-execute-cypher` takes a required input parameter `cypher` and runs the
Cypher query against the `source`. It also supports an optional `dry_run`
parameter to validate a query without executing it.
> **Note:** This tool is intended for developer assistant workflows with
> human-in-the-loop and shouldn't be used for production agents.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: query_neo4j
type: neo4j-execute-cypher
source: my-neo4j-prod-db
readOnly: true
description: |
Use this tool to execute a Cypher query against the production database.
Only read-only queries are allowed.
Takes a single 'cypher' parameter containing the full query string.
Example:
{{
"cypher": "MATCH (m:Movie {title: 'The Matrix'}) RETURN m.released"
}}
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|------------------------------------------------------------------------------------------------------|
| type        | string   | true         | Must be "neo4j-execute-cypher".                                                                       |
| source | string | true | Name of the source the Cypher query should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| readOnly | boolean | false | If set to `true`, the tool will reject any write operations in the Cypher query. Default is `false`. |
========================================================================
## neo4j-schema Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Neo4j Source > neo4j-schema Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/neo4j/neo4j-schema/
**Description:** A "neo4j-schema" tool extracts a comprehensive schema from a Neo4j database.
## About
A `neo4j-schema` tool connects to a Neo4j database and extracts its complete
schema information. It runs multiple queries concurrently to efficiently gather
details about node labels, relationships, properties, constraints, and indexes.
The tool automatically detects if the APOC (Awesome Procedures on Cypher)
library is available. If so, it uses APOC procedures like `apoc.meta.schema` for
a highly detailed overview of the database structure; otherwise, it falls back
to using native Cypher queries.
The extracted schema is **cached** to improve performance for subsequent
requests. The output is a structured JSON object containing all the schema
details, which can be invaluable for providing database context to an LLM. This
tool is compatible with a `neo4j` source and takes no parameters.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: get_movie_db_schema
type: neo4j-schema
source: my-neo4j-movies-instance
description: |
Use this tool to get the full schema of the movie database.
This provides information on all available node labels (like Movie, Person),
relationships (like ACTED_IN), and the properties on each.
This tool takes no parameters.
# Optional configuration to cache the schema for 2 hours
cacheExpireMinutes: 120
```
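The caching behaviour that `cacheExpireMinutes` controls can be pictured as a simple TTL wrapper around the schema extraction. This is a sketch of the idea, not the Toolbox internals:

```python
import time

class TTLCache:
    """Cache a computed value for a fixed number of minutes.

    The first call computes and stores the value; later calls within the
    expiry window return the cached copy without recomputing.
    """

    def __init__(self, expire_minutes: int = 60):
        self.expire_seconds = expire_minutes * 60
        self._value = None
        self._stored_at = None

    def get(self, compute):
        now = time.monotonic()
        if self._stored_at is None or now - self._stored_at > self.expire_seconds:
            self._value = compute()
            self._stored_at = now
        return self._value

calls = []
cache = TTLCache(expire_minutes=120)
cache.get(lambda: calls.append(1) or {"labels": ["Movie", "Person"]})
schema = cache.get(lambda: calls.append(1) or {"labels": ["Movie", "Person"]})
assert len(calls) == 1  # second call was served from the cache
```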
## Reference
| **field** | **type** | **required** | **description** |
|--------------------|:--------:|:------------:|---------------------------------------------------------|
| type | string | true | Must be `neo4j-schema`. |
| source | string | true | Name of the source the schema should be extracted from. |
| description | string | true | Description of the tool that is passed to the LLM. |
| cacheExpireMinutes | integer | false | Cache expiration time in minutes. Defaults to 60. |
========================================================================
## OceanBase Source
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > OceanBase Source
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/oceanbase/
**Description:** OceanBase is a distributed relational database that provides high availability, scalability, and compatibility with MySQL.
## About
[OceanBase][oceanbase-docs] is a distributed relational database management
system (RDBMS) that provides high availability, scalability, and strong
consistency. It's designed to handle large-scale data processing and is
compatible with MySQL, making it easy for developers to migrate from MySQL to
OceanBase.
[oceanbase-docs]: https://www.oceanbase.com/
### Features
#### MySQL Compatibility
OceanBase is highly compatible with MySQL, supporting most MySQL SQL syntax,
data types, and functions. This makes it easy to migrate existing MySQL
applications to OceanBase.
#### High Availability
OceanBase provides automatic failover and data replication across multiple
nodes, ensuring high availability and data durability.
#### Scalability
OceanBase can scale horizontally by adding more nodes to the cluster, making it
suitable for large-scale applications.
#### Strong Consistency
OceanBase provides strong consistency guarantees, ensuring that all transactions
are ACID compliant.
## Available Tools
{{< list-tools >}}
## Requirements
### Database User
This source only uses standard authentication. You will need to create an
OceanBase user to log in to the database with. OceanBase supports
MySQL-compatible user management syntax.
### Network Connectivity
Ensure that your application can connect to the OceanBase cluster. OceanBase
typically runs on ports 2881 (for MySQL protocol) or 3881 (for MySQL protocol
with SSL).
## Example
```yaml
kind: sources
name: my-oceanbase-source
type: oceanbase
host: 127.0.0.1
port: 2881
database: my_db
user: ${USER_NAME}
password: ${PASSWORD}
queryTimeout: 30s # Optional: query timeout duration
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
## Reference
| **field** | **type** | **required** | **description** |
| ------------ | :------: | :----------: |-------------------------------------------------------------------------------------------------|
| type | string | true | Must be "oceanbase". |
| host | string | true | IP address to connect to (e.g. "127.0.0.1"). |
| port | string | true | Port to connect to (e.g. "2881"). |
| database | string | true | Name of the OceanBase database to connect to (e.g. "my_db"). |
| user | string | true | Name of the OceanBase user to connect as (e.g. "my-oceanbase-user"). |
| password | string | true | Password of the OceanBase user (e.g. "my-password"). |
| queryTimeout | string | false | Maximum time to wait for query execution (e.g. "30s", "2m"). By default, no timeout is applied. |
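The `queryTimeout` field uses Go-style duration strings. A rough parser for the formats shown ("30s", "2m") illustrates how they map to seconds; the `parse_timeout` helper here is hypothetical and not part of the Toolbox:

```python
import re

def parse_timeout(value: str) -> float:
    """Parse a Go-style duration like "30s" or "2m" into seconds.

    Supports a single number-plus-unit token; Go's full duration syntax
    also allows compound values like "1m30s", which this sketch omits.
    """
    units = {"ms": 0.001, "s": 1, "m": 60, "h": 3600}
    match = re.fullmatch(r"(\d+(?:\.\d+)?)(ms|s|m|h)", value.strip())
    if not match:
        raise ValueError(f"unrecognized duration: {value!r}")
    return float(match.group(1)) * units[match.group(2)]

assert parse_timeout("30s") == 30
assert parse_timeout("2m") == 120
```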
========================================================================
## oceanbase-execute-sql Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > OceanBase Source > oceanbase-execute-sql Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/oceanbase/oceanbase-execute-sql/
**Description:** An "oceanbase-execute-sql" tool executes a SQL statement against an OceanBase database.
## About
An `oceanbase-execute-sql` tool executes a SQL statement against an OceanBase
database.
`oceanbase-execute-sql` takes one input parameter `sql` and runs the SQL
statement against the `source`.
> **Note:** This tool is intended for developer assistant workflows with
> human-in-the-loop and shouldn't be used for production agents.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: execute_sql_tool
type: oceanbase-execute-sql
source: my-oceanbase-instance
description: Use this tool to execute a SQL statement.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "oceanbase-execute-sql". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## oceanbase-sql Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > OceanBase Source > oceanbase-sql Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/oceanbase/oceanbase-sql/
**Description:** An "oceanbase-sql" tool executes a pre-defined SQL statement against an OceanBase database.
## About
An `oceanbase-sql` tool executes a pre-defined SQL statement against an
OceanBase database.
The specified SQL statement is executed as a [prepared
statement][mysql-prepare], and expects parameters in the SQL query to be in the
form of placeholders `?`.
[mysql-prepare]: https://dev.mysql.com/doc/refman/8.4/en/sql-prepared-statements.html
## Compatible Sources
{{< compatible-sources >}}
## Example
> **Note:** This tool uses parameterized queries to prevent SQL injections.
> Query parameters can be used as substitutes for arbitrary expressions.
> Parameters cannot be used as substitutes for identifiers, column names, table
> names, or other parts of the query.
```yaml
kind: tools
name: search_flights_by_number
type: oceanbase-sql
source: my-oceanbase-instance
statement: |
SELECT * FROM flights
WHERE airline = ?
AND flight_number = ?
LIMIT 10
description: |
Use this tool to get information for a specific flight.
Takes an airline code and flight number and returns info on the flight.
Do NOT use this tool with a flight id. Do NOT guess an airline code or flight number.
Example:
{{
"airline": "CY",
"flight_number": "888",
}}
parameters:
- name: airline
type: string
description: Airline unique 2 letter identifier
- name: flight_number
type: string
description: 1 to 4 digit number
```
### Example with Template Parameters
> **Note:** This tool allows direct modifications to the SQL statement,
> including identifiers, column names, and table names. **This makes it more
> vulnerable to SQL injections**. Using basic parameters only (see above) is
> recommended for performance and safety reasons.
```yaml
kind: tools
name: list_table
type: oceanbase-sql
source: my-oceanbase-instance
statement: |
SELECT * FROM {{.tableName}};
description: |
Use this tool to list all information from a specific table.
Example:
{{
"tableName": "flights",
}}
templateParameters:
- name: tableName
type: string
description: Table to select from
```
### Example with Array Parameters
```yaml
kind: tools
name: search_flights_by_ids
type: oceanbase-sql
source: my-oceanbase-instance
statement: |
SELECT * FROM flights
WHERE id IN (?)
AND status IN (?)
description: |
Use this tool to get information for multiple flights by their IDs and statuses.
Example:
{{
"flight_ids": [1, 2, 3],
"statuses": ["active", "scheduled"]
}}
parameters:
- name: flight_ids
type: array
description: List of flight IDs to search for
items:
name: flight_id
type: integer
description: Individual flight ID
- name: statuses
type: array
description: List of flight statuses to filter by
items:
name: status
type: string
description: Individual flight status
```
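Binding an array to a single `?` requires expanding the placeholder to one `?` per element before the statement is prepared. The helper below is a simplified sketch of that expansion, not the Toolbox's own implementation:

```python
def expand_in_placeholders(statement: str, params: list) -> tuple[str, list]:
    """Expand each `?` bound to an array into one `?` per element,
    flattening the argument list to match.
    """
    flat = []
    parts = statement.split("?")
    out = [parts[0]]
    for value, tail in zip(params, parts[1:]):
        if isinstance(value, list):
            out.append(", ".join(["?"] * len(value)))
            flat.extend(value)
        else:
            out.append("?")
            flat.append(value)
        out.append(tail)
    return "".join(out), flat

sql, args = expand_in_placeholders(
    "SELECT * FROM flights WHERE id IN (?) AND status IN (?)",
    [[1, 2, 3], ["active", "scheduled"]],
)
assert sql == "SELECT * FROM flights WHERE id IN (?, ?, ?) AND status IN (?, ?)"
assert args == [1, 2, 3, "active", "scheduled"]
```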
## Reference
| **field** | **type** | **required** | **description** |
|--------------------|:--------------------------------------------:|:------------:|----------------------------------------------------------------------------------------------------------------------------------------|
| type | string | true | Must be "oceanbase-sql". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| statement          | string                                       | true         | SQL statement to execute.                                                                                                                |
| parameters         | [parameters](../#specifying-parameters)       | false        | List of [parameters](../#specifying-parameters) that will be inserted into the SQL statement.                                             |
| templateParameters | [templateParameters](../#template-parameters) | false        | List of [templateParameters](../#template-parameters) that will be inserted into the SQL statement before the prepared statement is executed. |
========================================================================
## Oracle Source
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Oracle Source
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/oracle/
**Description:** Oracle Database is a widely-used relational database management system.
## About
[Oracle Database][oracle-docs] is a multi-model database management system
produced and marketed by Oracle Corporation. It is commonly used for running
online transaction processing (OLTP), data warehousing (DW), and mixed (OLTP &
DW) database workloads.
[oracle-docs]: https://www.oracle.com/database/
## Available Tools
{{< list-tools >}}
## Requirements
### Database User
This source uses standard authentication. You will need to [create an Oracle
user][oracle-users] to log in to the database with the necessary permissions.
[oracle-users]:
https://docs.oracle.com/en/database/oracle/oracle-database/21/sqlrf/CREATE-USER.html
### Oracle Driver Requirement (Conditional)
The Oracle source offers two connection drivers:
1. **Pure Go Driver (`useOCI: false`, default):** Uses the `go-ora` library.
This driver is simpler and does not require any local Oracle software
installation, but it **lacks support for advanced features** like Oracle
Wallets or Kerberos authentication.
2. **OCI-Based Driver (`useOCI: true`):** Uses the `godror` library, which
provides access to **advanced Oracle features** like Digital Wallet support.
If you set `useOCI: true`, you **must** install the **Oracle Instant Client**
libraries on the machine where this tool runs.
You can download the Instant Client from the official Oracle website: [Oracle
Instant Client
Downloads](https://www.oracle.com/database/technologies/instant-client/downloads.html)
### Connection Methods
You can configure the connection to your Oracle database using one of the
following three methods. **You should only use one method** in your source
configuration.
#### Basic Connection (Host/Port/Service Name)
This is the most straightforward method, where you provide the connection
details as separate fields:
- `host`: The IP address or hostname of the database server.
- `port`: The port number the Oracle listener is running on (typically 1521).
- `serviceName`: The service name for the database instance you wish to connect
to.
#### Connection String
As an alternative, you can provide all the connection details in a single
`connectionString`. This is a convenient way to consolidate the connection
information. The typical format is `hostname:port/servicename`.
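The `hostname:port/servicename` format decomposes as follows. The `split_connection_string` helper is purely illustrative; the Toolbox accepts the string as-is via the `connectionString` field:

```python
def split_connection_string(conn: str) -> dict:
    """Split a "hostname:port/servicename" string into its parts."""
    host_port, _, service = conn.partition("/")
    host, _, port = host_port.partition(":")
    if not (host and port and service):
        raise ValueError(f"expected hostname:port/servicename, got {conn!r}")
    return {"host": host, "port": int(port), "serviceName": service}

assert split_connection_string("127.0.0.1:1521/XEPDB1") == {
    "host": "127.0.0.1",
    "port": 1521,
    "serviceName": "XEPDB1",
}
```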
#### TNS Alias
For environments that use a `tnsnames.ora` configuration file, you can connect
using a TNS (Transparent Network Substrate) alias.
- `tnsAlias`: Specify the alias name defined in your `tnsnames.ora` file.
- `tnsAdmin` (Optional): If your configuration file is not in a standard
location, you can use this field to provide the path to the directory
containing it. This setting will override the `TNS_ADMIN` environment
variable.
## Example
This example demonstrates the three connection methods you can choose from:
```yaml
kind: sources
name: my-oracle-source
type: oracle
# --- Choose one connection method ---
# 1. Host, Port, and Service Name
host: 127.0.0.1
port: 1521
serviceName: XEPDB1
# 2. Direct Connection String
connectionString: "127.0.0.1:1521/XEPDB1"
# 3. TNS Alias (requires tnsnames.ora)
tnsAlias: "MY_DB_ALIAS"
tnsAdmin: "/opt/oracle/network/admin" # Optional: overrides TNS_ADMIN env var
user: ${USER_NAME}
password: ${PASSWORD}
# Optional: Set to true to use the OCI-based driver for advanced features (requires Oracle Instant Client)
useOCI: false
```
### Using an Oracle Wallet
Oracle Wallet allows you to store the credentials used for a database connection. The wallet configuration differs depending on whether you are using the OCI-based driver.
#### Pure Go Driver (`useOCI: false`) - Oracle Wallet
The `go-ora` driver uses the `walletLocation` field to connect to a database secured with an Oracle Wallet without standard username and password.
```yaml
kind: sources
name: pure-go-wallet
type: oracle
connectionString: "127.0.0.1:1521/XEPDB1"
user: ${USER_NAME}
password: ${PASSWORD}
# The TNS Alias is often required to connect to a service registered in tnsnames.ora
tnsAlias: "SECURE_DB_ALIAS"
walletLocation: "/path/to/my/wallet/directory"
```
#### OCI-Based Driver (`useOCI: true`) - Oracle Wallet
For the OCI-based driver, wallet authentication is triggered by setting `tnsAdmin` to the wallet directory and connecting via a `tnsAlias`.
```yaml
kind: sources
name: oci-wallet
type: oracle
connectionString: "127.0.0.1:1521/XEPDB1"
user: ${USER_NAME}
password: ${PASSWORD}
tnsAlias: "WALLET_DB_ALIAS"
tnsAdmin: "/opt/oracle/wallet" # Directory containing tnsnames.ora, sqlnet.ora, and wallet files
useOCI: true
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
## Reference
| **field** | **type** | **required** | **description** |
|------------------|:--------:|:------------:|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| type | string | true | Must be "oracle". |
| user | string | true | Name of the Oracle user to connect as (e.g. "my-oracle-user"). |
| password | string | true | Password of the Oracle user (e.g. "my-password"). |
| host | string | false | IP address or hostname to connect to (e.g. "127.0.0.1"). Required if not using `connectionString` or `tnsAlias`. |
| port | integer | false | Port to connect to (e.g. "1521"). Required if not using `connectionString` or `tnsAlias`. |
| serviceName | string | false | The Oracle service name of the database to connect to. Required if not using `connectionString` or `tnsAlias`. |
| connectionString | string | false | A direct connection string (e.g. "hostname:port/servicename"). Use as an alternative to `host`, `port`, and `serviceName`. |
| tnsAlias | string | false | A TNS alias from a `tnsnames.ora` file. Use as an alternative to `host`/`port` or `connectionString`. |
| tnsAdmin | string | false | Path to the directory containing the `tnsnames.ora` file. This overrides the `TNS_ADMIN` environment variable if it is set. |
| useOCI | bool | false | If true, uses the OCI-based driver (godror) which supports Oracle Wallet/Kerberos but requires the Oracle Instant Client libraries to be installed. Defaults to false (pure Go driver). |
========================================================================
## oracle-execute-sql Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Oracle Source > oracle-execute-sql Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/oracle/oracle-execute-sql/
**Description:** An "oracle-execute-sql" tool executes a SQL statement against an Oracle database.
## About
An `oracle-execute-sql` tool executes a SQL statement against an Oracle
database.
`oracle-execute-sql` takes one input parameter `sql` and runs the SQL
statement against the `source`.
> **Note:** This tool is intended for developer assistant workflows with
> human-in-the-loop and shouldn't be used for production agents.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: execute_sql_tool
type: oracle-execute-sql
source: my-oracle-instance
description: Use this tool to execute a SQL statement.
```
========================================================================
## oracle-sql Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Oracle Source > oracle-sql Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/oracle/oracle-sql/
**Description:** An "oracle-sql" tool executes a pre-defined SQL statement against an Oracle database.
## About
An `oracle-sql` tool executes a pre-defined SQL statement against an
Oracle database.
The specified SQL statement is executed using [prepared statements][oracle-stmt]
for security and performance. It expects parameter placeholders in the SQL query
to be in the native Oracle format (e.g., `:1`, `:2`).
By default, tools are configured as **read-only** (SAFE mode). To execute data modification
statements (INSERT, UPDATE, DELETE), you must explicitly set the `readOnly`
field to `false`.
[oracle-stmt]: https://docs.oracle.com/javase/tutorial/jdbc/basics/prepared.html
## Compatible Sources
{{< compatible-sources >}}
## Example
> **Note:** This tool uses parameterized queries to prevent SQL injections.
> Query parameters can be used as substitutes for arbitrary expressions.
> Parameters cannot be used as substitutes for identifiers, column names, table
> names, or other parts of the query.
```yaml
kind: tools
name: search_flights_by_number
type: oracle-sql
source: my-oracle-instance
statement: |
SELECT * FROM flights
WHERE airline = :1
AND flight_number = :2
FETCH FIRST 10 ROWS ONLY
description: |
Use this tool to get information for a specific flight.
Takes an airline code and flight number and returns info on the flight.
Do NOT use this tool with a flight id. Do NOT guess an airline code or flight number.
Example:
{{
"airline": "CY",
"flight_number": "888"
}}
parameters:
- name: airline
type: string
description: Airline unique 2 letter identifier
- name: flight_number
type: string
description: 1 to 4 digit number
---
kind: tools
name: update_flight_status
type: oracle-sql
source: my-oracle-instance
readOnly: false # Required for INSERT/UPDATE/DELETE
statement: |
UPDATE flights
SET status = :1
WHERE airline = :2 AND flight_number = :3
description: Updates the status of a specific flight.
parameters:
- name: status
type: string
- name: airline
type: string
- name: flight_number
type: string
```
========================================================================
## PostgreSQL Source
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > PostgreSQL Source
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/postgres/
**Description:** PostgreSQL is a powerful, open source object-relational database.
## About
[PostgreSQL][pg-docs] is a powerful, open source object-relational database
system with over 35 years of active development that has earned it a strong
reputation for reliability, feature robustness, and performance.
[pg-docs]: https://www.postgresql.org/
## Available Tools
{{< list-tools >}}
### Pre-built Configurations
- [PostgreSQL using MCP](../../user-guide/connect-to/ides/postgres_mcp.md)
Connect your IDE to PostgreSQL using Toolbox.
## Requirements
### Database User
This source only uses standard authentication. You will need to [create a
PostgreSQL user][pg-users] to log in to the database with.
[pg-users]: https://www.postgresql.org/docs/current/sql-createuser.html
## Example
```yaml
kind: sources
name: my-pg-source
type: postgres
host: 127.0.0.1
port: 5432
database: my_db
user: ${USER_NAME}
password: ${PASSWORD}
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:------------------:|:------------:|------------------------------------------------------------------------|
| type | string | true | Must be "postgres". |
| host | string | true | IP address to connect to (e.g. "127.0.0.1") |
| port | string | true | Port to connect to (e.g. "5432") |
| database | string | true | Name of the Postgres database to connect to (e.g. "my_db"). |
| user | string | true | Name of the Postgres user to connect as (e.g. "my-pg-user"). |
| password | string | true | Password of the Postgres user (e.g. "my-password"). |
| queryParams | map[string]string  | false        | Raw query parameters to be appended to the database connection string.  |
| queryExecMode | string | false | pgx query execution mode. Valid values: `cache_statement` (default), `cache_describe`, `describe_exec`, `exec`, `simple_protocol`. Useful with connection poolers that don't support prepared statement caching. |
========================================================================
## postgres-database-overview Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > PostgreSQL Source > postgres-database-overview Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/postgres/postgres-database-overview/
**Description:** The "postgres-database-overview" fetches the current state of the PostgreSQL server.
## About
The `postgres-database-overview` tool fetches the current state of the
PostgreSQL server.
This tool does not take any input parameters.
## Compatible Sources
{{< compatible-sources others="integrations/alloydb-pg, integrations/cloud-sql-pg">}}
## Example
```yaml
kind: tools
name: database_overview
type: postgres-database-overview
source: cloudsql-pg-source
description: |
Fetches the current state of the PostgreSQL server. It returns the Postgres version, whether it is a replica, the uptime duration, the maximum connection limit, the number of current connections, the number of active connections, and the percentage of connections in use.
```
The response is a JSON object with the following elements:
```json
{
"pg_version": "PostgreSQL server version string",
"is_replica": "boolean indicating if the instance is in recovery mode",
"uptime": "interval string representing the total server uptime",
"max_connections": "integer maximum number of allowed connections",
"current_connections": "integer number of current connections",
"active_connections": "integer number of currently active connections",
"pct_connections_used": "float percentage of max_connections currently in use"
}
```
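A client can sanity-check the response by recomputing the usage percentage from the raw counts. The sample values below are hypothetical, not output from a real server:

```python
import json

# Hypothetical response from a postgres-database-overview tool call.
overview = json.loads("""{
  "pg_version": "PostgreSQL 16.3",
  "is_replica": false,
  "uptime": "3 days 04:12:55",
  "max_connections": 100,
  "current_connections": 42,
  "active_connections": 7,
  "pct_connections_used": 42.0
}""")

# pct_connections_used is current connections as a share of max_connections.
expected = 100.0 * overview["current_connections"] / overview["max_connections"]
assert overview["pct_connections_used"] == expected
```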
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|------------------------------------------------------|
| type | string | true | Must be "postgres-database-overview". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | false | Description of the tool that is passed to the agent. |
========================================================================
## postgres-execute-sql Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > PostgreSQL Source > postgres-execute-sql Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/postgres/postgres-execute-sql/
**Description:** A "postgres-execute-sql" tool executes a SQL statement against a Postgres database.
## About
A `postgres-execute-sql` tool executes a SQL statement against a Postgres
database.
`postgres-execute-sql` takes one input parameter, `sql`, and runs the SQL
statement against the `source`.
> **Note:** This tool is intended for developer assistant workflows with
> human-in-the-loop and shouldn't be used for production agents.
## Compatible Sources
{{< compatible-sources others="integrations/alloydb-pg, integrations/cloud-sql-pg">}}
## Example
```yaml
kind: tools
name: execute_sql_tool
type: postgres-execute-sql
source: my-pg-instance
description: Use this tool to execute a SQL statement.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:------------------------------------------:|:------------:|--------------------------------------------------------------------------------------------------|
| type | string | true | Must be "postgres-execute-sql". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## postgres-get-column-cardinality Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > PostgreSQL Source > postgres-get-column-cardinality Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/postgres/postgres-get-column-cardinality/
**Description:** The "postgres-get-column-cardinality" tool estimates the number of unique values in one or all columns of a Postgres database table.
## About
The `postgres-get-column-cardinality` tool estimates the number of unique values
(cardinality) for one or all columns in a specific PostgreSQL table by using the
database's internal statistics.
`postgres-get-column-cardinality` returns detailed information as JSON about column
cardinality values, ordered by estimated cardinality in descending order. The tool takes
the following input parameters:
- `schema_name` (required): The schema name in which the table is present.
- `table_name` (required): The table name in which the column is present.
- `column_name` (optional): The column name for which the cardinality is to be found.
If not provided, cardinality for all columns will be returned. Default: `""`.
## Compatible Sources
{{< compatible-sources others="integrations/alloydb-pg, integrations/cloud-sql-pg">}}
## Example
```yaml
kind: tools
name: get_column_cardinality
type: postgres-get-column-cardinality
source: postgres-source
description: Estimates the number of unique values (cardinality) quickly for one or all columns in a specific PostgreSQL table by using the database's internal statistics, returning the results in descending order of estimated cardinality. Please run ANALYZE on the table before using this tool to get accurate results. The tool returns the column_name and the estimated_cardinality. If the column_name is not provided, the tool returns all columns along with their estimated cardinality.
```
The response is a JSON array with the following elements:
```json
[
{
"column_name": "name of the column",
"estimated_cardinality": "estimated number of unique values in the column"
}
]
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|------------------------------------------------------|
| type | string | true | Must be "postgres-get-column-cardinality". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
## Advanced Usage
For accurate results, it's recommended to run `ANALYZE` on the table before using this
tool. The `ANALYZE` command updates the database statistics that this tool relies on
to estimate cardinality.
========================================================================
## postgres-list-active-queries Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > PostgreSQL Source > postgres-list-active-queries Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/postgres/postgres-list-active-queries/
**Description:** The "postgres-list-active-queries" tool lists currently active queries in a Postgres database.
## About
The `postgres-list-active-queries` tool retrieves information about currently
active queries in a Postgres database.
`postgres-list-active-queries` lists detailed information as JSON for currently
active queries. The tool takes the following input parameters:
- `min_duration` (optional): Only show queries running at least this long (e.g.,
'1 minute', '1 second', '2 seconds'). Default: '1 minute'.
- `exclude_application_names` (optional): A comma-separated list of application
names to exclude from the query results. This is useful for filtering out
queries from specific applications (e.g., 'psql', 'pgAdmin', 'DBeaver'). The
match is case-sensitive. Whitespace around commas and names is automatically
handled. If this parameter is omitted, no applications are excluded.
- `limit` (optional): The maximum number of rows to return. Default: `50`.
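The documented handling of `exclude_application_names` (comma-separated, whitespace around commas and names trimmed, case-sensitive matching) can be sketched as a small Python helper. This is an illustration of the described semantics, not the server's actual implementation:

```python
def parse_excluded_apps(raw: str) -> set[str]:
    """Split a comma-separated application list, trimming whitespace.
    Matching is case-sensitive; an empty string excludes nothing."""
    return {name.strip() for name in raw.split(",") if name.strip()}

excluded = parse_excluded_apps(" psql, pgAdmin ,DBeaver")
# Case-sensitive: "PSQL" would NOT match the excluded "psql".
rows = [  # hypothetical pg_stat_activity-style rows
    {"application_name": "psql", "pid": 1},
    {"application_name": "my_app", "pid": 2},
]
kept = [r for r in rows if r["application_name"] not in excluded]
print([r["pid"] for r in kept])  # [2]
```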
## Compatible Sources
{{< compatible-sources others="integrations/alloydb-pg, integrations/cloud-sql-pg">}}
## Example
```yaml
kind: tools
name: list_active_queries
type: postgres-list-active-queries
source: postgres-source
description: List the top N (default 50) currently running queries (state='active') from pg_stat_activity, ordered by longest-running first. Returns pid, user, database, application_name, client_addr, state, wait_event_type/wait_event, backend/xact/query start times, computed query_duration, and the SQL text.
```
The response is a JSON array with the following elements:
```json
{
"pid": "process id",
"user": "database user name",
"datname": "database name",
"application_name": "connecting application name",
"client_addr": "connecting client ip address",
"state": "connection state",
"wait_event_type": "connection wait event type",
"wait_event": "connection wait event",
"backend_start": "connection start time",
"xact_start": "transaction start time",
"query_start": "query start time",
"query_duration": "query duration",
"query": "query text"
}
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "postgres-list-active-queries". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## postgres-list-available-extensions Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > PostgreSQL Source > postgres-list-available-extensions Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/postgres/postgres-list-available-extensions/
**Description:** The "postgres-list-available-extensions" tool retrieves all PostgreSQL extensions available for installation on a Postgres database.
## About
The `postgres-list-available-extensions` tool retrieves all PostgreSQL
extensions available for installation on a Postgres database.
`postgres-list-available-extensions` lists all PostgreSQL extensions available
for installation (extension name, default version, and description) as JSON. The
tool does not take any input parameters.
## Compatible Sources
{{< compatible-sources others="integrations/alloydb-pg, integrations/cloud-sql-pg">}}
## Example
```yaml
kind: tools
name: list_available_extensions
type: postgres-list-available-extensions
source: postgres-source
description: Discover all PostgreSQL extensions available for installation on this server, returning name, default_version, and description.
```
The response is a JSON array; for example:
| **name** | **default_version** | **description** |
|----------------------|---------------------|---------------------------------------------------------------------------------------------------------------------|
| address_standardizer | 3.5.2 | Used to parse an address into constituent elements. Generally used to support geocoding address normalization step. |
| amcheck | 1.4 | functions for verifying relation integrity |
| anon | 1.0.0 | Data anonymization tools |
| autoinc | 1.0 | functions for autoincrementing fields |
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "postgres-list-available-extensions". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## postgres-list-database-stats Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > PostgreSQL Source > postgres-list-database-stats Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/postgres/postgres-list-database-stats/
**Description:** The "postgres-list-database-stats" tool lists key performance and activity statistics of PostgreSQL databases.
## About
The `postgres-list-database-stats` tool lists the key performance and activity statistics for each PostgreSQL database in the instance, offering insights into cache efficiency, transaction throughput, row-level activity, temporary file usage, and contention.
`postgres-list-database-stats` lists detailed information as JSON for each database. The tool
takes the following input parameters:
- `database_name` (optional): A text to filter results by database name. Default: `""`
- `include_templates` (optional): Boolean, set to `true` to include template databases in the results. Default: `false`
- `database_owner` (optional): A text to filter results by database owner. Default: `""`
- `default_tablespace` (optional): A text to filter results by the default tablespace name. Default: `""`
- `order_by` (optional): Specifies the sorting order. Valid values are `'size'` (descending) or `'commit'` (descending). Default: `database_name` ascending.
- `limit` (optional): The maximum number of databases to return. Default: `10`
## Compatible Sources
{{< compatible-sources others="integrations/alloydb-pg, integrations/cloud-sql-pg">}}
## Example
```yaml
kind: tools
name: list_database_stats
type: postgres-list-database-stats
source: postgres-source
description: |
Lists the key performance and activity statistics for each PostgreSQL
database in the instance, offering insights into cache efficiency,
transaction throughput, row-level activity, temporary file usage, and
contention. It returns: the database name, whether the database is
connectable, database owner, default tablespace name, the percentage of
data blocks found in the buffer cache rather than being read from disk
(a higher value indicates better cache performance), the total number of
disk blocks read from disk, the total number of times disk blocks were
found already in the cache; the total number of committed transactions,
the total number of rolled back transactions, the percentage of rolled
back transactions compared to the total number of completed
transactions, the total number of rows returned by queries, the total
number of live rows fetched by scans, the total number of rows inserted,
the total number of rows updated, the total number of rows deleted, the
number of temporary files created by queries, the total size of
temporary files used by queries in bytes, the number of query
cancellations due to conflicts with recovery, the number of deadlocks
detected, the current number of active backend connections, the
timestamp when the database statistics were last reset, and the total
database size in bytes.
```
The response is a JSON array with the following elements:
```json
{
"database_name": "Name of the database",
"is_connectable": "Boolean indicating whether the database allows connections",
"database_owner": "Username of the database owner",
"default_tablespace": "Name of the default tablespace for the database",
"cache_hit_ratio_percent": "The percentage of data blocks found in the buffer cache rather than being read from disk",
"blocks_read_from_disk": "The total number of disk blocks read for this database",
"blocks_hit_in_cache": "The total number of times disk blocks were found already in the cache.",
"xact_commit": "The total number of committed transactions",
"xact_rollback": "The total number of rolled back transactions",
"rollback_ratio_percent": "The percentage of rolled back transactions compared to the total number of completed transactions",
"rows_returned_by_queries": "The total number of rows returned by queries",
"rows_fetched_by_scans": "The total number of live rows fetched by scans",
"tup_inserted": "The total number of rows inserted",
"tup_updated": "The total number of rows updated",
"tup_deleted": "The total number of rows deleted",
"temp_files": "The number of temporary files created by queries",
"temp_size_bytes": "The total size of temporary files used by queries in bytes",
"conflicts": "Number of query cancellations due to conflicts",
"deadlocks": "Number of deadlocks detected",
"active_connections": "The current number of active backend connections",
"statistics_last_reset": "The timestamp when the database statistics were last reset",
"database_size_bytes": "The total disk size of the database in bytes"
}
```
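The two percentage fields in this response are derived from the raw counters that accompany them. A Python sketch of those derivations (sample values are hypothetical):

```python
def cache_hit_ratio_percent(blocks_hit: int, blocks_read: int) -> float:
    """Share of block requests served from the buffer cache."""
    total = blocks_hit + blocks_read
    return round(100.0 * blocks_hit / total, 2) if total else 0.0

def rollback_ratio_percent(xact_commit: int, xact_rollback: int) -> float:
    """Share of completed transactions that were rolled back."""
    total = xact_commit + xact_rollback
    return round(100.0 * xact_rollback / total, 2) if total else 0.0

stats = {  # hypothetical sample row
    "blocks_hit_in_cache": 9_900, "blocks_read_from_disk": 100,
    "xact_commit": 480, "xact_rollback": 20,
}
print(cache_hit_ratio_percent(stats["blocks_hit_in_cache"],
                              stats["blocks_read_from_disk"]))  # 99.0
print(rollback_ratio_percent(stats["xact_commit"],
                             stats["xact_rollback"]))           # 4.0
```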
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|------------------------------------------------------|
| type | string | true | Must be "postgres-list-database-stats". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | false | Description of the tool that is passed to the agent. |
========================================================================
## postgres-list-indexes Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > PostgreSQL Source > postgres-list-indexes Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/postgres/postgres-list-indexes/
**Description:** The "postgres-list-indexes" tool lists indexes in a Postgres database.
## About
The `postgres-list-indexes` tool lists available user indexes in the database
excluding those in `pg_catalog` and `information_schema`.
`postgres-list-indexes` lists detailed information as JSON for indexes. The tool
takes the following input parameters:
- `table_name` (optional): A text to filter results by table name. Default: `""`
- `index_name` (optional): A text to filter results by index name. Default: `""`
- `schema_name` (optional): A text to filter results by schema name. Default: `""`
- `only_unused` (optional): If true, returns indexes that have never been used.
- `limit` (optional): The maximum number of rows to return. Default: `50`.
## Compatible Sources
{{< compatible-sources others="integrations/alloydb-pg, integrations/cloud-sql-pg">}}
## Example
```yaml
kind: tools
name: list_indexes
type: postgres-list-indexes
source: postgres-source
description: |
Lists available user indexes in the database, excluding system schemas (pg_catalog,
information_schema). For each index, the following properties are returned:
schema name, table name, index name, index type (access method), a boolean
indicating if it's a unique index, a boolean indicating if it's for a primary key,
the index definition, index size in bytes, the number of index scans, the number of
index tuples read, the number of table tuples fetched via index scans, and a boolean
indicating if the index has been used at least once.
```
The response is a JSON array with the following elements:
```json
{
"schema_name": "schema name",
"table_name": "table name",
"index_name": "index name",
"index_type": "index access method (e.g., btree, hash, gin)",
"is_unique": "boolean indicating if the index is unique",
"is_primary": "boolean indicating if the index is for a primary key",
"index_definition": "index definition statement",
"index_size_bytes": "index size in bytes",
"index_scans": "Number of index scans initiated on this index",
"tuples_read": "Number of index entries returned by scans on this index",
"tuples_fetched": "Number of live table rows fetched by simple index scans using this index",
"is_used": "boolean indicating if the index has been scanned at least once"
}
```
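Because each element carries `is_used`, `is_primary`, `is_unique`, and `index_size_bytes`, drop candidates (unused secondary indexes) can be shortlisted client-side. A sketch over hypothetical rows; always review candidates before dropping anything:

```python
def drop_candidates(indexes: list[dict]) -> list[dict]:
    """Unused, non-primary, non-unique indexes, largest first."""
    unused = [ix for ix in indexes
              if not ix["is_used"] and not ix["is_primary"] and not ix["is_unique"]]
    return sorted(unused, key=lambda ix: ix["index_size_bytes"], reverse=True)

rows = [  # hypothetical sample rows
    {"index_name": "orders_pkey", "is_used": True, "is_primary": True,
     "is_unique": True, "index_size_bytes": 8192},
    {"index_name": "orders_status_idx", "is_used": False, "is_primary": False,
     "is_unique": False, "index_size_bytes": 65536},
]
print([ix["index_name"] for ix in drop_candidates(rows)])  # ['orders_status_idx']
```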
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|------------------------------------------------------|
| type | string | true | Must be "postgres-list-indexes". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | false | Description of the tool that is passed to the agent. |
========================================================================
## postgres-list-installed-extensions Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > PostgreSQL Source > postgres-list-installed-extensions Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/postgres/postgres-list-installed-extensions/
**Description:** The "postgres-list-installed-extensions" tool retrieves all PostgreSQL extensions installed on a Postgres database.
## About
The `postgres-list-installed-extensions` tool retrieves all PostgreSQL
extensions installed on a Postgres database.
`postgres-list-installed-extensions` lists all installed PostgreSQL extensions
(extension name, version, schema, owner, description) as JSON. The tool does
not take any input parameters.
## Compatible Sources
{{< compatible-sources others="integrations/alloydb-pg, integrations/cloud-sql-pg">}}
## Example
```yaml
kind: tools
name: list_installed_extensions
type: postgres-list-installed-extensions
source: postgres-source
description: List all installed PostgreSQL extensions with their name, version, schema, owner, and description.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type        | string   | true         | Must be "postgres-list-installed-extensions".      |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## postgres-list-locks Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > PostgreSQL Source > postgres-list-locks Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/postgres/postgres-list-locks/
**Description:** The "postgres-list-locks" tool lists active locks in the database, including the associated process, lock type, relation, mode, and the query holding or waiting on the lock.
## About
The `postgres-list-locks` tool displays information about active locks by joining `pg_stat_activity` with `pg_locks`. This is useful for finding transactions holding or waiting for locks and for troubleshooting contention.
This tool identifies all locks held by active processes showing the process ID, user, query text, and an aggregated list of all transactions and specific locks (relation, mode, grant status) associated with each process.
## Compatible Sources
{{< compatible-sources others="integrations/alloydb-pg, integrations/cloud-sql-pg">}}
## Example
```yaml
kind: tools
name: list_locks
type: postgres-list-locks
source: postgres-source
description: "Lists active locks with associated process and query information."
```
### Query
The tool aggregates locks per backend (process) and returns the concatenated transaction ids and lock entries. The SQL used by the tool looks like:
```sql
SELECT
locked.pid,
locked.usename,
locked.query,
string_agg(locked.transactionid::text,':') as trxid,
string_agg(locked.lockinfo,'||') as locks
FROM
(SELECT
a.pid,
a.usename,
a.query,
l.transactionid,
(l.granted::text||','||coalesce(l.relation::regclass,0)::text||','||l.mode::text)::text as lockinfo
FROM
pg_stat_activity a
JOIN pg_locks l ON l.pid = a.pid AND a.pid != pg_backend_pid()) as locked
GROUP BY
locked.pid, locked.usename, locked.query;
```
## Output Format
Example response element (aggregated per process):
```json
{
"pid": 23456,
"usename": "dbuser",
"query": "INSERT INTO orders (...) VALUES (...);",
"trxid": "12345:0",
"locks": "true,public.orders,RowExclusiveLock||false,0,ShareUpdateExclusiveLock"
}
```
## Reference
| field | type | required | description |
|:--------|:--------|:--------:|:------------|
| pid | integer | true | Process id (backend pid). |
| usename | string | true | Database user. |
| query | string | true | SQL text associated with the session. |
| trxid | string | true | Aggregated transaction ids for the process, joined by ':' (string). Each element is the transactionid as text. |
| locks | string | true | Aggregated lock info entries for the process, joined by '||'. Each entry is a comma-separated triple: `granted,relation,mode` where `relation` may be `0` when not resolvable via regclass. |
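Since the aggregated `locks` string follows a fixed layout (entries joined by `||`, each a `granted,relation,mode` triple), it can be unpacked client-side. A minimal Python sketch over the documented format:

```python
from typing import NamedTuple

class LockEntry(NamedTuple):
    granted: bool
    relation: str  # "0" when the relation is not resolvable via regclass
    mode: str

def parse_locks(raw: str) -> list[LockEntry]:
    """Split the '||'-joined lock entries into structured triples."""
    entries = []
    for chunk in raw.split("||"):
        granted, relation, mode = chunk.split(",", 2)
        entries.append(LockEntry(granted == "true", relation, mode))
    return entries

raw = "true,public.orders,RowExclusiveLock||false,0,ShareUpdateExclusiveLock"
for entry in parse_locks(raw):
    print(entry)
```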
========================================================================
## postgres-list-pg-settings Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > PostgreSQL Source > postgres-list-pg-settings Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/postgres/postgres-list-pg-settings/
**Description:** The "postgres-list-pg-settings" tool lists PostgreSQL run-time configuration settings.
## About
The `postgres-list-pg-settings` tool lists the configuration parameters for the postgres server, their current values, and related information.
`postgres-list-pg-settings` lists detailed information as JSON for each setting. The tool
takes the following input parameters:
- `setting_name` (optional): A text to filter results by setting name. Default: `""`
- `limit` (optional): The maximum number of rows to return. Default: `50`.
## Compatible Sources
{{< compatible-sources others="integrations/alloydb-pg, integrations/cloud-sql-pg">}}
## Example
```yaml
kind: tools
name: list_pg_settings
type: postgres-list-pg-settings
source: postgres-source
description: |
Lists configuration parameters for the postgres server ordered lexicographically,
with a default limit of 50 rows. It returns the parameter name, its current setting,
unit of measurement, a short description, the source of the current setting (e.g.,
default, configuration file, session), and whether a restart is required when the
parameter value is changed.
```
The response is a JSON array with the following elements:
```json
{
"name": "Setting name",
"current_value": "Current value of the setting",
"unit": "Unit of the setting",
"short_desc": "Short description of the setting",
"source": "Source of the current value (e.g., default, configuration file, session)",
"requires_restart": "Indicates if a server restart is required to apply a change ('Yes', 'No', or 'No (Reload sufficient)')"
}
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|------------------------------------------------------|
| type | string | true | Must be "postgres-list-pg-settings". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | false | Description of the tool that is passed to the agent. |
========================================================================
## postgres-list-publication-tables Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > PostgreSQL Source > postgres-list-publication-tables Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/postgres/postgres-list-publication-tables/
**Description:** The "postgres-list-publication-tables" tool lists publication tables in a Postgres database.
## About
The `postgres-list-publication-tables` tool lists all publication tables in the database.
`postgres-list-publication-tables` lists detailed information as JSON for publication tables. A publication table in PostgreSQL is a
table that is explicitly included as a source for replication within a publication (a set of changes generated from a table or group
of tables) as part of the logical replication feature. The tool takes the following input parameters:
- `table_names` (optional): Filters by a comma-separated list of table names. Default: `""`
- `publication_names` (optional): Filters by a comma-separated list of publication names. Default: `""`
- `schema_names` (optional): Filters by a comma-separated list of schema names. Default: `""`
- `limit` (optional): The maximum number of rows to return. Default: `50`
## Compatible Sources
{{< compatible-sources others="integrations/alloydb-pg, integrations/cloud-sql-pg">}}
## Example
```yaml
kind: tools
name: list_publication_tables
type: postgres-list-publication-tables
source: postgres-source
description: |
Lists all tables that are explicitly part of a publication in the database.
Tables that are part of a publication via 'FOR ALL TABLES' are not included,
unless they are also explicitly added to the publication.
Returns the publication name, schema name, and table name, along with
definition details indicating if it publishes all tables, whether it
replicates inserts, updates, deletes, or truncates, and the publication
owner.
```
The response is a JSON array with the following elements:
```json
{
"publication_name": "Name of the publication",
"schema_name": "Name of the schema the table belongs to",
"table_name": "Name of the table",
"publishes_all_tables": "boolean indicating if the publication was created with FOR ALL TABLES",
"publishes_inserts": "boolean indicating if INSERT operations are replicated",
"publishes_updates": "boolean indicating if UPDATE operations are replicated",
"publishes_deletes": "boolean indicating if DELETE operations are replicated",
"publishes_truncates": "boolean indicating if TRUNCATE operations are replicated",
"publication_owner": "Username of the database role that owns the publication"
}
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|------------------------------------------------------|
| type | string | true | Must be "postgres-list-publication-tables". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | false | Description of the tool that is passed to the agent. |
========================================================================
## postgres-list-query-stats Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > PostgreSQL Source > postgres-list-query-stats Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/postgres/postgres-list-query-stats/
**Description:** The "postgres-list-query-stats" tool lists query statistics from a Postgres database.
## About
The `postgres-list-query-stats` tool retrieves query statistics from the
`pg_stat_statements` extension in a PostgreSQL database. It provides detailed
performance metrics for executed queries.
`postgres-list-query-stats` lists detailed query statistics as JSON, ordered by
total execution time in descending order. The tool takes the following input parameters:
- `database_name` (optional): The database name to filter query stats for. The input is
used within a LIKE clause. Default: `""` (all databases).
- `limit` (optional): The maximum number of results to return. Default: `50`.
## Compatible Sources
{{< compatible-sources others="integrations/alloydb-pg, integrations/cloud-sql-pg">}}
## Requirements
This tool requires the `pg_stat_statements` extension to be installed and enabled
on the PostgreSQL database. The `pg_stat_statements` extension tracks execution
statistics for all SQL statements executed by the server, which is useful for
identifying slow queries and understanding query performance patterns.
## Example
```yaml
kind: tools
name: list_query_stats
type: postgres-list-query-stats
source: postgres-source
description: List query statistics from pg_stat_statements, showing performance metrics for queries including execution counts, timing information, and resource usage. Results are ordered by total execution time descending.
```
The response is a JSON array with the following elements:
```json
[
{
"datname": "database name",
"query": "the SQL query text",
"calls": "number of times the query was executed",
"total_exec_time": "total execution time in milliseconds",
"min_exec_time": "minimum execution time in milliseconds",
"max_exec_time": "maximum execution time in milliseconds",
"mean_exec_time": "mean execution time in milliseconds",
"rows": "total number of rows retrieved or affected",
"shared_blks_hit": "number of shared block cache hits",
"shared_blks_read": "number of shared block disk reads"
}
]
```
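Since results arrive ordered by total execution time, a common follow-up is to compute a per-query cache hit ratio from `shared_blks_hit` and `shared_blks_read` to spot queries that mostly read from disk. A sketch over hypothetical rows:

```python
def hit_ratio(row: dict) -> float:
    """Fraction of shared block requests served from cache for one query."""
    total = row["shared_blks_hit"] + row["shared_blks_read"]
    return row["shared_blks_hit"] / total if total else 1.0

stats = [  # hypothetical sample rows
    {"query": "SELECT * FROM orders WHERE id = $1", "calls": 1000,
     "mean_exec_time": 0.4, "shared_blks_hit": 5000, "shared_blks_read": 0},
    {"query": "SELECT * FROM events", "calls": 3,
     "mean_exec_time": 900.0, "shared_blks_hit": 200, "shared_blks_read": 1800},
]
# Flag queries whose shared-block reads mostly miss the cache.
cold = [s["query"] for s in stats if hit_ratio(s) < 0.5]
print(cold)  # ['SELECT * FROM events']
```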
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|------------------------------------------------------|
| type | string | true | Must be "postgres-list-query-stats". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## postgres-list-roles Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > PostgreSQL Source > postgres-list-roles Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/postgres/postgres-list-roles/
**Description:** The "postgres-list-roles" tool lists user-created roles in a Postgres database.
## About
The `postgres-list-roles` tool lists all the user-created roles in the instance, excluding system roles (like `cloudsql%` or `pg_%`). It provides details about each role's attributes and memberships.
`postgres-list-roles` lists detailed information as JSON for each role. The tool
takes the following input parameters:
- `role_name` (optional): A text to filter results by role name. Default: `""`
- `limit` (optional): The maximum number of roles to return. Default: `50`
## Compatible Sources
{{< compatible-sources others="integrations/alloydb-pg, integrations/cloud-sql-pg">}}
## Example
```yaml
kind: tools
name: list_roles
type: postgres-list-roles
source: postgres-source
description: |
Lists all the user-created roles in the instance. It returns the role name,
Object ID, the maximum number of concurrent connections the role can make,
along with boolean indicators for: superuser status, privilege inheritance
from member roles, ability to create roles, ability to create databases,
ability to log in, replication privilege, and the ability to bypass
row-level security, the password expiration timestamp, a list of direct
members belonging to this role, and a list of other roles/groups that this
role is a member of.
```
The response is a JSON array with the following elements:
```json
{
"role_name": "Name of the role",
"oid": "Object ID of the role",
"connection_limit": "Maximum concurrent connections allowed (-1 for no limit)",
"is_superuser": "Boolean, true if the role is a superuser",
"inherits_privileges": "Boolean, true if the role inherits privileges of roles it is a member of",
"can_create_roles": "Boolean, true if the role can create other roles",
"can_create_db": "Boolean, true if the role can create databases",
"can_login": "Boolean, true if the role can log in",
"is_replication_role": "Boolean, true if this is a replication role",
"bypass_rls": "Boolean, true if the role bypasses row-level security policies",
"valid_until": "Timestamp until the password is valid (null if forever)",
"direct_members": ["Array of role names that are direct members of this role"],
"member_of": ["Array of role names that this role is a member of"]
}
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|------------------------------------------------------|
| type | string | true | Must be "postgres-list-roles". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | false | Description of the tool that is passed to the agent. |
========================================================================
## postgres-list-schemas Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > PostgreSQL Source > postgres-list-schemas Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/postgres/postgres-list-schemas/
**Description:** The "postgres-list-schemas" tool lists user-defined schemas in a database.
## About
The `postgres-list-schemas` tool retrieves information about schemas in a
database excluding system and temporary schemas.
`postgres-list-schemas` lists detailed information as JSON for each schema. The
tool takes the following input parameters:
- `schema_name` (optional): A text to filter results by schema name. Default: `""`
- `owner` (optional): A text to filter results by owner name. Default: `""`
- `limit` (optional): The maximum number of rows to return. Default: `50`.
## Compatible Sources
{{< compatible-sources others="integrations/alloydb, integrations/cloud-sql-pg">}}
## Example
```yaml
kind: tools
name: list_schemas
type: postgres-list-schemas
source: postgres-source
description: "Lists all schemas in the database ordered by schema name and excluding system and temporary schemas. It returns the schema name, schema owner, grants, number of functions, number of tables and number of views within each schema."
```
The response is a JSON array with the following elements:
```json
{
"schema_name": "name of the schema.",
"owner": "role that owns the schema",
"grants": "A JSON object detailing the privileges (e.g., USAGE, CREATE) granted to different roles or PUBLIC on the schema.",
"tables": "The total count of tables within the schema",
"views": "The total count of views within the schema",
"functions": "The total count of functions within the schema"
}
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "postgres-list-schemas". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | false | Description of the tool that is passed to the LLM. |
========================================================================
## postgres-list-sequences Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > PostgreSQL Source > postgres-list-sequences Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/postgres/postgres-list-sequences/
**Description:** The "postgres-list-sequences" tool lists sequences in a Postgres database.
## About
The `postgres-list-sequences` tool retrieves information about sequences in a
Postgres database.
`postgres-list-sequences` lists detailed information as JSON for all sequences.
The tool takes the following input parameters:
- `sequence_name` (optional): A text to filter results by sequence name. The
input is used within a LIKE clause. Default: `""`
- `schema_name` (optional): A text to filter results by schema name. The input is
used within a LIKE clause. Default: `""`
- `limit` (optional): The maximum number of rows to return. Default: `50`.
## Compatible Sources
{{< compatible-sources others="integrations/alloydb, integrations/cloud-sql-pg">}}
## Example
```yaml
kind: tools
name: list_sequences
type: postgres-list-sequences
source: postgres-source
description: |
Lists all the sequences in the database ordered by sequence name.
Returns sequence name, schema name, sequence owner, data type of the
sequence, starting value, minimum value, maximum value of the sequence,
the value by which the sequence is incremented, and the last value
generated by the sequence in the current session.
```
The response is a JSON array with the following elements:
```json
{
"sequence_name": "sequence name",
"schema_name": "schema name",
"sequence_owner": "owner of the sequence",
"data_type": "data type of the sequence",
"start_value": "starting value of the sequence",
"min_value": "minimum value of the sequence",
"max_value": "maximum value of the sequence",
"increment_by": "increment value of the sequence",
"last_value": "last value of the sequence"
}
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|------------------------------------------------------|
| type | string | true | Must be "postgres-list-sequences". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | false | Description of the tool that is passed to the agent. |
========================================================================
## postgres-list-stored-procedure Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > PostgreSQL Source > postgres-list-stored-procedure Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/postgres/postgres-list-stored-procedure/
**Description:** The "postgres-list-stored-procedure" tool retrieves metadata for stored procedures in PostgreSQL, including procedure definitions, owners, languages, and descriptions.
## About
The `postgres-list-stored-procedure` tool queries PostgreSQL system catalogs (`pg_proc`, `pg_namespace`, `pg_roles`, and `pg_language`) to retrieve comprehensive metadata about stored procedures in the database. It filters for procedures (kind = 'p') and provides the full procedure definition along with ownership and language information.
The tool returns a JSON array where each element represents a stored procedure with its schema, name, owner, language, complete definition, and optional description. Results are sorted by schema name and procedure name, with a default limit of 20 procedures.
### Use Cases
- **Code review and auditing**: Export procedure definitions for version control or compliance audits.
- **Documentation generation**: Automatically extract procedure metadata and descriptions for documentation.
- **Permission auditing**: Identify procedures owned by specific users or in specific schemas.
- **Migration planning**: Retrieve all procedure definitions when planning database migrations.
- **Dependency analysis**: Review procedure definitions to understand dependencies and call chains.
- **Security assessment**: Audit which roles own and can modify stored procedures.
## Compatible Sources
{{< compatible-sources others="integrations/alloydb, integrations/cloud-sql-pg">}}
## Parameters
| parameter | type | required | default | description |
|--------------|---------|----------|---------|-------------|
| role_name | string | false | null | Optional: The owner name to filter stored procedures by (supports partial matching) |
| schema_name | string | false | null | Optional: The schema name to filter stored procedures by (supports partial matching) |
| limit | integer | false | 20 | Optional: The maximum number of stored procedures to return |
## Example
```yaml
kind: tools
name: list_stored_procedure
type: postgres-list-stored-procedure
source: postgres-source
description: "Retrieves stored procedure metadata including definitions and owners."
```
### Example Requests
**List all stored procedures (default limit 20):**
```json
{}
```
**Filter by specific owner (role):**
```json
{
"role_name": "app_user"
}
```
**Filter by schema:**
```json
{
"schema_name": "public"
}
```
**Filter by owner and schema with custom limit:**
```json
{
"role_name": "postgres",
"schema_name": "public",
"limit": 50
}
```
**Filter by partial schema name:**
```json
{
"schema_name": "audit"
}
```
### Example Response
```json
[
{
"schema_name": "public",
"name": "process_payment",
"owner": "postgres",
"language": "plpgsql",
"definition": "CREATE OR REPLACE PROCEDURE public.process_payment(p_order_id integer, p_amount numeric)\n LANGUAGE plpgsql\nAS $procedure$\nBEGIN\n UPDATE orders SET status = 'paid', amount = p_amount WHERE id = p_order_id;\n INSERT INTO payment_log (order_id, amount, timestamp) VALUES (p_order_id, p_amount, now());\n COMMIT;\nEND\n$procedure$",
"description": "Processes payment for an order and logs the transaction"
},
{
"schema_name": "public",
"name": "cleanup_old_records",
"owner": "postgres",
"language": "plpgsql",
"definition": "CREATE OR REPLACE PROCEDURE public.cleanup_old_records(p_days_old integer)\n LANGUAGE plpgsql\nAS $procedure$\nDECLARE\n v_deleted integer;\nBEGIN\n DELETE FROM audit_logs WHERE created_at < now() - (p_days_old || ' days')::interval;\n GET DIAGNOSTICS v_deleted = ROW_COUNT;\n RAISE NOTICE 'Deleted % records', v_deleted;\nEND\n$procedure$",
"description": "Removes audit log records older than specified days"
},
{
"schema_name": "audit",
"name": "audit_table_changes",
"owner": "app_user",
"language": "plpgsql",
"definition": "CREATE OR REPLACE PROCEDURE audit.audit_table_changes()\n LANGUAGE plpgsql\nAS $procedure$\nBEGIN\n INSERT INTO audit.change_log (table_name, operation, changed_at) VALUES (TG_TABLE_NAME, TG_OP, now());\nEND\n$procedure$",
"description": null
}
]
```
## Output Format
| field | type | description |
|-------------|---------|-------------|
| schema_name | string | Name of the schema containing the stored procedure. |
| name | string | Name of the stored procedure. |
| owner | string | PostgreSQL role/user who owns the stored procedure. |
| language | string | Programming language in which the procedure is written (e.g., plpgsql, sql, c). |
| definition | string | Complete SQL definition of the stored procedure, including the CREATE PROCEDURE statement. |
| description | string | Optional description or comment for the procedure (may be null if no comment is set). |
## Advanced Usage
### Performance Considerations
- The tool filters at the database level using LIKE pattern matching, so partial matches are supported.
- Procedure definitions can be large; consider using the `limit` parameter for large databases with many procedures.
- Results are ordered by schema name and procedure name for consistent output.
- The default limit of 20 procedures is suitable for most use cases; increase as needed.
### Notes
- Only stored **procedures** are returned; functions and other callable objects are excluded via the `prokind = 'p'` filter.
- Filtering uses `LIKE` pattern matching, so filter values support partial matches (e.g., `role_name: "app"` will match "app_user", "app_admin", etc.).
- The `definition` field contains the complete, runnable CREATE PROCEDURE statement.
- The `description` field is populated from comments set via PostgreSQL's COMMENT command and may be null.
========================================================================
## postgres-list-table-stats Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > PostgreSQL Source > postgres-list-table-stats Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/postgres/postgres-list-table-stats/
**Description:** The "postgres-list-table-stats" tool reports table statistics including size, scan metrics, and bloat indicators for PostgreSQL tables.
## About
The `postgres-list-table-stats` tool queries `pg_stat_all_tables` to provide comprehensive statistics about tables in the database. It calculates useful metrics like index scan ratio and dead row ratio to help identify performance issues and table bloat.
The tool returns a JSON array where each element represents statistics for a table, including scan metrics, row counts, and vacuum history. Results are sorted by sequential scans by default and limited to 50 rows.
### Use Cases
- **Finding ineffective indexes**: Identify tables with low `idx_scan_ratio_percent` to evaluate index strategy.
- **Detecting table bloat**: Sort by `dead_rows` to find tables needing VACUUM.
- **Monitoring growth**: Track `total_size_bytes` over time for capacity planning.
- **Audit maintenance**: Check `last_autovacuum` and `last_autoanalyze` timestamps to ensure maintenance tasks are running.
- **Understanding workload**: Examine `seq_scan` vs `idx_scan` ratios to understand query patterns.
## Compatible Sources
{{< compatible-sources others="integrations/alloydb, integrations/cloud-sql-pg">}}
## Parameters
| parameter | type | required | default | description |
|-------------|---------|----------|---------|-------------|
| schema_name | string | false | "public" | Optional: A specific schema name to filter by (supports partial matching) |
| table_name | string | false | null | Optional: A specific table name to filter by (supports partial matching) |
| owner | string | false | null | Optional: A specific owner to filter by (supports partial matching) |
| sort_by | string | false | null | Optional: The column to sort by. Valid values: `size`, `dead_rows`, `seq_scan`, `idx_scan` (defaults to `seq_scan`) |
| limit | integer | false | 50 | Optional: The maximum number of results to return |
## Example
```yaml
kind: tools
name: list_table_stats
type: postgres-list-table-stats
source: postgres-source
description: "Lists table statistics including size, scans, and bloat metrics."
```
### Example Requests
**List default tables in public schema:**
```json
{}
```
**Filter by specific table name:**
```json
{
"table_name": "users"
}
```
**Filter by owner and sort by size:**
```json
{
"owner": "app_user",
"sort_by": "size",
"limit": 10
}
```
**Find tables with high dead row ratio:**
```json
{
"sort_by": "dead_rows",
"limit": 20
}
```
### Example Response
```json
[
{
"schema_name": "public",
"table_name": "users",
"owner": "postgres",
"total_size_bytes": 8388608,
"seq_scan": 150,
"idx_scan": 450,
"idx_scan_ratio_percent": 75.0,
"live_rows": 50000,
"dead_rows": 1200,
"dead_row_ratio_percent": 2.34,
"n_tup_ins": 52000,
"n_tup_upd": 12500,
"n_tup_del": 800,
"last_vacuum": "2025-11-27T10:30:00Z",
"last_autovacuum": "2025-11-27T09:15:00Z",
"last_autoanalyze": "2025-11-27T09:16:00Z"
},
{
"schema_name": "public",
"table_name": "orders",
"owner": "postgres",
"total_size_bytes": 16777216,
"seq_scan": 50,
"idx_scan": 1200,
"idx_scan_ratio_percent": 96.0,
"live_rows": 100000,
"dead_rows": 5000,
"dead_row_ratio_percent": 4.76,
"n_tup_ins": 120000,
"n_tup_upd": 45000,
"n_tup_del": 15000,
"last_vacuum": "2025-11-26T14:22:00Z",
"last_autovacuum": "2025-11-27T02:30:00Z",
"last_autoanalyze": "2025-11-27T02:31:00Z"
}
]
```
## Output Format
| field | type | description |
|------------------------|-----------|-------------|
| schema_name | string | Name of the schema containing the table. |
| table_name | string | Name of the table. |
| owner | string | PostgreSQL user who owns the table. |
| total_size_bytes | integer | Total size of the table including all indexes in bytes. |
| seq_scan | integer | Number of sequential (full table) scans performed on this table. |
| idx_scan | integer | Number of index scans performed on this table. |
| idx_scan_ratio_percent | decimal | Percentage of total scans (seq_scan + idx_scan) that used an index. A low ratio may indicate missing or ineffective indexes. |
| live_rows | integer | Number of live (non-deleted) rows in the table. |
| dead_rows | integer | Number of dead (deleted but not yet vacuumed) rows in the table. |
| dead_row_ratio_percent | decimal | Percentage of dead rows relative to total rows. High values indicate potential table bloat. |
| n_tup_ins | integer | Total number of rows inserted into this table. |
| n_tup_upd | integer | Total number of rows updated in this table. |
| n_tup_del | integer | Total number of rows deleted from this table. |
| last_vacuum | timestamp | Timestamp of the last manual VACUUM operation on this table (null if never manually vacuumed). |
| last_autovacuum | timestamp | Timestamp of the last automatic vacuum operation on this table. |
| last_autoanalyze | timestamp | Timestamp of the last automatic analyze operation on this table. |
## Advanced Usage
### Interpretation Guide
#### Index Scan Ratio (`idx_scan_ratio_percent`)
- **High ratio (> 80%)**: Table queries are efficiently using indexes. This is typically desirable.
- **Low ratio (< 20%)**: Many sequential scans indicate missing indexes or queries that cannot use existing indexes effectively. Consider adding indexes to frequently searched columns.
- **0%**: No index scans performed; all queries performed sequential scans. May warrant index investigation.
#### Dead Row Ratio (`dead_row_ratio_percent`)
- **< 2%**: Healthy table with minimal bloat.
- **2-5%**: Moderate bloat; consider running VACUUM if not recent.
- **> 5%**: High bloat; may benefit from manual VACUUM or VACUUM FULL.
#### Vacuum History
- **Null `last_vacuum`**: Table has never been manually vacuumed; relies on autovacuum.
- **Recent `last_autovacuum`**: Autovacuum is actively managing the table.
- **Stale timestamps**: Consider running manual VACUUM and ANALYZE if maintenance windows exist.
### Performance Considerations
- Statistics are collected from `pg_stat_all_tables`, which resets on PostgreSQL restart.
- Run `ANALYZE` on tables to update statistics for accurate query planning.
- The tool defaults to limiting results to 50 rows; adjust the `limit` parameter for larger result sets.
- Filtering by schema, table name, or owner uses `LIKE` pattern matching (supports partial matches).
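The two derived percentages in the output can be reproduced from the raw counters. The sketch below is illustrative (the function names are not part of the tool); it matches the first element of the example response above.

```python
def idx_scan_ratio_percent(seq_scan: int, idx_scan: int) -> float:
    """Share of all scans (seq_scan + idx_scan) that used an index."""
    total = seq_scan + idx_scan
    return round(100.0 * idx_scan / total, 2) if total else 0.0

def dead_row_ratio_percent(live_rows: int, dead_rows: int) -> float:
    """Dead rows relative to all rows; a bloat indicator."""
    total = live_rows + dead_rows
    return round(100.0 * dead_rows / total, 2) if total else 0.0

# Counters from the first example response element:
ratio = idx_scan_ratio_percent(150, 450)     # 75.0
bloat = dead_row_ratio_percent(50000, 1200)  # 2.34
```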
========================================================================
## postgres-list-tables Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > PostgreSQL Source > postgres-list-tables Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/postgres/postgres-list-tables/
**Description:** The "postgres-list-tables" tool lists schema information for all or specified tables in a Postgres database.
## About
The `postgres-list-tables` tool retrieves schema information for all or
specified tables in a Postgres database.
`postgres-list-tables` lists detailed schema information (object type, columns,
constraints, indexes, triggers, owner, comment) as JSON for user-created tables
(ordinary or partitioned). The tool takes the following input parameters:
- `table_names` (optional): Filters by a comma-separated list of table names. By
default, it lists all tables in user schemas.
- `output_format` (optional): Indicates the output format of the table schema.
`simple` returns only the table names; `detailed` returns the full table
information. Default: `detailed`.
## Compatible Sources
{{< compatible-sources others="integrations/alloydb, integrations/cloud-sql-pg">}}
## Example
```yaml
kind: tools
name: postgres_list_tables
type: postgres-list-tables
source: postgres-source
description: Use this tool to retrieve schema information for all or
specified tables. Output format can be simple (only table names) or detailed.
```
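Following the request pattern shown for other tools in this section, an invocation that restricts output to two tables and returns only their names might look like this (the table names are illustrative):

```json
{
  "table_names": "users,orders",
  "output_format": "simple"
}
```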
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|------------------------------------------------------|
| type | string | true | Must be "postgres-list-tables". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the agent. |
========================================================================
## postgres-list-tablespaces Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > PostgreSQL Source > postgres-list-tablespaces Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/postgres/postgres-list-tablespaces/
**Description:** The "postgres-list-tablespaces" tool lists tablespaces in a Postgres database.
## About
The `postgres-list-tablespaces` tool lists available tablespaces in the database.
`postgres-list-tablespaces` lists detailed information as JSON for tablespaces. The tool takes the following input parameters:
- `tablespace_name` (optional): A text to filter results by tablespace name. Default: `""`
- `limit` (optional): The maximum number of tablespaces to return. Default: `50`
## Compatible Sources
{{< compatible-sources others="integrations/alloydb, integrations/cloud-sql-pg">}}
## Example
```yaml
kind: tools
name: list_tablespaces
type: postgres-list-tablespaces
source: postgres-source
description: |
Lists all tablespaces in the database. Returns the tablespace name,
owner name, size in bytes (if the current user has CREATE privileges on
the tablespace, otherwise NULL), internal object ID, the access control
list regarding permissions, and any specific tablespace options.
```
The response is a JSON array with the following elements:
```json
{
"tablespace_name": "name of the tablespace",
"owner_username": "owner of the tablespace",
"size_in_bytes": "size in bytes if the current user has CREATE privileges on the tablespace, otherwise NULL",
"oid": "Object ID of the tablespace",
"spcacl": "Access privileges",
"spcoptions": "Tablespace-level options (e.g., seq_page_cost, random_page_cost)"
}
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:-------------:|------------------------------------------------------|
| type | string | true | Must be "postgres-list-tablespaces". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | false | Description of the tool that is passed to the agent. |
========================================================================
## postgres-list-triggers Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > PostgreSQL Source > postgres-list-triggers Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/postgres/postgres-list-triggers/
**Description:** The "postgres-list-triggers" tool lists triggers in a Postgres database.
## About
The `postgres-list-triggers` tool lists available non-internal triggers in the
database.
`postgres-list-triggers` lists detailed information as JSON for triggers. The
tool takes the following input parameters:
- `trigger_name` (optional): A text to filter results by trigger name. The input
is used within a LIKE clause. Default: `""`
- `schema_name` (optional): A text to filter results by schema name. The input
is used within a LIKE clause. Default: `""`
- `table_name` (optional): A text to filter results by table name. The input is
used within a LIKE clause. Default: `""`
- `limit` (optional): The maximum number of triggers to return. Default: `50`
## Compatible Sources
{{< compatible-sources others="integrations/alloydb, integrations/cloud-sql-pg">}}
## Example
```yaml
kind: tools
name: list_triggers
type: postgres-list-triggers
source: postgres-source
description: |
Lists all non-internal triggers in a database. Returns the trigger name, schema name, table name, whether it is enabled or disabled, the timing (e.g., BEFORE/AFTER the event), the events that cause the trigger to fire (such as INSERT, UPDATE, or DELETE), whether the trigger activates per ROW or per STATEMENT, the handler function executed by the trigger, and the full definition.
```
The response is a JSON array with the following elements:
```json
{
"trigger_name": "trigger name",
"schema_name": "schema name",
"table_name": "table name",
"status": "Whether the trigger is currently active (ENABLED, DISABLED, REPLICA, ALWAYS).",
"timing": "When it runs relative to the event (BEFORE, AFTER, INSTEAD OF).",
"events": "The specific operations that fire it (INSERT, UPDATE, DELETE, TRUNCATE)",
"activation_level": "Granularity of execution (ROW vs STATEMENT).",
"function_name": "The function it executes",
"definition": "Full SQL definition of the trigger"
}
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|------------------------------------------------------|
| type | string | true | Must be "postgres-list-triggers". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | false | Description of the tool that is passed to the agent. |
========================================================================
## postgres-list-views Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > PostgreSQL Source > postgres-list-views Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/postgres/postgres-list-views/
**Description:** The "postgres-list-views" tool lists views in a Postgres database, with a default limit of 50 rows.
## About
The `postgres-list-views` tool retrieves the top N (default 50) views from
a Postgres database, excluding those in system schemas (`pg_catalog`,
`information_schema`).
`postgres-list-views` lists detailed view information (schemaname, viewname,
ownername, definition) as JSON for views in a database. The tool takes the following input
parameters:
- `view_name` (optional): A string pattern to filter view names. Default: `""`
- `schema_name` (optional): A string pattern to filter schema names. Default: `""`
- `limit` (optional): The maximum number of rows to return. Default: `50`.
## Compatible Sources
{{< compatible-sources others="integrations/alloydb, integrations/cloud-sql-pg">}}
## Example
```yaml
kind: tools
name: list_views
type: postgres-list-views
source: cloudsql-pg-source
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|------------------------------------------------------|
| type | string | true | Must be "postgres-list-views". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | false | Description of the tool that is passed to the agent. |
========================================================================
## postgres-long-running-transactions Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > PostgreSQL Source > postgres-long-running-transactions Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/postgres/postgres-long-running-transactions/
**Description:** The "postgres-long-running-transactions" tool identifies and lists database transactions that exceed a specified time limit. For each long-running transaction, the output contains the process id, database name, user name, application name, client address, state, connection age, transaction age, query age, last activity age, wait event type, wait event, and query string.
## About
The `postgres-long-running-transactions` tool reports transactions that exceed a configured duration threshold by scanning `pg_stat_activity` for sessions where `xact_start` is set and older than the configured interval.
The tool returns a JSON array with one object per matching session (non-idle). Each object contains the process id, database and user, application name, client address, session state, several age intervals (connection, transaction, query, and last activity), wait event info, and the SQL text currently associated with the session.
Parameters:
- `min_duration` (optional): Only show transactions running at least this long (Postgres interval format, e.g., '5 minutes'). Default: `5 minutes`.
- `limit` (optional): Maximum number of results to return. Default: `20`.
## Compatible Sources
{{< compatible-sources others="integrations/alloydb, integrations/cloud-sql-pg">}}
## Reference
| field | type | required | description |
|:---------------------|:--------|:--------:|:------------|
| pid | integer | true | Process id (backend pid). |
| datname | string | true | Database name. |
| usename | string | true | Database user name. |
| appname | string | false | Application name (client application). |
| client_addr | string | false | Client IPv4/IPv6 address (may be null for local connections). |
| state | string | true | Session state (e.g., active, idle in transaction). |
| conn_age | string | true | Age of the connection: `now() - backend_start` (Postgres interval serialized as string). |
| xact_age | string | true | Age of the transaction: `now() - xact_start` (Postgres interval serialized as string). |
| query_age | string | true | Age of the currently running query: `now() - query_start` (Postgres interval serialized as string). |
| last_activity_age | string | true | Time since last state change: `now() - state_change` (Postgres interval serialized as string). |
| wait_event_type | string | false | Type of event the backend is waiting on (may be null). |
| wait_event | string | false | Specific wait event name (may be null). |
| query | string | true | SQL text associated with the session. |
## Example
```yaml
kind: tools
name: long_running_transactions
type: postgres-long-running-transactions
source: postgres-source
description: "Identifies transactions open longer than a threshold and returns details including query text and durations."
```
Example response element:
```json
{
"pid": 12345,
"datname": "my_database",
"usename": "dbuser",
"appname": "my_app",
"client_addr": "10.0.0.5",
"state": "idle in transaction",
"conn_age": "00:12:34",
"xact_age": "00:06:00",
"query_age": "00:02:00",
"last_activity_age": "00:01:30",
"wait_event_type": null,
"wait_event": null,
"query": "UPDATE users SET last_seen = now() WHERE id = 42;"
}
```
### Query
The SQL used by the tool looks like:
```sql
SELECT
pid,
datname,
usename,
application_name as appname,
client_addr,
state,
now() - backend_start as conn_age,
now() - xact_start as xact_age,
now() - query_start as query_age,
now() - state_change as last_activity_age,
wait_event_type,
wait_event,
query
FROM
pg_stat_activity
WHERE
state <> 'idle'
AND (now() - xact_start) > COALESCE($1::INTERVAL, interval '5 minutes')
AND xact_start IS NOT NULL
AND pid <> pg_backend_pid()
ORDER BY
xact_age DESC
LIMIT
COALESCE($2::int, 20);
```
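The age fields in the response are Postgres intervals serialized as strings. A minimal client-side sketch for flagging offenders from a response follows; the helper name is hypothetical, and it handles only the simple `HH:MM:SS` form (real intervals may also carry day or month parts, which this sketch ignores).

```python
from datetime import timedelta

def parse_hms(age: str) -> timedelta:
    """Parse the simple HH:MM:SS form of a serialized Postgres interval."""
    hours, minutes, seconds = (float(part) for part in age.split(":"))
    return timedelta(hours=hours, minutes=minutes, seconds=seconds)

# Flag sessions whose transaction age exceeds a 5-minute threshold,
# mirroring the tool's default min_duration. Values are illustrative.
sessions = [{"pid": 12345, "xact_age": "00:06:00"},
            {"pid": 12346, "xact_age": "00:03:20"}]
threshold = timedelta(minutes=5)
offenders = [s["pid"] for s in sessions if parse_hms(s["xact_age"]) > threshold]
```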
========================================================================
## postgres-replication-stats Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > PostgreSQL Source > postgres-replication-stats Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/postgres/postgres-replication-stats/
**Description:** The "postgres-replication-stats" tool reports replication-related metrics for WAL streaming replicas, including lag sizes presented in human-readable form.
## About
The `postgres-replication-stats` tool queries pg_stat_replication to surface the status of connected replicas. It reports application_name, client address, connection and sync state, and human-readable lag sizes (sent, write, flush, replay, and total) computed using WAL LSN differences.
This tool takes no parameters. It returns a JSON array; each element represents a replication connection on the primary and includes lag metrics formatted by pg_size_pretty.
## Compatible Sources
{{< compatible-sources others="integrations/alloydb, integrations/cloud-sql-pg">}}
## Example
```yaml
kind: tools
name: replication_stats
type: postgres-replication-stats
source: postgres-source
description: "Lists replication connections and readable WAL lag metrics."
```
Example response element:
```json
{
"pid": 12345,
"usename": "replication_user",
"application_name": "replica-1",
"backend_xmin": "0/0",
"client_addr": "10.0.0.7",
"state": "streaming",
"sync_state": "sync",
"sent_lag": "1234 kB",
"write_lag": "12 kB",
"flush_lag": "0 bytes",
"replay_lag": "0 bytes",
"total_lag": "1234 kB"
}
```
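The lag fields above are produced by applying PostgreSQL's `pg_size_pretty` to WAL LSN byte differences. As a rough illustration of that formatting (an approximation for intuition, not the tool's actual SQL):

```python
def pretty_size(num_bytes):
    # Roughly mirrors pg_size_pretty(): a value is shown in a unit
    # until it reaches 10240 of that unit, then moves to the next.
    units = ["bytes", "kB", "MB", "GB", "TB"]
    for unit in units[:-1]:
        if abs(num_bytes) < 10 * 1024:
            return f"{num_bytes} {unit}"
        num_bytes = round(num_bytes / 1024)
    return f"{num_bytes} {units[-1]}"

print(pretty_size(1234 * 1024))  # 1234 kB
print(pretty_size(0))            # 0 bytes
```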
## Reference
| field | type | required | description |
|------------------:|:-------:|:--------:|:------------|
| pid | integer | true | Process ID of the replication backend on the primary. |
| usename | string | true | Name of the user performing the replication connection. |
| application_name | string | true | Name of the application (replica) connecting to the primary. |
| backend_xmin | string | false | Standby's xmin horizon reported by hot_standby_feedback (may be null). |
| client_addr | string | false | Client IP address of the replica (may be null). |
| state | string | true | Connection state (e.g., streaming). |
| sync_state | string | true | Sync state (e.g., async, sync, potential). |
| sent_lag | string | true | Human-readable size difference between current WAL LSN and sent_lsn. |
| write_lag | string | true | Human-readable write lag between sent_lsn and write_lsn. |
| flush_lag | string | true | Human-readable flush lag between write_lsn and flush_lsn. |
| replay_lag | string | true | Human-readable replay lag between flush_lsn and replay_lsn. |
| total_lag | string | true | Human-readable total lag between current WAL LSN and replay_lsn. |
========================================================================
## postgres-sql Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > PostgreSQL Source > postgres-sql Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/postgres/postgres-sql/
**Description:** A "postgres-sql" tool executes a pre-defined SQL statement against a Postgres database.
## About
A `postgres-sql` tool executes a pre-defined SQL statement against a Postgres
database.
The specified SQL statement is executed as a [prepared statement][pg-prepare],
and specified parameters will be inserted according to their position: e.g. `$1`
will be the first parameter specified, `$2` will be the second parameter, and so
on. If template parameters are included, they will be resolved before execution
of the prepared statement.
[pg-prepare]: https://www.postgresql.org/docs/current/sql-prepare.html
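The positional binding rule above can be sketched in Python (an illustration of the ordering rule only, not the server's implementation; the dicts mirror a tool's `parameters` section):

```python
def to_positional(parameters, values):
    # The first declared parameter binds to $1, the second to $2,
    # and so on -- order in the `parameters` list is what matters.
    return [values[p["name"]] for p in parameters]

params = [{"name": "airline"}, {"name": "flight_number"}]
print(to_positional(params, {"airline": "CY", "flight_number": "123"}))
# ['CY', '123']
```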
## Compatible Sources
{{< compatible-sources others="integrations/alloydb, integrations/cloud-sql-pg">}}
## Example
> **Note:** This tool uses parameterized queries to prevent SQL injections.
> Query parameters can be used as substitutes for arbitrary expressions.
> Parameters cannot be used as substitutes for identifiers, column names, table
> names, or other parts of the query.
```yaml
kind: tools
name: search_flights_by_number
type: postgres-sql
source: my-pg-instance
statement: |
  SELECT * FROM flights
  WHERE airline = $1
  AND flight_number = $2
  LIMIT 10
description: |
  Use this tool to get information for a specific flight.
  Takes an airline code and flight number and returns info on the flight.
  Do NOT use this tool with a flight id. Do NOT guess an airline code or flight number.
  An airline code is a two-character airline designator followed by a flight
  number, which is a 1 to 4 digit number.
  For example, given CY 0123, the airline is "CY" and flight_number is "123".
  Another example is DL 1234: the airline is "DL" and flight_number is "1234".
  If the tool returns more than one option, choose the date closest to today.
  Example:
  {{
    "airline": "CY",
    "flight_number": "888"
  }}
  Example:
  {{
    "airline": "DL",
    "flight_number": "1234"
  }}
parameters:
  - name: airline
    type: string
    description: Airline unique 2 letter identifier
  - name: flight_number
    type: string
    description: 1 to 4 digit number
```
### Example with Template Parameters
> **Note:** This tool allows direct modifications to the SQL statement,
> including identifiers, column names, and table names. **This makes it more
> vulnerable to SQL injections**. Using basic parameters only (see above) is
> recommended for performance and safety reasons. For more details, please check
> [templateParameters](../#template-parameters).
```yaml
kind: tools
name: list_table
type: postgres-sql
source: my-pg-instance
statement: |
  SELECT * FROM {{.tableName}}
description: |
  Use this tool to list all information from a specific table.
  Example:
  {{
    "tableName": "flights"
  }}
templateParameters:
  - name: tableName
    type: string
    description: Table to select from
```
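Conceptually, template parameters are spliced into the statement text before it is prepared. This sketch uses Python string formatting as a stand-in for the actual Go-style `{{.tableName}}` template engine, to show both the flexibility and the risk:

```python
def render(template, template_params):
    # Template parameters are substituted into the SQL text itself,
    # before the statement is prepared. That is why they can name
    # tables or columns -- and why untrusted input is dangerous here.
    return template.format(**template_params)

# Normal use: the table name becomes part of the statement text.
print(render("SELECT * FROM {tableName}", {"tableName": "flights"}))
# SELECT * FROM flights

# Unlike $1-style parameters, nothing stops a malicious value from
# rewriting the statement:
print(render("SELECT * FROM {tableName}",
             {"tableName": "flights; DROP TABLE users"}))
# SELECT * FROM flights; DROP TABLE users
```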
## Reference
| **field** | **type** | **required** | **description** |
|--------------------|:--------------------------------------------:|:------------:|----------------------------------------------------------------------------------------------------------------------------------------|
| type | string | true | Must be "postgres-sql". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| statement          | string                                       | true         | SQL statement to execute.                                                                                                              |
| parameters | [parameters](../#specifying-parameters) | false | List of [parameters](../#specifying-parameters) that will be inserted into the SQL statement. |
| templateParameters | [templateParameters](../#template-parameters) | false        | List of [templateParameters](../#template-parameters) that will be inserted into the SQL statement before executing the prepared statement. |
========================================================================
## Redis Source
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Redis Source
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/redis/
**Description:** Redis is an in-memory data structure store.
## About
Redis is an in-memory data structure store, used as a database,
cache, and message broker. It supports data structures such as strings, hashes,
lists, sets, sorted sets with range queries, bitmaps, hyperloglogs, and
geospatial indexes with radius queries.
If you are new to Redis, you can find installation and getting started guides on
the [official Redis website](https://redis.io/docs/).
## Available Tools
{{< list-tools >}}
## Example
### Redis
[AUTH string][auth] is a password for connection to Redis. If you have the
`requirepass` directive set in your Redis configuration, incoming client
connections must authenticate in order to connect.
Specify your AUTH string in the password field:
```yaml
kind: sources
name: my-redis-instance
type: redis
address:
  - 127.0.0.1:6379
username: ${MY_USER_NAME}
password: ${MY_AUTH_STRING} # Omit this field if you don't have a password.
# database: 0
# clusterEnabled: false
# useGCPIAM: false
# tls:
#   enabled: false
#   insecureSkipVerify: false
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
### Memorystore For Redis
Memorystore standalone instances support authentication using an [AUTH][auth]
string.
Here is an example tools.yaml config with [AUTH][auth] enabled:
```yaml
kind: sources
name: my-memorystore-instance
type: redis
address:
  - 127.0.0.1:6379
password: ${MY_AUTH_STRING}
# useGCPIAM: false
# clusterEnabled: false
```
Memorystore Redis Cluster supports IAM authentication instead. Grant your
account the required [IAM role][iam] and make sure to set `useGCPIAM` to `true`.
Here is an example tools.yaml config for Memorystore Redis Cluster instances
using IAM authentication:
```yaml
kind: sources
name: my-redis-cluster-instance
type: redis
address:
  - 127.0.0.1:6379
useGCPIAM: true
clusterEnabled: true
```
[iam]: https://cloud.google.com/memorystore/docs/cluster/about-iam-auth
## Reference
| **field** | **type** | **required** | **description** |
|------------------------|:--------:|:------------:|-----------------------------------------------------------------------------------------------------------------------------------------------|
| type | string | true | Must be "redis". |
| address                | string[] | true         | List of addresses to connect to (e.g. "127.0.0.1:6379"). For a Memorystore instance, use its primary endpoint.                                  |
| username               | string   | false        | If you are using a non-default user, specify the user name here. Leave this field blank for Memorystore for Redis.                              |
| password               | string   | false        | If you have [Redis AUTH][auth] enabled, specify the AUTH string here.                                                                           |
| database | int | false | The Redis database to connect to. Not applicable for cluster enabled instances. The default database is `0`. |
| tls.enabled | bool | false | Set it to `true` to enable TLS for the Redis connection. Defaults to `false`. |
| tls.insecureSkipVerify | bool | false | Set it to `true` to skip TLS certificate verification. **Warning:** This is insecure and not recommended for production. Defaults to `false`. |
| clusterEnabled | bool | false | Set it to `true` if using a Redis Cluster instance. Defaults to `false`. |
| useGCPIAM | bool | false | Set it to `true` if you are using GCP's IAM authentication. Defaults to `false`. |
[auth]: https://cloud.google.com/memorystore/docs/redis/about-redis-auth
========================================================================
## redis Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Redis Source > redis Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/redis/redis-tool/
**Description:** A "redis" tool executes a set of pre-defined Redis commands against a Redis instance.
## About
A redis tool executes a series of pre-defined Redis commands against a
Redis source.
The specified Redis commands are executed sequentially. Each command is
represented as a string list, where the first element is the command name (e.g.,
SET, GET, HGETALL) and subsequent elements are its arguments.
### Dynamic Command Parameters
Command arguments can be templated using the `$variableName` annotation. The
array type parameters will be expanded once into multiple arguments. Take the
following config for example:
```yaml
commands:
  - [SADD, userNames, $userNames] # Array will be flattened into multiple arguments.
parameters:
  - name: userNames
    type: array
    description: The user names to be set.
    items:
      name: userName # the item name doesn't matter, but it has to exist
      type: string
      description: username
```
If the input is an array of strings `["Alice", "Sid", "Bob"]`, the final command
executed after argument expansion will be `[SADD, userNames, Alice, Sid, Bob]`.
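The substitution and flattening rule can be illustrated with a small Python sketch (an illustration of the rule, not the server's implementation):

```python
def expand_command(command, params):
    # Substitute $name placeholders with parameter values; list
    # values are flattened into multiple positional arguments.
    out = []
    for part in command:
        if isinstance(part, str) and part.startswith("$"):
            value = params[part[1:]]
            if isinstance(value, list):
                out.extend(str(v) for v in value)
            else:
                out.append(str(value))
        else:
            out.append(part)
    return out

print(expand_command(["SADD", "userNames", "$userNames"],
                     {"userNames": ["Alice", "Sid", "Bob"]}))
# ['SADD', 'userNames', 'Alice', 'Sid', 'Bob']
```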
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: user_data_tool
type: redis
source: my-redis-instance
description: |
  Use this tool to interact with user data stored in Redis.
  It can set, retrieve, and delete user-specific information.
commands:
  - [SADD, userNames, $userNames] # Array will be flattened into multiple arguments.
  - [GET, $userId]
parameters:
  - name: userId
    type: string
    description: The unique identifier for the user.
  - name: userNames
    type: array
    description: The user names to be set.
    items:
      name: userName
      type: string
      description: username
```
========================================================================
## Serverless for Apache Spark Source
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Serverless for Apache Spark Source
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/serverless-spark/
**Description:** Google Cloud Serverless for Apache Spark lets you run Spark workloads without requiring you to provision and manage your own Spark cluster.
## About
The [Serverless for Apache
Spark](https://cloud.google.com/dataproc-serverless/docs/overview) source allows
Toolbox to interact with Spark batches hosted on Google Cloud Serverless for
Apache Spark.
## Available Tools
{{< list-tools >}}
## Requirements
### IAM Permissions
Serverless for Apache Spark uses [Identity and Access Management
(IAM)](https://cloud.google.com/bigquery/docs/access-control) to control user
and group access to serverless Spark resources like batches and sessions.
Toolbox will use your [Application Default Credentials
(ADC)](https://cloud.google.com/docs/authentication#adc) to authorize and
authenticate when interacting with Google Cloud Serverless for Apache Spark.
When using this method, you need to ensure the IAM identity associated with your
ADC has the correct
[permissions](https://cloud.google.com/dataproc-serverless/docs/concepts/iam)
for the actions you intend to perform. Common roles include
`roles/dataproc.serverlessEditor` (which includes permissions to run batches) or
`roles/dataproc.serverlessViewer`. Follow this
[guide](https://cloud.google.com/docs/authentication/provide-credentials-adc) to
set up your ADC.
## Example
```yaml
kind: sources
name: my-serverless-spark-source
type: serverless-spark
project: my-project-id
location: us-central1
```
## Reference
| **field** | **type** | **required** | **description** |
| --------- | :------: | :----------: | ----------------------------------------------------------------- |
| type | string | true | Must be "serverless-spark". |
| project | string | true | ID of the GCP project with Serverless for Apache Spark resources. |
| location | string | true | Location containing Serverless for Apache Spark resources. |
========================================================================
## serverless-spark-get-batch Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Serverless for Apache Spark Source > serverless-spark-get-batch Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/serverless-spark/serverless-spark-get-batch/
**Description:** A "serverless-spark-get-batch" tool gets a single Spark batch from the source.
## About
The `serverless-spark-get-batch` tool allows you to retrieve a specific
Serverless Spark batch job.
`serverless-spark-get-batch` accepts the following parameters:
- **`name`**: The short name of the batch, e.g. for
`projects/my-project/locations/us-central1/batches/my-batch`, pass `my-batch`.
The tool gets the `project` and `location` from the source configuration.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: get_my_batch
type: serverless-spark-get-batch
source: my-serverless-spark-source
description: Use this tool to get a serverless spark batch.
```
## Output Format
The response contains the full Batch object as defined in the [API
spec](https://cloud.google.com/dataproc-serverless/docs/reference/rest/v1/projects.locations.batches#Batch),
plus additional fields `consoleUrl` and `logsUrl` where a human can go for more
detailed information.
```json
{
"batch": {
"createTime": "2025-10-10T15:15:21.303146Z",
"creator": "alice@example.com",
"labels": {
"goog-dataproc-batch-uuid": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
"goog-dataproc-location": "us-central1"
},
"name": "projects/google.com:hadoop-cloud-dev/locations/us-central1/batches/alice-20251010-abcd",
"operation": "projects/google.com:hadoop-cloud-dev/regions/us-central1/operations/11111111-2222-3333-4444-555555555555",
"runtimeConfig": {
"properties": {
"spark:spark.driver.cores": "4",
"spark:spark.driver.memory": "12200m"
}
},
"sparkBatch": {
"jarFileUris": [
"file:///usr/lib/spark/examples/jars/spark-examples.jar"
],
"mainClass": "org.apache.spark.examples.SparkPi"
},
"state": "SUCCEEDED",
"stateHistory": [
{
"state": "PENDING",
"stateStartTime": "2025-10-10T15:15:21.303146Z"
},
{
"state": "RUNNING",
"stateStartTime": "2025-10-10T15:16:41.291747Z"
}
],
"stateTime": "2025-10-10T15:17:21.265493Z",
"uuid": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
},
"consoleUrl": "https://console.cloud.google.com/dataproc/batches/...",
"logsUrl": "https://console.cloud.google.com/logs/viewer?..."
}
```
## Reference
| **field** | **type** | **required** | **description** |
| ------------ | :------: | :----------: | -------------------------------------------------- |
| type | string | true | Must be "serverless-spark-get-batch". |
| source | string | true | Name of the source the tool should use. |
| description | string | true | Description of the tool that is passed to the LLM. |
| authRequired | string[] | false        | List of auth services required to invoke this tool. |
========================================================================
## serverless-spark-list-batches Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Serverless for Apache Spark Source > serverless-spark-list-batches Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/serverless-spark/serverless-spark-list-batches/
**Description:** A "serverless-spark-list-batches" tool returns a list of Spark batches from the source.
## About
A `serverless-spark-list-batches` tool returns a list of Spark batches from a
Google Cloud Serverless for Apache Spark source.
`serverless-spark-list-batches` accepts the following parameters:
- **`filter`** (optional): A filter expression to limit the batches returned.
Filters are case sensitive and may contain multiple clauses combined with
logical operators (AND/OR). Supported fields are `batch_id`, `batch_uuid`,
`state`, `create_time`, and `labels`. For example: `state = RUNNING AND
create_time < "2023-01-01T00:00:00Z"`.
- **`pageSize`** (optional): The maximum number of batches to return in a single
page.
- **`pageToken`** (optional): A page token, received from a previous call, to
retrieve the next page of results.
The tool gets the `project` and `location` from the source configuration.
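The `pageToken` loop can be sketched as follows; `call` here is a hypothetical stand-in for however your client invokes the `serverless-spark-list-batches` tool, returning its JSON response as a dict:

```python
def list_all_batches(call, page_size=50):
    """Collect every batch by following nextPageToken until exhausted.

    `call` is a placeholder: any function that invokes the
    serverless-spark-list-batches tool and returns the parsed
    response dict (see the Output Format section).
    """
    batches, token = [], None
    while True:
        resp = call(pageSize=page_size, pageToken=token)
        batches.extend(resp.get("batches", []))
        token = resp.get("nextPageToken")
        if not token:
            return batches
```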
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: list_spark_batches
type: serverless-spark-list-batches
source: my-serverless-spark-source
description: Use this tool to list and filter serverless spark batches.
```
## Output Format
```json
{
"batches": [
{
"name": "projects/my-project/locations/us-central1/batches/batch-abc-123",
"uuid": "a1b2c3d4-e5f6-7890-1234-567890abcdef",
"state": "SUCCEEDED",
"creator": "alice@example.com",
"createTime": "2023-10-27T10:00:00Z",
"consoleUrl": "https://console.cloud.google.com/dataproc/batches/us-central1/batch-abc-123/summary?project=my-project",
"logsUrl": "https://console.cloud.google.com/logs/viewer?advancedFilter=resource.type%3D%22cloud_dataproc_batch%22%0Aresource.labels.project_id%3D%22my-project%22%0Aresource.labels.location%3D%22us-central1%22%0Aresource.labels.batch_id%3D%22batch-abc-123%22%0Atimestamp%3E%3D%222023-10-27T09%3A59%3A00Z%22%0Atimestamp%3C%3D%222023-10-27T10%3A10%3A00Z%22&project=my-project&resource=cloud_dataproc_batch%2Fbatch_id%2Fbatch-abc-123"
},
{
"name": "projects/my-project/locations/us-central1/batches/batch-def-456",
"uuid": "b2c3d4e5-f6a7-8901-2345-678901bcdefa",
"state": "FAILED",
"creator": "alice@example.com",
"createTime": "2023-10-27T11:30:00Z",
"consoleUrl": "https://console.cloud.google.com/dataproc/batches/us-central1/batch-def-456/summary?project=my-project",
"logsUrl": "https://console.cloud.google.com/logs/viewer?advancedFilter=resource.type%3D%22cloud_dataproc_batch%22%0Aresource.labels.project_id%3D%22my-project%22%0Aresource.labels.location%3D%22us-central1%22%0Aresource.labels.batch_id%3D%22batch-def-456%22%0Atimestamp%3E%3D%222023-10-27T11%3A29%3A00Z%22%0Atimestamp%3C%3D%222023-10-27T11%3A40%3A00Z%22&project=my-project&resource=cloud_dataproc_batch%2Fbatch_id%2Fbatch-def-456"
}
],
"nextPageToken": "abcd1234"
}
```
## Reference
| **field** | **type** | **required** | **description** |
| ------------ | :------: | :----------: | -------------------------------------------------- |
| type | string | true | Must be "serverless-spark-list-batches". |
| source | string | true | Name of the source the tool should use. |
| description | string | true | Description of the tool that is passed to the LLM. |
| authRequired | string[] | false        | List of auth services required to invoke this tool. |
========================================================================
## serverless-spark-cancel-batch Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Serverless for Apache Spark Source > serverless-spark-cancel-batch Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/serverless-spark/serverless-spark-cancel-batch/
**Description:** A "serverless-spark-cancel-batch" tool cancels a running Spark batch operation.
## About
`serverless-spark-cancel-batch` tool cancels a running Spark batch operation in
a Google Cloud Serverless for Apache Spark source. The cancellation request is
asynchronous, so the batch state will not change immediately after the tool
returns; it can take a minute or so for the cancellation to be reflected.
`serverless-spark-cancel-batch` accepts the following parameters:
- **`operation`** (required): The name of the operation to cancel. For example,
for `projects/my-project/locations/us-central1/operations/my-operation`, you
would pass `my-operation`.
The tool inherits the `project` and `location` from the source configuration.
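Since the tool expects only the final path segment of the operation resource name, the conversion amounts to (an illustrative helper, not part of the toolbox):

```python
def short_name(resource_name):
    # Keep only the final path segment of a full resource name,
    # e.g. ".../operations/my-operation" -> "my-operation".
    return resource_name.rsplit("/", 1)[-1]

print(short_name(
    "projects/my-project/locations/us-central1/operations/my-operation"))
# my-operation
```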
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: cancel_spark_batch
type: serverless-spark-cancel-batch
source: my-serverless-spark-source
description: Use this tool to cancel a running serverless spark batch operation.
```
## Output Format
```json
"Cancelled [projects/my-project/regions/us-central1/operations/my-operation]."
```
## Reference
| **field** | **type** | **required** | **description** |
| ------------ | :------: | :----------: | -------------------------------------------------- |
| type | string | true | Must be "serverless-spark-cancel-batch". |
| source | string | true | Name of the source the tool should use. |
| description | string | true | Description of the tool that is passed to the LLM. |
| authRequired | string[] | false        | List of auth services required to invoke this tool. |
========================================================================
## serverless-spark-create-pyspark-batch Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Serverless for Apache Spark Source > serverless-spark-create-pyspark-batch Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/serverless-spark/serverless-spark-create-pyspark-batch/
**Description:** A "serverless-spark-create-pyspark-batch" tool submits a Spark batch to run asynchronously.
## About
A `serverless-spark-create-pyspark-batch` tool submits a PySpark batch to a
Google Cloud Serverless for Apache Spark source. The workload executes
asynchronously and takes around a minute to begin executing; its status can be
polled using the [get batch](serverless-spark-get-batch.md) tool.
`serverless-spark-create-pyspark-batch` accepts the following parameters:
- **`mainFile`**: The path to the main Python file, as a `gs://` URI.
- **`args`** (optional): A list of arguments passed to the main file.
- **`version`** (optional): The Serverless [runtime
version](https://docs.cloud.google.com/dataproc-serverless/docs/concepts/versions/dataproc-serverless-versions)
to execute with.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: serverless-spark-create-pyspark-batch
type: serverless-spark-create-pyspark-batch
source: my-serverless-spark-source
runtimeConfig:
  properties:
    spark.driver.memory: "1024m"
environmentConfig:
  executionConfig:
    networkUri: "my-network"
```
### Custom Configuration
This tool supports custom
[`runtimeConfig`](https://docs.cloud.google.com/dataproc-serverless/docs/reference/rest/v1/RuntimeConfig)
and
[`environmentConfig`](https://docs.cloud.google.com/dataproc-serverless/docs/reference/rest/v1/EnvironmentConfig)
settings, which can be specified in a `tools.yaml` file. These configurations
are parsed as YAML and passed to the Dataproc API.
**Note:** If your project requires custom runtime or environment configuration,
you must write a custom `tools.yaml`; the `serverless-spark` prebuilt config
cannot be used.
## Output Format
The response contains the
[operation](https://docs.cloud.google.com/dataproc-serverless/docs/reference/rest/v1/projects.locations.operations#resource:-operation)
metadata JSON object corresponding to [batch operation
metadata](https://pkg.go.dev/cloud.google.com/go/dataproc/v2/apiv1/dataprocpb#BatchOperationMetadata),
plus additional fields `consoleUrl` and `logsUrl` where a human can go for more
detailed information.
```json
{
"opMetadata": {
"batch": "projects/myproject/locations/us-central1/batches/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
"batchUuid": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
"createTime": "2025-11-19T16:36:47.607119Z",
"description": "Batch",
"labels": {
"goog-dataproc-batch-uuid": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
"goog-dataproc-location": "us-central1"
},
"operationType": "BATCH",
"warnings": [
"No runtime version specified. Using the default runtime version."
]
},
"consoleUrl": "https://console.cloud.google.com/dataproc/batches/...",
"logsUrl": "https://console.cloud.google.com/logs/viewer?..."
}
```
## Reference
| **field** | **type** | **required** | **description** |
| ----------------- | :------: | :----------: | -------------------------------------------------------------------------------------------------------------------------------------------------------- |
| type | string | true | Must be "serverless-spark-create-pyspark-batch". |
| source | string | true | Name of the source the tool should use. |
| description | string | false | Description of the tool that is passed to the LLM. |
| runtimeConfig | map | false | [Runtime config](https://docs.cloud.google.com/dataproc-serverless/docs/reference/rest/v1/RuntimeConfig) for all batches created with this tool. |
| environmentConfig | map | false | [Environment config](https://docs.cloud.google.com/dataproc-serverless/docs/reference/rest/v1/EnvironmentConfig) for all batches created with this tool. |
| authRequired | string[] | false | List of auth services required to invoke this tool. |
========================================================================
## serverless-spark-create-spark-batch Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Serverless for Apache Spark Source > serverless-spark-create-spark-batch Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/serverless-spark/serverless-spark-create-spark-batch/
**Description:** A "serverless-spark-create-spark-batch" tool submits a Spark batch to run asynchronously.
## About
A `serverless-spark-create-spark-batch` tool submits a Java Spark batch to a
Google Cloud Serverless for Apache Spark source. The workload executes
asynchronously and takes around a minute to begin executing; status can be
polled using the [get batch](serverless-spark-get-batch.md) tool.
`serverless-spark-create-spark-batch` accepts the following parameters:
- **`mainJarFile`** (optional): The `gs://` URI of the jar file that contains
the main class. Exactly one of `mainJarFile` or `mainClass` must be specified.
- **`mainClass`** (optional): The name of the driver's main class. Exactly one
of `mainJarFile` or `mainClass` must be specified.
- **`jarFiles`** (optional): A list of `gs://` URIs of jar files to add to the
CLASSPATHs of the Spark driver and tasks.
- **`args`** (optional): A list of arguments passed to the driver.
- **`version`** (optional): The Serverless [runtime
version](https://docs.cloud.google.com/dataproc-serverless/docs/concepts/versions/dataproc-serverless-versions)
to execute with.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: "serverless-spark-create-spark-batch"
type: "serverless-spark-create-spark-batch"
source: "my-serverless-spark-source"
runtimeConfig:
properties:
spark.driver.memory: "1024m"
environmentConfig:
executionConfig:
networkUri: "my-network"
```
### Custom Configuration
This tool supports custom
[`runtimeConfig`](https://docs.cloud.google.com/dataproc-serverless/docs/reference/rest/v1/RuntimeConfig)
and
[`environmentConfig`](https://docs.cloud.google.com/dataproc-serverless/docs/reference/rest/v1/EnvironmentConfig)
settings, which can be specified in a `tools.yaml` file. These configurations
are parsed as YAML and passed to the Dataproc API.
**Note:** If your project requires custom runtime or environment configuration,
you must write a custom `tools.yaml`; the `serverless-spark` prebuilt config
cannot be used.
## Output Format
The response contains the
[operation](https://docs.cloud.google.com/dataproc-serverless/docs/reference/rest/v1/projects.locations.operations#resource:-operation)
metadata JSON object corresponding to [batch operation
metadata](https://pkg.go.dev/cloud.google.com/go/dataproc/v2/apiv1/dataprocpb#BatchOperationMetadata),
plus additional fields `consoleUrl` and `logsUrl` where a human can go for more
detailed information.
```json
{
"opMetadata": {
"batch": "projects/myproject/locations/us-central1/batches/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
"batchUuid": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
"createTime": "2025-11-19T16:36:47.607119Z",
"description": "Batch",
"labels": {
"goog-dataproc-batch-uuid": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
"goog-dataproc-location": "us-central1"
},
"operationType": "BATCH",
"warnings": [
"No runtime version specified. Using the default runtime version."
]
},
"consoleUrl": "https://console.cloud.google.com/dataproc/batches/...",
"logsUrl": "https://console.cloud.google.com/logs/viewer?..."
}
```
## Reference
| **field** | **type** | **required** | **description** |
| ----------------- | :------: | :----------: | -------------------------------------------------------------------------------------------------------------------------------------------------------- |
| type | string | true | Must be "serverless-spark-create-spark-batch". |
| source | string | true | Name of the source the tool should use. |
| description | string | false | Description of the tool that is passed to the LLM. |
| runtimeConfig | map | false | [Runtime config](https://docs.cloud.google.com/dataproc-serverless/docs/reference/rest/v1/RuntimeConfig) for all batches created with this tool. |
| environmentConfig | map | false | [Environment config](https://docs.cloud.google.com/dataproc-serverless/docs/reference/rest/v1/EnvironmentConfig) for all batches created with this tool. |
| authRequired | string[] | false | List of auth services required to invoke this tool. |
========================================================================
## SingleStore Source
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > SingleStore Source
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/singlestore/
**Description:** SingleStore is the cloud-native database built with speed and scale to power data-intensive applications.
## About
[SingleStore][singlestore-docs] is a distributed SQL database built to power
intelligent applications. It is both relational and multi-model, enabling
developers to easily build and scale applications and workloads.
SingleStore is built around Universal Storage which combines in-memory rowstore
and on-disk columnstore data formats to deliver a single table type that is
optimized to handle both transactional and analytical workloads.
[singlestore-docs]: https://docs.singlestore.com/
## Available Tools
{{< list-tools >}}
## Requirements
### Database User
This source only uses standard authentication. You will need to [create a
database user][singlestore-user] to log in to the database with.
[singlestore-user]:
https://docs.singlestore.com/cloud/reference/sql-reference/security-management-commands/create-user/
## Example
```yaml
kind: sources
name: my-singlestore-source
type: singlestore
host: 127.0.0.1
port: 3306
database: my_db
user: ${USER_NAME}
password: ${PASSWORD}
queryTimeout: 30s # Optional: query timeout duration
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
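The server resolves `${ENV_NAME}` references itself when it loads the configuration file. As a rough illustration of the convention only (not the server's implementation), Python's `os.path.expandvars` follows the same `${VAR}` syntax:

```python
import os

# Set the variables the config refers to (normally done in your shell or
# deployment environment, not in code).
os.environ["USER_NAME"] = "toolbox_user"
os.environ["PASSWORD"] = "s3cret"

config_snippet = "user: ${USER_NAME}\npassword: ${PASSWORD}"

# os.path.expandvars uses the same ${ENV_NAME} convention shown above.
resolved = os.path.expandvars(config_snippet)
print(resolved)
```

Keeping secrets in the environment means the `tools.yaml` file can be committed to version control without exposing credentials.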
## Reference
| **field** | **type** | **required** | **description** |
|--------------|:--------:|:------------:|-------------------------------------------------------------------------------------------------|
| type | string | true | Must be "singlestore". |
| host | string | true | IP address to connect to (e.g. "127.0.0.1"). |
| port | string | true | Port to connect to (e.g. "3306"). |
| database | string | true | Name of the SingleStore database to connect to (e.g. "my_db"). |
| user | string | true | Name of the SingleStore database user to connect as (e.g. "admin"). |
| password | string | true | Password of the SingleStore database user. |
| queryTimeout | string | false | Maximum time to wait for query execution (e.g. "30s", "2m"). By default, no timeout is applied. |
========================================================================
## singlestore-execute-sql Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > SingleStore Source > singlestore-execute-sql Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/singlestore/singlestore-execute-sql/
**Description:** A "singlestore-execute-sql" tool executes a SQL statement against a SingleStore database.
## About
A `singlestore-execute-sql` tool executes a SQL statement against a SingleStore
database.
`singlestore-execute-sql` takes one input parameter `sql` and runs the SQL
statement against the `source`.
> **Note:** This tool is intended for developer assistant workflows with
> human-in-the-loop and shouldn't be used for production agents.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: execute_sql_tool
type: singlestore-execute-sql
source: my-s2-instance
description: Use this tool to execute a SQL statement.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "singlestore-execute-sql". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## singlestore-sql Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > SingleStore Source > singlestore-sql Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/singlestore/singlestore-sql/
**Description:** A "singlestore-sql" tool executes a pre-defined SQL statement against a SingleStore database.
## About
A `singlestore-sql` tool executes a pre-defined SQL statement against a
SingleStore database.
Parameters in the specified SQL statement are written as positional `?`
placeholders.
## Compatible Sources
{{< compatible-sources >}}
## Example
> **Note:** This tool uses parameterized queries to prevent SQL injections.
> Query parameters can be used as substitutes for arbitrary expressions.
> Parameters cannot be used as substitutes for identifiers, column names, table
> names, or other parts of the query.
```yaml
kind: tools
name: search_flights_by_number
type: singlestore-sql
source: my-s2-instance
statement: |
SELECT * FROM flights
WHERE airline = ?
AND flight_number = ?
LIMIT 10
description: |
Use this tool to get information for a specific flight.
Takes an airline code and flight number and returns info on the flight.
Do NOT use this tool with a flight id. Do NOT guess an airline code or flight number.
An airline code is a two-character airline designator followed by a flight
number, which is a 1 to 4 digit number.
For example, if given CY 0123, the airline is "CY", and flight_number is "123".
Another example: for DL 1234, the airline is "DL", and flight_number is "1234".
If the tool returns more than one option choose the date closest to today.
Example:
{{
"airline": "CY",
"flight_number": "888",
}}
Example:
{{
"airline": "DL",
"flight_number": "1234",
}}
parameters:
- name: airline
type: string
description: Airline unique 2 letter identifier
- name: flight_number
type: string
description: 1 to 4 digit number
```
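The binding order of the `?` placeholders above can be sketched with Python's built-in `sqlite3` module, which happens to use the same positional placeholder style. SingleStore executes the statement server-side; the in-memory table and values here are purely illustrative:

```python
import sqlite3

# In-memory stand-in for the flights table used in the example above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE flights (airline TEXT, flight_number TEXT)")
conn.execute("INSERT INTO flights VALUES ('CY', '888'), ('DL', '1234')")

# Parameters bind by position: the first value fills the first `?`,
# the second value fills the second `?`, and so on.
rows = conn.execute(
    "SELECT * FROM flights WHERE airline = ? AND flight_number = ? LIMIT 10",
    ("CY", "888"),
).fetchall()
print(rows)  # [('CY', '888')]
```

Because the values travel separately from the statement, the database never interprets them as SQL, which is what makes this style safe against injection.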
### Example with Template Parameters
> **Note:** This tool allows direct modifications to the SQL statement,
> including identifiers, column names, and table names. **This makes it more
> vulnerable to SQL injections**. Using basic parameters only (see above) is
> recommended for performance and safety reasons. For more details, please check
> [templateParameters](..#template-parameters).
```yaml
kind: tools
name: list_table
type: singlestore-sql
source: my-s2-instance
statement: |
SELECT * FROM {{.tableName}};
description: |
Use this tool to list all information from a specific table.
Example:
{{
"tableName": "flights",
}}
templateParameters:
- name: tableName
type: string
description: Table to select from
```
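A minimal sketch of why template parameters are riskier than query parameters. Here plain Python string formatting stands in for the `{{.tableName}}` expansion, and the table name is assumed to be attacker-influenced:

```python
# Template parameters are spliced into the statement as raw text before the
# prepared statement is created, so the value becomes part of the SQL itself.
statement = "SELECT * FROM {tableName};"

safe = statement.format(tableName="flights")
print(safe)  # SELECT * FROM flights;

# An attacker-influenced value rides along as executable SQL.
malicious = statement.format(tableName="flights; DROP TABLE flights; --")
print(malicious)
```

This is why template parameters should be restricted to trusted values such as known table names, while user-supplied data belongs in regular parameters.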
## Reference
| **field** | **type** | **required** | **description** |
|--------------------|:--------------------------------------------:|:------------:|----------------------------------------------------------------------------------------------------------------------------------------|
| type | string | true | Must be "singlestore-sql". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| statement          | string                                       | true         | SQL statement to execute.                                                                                                               |
| parameters | [parameters](../#specifying-parameters) | false | List of [parameters](../#specifying-parameters) that will be inserted into the SQL statement. |
| templateParameters | [templateParameters](..#template-parameters) | false | List of [templateParameters](..#template-parameters) that will be inserted into the SQL statement before executing prepared statement. |
========================================================================
## Snowflake Source
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Snowflake Source
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/snowflake/
**Description:** Snowflake is a cloud-based data platform.
## About
[Snowflake][sf-docs] is a cloud data platform that provides a data warehouse-as-a-service designed for the cloud.
[sf-docs]: https://docs.snowflake.com/
## Available Tools
{{< list-tools >}}
## Requirements
### Database User
This source only uses standard authentication. You will need to create a
Snowflake user to log in to the database with.
## Example
```yaml
kind: sources
name: my-sf-source
type: snowflake
account: ${SNOWFLAKE_ACCOUNT}
user: ${SNOWFLAKE_USER}
password: ${SNOWFLAKE_PASSWORD}
database: ${SNOWFLAKE_DATABASE}
schema: ${SNOWFLAKE_SCHEMA}
warehouse: ${SNOWFLAKE_WAREHOUSE}
role: ${SNOWFLAKE_ROLE}
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
## Reference
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|------------------------------------------------------------------------|
| type | string | true | Must be "snowflake". |
| account | string | true | Your Snowflake account identifier. |
| user | string | true | Name of the Snowflake user to connect as (e.g. "my-sf-user"). |
| password | string | true | Password of the Snowflake user (e.g. "my-password"). |
| database | string | true | Name of the Snowflake database to connect to (e.g. "my_db"). |
| schema | string | true | Name of the schema to use (e.g. "my_schema"). |
| warehouse | string | false | The virtual warehouse to use. Defaults to "COMPUTE_WH". |
| role | string | false | The security role to use. Defaults to "ACCOUNTADMIN". |
| timeout | integer | false | The connection timeout in seconds. Defaults to 60. |
========================================================================
## snowflake-execute-sql Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Snowflake Source > snowflake-execute-sql Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/snowflake/snowflake-execute-sql/
**Description:** A "snowflake-execute-sql" tool executes a SQL statement against a Snowflake database.
## About
A `snowflake-execute-sql` tool executes a SQL statement against a Snowflake
database.
`snowflake-execute-sql` takes one input parameter `sql` and runs the SQL
statement against the `source`.
> **Note:** This tool is intended for developer assistant workflows with
> human-in-the-loop and shouldn't be used for production agents.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: execute_sql_tool
type: snowflake-execute-sql
source: my-snowflake-instance
description: Use this tool to execute a SQL statement.
```
## Reference
| **field** | **type** | **required** | **description** |
|--------------|:-------------:|:------------:|-----------------------------------------------------------|
| type | string | true | Must be "snowflake-execute-sql". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| authRequired | array[string] | false | List of auth services that are required to use this tool. |
========================================================================
## snowflake-sql Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Snowflake Source > snowflake-sql Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/snowflake/snowflake-sql/
**Description:** A "snowflake-sql" tool executes a pre-defined SQL statement against a Snowflake database.
## About
A `snowflake-sql` tool executes a pre-defined SQL statement against a Snowflake
database.
## Compatible Sources
{{< compatible-sources >}}
The specified SQL statement is executed as a prepared statement, and specified
parameters will be inserted according to their position: e.g. `:1` will be the
first parameter specified, `:2` will be the second parameter, and so on.
> **Note:** This tool uses parameterized queries to prevent SQL injections.
> Query parameters can be used as substitutes for arbitrary expressions.
> Parameters cannot be used as substitutes for identifiers, column names, table
> names, or other parts of the query.
## Example
```yaml
kind: tools
name: search_flights_by_number
type: snowflake-sql
source: my-snowflake-instance
statement: |
SELECT * FROM flights
WHERE airline = :1
AND flight_number = :2
LIMIT 10
description: |
Use this tool to get information for a specific flight.
Takes an airline code and flight number and returns info on the flight.
Do NOT use this tool with a flight id. Do NOT guess an airline code or flight number.
An airline code is a two-character airline designator followed by a flight
number, which is a 1 to 4 digit number.
For example, if given CY 0123, the airline is "CY", and flight_number is "123".
Another example: for DL 1234, the airline is "DL", and flight_number is "1234".
If the tool returns more than one option choose the date closest to today.
Example:
{{
"airline": "CY",
"flight_number": "888",
}}
Example:
{{
"airline": "DL",
"flight_number": "1234",
}}
parameters:
- name: airline
type: string
description: Airline unique 2 letter identifier
- name: flight_number
type: string
description: 1 to 4 digit number
```
### Example with Template Parameters
> **Note:** This tool allows direct modifications to the SQL statement,
> including identifiers, column names, and table names. **This makes it more
> vulnerable to SQL injections**. Using basic parameters only (see above) is
> recommended for performance and safety reasons. For more details, please check
> [templateParameters](..#template-parameters).
```yaml
kind: tools
name: list_table
type: snowflake-sql
source: my-snowflake-instance
statement: |
SELECT * FROM {{.tableName}};
description: |
Use this tool to list all information from a specific table.
Example:
{{
"tableName": "flights",
}}
templateParameters:
- name: tableName
type: string
description: Table to select from
```
## Reference
| **field** | **type** | **required** | **description** |
|--------------------|:--------------------------------------------:|:------------:|----------------------------------------------------------------------------------------------------------------------------------------|
| type | string | true | Must be "snowflake-sql". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| statement          | string                                       | true         | SQL statement to execute.                                                                                                               |
| parameters | [parameters](../#specifying-parameters) | false | List of [parameters](../#specifying-parameters) that will be inserted into the SQL statement. |
| templateParameters | [templateParameters](..#template-parameters) | false | List of [templateParameters](..#template-parameters) that will be inserted into the SQL statement before executing prepared statement. |
| authRequired | array[string] | false | List of auth services that are required to use this tool. |
========================================================================
## Spanner Source
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Spanner Source
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/spanner/
**Description:** Spanner is a fully managed database service from Google Cloud that combines relational, key-value, graph, and search capabilities.
## About
[Spanner][spanner-docs] is a fully managed, mission-critical database service
that brings together relational, graph, key-value, and search. It offers
transactional consistency at global scale, automatic, synchronous replication
for high availability, and support for two SQL dialects: GoogleSQL (ANSI 2011
with extensions) and PostgreSQL.
If you are new to Spanner, you can try to [create and query a database using
the Google Cloud console][spanner-quickstart].
[spanner-docs]: https://cloud.google.com/spanner/docs
[spanner-quickstart]:
https://cloud.google.com/spanner/docs/create-query-database-console
## Available Tools
{{< list-tools >}}
### Pre-built Configurations
- [Spanner using MCP](../../user-guide/connect-to/ides/spanner_mcp.md)
Connect your IDE to Spanner using Toolbox.
## Requirements
### IAM Permissions
Spanner uses [Identity and Access Management (IAM)][iam-overview] to control
user and group access to Spanner resources at the project, Spanner instance, and
Spanner database levels. Toolbox will use your [Application Default Credentials
(ADC)][adc] to authorize and authenticate when interacting with Spanner.
In addition to [setting the ADC for your server][set-adc], you need to ensure
the IAM identity has been given the correct IAM permissions for the query
provided. See [Apply IAM roles][grant-permissions] for more information on
applying IAM permissions and roles to an identity.
[iam-overview]: https://cloud.google.com/spanner/docs/iam
[adc]: https://cloud.google.com/docs/authentication#adc
[set-adc]: https://cloud.google.com/docs/authentication/provide-credentials-adc
[grant-permissions]: https://cloud.google.com/spanner/docs/grant-permissions
## Example
```yaml
kind: sources
name: my-spanner-source
type: "spanner"
project: "my-project-id"
instance: "my-instance"
database: "my_db"
# dialect: "googlesql"
```
## Reference
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|---------------------------------------------------------------------------------------------------------------------|
| type | string | true | Must be "spanner". |
| project | string | true | Id of the GCP project that the Spanner instance was created in (e.g. "my-project-id"). |
| instance | string | true | Name of the Spanner instance. |
| database | string | true | Name of the database on the Spanner instance. |
| dialect | string | false | Name of the dialect type of the Spanner database, must be either `googlesql` or `postgresql`. Default: `googlesql`. |
========================================================================
## spanner-execute-sql Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Spanner Source > spanner-execute-sql Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/spanner/spanner-execute-sql/
**Description:** A "spanner-execute-sql" tool executes a SQL statement against a Spanner database.
## About
A `spanner-execute-sql` tool executes a SQL statement against a Spanner
database.
`spanner-execute-sql` takes one input parameter `sql` and runs the SQL
statement against the `source`.
> **Note:** This tool is intended for developer assistant workflows with
> human-in-the-loop and shouldn't be used for production agents.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: execute_sql_tool
type: spanner-execute-sql
source: my-spanner-instance
description: Use this tool to execute a SQL statement.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|------------------------------------------------------------------------------------------|
| type | string | true | Must be "spanner-execute-sql". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| readOnly | bool | false | When set to `true`, the `statement` is run as a read-only transaction. Default: `false`. |
========================================================================
## spanner-sql Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Spanner Source > spanner-sql Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/spanner/spanner-sql/
**Description:** A "spanner-sql" tool executes a pre-defined SQL statement against a Google Cloud Spanner database.
## About
A `spanner-sql` tool executes a pre-defined SQL statement (either `googlesql` or
`postgresql`) against a Cloud Spanner database.
## Compatible Sources
{{< compatible-sources >}}
### GoogleSQL
For the `googlesql` dialect, the specified SQL statement is executed as a [data
manipulation language (DML)][gsql-dml] statement, and specified parameters will
be inserted according to their name: e.g. `@name`.
> **Note:** This tool uses parameterized queries to prevent SQL injections.
> Query parameters can be used as substitutes for arbitrary expressions.
> Parameters cannot be used as substitutes for identifiers, column names, table
> names, or other parts of the query.
[gsql-dml]:
https://cloud.google.com/spanner/docs/reference/standard-sql/dml-syntax
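The `@name` binding style can be sketched with Python's built-in `sqlite3` module, which also accepts `@name`-prefixed named parameters. Spanner performs the actual binding server-side; the table and values here are purely illustrative:

```python
import sqlite3

# In-memory stand-in for a flights table, used only to demonstrate
# @name-style named parameter binding.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE flights (airline TEXT, flight_number TEXT)")
conn.execute("INSERT INTO flights VALUES ('CY', '888'), ('DL', '1234')")

# Named parameters bind by name, so argument order does not matter.
rows = conn.execute(
    "SELECT * FROM flights WHERE airline = @airline "
    "AND flight_number = @flight_number LIMIT 10",
    {"flight_number": "1234", "airline": "DL"},
).fetchall()
print(rows)  # [('DL', '1234')]
```

Named binding tends to be less error-prone than positional binding when a statement uses many parameters, since reordering the parameter list cannot silently swap values.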
### PostgreSQL
For the `postgresql` dialect, the specified SQL statement is executed as a [prepared
statement][pg-prepare], and specified parameters will be inserted according to
their position: e.g. `$1` will be the first parameter specified, `$2` will be
the second parameter, and so on.
[pg-prepare]: https://www.postgresql.org/docs/current/sql-prepare.html
## Example
> **Note:** This tool uses parameterized queries to prevent SQL injections.
> Query parameters can be used as substitutes for arbitrary expressions.
> Parameters cannot be used as substitutes for identifiers, column names, table
> names, or other parts of the query.
{{< tabpane persist="header" >}}
{{< tab header="GoogleSQL" lang="yaml" >}}
kind: tools
name: search_flights_by_number
type: spanner-sql
source: my-spanner-instance
statement: |
SELECT * FROM flights
WHERE airline = @airline
AND flight_number = @flight_number
LIMIT 10
description: |
Use this tool to get information for a specific flight.
Takes an airline code and flight number and returns info on the flight.
Do NOT use this tool with a flight id. Do NOT guess an airline code or flight number.
An airline code is a two-character airline designator followed by a flight
number, which is a 1 to 4 digit number.
For example, if given CY 0123, the airline is "CY", and flight_number is "123".
Another example: for DL 1234, the airline is "DL", and flight_number is "1234".
If the tool returns more than one option choose the date closest to today.
Example:
{{
"airline": "CY",
"flight_number": "888",
}}
Example:
{{
"airline": "DL",
"flight_number": "1234",
}}
parameters:
- name: airline
type: string
description: Airline unique 2 letter identifier
- name: flight_number
type: string
description: 1 to 4 digit number
{{< /tab >}}
{{< tab header="PostgreSQL" lang="yaml" >}}
kind: tools
name: search_flights_by_number
type: spanner-sql
source: my-spanner-instance
statement: |
SELECT * FROM flights
WHERE airline = $1
AND flight_number = $2
LIMIT 10
description: |
Use this tool to get information for a specific flight.
Takes an airline code and flight number and returns info on the flight.
Do NOT use this tool with a flight id. Do NOT guess an airline code or flight number.
An airline code is a two-character airline designator followed by a flight
number, which is a 1 to 4 digit number.
For example, if given CY 0123, the airline is "CY", and flight_number is "123".
Another example: for DL 1234, the airline is "DL", and flight_number is "1234".
If the tool returns more than one option choose the date closest to today.
Example:
{{
"airline": "CY",
"flight_number": "888",
}}
Example:
{{
"airline": "DL",
"flight_number": "1234",
}}
parameters:
- name: airline
type: string
description: Airline unique 2 letter identifier
- name: flight_number
type: string
description: 1 to 4 digit number
{{< /tab >}}
{{< /tabpane >}}
### Example with Template Parameters
> **Note:** This tool allows direct modifications to the SQL statement,
> including identifiers, column names, and table names. **This makes it more
> vulnerable to SQL injections**. Using basic parameters only (see above) is
> recommended for performance and safety reasons. For more details, please check
> [templateParameters](..#template-parameters).
```yaml
kind: tools
name: list_table
type: spanner-sql
source: my-spanner-instance
statement: |
SELECT * FROM {{.tableName}};
description: |
Use this tool to list all information from a specific table.
Example:
{{
"tableName": "flights",
}}
templateParameters:
- name: tableName
type: string
description: Table to select from
```
## Reference
| **field** | **type** | **required** | **description** |
|--------------------|:--------------------------------------------:|:------------:|----------------------------------------------------------------------------------------------------------------------------------------|
| type | string | true | Must be "spanner-sql". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| statement          | string                                       | true         | SQL statement to execute.                                                                                                               |
| parameters | [parameters](../#specifying-parameters) | false | List of [parameters](../#specifying-parameters) that will be inserted into the SQL statement. |
| readOnly | bool | false | When set to `true`, the `statement` is run as a read-only transaction. Default: `false`. |
| templateParameters | [templateParameters](..#template-parameters) | false | List of [templateParameters](..#template-parameters) that will be inserted into the SQL statement before executing prepared statement. |
========================================================================
## spanner-list-graphs Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Spanner Source > spanner-list-graphs Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/spanner/spanner-list-graphs/
**Description:** A "spanner-list-graphs" tool retrieves schema information about graphs in a Google Cloud Spanner database.
## About
A `spanner-list-graphs` tool retrieves comprehensive schema information about
graphs in a Cloud Spanner database. It returns detailed metadata including
node tables, edge tables, labels and property declarations.
### Features
- **Comprehensive Schema Information**: Returns node tables, edge tables, labels
and property declarations
- **Flexible Filtering**: Can list all graphs or filter by specific graph names
- **Output Format Options**: Choose between simple (graph names only) or detailed
(full schema information) output
### Use Cases
1. **Database Documentation**: Generate comprehensive documentation of your
database schema
2. **Schema Validation**: Verify that expected graphs, nodes, and edges exist
3. **Migration Planning**: Understand the current schema before making changes
4. **Development Tools**: Build tools that need to understand database structure
5. **Audit and Compliance**: Track schema changes and ensure compliance with
data governance policies
This tool is read-only and executes pre-defined SQL queries against the
`INFORMATION_SCHEMA` tables to gather metadata.
{{< notice warning >}}
The tool only works for the GoogleSQL
source dialect, as Spanner Graph isn't available in the PostgreSQL dialect.
{{< /notice >}}
## Compatible Sources
{{< compatible-sources >}}
## Parameters
The tool accepts two optional parameters:
| **parameter** | **type** | **default** | **description** |
|---------------|:--------:|:-----------:|------------------------------------------------------------------------------------------------------|
| graph_names | string | "" | Comma-separated list of graph names to filter. If empty, lists all graphs in user-accessible schemas |
| output_format | string | "detailed" | Output format: "simple" returns only graph names, "detailed" returns full schema information |
## Example
### Basic Usage - List All Graphs
```yaml
kind: sources
name: my-spanner-db
type: spanner
project: ${SPANNER_PROJECT}
instance: ${SPANNER_INSTANCE}
database: ${SPANNER_DATABASE}
dialect: googlesql # required; Spanner Graph isn't available in the postgresql dialect
---
kind: tools
name: list_all_graphs
type: spanner-list-graphs
source: my-spanner-db
description: Lists all graphs with their complete schema information
```
### List Specific Graphs
```yaml
kind: tools
name: list_specific_graphs
type: spanner-list-graphs
source: my-spanner-db
description: |
Lists schema information for specific graphs.
Example usage:
{
"graph_names": "FinGraph,SocialGraph",
"output_format": "detailed"
}
```
## Output Format
### Simple Format
When `output_format` is set to "simple", the tool returns a minimal JSON structure:
```json
[
{
"object_details": {
"name": "FinGraph"
},
"object_name": "FinGraph",
"schema_name": ""
},
{
"object_details": {
"name": "SocialGraph"
},
"object_name": "SocialGraph",
"schema_name": ""
}
]
```
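A client consuming the simple format can reduce it to a plain list of graph names. This sketch just parses the JSON shape shown above:

```python
import json

# Simple-format output from spanner-list-graphs, as documented above.
simple_output = """
[
  {"object_details": {"name": "FinGraph"},
   "object_name": "FinGraph", "schema_name": ""},
  {"object_details": {"name": "SocialGraph"},
   "object_name": "SocialGraph", "schema_name": ""}
]
"""

# Each entry carries the graph name in "object_name".
graph_names = [entry["object_name"] for entry in json.loads(simple_output)]
print(graph_names)  # ['FinGraph', 'SocialGraph']
```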
### Detailed Format
When `output_format` is set to "detailed" (default), the tool returns
comprehensive schema information:
```json
[
{
"object_details": {
"catalog": "",
"edge_tables": [
{
"baseCatalogName": "",
"baseSchemaName": "",
"baseTableName": "Knows",
"destinationNodeTable": {
"edgeTableColumns": [
"DstId"
],
"nodeTableColumns": [
"Id"
],
"nodeTableName": "Person"
},
"keyColumns": [
"SrcId",
"DstId"
],
"kind": "EDGE",
"labelNames": [
"Knows"
],
"name": "Knows",
"propertyDefinitions": [
{
"propertyDeclarationName": "DstId",
"valueExpressionSql": "DstId"
},
{
"propertyDeclarationName": "SrcId",
"valueExpressionSql": "SrcId"
}
],
"sourceNodeTable": {
"edgeTableColumns": [
"SrcId"
],
"nodeTableColumns": [
"Id"
],
"nodeTableName": "Person"
}
}
],
"labels": [
{
"name": "Knows",
"propertyDeclarationNames": [
"DstId",
"SrcId"
]
},
{
"name": "Person",
"propertyDeclarationNames": [
"Id",
"Name"
]
}
],
"node_tables": [
{
"baseCatalogName": "",
"baseSchemaName": "",
"baseTableName": "Person",
"keyColumns": [
"Id"
],
"kind": "NODE",
"labelNames": [
"Person"
],
"name": "Person",
"propertyDefinitions": [
{
"propertyDeclarationName": "Id",
"valueExpressionSql": "Id"
},
{
"propertyDeclarationName": "Name",
"valueExpressionSql": "Name"
}
]
}
],
"object_name": "SocialGraph",
"property_declarations": [
{
"name": "DstId",
"type": "INT64"
},
{
"name": "Id",
"type": "INT64"
},
{
"name": "Name",
"type": "STRING"
},
{
"name": "SrcId",
"type": "INT64"
}
],
"schema_name": ""
},
"object_name": "SocialGraph",
"schema_name": ""
}
]
```
## Reference
| **field** | **type** | **required** | **description** |
|--------------|:--------:|:------------:|-----------------------------------------------------------------|
| type | string | true | Must be "spanner-list-graphs" |
| source | string | true | Name of the Spanner source to query (dialect must be GoogleSQL) |
| description | string | false | Description of the tool that is passed to the LLM |
| authRequired | string[] | false | List of auth services required to invoke this tool |
## Advanced Usage
### Example with Agent Integration
```yaml
kind: sources
name: spanner-db
type: spanner
project: my-project
instance: my-instance
database: my-database
dialect: googlesql
---
kind: tools
name: schema_inspector
type: spanner-list-graphs
source: spanner-db
description: |
Use this tool to inspect database schema information.
You can:
- List all graphs by leaving graph_names empty
- Get specific graph schemas by providing comma-separated graph names
- Choose between simple (names only) or detailed (full schema) output
Examples:
1. List all graphs with details: {"output_format": "detailed"}
2. Get specific graphs: {"graph_names": "FinGraph,SocialGraph", "output_format": "detailed"}
3. Just get graph names: {"output_format": "simple"}
```
## Troubleshooting
- This tool is read-only and does not modify any data
- The tool only works for the GoogleSQL source dialect
- Large databases with many graphs may take longer to query
========================================================================
## spanner-list-tables Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Spanner Source > spanner-list-tables Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/spanner/spanner-list-tables/
**Description:** A "spanner-list-tables" tool retrieves schema information about tables in a Google Cloud Spanner database.
## About
A `spanner-list-tables` tool retrieves comprehensive schema information about
tables in a Cloud Spanner database. It automatically adapts to the database
dialect (GoogleSQL or PostgreSQL) and returns detailed metadata including
columns, constraints, and indexes.
This tool is read-only and executes pre-defined SQL queries against the
`INFORMATION_SCHEMA` tables to gather metadata. The tool automatically detects
the database dialect from the source configuration and uses the appropriate SQL
syntax.
### Features
- **Automatic Dialect Detection**: Adapts queries based on whether the database
uses GoogleSQL or PostgreSQL dialect
- **Comprehensive Schema Information**: Returns columns, data types, constraints,
indexes, and table relationships
- **Flexible Filtering**: Can list all tables or filter by specific table names
- **Output Format Options**: Choose between simple (table names only) or detailed
(full schema information) output
### Use Cases
1. **Database Documentation**: Generate comprehensive documentation of your
database schema
2. **Schema Validation**: Verify that expected tables and columns exist
3. **Migration Planning**: Understand the current schema before making changes
4. **Development Tools**: Build tools that need to understand database structure
5. **Audit and Compliance**: Track schema changes and ensure compliance with
data governance policies
## Compatible Sources
{{< compatible-sources >}}
## Parameters
The tool accepts two optional parameters:
| **parameter** | **type** | **default** | **description** |
|---------------|:--------:|:-----------:|------------------------------------------------------------------------------------------------------|
| table_names | string | "" | Comma-separated list of table names to filter. If empty, lists all tables in user-accessible schemas |
| output_format | string | "detailed" | Output format: "simple" returns only table names, "detailed" returns full schema information |
## Example
### Basic Usage - List All Tables
```yaml
kind: sources
name: my-spanner-db
type: spanner
project: ${SPANNER_PROJECT}
instance: ${SPANNER_INSTANCE}
database: ${SPANNER_DATABASE}
dialect: googlesql # or postgresql
---
kind: tools
name: list_all_tables
type: spanner-list-tables
source: my-spanner-db
description: Lists all tables with their complete schema information
```
### List Specific Tables
```yaml
kind: tools
name: list_specific_tables
type: spanner-list-tables
source: my-spanner-db
description: |
Lists schema information for specific tables.
Example usage:
{
"table_names": "users,orders,products",
"output_format": "detailed"
}
```
## Output Format
### Simple Format
When `output_format` is set to "simple", the tool returns a minimal JSON structure:
```json
[
{
"schema_name": "public",
"object_name": "users",
"object_details": {
"name": "users"
}
},
{
"schema_name": "public",
"object_name": "orders",
"object_details": {
"name": "orders"
}
}
]
```
### Detailed Format
When `output_format` is set to "detailed" (default), the tool returns
comprehensive schema information:
```json
[
{
"schema_name": "public",
"object_name": "users",
"object_details": {
"schema_name": "public",
"object_name": "users",
"object_type": "BASE TABLE",
"columns": [
{
"column_name": "id",
"data_type": "INT64",
"ordinal_position": 1,
"is_not_nullable": true,
"column_default": null
},
{
"column_name": "email",
"data_type": "STRING(255)",
"ordinal_position": 2,
"is_not_nullable": true,
"column_default": null
}
],
"constraints": [
{
"constraint_name": "PK_users",
"constraint_type": "PRIMARY KEY",
"constraint_definition": "PRIMARY KEY (id)",
"constraint_columns": [
"id"
],
"foreign_key_referenced_table": null,
"foreign_key_referenced_columns": []
}
],
"indexes": [
{
"index_name": "idx_users_email",
"index_type": "INDEX",
"is_unique": true,
"is_null_filtered": false,
"interleaved_in_table": null,
"index_key_columns": [
{
"column_name": "email",
"ordering": "ASC"
}
],
"storing_columns": []
}
]
}
}
]
```
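Clients often post-process this detailed output. The sketch below is a hypothetical helper (not part of Toolbox or its SDKs) that extracts column names and types from the structure shown above:

```python
import json

# Sample output in the detailed format shown above (abbreviated).
detailed_output = json.loads("""
[
  {
    "schema_name": "public",
    "object_name": "users",
    "object_details": {
      "columns": [
        {"column_name": "id", "data_type": "INT64", "is_not_nullable": true},
        {"column_name": "email", "data_type": "STRING(255)", "is_not_nullable": true}
      ]
    }
  }
]
""")

def columns_by_table(tables):
    """Map each table name to its list of (column, type) pairs."""
    return {
        t["object_name"]: [
            (c["column_name"], c["data_type"])
            for c in t["object_details"].get("columns", [])
        ]
        for t in tables
    }

print(columns_by_table(detailed_output))
# {'users': [('id', 'INT64'), ('email', 'STRING(255)')]}
```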
## Reference
| **field** | **type** | **required** | **description** |
|--------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "spanner-list-tables" |
| source | string | true | Name of the Spanner source to query |
| description | string | false | Description of the tool that is passed to the LLM |
| authRequired | string[] | false | List of auth services required to invoke this tool |
## Advanced Usage
### Example with Agent Integration
```yaml
kind: sources
name: spanner-db
type: spanner
project: my-project
instance: my-instance
database: my-database
dialect: googlesql
---
kind: tools
name: schema_inspector
type: spanner-list-tables
source: spanner-db
description: |
Use this tool to inspect database schema information.
You can:
- List all tables by leaving table_names empty
- Get specific table schemas by providing comma-separated table names
- Choose between simple (names only) or detailed (full schema) output
Examples:
1. List all tables with details: {"output_format": "detailed"}
2. Get specific tables: {"table_names": "users,orders", "output_format": "detailed"}
3. Just get table names: {"output_format": "simple"}
```
## Troubleshooting
- This tool is read-only and does not modify any data
- The tool automatically handles both GoogleSQL and PostgreSQL dialects
- Large databases with many tables may take longer to query
========================================================================
## SQL Server Source
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > SQL Server Source
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/mssql/
**Description:** SQL Server is a relational database management system (RDBMS).
## About
[SQL Server][mssql-docs] is a relational database management system (RDBMS)
developed by Microsoft that allows users to store, retrieve, and manage large
amounts of data through a structured format.
[mssql-docs]: https://www.microsoft.com/en-us/sql-server
## Available Tools
{{< list-tools >}}
## Requirements
### Database User
This source only uses standard authentication. You will need to [create a
SQL Server user][mssql-users] to log in to the database with.
[mssql-users]:
https://learn.microsoft.com/en-us/sql/relational-databases/security/authentication-access/create-a-database-user?view=sql-server-ver16
## Example
```yaml
kind: sources
name: my-mssql-source
type: mssql
host: 127.0.0.1
port: 1433
database: my_db
user: ${USER_NAME}
password: ${PASSWORD}
# encrypt: strict
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
## Reference
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| type | string | true | Must be "mssql". |
| host | string | true | IP address to connect to (e.g. "127.0.0.1"). |
| port | string | true | Port to connect to (e.g. "1433"). |
| database | string | true | Name of the SQL Server database to connect to (e.g. "my_db"). |
| user | string | true | Name of the SQL Server user to connect as (e.g. "my-user"). |
| password | string | true | Password of the SQL Server user (e.g. "my-password"). |
| encrypt | string | false | Encryption level for data transmitted between the client and server (e.g., "strict"). If not specified, defaults to the [github.com/microsoft/go-mssqldb](https://github.com/microsoft/go-mssqldb?tab=readme-ov-file#common-parameters) package's default encrypt value. |
========================================================================
## mssql-execute-sql Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > SQL Server Source > mssql-execute-sql Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/mssql/mssql-execute-sql/
**Description:** A "mssql-execute-sql" tool executes a SQL statement against a SQL Server database.
## About
A `mssql-execute-sql` tool executes a SQL statement against a SQL Server
database. It's compatible with any of the sources listed below.
`mssql-execute-sql` takes one input parameter, `sql`, and runs the SQL
statement against the `source`.
> **Note:** This tool is intended for developer assistant workflows with
> human-in-the-loop and shouldn't be used for production agents.
## Compatible Sources
{{< compatible-sources others="integrations/cloud-sql-mssql">}}
## Example
```yaml
kind: tools
name: execute_sql_tool
type: mssql-execute-sql
source: my-mssql-instance
description: Use this tool to execute a SQL statement.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:------------------------------------------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "mssql-execute-sql". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## mssql-list-tables Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > SQL Server Source > mssql-list-tables Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/mssql/mssql-list-tables/
**Description:** The "mssql-list-tables" tool lists schema information for all or specified tables in a SQL Server database.
## About
The `mssql-list-tables` tool retrieves schema information for all or specified
tables in a SQL Server database.
`mssql-list-tables` lists detailed schema information (object type, columns,
constraints, indexes, triggers, owner, comment) as JSON for user-created tables
(ordinary or partitioned).
The tool takes the following input parameters:
- **`table_names`** (string, optional): Filters by a comma-separated list of
names. By default, it lists all tables in user schemas. Default: `""`.
- **`output_format`** (string, optional): Indicates the output format of the
table schema. `simple` will return only the table names, `detailed` will
return the full table information. Default: `detailed`.
## Compatible Sources
{{< compatible-sources others="integrations/cloud-sql-mssql">}}
## Example
```yaml
kind: tools
name: mssql_list_tables
type: mssql-list-tables
source: mssql-source
description: Use this tool to retrieve schema information for all or specified tables. Output format can be simple (only table names) or detailed.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|------------------------------------------------------|
| type | string | true | Must be "mssql-list-tables". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the agent. |
========================================================================
## mssql-sql Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > SQL Server Source > mssql-sql Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/mssql/mssql-sql/
**Description:** A "mssql-sql" tool executes a pre-defined SQL statement against a SQL Server database.
## About
A `mssql-sql` tool executes a pre-defined SQL statement against a SQL Server
database.
Toolbox supports the [prepare statement syntax][prepare-statement] of MS SQL
Server and expects parameters in the SQL query to be in the form of either
`@Name` or `@p1` to `@pN` (ordinal position).
```go
// Mix a named parameter (@ID) with an ordinal one (@p2):
db.QueryContext(ctx, `select * from t where ID = @ID and Name = @p2;`, sql.Named("ID", 6), "Bob")
```
[prepare-statement]:
https://learn.microsoft.com/sql/relational-databases/system-stored-procedures/sp-prepare-transact-sql?view=sql-server-ver16
## Compatible Sources
{{< compatible-sources others="integrations/cloud-sql-mssql">}}
## Example
> **Note:** This tool uses parameterized queries to prevent SQL injections.
> Query parameters can be used as substitutes for arbitrary expressions.
> Parameters cannot be used as substitutes for identifiers, column names, table
> names, or other parts of the query.
```yaml
kind: tools
name: search_flights_by_number
type: mssql-sql
source: my-instance
statement: |
SELECT TOP 10 * FROM flights
WHERE airline = @airline
AND flight_number = @flight_number
description: |
Use this tool to get information for a specific flight.
Takes an airline code and flight number and returns info on the flight.
Do NOT use this tool with a flight id. Do NOT guess an airline code or flight number.
An airline code consists of a two-character airline designator followed by a
flight number, which is a 1 to 4 digit number.
For example, given CY 0123, the airline is "CY" and flight_number is "123".
Similarly, for DL 1234, the airline is "DL" and flight_number is "1234".
If the tool returns more than one option, choose the date closest to today.
Example:
{{
"airline": "CY",
"flight_number": "888"
}}
Example:
{{
"airline": "DL",
"flight_number": "1234"
}}
parameters:
- name: airline
type: string
description: Airline unique 2 letter identifier
- name: flight_number
type: string
description: 1 to 4 digit number
```
### Example with Template Parameters
> **Note:** This tool allows direct modifications to the SQL statement,
> including identifiers, column names, and table names. **This makes it more
> vulnerable to SQL injections**. Using basic parameters only (see above) is
> recommended for performance and safety reasons. For more details, please check
> [templateParameters](..#template-parameters).
```yaml
kind: tools
name: list_table
type: mssql-sql
source: my-instance
statement: |
SELECT * FROM {{.tableName}};
description: |
Use this tool to list all information from a specific table.
Example:
{{
"tableName": "flights"
}}
templateParameters:
- name: tableName
type: string
description: Table to select from
```
## Reference
| **field** | **type** | **required** | **description** |
|--------------------|:--------------------------------------------:|:------------:|----------------------------------------------------------------------------------------------------------------------------------------|
| type | string | true | Must be "mssql-sql". |
| source | string | true | Name of the source the T-SQL statement should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| statement | string | true | SQL statement to execute. |
| parameters | [parameters](../#specifying-parameters) | false | List of [parameters](../#specifying-parameters) that will be inserted into the SQL statement. |
| templateParameters | [templateParameters](..#template-parameters) | false | List of [templateParameters](..#template-parameters) that will be inserted into the SQL statement before executing prepared statement. |
========================================================================
## SQLite Source
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > SQLite Source
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/sqlite/
**Description:** SQLite is a C-language library that implements a small, fast, self-contained, high-reliability, full-featured, SQL database engine.
## About
[SQLite](https://sqlite.org/) is a software library that provides a relational
database management system. The lite in SQLite means lightweight in terms of
setup, database administration, and required resources.
SQLite has the following notable characteristics:
- Self-contained with no external dependencies
- Serverless - the SQLite library accesses its storage files directly
- Single database file that can be easily copied or moved
- Zero-configuration - no setup or administration needed
- Transactional with ACID properties
## Available Tools
{{< list-tools >}}
### Pre-built Configurations
- [SQLite using MCP](../../user-guide/connect-to/ides/sqlite_mcp.md)
Connect your IDE to SQLite using Toolbox.
## Requirements
### Database File
You need a SQLite database file. This can be:
- An existing database file
- A path where a new database file should be created
- `:memory:` for an in-memory database
## Example
```yaml
kind: sources
name: my-sqlite-db
type: "sqlite"
database: "/path/to/database.db"
```
For an in-memory database:
```yaml
kind: sources
name: my-sqlite-memory-db
type: "sqlite"
database: ":memory:"
```
## Reference
### Configuration Fields
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|---------------------------------------------------------------------------------------------------------------------|
| type | string | true | Must be "sqlite". |
| database | string | true | Path to SQLite database file, or ":memory:" for an in-memory database. |
### Connection Properties
SQLite connections are configured with these defaults for optimal performance:
- `MaxOpenConns`: 1 (SQLite only supports one writer at a time)
- `MaxIdleConns`: 1
========================================================================
## sqlite-execute-sql Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > SQLite Source > sqlite-execute-sql Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/sqlite/sqlite-execute-sql/
**Description:** A "sqlite-execute-sql" tool executes a single SQL statement against a SQLite database.
## About
A `sqlite-execute-sql` tool executes a single SQL statement against a SQLite
database.
This tool is designed for direct execution of SQL statements. It takes a single
`sql` input parameter and runs the SQL statement against the configured SQLite
`source`.
> **Note:** This tool is intended for developer assistant workflows with
> human-in-the-loop and shouldn't be used for production agents.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: execute_sql_tool
type: sqlite-execute-sql
source: my-sqlite-db
description: Use this tool to execute a SQL statement.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "sqlite-execute-sql". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## sqlite-sql Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > SQLite Source > sqlite-sql Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/sqlite/sqlite-sql/
**Description:** Execute SQL statements against a SQLite database.
## About
A `sqlite-sql` tool executes SQL statements against a SQLite database.
SQLite uses the `?` placeholder for parameters in SQL statements. Parameters are
bound in the order they are provided.
The statement field supports any valid SQLite SQL statement, including `SELECT`,
`INSERT`, `UPDATE`, `DELETE`, `CREATE/ALTER/DROP` table statements, and other
DDL statements.
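Since parameters bind in the order they are declared, the first tool parameter fills the first `?`, the second fills the second, and so on. This ordering is plain SQLite behavior and can be seen with Python's built-in `sqlite3` module (shown purely as an illustration; Toolbox performs this binding for you):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, age INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("Alice", 34), ("Bob", 19)])

# Parameters bind in the order given: the first ? gets the name pattern,
# the second ? gets the minimum age -- matching the parameter list order.
rows = conn.execute(
    "SELECT name FROM users WHERE name LIKE ? AND age >= ?",
    ("A%", 30),
).fetchall()
print(rows)  # [('Alice',)]
```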
## Compatible Sources
{{< compatible-sources >}}
## Example
> **Note:** This tool uses parameterized queries to prevent SQL injections.
> Query parameters can be used as substitutes for arbitrary expressions.
> Parameters cannot be used as substitutes for identifiers, column names, table
> names, or other parts of the query.
```yaml
kind: tools
name: search-users
type: sqlite-sql
source: my-sqlite-db
description: Search users by name and age
parameters:
- name: name
type: string
description: The name to search for
- name: min_age
type: integer
description: Minimum age
statement: SELECT * FROM users WHERE name LIKE ? AND age >= ?
```
### Example with Template Parameters
> **Note:** This tool allows direct modifications to the SQL statement,
> including identifiers, column names, and table names. **This makes it more
> vulnerable to SQL injections**. Using basic parameters only (see above) is
> recommended for performance and safety reasons. For more details, please check
> [templateParameters](..#template-parameters).
```yaml
kind: tools
name: list_table
type: sqlite-sql
source: my-sqlite-db
statement: |
SELECT * FROM {{.tableName}};
description: |
Use this tool to list all information from a specific table.
Example:
{{
"tableName": "flights"
}}
templateParameters:
- name: tableName
type: string
description: Table to select from
```
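Template parameters exist because prepared-statement placeholders can only stand in for values, never for identifiers such as table names. The following sketch uses Python's built-in `sqlite3` (independent of Toolbox) to show why an identifier has to be spliced into the SQL text, and why that splice is the injection risk the note above warns about:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE flights (id INTEGER)")

# A ? placeholder cannot stand in for a table name: the statement
# fails to prepare, because placeholders only bind values.
try:
    conn.execute("SELECT * FROM ?", ("flights",))
    bound_ok = True
except sqlite3.OperationalError:
    bound_ok = False

# Template substitution splices the identifier into the SQL text instead,
# which works -- but untrusted input here would be SQL injection.
table_name = "flights"
rows = conn.execute(f"SELECT * FROM {table_name}").fetchall()
print(bound_ok, rows)  # False []
```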
## Reference
| **field** | **type** | **required** | **description** |
|--------------------|:--------------------------------------------:|:------------:|----------------------------------------------------------------------------------------------------------------------------------------|
| type | string | true | Must be "sqlite-sql". |
| source             | string                                       | true         | Name of the SQLite source to execute the statement on.                                                                                  |
| description | string | true | Description of the tool that is passed to the LLM. |
| statement | string | true | The SQL statement to execute. |
| parameters | [parameters](../#specifying-parameters) | false | List of [parameters](../#specifying-parameters) that will be inserted into the SQL statement. |
| templateParameters | [templateParameters](..#template-parameters) | false | List of [templateParameters](..#template-parameters) that will be inserted into the SQL statement before executing prepared statement. |
========================================================================
## TiDB Source
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > TiDB Source
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/tidb/
**Description:** TiDB is a distributed SQL database that combines the best of traditional RDBMS and NoSQL databases.
## About
[TiDB][tidb-docs] is an open-source distributed SQL database that supports
Hybrid Transactional and Analytical Processing (HTAP) workloads. It is
MySQL-compatible and features horizontal scalability, strong consistency, and
high availability.
[tidb-docs]: https://docs.pingcap.com/tidb/stable
## Available Tools
{{< list-tools >}}
## Requirements
### Database User
This source uses standard MySQL protocol authentication. You will need to
[create a TiDB user][tidb-users] to log in to the database with.
For TiDB Cloud users, you can create database users through the TiDB Cloud
console.
[tidb-users]: https://docs.pingcap.com/tidb/stable/user-account-management
## Example
- TiDB Cloud
```yaml
kind: sources
name: my-tidb-cloud-source
type: tidb
host: gateway01.us-west-2.prod.aws.tidbcloud.com
port: 4000
database: my_db
user: ${TIDB_USERNAME}
password: ${TIDB_PASSWORD}
# SSL is automatically enabled for TiDB Cloud
```
- Self-Hosted TiDB
```yaml
kind: sources
name: my-tidb-source
type: tidb
host: 127.0.0.1
port: 4000
database: my_db
user: ${TIDB_USERNAME}
password: ${TIDB_PASSWORD}
# ssl: true # Optional: enable SSL for secure connections
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
## Reference
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|--------------------------------------------------------------------------------------------|
| type | string | true | Must be "tidb". |
| host | string | true | IP address or hostname to connect to (e.g. "127.0.0.1" or "gateway01.*.tidbcloud.com"). |
| port | string | true | Port to connect to (typically "4000" for TiDB). |
| database | string | true | Name of the TiDB database to connect to (e.g. "my_db"). |
| user | string | true | Name of the TiDB user to connect as (e.g. "my-tidb-user"). |
| password | string | true | Password of the TiDB user (e.g. "my-password"). |
| ssl | boolean | false | Whether to use SSL/TLS encryption. Automatically enabled for TiDB Cloud instances. |
## Advanced Usage
### SSL Configuration
- TiDB Cloud
For TiDB Cloud instances, SSL is automatically enabled when the hostname
matches the TiDB Cloud pattern (`gateway*.*.*.tidbcloud.com`). You don't
need to explicitly set `ssl: true` for TiDB Cloud connections.
- Self-Hosted TiDB
For self-hosted TiDB instances, you can optionally enable SSL by setting
`ssl: true` in your configuration.
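The auto-detection can be thought of as a glob match on the hostname. The snippet below illustrates the documented pattern with Python's `fnmatch`; it is a sketch of the rule as stated above, not Toolbox's actual detection code:

```python
from fnmatch import fnmatchcase

# Pattern as documented; the real implementation inside Toolbox may differ.
TIDB_CLOUD_PATTERN = "gateway*.*.*.tidbcloud.com"

def looks_like_tidb_cloud(host: str) -> bool:
    """Return True if the hostname matches the TiDB Cloud gateway pattern."""
    return fnmatchcase(host, TIDB_CLOUD_PATTERN)

print(looks_like_tidb_cloud("gateway01.us-west-2.prod.aws.tidbcloud.com"))  # True
print(looks_like_tidb_cloud("127.0.0.1"))  # False
```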
========================================================================
## tidb-execute-sql Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > TiDB Source > tidb-execute-sql Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/tidb/tidb-execute-sql/
**Description:** A "tidb-execute-sql" tool executes a SQL statement against a TiDB database.
## About
A `tidb-execute-sql` tool executes a SQL statement against a TiDB
database.
`tidb-execute-sql` takes one input parameter, `sql`, and runs the SQL
statement against the `source`.
> **Note:** This tool is intended for developer assistant workflows with
> human-in-the-loop and shouldn't be used for production agents.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: execute_sql_tool
type: tidb-execute-sql
source: my-tidb-instance
description: Use this tool to execute a SQL statement.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| type | string | true | Must be "tidb-execute-sql". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## tidb-sql Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > TiDB Source > tidb-sql Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/tidb/tidb-sql/
**Description:** A "tidb-sql" tool executes a pre-defined SQL statement against a TiDB database.
## About
A `tidb-sql` tool executes a pre-defined SQL statement against a TiDB
database.
The specified SQL statement is executed as a [prepared statement][tidb-prepare],
and expects parameters in the SQL query to be in the form of placeholders `?`.
[tidb-prepare]: https://docs.pingcap.com/tidb/stable/sql-prepared-plan-cache
## Compatible Sources
{{< compatible-sources >}}
## Example
> **Note:** This tool uses parameterized queries to prevent SQL injections.
> Query parameters can be used as substitutes for arbitrary expressions.
> Parameters cannot be used as substitutes for identifiers, column names, table
> names, or other parts of the query.
```yaml
kind: tools
name: search_flights_by_number
type: tidb-sql
source: my-tidb-instance
statement: |
SELECT * FROM flights
WHERE airline = ?
AND flight_number = ?
LIMIT 10
description: |
Use this tool to get information for a specific flight.
Takes an airline code and flight number and returns info on the flight.
Do NOT use this tool with a flight id. Do NOT guess an airline code or flight number.
An airline code consists of a two-character airline designator followed by a
flight number, which is a 1 to 4 digit number.
For example, given CY 0123, the airline is "CY" and flight_number is "123".
Similarly, for DL 1234, the airline is "DL" and flight_number is "1234".
If the tool returns more than one option, choose the date closest to today.
Example:
{{
"airline": "CY",
"flight_number": "888"
}}
Example:
{{
"airline": "DL",
"flight_number": "1234"
}}
parameters:
- name: airline
type: string
description: Airline unique 2 letter identifier
- name: flight_number
type: string
description: 1 to 4 digit number
```
### Example with Template Parameters
> **Note:** This tool allows direct modifications to the SQL statement,
> including identifiers, column names, and table names. **This makes it more
> vulnerable to SQL injections**. Using basic parameters only (see above) is
> recommended for performance and safety reasons. For more details, please check
> [templateParameters](..#template-parameters).
```yaml
kind: tools
name: list_table
type: tidb-sql
source: my-tidb-instance
statement: |
SELECT * FROM {{.tableName}};
description: |
Use this tool to list all information from a specific table.
Example:
{{
"tableName": "flights"
}}
templateParameters:
- name: tableName
type: string
description: Table to select from
```
## Reference
| **field** | **type** | **required** | **description** |
|--------------------|:--------------------------------------------:|:------------:|----------------------------------------------------------------------------------------------------------------------------------------|
| type | string | true | Must be "tidb-sql". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| statement          | string                                       | true         | SQL statement to execute.                                                                                                               |
| parameters | [parameters](..#specifying-parameters) | false | List of [parameters](..#specifying-parameters) that will be inserted into the SQL statement. |
| templateParameters | [templateParameters](..#template-parameters) | false | List of [templateParameters](..#template-parameters) that will be inserted into the SQL statement before executing prepared statement. |
========================================================================
## Trino Source
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Trino Source
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/trino/
**Description:** Trino is a distributed SQL query engine for big data analytics.
## About
[Trino][trino-docs] is a distributed SQL query engine designed for fast analytic
queries against data of any size. It allows you to query data where it lives,
including Hive, Cassandra, relational databases or even proprietary data stores.
[trino-docs]: https://trino.io/docs/
## Available Tools
{{< list-tools >}}
## Requirements
### Trino Cluster
You need access to a running Trino cluster with appropriate user permissions for
the catalogs and schemas you want to query.
## Example
```yaml
kind: sources
name: my-trino-source
type: trino
host: trino.example.com
port: "8080"
user: ${TRINO_USER} # Optional for anonymous access
password: ${TRINO_PASSWORD} # Optional
catalog: hive
schema: default
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
## Reference
| **field** | **type** | **required** | **description** |
| ---------------------- | :------: | :----------: | ---------------------------------------------------------------------------- |
| type | string | true | Must be "trino". |
| host | string | true | Trino coordinator hostname (e.g. "trino.example.com") |
| port | string | true | Trino coordinator port (e.g. "8080", "8443") |
| user | string | false | Username for authentication (e.g. "analyst"). Optional for anonymous access. |
| password | string | false | Password for basic authentication |
| catalog | string | true | Default catalog to use for queries (e.g. "hive") |
| schema | string | true | Default schema to use for queries (e.g. "default") |
| queryTimeout | string | false | Query timeout duration (e.g. "30m", "1h") |
| accessToken | string | false | JWT access token for authentication |
| kerberosEnabled | boolean | false | Enable Kerberos authentication (default: false) |
| sslEnabled | boolean | false | Enable SSL/TLS (default: false) |
| disableSslVerification | boolean | false | Skip SSL/TLS certificate verification (default: false) |
| sslCertPath | string | false | Path to a custom SSL/TLS certificate file |
| sslCert | string | false | Custom SSL/TLS certificate content |
========================================================================
## trino-execute-sql Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Trino Source > trino-execute-sql Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/trino/trino-execute-sql/
**Description:** A "trino-execute-sql" tool executes a SQL statement against a Trino database.
## About
A `trino-execute-sql` tool executes a SQL statement against a Trino
database.
`trino-execute-sql` takes one input parameter `sql` and runs the SQL
statement against the `source`.
> **Note:** This tool is intended for developer assistant workflows with
> human-in-the-loop and shouldn't be used for production agents.
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: execute_sql_tool
type: trino-execute-sql
source: my-trino-instance
description: Use this tool to execute a SQL statement.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:------------------------------------------:|:------------:|--------------------------------------------------------------------------------------------------|
| type | string | true | Must be "trino-execute-sql". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
========================================================================
## trino-sql Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Trino Source > trino-sql Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/trino/trino-sql/
**Description:** A "trino-sql" tool executes a pre-defined SQL statement against a Trino database.
## About
A `trino-sql` tool executes a pre-defined SQL statement against a Trino
database.
The specified SQL statement is executed as a [prepared statement][trino-prepare], and expects parameters in the SQL query to be in the form of placeholders `?`.
[trino-prepare]: https://trino.io/docs/current/sql/prepare.html
## Compatible Sources
{{< compatible-sources >}}
## Example
> **Note:** This tool uses parameterized queries to prevent SQL injections.
> Query parameters can be used as substitutes for arbitrary expressions.
> Parameters cannot be used as substitutes for identifiers, column names, table
> names, or other parts of the query.
```yaml
kind: tools
name: search_orders_by_region
type: trino-sql
source: my-trino-instance
statement: |
SELECT * FROM hive.sales.orders
WHERE region = ?
AND order_date >= DATE(?)
LIMIT 10
description: |
Use this tool to get information for orders in a specific region.
Takes a region code and date and returns info on the orders.
Do NOT use this tool with an order id. Do NOT guess a region code or date.
A region code is a code for a geographic region consisting of a two-character
region designator followed by an optional subregion.
For example, if given US-WEST, the region is "US-WEST".
Another example for this is EU-CENTRAL, the region is "EU-CENTRAL".
If the tool returns more than one option choose the date closest to today.
Example:
{{
"region": "US-WEST",
"order_date": "2024-01-01",
}}
Example:
{{
"region": "EU-CENTRAL",
"order_date": "2024-01-15",
}}
parameters:
- name: region
type: string
description: Region unique identifier
- name: order_date
type: string
description: Order date in YYYY-MM-DD format
```
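The declaration order of `parameters` determines which `?` each value fills. The sketch below illustrates that positional binding; it is an assumption-laden illustration of the concept, not the server's actual implementation (the server performs this internally via Trino prepared statements):

```python
def bind_positional(statement: str, parameters: list[dict], values: dict) -> tuple:
    """Map named tool parameters to the ordered values a ?-style
    prepared statement expects. Illustrative sketch only."""
    if statement.count("?") != len(parameters):
        raise ValueError("placeholder count must match declared parameters")
    # Values are ordered by the position of each parameter declaration.
    return tuple(values[p["name"]] for p in parameters)

params = [{"name": "region"}, {"name": "order_date"}]
args = bind_positional(
    "SELECT * FROM hive.sales.orders WHERE region = ? AND order_date >= DATE(?)",
    params,
    {"region": "US-WEST", "order_date": "2024-01-01"},
)
# args == ("US-WEST", "2024-01-01")
```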
### Example with Template Parameters
> **Note:** This tool allows direct modifications to the SQL statement,
> including identifiers, column names, and table names. **This makes it more
> vulnerable to SQL injections**. Using basic parameters only (see above) is
> recommended for performance and safety reasons. For more details, please check
> [templateParameters](..#template-parameters).
```yaml
kind: tools
name: list_table
type: trino-sql
source: my-trino-instance
statement: |
SELECT * FROM {{.tableName}}
description: |
Use this tool to list all information from a specific table.
Example:
{{
"tableName": "hive.sales.orders",
}}
templateParameters:
- name: tableName
type: string
description: Table to select from
```
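Template parameters are spliced into the statement text itself before the prepared statement is created, which is why they can name tables and columns. A rough sketch of that substitution step (Go-template-style `{{.name}}` placeholders; illustrative only, not the server's actual implementation):

```python
def render_template(statement: str, template_values: dict) -> str:
    """Replace {{.name}} placeholders with raw values.
    Illustrative sketch: the real server uses Go templates. This
    direct text splicing is exactly why template parameters are more
    exposed to SQL injection than ordinary ? parameters."""
    for name, value in template_values.items():
        statement = statement.replace("{{." + name + "}}", str(value))
    return statement

sql = render_template("SELECT * FROM {{.tableName}}",
                      {"tableName": "hive.sales.orders"})
# sql == "SELECT * FROM hive.sales.orders"
```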
## Reference
| **field** | **type** | **required** | **description** |
|--------------------|:--------------------------------------------:|:------------:|----------------------------------------------------------------------------------------------------------------------------------------|
| type | string | true | Must be "trino-sql". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| statement | string | true | SQL statement to execute. |
| parameters | [parameters](../#specifying-parameters) | false | List of [parameters](../#specifying-parameters) that will be inserted into the SQL statement. |
| templateParameters | [templateParameters](..#template-parameters) | false | List of [templateParameters](..#template-parameters) that will be inserted into the SQL statement before executing prepared statement. |
========================================================================
## Utility tools
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Utility tools
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/utility/
**Description:** Tools that provide utility.
========================================================================
## wait Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Utility tools > wait Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/utility/wait/
**Description:** A "wait" tool pauses execution for a specified duration.
## About
A `wait` tool pauses execution for a specified duration. This can be useful in
workflows where a delay is needed between steps.
`wait` takes one input parameter `duration` which is a string representing the
time to wait (e.g., "10s", "2m", "1h").
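The duration string follows Go's duration syntax (`"10s"`, `"2m"`, `"1h"`). As a simplified sketch of how such strings map to seconds (assuming single-unit values only — Go also accepts compound forms like `"1h30m"`, which this sketch ignores):

```python
UNITS = {"s": 1, "m": 60, "h": 3600}

def parse_simple_duration(text: str) -> int:
    """Convert a single-unit Go-style duration ("10s", "2m", "1h")
    to seconds. Simplified sketch; compound forms are unsupported."""
    value, unit = text[:-1], text[-1]
    if unit not in UNITS:
        raise ValueError(f"unsupported unit: {unit!r}")
    return int(value) * UNITS[unit]

parse_simple_duration("2m")  # 120
```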
{{< notice info >}}
This tool is intended for developer assistant workflows with human-in-the-loop
and shouldn't be used for production agents.
{{< /notice >}}
## Example
```yaml
kind: tools
name: wait_for_tool
type: wait
description: Use this tool to pause execution for a specified duration.
timeout: 30s
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------------:|:------------:|-------------------------------------------------------|
| type | string | true | Must be "wait". |
| description | string | true | Description of the tool that is passed to the LLM. |
| timeout | string | true | The default duration the tool can wait for. |
========================================================================
## Valkey Source
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Valkey Source
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/valkey/
**Description:** Valkey is an open-source, in-memory data structure store, forked from Redis.
## About
Valkey is an open-source, in-memory data structure store that originated as a
fork of Redis. It's designed to be used as a database, cache, and message
broker, supporting a wide range of data structures like strings, hashes, lists,
sets, sorted sets with range queries, bitmaps, hyperloglogs, and geospatial
indexes with radius queries.
If you're new to Valkey, you can find installation and getting started guides on
the [official Valkey website](https://valkey.io/topics/quickstart/).
## Available Tools
{{< list-tools >}}
## Example
```yaml
kind: sources
name: my-valkey-instance
type: valkey
address:
- 127.0.0.1:6379
username: ${YOUR_USERNAME}
password: ${YOUR_PASSWORD}
# database: 0
# useGCPIAM: false
# disableCache: false
```
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
### IAM Authentication
If you are using GCP's Memorystore for Valkey, you can connect using IAM
authentication. Grant your account the required [IAM role][iam] and set
`useGCPIAM` to `true`:
```yaml
kind: sources
name: my-valkey-instance
type: valkey
address:
- 127.0.0.1:6379
useGCPIAM: true
```
[iam]: https://cloud.google.com/memorystore/docs/valkey/about-iam-auth
## Reference
| **field** | **type** | **required** | **description** |
|--------------|:--------:|:------------:|----------------------------------------------------------------------------------------------------------------------------------|
| type | string | true | Must be "valkey". |
| address | []string | true | Endpoints for the Valkey instance to connect to. |
| username | string | false | If you are using a non-default user, specify the user name here. If you are using Memorystore for Valkey, leave this field blank |
| password | string | false | Password for the Valkey instance |
| database | int | false | The Valkey database to connect to. Not applicable for cluster enabled instances. The default database is `0`. |
| useGCPIAM | bool | false | Set it to `true` if you are using GCP's IAM authentication. Defaults to `false`. |
| disableCache | bool | false | Set it to `true` if you want to disable client-side caching. Defaults to `false`. |
========================================================================
## valkey Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > Valkey Source > valkey Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/valkey/valkey-tool/
**Description:** A "valkey" tool executes a set of pre-defined Valkey commands against a Valkey instance.
## About
A `valkey` tool executes a series of pre-defined Valkey commands against a
Valkey instance.
The specified Valkey commands are executed sequentially. Each command is
represented as a string array, where the first element is the command name
(e.g., SET, GET, HGETALL) and subsequent elements are its arguments.
### Dynamic Command Parameters
Command arguments can be templated using the `$variableName` annotation. The
array type parameters will be expanded once into multiple arguments. Take the
following config for example:
```yaml
commands:
- [SADD, userNames, $userNames] # Array will be flattened into multiple arguments.
parameters:
- name: userNames
type: array
description: The user names to be set.
```
If the input is an array of strings `["Alice", "Sid", "Bob"]`, the final command
to be executed after argument expansion will be `[SADD, userNames, Alice, Sid, Bob]`.
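The flattening step can be pictured as follows: each `$name` token is replaced by its parameter value, and array values are spliced in as multiple arguments. This is an illustrative sketch of the expansion rule, not the server's implementation:

```python
def expand_command(command: list[str], params: dict) -> list[str]:
    """Expand $name placeholders in a Valkey command template.
    Array parameters are flattened into multiple arguments."""
    out: list[str] = []
    for token in command:
        if token.startswith("$"):
            value = params[token[1:]]
            if isinstance(value, list):
                out.extend(str(v) for v in value)  # splice arrays in place
            else:
                out.append(str(value))
        else:
            out.append(token)
    return out

cmd = expand_command(["SADD", "userNames", "$userNames"],
                     {"userNames": ["Alice", "Sid", "Bob"]})
# cmd == ["SADD", "userNames", "Alice", "Sid", "Bob"]
```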
## Compatible Sources
{{< compatible-sources >}}
## Example
```yaml
kind: tools
name: user_data_tool
type: valkey
source: my-valkey-instance
description: |
Use this tool to interact with user data stored in Valkey.
It can set, retrieve, and delete user-specific information.
commands:
- [SADD, userNames, $userNames] # Array will be flattened into multiple arguments.
- [GET, $userId]
parameters:
- name: userId
type: string
description: The unique identifier for the user.
- name: userNames
type: array
description: The user names to be set.
```
========================================================================
## YugabyteDB Source
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > YugabyteDB Source
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/yuagbytedb/
**Description:** YugabyteDB is a high-performance, distributed SQL database.
## About
[YugabyteDB][yugabytedb] is a high-performance, distributed SQL database
designed for global, internet-scale applications, with full PostgreSQL
compatibility.
[yugabytedb]: https://www.yugabyte.com/
## Available Tools
{{< list-tools >}}
## Example
```yaml
kind: sources
name: my-yb-source
type: yugabytedb
host: 127.0.0.1
port: 5433
database: yugabyte
user: ${USER_NAME}
password: ${PASSWORD}
loadBalance: true
topologyKeys: cloud.region.zone1:1,cloud.region.zone2:2
```
## Reference
| **field** | **type** | **required** | **description** |
|------------------------------|:--------:|:------------:|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| type | string | true | Must be "yugabytedb". |
| host | string | true | IP address to connect to. |
| port | integer | true | Port to connect to. The default port is 5433. |
| database | string | true | Name of the YugabyteDB database to connect to. The default database name is yugabyte. |
| user | string | true | Name of the YugabyteDB user to connect as. The default user is yugabyte. |
| password | string | true | Password of the YugabyteDB user. The default password is yugabyte. |
| loadBalance | boolean | false | If true, enable uniform load balancing. The default loadBalance value is false. |
| topologyKeys | string | false | Comma-separated geo-locations in the form cloud.region.zone:priority to enable topology-aware load balancing. Ignored if loadBalance is false. It is null by default. |
| ybServersRefreshInterval | integer | false | The interval (in seconds) to refresh the servers list; ignored if loadBalance is false. The default value of ybServersRefreshInterval is 300. |
| fallbackToTopologyKeysOnly | boolean | false | If set to true and topologyKeys are specified, only connect to nodes specified in topologyKeys. By default, this is set to false. |
| failedHostReconnectDelaySecs | integer | false | Time (in seconds) to wait before trying to connect to failed nodes. The default value is 5. |
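The `topologyKeys` string packs several placements into one comma-separated value. A small sketch of how `cloud.region.zone:priority` entries decompose (illustrative only; the driver parses this internally):

```python
def parse_topology_keys(spec: str) -> list[tuple[str, int]]:
    """Split 'cloud.region.zone:priority,...' into (placement, priority) pairs."""
    pairs = []
    for entry in spec.split(","):
        placement, priority = entry.rsplit(":", 1)
        pairs.append((placement, int(priority)))
    return pairs

keys = parse_topology_keys("cloud.region.zone1:1,cloud.region.zone2:2")
# keys == [("cloud.region.zone1", 1), ("cloud.region.zone2", 2)]
```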
========================================================================
## yugabytedb-sql Tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Integrations > YugabyteDB Source > yugabytedb-sql Tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/integrations/yuagbytedb/yugabytedb-sql/
**Description:** A "yugabytedb-sql" tool executes a pre-defined SQL statement against a YugabyteDB database.
## About
A `yugabytedb-sql` tool executes a pre-defined SQL statement against a
YugabyteDB database.
The specified SQL statement is executed as a prepared statement,
and specified parameters will be inserted according to their position: e.g. `$1`
will be the first parameter specified, `$2` will be the second parameter, and so
on. If template parameters are included, they will be resolved before execution
of the prepared statement.
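Because YugabyteDB is PostgreSQL-compatible, placeholders are numbered and may repeat within a statement. The sketch below shows how numbered placeholders pick values by index; it is illustrative only — in real execution the driver sends values out-of-band as part of the prepared-statement protocol:

```python
import re

def resolve_placeholders(statement: str, values: list) -> list:
    """Report which value each $n placeholder refers to, in the
    order the placeholders appear. Illustrative sketch only."""
    refs = [int(m) for m in re.findall(r"\$(\d+)", statement)]
    return [values[i - 1] for i in refs]  # $1 is the first value

order = resolve_placeholders(
    "SELECT * FROM flights WHERE airline = $1 AND flight_number = $2",
    ["CY", "888"],
)
# order == ["CY", "888"]
```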
## Compatible Sources
{{< compatible-sources >}}
## Example
> **Note:** This tool uses parameterized queries to prevent SQL injections.
> Query parameters can be used as substitutes for arbitrary expressions.
> Parameters cannot be used as substitutes for identifiers, column names, table
> names, or other parts of the query.
```yaml
kind: tools
name: search_flights_by_number
type: yugabytedb-sql
source: my-yb-instance
statement: |
SELECT * FROM flights
WHERE airline = $1
AND flight_number = $2
LIMIT 10
description: |
Use this tool to get information for a specific flight.
Takes an airline code and flight number and returns info on the flight.
Do NOT use this tool with a flight id. Do NOT guess an airline code or flight number.
An airline code is a code for an airline service consisting of a two-character
airline designator followed by a flight number, which is a 1 to 4 digit number.
For example, if given CY 0123, the airline is "CY", and flight_number is "123".
Another example for this is DL 1234, the airline is "DL", and flight_number is "1234".
If the tool returns more than one option choose the date closest to today.
Example:
{{
"airline": "CY",
"flight_number": "888",
}}
Example:
{{
"airline": "DL",
"flight_number": "1234",
}}
parameters:
- name: airline
type: string
description: Airline unique 2 letter identifier
- name: flight_number
type: string
description: 1 to 4 digit number
```
### Example with Template Parameters
> **Note:** This tool allows direct modifications to the SQL statement,
> including identifiers, column names, and table names. **This makes it more
> vulnerable to SQL injections**. Using basic parameters only (see above) is
> recommended for performance and safety reasons. For more details, please check
> [templateParameters](..#template-parameters).
```yaml
kind: tools
name: list_table
type: yugabytedb-sql
source: my-yb-instance
statement: |
SELECT * FROM {{.tableName}}
description: |
Use this tool to list all information from a specific table.
Example:
{{
"tableName": "flights",
}}
templateParameters:
- name: tableName
type: string
description: Table to select from
```
## Reference
| **field** | **type** | **required** | **description** |
|--------------------|:--------------------------------------------:|:------------:|----------------------------------------------------------------------------------------------------------------------------------------|
| type | string | true | Must be "yugabytedb-sql". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| statement | string | true | SQL statement to execute. |
| parameters | [parameters](..#specifying-parameters) | false | List of [parameters](..#specifying-parameters) that will be inserted into the SQL statement. |
| templateParameters | [templateParameters](..#template-parameters) | false | List of [templateParameters](..#template-parameters) that will be inserted into the SQL statement before executing prepared statement. |
========================================================================
## Build with MCP Toolbox
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Build with MCP Toolbox
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/build-with-mcp-toolbox/
**Description:** Step-by-step tutorials and project guides for building AI agents and workflows with the MCP Toolbox.
Now that you understand the core concepts and have your server configured, it is time to put those tools to work in real-world scenarios.
Explore the step-by-step guides below to learn how to integrate your databases with different orchestration frameworks and build capable AI agents:
========================================================================
## Python Quickstart (Local)
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Build with MCP Toolbox > Python Quickstart (Local)
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/build-with-mcp-toolbox/local_quickstart/
**Description:** How to get started running MCP Toolbox locally with [Python](https://github.com/googleapis/mcp-toolbox-sdk-python), PostgreSQL, and [Agent Development Kit](https://google.github.io/adk-docs/), [LangGraph](https://www.langchain.com/langgraph), [LlamaIndex](https://www.llamaindex.ai/) or [GoogleGenAI](https://pypi.org/project/google-genai/).
[Open in Colab](https://colab.research.google.com/github/googleapis/genai-toolbox/blob/main/docs/en/getting-started/colab_quickstart.ipynb)
## Before you begin
This guide assumes you have already done the following:
1. Installed [Python (3.10 or higher)][install-python] (including [pip][install-pip] and
your preferred virtual environment tool for managing dependencies e.g.
[venv][install-venv]).
1. Installed [PostgreSQL 16+ and the `psql` client][install-postgres].
[install-python]: https://wiki.python.org/moin/BeginnersGuide/Download
[install-pip]: https://pip.pypa.io/en/stable/installation/
[install-venv]:
https://packaging.python.org/en/latest/tutorials/installing-packages/#creating-virtual-environments
[install-postgres]: https://www.postgresql.org/download/
### Cloud Setup (Optional)
{{< regionInclude "quickstart/shared/cloud_setup.md" "cloud_setup" >}}
## Step 1: Set up your database
{{< regionInclude "quickstart/shared/database_setup.md" "database_setup" >}}
## Step 2: Install and configure MCP Toolbox
{{< regionInclude "quickstart/shared/configure_toolbox.md" "configure_toolbox" >}}
## Step 3: Connect your agent to MCP Toolbox
In this section, we will write and run an agent that will load the Tools
from MCP Toolbox.
{{< notice tip>}}
If you prefer to experiment within a Google Colab environment, you can connect
to a [local
runtime](https://research.google.com/colaboratory/local-runtimes.html).
{{< /notice >}}
1. In a new terminal, install the SDK package.
{{< tabpane persist=header >}}
{{< tab header="ADK" lang="bash" >}}
pip install google-adk[toolbox]
{{< /tab >}}
{{< tab header="Langchain" lang="bash" >}}
pip install toolbox-langchain
{{< /tab >}}
{{< tab header="LlamaIndex" lang="bash" >}}
pip install toolbox-llamaindex
{{< /tab >}}
{{< tab header="Core" lang="bash" >}}
pip install toolbox-core
{{< /tab >}}
{{< /tabpane >}}
1. Install other required dependencies:
{{< tabpane persist=header >}}
{{< tab header="ADK" lang="bash" >}}
# No other dependencies required for ADK
{{< /tab >}}
{{< tab header="Langchain" lang="bash" >}}
# TODO(developer): replace with correct package if needed
pip install langgraph langchain-google-vertexai
# pip install langchain-google-genai
# pip install langchain-anthropic
{{< /tab >}}
{{< tab header="LlamaIndex" lang="bash" >}}
# TODO(developer): replace with correct package if needed
pip install llama-index-llms-google-genai
# pip install llama-index-llms-anthropic
{{< /tab >}}
{{< tab header="Core" lang="bash" >}}
pip install google-genai
{{< /tab >}}
{{< /tabpane >}}
1. Create the agent:
{{< tabpane persist=header >}}
{{% tab header="ADK" text=true %}}
1. Create a new agent project. This will create a new directory named `my_agent`
with a file `agent.py`.
```bash
adk create my_agent
```
1. Update `my_agent/agent.py` with the following content to connect to MCP Toolbox:
{{< regionInclude "quickstart/python/adk/quickstart.py" "quickstart" "python" >}}
1. Create a `.env` file with your Google API key:
```bash
echo 'GOOGLE_API_KEY="YOUR_API_KEY"' > my_agent/.env
```
{{% /tab %}}
{{% tab header="LangChain" text=true %}}
Create a new file named `agent.py` and copy the following code:
{{< include "quickstart/python/langchain/quickstart.py" "python" >}}
{{% /tab %}}
{{% tab header="LlamaIndex" text=true %}}
Create a new file named `agent.py` and copy the following code:
{{< include "quickstart/python/llamaindex/quickstart.py" "python" >}}
{{% /tab %}}
{{% tab header="Core" text=true %}}
Create a new file named `agent.py` and copy the following code:
{{< include "quickstart/python/core/quickstart.py" "python" >}}
{{% /tab %}}
{{< /tabpane >}}
{{< tabpane text=true persist=header >}}
{{% tab header="ADK" lang="en" %}}
To learn more about Agent Development Kit, check out the [ADK
Documentation](https://google.github.io/adk-docs/get-started/python/).
{{% /tab %}}
{{% tab header="Langchain" lang="en" %}}
To learn more about Agents in LangChain, check out the [LangGraph Agent
Documentation](https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.chat_agent_executor.create_react_agent).
{{% /tab %}}
{{% tab header="LlamaIndex" lang="en" %}}
To learn more about Agents in LlamaIndex, check out the [LlamaIndex
AgentWorkflow
Documentation](https://docs.llamaindex.ai/en/stable/examples/agent/agent_workflow_basic/).
{{% /tab %}}
{{% tab header="Core" lang="en" %}}
To learn more about tool calling with Google GenAI, check out the
[Google GenAI
Documentation](https://github.com/googleapis/python-genai?tab=readme-ov-file#manually-declare-and-invoke-a-function-for-function-calling).
{{% /tab %}}
{{< /tabpane >}}
4. Run your agent, and observe the results:
{{< tabpane persist=header >}}
{{% tab header="ADK" text=true %}}
Run your agent locally for testing:
```sh
adk run my_agent
```
Alternatively, serve it via a web interface:
```sh
adk web --port 8000
```
For more information, refer to the ADK documentation on [Running
Agents](https://google.github.io/adk-docs/get-started/python/#run-your-agent)
and [Deploying to Cloud](https://google.github.io/adk-docs/deploy/).
{{% /tab %}}
{{< tab header="Langchain" lang="bash" >}}
python agent.py
{{< /tab >}}
{{< tab header="LlamaIndex" lang="bash" >}}
python agent.py
{{< /tab >}}
{{< tab header="Core" lang="bash" >}}
python agent.py
{{< /tab >}}
{{< /tabpane >}}
{{< notice info >}}
For more information, visit the [Python SDK
repo](https://github.com/googleapis/mcp-toolbox-sdk-python).
{{< /notice >}}
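For a bare-bones look at the pattern the framework integrations above build on, here is a minimal `toolbox-core` sketch. The server URL, port, and tool name are illustrative assumptions — substitute the values from your own Toolbox deployment and `tools.yaml`:

```python
import asyncio

async def main():
    # toolbox-core is the official Python SDK (pip install toolbox-core).
    # The URL and tool name below are illustrative assumptions; replace
    # them with your server address and a tool from your tools.yaml.
    from toolbox_core import ToolboxClient

    async with ToolboxClient("http://127.0.0.1:5000") as client:
        tool = await client.load_tool("search-hotels-by-name")
        print(await tool(name="Basel"))

if __name__ == "__main__":
    asyncio.run(main())
```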
========================================================================
## JS Quickstart (Local)
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Build with MCP Toolbox > JS Quickstart (Local)
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/build-with-mcp-toolbox/local_quickstart_js/
**Description:** How to get started running MCP Toolbox locally with [JavaScript](https://github.com/googleapis/mcp-toolbox-sdk-js), PostgreSQL, and orchestration frameworks such as [LangChain](https://js.langchain.com/docs/introduction/), [GenkitJS](https://genkit.dev/docs/get-started/), [LlamaIndex](https://ts.llamaindex.ai/) and [GoogleGenAI](https://github.com/googleapis/js-genai).
## Before you begin
This guide assumes you have already done the following:
1. Installed [Node.js (v18 or higher)].
1. Installed [PostgreSQL 16+ and the `psql` client][install-postgres].
[Node.js (v18 or higher)]: https://nodejs.org/
[install-postgres]: https://www.postgresql.org/download/
### Cloud Setup (Optional)
{{< regionInclude "quickstart/shared/cloud_setup.md" "cloud_setup" >}}
## Step 1: Set up your database
{{< regionInclude "quickstart/shared/database_setup.md" "database_setup" >}}
## Step 2: Install and configure MCP Toolbox
{{< regionInclude "quickstart/shared/configure_toolbox.md" "configure_toolbox" >}}
## Step 3: Connect your agent to MCP Toolbox
In this section, we will write and run an agent that will load the Tools
from MCP Toolbox.
1. (Optional) Initialize a Node.js project:
```bash
npm init -y
```
1. In a new terminal, install the
SDK package.
{{< tabpane persist=header >}}
{{< tab header="LangChain" lang="bash" >}}
npm install @toolbox-sdk/core
{{< /tab >}}
{{< tab header="GenkitJS" lang="bash" >}}
npm install @toolbox-sdk/core
{{< /tab >}}
{{< tab header="LlamaIndex" lang="bash" >}}
npm install @toolbox-sdk/core
{{< /tab >}}
{{< tab header="GoogleGenAI" lang="bash" >}}
npm install @toolbox-sdk/core
{{< /tab >}}
{{< tab header="ADK" lang="bash" >}}
npm install @toolbox-sdk/adk
{{< /tab >}}
{{< /tabpane >}}
1. Install other required dependencies
{{< tabpane persist=header >}}
{{< tab header="LangChain" lang="bash" >}}
npm install langchain @langchain/google-genai
{{< /tab >}}
{{< tab header="GenkitJS" lang="bash" >}}
npm install genkit @genkit-ai/googleai
{{< /tab >}}
{{< tab header="LlamaIndex" lang="bash" >}}
npm install llamaindex @llamaindex/google @llamaindex/workflow
{{< /tab >}}
{{< tab header="GoogleGenAI" lang="bash" >}}
npm install @google/genai
{{< /tab >}}
{{< tab header="ADK" lang="bash" >}}
npm install @google/adk
{{< /tab >}}
{{< /tabpane >}}
1. Create a new file named `hotelAgent.js` and copy the following code to create
an agent:
{{< tabpane persist=header >}}
{{< tab header="LangChain" lang="js" >}}
{{< include "quickstart/js/langchain/quickstart.js" >}}
{{< /tab >}}
{{< tab header="GenkitJS" lang="js" >}}
{{< include "quickstart/js/genkit/quickstart.js" >}}
{{< /tab >}}
{{< tab header="LlamaIndex" lang="js" >}}
{{< include "quickstart/js/llamaindex/quickstart.js" >}}
{{< /tab >}}
{{< tab header="GoogleGenAI" lang="js" >}}
{{< include "quickstart/js/genAI/quickstart.js" >}}
{{< /tab >}}
{{< tab header="ADK" lang="js" >}}
{{< include "quickstart/js/adk/quickstart.js" >}}
{{< /tab >}}
{{< /tabpane >}}
1. Run your agent, and observe the results:
```sh
node hotelAgent.js
```
{{< notice info >}}
For more information, visit the [JS SDK
repo](https://github.com/googleapis/mcp-toolbox-sdk-js).
{{< /notice >}}
========================================================================
## Go Quickstart (Local)
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Build with MCP Toolbox > Go Quickstart (Local)
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/build-with-mcp-toolbox/local_quickstart_go/
**Description:** How to get started running MCP Toolbox locally with [Go](https://github.com/googleapis/mcp-toolbox-sdk-go), PostgreSQL, and orchestration frameworks such as [LangChain Go](https://tmc.github.io/langchaingo/docs/), [GenkitGo](https://genkit.dev/go/docs/get-started-go/), [Go GenAI](https://github.com/googleapis/go-genai) and [OpenAI Go](https://github.com/openai/openai-go).
## Before you begin
This guide assumes you have already done the following:
1. Installed [Go (v1.24.2 or higher)].
1. Installed [PostgreSQL 16+ and the `psql` client][install-postgres].
[Go (v1.24.2 or higher)]: https://go.dev/doc/install
[install-postgres]: https://www.postgresql.org/download/
### Cloud Setup (Optional)
{{< regionInclude "quickstart/shared/cloud_setup.md" "cloud_setup" >}}
## Step 1: Set up your database
{{< regionInclude "quickstart/shared/database_setup.md" "database_setup" >}}
## Step 2: Install and configure MCP Toolbox
{{< regionInclude "quickstart/shared/configure_toolbox.md" "configure_toolbox" >}}
## Step 3: Connect your agent to MCP Toolbox
In this section, we will write and run an agent that will load the Tools
from MCP Toolbox.
1. Initialize a go module:
```bash
go mod init main
```
1. In a new terminal, install the Go SDK Module:
{{< notice warning >}}
Breaking Change Notice: As of version `0.6.0`, this SDK has transitioned to a multi-module structure.
* For new versions (`v0.6.0`+): You must import specific modules (e.g., `go get github.com/googleapis/mcp-toolbox-sdk-go/core`).
* For older versions (`v0.5.1` and below): The SDK remains a single-module library (`go get github.com/googleapis/mcp-toolbox-sdk-go`).
* Please update your imports and `go.mod` accordingly when upgrading.
{{< /notice >}}
{{< tabpane persist=header >}}
{{< tab header="LangChain Go" lang="bash" >}}
go get github.com/googleapis/mcp-toolbox-sdk-go/core
{{< /tab >}}
{{< tab header="Genkit Go" lang="bash" >}}
go get github.com/googleapis/mcp-toolbox-sdk-go/core
go get github.com/googleapis/mcp-toolbox-sdk-go/tbgenkit
{{< /tab >}}
{{< tab header="Go GenAI" lang="bash" >}}
go get github.com/googleapis/mcp-toolbox-sdk-go/core
{{< /tab >}}
{{< tab header="OpenAI Go" lang="bash" >}}
go get github.com/googleapis/mcp-toolbox-sdk-go/core
{{< /tab >}}
{{< tab header="ADK Go" lang="bash" >}}
go get github.com/googleapis/mcp-toolbox-sdk-go/core
go get github.com/googleapis/mcp-toolbox-sdk-go/tbadk
{{< /tab >}}
{{< /tabpane >}}
2. Create a new file named `hotelagent.go` and copy the following code to create
an agent:
{{< tabpane persist=header >}}
{{< tab header="LangChain Go" lang="go" >}}
{{< include "quickstart/go/langchain/quickstart.go" >}}
{{< /tab >}}
{{< tab header="Genkit Go" lang="go" >}}
{{< include "quickstart/go/genkit/quickstart.go" >}}
{{< /tab >}}
{{< tab header="Go GenAI" lang="go" >}}
{{< include "quickstart/go/genAI/quickstart.go" >}}
{{< /tab >}}
{{< tab header="OpenAI Go" lang="go" >}}
{{< include "quickstart/go/openAI/quickstart.go" >}}
{{< /tab >}}
{{< tab header="ADK Go" lang="go" >}}
{{< include "quickstart/go/adkgo/quickstart.go" >}}
{{< /tab >}}
{{< /tabpane >}}
1. Ensure all dependencies are installed:
```sh
go mod tidy
```
2. Run your agent, and observe the results:
```sh
go run hotelagent.go
```
{{< notice info >}}
For more information, visit the [Go SDK
repo](https://github.com/googleapis/mcp-toolbox-sdk-go).
{{< /notice >}}
========================================================================
## Deploy ADK Agent and MCP Toolbox
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Build with MCP Toolbox > Deploy ADK Agent and MCP Toolbox
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/build-with-mcp-toolbox/deploy_adk_agent/
**Description:** How to deploy your ADK Agent to Vertex AI Agent Engine and connect it to an MCP Toolbox deployed on Cloud Run.
## Before you begin
This guide assumes you have already done the following:
1. Completed the [Python Quickstart
(Local)](../build-with-mcp-toolbox/local_quickstart.md) and have a working ADK
agent running locally.
2. Installed the [Google Cloud CLI](https://cloud.google.com/sdk/docs/install).
3. A Google Cloud project with billing enabled.
## Step 1: Deploy MCP Toolbox to Cloud Run
Before deploying your agent, your MCP Toolbox server needs to be accessible from
the cloud. We will deploy MCP Toolbox to Cloud Run.
Follow the [Deploy to Cloud Run](../user-guide/deploy-to/cloud-run/_index.md) guide to deploy your MCP
Toolbox instance.
{{% alert title="Important" %}}
After deployment, note down the Service URL of your MCP Toolbox Cloud Run
service. You will need this to configure your agent.
{{% /alert %}}
## Step 2: Prepare your Agent for Deployment
We will use the `agent-starter-pack` tool to enhance your local agent project
with the necessary configuration for deployment to Vertex AI Agent Engine.
1. Open a terminal and navigate to the **parent directory** of your agent
project (the directory containing the `my_agent` folder).
2. Run the following command to enhance your project:
```bash
uvx agent-starter-pack enhance --adk -d agent_engine
```
3. Follow the interactive prompts to configure your deployment settings. This
process will generate deployment configuration files (like a `Makefile` and
`Dockerfile`) in your project directory.
4. Add `google-adk[toolbox]` as a dependency to the new project:
```bash
uv add google-adk[toolbox]
```
## Step 3: Configure Google Cloud Authentication
Ensure your local environment is authenticated with Google Cloud to perform the
deployment.
1. Login with Application Default Credentials (ADC):
```bash
gcloud auth application-default login
```
2. Set your active project:
```bash
gcloud config set project PROJECT_ID
```
## Step 4: Connect Agent to Deployed MCP Toolbox
You need to update your agent's code to connect to the Cloud Run URL of your MCP
Toolbox instead of the local address.
1. Recall that you can find the Cloud Run deployment URL of the MCP Toolbox
server using the following command:
```bash
gcloud run services describe toolbox --format 'value(status.url)'
```
2. Open your agent file (`my_agent/agent.py`).
3. Update the `ToolboxToolset` initialization to point to your Cloud Run service URL. Replace the existing initialization code with the following:
{{% alert color="info" title="Note" %}}
Since Cloud Run services are secured by default, you also need to provide a workload identity.
{{% /alert %}}
```python
from google.adk import Agent
from google.adk.apps import App
from google.adk.tools.toolbox_toolset import ToolboxToolset
from toolbox_adk import CredentialStrategy
# TODO(developer): Replace with your Toolbox Cloud Run Service URL
TOOLBOX_URL = "https://your-toolbox-service-xyz.a.run.app"
# Initialize the toolset with Workload Identity (generates ID token for the URL)
toolset = ToolboxToolset(
server_url=TOOLBOX_URL,
credentials=CredentialStrategy.workload_identity(target_audience=TOOLBOX_URL)
)
root_agent = Agent(
name='root_agent',
model='gemini-2.5-flash',
instruction="You are a helpful AI assistant designed to provide accurate and useful information.",
tools=[toolset],
)
app = App(root_agent=root_agent, name="my_agent")
```
{{% alert title="Important" %}}
Ensure that the `name` parameter in the `App` initialization matches the name of
your agent's parent directory (e.g., `my_agent`).
```python
...
app = App(root_agent=root_agent, name="my_agent")
```
{{% /alert %}}
## Step 5: Deploy to Agent Engine
Run the deployment command:
```bash
make deploy
```
This command will build your agent's container image and deploy it to Vertex AI.
## Step 6: Test your Deployment
Once the deployment command (`make deploy`) completes, it will output the URL
for the Agent Engine Playground. You can click on this URL to open the
Playground in your browser and start chatting with your agent to test the tools.
For additional test scenarios, refer to the [Test deployed
agent](https://google.github.io/adk-docs/deploy/agent-engine/#test-deployment)
section in the ADK documentation.
========================================================================
## Quickstart (MCP)
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Build with MCP Toolbox > Quickstart (MCP)
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/build-with-mcp-toolbox/mcp_quickstart/
**Description:** How to get started running Toolbox locally with MCP Inspector.
## Overview
[Model Context Protocol](https://modelcontextprotocol.io) is an open protocol
that standardizes how applications provide context to LLMs. Check out this page
on how to [connect to Toolbox via MCP](../../user-guide/connect-to/mcp-client/_index.md).
## Step 1: Set up your database
In this section, we will create a database, insert some data that needs to be
accessed by our agent, and create a database user for Toolbox to connect with.
1. Connect to postgres using the `psql` command:
```bash
psql -h 127.0.0.1 -U postgres
```
Here, `postgres` denotes the default postgres superuser.
1. Create a new database and a new user:
{{< notice tip >}}
For a real application, it's best to follow the principle of least privilege
and only grant the privileges your application needs.
{{< /notice >}}
```sql
CREATE USER toolbox_user WITH PASSWORD 'my-password';
CREATE DATABASE toolbox_db;
GRANT ALL PRIVILEGES ON DATABASE toolbox_db TO toolbox_user;
ALTER DATABASE toolbox_db OWNER TO toolbox_user;
```
1. End the database session:
```bash
\q
```
1. Connect to your database with your new user:
```bash
psql -h 127.0.0.1 -U toolbox_user -d toolbox_db
```
1. Create a table using the following command:
```sql
CREATE TABLE hotels(
id INTEGER NOT NULL PRIMARY KEY,
name VARCHAR NOT NULL,
location VARCHAR NOT NULL,
price_tier VARCHAR NOT NULL,
checkin_date DATE NOT NULL,
checkout_date DATE NOT NULL,
booked BIT NOT NULL
);
```
1. Insert data into the table.
```sql
INSERT INTO hotels(id, name, location, price_tier, checkin_date, checkout_date, booked)
VALUES
(1, 'Hilton Basel', 'Basel', 'Luxury', '2024-04-22', '2024-04-20', B'0'),
(2, 'Marriott Zurich', 'Zurich', 'Upscale', '2024-04-14', '2024-04-21', B'0'),
(3, 'Hyatt Regency Basel', 'Basel', 'Upper Upscale', '2024-04-02', '2024-04-20', B'0'),
(4, 'Radisson Blu Lucerne', 'Lucerne', 'Midscale', '2024-04-24', '2024-04-05', B'0'),
(5, 'Best Western Bern', 'Bern', 'Upper Midscale', '2024-04-23', '2024-04-01', B'0'),
(6, 'InterContinental Geneva', 'Geneva', 'Luxury', '2024-04-23', '2024-04-28', B'0'),
(7, 'Sheraton Zurich', 'Zurich', 'Upper Upscale', '2024-04-27', '2024-04-02', B'0'),
(8, 'Holiday Inn Basel', 'Basel', 'Upper Midscale', '2024-04-24', '2024-04-09', B'0'),
(9, 'Courtyard Zurich', 'Zurich', 'Upscale', '2024-04-03', '2024-04-13', B'0'),
(10, 'Comfort Inn Bern', 'Bern', 'Midscale', '2024-04-04', '2024-04-16', B'0');
```
1. End the database session:
```bash
\q
```
## Step 2: Install and configure Toolbox
In this section, we will download Toolbox, configure our tools in a
`tools.yaml`, and then run the Toolbox server.
1. Download the latest version of Toolbox as a binary:
{{< notice tip >}}
Select the
[correct binary](https://github.com/googleapis/genai-toolbox/releases)
corresponding to your OS and CPU architecture.
{{< /notice >}}
```bash
export OS="linux/amd64" # one of linux/amd64, darwin/arm64, darwin/amd64, or windows/amd64
curl -O https://storage.googleapis.com/genai-toolbox/v0.28.0/$OS/toolbox
```
1. Make the binary executable:
```bash
chmod +x toolbox
```
1. Write the following into a `tools.yaml` file. Be sure to update any fields
such as `user`, `password`, or `database` that you may have customized in the
previous step.
{{< notice tip >}}
In practice, use environment variable replacement with the format `${ENV_NAME}`
instead of hardcoding your secrets into the configuration file.
{{< /notice >}}
```yaml
kind: sources
name: my-pg-source
type: postgres
host: 127.0.0.1
port: 5432
database: toolbox_db
user: toolbox_user
password: my-password
---
kind: tools
name: search-hotels-by-name
type: postgres-sql
source: my-pg-source
description: Search for hotels based on name.
parameters:
- name: name
type: string
description: The name of the hotel.
statement: SELECT * FROM hotels WHERE name ILIKE '%' || $1 || '%';
---
kind: tools
name: search-hotels-by-location
type: postgres-sql
source: my-pg-source
description: Search for hotels based on location.
parameters:
- name: location
type: string
description: The location of the hotel.
statement: SELECT * FROM hotels WHERE location ILIKE '%' || $1 || '%';
---
kind: tools
name: book-hotel
type: postgres-sql
source: my-pg-source
description: >-
Book a hotel by its ID. Returns NULL if the hotel is successfully booked; raises an error if not.
parameters:
- name: hotel_id
type: string
description: The ID of the hotel to book.
statement: UPDATE hotels SET booked = B'1' WHERE id = $1;
---
kind: tools
name: update-hotel
type: postgres-sql
source: my-pg-source
description: >-
Update a hotel's check-in and check-out dates by its ID. Returns a message
indicating whether the hotel was successfully updated or not.
parameters:
- name: hotel_id
type: string
description: The ID of the hotel to update.
- name: checkin_date
type: string
description: The new check-in date of the hotel.
- name: checkout_date
type: string
description: The new check-out date of the hotel.
statement: >-
UPDATE hotels SET checkin_date = CAST($2 as date), checkout_date = CAST($3
as date) WHERE id = $1;
---
kind: tools
name: cancel-hotel
type: postgres-sql
source: my-pg-source
description: Cancel a hotel by its ID.
parameters:
- name: hotel_id
type: string
description: The ID of the hotel to cancel.
statement: UPDATE hotels SET booked = B'0' WHERE id = $1;
---
kind: toolsets
name: my-toolset
tools:
- search-hotels-by-name
- search-hotels-by-location
- book-hotel
- update-hotel
- cancel-hotel
```
For more info on tools, check out the
[Tools](../../user-guide/configuration/tools/_index.md) section.
1. Run the Toolbox server, pointing to the `tools.yaml` file created earlier:
```bash
./toolbox --tools-file "tools.yaml"
```
## Step 3: Connect to MCP Inspector
1. Run the MCP Inspector:
```bash
npx @modelcontextprotocol/inspector
```
1. Type `y` when it asks to install the inspector package.
1. It should show the following when the MCP Inspector is up and running (please
   take note of the session token):
```bash
Starting MCP inspector...
⚙️ Proxy server listening on localhost:6277
🔑 Session token: <session-token>
Use this token to authenticate requests or set DANGEROUSLY_OMIT_AUTH=true to disable auth
🚀 MCP Inspector is up and running at:
http://localhost:6274/?MCP_PROXY_AUTH_TOKEN=<session-token>
```
1. Open the above link in your browser.
1. For `Transport Type`, select `Streamable HTTP`.
1. For `URL`, type in `http://127.0.0.1:5000/mcp`.
1. For `Configuration` -> `Proxy Session Token`, make sure the session token from the previous step is present.
1. Click Connect.

1. Select `List Tools`, you will see a list of tools configured in `tools.yaml`.

1. Test out your tools here!
========================================================================
## Pre- and Post- Processing
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Build with MCP Toolbox > Pre- and Post- Processing
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/build-with-mcp-toolbox/pre-post-processing/
**Description:** Intercept and modify interactions between the agent and its tools either before or after a tool is executed.
Pre- and post- processing allow developers to intercept and modify interactions between the agent and its tools or the user.
{{< notice note >}}
These capabilities are typically features of **orchestration frameworks** (like LangChain, LangGraph, or Agent Builder) rather than the Toolbox SDK itself. However, Toolbox tools are designed to fully leverage these framework capabilities to support robust, secure, and compliant agent architectures.
{{< /notice >}}
## Types of Processing
### Pre-processing
Pre-processing occurs before a tool is executed or an agent processes a message. Key types include:
- **Input Sanitization & Redaction**: Detecting and masking sensitive information (like PII) in user queries or tool arguments to prevent it from being logged or sent to unauthorized systems.
- **Business Logic Validation**: Verifying that the proposed action complies with business rules (e.g., ensuring a requested hotel stay does not exceed 14 days, or checking if a user has sufficient permission).
- **Security Guardrails**: Analyzing inputs for potential prompt injection attacks or malicious payloads.
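The business-logic validation pattern above can be sketched as a small, framework-agnostic hook. This is a minimal illustration, not a Toolbox API: the hook name, the argument-dict shape, and the 14-day rule are all assumptions chosen to match the hotel example used elsewhere in this guide.

```python
from datetime import date

MAX_STAY_DAYS = 14  # illustrative business rule, not a Toolbox setting


def validate_booking_args(args: dict) -> dict:
    """Pre-processing hook: reject over-long stays before the tool call
    ever reaches the database. Expects ISO-formatted date strings in the
    tool arguments (e.g. as produced by the `update-hotel` tool schema)."""
    checkin = date.fromisoformat(args["checkin_date"])
    checkout = date.fromisoformat(args["checkout_date"])
    if (checkout - checkin).days > MAX_STAY_DAYS:
        raise ValueError(f"Error: Maximum stay duration is {MAX_STAY_DAYS} days.")
    return args
```

In ADK this logic would typically live in a `before_tool_callback`; in LangChain, in tool-wrapping middleware (see the framework-specific samples below).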
### Post-processing
Post-processing occurs after a tool has executed or the model has generated a response. Key types include:
- **Response Enrichment**: Injecting additional data into the tool output that wasn't part of the raw API response (e.g., calculating loyalty points earned based on the booking value).
- **Output Formatting**: Transforming raw data (like JSON or XML) into a more human-readable or model-friendly format to improve the agent's understanding.
- **Compliance Auditing**: Logging the final outcome of transactions, including the original request and the result, to a secure audit trail.
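The response-enrichment pattern can be sketched the same way. The `booking_value` field and the points formula are hypothetical stand-ins for whatever your tool actually returns; the point is that the hook injects data the raw tool output never contained.

```python
def enrich_booking_result(result: dict) -> dict:
    """Post-processing hook: derive loyalty points from the booking value
    (a field the raw tool response would not contain) and attach them to
    the result before it reaches the LLM."""
    points = int(result.get("booking_value", 0)) // 10  # 1 point per 10 units
    return {**result, "loyalty_points": points}
```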
## Processing Scopes
While processing logic can be applied at various levels (Agent, Model, Tool), this guide primarily focuses on **Tool Level** processing, which is most relevant for granular control over tool execution.
### Tool Level (Primary Focus)
Wraps individual tool executions. This is best for logic specific to a single tool or a set of tools.
- **Scope**: Intercepts the raw inputs (arguments) to a tool and its outputs.
- **Use Cases**: Argument validation, output formatting, specific privacy rules for sensitive tools.
### Other Levels
It is helpful to understand how tool-level processing differs from other scopes:
- **Model Level**: Intercepts individual calls to the LLM (prompts and responses). Unlike tool-level, this applies globally to all text sent/received, making it better for global PII redaction or token tracking.
- **Agent Level**: Wraps the high-level execution loop (e.g., a "turn" in the conversation). Unlike tool-level, this envelopes the entire turn (user input to final response), making it suitable for session management or end-to-end auditing.
## Best Practices
### Security & Guardrails
- **Principle of Least Privilege**: Ensure that tools run with the minimum necessary permissions. Middleware is an excellent place to enforce "read-only" modes or verify user identity before executing sensitive actions.
- **Input Sanitization**: Actively strip potential PII (like credit card numbers or raw emails) from tool arguments before logging them.
- **Prompt Injection Defense**: Use pre-processing hooks to scan user inputs for known jailbreak patterns or malicious directives before they reach the model or tools.
### Observability & Debugging
- **Structured Logging**: Instead of simple print statements, use structured JSON logging with correlation IDs. This allows you to trace a single user request through multiple agent turns and tool calls.
- **Logging for Testability**: LLM responses are non-deterministic and may summarize away key details.
- **Pattern**: Add explicit logging markers in your post-processing middleware (e.g., `logger.info("ACTION_SUCCESS: ")`).
- **Benefit**: Your integration tests can grep logs for these stable markers to verify tool success, rather than painfully parsing variable natural language responses.
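The logging-for-testability pattern above can be sketched as follows. The marker names and the correlation-ID field are illustrative conventions, not anything Toolbox or the orchestration frameworks prescribe:

```python
import json
import logging

logger = logging.getLogger("agent.audit")


def log_tool_outcome(correlation_id: str, tool_name: str, ok: bool) -> str:
    """Post-processing hook: emit one structured JSON line per tool call
    with a stable marker, so integration tests can grep for it instead of
    parsing free-form LLM prose."""
    record = {
        "marker": "ACTION_SUCCESS" if ok else "ACTION_FAILURE",
        "correlation_id": correlation_id,
        "tool": tool_name,
    }
    line = json.dumps(record)
    logger.info(line)
    return line
```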
### Performance & Cost Optimization
- **Token Economy**: Tools often return verbose JSON. Use post-processing to strip unnecessary fields or summarize large datasets *before* returning the result to the LLM's context window. This saves tokens and reduces latency.
- **Caching**: For read-heavy tools (like "search_knowledge_base"), implement caching middleware to return previous results for identical queries, saving both time and API costs.
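A minimal caching sketch for a read-heavy tool, using the standard library. The `search_knowledge_base` function is a stand-in for a real tool call; a production setup would also need an eviction or TTL policy, which `lru_cache` alone does not provide:

```python
import functools


@functools.lru_cache(maxsize=128)
def search_knowledge_base(query: str) -> str:
    """Stand-in for an expensive tool invocation; identical queries are
    served from the in-memory cache instead of hitting the backend again."""
    return f"results for {query!r}"
```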
### Error Handling
- **Graceful Degradation**: If a tool fails (e.g., API timeout), catch the exception in middleware and return a structured error message to the LLM (e.g., `Error: Database timeout, please try again`).
- **Self-Correction**: Well-formatted error messages often allow the LLM to understand *why* a call failed and retry it with corrected parameters automatically.
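The graceful-degradation pattern can be sketched as a wrapper around any tool function. Catching only `TimeoutError` here mirrors the timeout example above; in practice you would decide per tool which exceptions to convert into structured messages and which to let propagate:

```python
def with_graceful_degradation(tool_fn):
    """Middleware: convert an infrastructure failure into a structured
    error string the LLM can reason about and retry on, instead of
    crashing the agent loop."""
    def wrapper(*args, **kwargs):
        try:
            return tool_fn(*args, **kwargs)
        except TimeoutError:
            return "Error: Database timeout, please try again"
    return wrapper
```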
## Samples
========================================================================
## Python
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Build with MCP Toolbox > Pre- and Post- Processing > Python
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/build-with-mcp-toolbox/pre-post-processing/python/
**Description:** How to add pre- and post- processing to your Agents using Python.
## Prerequisites
This tutorial assumes that you have set up MCP Toolbox with a basic agent as described in the [local quickstart](../../build-with-mcp-toolbox/local_quickstart.md).
This guide demonstrates how to implement these patterns in your Toolbox applications.
## Implementation
{{< tabpane persist=header >}}
{{% tab header="ADK" text=true %}}
The following example demonstrates how to use `ToolboxToolset` with ADK's pre and post processing hooks to implement pre and post processing for tool calls.
```py
{{< include "python/adk/agent.py" >}}
```
You can also add model-level (`before_model_callback`, `after_model_callback`) and agent-level (`before_agent_callback`, `after_agent_callback`) hooks to intercept messages at different stages of the execution loop.
For more information, see the [ADK Callbacks documentation](https://google.github.io/adk-docs/callbacks/types-of-callbacks/).
{{% /tab %}}
{{% tab header="Langchain" text=true %}}
The following example demonstrates how to use `ToolboxClient` with LangChain's middleware to implement pre- and post- processing for tool calls.
{{< include "python/langchain/agent.py" "python" >}}
You can also add model-level (`wrap_model`) and agent-level (`before_agent`, `after_agent`) hooks to intercept messages at different stages of the execution loop. See the [LangChain Middleware documentation](https://docs.langchain.com/oss/python/langchain/middleware/custom#wrap-style-hooks) for details on these additional hook types.
{{% /tab %}}
{{< /tabpane >}}
## Results
The output should look similar to the following.
{{< notice note >}}
The exact responses may vary due to the non-deterministic nature of LLMs and differences between orchestration frameworks.
{{< /notice >}}
```
AI: Booking Confirmed! You earned 500 Loyalty Points with this stay.
AI: Error: Maximum stay duration is 14 days.
```
========================================================================
## Javascript
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Build with MCP Toolbox > Pre- and Post- Processing > Javascript
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/build-with-mcp-toolbox/pre-post-processing/js/
**Description:** How to add pre- and post- processing to your Agents using JS.
## Prerequisites
This tutorial assumes that you have set up MCP Toolbox with a basic agent as described in the [local quickstart](../../build-with-mcp-toolbox/local_quickstart_js.md).
This guide demonstrates how to implement these patterns in your Toolbox applications.
## Implementation
{{< tabpane persist=header >}}
{{% tab header="ADK" text=true %}}
The following example demonstrates how to use the `beforeToolCallback` and `afterToolCallback` hooks in the ADK `LlmAgent` to implement pre and post processing logic.
{{< include "js/adk/agent.js" "js" >}}
You can also add model-level (`beforeModelCallback`, `afterModelCallback`) and agent-level (`beforeAgentCallback`, `afterAgentCallback`) hooks to intercept messages at different stages of the execution loop.
For more information, see the [ADK Callbacks documentation](https://google.github.io/adk-docs/callbacks/types-of-callbacks/).
{{% /tab %}}
{{% tab header="Langchain" text=true %}}
The following example demonstrates how to use `ToolboxClient` with LangChain's middleware to implement pre and post processing for tool calls.
{{< include "js/langchain/agent.js" "js" >}}
You can also use the `wrapModelCall` hook to intercept messages before and after model calls.
You can also use [node-style hooks](https://docs.langchain.com/oss/javascript/langchain/middleware/custom#node-style-hooks) to intercept messages at the agent and model level.
See the [LangChain Middleware documentation](https://docs.langchain.com/oss/javascript/langchain/middleware/custom#tool-call-monitoring) for details on these additional hook types.
{{% /tab %}}
{{< /tabpane >}}
## Results
The output should look similar to the following.
{{< notice note >}}
The exact responses may vary due to the non-deterministic nature of LLMs and differences between orchestration frameworks.
{{< /notice >}}
```
AI: Booking Confirmed! You earned 500 Loyalty Points with this stay.
AI: Error: Maximum stay duration is 14 days.
```
========================================================================
## Go
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Build with MCP Toolbox > Pre- and Post- Processing > Go
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/build-with-mcp-toolbox/pre-post-processing/go/
**Description:** How to add pre- and post- processing to your Agents using Go.
## Prerequisites
This tutorial assumes that you have set up MCP Toolbox with a basic agent as described in the [local quickstart](../../build-with-mcp-toolbox/local_quickstart_go.md).
This guide demonstrates how to implement these patterns in your Toolbox applications.
## Implementation
{{< tabpane persist=header >}}
{{% tab header="ADK" text=true %}}
The following example demonstrates how to use the `beforeToolCallback` and `afterToolCallback` hooks in the ADK `LlmAgent` to implement pre and post processing logic.
{{< include "go/adk/agent.go" "go" >}}
You can also add model-level (`beforeModelCallback`, `afterModelCallback`) and agent-level (`beforeAgentCallback`, `afterAgentCallback`) hooks to intercept messages at different stages of the execution loop.
For more information, see the [ADK Callbacks documentation](https://google.github.io/adk-docs/callbacks/types-of-callbacks/).
{{% /tab %}}
{{< /tabpane >}}
## Results
The output should look similar to the following.
{{< notice note >}}
The exact responses may vary due to the non-deterministic nature of LLMs and differences between orchestration frameworks.
{{< /notice >}}
```
AI: Booking Confirmed! You earned 500 Loyalty Points with this stay.
AI: Error: Maximum stay duration is 14 days.
```
========================================================================
## Prompts using Gemini CLI
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Build with MCP Toolbox > Prompts using Gemini CLI
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/build-with-mcp-toolbox/prompts_quickstart_gemini_cli/
**Description:** How to get started using Toolbox prompts locally with PostgreSQL and [Gemini CLI](https://pypi.org/project/gemini-cli/).
## Before you begin
This guide assumes you have already done the following:
1. Installed [PostgreSQL 16+ and the `psql` client][install-postgres].
[install-postgres]: https://www.postgresql.org/download/
## Step 1: Set up your database
In this section, we will create a database, insert some data that needs to be
accessed by our agent, and create a database user for Toolbox to connect with.
1. Connect to postgres using the `psql` command:
```bash
psql -h 127.0.0.1 -U postgres
```
Here, `postgres` denotes the default postgres superuser.
{{< notice info >}}
#### **Having trouble connecting?**
* **Password Prompt:** If you are prompted for a password for the `postgres`
user and do not know it (or a blank password doesn't work), your PostgreSQL
installation might require a password or a different authentication method.
* **`FATAL: role "postgres" does not exist`:** This error means the default
`postgres` superuser role isn't available under that name on your system.
* **`Connection refused`:** Ensure your PostgreSQL server is actually running.
You can typically check with `sudo systemctl status postgresql` and start it
with `sudo systemctl start postgresql` on Linux systems.
#### **Common Solution**
For password issues or if the `postgres` role seems inaccessible directly, try
switching to the `postgres` operating system user first. This user often has
permission to connect without a password for local connections (this is called
peer authentication).
```bash
sudo -i -u postgres
psql -h 127.0.0.1
```
Once you are in the `psql` shell using this method, you can proceed with the
database creation steps below. Afterwards, type `\q` to exit `psql`, and then
`exit` to return to your normal user shell.
If desired, once connected to `psql` as the `postgres` OS user, you can set a
password for the `postgres` *database* user using: `ALTER USER postgres WITH
PASSWORD 'your_chosen_password';`. This would allow direct connection with `-U
postgres` and a password next time.
{{< /notice >}}
1. Create a new database and a new user:
{{< notice tip >}}
For a real application, it's best to follow the principle of least privilege
and only grant the privileges your application needs.
{{< /notice >}}
```sql
CREATE USER toolbox_user WITH PASSWORD 'my-password';
CREATE DATABASE toolbox_db;
GRANT ALL PRIVILEGES ON DATABASE toolbox_db TO toolbox_user;
ALTER DATABASE toolbox_db OWNER TO toolbox_user;
```
1. End the database session:
```bash
\q
```
(If you used `sudo -i -u postgres` and then `psql`, remember you might also
need to type `exit` after `\q` to leave the `postgres` user's shell
session.)
1. Connect to your database with your new user:
```bash
psql -h 127.0.0.1 -U toolbox_user -d toolbox_db
```
1. Create the required tables using the following commands:
```sql
CREATE TABLE users (
id SERIAL PRIMARY KEY,
username VARCHAR(50) NOT NULL,
email VARCHAR(100) UNIQUE NOT NULL,
created_at TIMESTAMPTZ DEFAULT NOW()
);
CREATE TABLE restaurants (
id SERIAL PRIMARY KEY,
name VARCHAR(100) NOT NULL,
location VARCHAR(100)
);
CREATE TABLE reviews (
id SERIAL PRIMARY KEY,
user_id INT REFERENCES users(id),
restaurant_id INT REFERENCES restaurants(id),
rating INT CHECK (rating >= 1 AND rating <= 5),
review_text TEXT,
is_published BOOLEAN DEFAULT false,
moderation_status VARCHAR(50) DEFAULT 'pending_manual_review',
created_at TIMESTAMPTZ DEFAULT NOW()
);
```
1. Insert dummy data into the tables.
```sql
INSERT INTO users (id, username, email) VALUES
(123, 'jane_d', 'jane.d@example.com'),
(124, 'john_s', 'john.s@example.com'),
(125, 'sam_b', 'sam.b@example.com');
INSERT INTO restaurants (id, name, location) VALUES
(455, 'Pizza Palace', '123 Main St'),
(456, 'The Corner Bistro', '456 Oak Ave'),
(457, 'Sushi Spot', '789 Pine Ln');
INSERT INTO reviews (user_id, restaurant_id, rating, review_text, is_published, moderation_status) VALUES
(124, 455, 5, 'Best pizza in town! The crust was perfect.', true, 'approved'),
(125, 457, 4, 'Great sushi, very fresh. A bit pricey but worth it.', true, 'approved'),
(123, 457, 5, 'Absolutely loved the dragon roll. Will be back!', true, 'approved'),
(123, 456, 4, 'The atmosphere was lovely and the food was great. My photo upload might have been weird though.', false, 'pending_manual_review'),
(125, 456, 1, 'This review contains inappropriate language.', false, 'rejected');
```
1. End the database session:
```bash
\q
```
## Step 2: Configure Toolbox
Create a file named `tools.yaml`. This file defines the database connection, the
SQL tools available, and the prompts the agents will use.
```yaml
kind: sources
name: my-foodiefind-db
type: postgres
host: 127.0.0.1
port: 5432
database: toolbox_db
user: toolbox_user
password: my-password
---
kind: tools
name: find_user_by_email
type: postgres-sql
source: my-foodiefind-db
description: Find a user's ID by their email address.
parameters:
- name: email
type: string
description: The email address of the user to find.
statement: SELECT id FROM users WHERE email = $1;
---
kind: tools
name: find_restaurant_by_name
type: postgres-sql
source: my-foodiefind-db
description: Find a restaurant's ID by its exact name.
parameters:
- name: name
type: string
description: The name of the restaurant to find.
statement: SELECT id FROM restaurants WHERE name = $1;
---
kind: tools
name: find_review_by_user_and_restaurant
type: postgres-sql
source: my-foodiefind-db
description: Find the full record for a specific review using the user's ID and the restaurant's ID.
parameters:
- name: user_id
type: integer
description: The numerical ID of the user.
- name: restaurant_id
type: integer
description: The numerical ID of the restaurant.
statement: SELECT * FROM reviews WHERE user_id = $1 AND restaurant_id = $2;
---
kind: prompts
name: investigate_missing_review
description: "Investigates a user's missing review by finding the user, restaurant, and the review itself, then analyzing its status."
arguments:
- name: "user_email"
description: "The email of the user who wrote the review."
- name: "restaurant_name"
description: "The name of the restaurant being reviewed."
messages:
- content: >-
**Goal:** Find the review written by the user with email '{{.user_email}}' for the restaurant named '{{.restaurant_name}}' and understand its status.
**Workflow:**
1. Use the `find_user_by_email` tool with the email '{{.user_email}}' to get the `user_id`.
2. Use the `find_restaurant_by_name` tool with the name '{{.restaurant_name}}' to get the `restaurant_id`.
3. Use the `find_review_by_user_and_restaurant` tool with the `user_id` and `restaurant_id` you just found.
4. Analyze the results from the final tool call. Examine the `is_published` and `moderation_status` fields and explain the review's status to the user in a clear, human-readable sentence.
```
## Step 3: Connect to Gemini CLI
Configure the Gemini CLI to talk to your local Toolbox MCP server.
1. Open or create your Gemini settings file: `~/.gemini/settings.json`.
2. Add the following configuration to the file:
```json
{
"mcpServers": {
"MCPToolbox": {
"httpUrl": "http://localhost:5000/mcp"
}
},
"mcp": {
"allowed": ["MCPToolbox"]
}
}
```
3. Start Gemini CLI using
```sh
gemini
```
If Gemini CLI is already running, use `/mcp refresh` to reload the MCP servers.
4. Use Gemini CLI slash commands to run your prompt:
```sh
/investigate_missing_review --user_email="jane.d@example.com" --restaurant_name="The Corner Bistro"
```
========================================================================
## AlloyDB
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Build with MCP Toolbox > AlloyDB
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/build-with-mcp-toolbox/alloydb/
**Description:** How to get started with Toolbox using AlloyDB.
========================================================================
## Quickstart (MCP with AlloyDB)
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Build with MCP Toolbox > AlloyDB > Quickstart (MCP with AlloyDB)
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/build-with-mcp-toolbox/alloydb/mcp_quickstart/
**Description:** How to get started running Toolbox with MCP Inspector and AlloyDB as the source.
## Overview
[Model Context Protocol](https://modelcontextprotocol.io) is an open protocol
that standardizes how applications provide context to LLMs. Check out this page
on how to [connect to Toolbox via MCP](../../user-guide/connect-to/mcp-client/_index.md).
## Before you begin
This guide assumes you have already done the following:
1. [Create an AlloyDB cluster and
instance](https://cloud.google.com/alloydb/docs/cluster-create) with a
database and user.
1. Connect to the instance using [AlloyDB
Studio](https://cloud.google.com/alloydb/docs/manage-data-using-studio),
[`psql` command-line tool](https://www.postgresql.org/download/), or any
other PostgreSQL client.
1. Enable the `pgvector` and `google_ml_integration`
[extensions](https://cloud.google.com/alloydb/docs/ai). These are required
for Semantic Search and Natural Language to SQL tools. Run the following SQL
commands:
```sql
CREATE EXTENSION IF NOT EXISTS "vector";
CREATE EXTENSION IF NOT EXISTS "google_ml_integration";
CREATE EXTENSION IF NOT EXISTS alloydb_ai_nl cascade;
CREATE EXTENSION IF NOT EXISTS parameterized_views;
```
## Step 1: Set up your AlloyDB database
In this section, we will create the necessary tables and functions in your
AlloyDB instance.
1. Create tables using the following commands:
```sql
CREATE TABLE products (
product_id SERIAL PRIMARY KEY,
name VARCHAR(255) NOT NULL,
description TEXT,
price DECIMAL(10, 2) NOT NULL,
category_id INT,
embedding vector(3072) -- Vector size for model(gemini-embedding-001)
);
CREATE TABLE customers (
customer_id SERIAL PRIMARY KEY,
name VARCHAR(255) NOT NULL,
email VARCHAR(255) UNIQUE NOT NULL
);
CREATE TABLE cart (
cart_id SERIAL PRIMARY KEY,
customer_id INT UNIQUE NOT NULL,
created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (customer_id) REFERENCES customers(customer_id)
);
CREATE TABLE cart_items (
cart_item_id SERIAL PRIMARY KEY,
cart_id INT NOT NULL,
product_id INT NOT NULL,
quantity INT NOT NULL,
price DECIMAL(10, 2) NOT NULL,
FOREIGN KEY (cart_id) REFERENCES cart(cart_id),
FOREIGN KEY (product_id) REFERENCES products(product_id)
);
CREATE TABLE categories (
category_id SERIAL PRIMARY KEY,
name VARCHAR(255) NOT NULL
);
```
2. Insert sample data into the tables:
```sql
INSERT INTO categories (category_id, name) VALUES
(1, 'Flowers'),
(2, 'Vases');
INSERT INTO products (product_id, name, description, price, category_id, embedding) VALUES
(1, 'Rose', 'A beautiful red rose', 2.50, 1, embedding('gemini-embedding-001', 'A beautiful red rose')),
(2, 'Tulip', 'A colorful tulip', 1.50, 1, embedding('gemini-embedding-001', 'A colorful tulip')),
(3, 'Glass Vase', 'A transparent glass vase', 10.00, 2, embedding('gemini-embedding-001', 'A transparent glass vase')),
(4, 'Ceramic Vase', 'A handmade ceramic vase', 15.00, 2, embedding('gemini-embedding-001', 'A handmade ceramic vase'));
INSERT INTO customers (customer_id, name, email) VALUES
(1, 'John Doe', 'john.doe@example.com'),
(2, 'Jane Smith', 'jane.smith@example.com');
INSERT INTO cart (cart_id, customer_id) VALUES
(1, 1),
(2, 2);
INSERT INTO cart_items (cart_id, product_id, quantity, price) VALUES
(1, 1, 2, 2.50),
(1, 3, 1, 10.00),
(2, 2, 5, 1.50);
```
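As a quick sanity check of the sample data, the per-cart totals that `access-cart-information` will later report (quantity times price per line item) can be computed in plain Python:

```python
# Sample rows from the cart_items INSERT above: (cart_id, product_id, quantity, price)
cart_items = [
    (1, 1, 2, 2.50),   # John Doe: 2 roses
    (1, 3, 1, 10.00),  # John Doe: 1 glass vase
    (2, 2, 5, 1.50),   # Jane Smith: 5 tulips
]

def cart_total(cart_id: int) -> float:
    # Same arithmetic as the tool's (ci.quantity * ci.price) column, summed.
    return sum(qty * price for cid, _, qty, price in cart_items if cid == cart_id)

print(cart_total(1))  # John Doe's cart
print(cart_total(2))  # Jane Smith's cart
```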
## Step 2: Install Toolbox
In this section, we will download and install the Toolbox binary.
1. Download the latest version of Toolbox as a binary:
{{< notice tip >}}
Select the
[correct binary](https://github.com/googleapis/genai-toolbox/releases)
corresponding to your OS and CPU architecture.
{{< /notice >}}
```bash
export OS="linux/amd64" # one of linux/amd64, darwin/arm64, darwin/amd64, or windows/amd64
export VERSION="0.28.0"
curl -O https://storage.googleapis.com/genai-toolbox/v$VERSION/$OS/toolbox
```
1. Make the binary executable:
```bash
chmod +x toolbox
```
## Step 3: Configure the tools
Create a `tools.yaml` file and add the following content. You must replace the
placeholders with your actual AlloyDB configuration.
First, define the data source for your tools. This tells Toolbox how to connect
to your AlloyDB instance.
```yaml
kind: sources
name: alloydb-pg-source
type: alloydb-postgres
project: YOUR_PROJECT_ID
region: YOUR_REGION
cluster: YOUR_CLUSTER
instance: YOUR_INSTANCE
database: YOUR_DATABASE
user: YOUR_USER
password: YOUR_PASSWORD
```
Next, define the tools the agent can use. We will categorize them into three
types:
### 1. Structured Queries Tools
These tools execute predefined SQL statements. They are ideal for common,
structured queries like managing a shopping cart. Add the following to your
`tools.yaml` file:
```yaml
kind: tools
name: access-cart-information
type: postgres-sql
source: alloydb-pg-source
description: >-
List items in customer cart.
Use this tool to list items in a customer cart. This tool requires the cart ID.
parameters:
- name: cart_id
type: integer
description: The id of the cart.
statement: |
SELECT
p.name AS product_name,
ci.quantity,
ci.price AS item_price,
(ci.quantity * ci.price) AS total_item_price,
c.created_at AS cart_created_at,
ci.product_id AS product_id
FROM
cart_items ci JOIN cart c ON ci.cart_id = c.cart_id
JOIN products p ON ci.product_id = p.product_id
WHERE
c.cart_id = $1;
---
kind: tools
name: add-to-cart
type: postgres-sql
source: alloydb-pg-source
description: >-
Add items to customer cart using the product ID and product prices from the product list.
Use this tool to add items to a customer cart.
This tool requires the cart ID, product ID, quantity, and price.
parameters:
- name: cart_id
type: integer
description: The id of the cart.
- name: product_id
type: integer
description: The id of the product.
- name: quantity
type: integer
description: The quantity of items to add.
- name: price
type: float
description: The price of items to add.
statement: |
INSERT INTO
cart_items (cart_id, product_id, quantity, price)
VALUES($1,$2,$3,$4);
---
kind: tools
name: delete-from-cart
type: postgres-sql
source: alloydb-pg-source
description: >-
Remove products from customer cart.
Use this tool to remove products from a customer cart.
This tool requires the cart ID and product ID.
parameters:
- name: cart_id
type: integer
description: The id of the cart.
- name: product_id
type: integer
description: The id of the product.
statement: |
DELETE FROM
cart_items
WHERE
cart_id = $1 AND product_id = $2;
```
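Each `postgres-sql` tool binds its named parameters to the positional `$1`, `$2`, ... placeholders in the order they are declared under `parameters`. An illustrative sketch of that mapping (not the actual Toolbox implementation):

```python
def bind_positional(param_names: list, args: dict) -> list:
    """Return argument values ordered to match $1..$n placeholders."""
    missing = [n for n in param_names if n not in args]
    if missing:
        raise ValueError(f"missing parameters: {missing}")
    return [args[n] for n in param_names]

# add-to-cart declares: cart_id, product_id, quantity, price (in that order),
# so its named arguments bind to $1..$4 regardless of call order.
values = bind_positional(
    ["cart_id", "product_id", "quantity", "price"],
    {"price": 2.50, "cart_id": 1, "product_id": 1, "quantity": 2},
)
print(values)  # [1, 1, 2, 2.5]
```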
### 2. Semantic Search Tools
These tools use vector embeddings to find the most relevant results based on the
meaning of a user's query, rather than just keywords. Append the following tools
to the `tools` section in your `tools.yaml`:
```yaml
kind: tools
name: search-product-recommendations
type: postgres-sql
source: alloydb-pg-source
description: >-
Search for products based on user needs.
Use this tool to search for products. This tool requires the user's needs.
parameters:
- name: query
type: string
description: The product characteristics
statement: |
SELECT
product_id,
name,
description,
ROUND(CAST(price AS numeric), 2) as price
FROM
products
ORDER BY
embedding('gemini-embedding-001', $1)::vector <=> embedding
LIMIT 5;
```
### 3. Natural Language to SQL (NL2SQL) Tools
1. Create a [natural language
configuration](https://cloud.google.com/alloydb/docs/ai/use-natural-language-generate-sql-queries#create-config)
for your AlloyDB cluster.
{{< notice tip >}}Before using NL2SQL tools,
you must first install the `alloydb_ai_nl` extension and
create the [semantic
layer](https://cloud.google.com/alloydb/docs/ai/natural-language-overview)
under a configuration named `flower_shop`.
{{< /notice >}}
2. Configure your NL2SQL tool to use your configuration. These tools translate
natural language questions into SQL queries, allowing users to interact with
the database conversationally. Append the following tool to the `tools`
section:
```yaml
kind: tools
name: ask-questions-about-products
type: alloydb-ai-nl
source: alloydb-pg-source
nlConfig: flower_shop
description: >-
Ask questions related to products or brands.
Use this tool to ask questions about products or brands.
Always SELECT the IDs of objects when generating queries.
```
Finally, group the tools into a `toolset` to make them available to the model.
Add the following to the end of your `tools.yaml` file:
```yaml
kind: toolsets
name: flower_shop
tools:
- access-cart-information
- search-product-recommendations
- ask-questions-about-products
- add-to-cart
- delete-from-cart
```
For more info on tools, check out the
[Tools](../../user-guide/configuration/tools/_index.md) section.
## Step 4: Run the Toolbox server
Run the Toolbox server, pointing to the `tools.yaml` file created earlier:
```bash
./toolbox --tools-file "tools.yaml"
```
## Step 5: Connect to MCP Inspector
1. Run the MCP Inspector:
```bash
npx @modelcontextprotocol/inspector
```
1. Type `y` when it asks to install the inspector package.
1. It should show the following when the MCP Inspector is up and running (take
note of the session token):
```bash
Starting MCP inspector...
⚙️ Proxy server listening on localhost:6277
🔑 Session token:
Use this token to authenticate requests or set DANGEROUSLY_OMIT_AUTH=true to disable auth
🚀 MCP Inspector is up and running at:
http://localhost:6274/?MCP_PROXY_AUTH_TOKEN=
```
1. Open the above link in your browser.
1. For `Transport Type`, select `Streamable HTTP`.
1. For `URL`, type in `http://127.0.0.1:5000/mcp`.
1. For `Configuration` -> `Proxy Session Token`, make sure the session token
from the startup output is present.
1. Click Connect.
1. Select `List Tools`; you will see a list of the tools configured in `tools.yaml`.
1. Test out your tools here!
## What's next
- Learn more about [MCP Inspector](../../user-guide/connect-to/mcp-client/_index.md).
- Learn more about [Toolbox User Guide](../../user-guide/configuration/_index.md).
- Learn more about [Toolbox Tutorials](../../build-with-mcp-toolbox/_index.md).
========================================================================
## Getting started with alloydb-ai-nl tool
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Build with MCP Toolbox > AlloyDB > Getting started with alloydb-ai-nl tool
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/build-with-mcp-toolbox/alloydb/ai-nl/
**Description:** An end to end tutorial for building an ADK agent using the AlloyDB AI NL tool.
{{< ipynb "alloydb_ai_nl.ipynb" >}}
========================================================================
## BigQuery
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Build with MCP Toolbox > BigQuery
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/build-with-mcp-toolbox/bigquery/
**Description:** How to get started with Toolbox using BigQuery.
========================================================================
## Quickstart (Local with BigQuery)
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Build with MCP Toolbox > BigQuery > Quickstart (Local with BigQuery)
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/build-with-mcp-toolbox/bigquery/local_quickstart/
**Description:** How to get started running Toolbox locally with Python, BigQuery, and LangGraph, LlamaIndex, or ADK.
[Open in Colab](https://colab.research.google.com/github/googleapis/genai-toolbox/blob/main/docs/en/samples/bigquery/colab_quickstart_bigquery.ipynb)
## Before you begin
This guide assumes you have already done the following:
1. Installed [Python 3.10+][install-python] (including [pip][install-pip] and
your preferred virtual environment tool for managing dependencies e.g.
[venv][install-venv]).
1. Installed and configured the [Google Cloud SDK (gcloud CLI)][install-gcloud].
1. Authenticated with Google Cloud for Application Default Credentials (ADC):
```bash
gcloud auth login --update-adc
```
1. Set your default Google Cloud project (replace `YOUR_PROJECT_ID` with your
actual project ID):
```bash
gcloud config set project YOUR_PROJECT_ID
export GOOGLE_CLOUD_PROJECT=YOUR_PROJECT_ID
```
Toolbox and the client libraries will use this project for BigQuery, unless
overridden in configurations.
1. [Enabled the BigQuery API][enable-bq-api] in your Google Cloud project.
1. Installed the BigQuery client library for Python:
```bash
pip install google-cloud-bigquery
```
1. Completed setup for an LLM, such as one of the following:
{{< tabpane text=true persist=header >}}
{{% tab header="Core" lang="en" %}}
- [langchain-vertexai](https://python.langchain.com/docs/integrations/llms/google_vertex_ai_palm/#setup)
package.
- [langchain-google-genai](https://python.langchain.com/docs/integrations/chat/google_generative_ai/#setup)
package.
- [langchain-anthropic](https://python.langchain.com/docs/integrations/chat/anthropic/#setup)
package.
{{% /tab %}}
{{% tab header="LangChain" lang="en" %}}
- [langchain-vertexai](https://python.langchain.com/docs/integrations/llms/google_vertex_ai_palm/#setup)
package.
- [langchain-google-genai](https://python.langchain.com/docs/integrations/chat/google_generative_ai/#setup)
package.
- [langchain-anthropic](https://python.langchain.com/docs/integrations/chat/anthropic/#setup)
package.
{{% /tab %}}
{{% tab header="LlamaIndex" lang="en" %}}
- [llama-index-llms-google-genai](https://pypi.org/project/llama-index-llms-google-genai/)
package.
- [llama-index-llms-anthropic](https://docs.llamaindex.ai/en/stable/examples/llm/anthropic)
package.
{{% /tab %}}
{{% tab header="ADK" lang="en" %}}
- [google-adk](https://pypi.org/project/google-adk/) package.
{{% /tab %}}
{{< /tabpane >}}
[install-python]: https://wiki.python.org/moin/BeginnersGuide/Download
[install-pip]: https://pip.pypa.io/en/stable/installation/
[install-venv]:
https://packaging.python.org/en/latest/tutorials/installing-packages/#creating-virtual-environments
[install-gcloud]: https://cloud.google.com/sdk/docs/install
[enable-bq-api]:
https://cloud.google.com/bigquery/docs/quickstarts/query-public-dataset-console#before-you-begin
## Step 1: Set up your BigQuery Dataset and Table
In this section, we will create a BigQuery dataset and a table, then insert some
data that needs to be accessed by our agent. BigQuery operations are performed
against your configured Google Cloud project.
1. Create a new BigQuery dataset (replace `YOUR_DATASET_NAME` with your desired
dataset name, e.g., `toolbox_ds`, and optionally specify a location like `US`
or `EU`):
```bash
export BQ_DATASET_NAME="YOUR_DATASET_NAME" # e.g., toolbox_ds
export BQ_LOCATION="US" # e.g., US, EU, asia-northeast1
bq --location=$BQ_LOCATION mk $BQ_DATASET_NAME
```
You can also do this through the [Google Cloud
Console](https://console.cloud.google.com/bigquery).
{{< notice tip >}}
For a real application, ensure that the service account or user running Toolbox
has the necessary IAM permissions (e.g., BigQuery Data Editor, BigQuery User)
on the dataset or project. For this local quickstart with user credentials,
your own permissions will apply.
{{< /notice >}}
1. Define the `hotels` table in your new dataset so it can be used with the
`bq query` command. First, create a file named `create_hotels_table.sql` with
the following content:
```sql
CREATE TABLE IF NOT EXISTS `YOUR_PROJECT_ID.YOUR_DATASET_NAME.hotels` (
id INT64 NOT NULL,
name STRING NOT NULL,
location STRING NOT NULL,
price_tier STRING NOT NULL,
checkin_date DATE NOT NULL,
checkout_date DATE NOT NULL,
booked BOOLEAN NOT NULL
);
```
> **Note:** Replace `YOUR_PROJECT_ID` and `YOUR_DATASET_NAME` in the SQL
> with your actual project ID and dataset name.
Then run the command below to execute the SQL:
```bash
bq query --project_id=$GOOGLE_CLOUD_PROJECT --dataset_id=$BQ_DATASET_NAME --use_legacy_sql=false < create_hotels_table.sql
```
1. Next, populate the hotels table with some initial data. To do this, create a
file named `insert_hotels_data.sql` and add the following SQL INSERT
statement to it.
```sql
INSERT INTO `YOUR_PROJECT_ID.YOUR_DATASET_NAME.hotels` (id, name, location, price_tier, checkin_date, checkout_date, booked)
VALUES
(1, 'Hilton Basel', 'Basel', 'Luxury', '2024-04-20', '2024-04-22', FALSE),
(2, 'Marriott Zurich', 'Zurich', 'Upscale', '2024-04-14', '2024-04-21', FALSE),
(3, 'Hyatt Regency Basel', 'Basel', 'Upper Upscale', '2024-04-02', '2024-04-20', FALSE),
(4, 'Radisson Blu Lucerne', 'Lucerne', 'Midscale', '2024-04-05', '2024-04-24', FALSE),
(5, 'Best Western Bern', 'Bern', 'Upper Midscale', '2024-04-01', '2024-04-23', FALSE),
(6, 'InterContinental Geneva', 'Geneva', 'Luxury', '2024-04-23', '2024-04-28', FALSE),
(7, 'Sheraton Zurich', 'Zurich', 'Upper Upscale', '2024-04-02', '2024-04-27', FALSE),
(8, 'Holiday Inn Basel', 'Basel', 'Upper Midscale', '2024-04-09', '2024-04-24', FALSE),
(9, 'Courtyard Zurich', 'Zurich', 'Upscale', '2024-04-03', '2024-04-13', FALSE),
(10, 'Comfort Inn Bern', 'Bern', 'Midscale', '2024-04-04', '2024-04-16', FALSE);
```
> **Note:** Replace `YOUR_PROJECT_ID` and `YOUR_DATASET_NAME` in the SQL
> with your actual project ID and dataset name.
Then run the command below to execute the SQL:
```bash
bq query --project_id=$GOOGLE_CLOUD_PROJECT --dataset_id=$BQ_DATASET_NAME --use_legacy_sql=false < insert_hotels_data.sql
```
## Step 2: Install and configure Toolbox
In this section, we will download Toolbox, configure our tools in a `tools.yaml`
to use BigQuery, and then run the Toolbox server.
1. Download the latest version of Toolbox as a binary:
{{< notice tip >}}
Select the
[correct binary](https://github.com/googleapis/genai-toolbox/releases)
corresponding to your OS and CPU architecture.
{{< /notice >}}
```bash
export OS="linux/amd64" # one of linux/amd64, darwin/arm64, darwin/amd64, or windows/amd64
curl -O https://storage.googleapis.com/genai-toolbox/v0.28.0/$OS/toolbox
```
1. Make the binary executable:
```bash
chmod +x toolbox
```
1. Write the following into a `tools.yaml` file. You must replace the
`YOUR_PROJECT_ID` and `YOUR_DATASET_NAME` placeholders in the config with your
actual BigQuery project ID and dataset name. The `location` field is optional;
if not specified, it defaults to 'us'. The table name `hotels` is used
directly in the statements.
{{< notice tip >}}
Authentication with BigQuery is handled via Application Default Credentials
(ADC). Ensure you have run `gcloud auth application-default login`.
{{< /notice >}}
```yaml
kind: sources
name: my-bigquery-source
type: bigquery
project: YOUR_PROJECT_ID
location: us
---
kind: tools
name: search-hotels-by-name
type: bigquery-sql
source: my-bigquery-source
description: Search for hotels based on name.
parameters:
- name: name
type: string
description: The name of the hotel.
statement: SELECT * FROM `YOUR_DATASET_NAME.hotels` WHERE LOWER(name) LIKE LOWER(CONCAT('%', @name, '%'));
---
kind: tools
name: search-hotels-by-location
type: bigquery-sql
source: my-bigquery-source
description: Search for hotels based on location.
parameters:
- name: location
type: string
description: The location of the hotel.
statement: SELECT * FROM `YOUR_DATASET_NAME.hotels` WHERE LOWER(location) LIKE LOWER(CONCAT('%', @location, '%'));
---
kind: tools
name: book-hotel
type: bigquery-sql
source: my-bigquery-source
description: >-
Book a hotel by its ID. Returns NULL on success and raises an error otherwise.
parameters:
- name: hotel_id
type: integer
description: The ID of the hotel to book.
statement: UPDATE `YOUR_DATASET_NAME.hotels` SET booked = TRUE WHERE id = @hotel_id;
---
kind: tools
name: update-hotel
type: bigquery-sql
source: my-bigquery-source
description: >-
Update a hotel's check-in and check-out dates by its ID. Returns a message indicating whether the hotel was successfully updated or not.
parameters:
- name: checkin_date
type: string
description: The new check-in date of the hotel.
- name: checkout_date
type: string
description: The new check-out date of the hotel.
- name: hotel_id
type: integer
description: The ID of the hotel to update.
statement: >-
UPDATE `YOUR_DATASET_NAME.hotels` SET checkin_date = PARSE_DATE('%Y-%m-%d', @checkin_date), checkout_date = PARSE_DATE('%Y-%m-%d', @checkout_date) WHERE id = @hotel_id;
---
kind: tools
name: cancel-hotel
type: bigquery-sql
source: my-bigquery-source
description: Cancel a hotel by its ID.
parameters:
- name: hotel_id
type: integer
description: The ID of the hotel to cancel.
statement: UPDATE `YOUR_DATASET_NAME.hotels` SET booked = FALSE WHERE id = @hotel_id;
```
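The two search tools wrap the argument in `%` wildcards and lowercase both sides, so they behave as case-insensitive substring matches. The equivalent logic in plain Python, applied to a few of the sample rows from Step 1:

```python
# A subset of the sample rows: (id, name, location)
hotels = [
    (1, "Hilton Basel", "Basel"),
    (2, "Marriott Zurich", "Zurich"),
    (3, "Hyatt Regency Basel", "Basel"),
    (8, "Holiday Inn Basel", "Basel"),
]

def search_hotels(rows, column, term):
    # Mirrors LOWER(col) LIKE LOWER(CONCAT('%', @arg, '%')):
    # a case-insensitive substring match on the chosen column.
    idx = {"name": 1, "location": 2}[column]
    return [row for row in rows if term.lower() in row[idx].lower()]

print([h[1] for h in search_hotels(hotels, "name", "basel")])
```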
**Important Note on `toolsets`**: The `tools.yaml` content above does not
include a `toolsets` section. The Python agent examples in Step 3 (e.g.,
`await toolbox_client.load_toolset("my-toolset")`) rely on a toolset named
`my-toolset`. To make those examples work, you will need to add a `toolsets`
section to your `tools.yaml` file, for example:
```yaml
# Add this to your tools.yaml if using load_toolset("my-toolset")
# Add it as a separate YAML document, preceded by a '---' separator
kind: toolsets
name: my-toolset
tools:
- search-hotels-by-name
- search-hotels-by-location
- book-hotel
- update-hotel
- cancel-hotel
```
Alternatively, you can modify the agent code to load tools individually
(e.g., using `await toolbox_client.load_tool("search-hotels-by-name")`).
For more info on tools, check out the [Configuring Tools](../../user-guide/configuration/tools/_index.md) section
of the docs.
1. Run the Toolbox server, pointing to the `tools.yaml` file created earlier:
```bash
./toolbox --tools-file "tools.yaml"
```
{{< notice note >}}
Toolbox enables dynamic reloading by default. To disable, use the
`--disable-reload` flag.
{{< /notice >}}
## Step 3: Connect your agent to Toolbox
In this section, we will write and run an agent that will load the Tools
from Toolbox.
{{< notice tip >}} If you prefer to experiment within a Google Colab environment,
you can connect to a
[local runtime](https://research.google.com/colaboratory/local-runtimes.html).
{{< /notice >}}
1. In a new terminal, install the SDK package.
{{< tabpane persist=header >}}
{{< tab header="Core" lang="bash" >}}
pip install toolbox-core
{{< /tab >}}
{{< tab header="LangChain" lang="bash" >}}
pip install toolbox-langchain
{{< /tab >}}
{{< tab header="LlamaIndex" lang="bash" >}}
pip install toolbox-llamaindex
{{< /tab >}}
{{< tab header="ADK" lang="bash" >}}
pip install google-adk[toolbox]
{{< /tab >}}
{{< /tabpane >}}
1. Install other required dependencies:
{{< tabpane persist=header >}}
{{< tab header="Core" lang="bash" >}}
# TODO(developer): replace with correct package if needed
pip install langgraph langchain-google-vertexai
# pip install langchain-google-genai
# pip install langchain-anthropic
{{< /tab >}}
{{< tab header="LangChain" lang="bash" >}}
# TODO(developer): replace with correct package if needed
pip install langgraph langchain-google-vertexai
# pip install langchain-google-genai
# pip install langchain-anthropic
{{< /tab >}}
{{< tab header="LlamaIndex" lang="bash" >}}
# TODO(developer): replace with correct package if needed
pip install llama-index-llms-google-genai
# pip install llama-index-llms-anthropic
{{< /tab >}}
{{< tab header="ADK" lang="bash" >}}
# No other dependencies required for ADK
{{< /tab >}}
{{< /tabpane >}}
1. Create a new file named `hotel_agent.py` and copy the following
code to create an agent:
{{< tabpane persist=header >}}
{{< tab header="Core" lang="python" >}}
import asyncio
from google import genai
from google.genai.types import (
Content,
FunctionDeclaration,
GenerateContentConfig,
Part,
Tool,
)
from toolbox_core import ToolboxClient
prompt = """
You're a helpful hotel assistant. You handle hotel searching, booking and
cancellations. When the user searches for a hotel, mention its name, id,
location and price tier. Always mention hotel ids while performing any
searches. This is very important for any operations. For any bookings or
cancellations, please provide the appropriate confirmation. Be sure to
update checkin or checkout dates if mentioned by the user.
Don't ask for confirmations from the user.
"""
queries = [
"Find hotels in Basel with Basel in its name.",
"Please book the hotel Hilton Basel for me.",
"This is too expensive. Please cancel it.",
"Please book Hyatt Regency for me",
"My check in dates for my booking would be from April 10, 2024 to April 19, 2024.",
]
async def run_application():
async with ToolboxClient("http://127.0.0.1:5000") as toolbox_client:
# The toolbox_tools list contains Python callables (functions/methods) designed for LLM tool-use
# integration. While this example uses Google's genai client, these callables can be adapted for
# various function-calling or agent frameworks. For easier integration with supported frameworks
# (https://github.com/googleapis/mcp-toolbox-python-sdk/tree/main/packages), use the
# provided wrapper packages, which handle framework-specific boilerplate.
toolbox_tools = await toolbox_client.load_toolset("my-toolset")
genai_client = genai.Client(
vertexai=True, project="project-id", location="us-central1"
)
genai_tools = [
Tool(
function_declarations=[
FunctionDeclaration.from_callable_with_api_option(callable=tool)
]
)
for tool in toolbox_tools
]
history = []
for query in queries:
user_prompt_content = Content(
role="user",
parts=[Part.from_text(text=query)],
)
history.append(user_prompt_content)
response = genai_client.models.generate_content(
model="gemini-2.0-flash-001",
contents=history,
config=GenerateContentConfig(
system_instruction=prompt,
tools=genai_tools,
),
)
history.append(response.candidates[0].content)
function_response_parts = []
for function_call in response.function_calls:
fn_name = function_call.name
# The tools are sorted alphabetically
if fn_name == "search-hotels-by-name":
function_result = await toolbox_tools[3](**function_call.args)
elif fn_name == "search-hotels-by-location":
function_result = await toolbox_tools[2](**function_call.args)
elif fn_name == "book-hotel":
function_result = await toolbox_tools[0](**function_call.args)
elif fn_name == "update-hotel":
function_result = await toolbox_tools[4](**function_call.args)
elif fn_name == "cancel-hotel":
function_result = await toolbox_tools[1](**function_call.args)
else:
raise ValueError("Function name not present.")
function_response = {"result": function_result}
function_response_part = Part.from_function_response(
name=function_call.name,
response=function_response,
)
function_response_parts.append(function_response_part)
if function_response_parts:
tool_response_content = Content(role="tool", parts=function_response_parts)
history.append(tool_response_content)
response2 = genai_client.models.generate_content(
model="gemini-2.0-flash-001",
contents=history,
config=GenerateContentConfig(
tools=genai_tools,
),
)
final_model_response_content = response2.candidates[0].content
history.append(final_model_response_content)
print(response2.text)
asyncio.run(run_application())
{{< /tab >}}
{{< tab header="LangChain" lang="python" >}}
import asyncio
from langgraph.prebuilt import create_react_agent
# TODO(developer): replace this with another import if needed
from langchain_google_vertexai import ChatVertexAI
# from langchain_google_genai import ChatGoogleGenerativeAI
# from langchain_anthropic import ChatAnthropic
from langgraph.checkpoint.memory import MemorySaver
from toolbox_langchain import ToolboxClient
prompt = """
You're a helpful hotel assistant. You handle hotel searching, booking and
cancellations. When the user searches for a hotel, mention its name, id,
location and price tier. Always mention hotel ids while performing any
searches. This is very important for any operations. For any bookings or
cancellations, please provide the appropriate confirmation. Be sure to
update checkin or checkout dates if mentioned by the user.
Don't ask for confirmations from the user.
"""
queries = [
"Find hotels in Basel with Basel in its name.",
"Can you book the Hilton Basel for me?",
"Oh wait, this is too expensive. Please cancel it and book the Hyatt Regency instead.",
"My check in dates would be from April 10, 2024 to April 19, 2024.",
]
async def main():
# TODO(developer): replace this with another model if needed
model = ChatVertexAI(model_name="gemini-2.0-flash-001")
# model = ChatGoogleGenerativeAI(model="gemini-2.0-flash-001")
# model = ChatAnthropic(model="claude-3-5-sonnet-20240620")
# Load the tools from the Toolbox server
client = ToolboxClient("http://127.0.0.1:5000")
tools = await client.aload_toolset()
agent = create_react_agent(model, tools, checkpointer=MemorySaver())
config = {"configurable": {"thread_id": "thread-1"}}
for query in queries:
inputs = {"messages": [("user", prompt + query)]}
response = await agent.ainvoke(inputs, stream_mode="values", config=config)
print(response["messages"][-1].content)
asyncio.run(main())
{{< /tab >}}
{{< tab header="LlamaIndex" lang="python" >}}
import asyncio
import os
from llama_index.core.agent.workflow import AgentWorkflow
from llama_index.core.workflow import Context
# TODO(developer): replace this with another import if needed
from llama_index.llms.google_genai import GoogleGenAI
# from llama_index.llms.anthropic import Anthropic
from toolbox_llamaindex import ToolboxClient
prompt = """
You're a helpful hotel assistant. You handle hotel searching, booking and
cancellations. When the user searches for a hotel, mention its name, id,
location and price tier. Always mention hotel ids while performing any
searches. This is very important for any operations. For any bookings or
cancellations, please provide the appropriate confirmation. Be sure to
update checkin or checkout dates if mentioned by the user.
Don't ask for confirmations from the user.
"""
queries = [
"Find hotels in Basel with Basel in its name.",
"Can you book the Hilton Basel for me?",
"Oh wait, this is too expensive. Please cancel it and book the Hyatt Regency instead.",
"My check in dates would be from April 10, 2024 to April 19, 2024.",
]
async def main():
# TODO(developer): replace this with another model if needed
llm = GoogleGenAI(
model="gemini-2.0-flash-001",
vertexai_config={"location": "us-central1"},
)
# llm = GoogleGenAI(
# api_key=os.getenv("GOOGLE_API_KEY"),
# model="gemini-2.0-flash-001",
# )
# llm = Anthropic(
# model="claude-3-7-sonnet-latest",
# api_key=os.getenv("ANTHROPIC_API_KEY")
# )
# Load the tools from the Toolbox server
client = ToolboxClient("http://127.0.0.1:5000")
tools = await client.aload_toolset()
agent = AgentWorkflow.from_tools_or_functions(
tools,
llm=llm,
system_prompt=prompt,
)
ctx = Context(agent)
for query in queries:
response = await agent.arun(user_msg=query, ctx=ctx)
print(f"---- {query} ----")
print(str(response))
asyncio.run(main())
{{< /tab >}}
{{< tab header="ADK" lang="python" >}}
import asyncio
import os

from google.adk.agents import Agent
from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService
from google.adk.artifacts.in_memory_artifact_service import InMemoryArtifactService
from google.adk.tools.toolbox_toolset import ToolboxToolset
from google.genai import types  # For constructing message content

os.environ['GOOGLE_GENAI_USE_VERTEXAI'] = 'True'
# TODO(developer): Replace 'YOUR_PROJECT_ID' with your Google Cloud Project ID
os.environ['GOOGLE_CLOUD_PROJECT'] = 'YOUR_PROJECT_ID'
# TODO(developer): Replace 'us-central1' with your Google Cloud Location (region)
os.environ['GOOGLE_CLOUD_LOCATION'] = 'us-central1'

# --- Load Tools from Toolbox ---
# TODO(developer): Ensure the Toolbox server is running at http://127.0.0.1:5000
toolset = ToolboxToolset(server_url="http://127.0.0.1:5000")

# --- Define the Agent's Prompt ---
prompt = """
You're a helpful hotel assistant. You handle hotel searching, booking and
cancellations. When the user searches for a hotel, mention its name, id,
location and price tier. Always mention hotel ids while performing any
searches. This is very important for any operations. For any bookings or
cancellations, please provide the appropriate confirmation. Be sure to
update checkin or checkout dates if mentioned by the user.
Don't ask for confirmations from the user.
"""

# --- Configure the Agent ---
root_agent = Agent(
    model='gemini-2.0-flash-001',
    name='hotel_agent',
    description='A helpful AI assistant that can search and book hotels.',
    instruction=prompt,
    tools=[toolset],  # Pass the loaded toolset
)

# --- Initialize Services for Running the Agent ---
session_service = InMemorySessionService()
artifacts_service = InMemoryArtifactService()
runner = Runner(
    app_name='hotel_agent',
    agent=root_agent,
    artifact_service=artifacts_service,
    session_service=session_service,
)

async def main():
    # Create a new session for the interaction.
    session = await session_service.create_session(
        state={}, app_name='hotel_agent', user_id='123'
    )

    # --- Define Queries and Run the Agent ---
    queries = [
        "Find hotels in Basel with Basel in its name.",
        "Can you book the Hilton Basel for me?",
        "Oh wait, this is too expensive. Please cancel it and book the Hyatt Regency instead.",
        "My check in dates would be from April 10, 2024 to April 19, 2024.",
    ]

    for query in queries:
        content = types.Content(role='user', parts=[types.Part(text=query)])
        events = runner.run(
            session_id=session.id, user_id='123', new_message=content
        )
        responses = (
            part.text
            for event in events
            for part in event.content.parts
            if part.text is not None
        )
        for text in responses:
            print(text)

if __name__ == "__main__":
    asyncio.run(main())
{{< /tab >}}
{{< /tabpane >}}
{{< tabpane text=true persist=header >}}
{{% tab header="Core" lang="en" %}}
To learn more about the Core SDK, check out the [Toolbox Core SDK
documentation.](https://github.com/googleapis/mcp-toolbox-sdk-python/blob/main/packages/toolbox-core/README.md)
{{% /tab %}}
{{% tab header="Langchain" lang="en" %}}
To learn more about Agents in LangChain, check out the [LangGraph Agent
documentation.](https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.chat_agent_executor.create_react_agent)
{{% /tab %}}
{{% tab header="LlamaIndex" lang="en" %}}
To learn more about Agents in LlamaIndex, check out the [LlamaIndex
AgentWorkflow
documentation.](https://docs.llamaindex.ai/en/stable/examples/agent/agent_workflow_basic/)
{{% /tab %}}
{{% tab header="ADK" lang="en" %}}
To learn more about Agents in ADK, check out the [ADK
documentation.](https://google.github.io/adk-docs/)
{{% /tab %}}
{{< /tabpane >}}
1. Run your agent, and observe the results:
```sh
python hotel_agent.py
```
========================================================================
## Quickstart (MCP with BigQuery)
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Build with MCP Toolbox > BigQuery > Quickstart (MCP with BigQuery)
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/build-with-mcp-toolbox/bigquery/mcp_quickstart/
**Description:** How to get started running Toolbox with MCP Inspector and BigQuery as the source.
## Overview
[Model Context Protocol](https://modelcontextprotocol.io) is an open protocol
that standardizes how applications provide context to LLMs. Check out this page
on how to [connect to Toolbox via MCP](../../../user-guide/connect-to/mcp-client/_index.md).
## Step 1: Set up your BigQuery Dataset and Table
In this section, we will create a BigQuery dataset and a table, then insert some
data that needs to be accessed by our agent.
1. Create a new BigQuery dataset (replace `YOUR_DATASET_NAME` with your desired
dataset name, e.g., `toolbox_mcp_ds`, and optionally specify a location like
`US` or `EU`):
```bash
export BQ_DATASET_NAME="YOUR_DATASET_NAME"
export BQ_LOCATION="US"
bq --location=$BQ_LOCATION mk $BQ_DATASET_NAME
```
You can also do this through the [Google Cloud
Console](https://console.cloud.google.com/bigquery).
1. The `hotels` table needs to be defined in your new dataset. First, create a
file named `create_hotels_table.sql` with the following content:
```sql
CREATE TABLE IF NOT EXISTS `YOUR_PROJECT_ID.YOUR_DATASET_NAME.hotels` (
id INT64 NOT NULL,
name STRING NOT NULL,
location STRING NOT NULL,
price_tier STRING NOT NULL,
checkin_date DATE NOT NULL,
checkout_date DATE NOT NULL,
booked BOOLEAN NOT NULL
);
```
> **Note:** Replace `YOUR_PROJECT_ID` and `YOUR_DATASET_NAME` in the SQL
> with your actual project ID and dataset name.
Then run the command below to execute the SQL query:
```bash
bq query --project_id=$GOOGLE_CLOUD_PROJECT --dataset_id=$BQ_DATASET_NAME --use_legacy_sql=false < create_hotels_table.sql
```
1. Next, populate the `hotels` table with some initial data. To do this, create
a file named `insert_hotels_data.sql` and add the following SQL `INSERT`
statement to it.
```sql
INSERT INTO `YOUR_PROJECT_ID.YOUR_DATASET_NAME.hotels` (id, name, location, price_tier, checkin_date, checkout_date, booked)
VALUES
(1, 'Hilton Basel', 'Basel', 'Luxury', '2024-04-20', '2024-04-22', FALSE),
(2, 'Marriott Zurich', 'Zurich', 'Upscale', '2024-04-14', '2024-04-21', FALSE),
(3, 'Hyatt Regency Basel', 'Basel', 'Upper Upscale', '2024-04-02', '2024-04-20', FALSE),
(4, 'Radisson Blu Lucerne', 'Lucerne', 'Midscale', '2024-04-05', '2024-04-24', FALSE),
(5, 'Best Western Bern', 'Bern', 'Upper Midscale', '2024-04-01', '2024-04-23', FALSE),
(6, 'InterContinental Geneva', 'Geneva', 'Luxury', '2024-04-23', '2024-04-28', FALSE),
(7, 'Sheraton Zurich', 'Zurich', 'Upper Upscale', '2024-04-02', '2024-04-27', FALSE),
(8, 'Holiday Inn Basel', 'Basel', 'Upper Midscale', '2024-04-09', '2024-04-24', FALSE),
(9, 'Courtyard Zurich', 'Zurich', 'Upscale', '2024-04-03', '2024-04-13', FALSE),
(10, 'Comfort Inn Bern', 'Bern', 'Midscale', '2024-04-04', '2024-04-16', FALSE);
```
> **Note:** Replace `YOUR_PROJECT_ID` and `YOUR_DATASET_NAME` in the SQL
> with your actual project ID and dataset name.
Then run the command below to execute the SQL query:
```bash
bq query --project_id=$GOOGLE_CLOUD_PROJECT --dataset_id=$BQ_DATASET_NAME --use_legacy_sql=false < insert_hotels_data.sql
```
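Before wiring up an agent, it can help to know what the search tools should return for the sample rows above. The sketch below is illustrative only: it re-implements the case-insensitive `LIKE` filter of the `search-hotels-by-location` tool in plain Python against the same data, without calling BigQuery or Toolbox.

```python
# Sample rows from insert_hotels_data.sql: (id, name, location).
hotels = [
    (1, "Hilton Basel", "Basel"),
    (2, "Marriott Zurich", "Zurich"),
    (3, "Hyatt Regency Basel", "Basel"),
    (4, "Radisson Blu Lucerne", "Lucerne"),
    (5, "Best Western Bern", "Bern"),
    (6, "InterContinental Geneva", "Geneva"),
    (7, "Sheraton Zurich", "Zurich"),
    (8, "Holiday Inn Basel", "Basel"),
    (9, "Courtyard Zurich", "Zurich"),
    (10, "Comfort Inn Bern", "Bern"),
]

def search_by_location(rows, location):
    """Case-insensitive substring match, mirroring LOWER(location) LIKE LOWER('%...%')."""
    needle = location.lower()
    return [row for row in rows if needle in row[2].lower()]

print([name for _, name, _ in search_by_location(hotels, "basel")])
# ['Hilton Basel', 'Hyatt Regency Basel', 'Holiday Inn Basel']
```

So a Basel search from the agent should surface exactly these three hotels.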
## Step 2: Install and configure Toolbox
In this section, we will download Toolbox, configure our tools in a
`tools.yaml`, and then run the Toolbox server.
1. Download the latest version of Toolbox as a binary:
{{< notice tip >}}
Select the
[correct binary](https://github.com/googleapis/genai-toolbox/releases)
corresponding to your OS and CPU architecture.
{{< /notice >}}
```bash
export OS="linux/amd64" # one of linux/amd64, darwin/arm64, darwin/amd64, or windows/amd64
curl -O https://storage.googleapis.com/genai-toolbox/v0.28.0/$OS/toolbox
```
1. Make the binary executable:
```bash
chmod +x toolbox
```
1. Write the following into a `tools.yaml` file. You must replace the
`YOUR_PROJECT_ID` and `YOUR_DATASET_NAME` placeholders in the config with your
actual BigQuery project and dataset name. The `location` field is optional;
if not specified, it defaults to `us`. The table name `hotels` is used
directly in the statements.
{{< notice tip >}}
Authentication with BigQuery is handled via Application Default Credentials
(ADC). Ensure you have run `gcloud auth application-default login`.
{{< /notice >}}
```yaml
kind: sources
name: my-bigquery-source
type: bigquery
project: YOUR_PROJECT_ID
location: us
---
kind: tools
name: search-hotels-by-name
type: bigquery-sql
source: my-bigquery-source
description: Search for hotels based on name.
parameters:
  - name: name
    type: string
    description: The name of the hotel.
statement: SELECT * FROM `YOUR_DATASET_NAME.hotels` WHERE LOWER(name) LIKE LOWER(CONCAT('%', @name, '%'));
---
kind: tools
name: search-hotels-by-location
type: bigquery-sql
source: my-bigquery-source
description: Search for hotels based on location.
parameters:
  - name: location
    type: string
    description: The location of the hotel.
statement: SELECT * FROM `YOUR_DATASET_NAME.hotels` WHERE LOWER(location) LIKE LOWER(CONCAT('%', @location, '%'));
---
kind: tools
name: book-hotel
type: bigquery-sql
source: my-bigquery-source
description: >-
  Book a hotel by its ID. If the hotel is successfully booked, returns a NULL, raises an error if not.
parameters:
  - name: hotel_id
    type: integer
    description: The ID of the hotel to book.
statement: UPDATE `YOUR_DATASET_NAME.hotels` SET booked = TRUE WHERE id = @hotel_id;
---
kind: tools
name: update-hotel
type: bigquery-sql
source: my-bigquery-source
description: >-
  Update a hotel's check-in and check-out dates by its ID. Returns a message indicating whether the hotel was successfully updated or not.
parameters:
  - name: checkin_date
    type: string
    description: The new check-in date of the hotel.
  - name: checkout_date
    type: string
    description: The new check-out date of the hotel.
  - name: hotel_id
    type: integer
    description: The ID of the hotel to update.
statement: >-
  UPDATE `YOUR_DATASET_NAME.hotels` SET checkin_date = PARSE_DATE('%Y-%m-%d', @checkin_date), checkout_date = PARSE_DATE('%Y-%m-%d', @checkout_date) WHERE id = @hotel_id;
---
kind: tools
name: cancel-hotel
type: bigquery-sql
source: my-bigquery-source
description: Cancel a hotel by its ID.
parameters:
  - name: hotel_id
    type: integer
    description: The ID of the hotel to cancel.
statement: UPDATE `YOUR_DATASET_NAME.hotels` SET booked = FALSE WHERE id = @hotel_id;
---
kind: toolsets
name: my-toolset
tools:
  - search-hotels-by-name
  - search-hotels-by-location
  - book-hotel
  - update-hotel
  - cancel-hotel
```
For more info on tools, check out the
[Tools](../../../user-guide/configuration/tools/_index.md) section.
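Since the file above holds several YAML documents separated by `---`, a quick way to eyeball what you have configured is to list each document's `kind` and `name`. The stdlib-only sketch below is an illustrative check, not something Toolbox requires; the server parses and validates the real file itself when it starts.

```python
# Illustrative, stdlib-only: list the kind/name of each YAML document in a
# tools.yaml-style config, without needing a YAML library.
def list_resources(text):
    resources = []
    for doc in text.split("\n---\n"):
        kind = name = None
        for line in doc.splitlines():
            if kind is None and line.startswith("kind:"):
                kind = line.split(":", 1)[1].strip()
            elif name is None and line.startswith("name:"):
                name = line.split(":", 1)[1].strip()
        if kind and name:
            resources.append((kind, name))
    return resources

# A trimmed-down stand-in for the config above:
sample = """kind: sources
name: my-bigquery-source
type: bigquery
---
kind: tools
name: search-hotels-by-name
type: bigquery-sql
---
kind: toolsets
name: my-toolset"""

for kind, name in list_resources(sample):
    print(kind, name)
# sources my-bigquery-source
# tools search-hotels-by-name
# toolsets my-toolset
```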
1. Run the Toolbox server, pointing to the `tools.yaml` file created earlier:
```bash
./toolbox --tools-file "tools.yaml"
```
## Step 3: Connect to MCP Inspector
1. Run the MCP Inspector:
```bash
npx @modelcontextprotocol/inspector
```
1. Type `y` when it asks to install the inspector package.
1. It should show the following when the MCP Inspector is up and running (please
take note of the session token):
```bash
Starting MCP inspector...
⚙️ Proxy server listening on localhost:6277
🔑 Session token: <session-token>
Use this token to authenticate requests or set DANGEROUSLY_OMIT_AUTH=true to disable auth
🚀 MCP Inspector is up and running at:
http://localhost:6274/?MCP_PROXY_AUTH_TOKEN=<session-token>
```
1. Open the above link in your browser.
1. For `Transport Type`, select `Streamable HTTP`.
1. For `URL`, type in `http://127.0.0.1:5000/mcp`.
1. For `Configuration` -> `Proxy Session Token`, make sure the session token
is present.
1. Click Connect.

1. Select `List Tools`; you will see a list of tools configured in `tools.yaml`.

1. Test out your tools here!
========================================================================
## Looker
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Build with MCP Toolbox > Looker
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/build-with-mcp-toolbox/looker/
**Description:** How to get started with Toolbox using Looker.
========================================================================
## Gemini-CLI and OAuth
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Build with MCP Toolbox > Looker > Gemini-CLI and OAuth
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/build-with-mcp-toolbox/looker/looker_gemini_oauth/
**Description:** How to connect to Looker from Gemini-CLI with end-user credentials
## Overview
Gemini-CLI can be configured to get an OAuth token from Looker, then send this
token to MCP Toolbox as part of the request. MCP Toolbox can then use this token
to authenticate with Looker. This means that there is no need to get a Looker
Client ID and Client Secret. This also means that MCP Toolbox can be set up as a
shared resource.
This configuration requires Toolbox v0.14.0 or later.
## Step 1: Register the OAuth App in Looker
You first need to register the OAuth application. Refer to the documentation
[here](https://cloud.google.com/looker/docs/api-cors#registering_an_oauth_client_application).
You may need to ask an administrator to do this for you.
1. Go to the API Explorer application, locate "Register OAuth App", and press
the "Run It" button.
1. Set the `client_guid` to "gemini-cli".
1. Set the `redirect_uri` to "http://localhost:7777/oauth/callback".
1. The `display_name` and `description` can be "Gemini-CLI" or anything
meaningful.
1. Set `enabled` to "true".
1. Check the box confirming that you understand this API will change data.
1. Click the "Run" button.

## Step 2: Install and configure Toolbox
In this section, we will download Toolbox and run the Toolbox server.
1. Download the latest version of Toolbox as a binary:
{{< notice tip >}}
Select the
[correct binary](https://github.com/googleapis/genai-toolbox/releases)
corresponding to your OS and CPU architecture.
{{< /notice >}}
```bash
export OS="linux/amd64" # one of linux/amd64, darwin/arm64, darwin/amd64, or windows/amd64
curl -O https://storage.googleapis.com/genai-toolbox/v0.28.0/$OS/toolbox
```
1. Make the binary executable:
```bash
chmod +x toolbox
```
1. Create a file `looker_env` with the settings for your
Looker instance.
```bash
export LOOKER_BASE_URL=https://looker.example.com
export LOOKER_VERIFY_SSL=true
export LOOKER_USE_CLIENT_OAUTH=true
```
In some instances you may need to append `:19999` to
the `LOOKER_BASE_URL`.
1. Load the looker_env file into your environment.
```bash
source looker_env
```
1. Run the Toolbox server using the prebuilt Looker tools.
```bash
./toolbox --prebuilt looker
```
The toolbox server will begin listening on localhost port 5000. Leave it
running and continue in another terminal.
Later, when it is time to shut everything down, you can quit the toolbox
server with Ctrl-C in this terminal window.
## Step 3: Configure Gemini-CLI
1. Edit the file `~/.gemini/settings.json`. Add the following, substituting your
Looker server host name for `looker.example.com`.
```json
"mcpServers": {
  "looker": {
    "httpUrl": "http://localhost:5000/mcp",
    "oauth": {
      "enabled": true,
      "clientId": "gemini-cli",
      "authorizationUrl": "https://looker.example.com/auth",
      "tokenUrl": "https://looker.example.com/api/token",
      "scopes": ["cors_api"]
    }
  }
}
```
The `authorizationUrl` should point to the URL you use to access Looker via the
web UI. The `tokenUrl` should point to the URL you use to access Looker via
the API. In some cases you will need to use the port number `:19999` after
the host name but before the `/api/token` part.
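If you prefer to script the edit rather than modify `settings.json` by hand, a small helper can merge the entry without clobbering other configured servers. This is an optional convenience sketch using only the standard library; the `merge_server` helper is hypothetical, not part of any Toolbox or Gemini-CLI SDK.

```python
import json
from pathlib import Path

LOOKER_HOST = "looker.example.com"  # replace with your Looker host

# The same server entry shown above, as a Python dict.
entry = {
    "httpUrl": "http://localhost:5000/mcp",
    "oauth": {
        "enabled": True,
        "clientId": "gemini-cli",
        "authorizationUrl": f"https://{LOOKER_HOST}/auth",
        "tokenUrl": f"https://{LOOKER_HOST}/api/token",
        "scopes": ["cors_api"],
    },
}

def merge_server(path: Path, name: str, server: dict) -> dict:
    """Add or replace one entry under "mcpServers", keeping everything else."""
    settings = json.loads(path.read_text()) if path.exists() else {}
    settings.setdefault("mcpServers", {})[name] = server
    path.write_text(json.dumps(settings, indent=2))
    return settings

# Uncomment to apply to your real Gemini-CLI settings:
# merge_server(Path.home() / ".gemini" / "settings.json", "looker", entry)
```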
1. Start Gemini-CLI.
1. Authenticate with the command `/mcp auth looker`. Gemini-CLI will open up a
browser where you will confirm that you want to access Looker with your
account.


1. Use Gemini-CLI with your tools.
## Using Toolbox as a Shared Service
Toolbox can be run on another server as a shared service accessed by multiple
users. We strongly recommend running toolbox behind a web proxy such as `nginx`
which will provide SSL encryption. Google Cloud Run is another good way to run
toolbox. You will connect to a service like `https://toolbox.example.com/mcp`.
The proxy server will handle the SSL encryption and certificates. Then it will
forward the requests to `http://localhost:5000/mcp` running in that environment.
The details of the config are beyond the scope of this document, but will be
familiar to your system administrators.
To use the shared service, just change the `localhost:5000` in the `httpUrl` in
`~/.gemini/settings.json` to the host name and possibly the port of the shared
service.
========================================================================
## Quickstart (MCP with Looker and Gemini-CLI)
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Build with MCP Toolbox > Looker > Quickstart (MCP with Looker and Gemini-CLI)
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/build-with-mcp-toolbox/looker/looker_gemini/
**Description:** How to get started running Toolbox with Gemini-CLI and Looker as the source.
## Overview
[Model Context Protocol](https://modelcontextprotocol.io) is an open protocol
that standardizes how applications provide context to LLMs. Check out this page
on how to [connect to Toolbox via MCP](../../user-guide/connect-to/mcp-client/_index.md).
## Step 1: Get a Looker Client ID and Client Secret
The Looker Client ID and Client Secret can be obtained from the Users page of
your Looker instance. Refer to the documentation
[here](https://cloud.google.com/looker/docs/api-auth#authentication_with_an_sdk).
You may need to ask an administrator to get the Client ID and Client Secret
for you.
## Step 2: Install and configure Toolbox
In this section, we will download Toolbox and run the Toolbox server.
1. Download the latest version of Toolbox as a binary:
{{< notice tip >}}
Select the
[correct binary](https://github.com/googleapis/genai-toolbox/releases)
corresponding to your OS and CPU architecture.
{{< /notice >}}
```bash
export OS="linux/amd64" # one of linux/amd64, darwin/arm64, darwin/amd64, or windows/amd64
curl -O https://storage.googleapis.com/genai-toolbox/v0.28.0/$OS/toolbox
```
1. Make the binary executable:
```bash
chmod +x toolbox
```
1. Edit the file `~/.gemini/settings.json` and add the following
to the list of mcpServers. Use the Client Id and Client Secret
you obtained earlier. The name of the server - here
`looker-toolbox` - can be anything meaningful to you.
```json
"mcpServers": {
  "looker-toolbox": {
    "command": "/path/to/toolbox",
    "args": [
      "--stdio",
      "--prebuilt",
      "looker"
    ],
    "env": {
      "LOOKER_BASE_URL": "https://looker.example.com",
      "LOOKER_CLIENT_ID": "YOUR_CLIENT_ID",
      "LOOKER_CLIENT_SECRET": "YOUR_CLIENT_SECRET",
      "LOOKER_VERIFY_SSL": "true"
    }
  }
}
```
In some instances you may need to append `:19999` to
the `LOOKER_BASE_URL`.
## Step 3: Start Gemini-CLI
1. Run Gemini-CLI:
```bash
npx https://github.com/google-gemini/gemini-cli
```
1. Type `y` when it asks to download.
1. Log into Gemini-CLI
1. Enter the command `/mcp` and you should see a list of
available tools like
```
ℹ Configured MCP servers:
🟢 looker-toolbox - Ready (11 tools)
- looker-toolbox__get_models
- looker-toolbox__query
- looker-toolbox__get_looks
- looker-toolbox__get_measures
- looker-toolbox__get_filters
- looker-toolbox__get_parameters
- looker-toolbox__get_explores
- looker-toolbox__query_sql
- looker-toolbox__get_dimensions
- looker-toolbox__run_look
- looker-toolbox__query_url
```
1. Start exploring your Looker instance with commands like
`Find an explore to see orders` or `show me my current
inventory broken down by item category`.
1. Gemini will prompt you for your approval before using
a tool. You can approve all the tools at once or
one at a time.
========================================================================
## Quickstart (MCP with Looker)
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Build with MCP Toolbox > Looker > Quickstart (MCP with Looker)
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/build-with-mcp-toolbox/looker/looker_mcp_inspector/
**Description:** How to get started running Toolbox with MCP Inspector and Looker as the source.
## Overview
[Model Context Protocol](https://modelcontextprotocol.io) is an open protocol
that standardizes how applications provide context to LLMs. Check out this page
on how to [connect to Toolbox via MCP](../../../user-guide/connect-to/mcp-client/_index.md).
## Step 1: Get a Looker Client ID and Client Secret
The Looker Client ID and Client Secret can be obtained from the Users page of
your Looker instance. Refer to the documentation
[here](https://cloud.google.com/looker/docs/api-auth#authentication_with_an_sdk).
You may need to ask an administrator to get the Client ID and Client Secret
for you.
## Step 2: Install and configure Toolbox
In this section, we will download Toolbox and run the Toolbox server.
1. Download the latest version of Toolbox as a binary:
{{< notice tip >}}
Select the
[correct binary](https://github.com/googleapis/genai-toolbox/releases)
corresponding to your OS and CPU architecture.
{{< /notice >}}
```bash
export OS="linux/amd64" # one of linux/amd64, darwin/arm64, darwin/amd64, or windows/amd64
curl -O https://storage.googleapis.com/genai-toolbox/v0.28.0/$OS/toolbox
```
1. Make the binary executable:
```bash
chmod +x toolbox
```
1. Create a file `looker_env` with the settings for your
Looker instance. Use the Client ID and Client Secret
you obtained earlier.
```bash
export LOOKER_BASE_URL=https://looker.example.com
export LOOKER_VERIFY_SSL=true
export LOOKER_CLIENT_ID=Q7ynZkRkvj9S9FHPm4Wj
export LOOKER_CLIENT_SECRET=P5JvZstFnhpkhCYy2yNSfJ6x
```
In some instances you may need to append `:19999` to
the `LOOKER_BASE_URL`.
1. Load the looker_env file into your environment.
```bash
source looker_env
```
1. Run the Toolbox server using the prebuilt Looker tools.
```bash
./toolbox --prebuilt looker
```
## Step 3: Connect to MCP Inspector
1. Run the MCP Inspector:
```bash
npx @modelcontextprotocol/inspector
```
1. Type `y` when it asks to install the inspector package.
1. It should show the following when the MCP Inspector is up and running:
```bash
🔍 MCP Inspector is up and running at http://127.0.0.1:5173 🚀
```
1. Open the above link in your browser.
1. For `Transport Type`, select `SSE`.
1. For `URL`, type in `http://127.0.0.1:5000/mcp/sse`.
1. Click Connect.

1. Select `List Tools`; you will see a list of tools.

1. Test out your tools here!
========================================================================
## Neo4j
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Build with MCP Toolbox > Neo4j
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/build-with-mcp-toolbox/neo4j/
**Description:** How to get started with Toolbox using Neo4j.
========================================================================
## Quickstart (MCP with Neo4j)
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Build with MCP Toolbox > Neo4j > Quickstart (MCP with Neo4j)
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/build-with-mcp-toolbox/neo4j/mcp_quickstart/
**Description:** How to get started running Toolbox with MCP Inspector and Neo4j as the source.
## Overview
[Model Context Protocol](https://modelcontextprotocol.io) is an open protocol that standardizes how applications provide context to LLMs. Check out this page on how to [connect to Toolbox via MCP](../../user-guide/connect-to/mcp-client/_index.md).
## Step 1: Set up your Neo4j Database and Data
In this section, you'll set up a database and populate it with sample data for a movies-related agent. This guide assumes you have a running Neo4j instance, either locally or in the cloud.
1. **Populate the database with data.** To make this quickstart
straightforward, we'll use the built-in Movies dataset available in Neo4j.
In your Neo4j Browser, run the following command to create and populate the
database:
```cypher
:play movies
```
Follow the instructions to load the data. This will create a graph with
`Movie`, `Person`, and `Actor` nodes and their relationships.
## Step 2: Install and configure Toolbox
In this section, we will install the MCP Toolbox, configure our tools in a `tools.yaml` file, and then run the Toolbox server.
1. **Install the Toolbox binary.** The simplest way to get started is to
download the latest binary for your operating system:
```bash
export OS="linux/amd64" # one of linux/amd64, darwin/arm64, darwin/amd64, or windows/amd64
curl -O https://storage.googleapis.com/genai-toolbox/v0.16.0/$OS/toolbox
```
1. Make the binary executable:
```bash
chmod +x toolbox
```
1. **Create the `tools.yaml` file.** This file defines your Neo4j source and
the specific tools that will be exposed to your AI agent.
{{< notice tip >}}
Authentication for the Neo4j source uses standard username and password fields.
For production use, it is highly recommended to use environment variables for
sensitive information like passwords.
{{< /notice >}}
Write the following into a `tools.yaml` file:
```yaml
kind: sources
name: my-neo4j-source
type: neo4j
uri: bolt://localhost:7687
user: neo4j
password: my-password # Replace with your actual password
---
kind: tools
name: search-movies-by-actor
type: neo4j-cypher
source: my-neo4j-source
description: "Searches for movies an actor has appeared in based on their name. Useful for questions like 'What movies has Tom Hanks been in?'"
parameters:
  - name: actor_name
    type: string
    description: The full name of the actor to search for.
statement: |
  MATCH (p:Person {name: $actor_name}) -[:ACTED_IN]-> (m:Movie)
  RETURN m.title AS title, m.year AS year, m.genre AS genre
---
kind: tools
name: get-actor-for-movie
type: neo4j-cypher
source: my-neo4j-source
description: "Finds the actors who starred in a specific movie. Useful for questions like 'Who acted in Inception?'"
parameters:
  - name: movie_title
    type: string
    description: The exact title of the movie.
statement: |
  MATCH (p:Person) -[:ACTED_IN]-> (m:Movie {title: $movie_title})
  RETURN p.name AS actor
```
1. **Start the Toolbox server.** Run the Toolbox server, pointing to the
`tools.yaml` file you created earlier:
```bash
./toolbox --tools-file "tools.yaml"
```
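Each Cypher statement above binds its tool parameters as `$`-prefixed variables (`$actor_name`, `$movie_title`). If you add more tools, a quick check that the variables referenced in a statement match the declared parameters can catch typos before the server ever runs a query. This is an illustrative stdlib sketch, not a Toolbox feature:

```python
import re

def check_bindings(statement, declared):
    """True when the $variables used in the Cypher match the declared parameter names."""
    used = set(re.findall(r"\$([A-Za-z_][A-Za-z0-9_]*)", statement))
    return used == set(declared)

stmt = """
MATCH (p:Person {name: $actor_name}) -[:ACTED_IN]-> (m:Movie)
RETURN m.title AS title, m.year AS year, m.genre AS genre
"""

print(check_bindings(stmt, ["actor_name"]))   # True
print(check_bindings(stmt, ["movie_title"]))  # False: typo'd or missing parameter
```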
## Step 3: Connect to MCP Inspector
1. Run the MCP Inspector:
```bash
npx @modelcontextprotocol/inspector
```
1. Type `y` when it asks to install the inspector package.
1. It should show the following when the MCP Inspector is up and running (please
take note of the session token):
```bash
Starting MCP inspector...
⚙️ Proxy server listening on localhost:6277
🔑 Session token: <session-token>
Use this token to authenticate requests or set DANGEROUSLY_OMIT_AUTH=true to disable auth
🚀 MCP Inspector is up and running at:
http://localhost:6274/?MCP_PROXY_AUTH_TOKEN=<session-token>
```
1. Open the above link in your browser.
1. For `Transport Type`, select `Streamable HTTP`.
1. For `URL`, type in `http://127.0.0.1:5000/mcp`.
1. For `Configuration` -> `Proxy Session Token`, make sure the session token is present.
1. Click `Connect`.
1. Select `List Tools`; you will see a list of tools configured in `tools.yaml`.
1. Test out your tools here!
========================================================================
## Snowflake
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Build with MCP Toolbox > Snowflake
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/build-with-mcp-toolbox/snowflake/
**Description:** How to get started running Toolbox with MCP Inspector and Snowflake as the source.
## Overview
[Model Context Protocol](https://modelcontextprotocol.io) is an open protocol
that standardizes how applications provide context to LLMs. Check out this page
on how to [connect to Toolbox via MCP](../../user-guide/connect-to/mcp-client/_index.md).
## Before you begin
This guide assumes you have already done the following:
1. [Create a Snowflake account](https://signup.snowflake.com/).
1. Connect to the instance using [SnowSQL](https://docs.snowflake.com/en/user-guide/snowsql), or any other Snowflake client.
## Step 1: Set up your environment
Copy the environment template and update it with your Snowflake credentials:
```bash
cp examples/snowflake-env.sh my-snowflake-env.sh
```
Edit `my-snowflake-env.sh` with your actual Snowflake connection details:
```bash
export SNOWFLAKE_ACCOUNT="your-account-identifier"
export SNOWFLAKE_USER="your-username"
export SNOWFLAKE_PASSWORD="your-password"
export SNOWFLAKE_DATABASE="your-database"
export SNOWFLAKE_SCHEMA="your-schema"
export SNOWFLAKE_WAREHOUSE="COMPUTE_WH"
export SNOWFLAKE_ROLE="ACCOUNTADMIN"
```
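A missing variable in this file typically only surfaces later as a connection error, so it can help to confirm every value is actually set before starting Toolbox. This is an optional, illustrative pre-flight check, not part of Toolbox:

```python
# Optional pre-flight check: verify the Snowflake variables from
# my-snowflake-env.sh are set in the current environment.
import os

REQUIRED = [
    "SNOWFLAKE_ACCOUNT", "SNOWFLAKE_USER", "SNOWFLAKE_PASSWORD",
    "SNOWFLAKE_DATABASE", "SNOWFLAKE_SCHEMA",
    "SNOWFLAKE_WAREHOUSE", "SNOWFLAKE_ROLE",
]

def missing_vars(env=None):
    """Return the names of required variables that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED if not env.get(name)]

missing = missing_vars()
if missing:
    print("Missing environment variables:", ", ".join(missing))
else:
    print("Snowflake environment looks complete.")
```

Run it after `source my-snowflake-env.sh`; an empty "Missing" list means you are ready for the next step.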
## Step 2: Install Toolbox
In this section, we will download and install the Toolbox binary.
1. Download the latest version of Toolbox as a binary:
{{< notice tip >}}
Select the
[correct binary](https://github.com/googleapis/genai-toolbox/releases)
corresponding to your OS and CPU architecture.
{{< /notice >}}
```bash
export OS="linux/amd64" # one of linux/amd64, darwin/arm64, darwin/amd64, or windows/amd64
export VERSION="0.10.0"
curl -O https://storage.googleapis.com/genai-toolbox/v$VERSION/$OS/toolbox
```
1. Make the binary executable:
```bash
chmod +x toolbox
```
## Step 3: Configure the tools
You have two options:
#### Option A: Use the prebuilt configuration
```bash
./toolbox --prebuilt snowflake
```
#### Option B: Use the custom configuration
Create a `tools.yaml` file and add the following content. You must replace the placeholders with your actual Snowflake configuration.
```yaml
kind: sources
name: snowflake-source
type: snowflake
account: ${SNOWFLAKE_ACCOUNT}
user: ${SNOWFLAKE_USER}
password: ${SNOWFLAKE_PASSWORD}
database: ${SNOWFLAKE_DATABASE}
schema: ${SNOWFLAKE_SCHEMA}
warehouse: ${SNOWFLAKE_WAREHOUSE}
role: ${SNOWFLAKE_ROLE}
---
kind: tools
name: execute_sql
type: snowflake-execute-sql
source: snowflake-source
description: Use this tool to execute SQL.
---
kind: tools
name: list_tables
type: snowflake-sql
source: snowflake-source
description: "Lists detailed schema information for user-created tables."
statement: |
  SELECT table_name, table_type
  FROM information_schema.tables
  WHERE table_schema = current_schema()
  ORDER BY table_name;
```
For more info on tools, check out the
[Tools](../../user-guide/configuration/tools/_index.md) section.
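The `${SNOWFLAKE_*}` placeholders in the config are resolved from environment variables when Toolbox loads the file. As a rough mental model only (Toolbox performs its own parsing internally), the substitution works like this:

```python
import os
import re

def substitute(text, env=None):
    """Replace ${VAR} placeholders with values from the environment (illustrative)."""
    env = os.environ if env is None else env
    return re.sub(r"\$\{(\w+)\}", lambda m: env.get(m.group(1), ""), text)

line = "account: ${SNOWFLAKE_ACCOUNT}"
print(substitute(line, {"SNOWFLAKE_ACCOUNT": "xy12345"}))
# account: xy12345
```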
## Step 4: Run the Toolbox server
Run the Toolbox server, pointing to the `tools.yaml` file created earlier:
```bash
./toolbox --tools-file "tools.yaml"
```
## Step 5: Connect to MCP Inspector
1. Run the MCP Inspector:
```bash
npx @modelcontextprotocol/inspector
```
1. Type `y` when it asks to install the inspector package.
1. It should show the following when the MCP Inspector is up and running (please take note of the session token):
```bash
Starting MCP inspector...
⚙️ Proxy server listening on localhost:6277
🔑 Session token: <session-token>
Use this token to authenticate requests or set DANGEROUSLY_OMIT_AUTH=true to disable auth
🚀 MCP Inspector is up and running at:
http://localhost:6274/?MCP_PROXY_AUTH_TOKEN=<session-token>
```
1. Open the above link in your browser.
1. For `Transport Type`, select `Streamable HTTP`.
1. For `URL`, type in `http://127.0.0.1:5000/mcp`.
1. For `Configuration` -> `Proxy Session Token`, make sure the session token is present.
1. Click Connect.
1. Select `List Tools`; you will see a list of tools configured in `tools.yaml`.
1. Test out your tools here!
## What's next
- Learn more about [MCP Inspector](../../user-guide/connect-to/mcp-client/_index.md).
- Learn more about [Toolbox User Guide](../../user-guide/configuration/_index.md).
- Learn more about [Toolbox Tutorials](../../build-with-mcp-toolbox/_index.md).
========================================================================
## Reference
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Reference
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/reference/
**Description:** This section contains reference documentation.
========================================================================
## CLI
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Reference > CLI
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/reference/cli/
**Description:** This page describes the `toolbox` command-line options.
## Reference
| Flag (Short) | Flag (Long) | Description | Default |
|--------------|----------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------|
| `-a` | `--address` | Address of the interface the server will listen on. | `127.0.0.1` |
| | `--disable-reload` | Disables dynamic reloading of tools file. | |
| `-h` | `--help` | help for toolbox | |
| | `--log-level` | Specify the minimum level logged. Allowed: 'DEBUG', 'INFO', 'WARN', 'ERROR'. | `info` |
| | `--logging-format` | Specify logging format to use. Allowed: 'standard' or 'JSON'. | `standard` |
| `-p` | `--port` | Port the server will listen on. | `5000` |
| | `--prebuilt` | Use one or more prebuilt tool configurations by source type. See [Prebuilt Tools Reference](../user-guide/configuration/prebuilt-configs/_index.md) for allowed values. | |
| | `--stdio` | Listens via MCP STDIO instead of acting as a remote HTTP server. | |
| | `--telemetry-gcp` | Enable exporting directly to Google Cloud Monitoring. | |
| | `--telemetry-otlp` | Enable exporting using OpenTelemetry Protocol (OTLP) to the specified endpoint (e.g. 'http://127.0.0.1:4318') | |
| | `--telemetry-service-name` | Sets the value of the service.name resource attribute for telemetry data. | `toolbox` |
| | `--tools-file` | File path specifying the tool configuration. Cannot be used with --tools-files or --tools-folder. | |
| | `--tools-files` | Multiple file paths specifying tool configurations. Files will be merged. Cannot be used with --tools-file or --tools-folder. | |
| | `--tools-folder` | Directory path containing YAML tool configuration files. All .yaml and .yml files in the directory will be loaded and merged. Cannot be used with --tools-file or --tools-files. | |
| | `--ui` | Launches the Toolbox UI web server. | |
| | `--allowed-origins` | Specifies a list of origins permitted to access this server for CORS. | `*` |
| | `--allowed-hosts` | Specifies a list of hosts permitted to access this server to prevent DNS rebinding attacks. | `*` |
| | `--user-agent-metadata` | Appends additional metadata to the User-Agent. | |
| | `--poll-interval` | Specifies the polling frequency (seconds) for configuration file updates. | `0` |
| `-v` | `--version` | version for toolbox | |
## Sub Commands
### invoke
Executes a tool directly with the provided parameters. This is useful for testing tool configurations and parameters without needing a full client setup.
**Syntax:**
```bash
toolbox invoke <tool-name> [params]
```
**Arguments:**
- `tool-name`: The name of the tool to execute (as defined in your configuration).
- `params`: (Optional) A JSON string containing the parameters for the tool.
For more detailed instructions, see [Invoke Tools via CLI](../user-guide/configuration/tools/invoke_tool.md).
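Because `params` is a single JSON string, shell quoting is easy to get wrong. One way to build the invocation safely from a script, assuming a hypothetical `limit` parameter on the `list_tables` tool from the quickstart above:

```python
import json
import shlex

# Serialize the parameters, then let shlex handle the shell quoting.
params = json.dumps({"limit": 10})  # "limit" is a hypothetical parameter
cmd = ["./toolbox", "invoke", "list_tables", params]
print(shlex.join(cmd))
# → ./toolbox invoke list_tables '{"limit": 10}'
```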
### skills-generate
Generates a skill package from a specified toolset. Each tool in the toolset will have a corresponding Node.js execution script in the generated skill.
**Syntax:**
```bash
toolbox skills-generate --name <name> --description <description> --toolset <toolset> --output-dir <output-dir>
```
## Examples
### Transport Configuration
**Server Settings:**
- `--address`, `-a`: Server listening address (default: "127.0.0.1")
- `--port`, `-p`: Server listening port (default: 5000)
**STDIO:**
- `--stdio`: Run in MCP STDIO mode instead of HTTP server
#### Usage Examples
```bash
# Basic server with custom port configuration
./toolbox --tools-file "tools.yaml" --port 8080
# Server with prebuilt + custom tools configurations
./toolbox --tools-file tools.yaml --prebuilt alloydb-postgres
# Server with multiple prebuilt tools configurations
./toolbox --prebuilt alloydb-postgres,alloydb-postgres-admin
# OR
./toolbox --prebuilt alloydb-postgres --prebuilt alloydb-postgres-admin
```
### Tool Configuration Sources
The CLI supports multiple mutually exclusive ways to specify tool configurations:
**Single File:** (default)
- `--tools-file`: Path to a single YAML configuration file (default: `tools.yaml`)
**Multiple Files:**
- `--tools-files`: Comma-separated list of YAML files to merge
**Directory:**
- `--tools-folder`: Directory containing YAML files to load and merge
**Prebuilt Configurations:**
- `--prebuilt`: Use one or more predefined configurations for specific database types (e.g.,
'bigquery', 'postgres', 'spanner'). See [Prebuilt Tools](../user-guide/configuration/prebuilt-configs/_index.md) for allowed values.
{{< notice tip >}}
The CLI enforces mutual exclusivity between the configuration source flags:
only one of `--tools-file`, `--tools-files`, or `--tools-folder` may be used
at a time.
{{< /notice >}}
### Hot Reload
Toolbox supports two methods for detecting configuration changes: **Push**
(event-driven) and **Poll** (interval-based). To completely disable all hot
reloading, use the `--disable-reload` flag.
* **Push (Default):** Toolbox uses a highly efficient push system that listens
for instant OS-level file events to reload configurations the moment you save.
* **Poll (Fallback):** Alternatively, you can use the
`--poll-interval=<seconds>` flag to actively check for updates at a set
cadence. Unlike the push system, polling "pulls" the file status manually,
which is a great fallback for network drives or container volumes where OS
events might get dropped. Set the interval to `0` to disable the polling
system.
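Conceptually, poll mode boils down to re-reading the file's metadata at a fixed cadence and reloading when it changes. A toy sketch of that idea (not Toolbox's actual implementation; it compares modification time and size):

```python
import os
import tempfile
import threading
import time

def poll_for_change(path: str, interval_s: float, max_checks: int) -> bool:
    """Poll the file's (mtime, size); return True as soon as either changes."""
    def snapshot():
        st = os.stat(path)
        return (st.st_mtime_ns, st.st_size)

    last = snapshot()
    for _ in range(max_checks):
        time.sleep(interval_s)
        if snapshot() != last:
            return True  # a real reloader would re-parse the config here
    return False

# Demo: create a config file, then append to it while a poller watches.
with tempfile.NamedTemporaryFile("w", suffix=".yaml", delete=False) as f:
    f.write("kind: tools\n")
    path = f.name

def edit_later():
    time.sleep(0.1)
    with open(path, "a") as f:
        f.write("# edited\n")

threading.Thread(target=edit_later).start()
print(poll_for_change(path, 0.05, 20))  # → True
os.remove(path)
```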
### Toolbox UI
To launch Toolbox's interactive UI, use the `--ui` flag. This allows you to test
tools and toolsets with features such as authorized parameters. To learn more,
visit [Toolbox UI](../user-guide/configuration/toolbox-ui/index.md).
========================================================================
## FAQ
========================================================================
**Hierarchy:** MCP Toolbox for Databases > Reference > FAQ
**URL:** https://googleapis.github.io/genai-toolbox/previews/PR-2723/reference/faq/
**Description:** Frequently asked questions about Toolbox.
## How can I deploy or run Toolbox?
MCP Toolbox for Databases is open-source and can be run or deployed to a
multitude of environments. For convenience, we release [compiled binaries and
docker images][release-notes] (but you can always compile yourself as well!).
For detailed instructions, check out these resources:
- [Quickstart: How to Run Locally](../build-with-mcp-toolbox/local_quickstart.md)
- [Deploy to Cloud Run](../user-guide/deploy-to/cloud-run/_index.md)
[release-notes]: https://github.com/googleapis/genai-toolbox/releases/
## Do I need a Google Cloud account/project to get started with Toolbox?
Nope! While some of the sources Toolbox connects to may require GCP
credentials, Toolbox itself doesn't require them and can connect to many
resources that don't.
## Does Toolbox take contributions from external users?
Absolutely! Please check out our [DEVELOPER.md][] for instructions on how to get
started developing _on_ Toolbox instead of with it, and the [CONTRIBUTING.md][]
for instructions on completing the CLA and getting a PR accepted.
[DEVELOPER.md]: https://github.com/googleapis/genai-toolbox/blob/main/DEVELOPER.md
[CONTRIBUTING.md]: https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md
## Can Toolbox support a feature to let me do _$FOO_?
Maybe? The best place to start is by [opening an issue][github-issue] for
discussion (or seeing if there is already one open), so we can better understand
your use case and the best way to solve it. Generally we aim to prioritize the
most popular issues, so make sure to +1 ones you are the most interested in.
[github-issue]: https://github.com/googleapis/genai-toolbox/issues
## Can Toolbox be used for non-database tools?
**Yes!** While Toolbox is primarily focused on databases, it also supports generic
**HTTP tools** (`type: http`). These allow you to connect your agents to REST APIs
and other web services, enabling workflows that extend beyond database interactions.
For configuration details, see the [HTTP Tools documentation](../integrations/http/_index.md).
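As a rough illustration of what such a configuration looks like, the sketch below mirrors the snippet style used in the quickstart above; the HTTP-specific field names (`baseUrl`, `method`, `path`) are assumptions here, so consult the HTTP Tools documentation for the authoritative schema:

```yaml
# Illustrative only -- see the HTTP Tools documentation for the exact schema.
kind: sources
name: my-rest-api
type: http
baseUrl: https://api.example.com
---
kind: tools
name: get_status
type: http
source: my-rest-api
method: GET
path: /status
description: Fetches the service status from the REST API.
```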
## Can I use _$BAR_ orchestration framework to use tools from Toolbox?
Currently, Toolbox supports a limited number of client SDKs. We are
investigating support for more frameworks, as well as more general approaches
for users without a framework, so look out for updates soon.
## Why does Toolbox use a server-client architecture pattern?
Toolbox's server-client architecture allows us to more easily support a wide
variety of languages and frameworks with a centralized implementation. It also
allows us to tackle problems like connection pooling, auth, or caching more
completely than entirely client-side solutions.
## Why was Toolbox written in Go?
While a large part of the Gen AI ecosystem is predominantly Python, we opted to
use Go. Go is still easy and simple to use, but makes it easier to write fast,
efficient, and concurrent servers. Additionally, given the server-client
architecture, we can still meet many developers where they are with clients in
their preferred language. As Gen AI matures, we want developers to be able to
use Toolbox on the serving path of mission-critical applications, and it's
easier to build the needed robustness, performance, and scalability in Go than
in Python.
## Is Toolbox compatible with Model Context Protocol (MCP)?
Yes! Toolbox is compatible with [Anthropic's Model Context Protocol
(MCP)](https://modelcontextprotocol.io/). Please check out [Connect via
MCP](../user-guide/connect-to/mcp-client/_index.md) to learn how to connect to
Toolbox with an MCP client.