operations()
Returns the operations Resource.
close()
Close httplib2 connections.
create(parent, body=None, x__xgafv=None)
Creates an Evaluation Item.
delete(name, x__xgafv=None)
Deletes an Evaluation Item.
get(name, x__xgafv=None)
Gets an Evaluation Item.
list(parent, filter=None, orderBy=None, pageSize=None, pageToken=None, x__xgafv=None)
Lists Evaluation Items.
list_next()
Retrieves the next page of results.
close()
Close httplib2 connections.
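For reference, a minimal sketch of obtaining this resource and releasing its connections. It assumes the standard googleapiclient discovery flow and that the resource is reached through a projects().locations().evaluationItems() accessor chain; the exact chain depends on where this page sits in the API surface.

  from googleapiclient import discovery

  # Build the Vertex AI (aiplatform) v1beta1 client. Assumes Application
  # Default Credentials are available in the environment.
  service = discovery.build('aiplatform', 'v1beta1')

  # Assumed accessor chain for this resource; adjust to match your API surface.
  items = service.projects().locations().evaluationItems()

  # ... issue create/get/list/delete requests via `items` ...

  # Release the underlying httplib2 connections when finished.
  items.close()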
create(parent, body=None, x__xgafv=None)
Creates an Evaluation Item.
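A hedged sketch of a minimal create() call, reusing the `service` client built in the sketch above. The parent value and body fields are illustrative placeholders, and the accessor chain is an assumption; the full body schema is documented under Args below.

  # Illustrative only: placeholder project/location and an assumed accessor
  # chain (projects().locations().evaluationItems()).
  parent = 'projects/my-project/locations/us-central1'
  body = {
      'displayName': 'my-evaluation-item',  # Required.
      'evaluationItemType': '...',          # Required; allowed values are defined by the EvaluationItem schema below.
  }
  request = (service.projects().locations().evaluationItems()
             .create(parent=parent, body=body))
  response = request.execute()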
Args:
parent: string, Required. The resource name of the Location to create the Evaluation Item in. Format: `projects/{project}/locations/{location}` (required)
body: object, The request body.
The object takes the form of:
{ # EvaluationItem is a single evaluation request or result. The content of an EvaluationItem is immutable - it cannot be updated once created. EvaluationItems can be deleted when no longer needed.
"createTime": "A String", # Output only. Timestamp when this item was created.
"displayName": "A String", # Required. The display name of the EvaluationItem.
"error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # Output only. Error for the evaluation item.
"code": 42, # The status code, which should be an enum value of google.rpc.Code.
"details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
{
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
],
"message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
},
"evaluationItemType": "A String", # Required. The type of the EvaluationItem.
"evaluationRequest": { # A single evaluation request supporting input for both single-turn model generation and multi-turn agent execution traces. Valid input modes: 1. Inference Mode: `prompt` is set (containing text or AgentData context). 2. Offline Eval Mode: `prompt` is unset, and `candidate_responses` contains `agent_data` (the completed execution trace). Validation Rule: Either `prompt` must be set, OR at least one of the `candidate_responses` must contain `agent_data`. # The request to evaluate.
"candidateResponses": [ # Optional. Responses from model under test and other baseline models for comparison.
{ # Responses from model or agent.
"agentData": { # Represents data specific to multi-turn agent evaluations. # Optional. Represents the complete execution trace of a multi-turn conversation, which can involve single or multiple agents. This field is used to provide the full output of an agent's run, including all turns and events, for direct evaluation.
"agents": { # Optional. A map containing the static configurations for each agent in the system. Key: agent_id (matches the `author` field in events). Value: The static configuration of the agent.
"a_key": { # Represents configuration for an Agent.
"agentId": "A String", # Required. Unique identifier of the agent. This ID is used to refer to this agent, e.g., in AgentEvent.author, or in the `sub_agents` field. It must be unique within the `agents` map.
"agentType": "A String", # Optional. The type or class of the agent (e.g., "LlmAgent", "RouterAgent", "ToolUseAgent"). Useful for the autorater to understand the expected behavior of the agent.
"description": "A String", # Optional. A high-level description of the agent's role and responsibilities. Critical for evaluating if the agent is routing tasks correctly.
"instruction": "A String", # Optional. Provides instructions for the LLM model, guiding the agent's behavior. Can be static or dynamic. Dynamic instructions can contain placeholders like {variable_name} that will be resolved at runtime using the `AgentEvent.state_delta` field.
"subAgents": [ # Optional. The list of valid agent IDs that this agent can delegate to. This defines the directed edges in the multi-agent system graph topology.
"A String",
],
"tools": [ # Optional. The list of tools available to this agent.
{ # Tool details that the model may use to generate response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g. FunctionDeclaration, Retrieval, or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode] and [CodeExecutionResult], which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
},
},
"turns": [ # Optional. A chronological list of conversation turns. Each turn represents a logical execution cycle (e.g., User Input -> Agent Response).
{ # Represents a single turn/invocation in the conversation.
"events": [ # Optional. The list of events that occurred during this turn.
{ # Represents a single event in the execution trace.
"activeTools": [ # Optional. The list of tools that were active/available to the agent at the time of this event. This overrides the `AgentConfig.tools` if set.
{ # Tool details that the model may use to generate response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g. FunctionDeclaration, Retrieval, or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode] and [CodeExecutionResult], which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
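# A minimal sketch of a `vertexRagStore` grounding source; the corpus resource name is
# hypothetical and the values are illustrative only:
#   "vertexRagStore": {
#     "ragResources": [
#       {"ragCorpus": "projects/my-project/locations/us-central1/ragCorpora/my-corpus"},
#     ],
#     "ragRetrievalConfig": {
#       "topK": 5,
#       "filter": {"vectorDistanceThreshold": 0.5},
#     },
#   },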
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
"author": "A String", # Required. The ID of the agent or entity that generated this event. Use "user" to denote events generated by the end-user.
"content": { # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message. # Required. The content of the event (e.g., text response, tool call, tool response).
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
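# A minimal sketch (values illustrative only, not part of the schema) of how a streamed
# function call could appear, assuming a hypothetical `search_flights` function:
#   "functionCall": {
#     "name": "search_flights",
#     "id": "call-1",
#     "partialArgs": [
#       {"jsonPath": "$.destination", "stringValue": "SFO", "willContinue": False},
#       {"jsonPath": "$.passengers", "numberValue": 2, "willContinue": False},
#     ],
#     "willContinue": False,  # no further partial messages follow for this call
#   },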
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
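# A minimal sketch of the "output"/"error" convention described above, for the same
# hypothetical `search_flights` call; values are illustrative only:
#   "functionResponse": {
#     "id": "call-1",
#     "name": "search_flights",
#     "response": {
#       "output": {"flights": [{"flight": "XX123", "price_usd": 199}]},
#       # on failure, an "error" key could be used instead, e.g. "error": {"message": "no flights found"}
#     },
#   },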
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
"eventTime": "A String", # Optional. The timestamp when the event occurred.
"stateDelta": { # Optional. The change in the session state caused by this event. This is a key-value map of fields that were modified or added by the event.
"a_key": "", # Properties of the object.
},
},
],
"turnId": "A String", # Optional. A unique identifier for the turn. Useful for referencing specific turns across systems.
"turnIndex": 42, # Required. The 0-based index of the turn in the conversation sequence.
},
],
},
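# A minimal sketch of an `agentData` trace with a single agent and a single turn
# (one user message and one model reply); the agent id and text are illustrative only:
#   "agentData": {
#     "agents": {"assistant": {"agentId": "assistant", "agentType": "LlmAgent"}},
#     "turns": [
#       {
#         "turnIndex": 0,
#         "events": [
#           {"author": "user", "content": {"role": "user", "parts": [{"text": "Book a flight to SFO."}]}},
#           {"author": "assistant", "content": {"role": "model", "parts": [{"text": "Sure, when would you like to travel?"}]}},
#         ],
#       },
#     ],
#   },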
"candidate": "A String", # Required. The name of the candidate that produced the response.
"error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # Output only. Error while scraping model or agent.
"code": 42, # The status code, which should be an enum value of google.rpc.Code.
"details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
{
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
],
"message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
},
"events": [ # Optional. Intermediate events (such as tool calls and responses) that led to the final response.
{ # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message.
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
],
"text": "A String", # Text response.
"value": "", # Fields and values that can be used to populate the response template.
},
],
"goldenResponse": { # Responses from model or agent. # Optional. The Ideal response or ground truth.
"agentData": { # Represents data specific to multi-turn agent evaluations. # Optional. Represents the complete execution trace of a multi-turn conversation, which can involve single or multiple agents. This field is used to provide the full output of an agent's run, including all turns and events, for direct evaluation.
"agents": { # Optional. A map containing the static configurations for each agent in the system. Key: agent_id (matches the `author` field in events). Value: The static configuration of the agent.
"a_key": { # Represents configuration for an Agent.
"agentId": "A String", # Required. Unique identifier of the agent. This ID is used to refer to this agent, e.g., in AgentEvent.author, or in the `sub_agents` field. It must be unique within the `agents` map.
"agentType": "A String", # Optional. The type or class of the agent (e.g., "LlmAgent", "RouterAgent", "ToolUseAgent"). Useful for the autorater to understand the expected behavior of the agent.
"description": "A String", # Optional. A high-level description of the agent's role and responsibilities. Critical for evaluating if the agent is routing tasks correctly.
"instruction": "A String", # Optional. Provides instructions for the LLM model, guiding the agent's behavior. Can be static or dynamic. Dynamic instructions can contain placeholders like {variable_name} that will be resolved at runtime using the `AgentEvent.state_delta` field.
"subAgents": [ # Optional. The list of valid agent IDs that this agent can delegate to. This defines the directed edges in the multi-agent system graph topology.
"A String",
],
"tools": [ # Optional. The list of tools available to this agent.
{ # Tool details that the model may use to generate a response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g. FunctionDeclaration, Retrieval, or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
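# The "1 required and 1 optional parameter" example from the description above, written
# out as the Schema object this field expects; illustrative only:
#   "parameters": {
#     "type": "OBJECT",
#     "properties": {
#       "param1": {"type": "STRING"},
#       "param2": {"type": "INTEGER"},
#     },
#     "required": ["param1"],
#   },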
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
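# A minimal sketch of an API-key `authConfig` for the external API; the secret resource
# name is hypothetical and the enum values shown are assumptions, illustrative only:
#   "authConfig": {
#     "authType": "API_KEY_AUTH",
#     "apiKeyConfig": {
#       "apiKeySecret": "projects/my-project/secrets/my-api-key/versions/latest",
#       "httpElementLocation": "HTTP_IN_QUERY",
#       "name": "api_key",
#     },
#   },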
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
},
},
"turns": [ # Optional. A chronological list of conversation turns. Each turn represents a logical execution cycle (e.g., User Input -> Agent Response).
{ # Represents a single turn/invocation in the conversation.
"events": [ # Optional. The list of events that occurred during this turn.
{ # Represents a single event in the execution trace.
"activeTools": [ # Optional. The list of tools that were active/available to the agent at the time of this event. This overrides the `AgentConfig.tools` if set.
{ # Tool details that the model may use to generate a response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g. FunctionDeclaration, Retrieval, or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
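# Illustrative sketch (hypothetical function name): a minimal Tool entry in the list above might declare a single function, e.g. {"functionDeclarations": [{"name": "get_weather", "description": "Look up current weather for a city", "parametersJsonSchema": {"type": "object", "properties": {"city": {"type": "string"}}, "required": ["city"]}}]}.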
"author": "A String", # Required. The ID of the agent or entity that generated this event. Use "user" to denote events generated by the end-user.
"content": { # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message. # Required. The content of the event (e.g., text response, tool call, tool response).
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
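# Illustrative sketch (hypothetical argument): a string argument streamed in two chunks could arrive as two entries with "jsonPath": "$.query", the first with "stringValue": "weather in " and "willContinue": True, the second with "stringValue": "Paris" and "willContinue": False.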
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
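# Illustrative sketch (hypothetical payloads): a successful tool result could be passed as {"output": {"temperature_c": 21}}, while a failure could be passed as {"error": {"message": "city not found"}}.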
},
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
"eventTime": "A String", # Optional. The timestamp when the event occurred.
"stateDelta": { # Optional. The change in the session state caused by this event. This is a key-value map of fields that were modified or added by the event.
"a_key": "", # Properties of the object.
},
},
],
"turnId": "A String", # Optional. A unique identifier for the turn. Useful for referencing specific turns across systems.
"turnIndex": 42, # Required. The 0-based index of the turn in the conversation sequence.
},
],
},
"candidate": "A String", # Required. The name of the candidate that produced the response.
"error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # Output only. Error while scraping model or agent.
"code": 42, # The status code, which should be an enum value of google.rpc.Code.
"details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
{
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
],
"message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
},
"events": [ # Optional. Intermediate events (such as tool calls and responses) that led to the final response.
{ # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message.
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
],
"text": "A String", # Text response.
"value": "", # Fields and values that can be used to populate the response template.
},
"prompt": { # Prompt to be evaluated. This can represent a single-turn prompt or a multi-turn conversation for agent evaluations. # Optional. The request/prompt to evaluate.
"agentData": { # Represents data specific to multi-turn agent evaluations. # Optional. Represents the complete execution trace of a multi-turn conversation, which can involve single or multiple agents. This serves as the input context for agent scraping.
"agents": { # Optional. A map containing the static configurations for each agent in the system. Key: agent_id (matches the `author` field in events). Value: The static configuration of the agent.
"a_key": { # Represents configuration for an Agent.
"agentId": "A String", # Required. Unique identifier of the agent. This ID is used to refer to this agent, e.g., in AgentEvent.author, or in the `sub_agents` field. It must be unique within the `agents` map.
"agentType": "A String", # Optional. The type or class of the agent (e.g., "LlmAgent", "RouterAgent", "ToolUseAgent"). Useful for the autorater to understand the expected behavior of the agent.
"description": "A String", # Optional. A high-level description of the agent's role and responsibilities. Critical for evaluating if the agent is routing tasks correctly.
"instruction": "A String", # Optional. Provides instructions for the LLM model, guiding the agent's behavior. Can be static or dynamic. Dynamic instructions can contain placeholders like {variable_name} that will be resolved at runtime using the `AgentEvent.state_delta` field.
"subAgents": [ # Optional. The list of valid agent IDs that this agent can delegate to. This defines the directed edges in the multi-agent system graph topology.
"A String",
],
"tools": [ # Optional. The list of tools available to this agent.
{ # Tool details that the model may use to generate a response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g. FunctionDeclaration, Retrieval, or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode] and [CodeExecutionResult], which are the input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
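# Illustrative example (not part of the schema): `datastore` and `engine` are mutually exclusive, so a data-store-backed configuration (hypothetical resource names) might look like:
#   "vertexAiSearch": {
#     "datastore": "projects/my-project/locations/global/collections/default_collection/dataStores/my-datastore",
#     "maxResults": 10,
#   }
# An engine-backed configuration would set `engine` (optionally with `dataStoreSpecs`) instead.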
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
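# Illustrative example (not part of the schema): grounding on a single RAG corpus (hypothetical resource name) might look like:
#   "vertexRagStore": {
#     "ragResources": [{"ragCorpus": "projects/my-project/locations/us-central1/ragCorpora/my-corpus"}],
#     "ragRetrievalConfig": {"topK": 5},
#   }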
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
},
},
"turns": [ # Optional. A chronological list of conversation turns. Each turn represents a logical execution cycle (e.g., User Input -> Agent Response).
{ # Represents a single turn/invocation in the conversation.
"events": [ # Optional. The list of events that occurred during this turn.
{ # Represents a single event in the execution trace.
"activeTools": [ # Optional. The list of tools that were active/available to the agent at the time of this event. This overrides the `AgentConfig.tools` if set.
{ # Tool details that the model may use to generate a response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g. FunctionDeclaration, Retrieval, or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode] and [CodeExecutionResult], which are the input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
"author": "A String", # Required. The ID of the agent or entity that generated this event. Use "user" to denote events generated by the end-user.
"content": { # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message. # Required. The content of the event (e.g., text response, tool call, tool response).
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
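# Illustrative example (not part of the schema): a response to a hypothetical `get_weather` function call might populate `response` as:
#   "functionResponse": {
#     "id": "call-1",
#     "name": "get_weather",
#     "response": {"output": {"temperature_c": 21, "condition": "sunny"}},
#   }
# The "error" key can be used instead of "output" to report a failure.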
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
"eventTime": "A String", # Optional. The timestamp when the event occurred.
"stateDelta": { # Optional. The change in the session state caused by this event. This is a key-value map of fields that were modified or added by the event.
"a_key": "", # Properties of the object.
},
},
],
"turnId": "A String", # Optional. A unique identifier for the turn. Useful for referencing specific turns across systems.
"turnIndex": 42, # Required. The 0-based index of the turn in the conversation sequence.
},
],
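# Illustrative example (not part of the schema): a minimal single-turn trace with one user event and one agent reply (hypothetical agent id "root_agent") might look like:
#   "turns": [{
#     "turnIndex": 0,
#     "events": [
#       {"author": "user", "content": {"role": "user", "parts": [{"text": "What is the weather in Paris?"}]}},
#       {"author": "root_agent", "content": {"role": "model", "parts": [{"text": "It is sunny in Paris."}]}},
#     ],
#   }]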
},
"promptTemplateData": { # Message to hold a prompt template and the values to populate the template. # Prompt template data.
"values": { # The values for fields in the prompt template.
"a_key": { # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message.
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
},
},
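# Illustrative example (not part of the schema): populating a hypothetical template variable named "question" with a single text part might look like:
#   "promptTemplateData": {
#     "values": {"question": {"parts": [{"text": "Summarize the attached document."}]}}
#   }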
"text": "A String", # Text prompt.
"value": "", # Fields and values that can be used to populate the prompt template.
},
"rubrics": { # Optional. Named groups of rubrics associated with this prompt. The key is a user-defined name for the rubric group.
"a_key": { # A group of rubrics, used for grouping rubrics based on a metric or a version.
"displayName": "A String", # Human-readable name for the group. This should be unique within a given context if used for display or selection. Example: "Instruction Following V1", "Content Quality - Summarization Task".
"groupId": "A String", # Unique identifier for the group.
"rubrics": [ # Rubrics that are part of this group.
{ # Message representing a single testable criterion for evaluation. One input prompt could have multiple rubrics.
"content": { # Content of the rubric, defining the testable criteria. # Required. The actual testable criteria for the rubric.
"property": { # Defines criteria based on a specific property. # Evaluation criteria based on a specific property.
"description": "A String", # Description of the property being evaluated. Example: "The model's response is grammatically correct."
},
},
"importance": "A String", # Optional. The relative importance of this rubric.
"rubricId": "A String", # Unique identifier for the rubric. This ID is used to refer to this rubric, e.g., in RubricVerdict.
"type": "A String", # Optional. A type designator for the rubric, which can inform how it's evaluated or interpreted by systems or users. It's recommended to use consistent, well-defined, upper snake_case strings. Examples: "SUMMARIZATION_QUALITY", "SAFETY_HARMFUL_CONTENT", "INSTRUCTION_ADHERENCE".
},
],
},
},
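# As a sketch (group name, IDs, and enum values hypothetical), one entry of
# this map could look like:
#   "instruction_following": {
#     "displayName": "Instruction Following V1",
#     "groupId": "if-v1",
#     "rubrics": [{
#       "rubricId": "r1",
#       "importance": "HIGH",
#       "type": "INSTRUCTION_ADHERENCE",
#       "content": {"property": {"description": "The response uses a formal tone."}},
#     }],
#   }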
},
"evaluationResponse": { # Evaluation result. # Output only. The response from evaluation.
"candidateResults": [ # Optional. The results for the metric.
{ # Result for a single candidate.
"additionalResults": "", # Optional. Additional results for the metric.
"candidate": "A String", # Required. The candidate that is being evaluated. The value is the same as the candidate name in the EvaluationRequest.
"explanation": "A String", # Optional. The explanation for the metric.
"metric": "A String", # Required. The metric that was evaluated.
"rubricVerdicts": [ # Optional. The rubric verdicts for the metric.
{ # Represents the verdict of an evaluation against a single rubric.
"evaluatedRubric": { # Message representing a single testable criterion for evaluation. One input prompt could have multiple rubrics. # Required. The full rubric definition that was evaluated. Storing this ensures the verdict is self-contained and understandable, especially if the original rubric definition changes or was dynamically generated.
"content": { # Content of the rubric, defining the testable criteria. # Required. The actual testable criteria for the rubric.
"property": { # Defines criteria based on a specific property. # Evaluation criteria based on a specific property.
"description": "A String", # Description of the property being evaluated. Example: "The model's response is grammatically correct."
},
},
"importance": "A String", # Optional. The relative importance of this rubric.
"rubricId": "A String", # Unique identifier for the rubric. This ID is used to refer to this rubric, e.g., in RubricVerdict.
"type": "A String", # Optional. A type designator for the rubric, which can inform how it's evaluated or interpreted by systems or users. It's recommended to use consistent, well-defined, upper snake_case strings. Examples: "SUMMARIZATION_QUALITY", "SAFETY_HARMFUL_CONTENT", "INSTRUCTION_ADHERENCE".
},
"reasoning": "A String", # Optional. Human-readable reasoning or explanation for the verdict. This can include specific examples or details from the evaluated content that justify the given verdict.
"verdict": True or False, # Required. Outcome of the evaluation against the rubric, represented as a boolean. `true` indicates a "Pass", `false` indicates a "Fail".
},
],
"score": 3.14, # Optional. The score for the metric.
},
],
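# For illustration, a single returned candidate result might resemble
# (all values hypothetical):
#   {"candidate": "candidate_1", "metric": "instruction_following", "score": 0.8,
#    "explanation": "Mostly follows the instructions.",
#    "rubricVerdicts": [{"verdict": True,
#                        "reasoning": "The response keeps the requested formal tone."}]}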
"evaluationRequest": "A String", # Required. The request item that was evaluated. Format: projects/{project}/locations/{location}/evaluationItems/{evaluation_item}
"evaluationRun": "A String", # Required. The evaluation run that was used to generate the result. Format: projects/{project}/locations/{location}/evaluationRuns/{evaluation_run}
"metadata": "", # Optional. Metadata about the evaluation result.
"metric": "A String", # Required. The metric that was evaluated.
"request": { # A single evaluation request supporting input for both single-turn model generation and multi-turn agent execution traces. Valid input modes: 1. Inference Mode: `prompt` is set (containing text or AgentData context). 2. Offline Eval Mode: `prompt` is unset, and `candidate_responses` contains `agent_data` (the completed execution trace). Validation Rule: Either `prompt` must be set, OR at least one of the `candidate_responses` must contain `agent_data`. # Required. The request that was evaluated.
"candidateResponses": [ # Optional. Responses from model under test and other baseline models for comparison.
{ # Responses from model or agent.
"agentData": { # Represents data specific to multi-turn agent evaluations. # Optional. Represents the complete execution trace of a multi-turn conversation, which can involve single or multiple agents. This field is used to provide the full output of an agent's run, including all turns and events, for direct evaluation.
"agents": { # Optional. A map containing the static configurations for each agent in the system. Key: agent_id (matches the `author` field in events). Value: The static configuration of the agent.
"a_key": { # Represents configuration for an Agent.
"agentId": "A String", # Required. Unique identifier of the agent. This ID is used to refer to this agent, e.g., in AgentEvent.author, or in the `sub_agents` field. It must be unique within the `agents` map.
"agentType": "A String", # Optional. The type or class of the agent (e.g., "LlmAgent", "RouterAgent", "ToolUseAgent"). Useful for the autorater to understand the expected behavior of the agent.
"description": "A String", # Optional. A high-level description of the agent's role and responsibilities. Critical for evaluating if the agent is routing tasks correctly.
"instruction": "A String", # Optional. Provides instructions for the LLM model, guiding the agent's behavior. Can be static or dynamic. Dynamic instructions can contain placeholders like {variable_name} that will be resolved at runtime using the `AgentEvent.state_delta` field.
"subAgents": [ # Optional. The list of valid agent IDs that this agent can delegate to. This defines the directed edges in the multi-agent system graph topology.
"A String",
],
"tools": [ # Optional. The list of tools available to this agent.
{ # Tool details that the model may use to generate response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g FunctionDeclaration, Retrieval or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
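# A minimal function declaration sketch (name and parameters are illustrative only):
#   {"name": "get_weather",
#    "description": "Look up the current weather for a city.",
#    "parameters": {"type": "OBJECT",
#                   "properties": {"city": {"type": "STRING"}},
#                   "required": ["city"]}}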
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
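# A hypothetical API-key configuration for this auth block (enum values assumed):
#   {"authType": "API_KEY_AUTH",
#    "apiKeyConfig": {"name": "api_key",
#                     "httpElementLocation": "HTTP_IN_QUERY",
#                     "apiKeySecret": "projects/my-project/secrets/my-key/versions/1"}}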
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
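# For example, grounding on a single data store (resource name hypothetical):
#   {"datastore": "projects/my-project/locations/global/collections/default_collection/dataStores/my-store",
#    "maxResults": 10}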
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
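# A hypothetical RAG store configuration limited to one corpus:
#   {"ragResources": [{"ragCorpus": "projects/my-project/locations/us-central1/ragCorpora/123"}],
#    "ragRetrievalConfig": {"topK": 5}}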
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
},
},
"turns": [ # Optional. A chronological list of conversation turns. Each turn represents a logical execution cycle (e.g., User Input -> Agent Response).
{ # Represents a single turn/invocation in the conversation.
"events": [ # Optional. The list of events that occurred during this turn.
{ # Represents a single event in the execution trace.
"activeTools": [ # Optional. The list of tools that were active/available to the agent at the time of this event. This overrides the `AgentConfig.tools` if set.
{ # Tool details that the model may use to generate response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g FunctionDeclaration, Retrieval or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
},
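# Illustrative sketch only: a retrieval tool backed by a single Vertex RAG corpus. The corpus resource name is a placeholder.
#   "retrieval": {
#     "vertexRagStore": {
#       "ragResources": [
#         {"ragCorpus": "projects/my-project/locations/us-central1/ragCorpora/my-corpus"},
#       ],
#       "ragRetrievalConfig": {"topK": 5},
#     },
#   },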
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
"author": "A String", # Required. The ID of the agent or entity that generated this event. Use "user" to denote events generated by the end-user.
"content": { # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message. # Required. The content of the event (e.g., text response, tool call, tool response).
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
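# Illustrative sketch only: a Part carrying a predicted function call. The function name and arguments are hypothetical.
#   {
#     "functionCall": {
#       "name": "get_weather",
#       "args": {"location": "Paris"},
#     },
#   },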
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
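# Illustrative sketch only: a Part returning the result of the hypothetical function call above, using the "output" key.
#   {
#     "functionResponse": {
#       "name": "get_weather",
#       "response": {"output": {"temperature_c": 21}},
#     },
#   },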
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
"eventTime": "A String", # Optional. The timestamp when the event occurred.
"stateDelta": { # Optional. The change in the session state caused by this event. This is a key-value map of fields that were modified or added by the event.
"a_key": "", # Properties of the object.
},
},
],
"turnId": "A String", # Optional. A unique identifier for the turn. Useful for referencing specific turns across systems.
"turnIndex": 42, # Required. The 0-based index of the turn in the conversation sequence.
},
],
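# Illustrative sketch only: a minimal `turns` list with one turn containing a user message and a model reply. Authors and text are hypothetical.
#   "turns": [
#     {
#       "turnIndex": 0,
#       "events": [
#         {"author": "user", "content": {"role": "user", "parts": [{"text": "Hi"}]}},
#         {"author": "assistant_agent", "content": {"role": "model", "parts": [{"text": "Hello!"}]}},
#       ],
#     },
#   ],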
},
"candidate": "A String", # Required. The name of the candidate that produced the response.
"error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # Output only. Error while scraping model or agent.
"code": 42, # The status code, which should be an enum value of google.rpc.Code.
"details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
{
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
],
"message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
},
"events": [ # Optional. Intermediate events (such as tool calls and responses) that led to the final response.
{ # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message.
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
],
"text": "A String", # Text response.
"value": "", # Fields and values that can be used to populate the response template.
},
],
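# Illustrative sketch only: a single plain-text candidate response. The candidate name and text are hypothetical.
#   "candidateResponses": [
#     {
#       "candidate": "candidate_model",
#       "text": "The capital of France is Paris.",
#     },
#   ],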
"goldenResponse": { # Responses from model or agent. # Optional. The Ideal response or ground truth.
"agentData": { # Represents data specific to multi-turn agent evaluations. # Optional. Represents the complete execution trace of a multi-turn conversation, which can involve single or multiple agents. This field is used to provide the full output of an agent's run, including all turns and events, for direct evaluation.
"agents": { # Optional. A map containing the static configurations for each agent in the system. Key: agent_id (matches the `author` field in events). Value: The static configuration of the agent.
"a_key": { # Represents configuration for an Agent.
"agentId": "A String", # Required. Unique identifier of the agent. This ID is used to refer to this agent, e.g., in AgentEvent.author, or in the `sub_agents` field. It must be unique within the `agents` map.
"agentType": "A String", # Optional. The type or class of the agent (e.g., "LlmAgent", "RouterAgent", "ToolUseAgent"). Useful for the autorater to understand the expected behavior of the agent.
"description": "A String", # Optional. A high-level description of the agent's role and responsibilities. Critical for evaluating if the agent is routing tasks correctly.
"instruction": "A String", # Optional. Provides instructions for the LLM model, guiding the agent's behavior. Can be static or dynamic. Dynamic instructions can contain placeholders like {variable_name} that will be resolved at runtime using the `AgentEvent.state_delta` field.
"subAgents": [ # Optional. The list of valid agent IDs that this agent can delegate to. This defines the directed edges in the multi-agent system graph topology.
"A String",
],
"tools": [ # Optional. The list of tools available to this agent.
{ # Tool details that the model may use to generate response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g FunctionDeclaration, Retrieval or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
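# Illustrative sketch only: one function declaration with a single required string parameter. Names and descriptions are hypothetical.
#   "functionDeclarations": [
#     {
#       "name": "get_weather",
#       "description": "Returns the current weather for a city.",
#       "parameters": {
#         "type": "OBJECT",
#         "properties": {"location": {"type": "STRING"}},
#         "required": ["location"],
#       },
#     },
#   ],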
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
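# Illustrative sketch only: enabling Google Search grounding while excluding two placeholder domains.
#   "googleSearch": {
#     "excludeDomains": ["example.com", "example.org"],
#   },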
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
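# Illustrative sketch only: API-key auth via a Secret Manager secret version. The secret resource name is a placeholder, and the enum values shown ("API_KEY_AUTH", "HTTP_IN_QUERY") are assumptions about the accepted auth type and key location names.
#   "authConfig": {
#     "authType": "API_KEY_AUTH",
#     "apiKeyConfig": {
#       "apiKeySecret": "projects/my-project/secrets/my-api-key/versions/latest",
#       "httpElementLocation": "HTTP_IN_QUERY",
#       "name": "api_key",
#     },
#   },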
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
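# Illustrative sketch only: an external Elasticsearch-backed grounding source. The endpoint, index, and template are placeholders, and "ELASTIC_SEARCH" is an assumed value for `apiSpec`.
#   "externalApi": {
#     "apiSpec": "ELASTIC_SEARCH",
#     "endpoint": "https://search.example.com:443/search",
#     "elasticSearchParams": {
#       "index": "my-index",
#       "searchTemplate": "my-template",
#       "numHits": 10,
#     },
#   },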
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
},
},
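# Illustrative sketch only: an `agents` map describing one hypothetical LLM agent that can delegate to a search sub-agent.
#   "agents": {
#     "assistant_agent": {
#       "agentId": "assistant_agent",
#       "agentType": "LlmAgent",
#       "description": "Answers user questions and routes search tasks to a sub-agent.",
#       "subAgents": ["search_agent"],
#     },
#   },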
"turns": [ # Optional. A chronological list of conversation turns. Each turn represents a logical execution cycle (e.g., User Input -> Agent Response).
{ # Represents a single turn/invocation in the conversation.
"events": [ # Optional. The list of events that occurred during this turn.
{ # Represents a single event in the execution trace.
"activeTools": [ # Optional. The list of tools that were active/available to the agent at the time of this event. This overrides the `AgentConfig.tools` if set.
{ # Tool details that the model may use to generate response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g FunctionDeclaration, Retrieval or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
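# Illustrative sketch (not part of the generated reference): assuming a hypothetical "get_weather" function, a minimal `functionDeclarations` entry might look like:
#   { "name": "get_weather",
#     "description": "Returns the current weather for a city.",
#     "parametersJsonSchema": { "type": "object", "properties": { "city": { "type": "string" } }, "required": ["city"] } }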
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
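# Illustrative sketch (not part of the generated reference): an `externalApi` grounding source using API-key auth could be configured roughly as below; the endpoint, secret resource name, and enum values are assumptions, not confirmed by this reference:
#   { "apiSpec": "SIMPLE_SEARCH",
#     "endpoint": "https://acme.com:443/search",
#     "authConfig": { "authType": "API_KEY_AUTH",
#                     "apiKeyConfig": { "apiKeySecret": "projects/my-project/secrets/my-secret/versions/1", "httpElementLocation": "HTTP_IN_QUERY", "name": "api_key" } },
#     "simpleSearchParams": {} }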
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
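# Illustrative sketch (not part of the generated reference): since `datastore` and `engine` are mutually exclusive, a `vertexAiSearch` source typically sets exactly one of them; resource names below are placeholders:
#   { "engine": "projects/my-project/locations/global/collections/default_collection/engines/my-engine",
#     "maxResults": 10 }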
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
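# Illustrative sketch (not part of the generated reference): a `vertexRagStore` pointing at a single corpus (placeholder resource name) might look like:
#   { "ragResources": [ { "ragCorpus": "projects/my-project/locations/us-central1/ragCorpora/my-corpus" } ],
#     "ragRetrievalConfig": { "topK": 5 } }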
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
"author": "A String", # Required. The ID of the agent or entity that generated this event. Use "user" to denote events generated by the end-user.
"content": { # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message. # Required. The content of the event (e.g., text response, tool call, tool response).
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
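# Illustrative sketch (not part of the generated reference): a non-streamed, model-predicted `functionCall` part (names and values are placeholders) might look like:
#   { "functionCall": { "id": "call-1", "name": "get_weather", "args": { "city": "Paris" } } }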
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
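# Illustrative sketch (not part of the generated reference): the matching client-provided `functionResponse` part would echo the call `id` and use the "output" key (or "error" on failure); values are placeholders:
#   { "functionResponse": { "id": "call-1", "name": "get_weather", "response": { "output": { "tempC": 21 } } } }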
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
"eventTime": "A String", # Optional. The timestamp when the event occurred.
"stateDelta": { # Optional. The change in the session state caused by this event. This is a key-value map of fields that were modified or added by the event.
"a_key": "", # Properties of the object.
},
},
],
"turnId": "A String", # Optional. A unique identifier for the turn. Useful for referencing specific turns across systems.
"turnIndex": 42, # Required. The 0-based index of the turn in the conversation sequence.
},
],
},
"candidate": "A String", # Required. The name of the candidate that produced the response.
"error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # Output only. Error while scraping model or agent.
"code": 42, # The status code, which should be an enum value of google.rpc.Code.
"details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
{
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
],
"message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
},
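# Illustrative sketch (not part of the generated reference): a populated scraping error follows the standard google.rpc.Status shape, e.g. code 3 (INVALID_ARGUMENT) with a developer-facing message:
#   { "code": 3, "message": "Candidate endpoint rejected the request.", "details": [] }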
"events": [ # Optional. Intermediate events (such as tool calls and responses) that led to the final response.
{ # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message.
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
],
"text": "A String", # Text response.
"value": "", # Fields and values that can be used to populate the response template.
},
"prompt": { # Prompt to be evaluated. This can represent a single-turn prompt or a multi-turn conversation for agent evaluations. # Optional. The request/prompt to evaluate.
"agentData": { # Represents data specific to multi-turn agent evaluations. # Optional. Represents the complete execution trace of a multi-turn conversation, which can involve single or multiple agents. This serves as the input context for agent scraping.
"agents": { # Optional. A map containing the static configurations for each agent in the system. Key: agent_id (matches the `author` field in events). Value: The static configuration of the agent.
"a_key": { # Represents configuration for an Agent.
"agentId": "A String", # Required. Unique identifier of the agent. This ID is used to refer to this agent, e.g., in AgentEvent.author, or in the `sub_agents` field. It must be unique within the `agents` map.
"agentType": "A String", # Optional. The type or class of the agent (e.g., "LlmAgent", "RouterAgent", "ToolUseAgent"). Useful for the autorater to understand the expected behavior of the agent.
"description": "A String", # Optional. A high-level description of the agent's role and responsibilities. Critical for evaluating if the agent is routing tasks correctly.
"instruction": "A String", # Optional. Provides instructions for the LLM model, guiding the agent's behavior. Can be static or dynamic. Dynamic instructions can contain placeholders like {variable_name} that will be resolved at runtime using the `AgentEvent.state_delta` field.
"subAgents": [ # Optional. The list of valid agent IDs that this agent can delegate to. This defines the directed edges in the multi-agent system graph topology.
"A String",
],
"tools": [ # Optional. The list of tools available to this agent.
{ # Tool details that the model may use to generate a response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g. FunctionDeclaration, Retrieval, or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
},
},
"turns": [ # Optional. A chronological list of conversation turns. Each turn represents a logical execution cycle (e.g., User Input -> Agent Response).
{ # Represents a single turn/invocation in the conversation.
"events": [ # Optional. The list of events that occurred during this turn.
{ # Represents a single event in the execution trace.
"activeTools": [ # Optional. The list of tools that were active/available to the agent at the time of this event. This overrides the `AgentConfig.tools` if set.
{ # Tool details that the model may use to generate a response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g., FunctionDeclaration, Retrieval, or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
"author": "A String", # Required. The ID of the agent or entity that generated this event. Use "user" to denote events generated by the end-user.
"content": { # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message. # Required. The content of the event (e.g., text response, tool call, tool response).
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
"eventTime": "A String", # Optional. The timestamp when the event occurred.
"stateDelta": { # Optional. The change in the session state caused by this event. This is a key-value map of fields that were modified or added by the event.
"a_key": "", # Properties of the object.
},
},
],
"turnId": "A String", # Optional. A unique identifier for the turn. Useful for referencing specific turns across systems.
"turnIndex": 42, # Required. The 0-based index of the turn in the conversation sequence.
},
],
},
"promptTemplateData": { # Message to hold a prompt template and the values to populate the template. # Prompt template data.
"values": { # The values for fields in the prompt template.
"a_key": { # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message.
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
},
},
"text": "A String", # Text prompt.
"value": "", # Fields and values that can be used to populate the prompt template.
},
"rubrics": { # Optional. Named groups of rubrics associated with this prompt. The key is a user-defined name for the rubric group.
"a_key": { # A group of rubrics, used for grouping rubrics based on a metric or a version.
"displayName": "A String", # Human-readable name for the group. This should be unique within a given context if used for display or selection. Example: "Instruction Following V1", "Content Quality - Summarization Task".
"groupId": "A String", # Unique identifier for the group.
"rubrics": [ # Rubrics that are part of this group.
{ # Message representing a single testable criterion for evaluation. One input prompt could have multiple rubrics.
"content": { # Content of the rubric, defining the testable criteria. # Required. The actual testable criteria for the rubric.
"property": { # Defines criteria based on a specific property. # Evaluation criteria based on a specific property.
"description": "A String", # Description of the property being evaluated. Example: "The model's response is grammatically correct."
},
},
"importance": "A String", # Optional. The relative importance of this rubric.
"rubricId": "A String", # Unique identifier for the rubric. This ID is used to refer to this rubric, e.g., in RubricVerdict.
"type": "A String", # Optional. A type designator for the rubric, which can inform how it's evaluated or interpreted by systems or users. It's recommended to use consistent, well-defined, upper snake_case strings. Examples: "SUMMARIZATION_QUALITY", "SAFETY_HARMFUL_CONTENT", "INSTRUCTION_ADHERENCE".
},
],
},
},
},
},
"gcsUri": "A String", # The Cloud Storage object where the request or response is stored.
"labels": { # Optional. Labels for the EvaluationItem.
"a_key": "A String",
},
"metadata": "", # Optional. Metadata for the EvaluationItem.
"name": "A String", # Identifier. The resource name of the EvaluationItem. Format: `projects/{project}/locations/{location}/evaluationItems/{evaluation_item}`
}
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
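
For orientation, a minimal sketch of calling this method through the google-api-python-client discovery interface follows. It is not part of the generated reference: the project ID, location, display name, and evaluation item type shown are placeholders, the projects().locations().evaluationItems() resource chain is assumed from the resource name format documented above, and Application Default Credentials (and any regional endpoint configuration your setup requires) are assumed to be in place.

from googleapiclient import discovery

# Build a client for the Vertex AI API; credentials are taken from
# Application Default Credentials. A regional endpoint may be needed in practice.
service = discovery.build("aiplatform", "v1beta1")

# Placeholder parent in the documented format projects/{project}/locations/{location}.
parent = "projects/my-project/locations/us-central1"

# Only the fields marked Required in the schema above are populated here.
# The evaluationItemType value is a placeholder, not a documented enum value.
body = {
    "displayName": "my-evaluation-item",
    "evaluationItemType": "EVALUATION_ITEM_TYPE_UNSPECIFIED",
}

item = (
    service.projects()
    .locations()
    .evaluationItems()
    .create(parent=parent, body=body)
    .execute()
)
print(item["name"])
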
Returns:
An object of the form:
{ # EvaluationItem is a single evaluation request or result. The content of an EvaluationItem is immutable - it cannot be updated once created. EvaluationItems can be deleted when no longer needed.
"createTime": "A String", # Output only. Timestamp when this item was created.
"displayName": "A String", # Required. The display name of the EvaluationItem.
"error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # Output only. Error for the evaluation item.
"code": 42, # The status code, which should be an enum value of google.rpc.Code.
"details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
{
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
],
"message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
},
"evaluationItemType": "A String", # Required. The type of the EvaluationItem.
"evaluationRequest": { # A single evaluation request supporting input for both single-turn model generation and multi-turn agent execution traces. Valid input modes: 1. Inference Mode: `prompt` is set (containing text or AgentData context). 2. Offline Eval Mode: `prompt` is unset, and `candidate_responses` contains `agent_data` (the completed execution trace). Validation Rule: Either `prompt` must be set, OR at least one of the `candidate_responses` must contain `agent_data`. # The request to evaluate.
"candidateResponses": [ # Optional. Responses from model under test and other baseline models for comparison.
{ # Responses from model or agent.
"agentData": { # Represents data specific to multi-turn agent evaluations. # Optional. Represents the complete execution trace of a multi-turn conversation, which can involve single or multiple agents. This field is used to provide the full output of an agent's run, including all turns and events, for direct evaluation.
"agents": { # Optional. A map containing the static configurations for each agent in the system. Key: agent_id (matches the `author` field in events). Value: The static configuration of the agent.
"a_key": { # Represents configuration for an Agent.
"agentId": "A String", # Required. Unique identifier of the agent. This ID is used to refer to this agent, e.g., in AgentEvent.author, or in the `sub_agents` field. It must be unique within the `agents` map.
"agentType": "A String", # Optional. The type or class of the agent (e.g., "LlmAgent", "RouterAgent", "ToolUseAgent"). Useful for the autorater to understand the expected behavior of the agent.
"description": "A String", # Optional. A high-level description of the agent's role and responsibilities. Critical for evaluating if the agent is routing tasks correctly.
"instruction": "A String", # Optional. Provides instructions for the LLM model, guiding the agent's behavior. Can be static or dynamic. Dynamic instructions can contain placeholders like {variable_name} that will be resolved at runtime using the `AgentEvent.state_delta` field.
"subAgents": [ # Optional. The list of valid agent IDs that this agent can delegate to. This defines the directed edges in the multi-agent system graph topology.
"A String",
],
"tools": [ # Optional. The list of tools available to this agent.
{ # Tool details that the model may use to generate a response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g. FunctionDeclaration, Retrieval, or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode] and [CodeExecutionResult], which are the input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
},
},
"turns": [ # Optional. A chronological list of conversation turns. Each turn represents a logical execution cycle (e.g., User Input -> Agent Response).
{ # Represents a single turn/invocation in the conversation.
"events": [ # Optional. The list of events that occurred during this turn.
{ # Represents a single event in the execution trace.
"activeTools": [ # Optional. The list of tools that were active/available to the agent at the time of this event. This overrides the `AgentConfig.tools` if set.
{ # Tool details that the model may use to generate a response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g. FunctionDeclaration, Retrieval, or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode] and [CodeExecutionResult], which are the input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
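# A minimal sketch of a `vertexRagStore` source that retrieves from a single pre-created
# corpus; the corpus resource name is a placeholder:
#   "vertexRagStore": {
#     "ragResources": [
#       {"ragCorpus": "projects/my-project/locations/us-central1/ragCorpora/my-corpus"},
#     ],
#     "ragRetrievalConfig": {"topK": 5},
#   },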
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
"author": "A String", # Required. The ID of the agent or entity that generated this event. Use "user" to denote events generated by the end-user.
"content": { # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message. # Required. The content of the event (e.g., text response, tool call, tool response).
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
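# Illustrative only: how a streamed argument could arrive via `partialArgs` for a hypothetical
# `search(query)` function. Both chunks target the same JSON Path, and `willContinue` signals
# that more of the value follows:
#   {"jsonPath": "$.query", "stringValue": "weather in ", "willContinue": True},
#   {"jsonPath": "$.query", "stringValue": "Paris", "willContinue": False},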
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
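# Illustrative only: three valid shapes for `response`, using a hypothetical weather lookup.
# The keys other than "output"/"error" are placeholders:
#   {"output": {"temperature_c": 21}}          # explicit output key
#   {"temperature_c": 21}                      # no "output"/"error" keys: whole object is the output
#   {"error": {"reason": "UPSTREAM_TIMEOUT"}}  # error details only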
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
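# A hedged example of clipping metadata, assuming the offsets use the protobuf Duration
# string form (e.g. "3.5s"); the values are placeholders:
#   "videoMetadata": {"startOffset": "10s", "endOffset": "25s", "fps": 2.0},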
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
"eventTime": "A String", # Optional. The timestamp when the event occurred.
"stateDelta": { # Optional. The change in the session state caused by this event. This is a key-value map of fields that were modified or added by the event.
"a_key": "", # Properties of the object.
},
},
],
"turnId": "A String", # Optional. A unique identifier for the turn. Useful for referencing specific turns across systems.
"turnIndex": 42, # Required. The 0-based index of the turn in the conversation sequence.
},
],
},
"candidate": "A String", # Required. The name of the candidate that produced the response.
"error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # Output only. Error while scraping model or agent.
"code": 42, # The status code, which should be an enum value of google.rpc.Code.
"details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
{
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
],
"message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
},
"events": [ # Optional. Intermediate events (such as tool calls and responses) that led to the final response.
{ # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message.
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
],
"text": "A String", # Text response.
"value": "", # Fields and values that can be used to populate the response template.
},
],
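# A minimal sketch of a single `candidateResponses` entry for a plain-text comparison;
# the candidate name and text are placeholders:
#   {"candidate": "baseline-model", "text": "Paris is the capital of France."},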
"goldenResponse": { # Responses from model or agent. # Optional. The Ideal response or ground truth.
"agentData": { # Represents data specific to multi-turn agent evaluations. # Optional. Represents the complete execution trace of a multi-turn conversation, which can involve single or multiple agents. This field is used to provide the full output of an agent's run, including all turns and events, for direct evaluation.
"agents": { # Optional. A map containing the static configurations for each agent in the system. Key: agent_id (matches the `author` field in events). Value: The static configuration of the agent.
"a_key": { # Represents configuration for an Agent.
"agentId": "A String", # Required. Unique identifier of the agent. This ID is used to refer to this agent, e.g., in AgentEvent.author, or in the `sub_agents` field. It must be unique within the `agents` map.
"agentType": "A String", # Optional. The type or class of the agent (e.g., "LlmAgent", "RouterAgent", "ToolUseAgent"). Useful for the autorater to understand the expected behavior of the agent.
"description": "A String", # Optional. A high-level description of the agent's role and responsibilities. Critical for evaluating if the agent is routing tasks correctly.
"instruction": "A String", # Optional. Provides instructions for the LLM model, guiding the agent's behavior. Can be static or dynamic. Dynamic instructions can contain placeholders like {variable_name} that will be resolved at runtime using the `AgentEvent.state_delta` field.
"subAgents": [ # Optional. The list of valid agent IDs that this agent can delegate to. This defines the directed edges in the multi-agent system graph topology.
"A String",
],
"tools": [ # Optional. The list of tools available to this agent.
{ # Tool details that the model may use to generate response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g FunctionDeclaration, Retrieval or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
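# The flattened example above ("1 required and 1 optional parameter"), expanded into the
# equivalent JSON structure; the parameter names are the same placeholders:
#   "parameters": {
#     "type": "OBJECT",
#     "properties": {
#       "param1": {"type": "STRING"},
#       "param2": {"type": "INTEGER"},
#     },
#     "required": ["param1"],
#   },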
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
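# A hedged sketch of an `externalApi` source secured with an API key stored in Secret Manager.
# The endpoint, secret resource name, header name, and enum values (apiSpec, authType,
# httpElementLocation) are assumptions for illustration:
#   "externalApi": {
#     "apiSpec": "SIMPLE_SEARCH",
#     "endpoint": "https://acme.com:443/search",
#     "simpleSearchParams": {},
#     "authConfig": {
#       "authType": "API_KEY_AUTH",
#       "apiKeyConfig": {
#         "apiKeySecret": "projects/my-project/secrets/search-key/versions/latest",
#         "httpElementLocation": "HTTP_IN_HEADER",
#         "name": "X-Api-Key",
#       },
#     },
#   },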
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
},
},
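# A hedged sketch of the `agents` map for a simple router-plus-worker topology; the agent IDs,
# descriptions, instruction placeholder, and tool declaration are illustrative only:
#   "agents": {
#     "router": {
#       "agentId": "router",
#       "agentType": "RouterAgent",
#       "description": "Routes user requests to the appropriate worker agent.",
#       "subAgents": ["weather_agent"],
#     },
#     "weather_agent": {
#       "agentId": "weather_agent",
#       "agentType": "LlmAgent",
#       "instruction": "Answer weather questions for {user_location}.",
#       "tools": [{"functionDeclarations": [{"name": "get_weather", "description": "Look up current weather."}]}],
#     },
#   },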
"turns": [ # Optional. A chronological list of conversation turns. Each turn represents a logical execution cycle (e.g., User Input -> Agent Response).
{ # Represents a single turn/invocation in the conversation.
"events": [ # Optional. The list of events that occurred during this turn.
{ # Represents a single event in the execution trace.
"activeTools": [ # Optional. The list of tools that were active/available to the agent at the time of this event. This overrides the `AgentConfig.tools` if set.
{ # Tool details that the model may use to generate response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g FunctionDeclaration, Retrieval or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
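# Note: the "Example with 1 required and 1 optional parameter" mentioned in the `parameters` description above could, as a hypothetical request-body fragment, look roughly like the following. Field names come from the Schema above; the parameter names are illustrative only.
# "parameters": {
#   "type": "OBJECT",
#   "properties": {
#     "param1": {"type": "STRING"},
#     "param2": {"type": "INTEGER"},
#   },
#   "required": ["param1"],
# },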
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
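# Illustrative sketch only (not part of the schema): a minimal `functionDeclarations` entry might be populated as follows. The function name, description, and parameter are invented for illustration.
# "functionDeclarations": [
#   {
#     "name": "get_current_weather",
#     "description": "Returns the current weather for a given city.",
#     "parameters": {
#       "type": "OBJECT",
#       "properties": {
#         "location": {"type": "STRING", "description": "City name."},
#       },
#       "required": ["location"],
#     },
#   },
# ],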
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
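# Illustrative sketch only: a hypothetical `externalApi` grounding source using the simple search spec and API-key auth. The endpoint and secret resource name are placeholders, not real values.
# "externalApi": {
#   "apiSpec": "SIMPLE_SEARCH",
#   "endpoint": "https://acme.com:443/search",
#   "authConfig": {
#     "authType": "API_KEY_AUTH",
#     "apiKeyConfig": {
#       "apiKeySecret": "projects/my-project/secrets/my-secret/versions/latest",
#       "httpElementLocation": "HTTP_IN_HEADER",
#       "name": "X-Api-Key",
#     },
#   },
#   "simpleSearchParams": {},
# },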
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
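# Illustrative sketch only: a hypothetical `vertexAiSearch` source pointing at an engine with one data-store spec. The resource names are placeholders.
# "vertexAiSearch": {
#   "engine": "projects/my-project/locations/global/collections/default_collection/engines/my-engine",
#   "dataStoreSpecs": [
#     {"dataStore": "projects/my-project/locations/global/collections/default_collection/dataStores/my-datastore"},
#   ],
#   "maxResults": 10,
# },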
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
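# Illustrative sketch only: a hypothetical `vertexRagStore` source retrieving the top 5 contexts from a single RAG corpus. The corpus resource name is a placeholder.
# "vertexRagStore": {
#   "ragResources": [
#     {"ragCorpus": "projects/my-project/locations/us-central1/ragCorpora/1234567890"},
#   ],
#   "ragRetrievalConfig": {
#     "topK": 5,
#   },
# },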
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
"author": "A String", # Required. The ID of the agent or entity that generated this event. Use "user" to denote events generated by the end-user.
"content": { # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message. # Required. The content of the event (e.g., text response, tool call, tool response).
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
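# Illustrative sketch only: a hypothetical `functionCall` part as it might appear in a model-authored event. The function name and arguments are invented.
# "functionCall": {
#   "id": "call-1",
#   "name": "get_current_weather",
#   "args": {"location": "Zurich"},
# },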
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
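# Illustrative sketch only: the matching hypothetical `functionResponse` part that a client might send back, using the "output" key described above. The values are invented.
# "functionResponse": {
#   "id": "call-1",
#   "name": "get_current_weather",
#   "response": {"output": {"temperature_c": 21, "condition": "sunny"}},
# },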
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
"eventTime": "A String", # Optional. The timestamp when the event occurred.
"stateDelta": { # Optional. The change in the session state caused by this event. This is a key-value map of fields that were modified or added by the event.
"a_key": "", # Properties of the object.
},
},
],
"turnId": "A String", # Optional. A unique identifier for the turn. Useful for referencing specific turns across systems.
"turnIndex": 42, # Required. The 0-based index of the turn in the conversation sequence.
},
],
},
"candidate": "A String", # Required. The name of the candidate that produced the response.
"error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # Output only. Error while scraping model or agent.
"code": 42, # The status code, which should be an enum value of google.rpc.Code.
"details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
{
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
],
"message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
},
"events": [ # Optional. Intermediate events (such as tool calls and responses) that led to the final response.
{ # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message.
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
],
"text": "A String", # Text response.
"value": "", # Fields and values that can be used to populate the response template.
},
"prompt": { # Prompt to be evaluated. This can represent a single-turn prompt or a multi-turn conversation for agent evaluations. # Optional. The request/prompt to evaluate.
"agentData": { # Represents data specific to multi-turn agent evaluations. # Optional. Represents the complete execution trace of a multi-turn conversation, which can involve single or multiple agents. This serves as the input context for agent scraping.
"agents": { # Optional. A map containing the static configurations for each agent in the system. Key: agent_id (matches the `author` field in events). Value: The static configuration of the agent.
"a_key": { # Represents configuration for an Agent.
"agentId": "A String", # Required. Unique identifier of the agent. This ID is used to refer to this agent, e.g., in AgentEvent.author, or in the `sub_agents` field. It must be unique within the `agents` map.
"agentType": "A String", # Optional. The type or class of the agent (e.g., "LlmAgent", "RouterAgent", "ToolUseAgent"). Useful for the autorater to understand the expected behavior of the agent.
"description": "A String", # Optional. A high-level description of the agent's role and responsibilities. Critical for evaluating if the agent is routing tasks correctly.
"instruction": "A String", # Optional. Provides instructions for the LLM model, guiding the agent's behavior. Can be static or dynamic. Dynamic instructions can contain placeholders like {variable_name} that will be resolved at runtime using the `AgentEvent.state_delta` field.
"subAgents": [ # Optional. The list of valid agent IDs that this agent can delegate to. This defines the directed edges in the multi-agent system graph topology.
"A String",
],
"tools": [ # Optional. The list of tools available to this agent.
{ # Tool details that the model may use to generate response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g. FunctionDeclaration, Retrieval or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
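# Illustrative sketch (placeholder values, not part of the generated schema): an external API
# grounding source typically sets `apiSpec` to one of the spec names referenced above (e.g.
# "SIMPLE_SEARCH" or "ELASTIC_SEARCH"), the `endpoint` to call, and one of the auth configs, e.g.
# "externalApi": {"apiSpec": "SIMPLE_SEARCH", "endpoint": "https://acme.com:443/search", "authConfig": {...}}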
},
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
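# Illustrative sketch (placeholder resource names, not part of the generated schema): since
# `datastore` and `engine` are mutually exclusive, a Vertex AI Search source sets exactly one of them, e.g.
# "vertexAiSearch": {"engine": "projects/my-project/locations/global/collections/default_collection/engines/my-engine", "maxResults": 10}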
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
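# Illustrative sketch (placeholder IDs, not part of the generated schema): a Vertex RAG Store
# source usually names one corpus via `ragResources`, e.g.
# "vertexRagStore": {"ragResources": [{"ragCorpus": "projects/my-project/locations/us-central1/ragCorpora/my-corpus"}], "similarityTopK": 10}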
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
},
},
"turns": [ # Optional. A chronological list of conversation turns. Each turn represents a logical execution cycle (e.g., User Input -> Agent Response).
{ # Represents a single turn/invocation in the conversation.
"events": [ # Optional. The list of events that occurred during this turn.
{ # Represents a single event in the execution trace.
"activeTools": [ # Optional. The list of tools that were active/available to the agent at the time of this event. This overrides the `AgentConfig.tools` if set.
{ # Tool details that the model may use to generate response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g FunctionDeclaration, Retrieval or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
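# Illustrative sketch (hypothetical parameter names): the example from the `parameters` description
# above, rendered as a schema object:
# "parameters": {"type": "OBJECT", "properties": {"param1": {"type": "STRING"}, "param2": {"type": "INTEGER"}}, "required": ["param1"]}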
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
"author": "A String", # Required. The ID of the agent or entity that generated this event. Use "user" to denote events generated by the end-user.
"content": { # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message. # Required. The content of the event (e.g., text response, tool call, tool response).
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
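# Illustrative sketch (enum values are assumptions, code is a placeholder): a code-execution exchange
# pairs an executable_code part with a following code_execution_result part, e.g.
# {"executableCode": {"language": "PYTHON", "code": "print(2 + 2)"}} followed by
# {"codeExecutionResult": {"outcome": "OUTCOME_OK", "output": "4"}}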
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
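# Illustrative sketch (placeholder values): per the `response` description above, a successful result
# can be reported as {"output": "72 degrees"} and a failure as {"error": {"message": "not found"}}.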
},
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
"eventTime": "A String", # Optional. The timestamp when the event occurred.
"stateDelta": { # Optional. The change in the session state caused by this event. This is a key-value map of fields that were modified or added by the event.
"a_key": "", # Properties of the object.
},
},
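# Illustrative sketch (hypothetical agent ID and text, not part of the generated schema): a minimal
# agent event can be expressed as
# {"author": "my_agent", "content": {"role": "model", "parts": [{"text": "Task completed."}]}}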
],
"turnId": "A String", # Optional. A unique identifier for the turn. Useful for referencing specific turns across systems.
"turnIndex": 42, # Required. The 0-based index of the turn in the conversation sequence.
},
],
},
"promptTemplateData": { # Message to hold a prompt template and the values to populate the template. # Prompt template data.
"values": { # The values for fields in the prompt template.
"a_key": { # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message.
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
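# Illustrative only (hypothetical clip): the offsets are duration strings, so clipping the first 30
# seconds at 1 frame per second might look like:
#   {"startOffset": "0s", "endOffset": "30s", "fps": 1.0}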
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
},
},
"text": "A String", # Text prompt.
"value": "", # Fields and values that can be used to populate the prompt template.
},
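# Illustrative only (hypothetical prompt): for a simple single-turn inference request, the prompt can be
# as small as:
#   {"text": "Summarize the attached article in two sentences."}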
"rubrics": { # Optional. Named groups of rubrics associated with this prompt. The key is a user-defined name for the rubric group.
"a_key": { # A group of rubrics, used for grouping rubrics based on a metric or a version.
"displayName": "A String", # Human-readable name for the group. This should be unique within a given context if used for display or selection. Example: "Instruction Following V1", "Content Quality - Summarization Task".
"groupId": "A String", # Unique identifier for the group.
"rubrics": [ # Rubrics that are part of this group.
{ # Message representing a single testable criterion for evaluation. One input prompt could have multiple rubrics.
"content": { # Content of the rubric, defining the testable criteria. # Required. The actual testable criteria for the rubric.
"property": { # Defines criteria based on a specific property. # Evaluation criteria based on a specific property.
"description": "A String", # Description of the property being evaluated. Example: "The model's response is grammatically correct."
},
},
"importance": "A String", # Optional. The relative importance of this rubric.
"rubricId": "A String", # Unique identifier for the rubric. This ID is used to refer to this rubric, e.g., in RubricVerdict.
"type": "A String", # Optional. A type designator for the rubric, which can inform how it's evaluated or interpreted by systems or users. It's recommended to use consistent, well-defined, upper snake_case strings. Examples: "SUMMARIZATION_QUALITY", "SAFETY_HARMFUL_CONTENT", "INSTRUCTION_ADHERENCE".
},
],
},
},
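# Illustrative only (hypothetical group and IDs): an entry in the rubrics map might look like:
#   {"instruction_following": {"displayName": "Instruction Following V1", "groupId": "if-v1",
#     "rubrics": [{"rubricId": "r1", "type": "INSTRUCTION_ADHERENCE",
#       "content": {"property": {"description": "The response addresses every part of the question."}}}]}}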
},
"evaluationResponse": { # Evaluation result. # Output only. The response from evaluation.
"candidateResults": [ # Optional. The results for the metric.
{ # Result for a single candidate.
"additionalResults": "", # Optional. Additional results for the metric.
"candidate": "A String", # Required. The candidate that is being evaluated. The value is the same as the candidate name in the EvaluationRequest.
"explanation": "A String", # Optional. The explanation for the metric.
"metric": "A String", # Required. The metric that was evaluated.
"rubricVerdicts": [ # Optional. The rubric verdicts for the metric.
{ # Represents the verdict of an evaluation against a single rubric.
"evaluatedRubric": { # Message representing a single testable criterion for evaluation. One input prompt could have multiple rubrics. # Required. The full rubric definition that was evaluated. Storing this ensures the verdict is self-contained and understandable, especially if the original rubric definition changes or was dynamically generated.
"content": { # Content of the rubric, defining the testable criteria. # Required. The actual testable criteria for the rubric.
"property": { # Defines criteria based on a specific property. # Evaluation criteria based on a specific property.
"description": "A String", # Description of the property being evaluated. Example: "The model's response is grammatically correct."
},
},
"importance": "A String", # Optional. The relative importance of this rubric.
"rubricId": "A String", # Unique identifier for the rubric. This ID is used to refer to this rubric, e.g., in RubricVerdict.
"type": "A String", # Optional. A type designator for the rubric, which can inform how it's evaluated or interpreted by systems or users. It's recommended to use consistent, well-defined, upper snake_case strings. Examples: "SUMMARIZATION_QUALITY", "SAFETY_HARMFUL_CONTENT", "INSTRUCTION_ADHERENCE".
},
"reasoning": "A String", # Optional. Human-readable reasoning or explanation for the verdict. This can include specific examples or details from the evaluated content that justify the given verdict.
"verdict": True or False, # Required. Outcome of the evaluation against the rubric, represented as a boolean. `true` indicates a "Pass", `false` indicates a "Fail".
},
],
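# Illustrative only (hypothetical verdict): a single rubric verdict could look like:
#   {"evaluatedRubric": {"rubricId": "r1",
#     "content": {"property": {"description": "The response is grammatically correct."}}},
#    "verdict": True, "reasoning": "No grammatical errors were found in the response."}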
"score": 3.14, # Optional. The score for the metric.
},
],
"evaluationRequest": "A String", # Required. The request item that was evaluated. Format: projects/{project}/locations/{location}/evaluationItems/{evaluation_item}
"evaluationRun": "A String", # Required. The evaluation run that was used to generate the result. Format: projects/{project}/locations/{location}/evaluationRuns/{evaluation_run}
"metadata": "", # Optional. Metadata about the evaluation result.
"metric": "A String", # Required. The metric that was evaluated.
"request": { # A single evaluation request supporting input for both single-turn model generation and multi-turn agent execution traces. Valid input modes: 1. Inference Mode: `prompt` is set (containing text or AgentData context). 2. Offline Eval Mode: `prompt` is unset, and `candidate_responses` contains `agent_data` (the completed execution trace). Validation Rule: Either `prompt` must be set, OR at least one of the `candidate_responses` must contain `agent_data`. # Required. The request that was evaluated.
"candidateResponses": [ # Optional. Responses from model under test and other baseline models for comparison.
{ # Responses from model or agent.
"agentData": { # Represents data specific to multi-turn agent evaluations. # Optional. Represents the complete execution trace of a multi-turn conversation, which can involve single or multiple agents. This field is used to provide the full output of an agent's run, including all turns and events, for direct evaluation.
"agents": { # Optional. A map containing the static configurations for each agent in the system. Key: agent_id (matches the `author` field in events). Value: The static configuration of the agent.
"a_key": { # Represents configuration for an Agent.
"agentId": "A String", # Required. Unique identifier of the agent. This ID is used to refer to this agent, e.g., in AgentEvent.author, or in the `sub_agents` field. It must be unique within the `agents` map.
"agentType": "A String", # Optional. The type or class of the agent (e.g., "LlmAgent", "RouterAgent", "ToolUseAgent"). Useful for the autorater to understand the expected behavior of the agent.
"description": "A String", # Optional. A high-level description of the agent's role and responsibilities. Critical for evaluating if the agent is routing tasks correctly.
"instruction": "A String", # Optional. Provides instructions for the LLM model, guiding the agent's behavior. Can be static or dynamic. Dynamic instructions can contain placeholders like {variable_name} that will be resolved at runtime using the `AgentEvent.state_delta` field.
"subAgents": [ # Optional. The list of valid agent IDs that this agent can delegate to. This defines the directed edges in the multi-agent system graph topology.
"A String",
],
"tools": [ # Optional. The list of tools available to this agent.
{ # Tool details that the model may use to generate response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g FunctionDeclaration, Retrieval or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
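# Illustrative only (hypothetical function): a minimal function declaration could look like:
#   {"name": "get_weather", "description": "Look up the current weather for a city.",
#    "parameters": {"type": "OBJECT", "properties": {"city": {"type": "STRING"}}, "required": ["city"]}}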
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
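# Illustrative only (hypothetical project and engine IDs): a Vertex AI Search source might be configured as:
#   {"engine": "projects/my-project/locations/global/collections/default_collection/engines/my-engine",
#    "maxResults": 5}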
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
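# Illustrative only (hypothetical corpus): a Vertex RAG store source might be configured as:
#   {"ragResources": [{"ragCorpus": "projects/my-project/locations/us-central1/ragCorpora/my-corpus"}],
#    "similarityTopK": 5}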
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
},
},
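# Illustrative only (hypothetical agents): an entry in the agents map might look like:
#   {"router": {"agentId": "router", "agentType": "RouterAgent",
#     "description": "Routes user requests to the billing or support agent.",
#     "subAgents": ["billing", "support"]}}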
"turns": [ # Optional. A chronological list of conversation turns. Each turn represents a logical execution cycle (e.g., User Input -> Agent Response).
{ # Represents a single turn/invocation in the conversation.
"events": [ # Optional. The list of events that occurred during this turn.
{ # Represents a single event in the execution trace.
"activeTools": [ # Optional. The list of tools that were active/available to the agent at the time of this event. This overrides the `AgentConfig.tools` if set.
{ # Tool details that the model may use to generate response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g FunctionDeclaration, Retrieval or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
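# Illustration only (hypothetical resource names): `datastore` and `engine` are mutually exclusive,
# so a vertexAiSearch source sets exactly one of them, e.g.
#   {"engine": "projects/my-project/locations/global/collections/default_collection/engines/my-engine"}
# or
#   {"datastore": "projects/my-project/locations/global/collections/default_collection/dataStores/my-store"}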
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
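# Illustration only (hypothetical corpus and file IDs): a vertexRagStore source scoped to two files
# from one corpus might look roughly like:
#   "vertexRagStore": {
#     "ragResources": [{"ragCorpus": "projects/my-project/locations/us-central1/ragCorpora/my-corpus",
#                       "ragFileIds": ["file-1", "file-2"]}],
#     "ragRetrievalConfig": {"topK": 5},
#   }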
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
"author": "A String", # Required. The ID of the agent or entity that generated this event. Use "user" to denote events generated by the end-user.
"content": { # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message. # Required. The content of the event (e.g., text response, tool call, tool response).
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
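# Illustration only (hypothetical argument): incrementally streaming a string argument via
# `partialArgs` might produce parts roughly like:
#   {"jsonPath": "$.query", "stringValue": "weather in ", "willContinue": True},
#   {"jsonPath": "$.query", "stringValue": "Paris", "willContinue": False},
# with the FunctionCall-level `willContinue` flag indicating whether more partial messages follow.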
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
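# Illustration only (hypothetical values): per the `response` description above, a function result
# can be returned under an "output" or "error" key, or as a bare object treated entirely as output:
#   "response": {"output": {"temperature_c": 21}}
#   "response": {"error": "upstream service timed out"}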
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
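# Illustration only (hypothetical clip): videoMetadata for a clip from 5s to 15s sampled at 2 frames
# per second might look roughly like {"startOffset": "5s", "endOffset": "15s", "fps": 2.0}, assuming
# the offsets use the standard Duration string form (e.g. "5s").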
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
"eventTime": "A String", # Optional. The timestamp when the event occurred.
"stateDelta": { # Optional. The change in the session state caused by this event. This is a key-value map of fields that were modified or added by the event.
"a_key": "", # Properties of the object.
},
},
],
"turnId": "A String", # Optional. A unique identifier for the turn. Useful for referencing specific turns across systems.
"turnIndex": 42, # Required. The 0-based index of the turn in the conversation sequence.
},
],
},
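# Illustration only (hypothetical agent ID and text): a minimal single-turn trace inside `agentData`
# might look roughly like:
#   "turns": [{
#     "turnIndex": 0,
#     "events": [
#       {"author": "user", "content": {"role": "user", "parts": [{"text": "What is 2 + 2?"}]}},
#       {"author": "my-agent", "content": {"role": "model", "parts": [{"text": "4"}]}},
#     ],
#   }]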
"candidate": "A String", # Required. The name of the candidate that produced the response.
"error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # Output only. Error while scraping model or agent.
"code": 42, # The status code, which should be an enum value of google.rpc.Code.
"details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
{
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
],
"message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
},
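# Illustration only (hypothetical message): a populated scraping error might look roughly like
#   "error": {"code": 13, "message": "internal error while calling the candidate"}
# where 13 is the enum value of google.rpc.Code.INTERNAL.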
"events": [ # Optional. Intermediate events (such as tool calls and responses) that led to the final response.
{ # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message.
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
],
"text": "A String", # Text response.
"value": "", # Fields and values that can be used to populate the response template.
},
],
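# Illustration only (hypothetical candidate name and text): a simple text-only entry in
# `candidateResponses` might look roughly like
#   {"candidate": "model-under-test", "text": "The capital of France is Paris."}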
"goldenResponse": { # Responses from model or agent. # Optional. The Ideal response or ground truth.
"agentData": { # Represents data specific to multi-turn agent evaluations. # Optional. Represents the complete execution trace of a multi-turn conversation, which can involve single or multiple agents. This field is used to provide the full output of an agent's run, including all turns and events, for direct evaluation.
"agents": { # Optional. A map containing the static configurations for each agent in the system. Key: agent_id (matches the `author` field in events). Value: The static configuration of the agent.
"a_key": { # Represents configuration for an Agent.
"agentId": "A String", # Required. Unique identifier of the agent. This ID is used to refer to this agent, e.g., in AgentEvent.author, or in the `sub_agents` field. It must be unique within the `agents` map.
"agentType": "A String", # Optional. The type or class of the agent (e.g., "LlmAgent", "RouterAgent", "ToolUseAgent"). Useful for the autorater to understand the expected behavior of the agent.
"description": "A String", # Optional. A high-level description of the agent's role and responsibilities. Critical for evaluating if the agent is routing tasks correctly.
"instruction": "A String", # Optional. Provides instructions for the LLM model, guiding the agent's behavior. Can be static or dynamic. Dynamic instructions can contain placeholders like {variable_name} that will be resolved at runtime using the `AgentEvent.state_delta` field.
"subAgents": [ # Optional. The list of valid agent IDs that this agent can delegate to. This defines the directed edges in the multi-agent system graph topology.
"A String",
],
"tools": [ # Optional. The list of tools available to this agent.
{ # Tool details that the model may use to generate response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g. FunctionDeclaration, Retrieval, or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
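# Illustration only (hypothetical function): a minimal entry in `functionDeclarations` might look
# roughly like:
#   {
#     "name": "get_weather",
#     "description": "Returns the current weather for a city.",
#     "parameters": {"type": "OBJECT",
#                    "properties": {"city": {"type": "STRING"}},
#                    "required": ["city"]},
#   }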
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
},
},
"turns": [ # Optional. A chronological list of conversation turns. Each turn represents a logical execution cycle (e.g., User Input -> Agent Response).
{ # Represents a single turn/invocation in the conversation.
"events": [ # Optional. The list of events that occurred during this turn.
{ # Represents a single event in the execution trace.
"activeTools": [ # Optional. The list of tools that were active/available to the agent at the time of this event. This overrides the `AgentConfig.tools` if set.
{ # Tool details that the model may use to generate response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g. FunctionDeclaration, Retrieval, or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
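# Illustrative only: a minimal entry in the `functionDeclarations` list above, for a
# hypothetical weather-lookup function (names and values are examples, not defaults):
#   {
#     "name": "get_weather",
#     "description": "Returns the current weather for a city.",
#     "parameters": {
#       "type": "OBJECT",
#       "properties": {"city": {"type": "STRING"}},
#       "required": ["city"],
#     },
#   }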
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
"author": "A String", # Required. The ID of the agent or entity that generated this event. Use "user" to denote events generated by the end-user.
"content": { # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message. # Required. The content of the event (e.g., text response, tool call, tool response).
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
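# Illustrative only: a matched call/response pair for a hypothetical `get_weather`
# function might appear in consecutive parts as
#   {"functionCall": {"id": "call-1", "name": "get_weather", "args": {"city": "Paris"}}}
#   {"functionResponse": {"id": "call-1", "name": "get_weather", "response": {"output": "22 C and sunny"}}}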
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
"eventTime": "A String", # Optional. The timestamp when the event occurred.
"stateDelta": { # Optional. The change in the session state caused by this event. This is a key-value map of fields that were modified or added by the event.
"a_key": "", # Properties of the object.
},
},
],
"turnId": "A String", # Optional. A unique identifier for the turn. Useful for referencing specific turns across systems.
"turnIndex": 42, # Required. The 0-based index of the turn in the conversation sequence.
},
],
},
"candidate": "A String", # Required. The name of the candidate that produced the response.
"error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # Output only. Error while scraping model or agent.
"code": 42, # The status code, which should be an enum value of google.rpc.Code.
"details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
{
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
],
"message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
},
"events": [ # Optional. Intermediate events (such as tool calls and responses) that led to the final response.
{ # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message.
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
],
"text": "A String", # Text response.
"value": "", # Fields and values that can be used to populate the response template.
},
"prompt": { # Prompt to be evaluated. This can represent a single-turn prompt or a multi-turn conversation for agent evaluations. # Optional. The request/prompt to evaluate.
"agentData": { # Represents data specific to multi-turn agent evaluations. # Optional. Represents the complete execution trace of a multi-turn conversation, which can involve single or multiple agents. This serves as the input context for agent scraping.
"agents": { # Optional. A map containing the static configurations for each agent in the system. Key: agent_id (matches the `author` field in events). Value: The static configuration of the agent.
"a_key": { # Represents configuration for an Agent.
"agentId": "A String", # Required. Unique identifier of the agent. This ID is used to refer to this agent, e.g., in AgentEvent.author, or in the `sub_agents` field. It must be unique within the `agents` map.
"agentType": "A String", # Optional. The type or class of the agent (e.g., "LlmAgent", "RouterAgent", "ToolUseAgent"). Useful for the autorater to understand the expected behavior of the agent.
"description": "A String", # Optional. A high-level description of the agent's role and responsibilities. Critical for evaluating if the agent is routing tasks correctly.
"instruction": "A String", # Optional. Provides instructions for the LLM model, guiding the agent's behavior. Can be static or dynamic. Dynamic instructions can contain placeholders like {variable_name} that will be resolved at runtime using the `AgentEvent.state_delta` field.
"subAgents": [ # Optional. The list of valid agent IDs that this agent can delegate to. This defines the directed edges in the multi-agent system graph topology.
"A String",
],
"tools": [ # Optional. The list of tools available to this agent.
{ # Tool details that the model may use to generate a response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g. FunctionDeclaration, Retrieval, or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
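# --- Illustrative sketch (not part of the schema): a minimal entry for the `functionDeclarations` list above,
# assuming a hypothetical `get_weather` function with one required string parameter. All names and values are placeholders.
#
#   {
#     "name": "get_weather",
#     "description": "Returns the current weather for a city.",
#     "parameters": {
#       "type": "OBJECT",
#       "properties": {
#         "city": {"type": "STRING", "description": "City name."},
#       },
#       "required": ["city"],
#     },
#   },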
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
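# --- Illustrative sketch (not part of the schema): Google Search grounding is typically enabled by supplying an
# (often empty) `googleSearch` object in a tool; `excludeDomains` is optional. The domain below is a placeholder.
#
#   {"googleSearch": {"excludeDomains": ["example.com"]}},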
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
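# --- Illustrative sketch (not part of the schema): an `externalApi` retrieval source using an API key stored in
# Secret Manager together with Elasticsearch parameters. Resource names, index names, and the enum strings
# (`ELASTIC_SEARCH`, `API_KEY_AUTH`) are assumptions/placeholders; check the current enum values before use.
#
#   "externalApi": {
#     "apiSpec": "ELASTIC_SEARCH",
#     "endpoint": "https://search.example.com:443/search",
#     "authConfig": {
#       "authType": "API_KEY_AUTH",
#       "apiKeyConfig": {"apiKeySecret": "projects/my-project/secrets/my-secret/versions/1"},
#     },
#     "elasticSearchParams": {"index": "my-index", "searchTemplate": "my-template"},
#   },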
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
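# --- Illustrative sketch (not part of the schema): grounding on a Vertex AI Search engine (`engine` and
# `datastore` are mutually exclusive). The resource name below is a placeholder.
#
#   "vertexAiSearch": {
#     "engine": "projects/my-project/locations/global/collections/default_collection/engines/my-engine",
#     "maxResults": 10,
#   },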
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
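# --- Illustrative sketch (not part of the schema): a Vertex RAG Store source pointing at a single corpus with a
# top-k retrieval config. The corpus resource name is a placeholder.
#
#   "vertexRagStore": {
#     "ragResources": [
#       {"ragCorpus": "projects/my-project/locations/us-central1/ragCorpora/123"},
#     ],
#     "ragRetrievalConfig": {"topK": 5},
#   },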
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
},
},
"turns": [ # Optional. A chronological list of conversation turns. Each turn represents a logical execution cycle (e.g., User Input -> Agent Response).
{ # Represents a single turn/invocation in the conversation.
"events": [ # Optional. The list of events that occurred during this turn.
{ # Represents a single event in the execution trace.
"activeTools": [ # Optional. The list of tools that were active/available to the agent at the time of this event. This overrides the `AgentConfig.tools` if set.
{ # Tool details that the model may use to generate a response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g. FunctionDeclaration, Retrieval, or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
"author": "A String", # Required. The ID of the agent or entity that generated this event. Use "user" to denote events generated by the end-user.
"content": { # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message. # Required. The content of the event (e.g., text response, tool call, tool response).
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
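# --- Illustrative sketch (not part of the schema): a model-generated function call captured as a Part, assuming
# the hypothetical `get_weather` declaration sketched earlier. The argument value is a placeholder.
#
#   {"functionCall": {"name": "get_weather", "args": {"city": "Boston"}}},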
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
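# --- Illustrative sketch (not part of the schema): the matching function result returned to the model in a later
# event, placed under the "output" key. Names and values are placeholders.
#
#   {"functionResponse": {"name": "get_weather", "response": {"output": {"temp_c": 21}}}},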
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
"eventTime": "A String", # Optional. The timestamp when the event occurred.
"stateDelta": { # Optional. The change in the session state caused by this event. This is a key-value map of fields that were modified or added by the event.
"a_key": "", # Properties of the object.
},
},
],
"turnId": "A String", # Optional. A unique identifier for the turn. Useful for referencing specific turns across systems.
"turnIndex": 42, # Required. The 0-based index of the turn in the conversation sequence.
},
],
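# --- Illustrative sketch (not part of the schema): a minimal `turns` list with one user message and one agent
# reply. The agent id and text values are placeholders.
#
#   "turns": [
#     {
#       "turnIndex": 0,
#       "events": [
#         {"author": "user", "content": {"role": "user", "parts": [{"text": "Hi"}]}},
#         {"author": "my_agent", "content": {"role": "model", "parts": [{"text": "Hello!"}]}},
#       ],
#     },
#   ],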
},
"promptTemplateData": { # Message to hold a prompt template and the values to populate the template. # Prompt template data.
"values": { # The values for fields in the prompt template.
"a_key": { # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message.
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
},
},
"text": "A String", # Text prompt.
"value": "", # Fields and values that can be used to populate the prompt template.
},
"rubrics": { # Optional. Named groups of rubrics associated with this prompt. The key is a user-defined name for the rubric group.
"a_key": { # A group of rubrics, used for grouping rubrics based on a metric or a version.
"displayName": "A String", # Human-readable name for the group. This should be unique within a given context if used for display or selection. Example: "Instruction Following V1", "Content Quality - Summarization Task".
"groupId": "A String", # Unique identifier for the group.
"rubrics": [ # Rubrics that are part of this group.
{ # Message representing a single testable criterion for evaluation. One input prompt could have multiple rubrics.
"content": { # Content of the rubric, defining the testable criteria. # Required. The actual testable criteria for the rubric.
"property": { # Defines criteria based on a specific property. # Evaluation criteria based on a specific property.
"description": "A String", # Description of the property being evaluated. Example: "The model's response is grammatically correct."
},
},
"importance": "A String", # Optional. The relative importance of this rubric.
"rubricId": "A String", # Unique identifier for the rubric. This ID is used to refer to this rubric, e.g., in RubricVerdict.
"type": "A String", # Optional. A type designator for the rubric, which can inform how it's evaluated or interpreted by systems or users. It's recommended to use consistent, well-defined, upper snake_case strings. Examples: "SUMMARIZATION_QUALITY", "SAFETY_HARMFUL_CONTENT", "INSTRUCTION_ADHERENCE".
},
],
},
},
},
},
"gcsUri": "A String", # The Cloud Storage object where the request or response is stored.
"labels": { # Optional. Labels for the EvaluationItem.
"a_key": "A String",
},
"metadata": "", # Optional. Metadata for the EvaluationItem.
"name": "A String", # Identifier. The resource name of the EvaluationItem. Format: `projects/{project}/locations/{location}/evaluationItems/{evaluation_item}`
}
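As a quick illustration, the following is a minimal, hedged sketch of calling this method with the generated Python client. It assumes Application Default Credentials are configured, that this resource is reached via `projects().locations().evaluationItems()` on the `aiplatform` `v1beta1` discovery service, and that the project, location, labels, and `evaluationItemType` values are placeholders to replace with real ones.

```python
# Hedged sketch, not an official sample: create an EvaluationItem.
# Assumptions: Application Default Credentials are set up, and the resource is
# reached via projects().locations().evaluationItems() on the "aiplatform"
# v1beta1 service. Project, location, labels, and the evaluationItemType value
# below are placeholders.
from googleapiclient.discovery import build

service = build("aiplatform", "v1beta1")
parent = "projects/my-project/locations/us-central1"  # placeholder

body = {
    "displayName": "my-eval-request",
    # Placeholder; use a valid EvaluationItem type from the API's enum.
    "evaluationItemType": "EVALUATION_ITEM_TYPE_UNSPECIFIED",
    "labels": {"team": "eval"},
}

item = (
    service.projects()
    .locations()
    .evaluationItems()
    .create(parent=parent, body=body)
    .execute()
)
print(item.get("name"))
```

The response is expected to be the created EvaluationItem in the form documented above, including its server-assigned `name`.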
delete(name, x__xgafv=None)
Deletes an Evaluation Item.
Args:
name: string, Required. The name of the EvaluationItem resource to be deleted. Format: `projects/{project}/locations/{location}/evaluationItems/{evaluation_item}` (required)
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # This resource represents a long-running operation that is the result of a network API call.
"done": True or False, # If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available.
"error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # The error result of the operation in case of failure or cancellation.
"code": 42, # The status code, which should be an enum value of google.rpc.Code.
"details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
{
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
],
"message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
},
"metadata": { # Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
"name": "A String", # The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`.
"response": { # The normal, successful response of the operation. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
}
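As a quick illustration, here is a hedged sketch of calling delete() and inspecting the returned long-running Operation; the resource name is a placeholder, and the same service construction as in the create() sketch above is assumed.

```python
# Hedged sketch, not an official sample: delete an EvaluationItem and inspect
# the returned long-running Operation. The resource name is a placeholder.
from googleapiclient.discovery import build

service = build("aiplatform", "v1beta1")
name = "projects/my-project/locations/us-central1/evaluationItems/123"

operation = (
    service.projects()
    .locations()
    .evaluationItems()
    .delete(name=name)
    .execute()
)

if operation.get("done"):
    if "error" in operation:
        print("Delete failed:", operation["error"].get("message"))
    else:
        print("Deleted:", name)
else:
    # Not yet done: poll the operation (e.g. via the operations resource)
    # until `done` is True.
    print("Deletion in progress:", operation.get("name"))
```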
get(name, x__xgafv=None)
Gets an Evaluation Item.
Args:
name: string, Required. The name of the EvaluationItem resource. Format: `projects/{project}/locations/{location}/evaluationItems/{evaluation_item}` (required)
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # EvaluationItem is a single evaluation request or result. The content of an EvaluationItem is immutable - it cannot be updated once created. EvaluationItems can be deleted when no longer needed.
"createTime": "A String", # Output only. Timestamp when this item was created.
"displayName": "A String", # Required. The display name of the EvaluationItem.
"error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # Output only. Error for the evaluation item.
"code": 42, # The status code, which should be an enum value of google.rpc.Code.
"details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
{
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
],
"message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
},
"evaluationItemType": "A String", # Required. The type of the EvaluationItem.
"evaluationRequest": { # A single evaluation request supporting input for both single-turn model generation and multi-turn agent execution traces. Valid input modes: 1. Inference Mode: `prompt` is set (containing text or AgentData context). 2. Offline Eval Mode: `prompt` is unset, and `candidate_responses` contains `agent_data` (the completed execution trace). Validation Rule: Either `prompt` must be set, OR at least one of the `candidate_responses` must contain `agent_data`. # The request to evaluate.
"candidateResponses": [ # Optional. Responses from model under test and other baseline models for comparison.
{ # Responses from model or agent.
"agentData": { # Represents data specific to multi-turn agent evaluations. # Optional. Represents the complete execution trace of a multi-turn conversation, which can involve single or multiple agents. This field is used to provide the full output of an agent's run, including all turns and events, for direct evaluation.
"agents": { # Optional. A map containing the static configurations for each agent in the system. Key: agent_id (matches the `author` field in events). Value: The static configuration of the agent.
"a_key": { # Represents configuration for an Agent.
"agentId": "A String", # Required. Unique identifier of the agent. This ID is used to refer to this agent, e.g., in AgentEvent.author, or in the `sub_agents` field. It must be unique within the `agents` map.
"agentType": "A String", # Optional. The type or class of the agent (e.g., "LlmAgent", "RouterAgent", "ToolUseAgent"). Useful for the autorater to understand the expected behavior of the agent.
"description": "A String", # Optional. A high-level description of the agent's role and responsibilities. Critical for evaluating if the agent is routing tasks correctly.
"instruction": "A String", # Optional. Provides instructions for the LLM model, guiding the agent's behavior. Can be static or dynamic. Dynamic instructions can contain placeholders like {variable_name} that will be resolved at runtime using the `AgentEvent.state_delta` field.
"subAgents": [ # Optional. The list of valid agent IDs that this agent can delegate to. This defines the directed edges in the multi-agent system graph topology.
"A String",
],
"tools": [ # Optional. The list of tools available to this agent.
{ # Tool details that the model may use to generate a response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g. FunctionDeclaration, Retrieval or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
},
},
"turns": [ # Optional. A chronological list of conversation turns. Each turn represents a logical execution cycle (e.g., User Input -> Agent Response).
{ # Represents a single turn/invocation in the conversation.
"events": [ # Optional. The list of events that occurred during this turn.
{ # Represents a single event in the execution trace.
"activeTools": [ # Optional. The list of tools that were active/available to the agent at the time of this event. This overrides the `AgentConfig.tools` if set.
{ # Tool details that the model may use to generate a response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g. FunctionDeclaration, Retrieval or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
"author": "A String", # Required. The ID of the agent or entity that generated this event. Use "user" to denote events generated by the end-user.
"content": { # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message. # Required. The content of the event (e.g., text response, tool call, tool response).
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
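            # Illustrative sketch only (keys under "output"/"error" are hypothetical): a successful
            # call might return "response": {"output": {"temperature_c": 21}}, while a failure
            # might return "response": {"error": {"message": "city not found"}}.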
},
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
"eventTime": "A String", # Optional. The timestamp when the event occurred.
"stateDelta": { # Optional. The change in the session state caused by this event. This is a key-value map of fields that were modified or added by the event.
"a_key": "", # Properties of the object.
},
},
],
"turnId": "A String", # Optional. A unique identifier for the turn. Useful for referencing specific turns across systems.
"turnIndex": 42, # Required. The 0-based index of the turn in the conversation sequence.
},
],
},
"candidate": "A String", # Required. The name of the candidate that produced the response.
"error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # Output only. Error while scraping model or agent.
"code": 42, # The status code, which should be an enum value of google.rpc.Code.
"details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
{
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
],
"message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
},
"events": [ # Optional. Intermediate events (such as tool calls and responses) that led to the final response.
{ # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message.
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
],
"text": "A String", # Text response.
"value": "", # Fields and values that can be used to populate the response template.
},
],
"goldenResponse": { # Responses from model or agent. # Optional. The Ideal response or ground truth.
"agentData": { # Represents data specific to multi-turn agent evaluations. # Optional. Represents the complete execution trace of a multi-turn conversation, which can involve single or multiple agents. This field is used to provide the full output of an agent's run, including all turns and events, for direct evaluation.
"agents": { # Optional. A map containing the static configurations for each agent in the system. Key: agent_id (matches the `author` field in events). Value: The static configuration of the agent.
"a_key": { # Represents configuration for an Agent.
"agentId": "A String", # Required. Unique identifier of the agent. This ID is used to refer to this agent, e.g., in AgentEvent.author, or in the `sub_agents` field. It must be unique within the `agents` map.
"agentType": "A String", # Optional. The type or class of the agent (e.g., "LlmAgent", "RouterAgent", "ToolUseAgent"). Useful for the autorater to understand the expected behavior of the agent.
"description": "A String", # Optional. A high-level description of the agent's role and responsibilities. Critical for evaluating if the agent is routing tasks correctly.
"instruction": "A String", # Optional. Provides instructions for the LLM model, guiding the agent's behavior. Can be static or dynamic. Dynamic instructions can contain placeholders like {variable_name} that will be resolved at runtime using the `AgentEvent.state_delta` field.
"subAgents": [ # Optional. The list of valid agent IDs that this agent can delegate to. This defines the directed edges in the multi-agent system graph topology.
"A String",
],
"tools": [ # Optional. The list of tools available to this agent.
{ # Tool details that the model may use to generate response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g. FunctionDeclaration, Retrieval or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
},
},
"turns": [ # Optional. A chronological list of conversation turns. Each turn represents a logical execution cycle (e.g., User Input -> Agent Response).
{ # Represents a single turn/invocation in the conversation.
"events": [ # Optional. The list of events that occurred during this turn.
{ # Represents a single event in the execution trace.
"activeTools": [ # Optional. The list of tools that were active/available to the agent at the time of this event. This overrides the `AgentConfig.tools` if set.
{ # Tool details that the model may use to generate response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g. FunctionDeclaration, Retrieval or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
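# Illustrative only: a minimal functionDeclarations entry, sketched from the schema
# above. The function name, description, and parameter names are hypothetical.
#
# "functionDeclarations": [
#   {
#     "name": "get_weather",
#     "description": "Returns the current weather for a city.",
#     "parameters": {
#       "type": "OBJECT",
#       "properties": {
#         "city": {"type": "STRING", "description": "City name."},
#       },
#       "required": ["city"],
#     },
#   },
# ],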
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
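# Illustrative only: a googleSearchRetrieval entry with dynamic retrieval configured.
# The mode value shown is an assumption; check the DynamicRetrievalConfig enum for
# the exact supported values.
#
# "googleSearchRetrieval": {
#   "dynamicRetrievalConfig": {
#     "mode": "MODE_DYNAMIC",  # assumed enum value
#     "dynamicThreshold": 0.7,
#   },
# },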
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
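# Illustrative only: a parallelAiSearch entry passing a source_policy and max_results
# through customConfigs, adapted from the example in the field description above.
# The API key is a placeholder.
#
# "parallelAiSearch": {
#   "apiKey": "YOUR_PARALLEL_AI_API_KEY",  # placeholder
#   "customConfigs": {
#     "source_policy": {"include_domains": ["wikipedia.org"]},
#     "max_results": 5,
#   },
# },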
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
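# Illustrative only: an externalApi retrieval source backed by Elasticsearch.
# The apiSpec value is inferred from the ELASTIC_SEARCH parameter block above;
# the enum values marked as assumed, plus the endpoint, index, and secret names,
# are placeholders and not confirmed by this reference.
#
# "externalApi": {
#   "apiSpec": "ELASTIC_SEARCH",  # assumed enum value
#   "endpoint": "https://search.example.com:443/search",
#   "authConfig": {
#     "authType": "API_KEY_AUTH",  # assumed enum value
#     "apiKeyConfig": {
#       "apiKeySecret": "projects/my-project/secrets/my-secret/versions/1",
#       "httpElementLocation": "HTTP_IN_HEADER",  # assumed enum value
#       "name": "x-api-key",
#     },
#   },
#   "elasticSearchParams": {
#     "index": "my-index",
#     "searchTemplate": "my-search-template",
#     "numHits": 10,
#   },
# },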
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
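# Illustrative only: a vertexAiSearch source scoped to one data store of an engine.
# The project, location, collection, and resource IDs are placeholders.
#
# "vertexAiSearch": {
#   "engine": "projects/my-project/locations/global/collections/default_collection/engines/my-engine",
#   "dataStoreSpecs": [
#     {"dataStore": "projects/my-project/locations/global/collections/default_collection/dataStores/my-datastore"},
#   ],
#   "maxResults": 10,
# },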
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
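# Illustrative only: a vertexRagStore source restricted to two files of one corpus,
# with a reranking step. Resource names and file IDs are placeholders; the rank
# service model name follows the format given in the field description above.
#
# "vertexRagStore": {
#   "ragResources": [
#     {
#       "ragCorpus": "projects/my-project/locations/us-central1/ragCorpora/123",
#       "ragFileIds": ["file-1", "file-2"],
#     },
#   ],
#   "ragRetrievalConfig": {
#     "topK": 5,
#     "ranking": {"rankService": {"modelName": "semantic-ranker-512@latest"}},
#   },
# },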
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
"author": "A String", # Required. The ID of the agent or entity that generated this event. Use "user" to denote events generated by the end-user.
"content": { # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message. # Required. The content of the event (e.g., text response, tool call, tool response).
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
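# Illustrative only: a Part carrying a predicted function call, matching the
# hypothetical get_weather declaration sketched earlier.
#
# {
#   "functionCall": {
#     "id": "call-1",
#     "name": "get_weather",
#     "args": {"city": "Paris"},
#   },
# },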
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
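# Illustrative only: the Part a client would send back for the function call above;
# the "output" key carries the hypothetical tool result.
#
# {
#   "functionResponse": {
#     "id": "call-1",
#     "name": "get_weather",
#     "response": {"output": {"temperature_c": 21, "condition": "sunny"}},
#   },
# },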
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
"eventTime": "A String", # Optional. The timestamp when the event occurred.
"stateDelta": { # Optional. The change in the session state caused by this event. This is a key-value map of fields that were modified or added by the event.
"a_key": "", # Properties of the object.
},
},
],
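# Illustrative only: two events of a single turn, a user message followed by a
# model response authored by a hypothetical agent ID.
#
# "events": [
#   {
#     "author": "user",
#     "content": {"role": "user", "parts": [{"text": "What's the weather in Paris?"}]},
#   },
#   {
#     "author": "weather_agent",  # hypothetical agent ID
#     "content": {"role": "model", "parts": [{"text": "It is 21 degrees Celsius and sunny in Paris."}]},
#     "eventTime": "2024-01-01T00:00:00Z",
#   },
# ],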
"turnId": "A String", # Optional. A unique identifier for the turn. Useful for referencing specific turns across systems.
"turnIndex": 42, # Required. The 0-based index of the turn in the conversation sequence.
},
],
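# Illustrative only: a single turn wrapping the kind of events sketched above.
#
# "turns": [
#   {
#     "turnIndex": 0,
#     "turnId": "turn-0",  # placeholder
#     "events": [ ... ],   # see the events sketch above
#   },
# ],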
},
"candidate": "A String", # Required. The name of the candidate that produced the response.
"error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # Output only. Error while scraping model or agent.
"code": 42, # The status code, which should be an enum value of google.rpc.Code.
"details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
{
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
],
"message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
},
"events": [ # Optional. Intermediate events (such as tool calls and responses) that led to the final response.
{ # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message.
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
],
"text": "A String", # Text response.
"value": "", # Fields and values that can be used to populate the response template.
},
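# Illustrative only: a minimal candidateResponses entry for offline evaluation of a
# plain text response. The candidate name is a placeholder.
#
# {
#   "candidate": "baseline_model",
#   "text": "It is 21 degrees Celsius and sunny in Paris.",
# },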
"prompt": { # Prompt to be evaluated. This can represent a single-turn prompt or a multi-turn conversation for agent evaluations. # Optional. The request/prompt to evaluate.
"agentData": { # Represents data specific to multi-turn agent evaluations. # Optional. Represents the complete execution trace of a multi-turn conversation, which can involve single or multiple agents. This serves as the input context for agent scraping.
"agents": { # Optional. A map containing the static configurations for each agent in the system. Key: agent_id (matches the `author` field in events). Value: The static configuration of the agent.
"a_key": { # Represents configuration for an Agent.
"agentId": "A String", # Required. Unique identifier of the agent. This ID is used to refer to this agent, e.g., in AgentEvent.author, or in the `sub_agents` field. It must be unique within the `agents` map.
"agentType": "A String", # Optional. The type or class of the agent (e.g., "LlmAgent", "RouterAgent", "ToolUseAgent"). Useful for the autorater to understand the expected behavior of the agent.
"description": "A String", # Optional. A high-level description of the agent's role and responsibilities. Critical for evaluating if the agent is routing tasks correctly.
"instruction": "A String", # Optional. Provides instructions for the LLM model, guiding the agent's behavior. Can be static or dynamic. Dynamic instructions can contain placeholders like {variable_name} that will be resolved at runtime using the `AgentEvent.state_delta` field.
"subAgents": [ # Optional. The list of valid agent IDs that this agent can delegate to. This defines the directed edges in the multi-agent system graph topology.
"A String",
],
"tools": [ # Optional. The list of tools available to this agent.
{ # Tool details that the model may use to generate a response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g. FunctionDeclaration, Retrieval, or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
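# Illustrative only: `datastore` and `engine` are mutually exclusive, so a request would set exactly one
# of them. A minimal sketch with placeholder resource names:
#   "vertexAiSearch": {"datastore": "projects/my-project/locations/global/collections/default_collection/dataStores/my-datastore"}
# or
#   "vertexAiSearch": {"engine": "projects/my-project/locations/global/collections/default_collection/engines/my-engine", "maxResults": 5}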
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
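# Illustrative only: a `ragResources` entry currently points at one corpus, optionally narrowed to
# specific files from that corpus. Placeholder names:
#   "ragResources": [{"ragCorpus": "projects/my-project/locations/us-central1/ragCorpora/my-corpus", "ragFileIds": ["file-1", "file-2"]}]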
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
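# Illustrative only: a compact Tool entry with a single function declaration, using hypothetical names.
# Only schema fields documented above are used:
#   {"functionDeclarations": [{"name": "get_weather", "description": "Look up current weather", "parameters": {"type": "OBJECT", "properties": {"city": {"type": "STRING"}}, "required": ["city"]}}]}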
},
},
"turns": [ # Optional. A chronological list of conversation turns. Each turn represents a logical execution cycle (e.g., User Input -> Agent Response).
{ # Represents a single turn/invocation in the conversation.
"events": [ # Optional. The list of events that occurred during this turn.
{ # Represents a single event in the execution trace.
"activeTools": [ # Optional. The list of tools that were active/available to the agent at the time of this event. This overrides the `AgentConfig.tools` if set.
{ # Tool details that the model may use to generate a response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g., FunctionDeclaration, Retrieval, or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
"author": "A String", # Required. The ID of the agent or entity that generated this event. Use "user" to denote events generated by the end-user.
"content": { # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message. # Required. The content of the event (e.g., text response, tool call, tool response).
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
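# Illustrative only: a streamed function call whose arguments arrive incrementally via `partialArgs`,
# addressed by JSON Path. Hypothetical function and path names:
#   "functionCall": {"name": "get_weather", "id": "call-1", "willContinue": True,
#                    "partialArgs": [{"jsonPath": "$.city", "stringValue": "Par", "willContinue": True}]}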
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
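# Illustrative only: the `response` map uses the "output" and "error" keys described above; anything
# else is treated as the function output itself. Hypothetical values:
#   "response": {"output": {"temperature_c": 21}}
#   "response": {"error": {"message": "city not found"}}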
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
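# Illustrative only (format assumed, not confirmed by this sample): offsets are duration strings and
# `fps` is a float within (0.0, 24.0], e.g.:
#   "videoMetadata": {"startOffset": "5s", "endOffset": "20s", "fps": 1.0}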
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
"eventTime": "A String", # Optional. The timestamp when the event occurred.
"stateDelta": { # Optional. The change in the session state caused by this event. This is a key-value map of fields that were modified or added by the event.
"a_key": "", # Properties of the object.
},
},
],
"turnId": "A String", # Optional. A unique identifier for the turn. Useful for referencing specific turns across systems.
"turnIndex": 42, # Required. The 0-based index of the turn in the conversation sequence.
},
],
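# Illustrative only: a minimal single turn with one user event and one agent reply, using hypothetical
# agent IDs and text:
#   {"turnIndex": 0, "turnId": "turn-0", "events": [
#     {"author": "user", "content": {"role": "user", "parts": [{"text": "What is the weather in Paris?"}]}},
#     {"author": "weather_agent", "content": {"role": "model", "parts": [{"text": "It is 21 degrees and sunny."}]}}]}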
},
"promptTemplateData": { # Message to hold a prompt template and the values to populate the template. # Prompt template data.
"values": { # The values for fields in the prompt template.
"a_key": { # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message.
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
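# Illustrative sketch only, not part of the schema above: a minimal `functionCall`
# part might look like the following (the function name and arguments are hypothetical).
#   "functionCall": {
#     "name": "get_current_weather",
#     "args": {"location": "Boston, MA"},
#   },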
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
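# Illustrative sketch only: a matching `functionResponse` part for the hypothetical
# call above. Per the schema, the "output" key carries the function output and the
# "error" key carries error details.
#   "functionResponse": {
#     "name": "get_current_weather",
#     "response": {"output": {"temperature_c": 21, "conditions": "sunny"}},
#   },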
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
},
},
"text": "A String", # Text prompt.
"value": "", # Fields and values that can be used to populate the prompt template.
},
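# Illustrative sketch only: the prompt can be plain text, or a template whose fields
# are filled with Content values. Both forms below use hypothetical values.
#   "prompt": {"text": "Summarize the attached article in two sentences."}
#   "prompt": {
#     "promptTemplateData": {
#       "values": {
#         "article": {"role": "user", "parts": [{"text": "..."}]},
#       },
#     },
#   }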
"rubrics": { # Optional. Named groups of rubrics associated with this prompt. The key is a user-defined name for the rubric group.
"a_key": { # A group of rubrics, used for grouping rubrics based on a metric or a version.
"displayName": "A String", # Human-readable name for the group. This should be unique within a given context if used for display or selection. Example: "Instruction Following V1", "Content Quality - Summarization Task".
"groupId": "A String", # Unique identifier for the group.
"rubrics": [ # Rubrics that are part of this group.
{ # Message representing a single testable criterion for evaluation. One input prompt could have multiple rubrics.
"content": { # Content of the rubric, defining the testable criteria. # Required. The actual testable criteria for the rubric.
"property": { # Defines criteria based on a specific property. # Evaluation criteria based on a specific property.
"description": "A String", # Description of the property being evaluated. Example: "The model's response is grammatically correct."
},
},
"importance": "A String", # Optional. The relative importance of this rubric.
"rubricId": "A String", # Unique identifier for the rubric. This ID is used to refer to this rubric, e.g., in RubricVerdict.
"type": "A String", # Optional. A type designator for the rubric, which can inform how it's evaluated or interpreted by systems or users. It's recommended to use consistent, well-defined, upper snake_case strings. Examples: "SUMMARIZATION_QUALITY", "SAFETY_HARMFUL_CONTENT", "INSTRUCTION_ADHERENCE".
},
],
},
},
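# Illustrative sketch only: a named rubric group containing one property-based
# rubric (all identifiers and text are hypothetical).
#   "rubrics": {
#     "content_quality": {
#       "displayName": "Content Quality - Summarization Task",
#       "groupId": "content_quality_v1",
#       "rubrics": [
#         {
#           "rubricId": "grammar",
#           "type": "SUMMARIZATION_QUALITY",
#           "importance": "HIGH",
#           "content": {"property": {"description": "The model's response is grammatically correct."}},
#         },
#       ],
#     },
#   },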
},
"evaluationResponse": { # Evaluation result. # Output only. The response from evaluation.
"candidateResults": [ # Optional. The results for the metric.
{ # Result for a single candidate.
"additionalResults": "", # Optional. Additional results for the metric.
"candidate": "A String", # Required. The candidate that is being evaluated. The value is the same as the candidate name in the EvaluationRequest.
"explanation": "A String", # Optional. The explanation for the metric.
"metric": "A String", # Required. The metric that was evaluated.
"rubricVerdicts": [ # Optional. The rubric verdicts for the metric.
{ # Represents the verdict of an evaluation against a single rubric.
"evaluatedRubric": { # Message representing a single testable criterion for evaluation. One input prompt could have multiple rubrics. # Required. The full rubric definition that was evaluated. Storing this ensures the verdict is self-contained and understandable, especially if the original rubric definition changes or was dynamically generated.
"content": { # Content of the rubric, defining the testable criteria. # Required. The actual testable criteria for the rubric.
"property": { # Defines criteria based on a specific property. # Evaluation criteria based on a specific property.
"description": "A String", # Description of the property being evaluated. Example: "The model's response is grammatically correct."
},
},
"importance": "A String", # Optional. The relative importance of this rubric.
"rubricId": "A String", # Unique identifier for the rubric. This ID is used to refer to this rubric, e.g., in RubricVerdict.
"type": "A String", # Optional. A type designator for the rubric, which can inform how it's evaluated or interpreted by systems or users. It's recommended to use consistent, well-defined, upper snake_case strings. Examples: "SUMMARIZATION_QUALITY", "SAFETY_HARMFUL_CONTENT", "INSTRUCTION_ADHERENCE".
},
"reasoning": "A String", # Optional. Human-readable reasoning or explanation for the verdict. This can include specific examples or details from the evaluated content that justify the given verdict.
"verdict": True or False, # Required. Outcome of the evaluation against the rubric, represented as a boolean. `true` indicates a "Pass", `false` indicates a "Fail".
},
],
"score": 3.14, # Optional. The score for the metric.
},
],
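# Illustrative sketch only: one candidate result as it might appear in an evaluation
# response (this block is output only; values are hypothetical).
#   {
#     "candidate": "candidate_1",
#     "metric": "summarization_quality",
#     "score": 0.8,
#     "explanation": "The response covers the key points of the article.",
#     "rubricVerdicts": [
#       {
#         "evaluatedRubric": {"rubricId": "grammar", "content": {"property": {"description": "The model's response is grammatically correct."}}},
#         "verdict": True,
#         "reasoning": "No grammatical errors were found in the response.",
#       },
#     ],
#   },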
"evaluationRequest": "A String", # Required. The request item that was evaluated. Format: projects/{project}/locations/{location}/evaluationItems/{evaluation_item}
"evaluationRun": "A String", # Required. The evaluation run that was used to generate the result. Format: projects/{project}/locations/{location}/evaluationRuns/{evaluation_run}
"metadata": "", # Optional. Metadata about the evaluation result.
"metric": "A String", # Required. The metric that was evaluated.
"request": { # A single evaluation request supporting input for both single-turn model generation and multi-turn agent execution traces. Valid input modes: 1. Inference Mode: `prompt` is set (containing text or AgentData context). 2. Offline Eval Mode: `prompt` is unset, and `candidate_responses` contains `agent_data` (the completed execution trace). Validation Rule: Either `prompt` must be set, OR at least one of the `candidate_responses` must contain `agent_data`. # Required. The request that was evaluated.
"candidateResponses": [ # Optional. Responses from model under test and other baseline models for comparison.
{ # Responses from model or agent.
"agentData": { # Represents data specific to multi-turn agent evaluations. # Optional. Represents the complete execution trace of a multi-turn conversation, which can involve single or multiple agents. This field is used to provide the full output of an agent's run, including all turns and events, for direct evaluation.
"agents": { # Optional. A map containing the static configurations for each agent in the system. Key: agent_id (matches the `author` field in events). Value: The static configuration of the agent.
"a_key": { # Represents configuration for an Agent.
"agentId": "A String", # Required. Unique identifier of the agent. This ID is used to refer to this agent, e.g., in AgentEvent.author, or in the `sub_agents` field. It must be unique within the `agents` map.
"agentType": "A String", # Optional. The type or class of the agent (e.g., "LlmAgent", "RouterAgent", "ToolUseAgent"). Useful for the autorater to understand the expected behavior of the agent.
"description": "A String", # Optional. A high-level description of the agent's role and responsibilities. Critical for evaluating if the agent is routing tasks correctly.
"instruction": "A String", # Optional. Provides instructions for the LLM model, guiding the agent's behavior. Can be static or dynamic. Dynamic instructions can contain placeholders like {variable_name} that will be resolved at runtime using the `AgentEvent.state_delta` field.
"subAgents": [ # Optional. The list of valid agent IDs that this agent can delegate to. This defines the directed edges in the multi-agent system graph topology.
"A String",
],
"tools": [ # Optional. The list of tools available to this agent.
{ # Tool details that the model may use to generate a response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g. FunctionDeclaration, Retrieval, or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
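# Illustrative sketch only: a function declaration with one required and one
# optional parameter, using the Schema fields documented above (names are hypothetical).
#   {
#     "name": "get_current_weather",
#     "description": "Returns the current weather for a city.",
#     "parameters": {
#       "type": "OBJECT",
#       "properties": {
#         "location": {"type": "STRING", "description": "City and state, e.g. Boston, MA"},
#         "unit": {"type": "STRING", "format": "enum", "enum": ["CELSIUS", "FAHRENHEIT"]},
#       },
#       "required": ["location"],
#     },
#   },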
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
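# Illustrative sketch only: an external-API retrieval source authenticated with an
# API key stored in Secret Manager (endpoint and resource names are hypothetical;
# enum values assumed).
#   "externalApi": {
#     "apiSpec": "SIMPLE_SEARCH",
#     "endpoint": "https://acme.com:443/search",
#     "authConfig": {
#       "authType": "API_KEY_AUTH",
#       "apiKeyConfig": {
#         "apiKeySecret": "projects/my-project/secrets/acme-search-key/versions/latest",
#         "httpElementLocation": "HTTP_IN_HEADER",
#         "name": "X-Api-Key",
#       },
#     },
#     "simpleSearchParams": {},
#   },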
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
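# Illustrative sketch only: grounding on a Vertex AI Search engine with a
# per-data-store filter (resource names hypothetical). `datastore` and `engine`
# are mutually exclusive.
#   "vertexAiSearch": {
#     "engine": "projects/my-project/locations/global/collections/default_collection/engines/my-engine",
#     "dataStoreSpecs": [
#       {
#         "dataStore": "projects/my-project/locations/global/collections/default_collection/dataStores/my-store",
#         "filter": "category: ANY(\"faq\")",
#       },
#     ],
#     "maxResults": 10,
#   },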
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
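# Illustrative sketch only: a Vertex RAG Store retrieval source scoped to a single
# corpus (resource name hypothetical).
#   "vertexRagStore": {
#     "ragResources": [
#       {"ragCorpus": "projects/my-project/locations/us-central1/ragCorpora/123"},
#     ],
#     "ragRetrievalConfig": {"topK": 5},
#   },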
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
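# Illustrative sketch only: each Tool object carries exactly one tool type, so a
# tools list combining built-in Google Search grounding with code execution might
# look like this.
#   "tools": [
#     {"googleSearch": {}},
#     {"codeExecution": {}},
#   ],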
},
},
"turns": [ # Optional. A chronological list of conversation turns. Each turn represents a logical execution cycle (e.g., User Input -> Agent Response).
{ # Represents a single turn/invocation in the conversation.
"events": [ # Optional. The list of events that occurred during this turn.
{ # Represents a single event in the execution trace.
"activeTools": [ # Optional. The list of tools that were active/available to the agent at the time of this event. This overrides the `AgentConfig.tools` if set.
{ # Tool details that the model may use to generate a response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g. FunctionDeclaration, Retrieval, or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
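# Illustrative example (not part of the schema): a sketch of an external-API retrieval source using the
# simple search spec with API-key auth. The endpoint, secret name, and enum values shown are assumptions
# chosen for illustration only.
#   "externalApi": {
#     "apiSpec": "SIMPLE_SEARCH",
#     "endpoint": "https://acme.com:443/search",
#     "authConfig": {
#       "authType": "API_KEY_AUTH",
#       "apiKeyConfig": {
#         "apiKeySecret": "projects/my-project/secrets/my-api-key/versions/latest",
#         "httpElementLocation": "HTTP_IN_QUERY",
#         "name": "api_key",
#       },
#     },
#     "simpleSearchParams": {},
#   },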
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
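# Illustrative example (not part of the schema): a sketch grounding on a single Vertex AI Search data store.
# The project and data-store names are placeholders; set either `datastore` or `engine`, not both.
#   "vertexAiSearch": {
#     "datastore": "projects/my-project/locations/global/collections/default_collection/dataStores/my-data-store",
#     "maxResults": 10,
#   },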
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
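# Illustrative example (not part of the schema): a sketch grounding on one RAG corpus with a small top-k and
# a vector-distance filter. The resource name and threshold values are placeholders.
#   "vertexRagStore": {
#     "ragResources": [
#       {"ragCorpus": "projects/my-project/locations/us-central1/ragCorpora/1234567890"},
#     ],
#     "ragRetrievalConfig": {
#       "topK": 5,
#       "filter": {"vectorDistanceThreshold": 0.5},
#     },
#   },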
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
"author": "A String", # Required. The ID of the agent or entity that generated this event. Use "user" to denote events generated by the end-user.
"content": { # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message. # Required. The content of the event (e.g., text response, tool call, tool response).
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
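# Illustrative example (not part of the schema): a sketch of a non-streamed function call part. The function
# name, id, and arguments are hypothetical.
#   "functionCall": {
#     "id": "call-1",
#     "name": "get_weather",
#     "args": {"location": "Paris", "unit": "celsius"},
#   },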
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
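# Illustrative example (not part of the schema): a sketch of the function response matching the hypothetical
# call above, using the conventional "output" key.
#   "functionResponse": {
#     "id": "call-1",
#     "name": "get_weather",
#     "response": {"output": {"temperature_c": 21, "condition": "sunny"}},
#   },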
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
"eventTime": "A String", # Optional. The timestamp when the event occurred.
"stateDelta": { # Optional. The change in the session state caused by this event. This is a key-value map of fields that were modified or added by the event.
"a_key": "", # Properties of the object.
},
},
],
"turnId": "A String", # Optional. A unique identifier for the turn. Useful for referencing specific turns across systems.
"turnIndex": 42, # Required. The 0-based index of the turn in the conversation sequence.
},
],
},
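# Illustrative example (not part of the schema): a minimal sketch of an `agentData` trace with one agent and
# one turn. Agent IDs, descriptions, and message text are placeholders.
#   "agentData": {
#     "agents": {
#       "assistant": {
#         "agentId": "assistant",
#         "agentType": "LlmAgent",
#         "description": "Answers end-user questions.",
#       },
#     },
#     "turns": [
#       {
#         "turnIndex": 0,
#         "events": [
#           {"author": "user", "content": {"role": "user", "parts": [{"text": "Hi"}]}},
#           {"author": "assistant", "content": {"role": "model", "parts": [{"text": "Hello! How can I help?"}]}},
#         ],
#       },
#     ],
#   },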
"candidate": "A String", # Required. The name of the candidate that produced the response.
"error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # Output only. Error while scraping model or agent.
"code": 42, # The status code, which should be an enum value of google.rpc.Code.
"details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
{
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
],
"message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
},
"events": [ # Optional. Intermediate events (such as tool calls and responses) that led to the final response.
{ # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message.
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
],
"text": "A String", # Text response.
"value": "", # Fields and values that can be used to populate the response template.
},
],
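# Illustrative example (not part of the schema): a sketch of a text-only candidate response, as an alternative
# to the trace-based form above. The candidate name and text are placeholders.
#   "candidateResponses": [
#     {"candidate": "model-under-test", "text": "The capital of France is Paris."},
#   ],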
"goldenResponse": { # Responses from model or agent. # Optional. The Ideal response or ground truth.
"agentData": { # Represents data specific to multi-turn agent evaluations. # Optional. Represents the complete execution trace of a multi-turn conversation, which can involve single or multiple agents. This field is used to provide the full output of an agent's run, including all turns and events, for direct evaluation.
"agents": { # Optional. A map containing the static configurations for each agent in the system. Key: agent_id (matches the `author` field in events). Value: The static configuration of the agent.
"a_key": { # Represents configuration for an Agent.
"agentId": "A String", # Required. Unique identifier of the agent. This ID is used to refer to this agent, e.g., in AgentEvent.author, or in the `sub_agents` field. It must be unique within the `agents` map.
"agentType": "A String", # Optional. The type or class of the agent (e.g., "LlmAgent", "RouterAgent", "ToolUseAgent"). Useful for the autorater to understand the expected behavior of the agent.
"description": "A String", # Optional. A high-level description of the agent's role and responsibilities. Critical for evaluating if the agent is routing tasks correctly.
"instruction": "A String", # Optional. Provides instructions for the LLM model, guiding the agent's behavior. Can be static or dynamic. Dynamic instructions can contain placeholders like {variable_name} that will be resolved at runtime using the `AgentEvent.state_delta` field.
"subAgents": [ # Optional. The list of valid agent IDs that this agent can delegate to. This defines the directed edges in the multi-agent system graph topology.
"A String",
],
"tools": [ # Optional. The list of tools available to this agent.
{ # Tool details that the model may use to generate a response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g. FunctionDeclaration, Retrieval or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
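# Illustrative example (not part of the schema): a sketch of a single function declaration with one required
# string parameter. The function name and parameter are hypothetical.
#   "functionDeclarations": [
#     {
#       "name": "get_weather",
#       "description": "Returns the current weather for a city.",
#       "parameters": {
#         "type": "OBJECT",
#         "properties": {"location": {"type": "STRING", "description": "City name."}},
#         "required": ["location"],
#       },
#     },
#   ],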
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
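# Illustrative sketch only: a minimal `vertexRagStore` configuration, assuming an existing RAG corpus; the corpus resource name is a placeholder.
#   "vertexRagStore": {
#     "ragResources": [
#       {"ragCorpus": "projects/my-project/locations/us-central1/ragCorpora/my-corpus"},
#     ],
#     "ragRetrievalConfig": {"topK": 5},
#   },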
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
},
},
"turns": [ # Optional. A chronological list of conversation turns. Each turn represents a logical execution cycle (e.g., User Input -> Agent Response).
{ # Represents a single turn/invocation in the conversation.
"events": [ # Optional. The list of events that occurred during this turn.
{ # Represents a single event in the execution trace.
"activeTools": [ # Optional. The list of tools that were active/available to the agent at the time of this event. This overrides the `AgentConfig.tools` if set.
{ # Tool details that the model may use to generate response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside the knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g. FunctionDeclaration, Retrieval, or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
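# Illustrative sketch only: a minimal function declaration using the subset-of-OpenAPI `parameters` schema described above; the function name and fields are placeholders.
#   "functionDeclarations": [
#     {
#       "name": "get_weather",
#       "description": "Returns the current weather for a city.",
#       "parameters": {
#         "type": "OBJECT",
#         "properties": {"city": {"type": "STRING"}},
#         "required": ["city"],
#       },
#     },
#   ],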
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
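# Illustrative sketch only: enabling Google Search grounding while excluding a couple of placeholder domains from the results.
#   "googleSearch": {
#     "excludeDomains": ["example.com", "example.org"],
#   },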
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
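# Illustrative sketch only: an `externalApi` grounding source using a simple search endpoint and an API key stored in Secret Manager. The URL, secret name, and enum values shown are assumptions chosen for illustration.
#   "externalApi": {
#     "apiSpec": "SIMPLE_SEARCH",
#     "endpoint": "https://search.example.com:443/search",
#     "simpleSearchParams": {},
#     "authConfig": {
#       "authType": "API_KEY_AUTH",
#       "apiKeyConfig": {
#         "name": "api_key",
#         "httpElementLocation": "HTTP_IN_QUERY",
#         "apiKeySecret": "projects/my-project/secrets/my-secret/versions/1",
#       },
#     },
#   },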
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Defines data stores within an engine to filter on in a search call, along with configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
"author": "A String", # Required. The ID of the agent or entity that generated this event. Use "user" to denote events generated by the end-user.
"content": { # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message. # Required. The content of the event (e.g., text response, tool call, tool response).
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
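# Illustrative sketch only: a Part carrying a predicted function call; the function name and arguments are placeholders.
#   {
#     "functionCall": {
#       "name": "get_weather",
#       "args": {"city": "Paris"},
#     },
#   },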
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
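# Illustrative sketch only: a Part carrying the result for the function call sketched above; the "output" key holds the function output, as described by the `response` field.
#   {
#     "functionResponse": {
#       "name": "get_weather",
#       "response": {"output": {"temperature_c": 21, "condition": "sunny"}},
#     },
#   },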
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
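# Illustrative sketch only: a video Part referencing a Cloud Storage file and clipping it with `videoMetadata`; the bucket path is a placeholder.
#   {
#     "fileData": {"fileUri": "gs://my-bucket/clip.mp4", "mimeType": "video/mp4"},
#     "videoMetadata": {"startOffset": "0s", "endOffset": "30s", "fps": 1.0},
#   },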
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
"eventTime": "A String", # Optional. The timestamp when the event occurred.
"stateDelta": { # Optional. The change in the session state caused by this event. This is a key-value map of fields that were modified or added by the event.
"a_key": "", # Properties of the object.
},
},
],
"turnId": "A String", # Optional. A unique identifier for the turn. Useful for referencing specific turns across systems.
"turnIndex": 42, # Required. The 0-based index of the turn in the conversation sequence.
},
],
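# Illustrative sketch only: a single turn containing a user event followed by a model response; the agent ID and text are placeholders.
#   "turns": [
#     {
#       "turnIndex": 0,
#       "events": [
#         {"author": "user", "content": {"role": "user", "parts": [{"text": "Hi"}]}},
#         {"author": "root_agent", "content": {"role": "model", "parts": [{"text": "Hello!"}]}},
#       ],
#     },
#   ],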
},
"candidate": "A String", # Required. The name of the candidate that produced the response.
"error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # Output only. Error while scraping model or agent.
"code": 42, # The status code, which should be an enum value of google.rpc.Code.
"details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
{
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
],
"message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
},
"events": [ # Optional. Intermediate events (such as tool calls and responses) that led to the final response.
{ # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message.
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
],
"text": "A String", # Text response.
"value": "", # Fields and values that can be used to populate the response template.
},
"prompt": { # Prompt to be evaluated. This can represent a single-turn prompt or a multi-turn conversation for agent evaluations. # Optional. The request/prompt to evaluate.
"agentData": { # Represents data specific to multi-turn agent evaluations. # Optional. Represents the complete execution trace of a multi-turn conversation, which can involve single or multiple agents. This serves as the input context for agent scraping.
"agents": { # Optional. A map containing the static configurations for each agent in the system. Key: agent_id (matches the `author` field in events). Value: The static configuration of the agent.
"a_key": { # Represents configuration for an Agent.
"agentId": "A String", # Required. Unique identifier of the agent. This ID is used to refer to this agent, e.g., in AgentEvent.author, or in the `sub_agents` field. It must be unique within the `agents` map.
"agentType": "A String", # Optional. The type or class of the agent (e.g., "LlmAgent", "RouterAgent", "ToolUseAgent"). Useful for the autorater to understand the expected behavior of the agent.
"description": "A String", # Optional. A high-level description of the agent's role and responsibilities. Critical for evaluating if the agent is routing tasks correctly.
"instruction": "A String", # Optional. Provides instructions for the LLM model, guiding the agent's behavior. Can be static or dynamic. Dynamic instructions can contain placeholders like {variable_name} that will be resolved at runtime using the `AgentEvent.state_delta` field.
"subAgents": [ # Optional. The list of valid agent IDs that this agent can delegate to. This defines the directed edges in the multi-agent system graph topology.
"A String",
],
"tools": [ # Optional. The list of tools available to this agent.
{ # Tool details that the model may use to generate response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside the knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g. FunctionDeclaration, Retrieval, or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
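# Illustrative note (not part of the generated schema): with the alpha weighting above, a value of
# 0.8 would weight dense (semantic) results at 0.8 and sparse (keyword) results at 0.2, while 0.2
# would do the reverse; 0.5 keeps the default balanced behavior.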
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
},
},
"turns": [ # Optional. A chronological list of conversation turns. Each turn represents a logical execution cycle (e.g., User Input -> Agent Response).
{ # Represents a single turn/invocation in the conversation.
"events": [ # Optional. The list of events that occurred during this turn.
{ # Represents a single event in the execution trace.
"activeTools": [ # Optional. The list of tools that were active/available to the agent at the time of this event. This overrides the `AgentConfig.tools` if set.
{ # Tool details that the model may use to generate a response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g., FunctionDeclaration, Retrieval, or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
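# Illustrative sketch (not part of the generated schema): a minimal functionDeclarations entry for a
# hypothetical weather-lookup tool might look like:
#   {
#     "name": "get_weather",
#     "description": "Returns the current weather for a city.",
#     "parameters": {"type": "OBJECT", "properties": {"city": {"type": "STRING"}}, "required": ["city"]},
#   }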
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
"author": "A String", # Required. The ID of the agent or entity that generated this event. Use "user" to denote events generated by the end-user.
"content": { # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message. # Required. The content of the event (e.g., text response, tool call, tool response).
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
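# Illustrative sketch (not part of the generated schema): a hypothetical streamed fragment for an
# argument at "$.query" might look like
#   {"jsonPath": "$.query", "stringValue": "weather in ", "willContinue": True}
# followed by a final fragment with "willContinue": False once the value is complete.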
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
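# Illustrative sketch (not part of the generated schema): a typical function response map is either
#   {"output": {"temperature_c": 21}}           (hypothetical successful result)
# or
#   {"error": {"message": "city not found"}}    (hypothetical error details)
# If neither key is present, the entire map is treated as the function output.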
},
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
"eventTime": "A String", # Optional. The timestamp when the event occurred.
"stateDelta": { # Optional. The change in the session state caused by this event. This is a key-value map of fields that were modified or added by the event.
"a_key": "", # Properties of the object.
},
},
],
"turnId": "A String", # Optional. A unique identifier for the turn. Useful for referencing specific turns across systems.
"turnIndex": 42, # Required. The 0-based index of the turn in the conversation sequence.
},
],
},
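# Illustrative sketch (not part of the generated schema): a minimal single-turn trace for the
# agentData message above might contain one user event and one agent reply, for example:
#   "turns": [{
#     "turnIndex": 0,
#     "events": [
#       {"author": "user", "content": {"role": "user", "parts": [{"text": "Hi"}]}},
#       {"author": "root_agent", "content": {"role": "model", "parts": [{"text": "Hello!"}]}},
#     ],
#   }]
# where "root_agent" is a hypothetical agent_id defined in the agents map.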
"promptTemplateData": { # Message to hold a prompt template and the values to populate the template. # Prompt template data.
"values": { # The values for fields in the prompt template.
"a_key": { # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message.
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
},
},
"text": "A String", # Text prompt.
"value": "", # Fields and values that can be used to populate the prompt template.
},
"rubrics": { # Optional. Named groups of rubrics associated with this prompt. The key is a user-defined name for the rubric group.
"a_key": { # A group of rubrics, used for grouping rubrics based on a metric or a version.
"displayName": "A String", # Human-readable name for the group. This should be unique within a given context if used for display or selection. Example: "Instruction Following V1", "Content Quality - Summarization Task".
"groupId": "A String", # Unique identifier for the group.
"rubrics": [ # Rubrics that are part of this group.
{ # Message representing a single testable criterion for evaluation. One input prompt could have multiple rubrics.
"content": { # Content of the rubric, defining the testable criteria. # Required. The actual testable criteria for the rubric.
"property": { # Defines criteria based on a specific property. # Evaluation criteria based on a specific property.
"description": "A String", # Description of the property being evaluated. Example: "The model's response is grammatically correct."
},
},
"importance": "A String", # Optional. The relative importance of this rubric.
"rubricId": "A String", # Unique identifier for the rubric. This ID is used to refer to this rubric, e.g., in RubricVerdict.
"type": "A String", # Optional. A type designator for the rubric, which can inform how it's evaluated or interpreted by systems or users. It's recommended to use consistent, well-defined, upper snake_case strings. Examples: "SUMMARIZATION_QUALITY", "SAFETY_HARMFUL_CONTENT", "INSTRUCTION_ADHERENCE".
},
],
},
},
},
},
"gcsUri": "A String", # The Cloud Storage object where the request or response is stored.
"labels": { # Optional. Labels for the EvaluationItem.
"a_key": "A String",
},
"metadata": "", # Optional. Metadata for the EvaluationItem.
"name": "A String", # Identifier. The resource name of the EvaluationItem. Format: `projects/{project}/locations/{location}/evaluationItems/{evaluation_item}`
}
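For example, a minimal sketch of retrieving a single Evaluation Item (the EvaluationItem resource shown above) with the google-api-python-client; the project, location, and item ID are placeholders, and the regional endpoint override is an assumption that may not be needed in every environment:

```python
from googleapiclient.discovery import build

# Build the Vertex AI (aiplatform) v1beta1 client using Application Default Credentials.
# The regional endpoint is an assumption; adjust it to match your location if needed.
service = build(
    "aiplatform",
    "v1beta1",
    client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"},
)

# Placeholder resource name of an existing Evaluation Item.
name = "projects/my-project/locations/us-central1/evaluationItems/123456"

item = (
    service.projects()
    .locations()
    .evaluationItems()
    .get(name=name)
    .execute()
)
print(item["displayName"], item["evaluationItemType"])
```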
list(parent, filter=None, orderBy=None, pageSize=None, pageToken=None, x__xgafv=None)
Lists Evaluation Items.
Args:
parent: string, Required. The resource name of the Location from which to list the Evaluation Items. Format: `projects/{project}/locations/{location}` (required)
filter: string, Optional. Filter expression that matches a subset of the EvaluationItems to show. For field names both snake_case and camelCase are supported. For more information about filter syntax, see [AIP-160](https://google.aip.dev/160).
orderBy: string, Optional. A comma-separated list of fields to order by, sorted in ascending order by default. Use `desc` after a field name for descending.
pageSize: integer, Optional. The maximum number of Evaluation Items to return.
pageToken: string, Optional. A page token, received from a previous `ListEvaluationItems` call. Provide this to retrieve the subsequent page.
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
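For example, a minimal pagination sketch, assuming a `service` client built as in the earlier example; the parent, filter, and ordering values are illustrative:

```python
# Page through Evaluation Items, following nextPageToken via list_next().
evaluation_items = service.projects().locations().evaluationItems()

request = evaluation_items.list(
    parent="projects/my-project/locations/us-central1",  # placeholder parent
    filter='display_name="summarization-eval-request"',  # illustrative AIP-160 filter
    orderBy="create_time desc",
    pageSize=50,
)
while request is not None:
    response = request.execute()
    for item in response.get("evaluationItems", []):
        print(item["name"], item.get("displayName"))
    request = evaluation_items.list_next(
        previous_request=request, previous_response=response
    )
```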
Returns:
An object of the form:
{ # Response message for EvaluationManagementService.ListEvaluationItems.
"evaluationItems": [ # List of EvaluationItems in the requested page.
{ # EvaluationItem is a single evaluation request or result. The content of an EvaluationItem is immutable - it cannot be updated once created. EvaluationItems can be deleted when no longer needed.
"createTime": "A String", # Output only. Timestamp when this item was created.
"displayName": "A String", # Required. The display name of the EvaluationItem.
"error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # Output only. Error for the evaluation item.
"code": 42, # The status code, which should be an enum value of google.rpc.Code.
"details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
{
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
],
"message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
},
"evaluationItemType": "A String", # Required. The type of the EvaluationItem.
"evaluationRequest": { # A single evaluation request supporting input for both single-turn model generation and multi-turn agent execution traces. Valid input modes: 1. Inference Mode: `prompt` is set (containing text or AgentData context). 2. Offline Eval Mode: `prompt` is unset, and `candidate_responses` contains `agent_data` (the completed execution trace). Validation Rule: Either `prompt` must be set, OR at least one of the `candidate_responses` must contain `agent_data`. # The request to evaluate.
"candidateResponses": [ # Optional. Responses from model under test and other baseline models for comparison.
{ # Responses from model or agent.
"agentData": { # Represents data specific to multi-turn agent evaluations. # Optional. Represents the complete execution trace of a multi-turn conversation, which can involve single or multiple agents. This field is used to provide the full output of an agent's run, including all turns and events, for direct evaluation.
"agents": { # Optional. A map containing the static configurations for each agent in the system. Key: agent_id (matches the `author` field in events). Value: The static configuration of the agent.
"a_key": { # Represents configuration for an Agent.
"agentId": "A String", # Required. Unique identifier of the agent. This ID is used to refer to this agent, e.g., in AgentEvent.author, or in the `sub_agents` field. It must be unique within the `agents` map.
"agentType": "A String", # Optional. The type or class of the agent (e.g., "LlmAgent", "RouterAgent", "ToolUseAgent"). Useful for the autorater to understand the expected behavior of the agent.
"description": "A String", # Optional. A high-level description of the agent's role and responsibilities. Critical for evaluating if the agent is routing tasks correctly.
"instruction": "A String", # Optional. Provides instructions for the LLM model, guiding the agent's behavior. Can be static or dynamic. Dynamic instructions can contain placeholders like {variable_name} that will be resolved at runtime using the `AgentEvent.state_delta` field.
"subAgents": [ # Optional. The list of valid agent IDs that this agent can delegate to. This defines the directed edges in the multi-agent system graph topology.
"A String",
],
"tools": [ # Optional. The list of tools available to this agent.
{ # Tool details that the model may use to generate response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g FunctionDeclaration, Retrieval or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
},
},
"turns": [ # Optional. A chronological list of conversation turns. Each turn represents a logical execution cycle (e.g., User Input -> Agent Response).
{ # Represents a single turn/invocation in the conversation.
"events": [ # Optional. The list of events that occurred during this turn.
{ # Represents a single event in the execution trace.
"activeTools": [ # Optional. The list of tools that were active/available to the agent at the time of this event. This overrides the `AgentConfig.tools` if set.
{ # Tool details that the model may use to generate response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g FunctionDeclaration, Retrieval or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
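# NOTE: illustrative sketch only; the resource names and filter below are placeholders/assumptions.
# Since `datastore` and `engine` are mutually exclusive, an engine-backed config with a per-datastore filter might look like:
#   {
#     "engine": "projects/my-project/locations/global/collections/default_collection/engines/my-engine",
#     "dataStoreSpecs": [
#       {
#         "dataStore": "projects/my-project/locations/global/collections/default_collection/dataStores/my-datastore",
#         "filter": 'category: ANY("docs")',
#       },
#     ],
#     "maxResults": 10,
#   }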
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
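# NOTE: illustrative sketch only; the resource names below are placeholders/assumptions.
# A RAG-store config pointing at a single corpus with hybrid search and reranking might look like:
#   {
#     "ragResources": [
#       {"ragCorpus": "projects/my-project/locations/us-central1/ragCorpora/my-corpus"},
#     ],
#     "ragRetrievalConfig": {
#       "topK": 5,
#       "hybridSearch": {"alpha": 0.5},
#       "ranking": {"rankService": {"modelName": "semantic-ranker-512@latest"}},
#     },
#   }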
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
"author": "A String", # Required. The ID of the agent or entity that generated this event. Use "user" to denote events generated by the end-user.
"content": { # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message. # Required. The content of the event (e.g., text response, tool call, tool response).
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
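# NOTE: illustrative sketch only; the tool name, id, and arguments are assumptions.
# A streamed function call can arrive as a sequence of partial messages; for a hypothetical `search` tool, one partial chunk might look like:
#   {
#     "name": "search",
#     "id": "call-1",
#     "partialArgs": [
#       {"jsonPath": "$.query", "stringValue": "weather in ", "willContinue": True},
#     ],
#     "willContinue": True,  # more partial messages for this FunctionCall follow
#   }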
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
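# NOTE: illustrative sketch only; the names and values are assumptions.
# A response for the hypothetical `search` call above, using the "output" and "error" conventions, might look like:
#   {"name": "search", "id": "call-1", "response": {"output": {"results": ["..."]}}}
# or, on failure:
#   {"name": "search", "id": "call-1", "response": {"error": "backend unavailable"}}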
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
"eventTime": "A String", # Optional. The timestamp when the event occurred.
"stateDelta": { # Optional. The change in the session state caused by this event. This is a key-value map of fields that were modified or added by the event.
"a_key": "", # Properties of the object.
},
},
],
"turnId": "A String", # Optional. A unique identifier for the turn. Useful for referencing specific turns across systems.
"turnIndex": 42, # Required. The 0-based index of the turn in the conversation sequence.
},
],
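# NOTE: illustrative sketch only; the agent id and text are assumptions.
# A single turn in which the end-user asks a question and a "root_agent" replies might look like:
#   {
#     "turnIndex": 0,
#     "events": [
#       {"author": "user",
#        "content": {"role": "user", "parts": [{"text": "What is the capital of France?"}]}},
#       {"author": "root_agent",
#        "content": {"role": "model", "parts": [{"text": "The capital of France is Paris."}]}},
#     ],
#   }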
},
"candidate": "A String", # Required. The name of the candidate that produced the response.
"error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # Output only. Error while scraping model or agent.
"code": 42, # The status code, which should be an enum value of google.rpc.Code.
"details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
{
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
],
"message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
},
"events": [ # Optional. Intermediate events (such as tool calls and responses) that led to the final response.
{ # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message.
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
],
"text": "A String", # Text response.
"value": "", # Fields and values that can be used to populate the response template.
},
],
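# NOTE: illustrative sketch only; the candidate name and text are assumptions.
# For a simple single-turn comparison, a text-only candidate response might look like:
#   {"candidate": "baseline-model", "text": "Paris is the capital of France."}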
"goldenResponse": { # Responses from model or agent. # Optional. The Ideal response or ground truth.
"agentData": { # Represents data specific to multi-turn agent evaluations. # Optional. Represents the complete execution trace of a multi-turn conversation, which can involve single or multiple agents. This field is used to provide the full output of an agent's run, including all turns and events, for direct evaluation.
"agents": { # Optional. A map containing the static configurations for each agent in the system. Key: agent_id (matches the `author` field in events). Value: The static configuration of the agent.
"a_key": { # Represents configuration for an Agent.
"agentId": "A String", # Required. Unique identifier of the agent. This ID is used to refer to this agent, e.g., in AgentEvent.author, or in the `sub_agents` field. It must be unique within the `agents` map.
"agentType": "A String", # Optional. The type or class of the agent (e.g., "LlmAgent", "RouterAgent", "ToolUseAgent"). Useful for the autorater to understand the expected behavior of the agent.
"description": "A String", # Optional. A high-level description of the agent's role and responsibilities. Critical for evaluating if the agent is routing tasks correctly.
"instruction": "A String", # Optional. Provides instructions for the LLM model, guiding the agent's behavior. Can be static or dynamic. Dynamic instructions can contain placeholders like {variable_name} that will be resolved at runtime using the `AgentEvent.state_delta` field.
"subAgents": [ # Optional. The list of valid agent IDs that this agent can delegate to. This defines the directed edges in the multi-agent system graph topology.
"A String",
],
"tools": [ # Optional. The list of tools available to this agent.
{ # Tool details that the model may use to generate response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g FunctionDeclaration, Retrieval or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
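# NOTE: Illustrative sketch only. A minimal Vertex RAG Store source might be written as
# follows; the corpus resource name and retrieval settings are placeholders:
#   "vertexRagStore": {
#     "ragResources": [{"ragCorpus": "projects/my-project/locations/us-central1/ragCorpora/my-corpus"}],
#     "ragRetrievalConfig": {"topK": 5},
#   },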
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
},
},
"turns": [ # Optional. A chronological list of conversation turns. Each turn represents a logical execution cycle (e.g., User Input -> Agent Response).
{ # Represents a single turn/invocation in the conversation.
"events": [ # Optional. The list of events that occurred during this turn.
{ # Represents a single event in the execution trace.
"activeTools": [ # Optional. The list of tools that were active/available to the agent at the time of this event. This overrides the `AgentConfig.tools` if set.
{ # Tool details that the model may use to generate a response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g. FunctionDeclaration, Retrieval or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
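# NOTE: Illustrative sketch only. One entry in `functionDeclarations` might look like this;
# the function name, description, and parameter names are placeholder assumptions:
#   {
#     "name": "get_weather",
#     "description": "Returns the current weather for a city.",
#     "parameters": {
#       "type": "OBJECT",
#       "properties": {"city": {"type": "STRING"}},
#       "required": ["city"],
#     },
#   },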
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
"author": "A String", # Required. The ID of the agent or entity that generated this event. Use "user" to denote events generated by the end-user.
"content": { # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message. # Required. The content of the event (e.g., text response, tool call, tool response).
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
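# NOTE: Illustrative sketch only. A matching functionCall / functionResponse pair of parts for a
# hypothetical "get_weather" function might look like this (ids and values are placeholders):
#   {"functionCall": {"id": "call-1", "name": "get_weather", "args": {"city": "Paris"}}},
#   {"functionResponse": {"id": "call-1", "name": "get_weather", "response": {"output": "21C, sunny"}}},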
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
"eventTime": "A String", # Optional. The timestamp when the event occurred.
"stateDelta": { # Optional. The change in the session state caused by this event. This is a key-value map of fields that were modified or added by the event.
"a_key": "", # Properties of the object.
},
},
],
"turnId": "A String", # Optional. A unique identifier for the turn. Useful for referencing specific turns across systems.
"turnIndex": 42, # Required. The 0-based index of the turn in the conversation sequence.
},
],
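# NOTE: Illustrative sketch only. A minimal single-turn trace in `turns` might look like this,
# with placeholder agent IDs and text:
#   "turns": [
#     {
#       "turnIndex": 0,
#       "events": [
#         {"author": "user", "content": {"role": "user", "parts": [{"text": "What is the weather in Paris?"}]}},
#         {"author": "weather_agent", "content": {"role": "model", "parts": [{"text": "It is 21C and sunny."}]}},
#       ],
#     },
#   ],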
},
"candidate": "A String", # Required. The name of the candidate that produced the response.
"error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # Output only. Error while scraping model or agent.
"code": 42, # The status code, which should be an enum value of google.rpc.Code.
"details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
{
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
],
"message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
},
"events": [ # Optional. Intermediate events (such as tool calls and responses) that led to the final response.
{ # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message.
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
],
"text": "A String", # Text response.
"value": "", # Fields and values that can be used to populate the response template.
},
"prompt": { # Prompt to be evaluated. This can represent a single-turn prompt or a multi-turn conversation for agent evaluations. # Optional. The request/prompt to evaluate.
"agentData": { # Represents data specific to multi-turn agent evaluations. # Optional. Represents the complete execution trace of a multi-turn conversation, which can involve single or multiple agents. This serves as the input context for agent scraping.
"agents": { # Optional. A map containing the static configurations for each agent in the system. Key: agent_id (matches the `author` field in events). Value: The static configuration of the agent.
"a_key": { # Represents configuration for an Agent.
"agentId": "A String", # Required. Unique identifier of the agent. This ID is used to refer to this agent, e.g., in AgentEvent.author, or in the `sub_agents` field. It must be unique within the `agents` map.
"agentType": "A String", # Optional. The type or class of the agent (e.g., "LlmAgent", "RouterAgent", "ToolUseAgent"). Useful for the autorater to understand the expected behavior of the agent.
"description": "A String", # Optional. A high-level description of the agent's role and responsibilities. Critical for evaluating if the agent is routing tasks correctly.
"instruction": "A String", # Optional. Provides instructions for the LLM model, guiding the agent's behavior. Can be static or dynamic. Dynamic instructions can contain placeholders like {variable_name} that will be resolved at runtime using the `AgentEvent.state_delta` field.
"subAgents": [ # Optional. The list of valid agent IDs that this agent can delegate to. This defines the directed edges in the multi-agent system graph topology.
"A String",
],
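# NOTE: Illustrative sketch only. A router-style agent might declare its delegates as
#   "subAgents": ["weather_agent", "booking_agent"],
# where the IDs are placeholders that must match keys in the surrounding `agents` map.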
"tools": [ # Optional. The list of tools available to this agent.
{ # Tool details that the model may use to generate a response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g. FunctionDeclaration, Retrieval or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
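# Illustrative only (not part of the schema): a minimal entry for the `functionDeclarations` list above
# might look like the following; the function name `get_weather` and its `location` parameter are
# hypothetical placeholders, not values defined by this API.
#   {
#     "name": "get_weather",
#     "description": "Returns the current weather for a city.",
#     "parameters": {
#       "type": "OBJECT",
#       "properties": {"location": {"type": "STRING"}},
#       "required": ["location"],
#     },
#   }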
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
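# Illustrative only: a hypothetical `externalApi` grounding source backed by Elasticsearch. The endpoint,
# secret name, index, and template are placeholders, and the `apiSpec`/`authType` enum strings shown here
# are assumptions inferred from the field descriptions above rather than values confirmed by this document.
#   {
#     "apiSpec": "ELASTIC_SEARCH",
#     "endpoint": "https://search.example.com:443",
#     "authConfig": {
#       "authType": "API_KEY_AUTH",
#       "apiKeyConfig": {"apiKeySecret": "projects/my-project/secrets/my-secret/versions/1"},
#     },
#     "elasticSearchParams": {"index": "docs", "searchTemplate": "default-template", "numHits": 10},
#   }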
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
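# Illustrative only: a hypothetical `vertexAiSearch` configuration pointing at a single engine with one
# data-store spec; the project, location, and resource IDs are placeholders.
#   {
#     "engine": "projects/my-project/locations/global/collections/default_collection/engines/my-engine",
#     "dataStoreSpecs": [
#       {"dataStore": "projects/my-project/locations/global/collections/default_collection/dataStores/my-store"},
#     ],
#     "maxResults": 10,
#   }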
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
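# Illustrative only: a hypothetical `vertexRagStore` configuration that retrieves the top 5 contexts from a
# single corpus; the corpus resource name is a placeholder.
#   {
#     "ragResources": [
#       {"ragCorpus": "projects/my-project/locations/us-central1/ragCorpora/my-corpus"},
#     ],
#     "ragRetrievalConfig": {"topK": 5},
#   }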
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
},
},
"turns": [ # Optional. A chronological list of conversation turns. Each turn represents a logical execution cycle (e.g., User Input -> Agent Response).
{ # Represents a single turn/invocation in the conversation.
"events": [ # Optional. The list of events that occurred during this turn.
{ # Represents a single event in the execution trace.
"activeTools": [ # Optional. The list of tools that were active/available to the agent at the time of this event. This overrides the `AgentConfig.tools` if set.
{ # Tool details that the model may use to generate a response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g., FunctionDeclaration, Retrieval, or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
"author": "A String", # Required. The ID of the agent or entity that generated this event. Use "user" to denote events generated by the end-user.
"content": { # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message. # Required. The content of the event (e.g., text response, tool call, tool response).
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
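# Illustrative only: a hypothetical `functionCall` part as it might appear in an agent event; the id,
# function name, and arguments are placeholders.
#   {"functionCall": {"id": "call-1", "name": "get_weather", "args": {"location": "Paris"}}}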
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
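# Illustrative only: a hypothetical `functionResponse` part answering the call sketched above; the
# `output` payload is a placeholder.
#   {"functionResponse": {"id": "call-1", "name": "get_weather", "response": {"output": {"temp_c": 21}}}}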
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
"eventTime": "A String", # Optional. The timestamp when the event occurred.
"stateDelta": { # Optional. The change in the session state caused by this event. This is a key-value map of fields that were modified or added by the event.
"a_key": "", # Properties of the object.
},
},
],
"turnId": "A String", # Optional. A unique identifier for the turn. Useful for referencing specific turns across systems.
"turnIndex": 42, # Required. The 0-based index of the turn in the conversation sequence.
},
],
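# Illustrative only: a minimal hypothetical `turns` trace with one user event and one agent event; the
# agent ID `root_agent` and all text are placeholders.
#   "turns": [
#     {
#       "turnIndex": 0,
#       "events": [
#         {"author": "user", "content": {"role": "user", "parts": [{"text": "What is the weather in Paris?"}]}},
#         {"author": "root_agent", "content": {"role": "model", "parts": [{"text": "It is 21 degrees C and sunny."}]}},
#       ],
#     },
#   ]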
},
"promptTemplateData": { # Message to hold a prompt template and the values to populate the template. # Prompt template data.
"values": { # The values for fields in the prompt template.
"a_key": { # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message.
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
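# Illustrative sketch, not part of the generated reference: a hypothetical get_weather("Paris")
# call streamed via `partialArgs` might arrive as
#   {"jsonPath": "$.city", "stringValue": "Pa", "willContinue": True},
#   {"jsonPath": "$.city", "stringValue": "ris", "willContinue": False}
# with the FunctionCall-level `willContinue` flag signalling whether further partial messages follow.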
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
},
},
"text": "A String", # Text prompt.
"value": "", # Fields and values that can be used to populate the prompt template.
},
"rubrics": { # Optional. Named groups of rubrics associated with this prompt. The key is a user-defined name for the rubric group.
"a_key": { # A group of rubrics, used for grouping rubrics based on a metric or a version.
"displayName": "A String", # Human-readable name for the group. This should be unique within a given context if used for display or selection. Example: "Instruction Following V1", "Content Quality - Summarization Task".
"groupId": "A String", # Unique identifier for the group.
"rubrics": [ # Rubrics that are part of this group.
{ # Message representing a single testable criterion for evaluation. One input prompt could have multiple rubrics.
"content": { # Content of the rubric, defining the testable criteria. # Required. The actual testable criteria for the rubric.
"property": { # Defines criteria based on a specific property. # Evaluation criteria based on a specific property.
"description": "A String", # Description of the property being evaluated. Example: "The model's response is grammatically correct."
},
},
"importance": "A String", # Optional. The relative importance of this rubric.
"rubricId": "A String", # Unique identifier for the rubric. This ID is used to refer to this rubric, e.g., in RubricVerdict.
"type": "A String", # Optional. A type designator for the rubric, which can inform how it's evaluated or interpreted by systems or users. It's recommended to use consistent, well-defined, upper snake_case strings. Examples: "SUMMARIZATION_QUALITY", "SAFETY_HARMFUL_CONTENT", "INSTRUCTION_ADHERENCE".
},
],
},
},
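# Illustrative sketch, not part of the generated reference: a hypothetical rubric group keyed as
# "instruction_following" might look like (fields abbreviated)
#   {"displayName": "Instruction Following V1", "groupId": "if_v1",
#    "rubrics": [{"rubricId": "r1", "importance": "HIGH", "content": {"property": {
#        "description": "The response addresses every sub-question in the prompt."}}}]}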
},
"evaluationResponse": { # Evaluation result. # Output only. The response from evaluation.
"candidateResults": [ # Optional. The results for the metric.
{ # Result for a single candidate.
"additionalResults": "", # Optional. Additional results for the metric.
"candidate": "A String", # Required. The candidate that is being evaluated. The value is the same as the candidate name in the EvaluationRequest.
"explanation": "A String", # Optional. The explanation for the metric.
"metric": "A String", # Required. The metric that was evaluated.
"rubricVerdicts": [ # Optional. The rubric verdicts for the metric.
{ # Represents the verdict of an evaluation against a single rubric.
"evaluatedRubric": { # Message representing a single testable criterion for evaluation. One input prompt could have multiple rubrics. # Required. The full rubric definition that was evaluated. Storing this ensures the verdict is self-contained and understandable, especially if the original rubric definition changes or was dynamically generated.
"content": { # Content of the rubric, defining the testable criteria. # Required. The actual testable criteria for the rubric.
"property": { # Defines criteria based on a specific property. # Evaluation criteria based on a specific property.
"description": "A String", # Description of the property being evaluated. Example: "The model's response is grammatically correct."
},
},
"importance": "A String", # Optional. The relative importance of this rubric.
"rubricId": "A String", # Unique identifier for the rubric. This ID is used to refer to this rubric, e.g., in RubricVerdict.
"type": "A String", # Optional. A type designator for the rubric, which can inform how it's evaluated or interpreted by systems or users. It's recommended to use consistent, well-defined, upper snake_case strings. Examples: "SUMMARIZATION_QUALITY", "SAFETY_HARMFUL_CONTENT", "INSTRUCTION_ADHERENCE".
},
"reasoning": "A String", # Optional. Human-readable reasoning or explanation for the verdict. This can include specific examples or details from the evaluated content that justify the given verdict.
"verdict": True or False, # Required. Outcome of the evaluation against the rubric, represented as a boolean. `true` indicates a "Pass", `false` indicates a "Fail".
},
],
"score": 3.14, # Optional. The score for the metric.
},
],
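# Illustrative sketch, not part of the generated reference: a returned candidate result might
# resemble (fields abbreviated)
#   {"candidate": "candidate_1", "metric": "instruction_following", "score": 0.8,
#    "rubricVerdicts": [{"verdict": True, "reasoning": "All requested sections are present."}]}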
"evaluationRequest": "A String", # Required. The request item that was evaluated. Format: projects/{project}/locations/{location}/evaluationItems/{evaluation_item}
"evaluationRun": "A String", # Required. The evaluation run that was used to generate the result. Format: projects/{project}/locations/{location}/evaluationRuns/{evaluation_run}
"metadata": "", # Optional. Metadata about the evaluation result.
"metric": "A String", # Required. The metric that was evaluated.
"request": { # A single evaluation request supporting input for both single-turn model generation and multi-turn agent execution traces. Valid input modes: 1. Inference Mode: `prompt` is set (containing text or AgentData context). 2. Offline Eval Mode: `prompt` is unset, and `candidate_responses` contains `agent_data` (the completed execution trace). Validation Rule: Either `prompt` must be set, OR at least one of the `candidate_responses` must contain `agent_data`. # Required. The request that was evaluated.
"candidateResponses": [ # Optional. Responses from model under test and other baseline models for comparison.
{ # Responses from model or agent.
"agentData": { # Represents data specific to multi-turn agent evaluations. # Optional. Represents the complete execution trace of a multi-turn conversation, which can involve single or multiple agents. This field is used to provide the full output of an agent's run, including all turns and events, for direct evaluation.
"agents": { # Optional. A map containing the static configurations for each agent in the system. Key: agent_id (matches the `author` field in events). Value: The static configuration of the agent.
"a_key": { # Represents configuration for an Agent.
"agentId": "A String", # Required. Unique identifier of the agent. This ID is used to refer to this agent, e.g., in AgentEvent.author, or in the `sub_agents` field. It must be unique within the `agents` map.
"agentType": "A String", # Optional. The type or class of the agent (e.g., "LlmAgent", "RouterAgent", "ToolUseAgent"). Useful for the autorater to understand the expected behavior of the agent.
"description": "A String", # Optional. A high-level description of the agent's role and responsibilities. Critical for evaluating if the agent is routing tasks correctly.
"instruction": "A String", # Optional. Provides instructions for the LLM model, guiding the agent's behavior. Can be static or dynamic. Dynamic instructions can contain placeholders like {variable_name} that will be resolved at runtime using the `AgentEvent.state_delta` field.
"subAgents": [ # Optional. The list of valid agent IDs that this agent can delegate to. This defines the directed edges in the multi-agent system graph topology.
"A String",
],
"tools": [ # Optional. The list of tools available to this agent.
{ # Tool details that the model may use to generate a response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g. FunctionDeclaration, Retrieval, or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode] and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
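# Illustrative sketch, not part of the generated reference: a minimal hypothetical declaration
# using the `parameters` schema form might be
#   {"name": "get_weather", "description": "Look up the current weather for a city.",
#    "parameters": {"type": "OBJECT", "properties": {"city": {"type": "STRING"}},
#                   "required": ["city"]}}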
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
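# Illustrative sketch, not part of the generated reference: a hypothetical ragResources entry
# restricting retrieval to two files of one corpus might look like
#   {"ragCorpus": "projects/my-project/locations/us-central1/ragCorpora/123",
#    "ragFileIds": ["file-1", "file-2"]}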
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
},
},
"turns": [ # Optional. A chronological list of conversation turns. Each turn represents a logical execution cycle (e.g., User Input -> Agent Response).
{ # Represents a single turn/invocation in the conversation.
"events": [ # Optional. The list of events that occurred during this turn.
{ # Represents a single event in the execution trace.
"activeTools": [ # Optional. The list of tools that were active/available to the agent at the time of this event. This overrides the `AgentConfig.tools` if set.
{ # Tool details that the model may use to generate a response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g. FunctionDeclaration, Retrieval, or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode] and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
"author": "A String", # Required. The ID of the agent or entity that generated this event. Use "user" to denote events generated by the end-user.
"content": { # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message. # Required. The content of the event (e.g., text response, tool call, tool response).
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
"eventTime": "A String", # Optional. The timestamp when the event occurred.
"stateDelta": { # Optional. The change in the session state caused by this event. This is a key-value map of fields that were modified or added by the event.
"a_key": "", # Properties of the object.
},
},
],
"turnId": "A String", # Optional. A unique identifier for the turn. Useful for referencing specific turns across systems.
"turnIndex": 42, # Required. The 0-based index of the turn in the conversation sequence.
},
],
},
"candidate": "A String", # Required. The name of the candidate that produced the response.
"error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # Output only. Error while scraping model or agent.
"code": 42, # The status code, which should be an enum value of google.rpc.Code.
"details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
{
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
],
"message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
},
"events": [ # Optional. Intermediate events (such as tool calls and responses) that led to the final response.
{ # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message.
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
],
"text": "A String", # Text response.
"value": "", # Fields and values that can be used to populate the response template.
},
],
"goldenResponse": { # Responses from model or agent. # Optional. The Ideal response or ground truth.
"agentData": { # Represents data specific to multi-turn agent evaluations. # Optional. Represents the complete execution trace of a multi-turn conversation, which can involve single or multiple agents. This field is used to provide the full output of an agent's run, including all turns and events, for direct evaluation.
"agents": { # Optional. A map containing the static configurations for each agent in the system. Key: agent_id (matches the `author` field in events). Value: The static configuration of the agent.
"a_key": { # Represents configuration for an Agent.
"agentId": "A String", # Required. Unique identifier of the agent. This ID is used to refer to this agent, e.g., in AgentEvent.author, or in the `sub_agents` field. It must be unique within the `agents` map.
"agentType": "A String", # Optional. The type or class of the agent (e.g., "LlmAgent", "RouterAgent", "ToolUseAgent"). Useful for the autorater to understand the expected behavior of the agent.
"description": "A String", # Optional. A high-level description of the agent's role and responsibilities. Critical for evaluating if the agent is routing tasks correctly.
"instruction": "A String", # Optional. Provides instructions for the LLM model, guiding the agent's behavior. Can be static or dynamic. Dynamic instructions can contain placeholders like {variable_name} that will be resolved at runtime using the `AgentEvent.state_delta` field.
"subAgents": [ # Optional. The list of valid agent IDs that this agent can delegate to. This defines the directed edges in the multi-agent system graph topology.
"A String",
],
"tools": [ # Optional. The list of tools available to this agent.
{ # Tool details that the model may use to generate response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g FunctionDeclaration, Retrieval or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
},
},
"turns": [ # Optional. A chronological list of conversation turns. Each turn represents a logical execution cycle (e.g., User Input -> Agent Response).
{ # Represents a single turn/invocation in the conversation.
"events": [ # Optional. The list of events that occurred during this turn.
{ # Represents a single event in the execution trace.
"activeTools": [ # Optional. The list of tools that were active/available to the agent at the time of this event. This overrides the `AgentConfig.tools` if set.
{ # Tool details that the model may use to generate response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g FunctionDeclaration, Retrieval or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
"author": "A String", # Required. The ID of the agent or entity that generated this event. Use "user" to denote events generated by the end-user.
"content": { # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message. # Required. The content of the event (e.g., text response, tool call, tool response).
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
"eventTime": "A String", # Optional. The timestamp when the event occurred.
"stateDelta": { # Optional. The change in the session state caused by this event. This is a key-value map of fields that were modified or added by the event.
"a_key": "", # Properties of the object.
},
},
],
"turnId": "A String", # Optional. A unique identifier for the turn. Useful for referencing specific turns across systems.
"turnIndex": 42, # Required. The 0-based index of the turn in the conversation sequence.
},
],
},
"candidate": "A String", # Required. The name of the candidate that produced the response.
"error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # Output only. Error while scraping model or agent.
"code": 42, # The status code, which should be an enum value of google.rpc.Code.
"details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
{
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
],
"message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
},
"events": [ # Optional. Intermediate events (such as tool calls and responses) that led to the final response.
{ # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message.
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
],
"text": "A String", # Text response.
"value": "", # Fields and values that can be used to populate the response template.
},
"prompt": { # Prompt to be evaluated. This can represent a single-turn prompt or a multi-turn conversation for agent evaluations. # Optional. The request/prompt to evaluate.
"agentData": { # Represents data specific to multi-turn agent evaluations. # Optional. Represents the complete execution trace of a multi-turn conversation, which can involve single or multiple agents. This serves as the input context for agent scraping.
"agents": { # Optional. A map containing the static configurations for each agent in the system. Key: agent_id (matches the `author` field in events). Value: The static configuration of the agent.
"a_key": { # Represents configuration for an Agent.
"agentId": "A String", # Required. Unique identifier of the agent. This ID is used to refer to this agent, e.g., in AgentEvent.author, or in the `sub_agents` field. It must be unique within the `agents` map.
"agentType": "A String", # Optional. The type or class of the agent (e.g., "LlmAgent", "RouterAgent", "ToolUseAgent"). Useful for the autorater to understand the expected behavior of the agent.
"description": "A String", # Optional. A high-level description of the agent's role and responsibilities. Critical for evaluating if the agent is routing tasks correctly.
"instruction": "A String", # Optional. Provides instructions for the LLM model, guiding the agent's behavior. Can be static or dynamic. Dynamic instructions can contain placeholders like {variable_name} that will be resolved at runtime using the `AgentEvent.state_delta` field.
"subAgents": [ # Optional. The list of valid agent IDs that this agent can delegate to. This defines the directed edges in the multi-agent system graph topology.
"A String",
],
"tools": [ # Optional. The list of tools available to this agent.
{ # Tool details that the model may use to generate response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g FunctionDeclaration, Retrieval or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
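# For reference, the "1 required and 1 optional parameter" example described above, written out
# as a Schema object. This is an illustrative sketch only and is not part of the generated
# reference; field names match the Schema fields documented here:
#   "parameters": {
#     "type": "OBJECT",
#     "properties": {
#       "param1": {"type": "STRING"},
#       "param2": {"type": "INTEGER"},
#     },
#     "required": ["param1"],
#   },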
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
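# Illustrative dynamic retrieval configuration (a hedged sketch; the threshold value is a
# placeholder, and the supported `mode` enum values should be confirmed against the API
# reference, e.g. MODE_DYNAMIC):
#   "googleSearchRetrieval": {
#     "dynamicRetrievalConfig": {
#       "mode": "MODE_DYNAMIC",
#       "dynamicThreshold": 0.7,
#     },
#   },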
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
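# Illustrative Vertex AI Search configuration (resource names are placeholders, not real
# resources). Because `datastore` and `engine` are mutually exclusive, only `engine` is set here,
# with `dataStoreSpecs` narrowing the search within that engine:
#   "vertexAiSearch": {
#     "engine": "projects/my-project/locations/global/collections/default_collection/engines/my-engine",
#     "dataStoreSpecs": [
#       {"dataStore": "projects/my-project/locations/global/collections/default_collection/dataStores/my-datastore"},
#     ],
#     "maxResults": 10,
#   },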
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
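# Illustrative Vertex RAG store configuration (resource names are placeholders): one corpus is
# selected via `ragResources`, and the retrieval config limits the number of returned contexts.
#   "vertexRagStore": {
#     "ragResources": [
#       {"ragCorpus": "projects/my-project/locations/us-central1/ragCorpora/my-corpus"},
#     ],
#     "ragRetrievalConfig": {"topK": 5},
#   },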
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
},
},
"turns": [ # Optional. A chronological list of conversation turns. Each turn represents a logical execution cycle (e.g., User Input -> Agent Response).
{ # Represents a single turn/invocation in the conversation.
"events": [ # Optional. The list of events that occurred during this turn.
{ # Represents a single event in the execution trace.
"activeTools": [ # Optional. The list of tools that were active/available to the agent at the time of this event. This overrides the `AgentConfig.tools` if set.
{ # Tool details that the model may use to generate a response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g. FunctionDeclaration, Retrieval, or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
"environment": "A String", # Required. The environment being operated.
"excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
"A String",
],
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
"parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
"anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
# Object with schema name: GoogleCloudAiplatformV1beta1Schema
],
"default": "", # Optional. Default value to use if the field is not specified.
"defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"description": "A String", # Optional. Description of the schema.
"enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
"A String",
],
"example": "", # Optional. Example of an instance of this schema.
"format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
"items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
"maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
"maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
"maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
"maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
"minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
"minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
"minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
"minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
"nullable": True or False, # Optional. Indicates if the value of this field can be null.
"pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
"properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
"a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
},
"propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
"A String",
],
"ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
"A String",
],
"title": "A String", # Optional. Title for the schema.
"type": "A String", # Optional. Data type of the schema field.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
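# A minimal, illustrative function declaration (the function name, description, and parameter
# are hypothetical and not part of the generated reference), assembled from the fields
# documented above:
#   {
#     "name": "get_current_weather",
#     "description": "Returns the current weather for a city.",
#     "parameters": {
#       "type": "OBJECT",
#       "properties": {"city": {"type": "STRING"}},
#       "required": ["city"],
#     },
#   },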
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
"enableWidget": True or False, # Optional. If true, include the widget context token in the response.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
"apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
"customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
"a_key": "", # Properties of the object.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
"A String",
],
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"hybridSearch": { # Config for Hybrid Search. # Optional. Config for Hybrid Search.
"alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], while 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5 which balances sparse and dense vector search equally.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"storeContext": True or False, # Optional. Currently only supported for Gemini Multimodal Live API. In Gemini Multimodal Live API, if `store_context` bool is specified, Gemini will leverage it to automatically memorize the interactions between the client and Gemini, and retrieve context when needed to augment the response generation for users' ongoing and future interactions.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
"author": "A String", # Required. The ID of the agent or entity that generated this event. Use "user" to denote events generated by the end-user.
"content": { # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message. # Required. The content of the event (e.g., text response, tool call, tool response).
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
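# Illustrative function call part (the function name and arguments are hypothetical):
#   "functionCall": {
#     "id": "call-1",
#     "name": "get_current_weather",
#     "args": {"city": "Paris"},
#   },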
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
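# Illustrative function response part, echoing the `id` of the call it answers and following the
# "output"/"error" convention described above (all values are placeholders):
#   "functionResponse": {
#     "id": "call-1",
#     "name": "get_current_weather",
#     "response": {"output": {"temperature_c": 21}},
#   },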
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
"eventTime": "A String", # Optional. The timestamp when the event occurred.
"stateDelta": { # Optional. The change in the session state caused by this event. This is a key-value map of fields that were modified or added by the event.
"a_key": "", # Properties of the object.
},
},
],
"turnId": "A String", # Optional. A unique identifier for the turn. Useful for referencing specific turns across systems.
"turnIndex": 42, # Required. The 0-based index of the turn in the conversation sequence.
},
],
},
"promptTemplateData": { # Message to hold a prompt template and the values to populate the template. # Prompt template data.
"values": { # The values for fields in the prompt template.
"a_key": { # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message.
"parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
{ # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
"displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"id": "A String", # Optional. The unique id of the function call. If populated, the client to execute the `function_call` and return the response with the matching `id`.
"name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
"partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
{ # Partial argument value of the function call.
"boolValue": True or False, # Optional. Represents a boolean value.
"jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
"nullValue": "A String", # Optional. Represents a null value.
"numberValue": 3.14, # Optional. Represents a double value.
"stringValue": "A String", # Optional. Represents a string value.
"willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
},
],
"willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
},
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
"id": "A String", # Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call `id`.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
"fileData": { # URI based data for function response. # URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
},
],
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
"inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
"data": "A String", # Required. The raw bytes of the data.
"displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
"level": "A String", # The tokenization quality used for given media.
},
"text": "A String", # Optional. The text content of the part. When sent from the VSCode Gemini Code Assist extension, references to @mentioned items will be converted to markdown boldface text. For example `@my-repo` will be converted to and sent as `**my-repo**` by the IDE agent.
"thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
},
},
},
"text": "A String", # Text prompt.
"value": "", # Fields and values that can be used to populate the prompt template.
},
"rubrics": { # Optional. Named groups of rubrics associated with this prompt. The key is a user-defined name for the rubric group.
"a_key": { # A group of rubrics, used for grouping rubrics based on a metric or a version.
"displayName": "A String", # Human-readable name for the group. This should be unique within a given context if used for display or selection. Example: "Instruction Following V1", "Content Quality - Summarization Task".
"groupId": "A String", # Unique identifier for the group.
"rubrics": [ # Rubrics that are part of this group.
{ # Message representing a single testable criterion for evaluation. One input prompt could have multiple rubrics.
"content": { # Content of the rubric, defining the testable criteria. # Required. The actual testable criteria for the rubric.
"property": { # Defines criteria based on a specific property. # Evaluation criteria based on a specific property.
"description": "A String", # Description of the property being evaluated. Example: "The model's response is grammatically correct."
},
},
"importance": "A String", # Optional. The relative importance of this rubric.
"rubricId": "A String", # Unique identifier for the rubric. This ID is used to refer to this rubric, e.g., in RubricVerdict.
"type": "A String", # Optional. A type designator for the rubric, which can inform how it's evaluated or interpreted by systems or users. It's recommended to use consistent, well-defined, upper snake_case strings. Examples: "SUMMARIZATION_QUALITY", "SAFETY_HARMFUL_CONTENT", "INSTRUCTION_ADHERENCE".
},
],
},
},
},
},
"gcsUri": "A String", # The Cloud Storage object where the request or response is stored.
"labels": { # Optional. Labels for the EvaluationItem.
"a_key": "A String",
},
"metadata": "", # Optional. Metadata for the EvaluationItem.
"name": "A String", # Identifier. The resource name of the EvaluationItem. Format: `projects/{project}/locations/{location}/evaluationItems/{evaluation_item}`
},
],
"nextPageToken": "A String", # A token to retrieve the next page of results.
}
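As a rough orientation, the sketch below reads the fields of this list response in Python. It assumes an authorized service handle built with googleapiclient.discovery.build('aiplatform', 'v1beta1'), that this collection is reached via projects().locations().evaluationItems(), and that the items come back under an 'evaluationItems' key; the project and location values are placeholders, and the service version and response key are assumptions rather than facts confirmed by this page.

from googleapiclient.discovery import build

# Assumed service name/version; adjust to match your environment.
service = build('aiplatform', 'v1beta1')

# Hypothetical parent resource name.
parent = 'projects/my-project/locations/us-central1'

response = service.projects().locations().evaluationItems().list(
    parent=parent, pageSize=50).execute()

# 'evaluationItems' is the assumed key for the repeated EvaluationItem field.
for item in response.get('evaluationItems', []):
    print(item.get('name'), item.get('displayName'), item.get('evaluationItemType'))
    error = item.get('error')
    if error:
        # `error` follows the google.rpc.Status shape documented above.
        print('  error:', error.get('code'), error.get('message'))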
list_next()
Retrieves the next page of results.
Args:
previous_request: The request for the previous page. (required)
previous_response: The response from the request for the previous page. (required)
Returns:
A request object that you can call 'execute()' on to request the next
page. Returns None if there are no more items in the collection.
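A minimal paging sketch using list_next(), under the same assumptions about the service handle, resource path, and response key as the example above:

from googleapiclient.discovery import build

service = build('aiplatform', 'v1beta1')  # assumed version
parent = 'projects/my-project/locations/us-central1'  # hypothetical parent

request = service.projects().locations().evaluationItems().list(parent=parent, pageSize=100)
while request is not None:
    response = request.execute()
    for item in response.get('evaluationItems', []):  # assumed response key
        print(item.get('name'))
    # list_next() returns None once there is no nextPageToken.
    request = service.projects().locations().evaluationItems().list_next(
        previous_request=request, previous_response=response)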