Vertex AI API . projects . locations . cachedContents

Instance Methods

close()

Close httplib2 connections.

create(parent, body=None, x__xgafv=None)

Creates cached content. This call initializes the cached content in data storage, and users are charged for the cache data storage.

delete(name, x__xgafv=None)

Deletes cached content.

get(name, x__xgafv=None)

Gets cached content configurations.

list(parent, pageSize=None, pageToken=None, x__xgafv=None)

Lists cached contents in a project.

list_next()

Retrieves the next page of results.

patch(name, body=None, updateMask=None, x__xgafv=None)

Updates cached content configurations.
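
A minimal usage sketch, assuming the google-api-python-client package is installed, Application Default Credentials are configured, and the project, location, and regional endpoint shown are placeholders to replace:

  from googleapiclient import discovery

  # Build the Vertex AI (aiplatform) client. The regional endpoint and the
  # project/location below are placeholders.
  service = discovery.build(
      "aiplatform", "v1",
      client_options={"api_endpoint": "https://us-central1-aiplatform.googleapis.com"},
  )
  parent = "projects/my-project/locations/us-central1"

  # List cached contents, following pagination with list_next().
  cached_contents = service.projects().locations().cachedContents()
  request = cached_contents.list(parent=parent, pageSize=50)
  while request is not None:
      response = request.execute()
      for cc in response.get("cachedContents", []):
          print(cc["name"], cc.get("expireTime"))
      request = cached_contents.list_next(previous_request=request, previous_response=response)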

Method Details

close()
Close httplib2 connections.
create(parent, body=None, x__xgafv=None)
Creates cached content. This call initializes the cached content in data storage, and users are charged for the cache data storage.
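
A minimal sketch of a typical call, reusing the service and parent values from the sketch above; the model resource name, TTL, and cached text are placeholders, and real requests generally need enough input to meet the model's minimum cached token count:

  body = {
      "model": "projects/my-project/locations/us-central1/publishers/google/models/gemini-2.0-flash-001",
      "displayName": "my-cache",
      "contents": [
          {
              "role": "user",
              "parts": [{"text": "Large reference text to cache ..."}],
          },
      ],
      "ttl": "3600s",  # expire one hour after creation
  }
  cached = (
      service.projects().locations().cachedContents()
      .create(parent=parent, body=body)
      .execute()
  )
  print(cached["name"], cached.get("expireTime"))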

Args:
  parent: string, Required. The parent resource where the cached content will be created (required)
  body: object, The request body.
    The object takes the form of:

{ # A resource used in LLM queries for users to explicitly specify what to cache and how to cache.
  "contents": [ # Optional. Input only. Immutable. The content to cache
    { # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message.
      "parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
        { # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
          "codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
            "outcome": "A String", # Required. Outcome of the code execution.
            "output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
          },
          "executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
            "code": "A String", # Required. The code to be executed.
            "language": "A String", # Required. Programming language of the `code`.
          },
          "fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
            "displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
            "fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
            "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
          },
          "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
            "args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
              "a_key": "", # Properties of the object.
            },
            "name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
            "partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
              { # Partial argument value of the function call.
                "boolValue": True or False, # Optional. Represents a boolean value.
                "jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
                "nullValue": "A String", # Optional. Represents a null value.
                "numberValue": 3.14, # Optional. Represents a double value.
                "stringValue": "A String", # Optional. Represents a string value.
                "willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
              },
            ],
            "willContinue": True or False, # Optional. Whether this is not the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
          },
          "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
            "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
            "parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
              { # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
                "fileData": { # URI based data for function response. # URI based data.
                  "displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
                  "fileUri": "A String", # Required. URI.
                  "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
                },
                "inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
                  "data": "A String", # Required. Raw bytes.
                  "displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
                  "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
                },
              },
            ],
            "response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then the whole "response" is treated as function output.
              "a_key": "", # Properties of the object.
            },
          },
          "inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
            "data": "A String", # Required. The raw bytes of the data.
            "displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
            "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
          },
          "mediaResolution": { # Per-part media resolution for the input media. # Per-part media resolution for the input media.
            "level": "A String", # The tokenization quality used for the given media.
          },
          "text": "A String", # Optional. The text content of the part.
          "thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
          "thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
          "videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
            "endOffset": "A String", # Optional. The end offset of the video.
            "fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
            "startOffset": "A String", # Optional. The start offset of the video.
          },
        },
      ],
      "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
    },
  ],
  "createTime": "A String", # Output only. Creation time of the cache entry.
  "displayName": "A String", # Optional. Immutable. The user-generated meaningful display name of the cached content.
  "encryptionSpec": { # Represents a customer-managed encryption key specification that can be applied to a Vertex AI resource. # Input only. Immutable. Customer-managed encryption key spec for a `CachedContent`. If set, this `CachedContent` and all its sub-resources will be secured by this key.
    "kmsKeyName": "A String", # Required. Resource name of the Cloud KMS key used to protect the resource. The Cloud KMS key must be in the same region as the resource. It must have the format `projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}`.
  },
  "expireTime": "A String", # Timestamp of when this resource is considered expired. This is *always* provided on output, regardless of what was sent on input.
  "model": "A String", # Immutable. The name of the `Model` to use for cached content. Currently, only the published Gemini base models are supported, in the form of projects/{PROJECT}/locations/{LOCATION}/publishers/google/models/{MODEL}
  "name": "A String", # Immutable. Identifier. The server-generated resource name of the cached content. Format: projects/{project}/locations/{location}/cachedContents/{cached_content}
  "systemInstruction": { # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message. # Optional. Input only. Immutable. Developer-set system instruction. Currently, text only.
    "parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
      { # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
        "codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
          "outcome": "A String", # Required. Outcome of the code execution.
          "output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
        },
        "executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
          "code": "A String", # Required. The code to be executed.
          "language": "A String", # Required. Programming language of the `code`.
        },
        "fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
          "displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
          "fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
          "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
        },
        "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
          "args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
            "a_key": "", # Properties of the object.
          },
          "name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
          "partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
            { # Partial argument value of the function call.
              "boolValue": True or False, # Optional. Represents a boolean value.
              "jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
              "nullValue": "A String", # Optional. Represents a null value.
              "numberValue": 3.14, # Optional. Represents a double value.
              "stringValue": "A String", # Optional. Represents a string value.
              "willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
            },
          ],
          "willContinue": True or False, # Optional. Whether this is not the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
        },
        "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
          "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
          "parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
            { # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
              "fileData": { # URI based data for function response. # URI based data.
                "displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
                "fileUri": "A String", # Required. URI.
                "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
              },
              "inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
                "data": "A String", # Required. Raw bytes.
                "displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
                "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
              },
            },
          ],
          "response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then the whole "response" is treated as function output.
            "a_key": "", # Properties of the object.
          },
        },
        "inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
          "data": "A String", # Required. The raw bytes of the data.
          "displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
          "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
        },
        "mediaResolution": { # Per-part media resolution for the input media. # Per-part media resolution for the input media.
          "level": "A String", # The tokenization quality used for the given media.
        },
        "text": "A String", # Optional. The text content of the part.
        "thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
        "thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
        "videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
          "endOffset": "A String", # Optional. The end offset of the video.
          "fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
          "startOffset": "A String", # Optional. The start offset of the video.
        },
      },
    ],
    "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
  },
  "toolConfig": { # Tool config. This config is shared for all tools provided in the request. # Optional. Input only. Immutable. Tool config. This config is shared for all tools provided in the request.
    "functionCallingConfig": { # Function calling config. # Optional. Function calling config.
      "allowedFunctionNames": [ # Optional. Function names to call. Only set when the Mode is ANY. Function names should match [FunctionDeclaration.name]. With mode set to ANY, model will predict a function call from the set of function names provided.
        "A String",
      ],
      "mode": "A String", # Optional. Function calling mode.
      "streamFunctionCallArguments": True or False, # Optional. When set to true, arguments of a single function call will be streamed out in multiple parts/contents/responses. Partial parameter results will be returned in the [FunctionCall.partial_args] field.
    },
    "retrievalConfig": { # Retrieval config. # Optional. Retrieval config.
      "languageCode": "A String", # The language code of the user.
      "latLng": { # An object that represents a latitude/longitude pair. This is expressed as a pair of doubles to represent degrees latitude and degrees longitude. Unless specified otherwise, this object must conform to the WGS84 standard. Values must be within normalized ranges. # The location of the user.
        "latitude": 3.14, # The latitude in degrees. It must be in the range [-90.0, +90.0].
        "longitude": 3.14, # The longitude in degrees. It must be in the range [-180.0, +180.0].
      },
    },
  },
  "tools": [ # Optional. Input only. Immutable. A list of `Tools` the model may use to generate the next response
    { # Tool details that the model may use to generate a response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g. FunctionDeclaration, Retrieval or GoogleSearchRetrieval).
      "codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode] and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
      },
      "computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
        "environment": "A String", # Required. The environment being operated.
        "excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
          "A String",
        ],
      },
      "enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
        "blockingConfidence": "A String", # Optional. Sites at or above the chosen confidence level will be blocked from the search results.
        "excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
          "A String",
        ],
      },
      "functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
        { # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
          "description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
          "name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
          "parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
            "additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
            "anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
              # Object with schema name: GoogleCloudAiplatformV1Schema
            ],
            "default": "", # Optional. Default value to use if the field is not specified.
            "defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
              "a_key": # Object with schema name: GoogleCloudAiplatformV1Schema
            },
            "description": "A String", # Optional. Description of the schema.
            "enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
              "A String",
            ],
            "example": "", # Optional. Example of an instance of this schema.
            "format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
            "items": # Object with schema name: GoogleCloudAiplatformV1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
            "maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
            "maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
            "maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
            "maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
            "minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
            "minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
            "minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
            "minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
            "nullable": True or False, # Optional. Indicates if the value of this field can be null.
            "pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
            "properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
              "a_key": # Object with schema name: GoogleCloudAiplatformV1Schema
            },
            "propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
              "A String",
            ],
            "ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
            "required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
              "A String",
            ],
            "title": "A String", # Optional. Title for the schema.
            "type": "A String", # Optional. Data type of the schema field.
          },
          "parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
          "response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
            "additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
            "anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
              # Object with schema name: GoogleCloudAiplatformV1Schema
            ],
            "default": "", # Optional. Default value to use if the field is not specified.
            "defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
              "a_key": # Object with schema name: GoogleCloudAiplatformV1Schema
            },
            "description": "A String", # Optional. Description of the schema.
            "enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
              "A String",
            ],
            "example": "", # Optional. Example of an instance of this schema.
            "format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
            "items": # Object with schema name: GoogleCloudAiplatformV1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
            "maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
            "maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
            "maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
            "maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
            "minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
            "minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
            "minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
            "minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
            "nullable": True or False, # Optional. Indicates if the value of this field can be null.
            "pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
            "properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
              "a_key": # Object with schema name: GoogleCloudAiplatformV1Schema
            },
            "propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
              "A String",
            ],
            "ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
            "required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
              "A String",
            ],
            "title": "A String", # Optional. Title for the schema.
            "type": "A String", # Optional. Data type of the schema field.
          },
          "responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
        },
      ],
      "googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
        "enableWidget": True or False, # Optional. If true, include the widget context token in the response.
      },
      "googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
        "blockingConfidence": "A String", # Optional. Sites at or above the chosen confidence level will be blocked from the search results.
        "excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
          "A String",
        ],
      },
      "googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
        "dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
          "dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
          "mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
        },
      },
      "parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
        "apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
        "customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
          "a_key": "", # Properties of the object.
        },
      },
      "retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
        "disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
        "externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
          "apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
            "apiKeyConfig": { # The API secret. # The API secret.
              "apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
              "apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
            },
          },
          "apiSpec": "A String", # The API spec that the external API implements.
          "authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
            "apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
              "apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secret}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
              "apiKeyString": "A String", # Optional. The API key to be used in the request directly.
              "httpElementLocation": "A String", # Optional. The location of the API key.
              "name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
            },
            "authType": "A String", # Type of auth scheme.
            "googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
              "serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
            },
            "httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
              "credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secret}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
            },
            "oauthConfig": { # Config for user oauth. # Config for user oauth.
              "accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
              "serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
            },
            "oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
              "idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
              "serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
            },
          },
          "elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
            "index": "A String", # The ElasticSearch index to use.
            "numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
            "searchTemplate": "A String", # The ElasticSearch search template to use.
          },
          "endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
          "simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
          },
        },
        "vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
          "dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
            { # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
              "dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
              "filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
            },
          ],
          "datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
          "engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
          "filter": "A String", # Optional. Filter strings to be passed to the search API.
          "maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximum allowed value is 10.
        },
        "vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
          "ragResources": [ # Optional. The representation of the rag source. It can be used to specify a corpus only or rag files. Currently only one corpus, or multiple files from one corpus, is supported. In the future we may open up multiple corpora support.
            { # The definition of the Rag resource.
              "ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
              "ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
                "A String",
              ],
            },
          ],
          "ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
            "filter": { # Config for filters. # Optional. Config for filters.
              "metadataFilter": "A String", # Optional. String for metadata filtering.
              "vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
              "vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
            },
            "ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
              "llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
                "modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
              },
              "rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
                "modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
              },
            },
            "topK": 42, # Optional. The number of contexts to retrieve.
          },
          "similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
          "vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
        },
      },
      "urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
      },
    },
  ],
  "ttl": "A String", # Input only. The TTL for this resource. The expiration time is computed: now + TTL.
  "updateTime": "A String", # Output only. When the cache entry was last updated in UTC time.
  "usageMetadata": { # Metadata on the usage of the cached content. # Output only. Metadata on the usage of the cached content.
    "audioDurationSeconds": 42, # Duration of audio in seconds.
    "imageCount": 42, # Number of images.
    "textCount": 42, # Number of text characters.
    "totalTokenCount": 42, # Total number of tokens that the cached content consumes.
    "videoDurationSeconds": 42, # Duration of video in seconds.
  },
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format
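
The returned resource name can be passed to the other instance methods; a brief sketch continuing the example above (the updateMask value assumes only the TTL is being changed):

  name = cached["name"]
  # Fetch, update the TTL, then delete the cache entry.
  cached_contents = service.projects().locations().cachedContents()
  print(cached_contents.get(name=name).execute())
  cached_contents.patch(name=name, body={"ttl": "7200s"}, updateMask="ttl").execute()
  cached_contents.delete(name=name).execute()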

Returns:
  An object of the form:

    { # A resource used in LLM queries for users to explicitly specify what to cache and how to cache.
  "contents": [ # Optional. Input only. Immutable. The content to cache
    { # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message.
      "parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
        { # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
          "codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
            "outcome": "A String", # Required. Outcome of the code execution.
            "output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
          },
          "executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
            "code": "A String", # Required. The code to be executed.
            "language": "A String", # Required. Programming language of the `code`.
          },
          "fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
            "displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
            "fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
            "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
          },
          "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
            "args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
              "a_key": "", # Properties of the object.
            },
            "name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
            "partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
              { # Partial argument value of the function call.
                "boolValue": True or False, # Optional. Represents a boolean value.
                "jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
                "nullValue": "A String", # Optional. Represents a null value.
                "numberValue": 3.14, # Optional. Represents a double value.
                "stringValue": "A String", # Optional. Represents a string value.
                "willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
              },
            ],
            "willContinue": True or False, # Optional. Whether this is not the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
          },
          "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
            "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
            "parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
              { # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
                "fileData": { # URI based data for function response. # URI based data.
                  "displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
                  "fileUri": "A String", # Required. URI.
                  "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
                },
                "inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
                  "data": "A String", # Required. Raw bytes.
                  "displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
                  "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
                },
              },
            ],
            "response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
              "a_key": "", # Properties of the object.
            },
          },
          "inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
            "data": "A String", # Required. The raw bytes of the data.
            "displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
            "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
          },
          "mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
            "level": "A String", # The tokenization quality used for given media.
          },
          "text": "A String", # Optional. The text content of the part.
          "thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
          "thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
          "videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
            "endOffset": "A String", # Optional. The end offset of the video.
            "fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
            "startOffset": "A String", # Optional. The start offset of the video.
          },
        },
      ],
      "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
    },
  ],
  "createTime": "A String", # Output only. Creation time of the cache entry.
  "displayName": "A String", # Optional. Immutable. The user-generated meaningful display name of the cached content.
  "encryptionSpec": { # Represents a customer-managed encryption key specification that can be applied to a Vertex AI resource. # Input only. Immutable. Customer-managed encryption key spec for a `CachedContent`. If set, this `CachedContent` and all its sub-resources will be secured by this key.
    "kmsKeyName": "A String", # Required. Resource name of the Cloud KMS key used to protect the resource. The Cloud KMS key must be in the same region as the resource. It must have the format `projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}`.
  },
  "expireTime": "A String", # Timestamp of when this resource is considered expired. This is *always* provided on output, regardless of what was sent on input.
  "model": "A String", # Immutable. The name of the `Model` to use for cached content. Currently, only the published Gemini base models are supported, in form of projects/{PROJECT}/locations/{LOCATION}/publishers/google/models/{MODEL}
  "name": "A String", # Immutable. Identifier. The server-generated resource name of the cached content Format: projects/{project}/locations/{location}/cachedContents/{cached_content}
  "systemInstruction": { # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message. # Optional. Input only. Immutable. Developer set system instruction. Currently, text only
    "parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
      { # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
        "codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
          "outcome": "A String", # Required. Outcome of the code execution.
          "output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
        },
        "executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
          "code": "A String", # Required. The code to be executed.
          "language": "A String", # Required. Programming language of the `code`.
        },
        "fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
          "displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
          "fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
          "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
        },
        "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
          "args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
            "a_key": "", # Properties of the object.
          },
          "name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
          "partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
            { # Partial argument value of the function call.
              "boolValue": True or False, # Optional. Represents a boolean value.
              "jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
              "nullValue": "A String", # Optional. Represents a null value.
              "numberValue": 3.14, # Optional. Represents a double value.
              "stringValue": "A String", # Optional. Represents a string value.
              "willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
            },
          ],
          "willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
        },
        "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
          "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
          "parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
            { # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
              "fileData": { # URI based data for function response. # URI based data.
                "displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
                "fileUri": "A String", # Required. URI.
                "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
              },
              "inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
                "data": "A String", # Required. Raw bytes.
                "displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
                "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
              },
            },
          ],
          "response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
            "a_key": "", # Properties of the object.
          },
        },
        "inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
          "data": "A String", # Required. The raw bytes of the data.
          "displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
          "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
        },
        "mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
          "level": "A String", # The tokenization quality used for given media.
        },
        "text": "A String", # Optional. The text content of the part.
        "thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
        "thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
        "videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
          "endOffset": "A String", # Optional. The end offset of the video.
          "fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
          "startOffset": "A String", # Optional. The start offset of the video.
        },
      },
    ],
    "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
  },
  "toolConfig": { # Tool config. This config is shared for all tools provided in the request. # Optional. Input only. Immutable. Tool config. This config is shared for all tools
    "functionCallingConfig": { # Function calling config. # Optional. Function calling config.
      "allowedFunctionNames": [ # Optional. Function names to call. Only set when the Mode is ANY. Function names should match [FunctionDeclaration.name]. With mode set to ANY, model will predict a function call from the set of function names provided.
        "A String",
      ],
      "mode": "A String", # Optional. Function calling mode.
      "streamFunctionCallArguments": True or False, # Optional. When set to true, arguments of a single function call will be streamed out in multiple parts/contents/responses. Partial parameter results will be returned in the [FunctionCall.partial_args] field.
    },
    "retrievalConfig": { # Retrieval config. # Optional. Retrieval config.
      "languageCode": "A String", # The language code of the user.
      "latLng": { # An object that represents a latitude/longitude pair. This is expressed as a pair of doubles to represent degrees latitude and degrees longitude. Unless specified otherwise, this object must conform to the WGS84 standard. Values must be within normalized ranges. # The location of the user.
        "latitude": 3.14, # The latitude in degrees. It must be in the range [-90.0, +90.0].
        "longitude": 3.14, # The longitude in degrees. It must be in the range [-180.0, +180.0].
      },
    },
  },
  "tools": [ # Optional. Input only. Immutable. A list of `Tools` the model may use to generate the next response
    { # Tool details that the model may use to generate response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g FunctionDeclaration, Retrieval or GoogleSearchRetrieval).
      "codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
      },
      "computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
        "environment": "A String", # Required. The environment being operated.
        "excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
          "A String",
        ],
      },
      "enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
        "blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
        "excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
          "A String",
        ],
      },
      "functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
        { # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
          "description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
          "name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
          "parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
            "additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
            "anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
              # Object with schema name: GoogleCloudAiplatformV1Schema
            ],
            "default": "", # Optional. Default value to use if the field is not specified.
            "defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
              "a_key": # Object with schema name: GoogleCloudAiplatformV1Schema
            },
            "description": "A String", # Optional. Description of the schema.
            "enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
              "A String",
            ],
            "example": "", # Optional. Example of an instance of this schema.
            "format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
            "items": # Object with schema name: GoogleCloudAiplatformV1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
            "maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
            "maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
            "maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
            "maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
            "minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
            "minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
            "minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
            "minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
            "nullable": True or False, # Optional. Indicates if the value of this field can be null.
            "pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
            "properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
              "a_key": # Object with schema name: GoogleCloudAiplatformV1Schema
            },
            "propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
              "A String",
            ],
            "ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
            "required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
              "A String",
            ],
            "title": "A String", # Optional. Title for the schema.
            "type": "A String", # Optional. Data type of the schema field.
          },
          "parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
          "response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
            "additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
            "anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
              # Object with schema name: GoogleCloudAiplatformV1Schema
            ],
            "default": "", # Optional. Default value to use if the field is not specified.
            "defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
              "a_key": # Object with schema name: GoogleCloudAiplatformV1Schema
            },
            "description": "A String", # Optional. Description of the schema.
            "enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
              "A String",
            ],
            "example": "", # Optional. Example of an instance of this schema.
            "format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
            "items": # Object with schema name: GoogleCloudAiplatformV1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
            "maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
            "maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
            "maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
            "maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
            "minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
            "minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
            "minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
            "minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
            "nullable": True or False, # Optional. Indicates if the value of this field can be null.
            "pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
            "properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
              "a_key": # Object with schema name: GoogleCloudAiplatformV1Schema
            },
            "propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
              "A String",
            ],
            "ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
            "required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
              "A String",
            ],
            "title": "A String", # Optional. Title for the schema.
            "type": "A String", # Optional. Data type of the schema field.
          },
          "responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
        },
      ],
      "googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
        "enableWidget": True or False, # Optional. If true, include the widget context token in the response.
      },
      "googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
        "blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
        "excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
          "A String",
        ],
      },
      "googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
        "dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
          "dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
          "mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
        },
      },
      "parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
        "apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
        "customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
          "a_key": "", # Properties of the object.
        },
      },
      "retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
        "disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
        "externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
          "apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
            "apiKeyConfig": { # The API secret. # The API secret.
              "apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
              "apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
            },
          },
          "apiSpec": "A String", # The API spec that the external API implements.
          "authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
            "apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
              "apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
              "apiKeyString": "A String", # Optional. The API key to be used in the request directly.
              "httpElementLocation": "A String", # Optional. The location of the API key.
              "name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
            },
            "authType": "A String", # Type of auth scheme.
            "googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
              "serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
            },
            "httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
              "credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
            },
            "oauthConfig": { # Config for user oauth. # Config for user oauth.
              "accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
              "serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
            },
            "oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
              "idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
              "serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
            },
          },
          "elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
            "index": "A String", # The ElasticSearch index to use.
            "numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
            "searchTemplate": "A String", # The ElasticSearch search template to use.
          },
          "endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
          "simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
          },
        },
        "vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
          "dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
            { # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
              "dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
              "filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
            },
          ],
          "datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
          "engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
          "filter": "A String", # Optional. Filter strings to be passed to the search API.
          "maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
        },
        "vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
          "ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
            { # The definition of the Rag resource.
              "ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
              "ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
                "A String",
              ],
            },
          ],
          "ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
            "filter": { # Config for filters. # Optional. Config for filters.
              "metadataFilter": "A String", # Optional. String for metadata filtering.
              "vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
              "vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
            },
            "ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
              "llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
                "modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
              },
              "rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
                "modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
              },
            },
            "topK": 42, # Optional. The number of contexts to retrieve.
          },
          "similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
          "vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
        },
      },
      "urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
      },
    },
  ],
  "ttl": "A String", # Input only. The TTL for this resource. The expiration time is computed: now + TTL.
  "updateTime": "A String", # Output only. When the cache entry was last updated in UTC time.
  "usageMetadata": { # Metadata on the usage of the cached content. # Output only. Metadata on the usage of the cached content.
    "audioDurationSeconds": 42, # Duration of audio in seconds.
    "imageCount": 42, # Number of images.
    "textCount": 42, # Number of text characters.
    "totalTokenCount": 42, # Total number of tokens that the cached content consumes.
    "videoDurationSeconds": 42, # Duration of video in seconds.
  },
}
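
For orientation, the following is a minimal sketch of invoking this create() method through the google-api-python-client discovery interface. The service name "aiplatform", the regional discovery URL, and the project, location, model, and TTL values are illustrative assumptions rather than part of this reference, and Application Default Credentials are assumed to be configured.

  from googleapiclient.discovery import build

  REGION = "us-central1"   # hypothetical region
  PROJECT = "my-project"   # hypothetical project ID
  PARENT = f"projects/{PROJECT}/locations/{REGION}"

  # Vertex AI uses regional endpoints, so point discovery at the region.
  service = build(
      "aiplatform",
      "v1",
      discoveryServiceUrl=(
          f"https://{REGION}-aiplatform.googleapis.com/$discovery/rest?version=v1"
      ),
  )

  body = {
      # Hypothetical published Gemini base model; substitute a model you have access to.
      "model": f"{PARENT}/publishers/google/models/gemini-2.0-flash-001",
      "displayName": "example-cache",
      "ttl": "3600s",  # Input only; the server computes expireTime = now + TTL.
      "contents": [
          {
              "role": "user",
              "parts": [{"text": "A large document to cache goes here."}],
          }
      ],
  }

  cached_content = (
      service.projects()
      .locations()
      .cachedContents()
      .create(parent=PARENT, body=body)
      .execute()
  )
  print(cached_content["name"])  # projects/.../locations/.../cachedContents/...
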
delete(name, x__xgafv=None)
Deletes cached content

Args:
  name: string, Required. The resource name referring to the cached content (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); }
}
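
A corresponding minimal sketch for delete(), again assuming the google-api-python-client discovery client, a regional endpoint, Application Default Credentials, and a placeholder resource name.

  from googleapiclient.discovery import build

  REGION = "us-central1"   # hypothetical region
  service = build(
      "aiplatform",
      "v1",
      discoveryServiceUrl=(
          f"https://{REGION}-aiplatform.googleapis.com/$discovery/rest?version=v1"
      ),
  )

  # Placeholder resource name; use the "name" returned by create() or list().
  name = "projects/my-project/locations/us-central1/cachedContents/1234567890"
  service.projects().locations().cachedContents().delete(name=name).execute()
  # A successful call returns an empty object ({}).
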
get(name, x__xgafv=None)
Gets cached content configurations

Args:
  name: string, Required. The resource name referring to the cached content (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A resource used in LLM queries for users to explicitly specify what to cache and how to cache.
  "contents": [ # Optional. Input only. Immutable. The content to cache
    { # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message.
      "parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
        { # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
          "codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
            "outcome": "A String", # Required. Outcome of the code execution.
            "output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
          },
          "executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
            "code": "A String", # Required. The code to be executed.
            "language": "A String", # Required. Programming language of the `code`.
          },
          "fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
            "displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
            "fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
            "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
          },
          "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
            "args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
              "a_key": "", # Properties of the object.
            },
            "name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
            "partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
              { # Partial argument value of the function call.
                "boolValue": True or False, # Optional. Represents a boolean value.
                "jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
                "nullValue": "A String", # Optional. Represents a null value.
                "numberValue": 3.14, # Optional. Represents a double value.
                "stringValue": "A String", # Optional. Represents a string value.
                "willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
              },
            ],
            "willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
          },
          "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
            "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
            "parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
              { # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
                "fileData": { # URI based data for function response. # URI based data.
                  "displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
                  "fileUri": "A String", # Required. URI.
                  "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
                },
                "inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
                  "data": "A String", # Required. Raw bytes.
                  "displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
                  "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
                },
              },
            ],
            "response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
              "a_key": "", # Properties of the object.
            },
          },
          "inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
            "data": "A String", # Required. The raw bytes of the data.
            "displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
            "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
          },
          "mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
            "level": "A String", # The tokenization quality used for given media.
          },
          "text": "A String", # Optional. The text content of the part.
          "thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
          "thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
          "videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
            "endOffset": "A String", # Optional. The end offset of the video.
            "fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
            "startOffset": "A String", # Optional. The start offset of the video.
          },
        },
      ],
      "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
    },
  ],
  "createTime": "A String", # Output only. Creation time of the cache entry.
  "displayName": "A String", # Optional. Immutable. The user-generated meaningful display name of the cached content.
  "encryptionSpec": { # Represents a customer-managed encryption key specification that can be applied to a Vertex AI resource. # Input only. Immutable. Customer-managed encryption key spec for a `CachedContent`. If set, this `CachedContent` and all its sub-resources will be secured by this key.
    "kmsKeyName": "A String", # Required. Resource name of the Cloud KMS key used to protect the resource. The Cloud KMS key must be in the same region as the resource. It must have the format `projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}`.
  },
  "expireTime": "A String", # Timestamp of when this resource is considered expired. This is *always* provided on output, regardless of what was sent on input.
  "model": "A String", # Immutable. The name of the `Model` to use for cached content. Currently, only the published Gemini base models are supported, in form of projects/{PROJECT}/locations/{LOCATION}/publishers/google/models/{MODEL}
  "name": "A String", # Immutable. Identifier. The server-generated resource name of the cached content Format: projects/{project}/locations/{location}/cachedContents/{cached_content}
  "systemInstruction": { # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message. # Optional. Input only. Immutable. Developer set system instruction. Currently, text only
    "parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
      { # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
        "codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
          "outcome": "A String", # Required. Outcome of the code execution.
          "output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
        },
        "executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
          "code": "A String", # Required. The code to be executed.
          "language": "A String", # Required. Programming language of the `code`.
        },
        "fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
          "displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
          "fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
          "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
        },
        "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
          "args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
            "a_key": "", # Properties of the object.
          },
          "name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
          "partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
            { # Partial argument value of the function call.
              "boolValue": True or False, # Optional. Represents a boolean value.
              "jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
              "nullValue": "A String", # Optional. Represents a null value.
              "numberValue": 3.14, # Optional. Represents a double value.
              "stringValue": "A String", # Optional. Represents a string value.
              "willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
            },
          ],
          "willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
        },
        "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
          "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
          "parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
            { # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
              "fileData": { # URI based data for function response. # URI based data.
                "displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
                "fileUri": "A String", # Required. URI.
                "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
              },
              "inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
                "data": "A String", # Required. Raw bytes.
                "displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
                "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
              },
            },
          ],
          "response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
            "a_key": "", # Properties of the object.
          },
        },
        "inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
          "data": "A String", # Required. The raw bytes of the data.
          "displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
          "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
        },
        "mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
          "level": "A String", # The tokenization quality used for given media.
        },
        "text": "A String", # Optional. The text content of the part.
        "thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
        "thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
        "videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
          "endOffset": "A String", # Optional. The end offset of the video.
          "fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
          "startOffset": "A String", # Optional. The start offset of the video.
        },
      },
    ],
    "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
  },
  "toolConfig": { # Tool config. This config is shared for all tools provided in the request. # Optional. Input only. Immutable. Tool config. This config is shared for all tools
    "functionCallingConfig": { # Function calling config. # Optional. Function calling config.
      "allowedFunctionNames": [ # Optional. Function names to call. Only set when the Mode is ANY. Function names should match [FunctionDeclaration.name]. With mode set to ANY, model will predict a function call from the set of function names provided.
        "A String",
      ],
      "mode": "A String", # Optional. Function calling mode.
      "streamFunctionCallArguments": True or False, # Optional. When set to true, arguments of a single function call will be streamed out in multiple parts/contents/responses. Partial parameter results will be returned in the [FunctionCall.partial_args] field.
    },
    "retrievalConfig": { # Retrieval config. # Optional. Retrieval config.
      "languageCode": "A String", # The language code of the user.
      "latLng": { # An object that represents a latitude/longitude pair. This is expressed as a pair of doubles to represent degrees latitude and degrees longitude. Unless specified otherwise, this object must conform to the WGS84 standard. Values must be within normalized ranges. # The location of the user.
        "latitude": 3.14, # The latitude in degrees. It must be in the range [-90.0, +90.0].
        "longitude": 3.14, # The longitude in degrees. It must be in the range [-180.0, +180.0].
      },
    },
  },
  "tools": [ # Optional. Input only. Immutable. A list of `Tools` the model may use to generate the next response
    { # Tool details that the model may use to generate response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g FunctionDeclaration, Retrieval or GoogleSearchRetrieval).
      "codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
      },
      "computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
        "environment": "A String", # Required. The environment being operated.
        "excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
          "A String",
        ],
      },
      "enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
        "blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
        "excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
          "A String",
        ],
      },
      "functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
        { # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
          "description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
          "name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
          "parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
            "additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
            "anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
              # Object with schema name: GoogleCloudAiplatformV1Schema
            ],
            "default": "", # Optional. Default value to use if the field is not specified.
            "defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
              "a_key": # Object with schema name: GoogleCloudAiplatformV1Schema
            },
            "description": "A String", # Optional. Description of the schema.
            "enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
              "A String",
            ],
            "example": "", # Optional. Example of an instance of this schema.
            "format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
            "items": # Object with schema name: GoogleCloudAiplatformV1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
            "maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
            "maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
            "maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
            "maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
            "minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
            "minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
            "minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
            "minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
            "nullable": True or False, # Optional. Indicates if the value of this field can be null.
            "pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
            "properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
              "a_key": # Object with schema name: GoogleCloudAiplatformV1Schema
            },
            "propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
              "A String",
            ],
            "ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
            "required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
              "A String",
            ],
            "title": "A String", # Optional. Title for the schema.
            "type": "A String", # Optional. Data type of the schema field.
          },
          "parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
          "response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
            "additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
            "anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
              # Object with schema name: GoogleCloudAiplatformV1Schema
            ],
            "default": "", # Optional. Default value to use if the field is not specified.
            "defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
              "a_key": # Object with schema name: GoogleCloudAiplatformV1Schema
            },
            "description": "A String", # Optional. Description of the schema.
            "enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
              "A String",
            ],
            "example": "", # Optional. Example of an instance of this schema.
            "format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
            "items": # Object with schema name: GoogleCloudAiplatformV1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
            "maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
            "maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
            "maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
            "maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
            "minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
            "minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
            "minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
            "minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
            "nullable": True or False, # Optional. Indicates if the value of this field can be null.
            "pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
            "properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
              "a_key": # Object with schema name: GoogleCloudAiplatformV1Schema
            },
            "propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
              "A String",
            ],
            "ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
            "required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
              "A String",
            ],
            "title": "A String", # Optional. Title for the schema.
            "type": "A String", # Optional. Data type of the schema field.
          },
          "responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
        },
      ],
      "googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
        "enableWidget": True or False, # Optional. If true, include the widget context token in the response.
      },
      "googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
        "blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
        "excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
          "A String",
        ],
      },
      "googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
        "dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
          "dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
          "mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
        },
      },
      "parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
        "apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
        "customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
          "a_key": "", # Properties of the object.
        },
      },
      "retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
        "disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
        "externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
          "apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
            "apiKeyConfig": { # The API secret. # The API secret.
              "apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
              "apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
            },
          },
          "apiSpec": "A String", # The API spec that the external API implements.
          "authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
            "apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
              "apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
              "apiKeyString": "A String", # Optional. The API key to be used in the request directly.
              "httpElementLocation": "A String", # Optional. The location of the API key.
              "name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
            },
            "authType": "A String", # Type of auth scheme.
            "googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
              "serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
            },
            "httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
              "credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
            },
            "oauthConfig": { # Config for user oauth. # Config for user oauth.
              "accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
              "serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
            },
            "oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
              "idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
              "serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
            },
          },
          "elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
            "index": "A String", # The ElasticSearch index to use.
            "numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
            "searchTemplate": "A String", # The ElasticSearch search template to use.
          },
          "endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
          "simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
          },
        },
        "vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
          "dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
            { # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
              "dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
              "filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
            },
          ],
          "datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
          "engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
          "filter": "A String", # Optional. Filter strings to be passed to the search API.
          "maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
        },
        "vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
          "ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
            { # The definition of the Rag resource.
              "ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
              "ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
                "A String",
              ],
            },
          ],
          "ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
            "filter": { # Config for filters. # Optional. Config for filters.
              "metadataFilter": "A String", # Optional. String for metadata filtering.
              "vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
              "vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
            },
            "ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
              "llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
                "modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
              },
              "rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
                "modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
              },
            },
            "topK": 42, # Optional. The number of contexts to retrieve.
          },
          "similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
          "vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
        },
      },
      "urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
      },
    },
  ],
  "ttl": "A String", # Input only. The TTL for this resource. The expiration time is computed: now + TTL.
  "updateTime": "A String", # Output only. When the cache entry was last updated in UTC time.
  "usageMetadata": { # Metadata on the usage of the cached content. # Output only. Metadata on the usage of the cached content.
    "audioDurationSeconds": 42, # Duration of audio in seconds.
    "imageCount": 42, # Number of images.
    "textCount": 42, # Number of text characters.
    "totalTokenCount": 42, # Total number of tokens that the cached content consumes.
    "videoDurationSeconds": 42, # Duration of video in seconds.
  },
}
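
As a concrete, hedged illustration of how this resource is used from the discovery-based Python client, the sketch below creates a CachedContent via the `create` method documented above and prints its server-generated name. It is not the canonical usage: the service name ("aiplatform", version "v1"), the regional endpoint, the project/location values, the model ID, the TTL value, and the reliance on Application Default Credentials are all assumptions, and only a small subset of the request body fields is populated.

  from googleapiclient import discovery

  # Hypothetical project and region; substitute your own values.
  PROJECT = "my-project"
  LOCATION = "us-central1"
  PARENT = f"projects/{PROJECT}/locations/{LOCATION}"

  # Vertex AI is typically served from regional endpoints, so point the
  # discovery client at the assumed region. Credentials fall back to
  # Application Default Credentials if none are passed explicitly.
  service = discovery.build(
      "aiplatform",
      "v1",
      client_options={"api_endpoint": f"https://{LOCATION}-aiplatform.googleapis.com"},
  )

  body = {
      # Full model resource name, as described by the `model` field above.
      "model": f"{PARENT}/publishers/google/models/gemini-2.0-flash-001",  # example model ID
      # Content to cache; in practice this is usually a large document or media payload.
      "contents": [
          {"role": "user", "parts": [{"text": "Large document text to cache..."}]},
      ],
      # Duration string; the entry expires one hour after creation (now + TTL).
      "ttl": "3600s",
  }

  cached_content = (
      service.projects()
      .locations()
      .cachedContents()
      .create(parent=PARENT, body=body)
      .execute()
  )
  print(cached_content["name"])  # projects/.../locations/.../cachedContents/...
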
list(parent, pageSize=None, pageToken=None, x__xgafv=None)
Lists cached contents in a project

Args:
  parent: string, Required. The parent, which owns this collection of cached contents. (required)
  pageSize: integer, Optional. The maximum number of cached contents to return. The service may return fewer than this value. If unspecified, some default (under maximum) number of items will be returned. The maximum value is 1000; values above 1000 will be coerced to 1000.
  pageToken: string, Optional. A page token, received from a previous `ListCachedContents` call. Provide this to retrieve the subsequent page. When paginating, all other parameters provided to `ListCachedContents` must match the call that provided the page token.
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format
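
Before the response schema, here is a sketch of paging through cached contents with this method and `list_next`. It assumes the same discovery-based client setup as the `create` example above (service name "aiplatform", a regional endpoint, and a hypothetical parent); the loop itself is the standard googleapiclient paging idiom.

  from googleapiclient import discovery

  LOCATION = "us-central1"  # hypothetical region
  PARENT = f"projects/my-project/locations/{LOCATION}"

  service = discovery.build(
      "aiplatform",
      "v1",
      client_options={"api_endpoint": f"https://{LOCATION}-aiplatform.googleapis.com"},
  )
  caches = service.projects().locations().cachedContents()

  # list() returns one page; list_next() builds the request for the next page
  # from the previous request/response pair, or returns None when done.
  request = caches.list(parent=PARENT, pageSize=100)
  while request is not None:
      response = request.execute()
      for item in response.get("cachedContents", []):
          print(item["name"], item.get("expireTime"))
      request = caches.list_next(previous_request=request, previous_response=response)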

Returns:
  An object of the form:

    { # Response with a list of CachedContents.
  "cachedContents": [ # List of cached contents.
    { # A resource used in LLM queries for users to explicitly specify what to cache and how to cache.
      "contents": [ # Optional. Input only. Immutable. The content to cache
        { # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message.
          "parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
            { # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
              "codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
                "outcome": "A String", # Required. Outcome of the code execution.
                "output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
              },
              "executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
                "code": "A String", # Required. The code to be executed.
                "language": "A String", # Required. Programming language of the `code`.
              },
              "fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
                "displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
                "fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
                "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
              },
              "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
                "args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
                  "a_key": "", # Properties of the object.
                },
                "name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
                "partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
                  { # Partial argument value of the function call.
                    "boolValue": True or False, # Optional. Represents a boolean value.
                    "jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
                    "nullValue": "A String", # Optional. Represents a null value.
                    "numberValue": 3.14, # Optional. Represents a double value.
                    "stringValue": "A String", # Optional. Represents a string value.
                    "willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
                  },
                ],
                "willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
              },
              "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
                "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
                "parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
                  { # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
                    "fileData": { # URI based data for function response. # URI based data.
                      "displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
                      "fileUri": "A String", # Required. URI.
                      "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
                    },
                    "inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
                      "data": "A String", # Required. Raw bytes.
                      "displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
                      "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
                    },
                  },
                ],
                "response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
                  "a_key": "", # Properties of the object.
                },
              },
              "inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
                "data": "A String", # Required. The raw bytes of the data.
                "displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
                "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
              },
              "mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
                "level": "A String", # The tokenization quality used for given media.
              },
              "text": "A String", # Optional. The text content of the part.
              "thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
              "thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
              "videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
                "endOffset": "A String", # Optional. The end offset of the video.
                "fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
                "startOffset": "A String", # Optional. The start offset of the video.
              },
            },
          ],
          "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
        },
      ],
      "createTime": "A String", # Output only. Creation time of the cache entry.
      "displayName": "A String", # Optional. Immutable. The user-generated meaningful display name of the cached content.
      "encryptionSpec": { # Represents a customer-managed encryption key specification that can be applied to a Vertex AI resource. # Input only. Immutable. Customer-managed encryption key spec for a `CachedContent`. If set, this `CachedContent` and all its sub-resources will be secured by this key.
        "kmsKeyName": "A String", # Required. Resource name of the Cloud KMS key used to protect the resource. The Cloud KMS key must be in the same region as the resource. It must have the format `projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}`.
      },
      "expireTime": "A String", # Timestamp of when this resource is considered expired. This is *always* provided on output, regardless of what was sent on input.
      "model": "A String", # Immutable. The name of the `Model` to use for cached content. Currently, only the published Gemini base models are supported, in form of projects/{PROJECT}/locations/{LOCATION}/publishers/google/models/{MODEL}
      "name": "A String", # Immutable. Identifier. The server-generated resource name of the cached content Format: projects/{project}/locations/{location}/cachedContents/{cached_content}
      "systemInstruction": { # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message. # Optional. Input only. Immutable. Developer set system instruction. Currently, text only
        "parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
          { # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
            "codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
              "outcome": "A String", # Required. Outcome of the code execution.
              "output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
            },
            "executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
              "code": "A String", # Required. The code to be executed.
              "language": "A String", # Required. Programming language of the `code`.
            },
            "fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
              "displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
              "fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
              "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
            },
            "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
              "args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
                "a_key": "", # Properties of the object.
              },
              "name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
              "partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
                { # Partial argument value of the function call.
                  "boolValue": True or False, # Optional. Represents a boolean value.
                  "jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
                  "nullValue": "A String", # Optional. Represents a null value.
                  "numberValue": 3.14, # Optional. Represents a double value.
                  "stringValue": "A String", # Optional. Represents a string value.
                  "willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
                },
              ],
              "willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
            },
            "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
              "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
              "parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
                { # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
                  "fileData": { # URI based data for function response. # URI based data.
                    "displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
                    "fileUri": "A String", # Required. URI.
                    "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
                  },
                  "inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
                    "data": "A String", # Required. Raw bytes.
                    "displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
                    "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
                  },
                },
              ],
              "response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
                "a_key": "", # Properties of the object.
              },
            },
            "inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
              "data": "A String", # Required. The raw bytes of the data.
              "displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
              "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
            },
            "mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
              "level": "A String", # The tokenization quality used for given media.
            },
            "text": "A String", # Optional. The text content of the part.
            "thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
            "thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
            "videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
              "endOffset": "A String", # Optional. The end offset of the video.
              "fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
              "startOffset": "A String", # Optional. The start offset of the video.
            },
          },
        ],
        "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
      },
      "toolConfig": { # Tool config. This config is shared for all tools provided in the request. # Optional. Input only. Immutable. Tool config. This config is shared for all tools
        "functionCallingConfig": { # Function calling config. # Optional. Function calling config.
          "allowedFunctionNames": [ # Optional. Function names to call. Only set when the Mode is ANY. Function names should match [FunctionDeclaration.name]. With mode set to ANY, model will predict a function call from the set of function names provided.
            "A String",
          ],
          "mode": "A String", # Optional. Function calling mode.
          "streamFunctionCallArguments": True or False, # Optional. When set to true, arguments of a single function call will be streamed out in multiple parts/contents/responses. Partial parameter results will be returned in the [FunctionCall.partial_args] field.
        },
        "retrievalConfig": { # Retrieval config. # Optional. Retrieval config.
          "languageCode": "A String", # The language code of the user.
          "latLng": { # An object that represents a latitude/longitude pair. This is expressed as a pair of doubles to represent degrees latitude and degrees longitude. Unless specified otherwise, this object must conform to the WGS84 standard. Values must be within normalized ranges. # The location of the user.
            "latitude": 3.14, # The latitude in degrees. It must be in the range [-90.0, +90.0].
            "longitude": 3.14, # The longitude in degrees. It must be in the range [-180.0, +180.0].
          },
        },
      },
      "tools": [ # Optional. Input only. Immutable. A list of `Tools` the model may use to generate the next response
        { # Tool details that the model may use to generate response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g FunctionDeclaration, Retrieval or GoogleSearchRetrieval).
          "codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
          },
          "computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
            "environment": "A String", # Required. The environment being operated.
            "excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
              "A String",
            ],
          },
          "enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
            "blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
            "excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
              "A String",
            ],
          },
          "functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
            { # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
              "description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
              "name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
              "parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
                "additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
                "anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
                  # Object with schema name: GoogleCloudAiplatformV1Schema
                ],
                "default": "", # Optional. Default value to use if the field is not specified.
                "defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
                  "a_key": # Object with schema name: GoogleCloudAiplatformV1Schema
                },
                "description": "A String", # Optional. Description of the schema.
                "enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
                  "A String",
                ],
                "example": "", # Optional. Example of an instance of this schema.
                "format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
                "items": # Object with schema name: GoogleCloudAiplatformV1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
                "maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
                "maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
                "maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
                "maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
                "minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
                "minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
                "minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
                "minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
                "nullable": True or False, # Optional. Indicates if the value of this field can be null.
                "pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
                "properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
                  "a_key": # Object with schema name: GoogleCloudAiplatformV1Schema
                },
                "propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
                  "A String",
                ],
                "ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
                "required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
                  "A String",
                ],
                "title": "A String", # Optional. Title for the schema.
                "type": "A String", # Optional. Data type of the schema field.
              },
              "parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
              "response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
                "additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
                "anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
                  # Object with schema name: GoogleCloudAiplatformV1Schema
                ],
                "default": "", # Optional. Default value to use if the field is not specified.
                "defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
                  "a_key": # Object with schema name: GoogleCloudAiplatformV1Schema
                },
                "description": "A String", # Optional. Description of the schema.
                "enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
                  "A String",
                ],
                "example": "", # Optional. Example of an instance of this schema.
                "format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
                "items": # Object with schema name: GoogleCloudAiplatformV1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
                "maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
                "maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
                "maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
                "maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
                "minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
                "minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
                "minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
                "minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
                "nullable": True or False, # Optional. Indicates if the value of this field can be null.
                "pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
                "properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
                  "a_key": # Object with schema name: GoogleCloudAiplatformV1Schema
                },
                "propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
                  "A String",
                ],
                "ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
                "required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
                  "A String",
                ],
                "title": "A String", # Optional. Title for the schema.
                "type": "A String", # Optional. Data type of the schema field.
              },
              "responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
            },
          ],
          "googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
            "enableWidget": True or False, # Optional. If true, include the widget context token in the response.
          },
          "googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
            "blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
            "excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
              "A String",
            ],
          },
          "googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
            "dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
              "dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
              "mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
            },
          },
          "parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
            "apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
            "customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
              "a_key": "", # Properties of the object.
            },
          },
          "retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
            "disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
            "externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
              "apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
                "apiKeyConfig": { # The API secret. # The API secret.
                  "apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
                  "apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
                },
              },
              "apiSpec": "A String", # The API spec that the external API implements.
              "authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
                "apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
                  "apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
                  "apiKeyString": "A String", # Optional. The API key to be used in the request directly.
                  "httpElementLocation": "A String", # Optional. The location of the API key.
                  "name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
                },
                "authType": "A String", # Type of auth scheme.
                "googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
                  "serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
                },
                "httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
                  "credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
                },
                "oauthConfig": { # Config for user oauth. # Config for user oauth.
                  "accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
                  "serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
                },
                "oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
                  "idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
                  "serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
                },
              },
              "elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
                "index": "A String", # The ElasticSearch index to use.
                "numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
                "searchTemplate": "A String", # The ElasticSearch search template to use.
              },
              "endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
              "simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
              },
            },
            "vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
              "dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
                { # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
                  "dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
                  "filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
                },
              ],
              "datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
              "engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
              "filter": "A String", # Optional. Filter strings to be passed to the search API.
              "maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
            },
            "vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
              "ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
                { # The definition of the Rag resource.
                  "ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
                  "ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
                    "A String",
                  ],
                },
              ],
              "ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
                "filter": { # Config for filters. # Optional. Config for filters.
                  "metadataFilter": "A String", # Optional. String for metadata filtering.
                  "vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
                  "vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
                },
                "ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
                  "llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
                    "modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
                  },
                  "rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
                    "modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
                  },
                },
                "topK": 42, # Optional. The number of contexts to retrieve.
              },
              "similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
              "vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
            },
          },
          "urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
          },
        },
      ],
      "ttl": "A String", # Input only. The TTL for this resource. The expiration time is computed: now + TTL.
      "updateTime": "A String", # Output only. When the cache entry was last updated in UTC time.
      "usageMetadata": { # Metadata on the usage of the cached content. # Output only. Metadata on the usage of the cached content.
        "audioDurationSeconds": 42, # Duration of audio in seconds.
        "imageCount": 42, # Number of images.
        "textCount": 42, # Number of text characters.
        "totalTokenCount": 42, # Total number of tokens that the cached content consumes.
        "videoDurationSeconds": 42, # Duration of video in seconds.
      },
    },
  ],
  "nextPageToken": "A String", # A token, which can be sent as `page_token` to retrieve the next page. If this field is omitted, there are no subsequent pages.
}
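
For reference, a minimal sketch (not part of the generated reference above) of reading one page of this response with the Python client. The project, location, page size, and the regional `api_endpoint` override are placeholder assumptions of this sketch.

from googleapiclient import discovery

# Placeholder project and region; Vertex AI is typically addressed through a
# regional endpoint, configured here via client_options (an assumption of this
# sketch, not something stated in the reference above).
PROJECT = "my-project"
LOCATION = "us-central1"

service = discovery.build(
    "aiplatform",
    "v1",
    client_options={"api_endpoint": f"https://{LOCATION}-aiplatform.googleapis.com"},
)

parent = f"projects/{PROJECT}/locations/{LOCATION}"
response = (
    service.projects()
    .locations()
    .cachedContents()
    .list(parent=parent, pageSize=10)
    .execute()
)

# Each entry in the returned list follows the CachedContent shape documented above.
for cc in response.get("cachedContents", []):
    usage = cc.get("usageMetadata", {})
    tool_kinds = [key for tool in cc.get("tools", []) for key in tool]
    print(cc["name"], cc.get("displayName"), cc.get("expireTime"),
          usage.get("totalTokenCount"), tool_kinds)
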
list_next()
Retrieves the next page of results.

        Args:
          previous_request: The request for the previous page. (required)
          previous_response: The response from the request for the previous page. (required)

        Returns:
          A request object that you can call 'execute()' on to request the next
          page. Returns None if there are no more items in the collection.
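
A minimal pagination sketch combining list() and list_next(), reusing the placeholder `service` client and `parent` value from the sketch above; list_next() returns None once no further pages remain.

# Walk every page of cached contents; `service` and `parent` are the
# placeholder client and parent resource from the earlier sketch.
request = service.projects().locations().cachedContents().list(
    parent=parent, pageSize=50
)
while request is not None:
    response = request.execute()
    for cc in response.get("cachedContents", []):
        print(cc["name"], cc.get("expireTime"))
    # list_next() returns None when there are no more items in the collection.
    request = service.projects().locations().cachedContents().list_next(
        previous_request=request, previous_response=response
    )
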
        
patch(name, body=None, updateMask=None, x__xgafv=None)
Updates cached content configurations

Args:
  name: string, Immutable. Identifier. The server-generated resource name of the cached content Format: projects/{project}/locations/{location}/cachedContents/{cached_content} (required)
  body: object, The request body.
    The object takes the form of:

{ # A resource used in LLM queries for users to explicitly specify what to cache and how to cache.
  "contents": [ # Optional. Input only. Immutable. The content to cache
    { # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message.
      "parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
        { # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
          "codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
            "outcome": "A String", # Required. Outcome of the code execution.
            "output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
          },
          "executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
            "code": "A String", # Required. The code to be executed.
            "language": "A String", # Required. Programming language of the `code`.
          },
          "fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
            "displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
            "fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
            "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
          },
          "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
            "args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
              "a_key": "", # Properties of the object.
            },
            "name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
            "partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
              { # Partial argument value of the function call.
                "boolValue": True or False, # Optional. Represents a boolean value.
                "jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
                "nullValue": "A String", # Optional. Represents a null value.
                "numberValue": 3.14, # Optional. Represents a double value.
                "stringValue": "A String", # Optional. Represents a string value.
                "willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
              },
            ],
            "willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
          },
          "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
            "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
            "parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
              { # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
                "fileData": { # URI based data for function response. # URI based data.
                  "displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
                  "fileUri": "A String", # Required. URI.
                  "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
                },
                "inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
                  "data": "A String", # Required. Raw bytes.
                  "displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
                  "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
                },
              },
            ],
            "response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
              "a_key": "", # Properties of the object.
            },
          },
          "inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
            "data": "A String", # Required. The raw bytes of the data.
            "displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
            "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
          },
          "mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
            "level": "A String", # The tokenization quality used for given media.
          },
          "text": "A String", # Optional. The text content of the part.
          "thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
          "thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
          "videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
            "endOffset": "A String", # Optional. The end offset of the video.
            "fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
            "startOffset": "A String", # Optional. The start offset of the video.
          },
        },
      ],
      "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
    },
  ],
  "createTime": "A String", # Output only. Creation time of the cache entry.
  "displayName": "A String", # Optional. Immutable. The user-generated meaningful display name of the cached content.
  "encryptionSpec": { # Represents a customer-managed encryption key specification that can be applied to a Vertex AI resource. # Input only. Immutable. Customer-managed encryption key spec for a `CachedContent`. If set, this `CachedContent` and all its sub-resources will be secured by this key.
    "kmsKeyName": "A String", # Required. Resource name of the Cloud KMS key used to protect the resource. The Cloud KMS key must be in the same region as the resource. It must have the format `projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}`.
  },
  "expireTime": "A String", # Timestamp of when this resource is considered expired. This is *always* provided on output, regardless of what was sent on input.
  "model": "A String", # Immutable. The name of the `Model` to use for cached content. Currently, only the published Gemini base models are supported, in form of projects/{PROJECT}/locations/{LOCATION}/publishers/google/models/{MODEL}
  "name": "A String", # Immutable. Identifier. The server-generated resource name of the cached content Format: projects/{project}/locations/{location}/cachedContents/{cached_content}
  "systemInstruction": { # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message. # Optional. Input only. Immutable. Developer set system instruction. Currently, text only
    "parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
      { # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
        "codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
          "outcome": "A String", # Required. Outcome of the code execution.
          "output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
        },
        "executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
          "code": "A String", # Required. The code to be executed.
          "language": "A String", # Required. Programming language of the `code`.
        },
        "fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
          "displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
          "fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
          "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
        },
        "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
          "args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
            "a_key": "", # Properties of the object.
          },
          "name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
          "partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
            { # Partial argument value of the function call.
              "boolValue": True or False, # Optional. Represents a boolean value.
              "jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
              "nullValue": "A String", # Optional. Represents a null value.
              "numberValue": 3.14, # Optional. Represents a double value.
              "stringValue": "A String", # Optional. Represents a string value.
              "willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
            },
          ],
          "willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
        },
        "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
          "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
          "parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
            { # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
              "fileData": { # URI based data for function response. # URI based data.
                "displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
                "fileUri": "A String", # Required. URI.
                "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
              },
              "inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
                "data": "A String", # Required. Raw bytes.
                "displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
                "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
              },
            },
          ],
          "response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
            "a_key": "", # Properties of the object.
          },
        },
        "inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
          "data": "A String", # Required. The raw bytes of the data.
          "displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
          "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
        },
        "mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
          "level": "A String", # The tokenization quality used for given media.
        },
        "text": "A String", # Optional. The text content of the part.
        "thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
        "thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
        "videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
          "endOffset": "A String", # Optional. The end offset of the video.
          "fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
          "startOffset": "A String", # Optional. The start offset of the video.
        },
      },
    ],
    "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
  },
  "toolConfig": { # Tool config. This config is shared for all tools provided in the request. # Optional. Input only. Immutable. Tool config. This config is shared for all tools
    "functionCallingConfig": { # Function calling config. # Optional. Function calling config.
      "allowedFunctionNames": [ # Optional. Function names to call. Only set when the Mode is ANY. Function names should match [FunctionDeclaration.name]. With mode set to ANY, model will predict a function call from the set of function names provided.
        "A String",
      ],
      "mode": "A String", # Optional. Function calling mode.
      "streamFunctionCallArguments": True or False, # Optional. When set to true, arguments of a single function call will be streamed out in multiple parts/contents/responses. Partial parameter results will be returned in the [FunctionCall.partial_args] field.
    },
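    # For example (illustrative values; "get_weather" is a hypothetical function name), forcing the
    # model to pick from a fixed set of functions could look like:
    #   "functionCallingConfig": {"mode": "ANY", "allowedFunctionNames": ["get_weather"]},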
    "retrievalConfig": { # Retrieval config. # Optional. Retrieval config.
      "languageCode": "A String", # The language code of the user.
      "latLng": { # An object that represents a latitude/longitude pair. This is expressed as a pair of doubles to represent degrees latitude and degrees longitude. Unless specified otherwise, this object must conform to the WGS84 standard. Values must be within normalized ranges. # The location of the user.
        "latitude": 3.14, # The latitude in degrees. It must be in the range [-90.0, +90.0].
        "longitude": 3.14, # The longitude in degrees. It must be in the range [-180.0, +180.0].
      },
    },
  },
  "tools": [ # Optional. Input only. Immutable. A list of `Tools` the model may use to generate the next response
    { # Tool details that the model may use to generate response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g FunctionDeclaration, Retrieval or GoogleSearchRetrieval).
      "codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
      },
      "computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
        "environment": "A String", # Required. The environment being operated.
        "excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
          "A String",
        ],
      },
      "enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
        "blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
        "excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
          "A String",
        ],
      },
      "functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
        { # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
          "description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
          "name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
          "parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
            "additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
            "anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
              # Object with schema name: GoogleCloudAiplatformV1Schema
            ],
            "default": "", # Optional. Default value to use if the field is not specified.
            "defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
              "a_key": # Object with schema name: GoogleCloudAiplatformV1Schema
            },
            "description": "A String", # Optional. Description of the schema.
            "enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
              "A String",
            ],
            "example": "", # Optional. Example of an instance of this schema.
            "format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
            "items": # Object with schema name: GoogleCloudAiplatformV1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
            "maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
            "maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
            "maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
            "maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
            "minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
            "minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
            "minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
            "minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
            "nullable": True or False, # Optional. Indicates if the value of this field can be null.
            "pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
            "properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
              "a_key": # Object with schema name: GoogleCloudAiplatformV1Schema
            },
            "propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
              "A String",
            ],
            "ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
            "required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
              "A String",
            ],
            "title": "A String", # Optional. Title for the schema.
            "type": "A String", # Optional. Data type of the schema field.
          },
          "parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
          "response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
            "additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
            "anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
              # Object with schema name: GoogleCloudAiplatformV1Schema
            ],
            "default": "", # Optional. Default value to use if the field is not specified.
            "defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
              "a_key": # Object with schema name: GoogleCloudAiplatformV1Schema
            },
            "description": "A String", # Optional. Description of the schema.
            "enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
              "A String",
            ],
            "example": "", # Optional. Example of an instance of this schema.
            "format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
            "items": # Object with schema name: GoogleCloudAiplatformV1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
            "maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
            "maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
            "maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
            "maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
            "minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
            "minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
            "minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
            "minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
            "nullable": True or False, # Optional. Indicates if the value of this field can be null.
            "pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
            "properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
              "a_key": # Object with schema name: GoogleCloudAiplatformV1Schema
            },
            "propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
              "A String",
            ],
            "ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
            "required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
              "A String",
            ],
            "title": "A String", # Optional. Title for the schema.
            "type": "A String", # Optional. Data type of the schema field.
          },
          "responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
        },
      ],
      "googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
        "enableWidget": True or False, # Optional. If true, include the widget context token in the response.
      },
      "googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
        "blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
        "excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
          "A String",
        ],
      },
      "googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
        "dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
          "dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
          "mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
        },
      },
      "parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
        "apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
        "customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
          "a_key": "", # Properties of the object.
        },
      },
      "retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
        "disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
        "externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
          "apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
            "apiKeyConfig": { # The API secret. # The API secret.
              "apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
              "apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
            },
          },
          "apiSpec": "A String", # The API spec that the external API implements.
          "authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
            "apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
              "apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
              "apiKeyString": "A String", # Optional. The API key to be used in the request directly.
              "httpElementLocation": "A String", # Optional. The location of the API key.
              "name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
            },
            "authType": "A String", # Type of auth scheme.
            "googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
              "serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
            },
            "httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
              "credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
            },
            "oauthConfig": { # Config for user oauth. # Config for user oauth.
              "accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
              "serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
            },
            "oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
              "idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
              "serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
            },
          },
          "elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
            "index": "A String", # The ElasticSearch index to use.
            "numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
            "searchTemplate": "A String", # The ElasticSearch search template to use.
          },
          "endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
          "simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
          },
        },
        "vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
          "dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
            { # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
              "dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
              "filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
            },
          ],
          "datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
          "engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
          "filter": "A String", # Optional. Filter strings to be passed to the search API.
          "maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
        },
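        # For example (project, location, collection, and data store IDs are placeholders),
        # grounding on a single data store could look like:
        #   "vertexAiSearch": {"datastore": "projects/my-project/locations/global/collections/default_collection/dataStores/my-data-store"},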
        "vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
          "ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
            { # The definition of the Rag resource.
              "ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
              "ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
                "A String",
              ],
            },
          ],
          "ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
            "filter": { # Config for filters. # Optional. Config for filters.
              "metadataFilter": "A String", # Optional. String for metadata filtering.
              "vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
              "vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
            },
            "ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
              "llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
                "modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
              },
              "rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
                "modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
              },
            },
            "topK": 42, # Optional. The number of contexts to retrieve.
          },
          "similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
          "vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
        },
      },
      "urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
      },
    },
  ],
  "ttl": "A String", # Input only. The TTL for this resource. The expiration time is computed: now + TTL.
  "updateTime": "A String", # Output only. When the cache entry was last updated in UTC time.
  "usageMetadata": { # Metadata on the usage of the cached content. # Output only. Metadata on the usage of the cached content.
    "audioDurationSeconds": 42, # Duration of audio in seconds.
    "imageCount": 42, # Number of images.
    "textCount": 42, # Number of text characters.
    "totalTokenCount": 42, # Total number of tokens that the cached content consumes.
    "videoDurationSeconds": 42, # Duration of video in seconds.
  },
}

  updateMask: string, Required. The list of fields to update.
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format
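
For example, the following is a minimal sketch of issuing this call with the Python API client. The project, region, cached content ID, and the choice of `ttl` as the field to update are illustrative assumptions, not values taken from this reference.

import google.auth
from googleapiclient import discovery

# Application Default Credentials; any credentials object accepted by google-auth works here.
credentials, _ = google.auth.default()

# Vertex AI is typically addressed through a regional endpoint; the region below is a placeholder.
service = discovery.build(
    "aiplatform",
    "v1",
    credentials=credentials,
    client_options={"api_endpoint": "https://us-central1-aiplatform.googleapis.com"},
)

name = "projects/my-project/locations/us-central1/cachedContents/1234567890"  # placeholder
body = {"ttl": "86400s"}  # Duration string: keep the cache for another day.

response = (
    service.projects()
    .locations()
    .cachedContents()
    .patch(name=name, body=body, updateMask="ttl")
    .execute()
)
print(response["expireTime"])  # The server recomputes the expiration time from the new TTL.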

Returns:
  An object of the form:

    { # A resource used in LLM queries for users to explicitly specify what to cache and how to cache.
  "contents": [ # Optional. Input only. Immutable. The content to cache
    { # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message.
      "parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
        { # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
          "codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
            "outcome": "A String", # Required. Outcome of the code execution.
            "output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
          },
          "executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
            "code": "A String", # Required. The code to be executed.
            "language": "A String", # Required. Programming language of the `code`.
          },
          "fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
            "displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
            "fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
            "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
          },
          "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
            "args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
              "a_key": "", # Properties of the object.
            },
            "name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
            "partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
              { # Partial argument value of the function call.
                "boolValue": True or False, # Optional. Represents a boolean value.
                "jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
                "nullValue": "A String", # Optional. Represents a null value.
                "numberValue": 3.14, # Optional. Represents a double value.
                "stringValue": "A String", # Optional. Represents a string value.
                "willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
              },
            ],
            "willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
          },
          "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
            "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
            "parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
              { # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
                "fileData": { # URI based data for function response. # URI based data.
                  "displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
                  "fileUri": "A String", # Required. URI.
                  "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
                },
                "inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
                  "data": "A String", # Required. Raw bytes.
                  "displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
                  "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
                },
              },
            ],
            "response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
              "a_key": "", # Properties of the object.
            },
          },
          "inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
            "data": "A String", # Required. The raw bytes of the data.
            "displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
            "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
          },
          "mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
            "level": "A String", # The tokenization quality used for given media.
          },
          "text": "A String", # Optional. The text content of the part.
          "thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
          "thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
          "videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
            "endOffset": "A String", # Optional. The end offset of the video.
            "fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
            "startOffset": "A String", # Optional. The start offset of the video.
          },
        },
      ],
      "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
    },
  ],
  "createTime": "A String", # Output only. Creation time of the cache entry.
  "displayName": "A String", # Optional. Immutable. The user-generated meaningful display name of the cached content.
  "encryptionSpec": { # Represents a customer-managed encryption key specification that can be applied to a Vertex AI resource. # Input only. Immutable. Customer-managed encryption key spec for a `CachedContent`. If set, this `CachedContent` and all its sub-resources will be secured by this key.
    "kmsKeyName": "A String", # Required. Resource name of the Cloud KMS key used to protect the resource. The Cloud KMS key must be in the same region as the resource. It must have the format `projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}`.
  },
  "expireTime": "A String", # Timestamp of when this resource is considered expired. This is *always* provided on output, regardless of what was sent on input.
  "model": "A String", # Immutable. The name of the `Model` to use for cached content. Currently, only the published Gemini base models are supported, in form of projects/{PROJECT}/locations/{LOCATION}/publishers/google/models/{MODEL}
  "name": "A String", # Immutable. Identifier. The server-generated resource name of the cached content Format: projects/{project}/locations/{location}/cachedContents/{cached_content}
  "systemInstruction": { # The structured data content of a message. A Content message contains a `role` field, which indicates the producer of the content, and a `parts` field, which contains the multi-part data of the message. # Optional. Input only. Immutable. Developer set system instruction. Currently, text only
    "parts": [ # Required. A list of Part objects that make up a single message. Parts of a message can have different MIME types. A Content message must have at least one Part.
      { # A datatype containing media that is part of a multi-part Content message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. For media types that are not text, `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
        "codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. The result of executing the ExecutableCode.
          "outcome": "A String", # Required. Outcome of the code execution.
          "output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
        },
        "executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is intended to be executed.
          "code": "A String", # Required. The code to be executed.
          "language": "A String", # Required. Programming language of the `code`.
        },
        "fileData": { # URI-based data. A FileData message contains a URI pointing to data of a specific media type. It is used to represent images, audio, and video stored in Google Cloud Storage. # Optional. The URI-based data of the part. This can be used to include files from Google Cloud Storage.
          "displayName": "A String", # Optional. The display name of the file. Used to provide a label or filename to distinguish files. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
          "fileUri": "A String", # Required. The URI of the file in Google Cloud Storage.
          "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
        },
        "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted function call returned from the model. This contains the name of the function to call and the arguments to pass to the function.
          "args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
            "a_key": "", # Properties of the object.
          },
          "name": "A String", # Optional. The name of the function to call. Matches [FunctionDeclaration.name].
          "partialArgs": [ # Optional. The partial argument value of the function call. If provided, represents the arguments/fields that are streamed incrementally.
            { # Partial argument value of the function call.
              "boolValue": True or False, # Optional. Represents a boolean value.
              "jsonPath": "A String", # Required. A JSON Path (RFC 9535) to the argument being streamed. https://datatracker.ietf.org/doc/html/rfc9535. e.g. "$.foo.bar[0].data".
              "nullValue": "A String", # Optional. Represents a null value.
              "numberValue": 3.14, # Optional. Represents a double value.
              "stringValue": "A String", # Optional. Represents a string value.
              "willContinue": True or False, # Optional. Whether this is not the last part of the same json_path. If true, another PartialArg message for the current json_path is expected to follow.
            },
          ],
          "willContinue": True or False, # Optional. Whether this is the last part of the FunctionCall. If true, another partial message for the current FunctionCall is expected to follow.
        },
        "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result of a function call. This is used to provide the model with the result of a function call that it predicted.
          "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
          "parts": [ # Optional. Ordered `Parts` that constitute a function response. Parts may have different IANA MIME types.
            { # A datatype containing media that is part of a `FunctionResponse` message. A `FunctionResponsePart` consists of data which has an associated datatype. A `FunctionResponsePart` can only contain one of the accepted types in `FunctionResponsePart.data`. A `FunctionResponsePart` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.
              "fileData": { # URI based data for function response. # URI based data.
                "displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
                "fileUri": "A String", # Required. URI.
                "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
              },
              "inlineData": { # Raw media bytes for function response. Text should not be sent as raw bytes, use the 'text' field. # Inline media bytes.
                "data": "A String", # Required. Raw bytes.
                "displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
                "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
              },
            },
          ],
          "response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
            "a_key": "", # Properties of the object.
          },
        },
        "inlineData": { # A content blob. A Blob contains data of a specific media type. It is used to represent images, audio, and video. # Optional. The inline data content of the part. This can be used to include images, audio, or video in a request.
          "data": "A String", # Required. The raw bytes of the data.
          "displayName": "A String", # Optional. The display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in `PromptMessage` for prompt management. It is used in the Gemini calls only when server-side tools (`code_execution`, `google_search`, and `url_context`) are enabled.
          "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
        },
        "mediaResolution": { # per part media resolution. Media resolution for the input media. # per part media resolution. Media resolution for the input media.
          "level": "A String", # The tokenization quality used for given media.
        },
        "text": "A String", # Optional. The text content of the part.
        "thought": True or False, # Optional. Indicates whether the `part` represents the model's thought process or reasoning.
        "thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
        "videoMetadata": { # Provides metadata for a video, including the start and end offsets for clipping and the frame rate. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
          "endOffset": "A String", # Optional. The end offset of the video.
          "fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value is 1.0. The valid range is (0.0, 24.0].
          "startOffset": "A String", # Optional. The start offset of the video.
        },
      },
    ],
    "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. If not set, the service will default to 'user'.
  },
  "toolConfig": { # Tool config. This config is shared for all tools provided in the request. # Optional. Input only. Immutable. Tool config. This config is shared for all tools
    "functionCallingConfig": { # Function calling config. # Optional. Function calling config.
      "allowedFunctionNames": [ # Optional. Function names to call. Only set when the Mode is ANY. Function names should match [FunctionDeclaration.name]. With mode set to ANY, model will predict a function call from the set of function names provided.
        "A String",
      ],
      "mode": "A String", # Optional. Function calling mode.
      "streamFunctionCallArguments": True or False, # Optional. When set to true, arguments of a single function call will be streamed out in multiple parts/contents/responses. Partial parameter results will be returned in the [FunctionCall.partial_args] field.
    },
    "retrievalConfig": { # Retrieval config. # Optional. Retrieval config.
      "languageCode": "A String", # The language code of the user.
      "latLng": { # An object that represents a latitude/longitude pair. This is expressed as a pair of doubles to represent degrees latitude and degrees longitude. Unless specified otherwise, this object must conform to the WGS84 standard. Values must be within normalized ranges. # The location of the user.
        "latitude": 3.14, # The latitude in degrees. It must be in the range [-90.0, +90.0].
        "longitude": 3.14, # The longitude in degrees. It must be in the range [-180.0, +180.0].
      },
    },
  },
  "tools": [ # Optional. Input only. Immutable. A list of `Tools` the model may use to generate the next response
    { # Tool details that the model may use to generate response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g FunctionDeclaration, Retrieval or GoogleSearchRetrieval).
      "codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
      },
      "computerUse": { # Tool to support computer use. # Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
        "environment": "A String", # Required. The environment being operated.
        "excludedPredefinedFunctions": [ # Optional. By default, [predefined functions](https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use#supported-actions) are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes: 1. Using a more restricted / different action space. 2. Improving the definitions / instructions of predefined functions.
          "A String",
        ],
      },
      "enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
        "blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
        "excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
          "A String",
        ],
      },
      "functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
        { # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
          "description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
          "name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots, colons and dashes, with a maximum length of 64.
          "parameters": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
            "additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
            "anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
              # Object with schema name: GoogleCloudAiplatformV1Schema
            ],
            "default": "", # Optional. Default value to use if the field is not specified.
            "defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
              "a_key": # Object with schema name: GoogleCloudAiplatformV1Schema
            },
            "description": "A String", # Optional. Description of the schema.
            "enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
              "A String",
            ],
            "example": "", # Optional. Example of an instance of this schema.
            "format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
            "items": # Object with schema name: GoogleCloudAiplatformV1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
            "maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
            "maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
            "maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
            "maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
            "minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
            "minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
            "minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
            "minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
            "nullable": True or False, # Optional. Indicates if the value of this field can be null.
            "pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
            "properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
              "a_key": # Object with schema name: GoogleCloudAiplatformV1Schema
            },
            "propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
              "A String",
            ],
            "ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
            "required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
              "A String",
            ],
            "title": "A String", # Optional. Title for the schema.
            "type": "A String", # Optional. Data type of the schema field.
          },
          "parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
          "response": { # Defines the schema of input and output data. This is a subset of the [OpenAPI 3.0 Schema Object](https://spec.openapis.org/oas/v3.0.3#schema-object). # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
            "additionalProperties": "", # Optional. If `type` is `OBJECT`, specifies how to handle properties not defined in `properties`. If it is a boolean `false`, no additional properties are allowed. If it is a schema, additional properties are allowed if they conform to the schema.
            "anyOf": [ # Optional. The instance must be valid against any (one or more) of the subschemas listed in `any_of`.
              # Object with schema name: GoogleCloudAiplatformV1Schema
            ],
            "default": "", # Optional. Default value to use if the field is not specified.
            "defs": { # Optional. `defs` provides a map of schema definitions that can be reused by `ref` elsewhere in the schema. Only allowed at root level of the schema.
              "a_key": # Object with schema name: GoogleCloudAiplatformV1Schema
            },
            "description": "A String", # Optional. Description of the schema.
            "enum": [ # Optional. Possible values of the field. This field can be used to restrict a value to a fixed set of values. To mark a field as an enum, set `format` to `enum` and provide the list of possible values in `enum`. For example: 1. To define directions: `{type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}` 2. To define apartment numbers: `{type:INTEGER, format:enum, enum:["101", "201", "301"]}`
              "A String",
            ],
            "example": "", # Optional. Example of an instance of this schema.
            "format": "A String", # Optional. The format of the data. For `NUMBER` type, format can be `float` or `double`. For `INTEGER` type, format can be `int32` or `int64`. For `STRING` type, format can be `email`, `byte`, `date`, `date-time`, `password`, and other formats to further refine the data type.
            "items": # Object with schema name: GoogleCloudAiplatformV1Schema # Optional. If type is `ARRAY`, `items` specifies the schema of elements in the array.
            "maxItems": "A String", # Optional. If type is `ARRAY`, `max_items` specifies the maximum number of items in an array.
            "maxLength": "A String", # Optional. If type is `STRING`, `max_length` specifies the maximum length of the string.
            "maxProperties": "A String", # Optional. If type is `OBJECT`, `max_properties` specifies the maximum number of properties that can be provided.
            "maximum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `maximum` specifies the maximum allowed value.
            "minItems": "A String", # Optional. If type is `ARRAY`, `min_items` specifies the minimum number of items in an array.
            "minLength": "A String", # Optional. If type is `STRING`, `min_length` specifies the minimum length of the string.
            "minProperties": "A String", # Optional. If type is `OBJECT`, `min_properties` specifies the minimum number of properties that can be provided.
            "minimum": 3.14, # Optional. If type is `INTEGER` or `NUMBER`, `minimum` specifies the minimum allowed value.
            "nullable": True or False, # Optional. Indicates if the value of this field can be null.
            "pattern": "A String", # Optional. If type is `STRING`, `pattern` specifies a regular expression that the string must match.
            "properties": { # Optional. If type is `OBJECT`, `properties` is a map of property names to schema definitions for each property of the object.
              "a_key": # Object with schema name: GoogleCloudAiplatformV1Schema
            },
            "propertyOrdering": [ # Optional. Order of properties displayed or used where order matters. This is not a standard field in OpenAPI specification, but can be used to control the order of properties.
              "A String",
            ],
            "ref": "A String", # Optional. Allows referencing another schema definition to use in place of this schema. The value must be a valid reference to a schema in `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
            "required": [ # Optional. If type is `OBJECT`, `required` lists the names of properties that must be present.
              "A String",
            ],
            "title": "A String", # Optional. Title for the schema.
            "type": "A String", # Optional. Data type of the schema field.
          },
          "responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
        },
      ],
      "googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
        "enableWidget": True or False, # Optional. If true, include the widget context token in the response.
      },
      "googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
        "blockingConfidence": "A String", # Optional. Sites with confidence level chosen & above this value will be blocked from the search results.
        "excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
          "A String",
        ],
      },
      "googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. Specialized retrieval tool that is powered by Google Search.
        "dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
          "dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
          "mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
        },
      },
      "parallelAiSearch": { # ParallelAiSearch tool type. A tool that uses the Parallel.ai search engine for grounding. # Optional. If specified, Vertex AI will use Parallel.ai to search for information to answer user queries. The search results will be grounded on Parallel.ai and presented to the model for response generation
        "apiKey": "A String", # Optional. The API key for ParallelAiSearch. If an API key is not provided, the system will attempt to verify access by checking for an active Parallel.ai subscription through the Google Cloud Marketplace. See https://docs.parallel.ai/search/search-quickstart for more details.
        "customConfigs": { # Optional. Custom configs for ParallelAiSearch. This field can be used to pass any parameter from the Parallel.ai Search API. See the Parallel.ai documentation for the full list of available parameters and their usage: https://docs.parallel.ai/api-reference/search-beta/search Currently only `source_policy`, `excerpts`, `max_results`, `mode`, `fetch_policy` can be set via this field. For example: { "source_policy": { "include_domains": ["google.com", "wikipedia.org"], "exclude_domains": ["example.com"] }, "fetch_policy": { "max_age_seconds": 3600 } }
          "a_key": "", # Properties of the object.
        },
      },
      "retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
        "disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
        "externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
          "apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
            "apiKeyConfig": { # The API secret. # The API secret.
              "apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
              "apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
            },
          },
          "apiSpec": "A String", # The API spec that the external API implements.
          "authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
            "apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
              "apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
              "apiKeyString": "A String", # Optional. The API key to be used in the request directly.
              "httpElementLocation": "A String", # Optional. The location of the API key.
              "name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
            },
            "authType": "A String", # Type of auth scheme.
            "googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
              "serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
            },
            "httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
              "credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
            },
            "oauthConfig": { # Config for user oauth. # Config for user oauth.
              "accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
              "serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
            },
            "oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
              "idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
              "serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
            },
          },
          "elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
            "index": "A String", # The ElasticSearch index to use.
            "numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
            "searchTemplate": "A String", # The ElasticSearch search template to use.
          },
          "endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
          "simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
          },
        },
        "vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
          "dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
            { # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
              "dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
              "filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
            },
          ],
          "datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
          "engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
          "filter": "A String", # Optional. Filter strings to be passed to the search API.
          "maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
        },
        "vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
          "ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
            { # The definition of the Rag resource.
              "ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
              "ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
                "A String",
              ],
            },
          ],
          "ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
            "filter": { # Config for filters. # Optional. Config for filters.
              "metadataFilter": "A String", # Optional. String for metadata filtering.
              "vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
              "vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
            },
            "ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
              "llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
                "modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
              },
              "rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
                "modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
              },
            },
            "topK": 42, # Optional. The number of contexts to retrieve.
          },
          "similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
          "vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
        },
      },
      "urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
      },
    },
  ],
  "ttl": "A String", # Input only. The TTL for this resource. The expiration time is computed: now + TTL.
  "updateTime": "A String", # Output only. When the cache entry was last updated in UTC time.
  "usageMetadata": { # Metadata on the usage of the cached content. # Output only. Metadata on the usage of the cached content.
    "audioDurationSeconds": 42, # Duration of audio in seconds.
    "imageCount": 42, # Number of images.
    "textCount": 42, # Number of text characters.
    "totalTokenCount": 42, # Total number of tokens that the cached content consumes.
    "videoDurationSeconds": 42, # Duration of video in seconds.
  },
}
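
As a minimal sketch of how the `tools` and `ttl` fields above can be combined into a `create()` call, the snippet below builds a cached content body containing a single function declaration and sends it with this resource's `create()` method. The project, location, model name, and the `get_order_status` function are placeholder assumptions, not values defined by this API, and the regional `api_endpoint` override is an assumption about how the caller constructs the discovery client.

  from googleapiclient import discovery

  # Placeholder values -- adjust to your own project; these are assumptions, not fixed names.
  PROJECT = "my-project"
  LOCATION = "us-central1"

  # Assumes the Vertex AI regional endpoint is supplied via client_options.
  client = discovery.build(
      "aiplatform",
      "v1",
      client_options={"api_endpoint": f"https://{LOCATION}-aiplatform.googleapis.com"},
  )

  body = {
      # "model" and "contents" are documented earlier in this schema; the values here are illustrative.
      "model": f"projects/{PROJECT}/locations/{LOCATION}/publishers/google/models/gemini-2.0-flash-001",
      "contents": [
          {"role": "user", "parts": [{"text": "Long reference document to cache ..."}]},
      ],
      "tools": [
          {
              "functionDeclarations": [
                  {
                      # Hypothetical function, used only to illustrate the schema above.
                      "name": "get_order_status",
                      "description": "Looks up the status of a customer order.",
                      "parameters": {
                          "type": "OBJECT",
                          "properties": {"order_id": {"type": "STRING"}},
                          "required": ["order_id"],
                      },
                  },
              ],
          },
      ],
      "ttl": "3600s",  # Duration string: the cache entry expires one hour after creation.
  }

  response = (
      client.projects()
      .locations()
      .cachedContents()
      .create(parent=f"projects/{PROJECT}/locations/{LOCATION}", body=body)
      .execute()
  )
  # The response echoes the CachedContent resource, including usageMetadata.totalTokenCount.
  print(response.get("name"), response.get("usageMetadata", {}).get("totalTokenCount"))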
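
For the `retrieval` tool documented above, a sketch of the two Vertex-managed data source variants is shown below, assuming placeholder resource IDs: a Vertex AI Search configuration (where `datastore` and `engine` are mutually exclusive) and a Vertex RAG Store configuration. Either dictionary can be placed in the `tools` list of the body from the previous sketch.

  # Grounding via Vertex AI Search; set either "datastore" or "engine", not both.
  vertex_ai_search_tool = {
      "retrieval": {
          "vertexAiSearch": {
              "datastore": (
                  "projects/my-project/locations/global/"
                  "collections/default_collection/dataStores/my-datastore"
              ),
              "maxResults": 10,
          },
      },
  }

  # Grounding via a Vertex RAG Store corpus (the corpus resource name is a placeholder).
  vertex_rag_tool = {
      "retrieval": {
          "vertexRagStore": {
              "ragResources": [
                  {"ragCorpus": "projects/my-project/locations/us-central1/ragCorpora/my-corpus"},
              ],
              "ragRetrievalConfig": {"topK": 10},
          },
      },
  }

  # Usage: body["tools"] = [vertex_rag_tool]  # or [vertex_ai_search_tool]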