Vertex AI API: projects.locations.indexEndpoints

Instance Methods

operations()

Returns the operations Resource.

close()

Close httplib2 connections.

create(parent, body=None, x__xgafv=None)

Creates an IndexEndpoint.

delete(name, x__xgafv=None)

Deletes an IndexEndpoint.

deployIndex(indexEndpoint, body=None, x__xgafv=None)

Deploys an Index into this IndexEndpoint, creating a DeployedIndex within it. Only non-empty Indexes can be deployed.

findNeighbors(indexEndpoint, body=None, x__xgafv=None)

Finds the nearest neighbors of each vector within the request.

get(name, x__xgafv=None)

Gets an IndexEndpoint.

list(parent, filter=None, pageSize=None, pageToken=None, readMask=None, x__xgafv=None)

Lists IndexEndpoints in a Location.

list_next()

Retrieves the next page of results.

mutateDeployedIndex(indexEndpoint, body=None, x__xgafv=None)

Updates an existing DeployedIndex under an IndexEndpoint.

patch(name, body=None, updateMask=None, x__xgafv=None)

Updates an IndexEndpoint.

readIndexDatapoints(indexEndpoint, body=None, x__xgafv=None)

Reads the datapoints/vectors of the given IDs. A maximum of 1000 datapoints can be retrieved in a batch.

undeployIndex(indexEndpoint, body=None, x__xgafv=None)

Undeploys an Index from an IndexEndpoint, removing a DeployedIndex from it, and freeing all resources it's using.
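
A minimal sketch (not an official sample) of building the client and obtaining this resource with google-api-python-client follows; the project ID, region, and the use of a regional api_endpoint are assumptions to adapt to your setup.

from googleapiclient.discovery import build

# Vertex AI is regional; a regional API endpoint is typically required.
service = build(
    "aiplatform", "v1",
    client_options={"api_endpoint": "https://us-central1-aiplatform.googleapis.com"},
)
index_endpoints = service.projects().locations().indexEndpoints()

# Example: list IndexEndpoints in a Location (placeholder project/region).
response = index_endpoints.list(
    parent="projects/my-project/locations/us-central1"
).execute()
for endpoint in response.get("indexEndpoints", []):
    print(endpoint["name"], endpoint.get("displayName"))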

Method Details

close()
Close httplib2 connections.
create(parent, body=None, x__xgafv=None)
Creates an IndexEndpoint.

Args:
  parent: string, Required. The resource name of the Location to create the IndexEndpoint in. Format: `projects/{project}/locations/{location}` (required)
  body: object, The request body.
    The object takes the form of:

{ # An IndexEndpoint is a resource into which Indexes are deployed. An IndexEndpoint can have multiple DeployedIndexes.
  "createTime": "A String", # Output only. Timestamp when this IndexEndpoint was created.
  "deployedIndexes": [ # Output only. The indexes deployed in this endpoint.
    { # A deployment of an Index. IndexEndpoints contain one or more DeployedIndexes.
      "automaticResources": { # A description of resources that to large degree are decided by Vertex AI, and require only a modest additional configuration. Each Model supporting these resources documents its specific guidelines. # Optional. A description of resources that the DeployedIndex uses, which to large degree are decided by Vertex AI, and optionally allows only a modest additional configuration. If min_replica_count is not set, the default value is 2 (we don't provide SLA when min_replica_count=1). If max_replica_count is not set, the default value is min_replica_count. The max allowed replica count is 1000.
        "maxReplicaCount": 42, # Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, a no upper bound for scaling under heavy traffic will be assume, though Vertex AI may be unable to scale beyond certain replica number.
        "minReplicaCount": 42, # Immutable. The minimum number of replicas this DeployedModel will be always deployed on. If traffic against it increases, it may dynamically be deployed onto more replicas up to max_replica_count, and as traffic decreases, some of these extra replicas may be freed. If the requested value is too large, the deployment will error.
      },
      "createTime": "A String", # Output only. Timestamp when the DeployedIndex was created.
      "dedicatedResources": { # A description of resources that are dedicated to a DeployedModel, and that need a higher degree of manual configuration. # Optional. A description of resources that are dedicated to the DeployedIndex, and that need a higher degree of manual configuration. The field min_replica_count must be set to a value strictly greater than 0, or else validation will fail. We don't provide SLA when min_replica_count=1. If max_replica_count is not set, the default value is min_replica_count. The max allowed replica count is 1000. Available machine types for SMALL shard: e2-standard-2 and all machine types available for MEDIUM and LARGE shard. Available machine types for MEDIUM shard: e2-standard-16 and all machine types available for LARGE shard. Available machine types for LARGE shard: e2-highmem-16, n2d-standard-32. n1-standard-16 and n1-standard-32 are still available, but we recommend e2-standard-16 and e2-highmem-16 for cost efficiency.
        "autoscalingMetricSpecs": [ # Immutable. The metric specifications that overrides a resource utilization metric (CPU utilization, accelerator's duty cycle, and so on) target value (default to 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, the autoscaling will be based on both CPU utilization and accelerator's duty cycle metrics and scale up when either metrics exceeds its target value while scale down if both metrics are under their target value. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, the autoscaling will be based on CPU utilization metric only with default target value 60 if not explicitly set. For example, in the case of Online Prediction, if you want to override target CPU utilization to 80, you should set autoscaling_metric_specs.metric_name to `aiplatform.googleapis.com/prediction/online/cpu/utilization` and autoscaling_metric_specs.target to `80`.
          { # The metric specification that defines the target resource utilization (CPU utilization, accelerator's duty cycle, and so on) for calculating the desired replica count.
            "metricName": "A String", # Required. The resource metric name. Supported metrics: * For Online Prediction: * `aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle` * `aiplatform.googleapis.com/prediction/online/cpu/utilization`
            "target": 42, # The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided.
          },
        ],
        "machineSpec": { # Specification of a single machine. # Required. Immutable. The specification of a single machine used by the prediction.
          "acceleratorCount": 42, # The number of accelerators to attach to the machine.
          "acceleratorType": "A String", # Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
          "machineType": "A String", # Immutable. The type of the machine. See the [list of machine types supported for prediction](https://cloud.google.com/vertex-ai/docs/predictions/configure-compute#machine-types) See the [list of machine types supported for custom training](https://cloud.google.com/vertex-ai/docs/training/configure-compute#machine-types). For DeployedModel this field is optional, and the default value is `n1-standard-2`. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
          "reservationAffinity": { # A ReservationAffinity can be used to configure a Vertex AI resource (e.g., a DeployedModel) to draw its Compute Engine resources from a Shared Reservation, or exclusively from on-demand capacity. # Optional. Immutable. Configuration controlling how this resource pool consumes reservation.
            "key": "A String", # Optional. Corresponds to the label key of a reservation resource. To target a SPECIFIC_RESERVATION by name, use `compute.googleapis.com/reservation-name` as the key and specify the name of your reservation as its value.
            "reservationAffinityType": "A String", # Required. Specifies the reservation affinity type.
            "values": [ # Optional. Corresponds to the label values of a reservation resource. This must be the full resource name of the reservation.
              "A String",
            ],
          },
          "tpuTopology": "A String", # Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
        },
        "maxReplicaCount": 42, # Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, will use min_replica_count as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).
        "minReplicaCount": 42, # Required. Immutable. The minimum number of machine replicas this DeployedModel will be always deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.
        "requiredReplicaCount": 42, # Optional. Number of required available replicas for the deployment to succeed. This field is only needed when partial model deployment/mutation is desired. If set, the model deploy/mutate operation will succeed once available_replica_count reaches required_replica_count, and the rest of the replicas will be retried. If not set, the default required_replica_count will be min_replica_count.
        "spot": True or False, # Optional. If true, schedule the deployment workload on [spot VMs](https://cloud.google.com/kubernetes-engine/docs/concepts/spot-vms).
      },
      "deployedIndexAuthConfig": { # Used to set up the auth on the DeployedIndex's private endpoint. # Optional. If set, the authentication is enabled for the private endpoint.
        "authProvider": { # Configuration for an authentication provider, including support for [JSON Web Token (JWT)](https://tools.ietf.org/html/draft-ietf-oauth-json-web-token-32). # Defines the authentication provider that the DeployedIndex uses.
          "allowedIssuers": [ # A list of allowed JWT issuers. Each entry must be a valid Google service account, in the following format: `service-account-name@project-id.iam.gserviceaccount.com`
            "A String",
          ],
          "audiences": [ # The list of JWT [audiences](https://tools.ietf.org/html/draft-ietf-oauth-json-web-token-32#section-4.1.3). that are allowed to access. A JWT containing any of these audiences will be accepted.
            "A String",
          ],
        },
      },
      "deploymentGroup": "A String", # Optional. The deployment group can be no longer than 64 characters (eg: 'test', 'prod'). If not set, we will use the 'default' deployment group. Creating `deployment_groups` with `reserved_ip_ranges` is a recommended practice when the peered network has multiple peering ranges. This creates your deployments from predictable IP spaces for easier traffic administration. Also, one deployment_group (except 'default') can only be used with the same reserved_ip_ranges which means if the deployment_group has been used with reserved_ip_ranges: [a, b, c], using it with [a, b] or [d, e] is disallowed. Note: we only support up to 5 deployment groups(not including 'default').
      "displayName": "A String", # The display name of the DeployedIndex. If not provided upon creation, the Index's display_name is used.
      "enableAccessLogging": True or False, # Optional. If true, private endpoint's access logs are sent to Cloud Logging. These logs are like standard server access logs, containing information like timestamp and latency for each MatchRequest. Note that logs may incur a cost, especially if the deployed index receives a high queries per second rate (QPS). Estimate your costs before enabling this option.
      "id": "A String", # Required. The user specified ID of the DeployedIndex. The ID can be up to 128 characters long and must start with a letter and only contain letters, numbers, and underscores. The ID must be unique within the project it is created in.
      "index": "A String", # Required. The name of the Index this is the deployment of. We may refer to this Index as the DeployedIndex's "original" Index.
      "indexSyncTime": "A String", # Output only. The DeployedIndex may depend on various data on its original Index. Additionally when certain changes to the original Index are being done (e.g. when what the Index contains is being changed) the DeployedIndex may be asynchronously updated in the background to reflect these changes. If this timestamp's value is at least the Index.update_time of the original Index, it means that this DeployedIndex and the original Index are in sync. If this timestamp is older, then to see which updates this DeployedIndex already contains (and which it does not), one must list the operations that are running on the original Index. Only the successfully completed Operations with update_time equal or before this sync time are contained in this DeployedIndex.
      "privateEndpoints": { # IndexPrivateEndpoints proto is used to provide paths for users to send requests via private endpoints (e.g. private service access, private service connect). To send request via private service access, use match_grpc_address. To send request via private service connect, use service_attachment. # Output only. Provides paths for users to send requests directly to the deployed index services running on Cloud via private services access. This field is populated if network is configured.
        "matchGrpcAddress": "A String", # Output only. The ip address used to send match gRPC requests.
        "pscAutomatedEndpoints": [ # Output only. PscAutomatedEndpoints is populated if private service connect is enabled if PscAutomatedConfig is set.
          { # PscAutomatedEndpoints defines the output of the forwarding rule automatically created by each PscAutomationConfig.
            "matchAddress": "A String", # Ip Address created by the automated forwarding rule.
            "network": "A String", # Corresponding network in pscAutomationConfigs.
            "projectId": "A String", # Corresponding project_id in pscAutomationConfigs
          },
        ],
        "serviceAttachment": "A String", # Output only. The name of the service attachment resource. Populated if private service connect is enabled.
      },
      "pscAutomationConfigs": [ # Optional. If set for PSC deployed index, PSC connection will be automatically created after deployment is done and the endpoint information is populated in private_endpoints.psc_automated_endpoints.
        { # PSC config that is used to automatically create forwarding rule via ServiceConnectionMap.
          "network": "A String", # Required. The full name of the Google Compute Engine [network](https://cloud.google.com/compute/docs/networks-and-firewalls#networks). [Format](https://cloud.google.com/compute/docs/reference/rest/v1/networks/insert): `projects/{project}/global/networks/{network}`. Where {project} is a project number, as in '12345', and {network} is network name.
          "projectId": "A String", # Required. Project id used to create forwarding rule.
        },
      ],
      "reservedIpRanges": [ # Optional. A list of reserved ip ranges under the VPC network that can be used for this DeployedIndex. If set, we will deploy the index within the provided ip ranges. Otherwise, the index might be deployed to any ip ranges under the provided VPC network. The value should be the name of the address (https://cloud.google.com/compute/docs/reference/rest/v1/addresses) Example: ['vertex-ai-ip-range']. For more information about subnets and network IP ranges, please see https://cloud.google.com/vpc/docs/subnets#manually_created_subnet_ip_ranges.
        "A String",
      ],
    },
  ],
  "description": "A String", # The description of the IndexEndpoint.
  "displayName": "A String", # Required. The display name of the IndexEndpoint. The name can be up to 128 characters long and can consist of any UTF-8 characters.
  "enablePrivateServiceConnect": True or False, # Optional. Deprecated: If true, expose the IndexEndpoint via private service connect. Only one of the fields, network or enable_private_service_connect, can be set.
  "encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Immutable. Customer-managed encryption key spec for an IndexEndpoint. If set, this IndexEndpoint and all sub-resources of this IndexEndpoint will be secured by this key.
    "kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created.
  },
  "etag": "A String", # Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
  "labels": { # The labels with user-defined metadata to organize your IndexEndpoints. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
    "a_key": "A String",
  },
  "name": "A String", # Output only. The resource name of the IndexEndpoint.
  "network": "A String", # Optional. The full name of the Google Compute Engine [network](https://cloud.google.com/compute/docs/networks-and-firewalls#networks) to which the IndexEndpoint should be peered. Private services access must already be configured for the network. If left unspecified, the Endpoint is not peered with any network. network and private_service_connect_config are mutually exclusive. [Format](https://cloud.google.com/compute/docs/reference/rest/v1/networks/insert): `projects/{project}/global/networks/{network}`. Where {project} is a project number, as in '12345', and {network} is network name.
  "privateServiceConnectConfig": { # Represents configuration for private service connect. # Optional. Configuration for private service connect. network and private_service_connect_config are mutually exclusive.
    "enablePrivateServiceConnect": True or False, # Required. If true, expose the IndexEndpoint via private service connect.
    "projectAllowlist": [ # A list of Projects from which the forwarding rule will target the service attachment.
      "A String",
    ],
    "serviceAttachment": "A String", # Output only. The name of the generated service attachment resource. This is only populated if the endpoint is deployed with PrivateServiceConnect.
  },
  "publicEndpointDomainName": "A String", # Output only. If public_endpoint_enabled is true, this field will be populated with the domain name to use for this index endpoint.
  "publicEndpointEnabled": True or False, # Optional. If true, the deployed index will be accessible through public endpoint.
  "satisfiesPzi": True or False, # Output only. Reserved for future use.
  "satisfiesPzs": True or False, # Output only. Reserved for future use.
  "updateTime": "A String", # Output only. Timestamp when this IndexEndpoint was last updated. This timestamp is not updated when the endpoint's DeployedIndexes are updated, e.g. due to updates of the original Indexes they are the deployments of.
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # This resource represents a long-running operation that is the result of a network API call.
  "done": True or False, # If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available.
  "error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # The error result of the operation in case of failure or cancellation.
    "code": 42, # The status code, which should be an enum value of google.rpc.Code.
    "details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
      {
        "a_key": "", # Properties of the object. Contains field @type with type URL.
      },
    ],
    "message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
  },
  "metadata": { # Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
    "a_key": "", # Properties of the object. Contains field @type with type URL.
  },
  "name": "A String", # The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`.
  "response": { # The normal, successful response of the operation. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
    "a_key": "", # Properties of the object. Contains field @type with type URL.
  },
}
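
As an illustrative sketch only (resource names and field values are placeholders), creating an IndexEndpoint and polling the returned long-running operation might look like the following, reusing the index_endpoints collection built above and assuming the operation name is served by the nested operations() resource:

import time

operation = index_endpoints.create(
    parent="projects/my-project/locations/us-central1",
    body={
        "displayName": "my-index-endpoint",
        "publicEndpointEnabled": True,
    },
).execute()

# Poll the long-running operation until it completes.
while not operation.get("done"):
    time.sleep(10)
    operation = index_endpoints.operations().get(name=operation["name"]).execute()

if "error" in operation:
    raise RuntimeError(operation["error"].get("message"))
created_endpoint = operation["response"]  # fields of the created IndexEndpoint
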
delete(name, x__xgafv=None)
Deletes an IndexEndpoint.

Args:
  name: string, Required. The name of the IndexEndpoint resource to be deleted. Format: `projects/{project}/locations/{location}/indexEndpoints/{index_endpoint}` (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # This resource represents a long-running operation that is the result of a network API call.
  "done": True or False, # If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available.
  "error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # The error result of the operation in case of failure or cancellation.
    "code": 42, # The status code, which should be an enum value of google.rpc.Code.
    "details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
      {
        "a_key": "", # Properties of the object. Contains field @type with type URL.
      },
    ],
    "message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
  },
  "metadata": { # Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
    "a_key": "", # Properties of the object. Contains field @type with type URL.
  },
  "name": "A String", # The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`.
  "response": { # The normal, successful response of the operation. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
    "a_key": "", # Properties of the object. Contains field @type with type URL.
  },
}
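
A hedged sketch of a delete call (the resource name is a placeholder); like create, it returns a long-running operation that can be polled as shown above:

operation = index_endpoints.delete(
    name="projects/my-project/locations/us-central1/indexEndpoints/1234567890",
).execute()
print(operation["name"])  # operation resource name to poll until done
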
deployIndex(indexEndpoint, body=None, x__xgafv=None)
Deploys an Index into this IndexEndpoint, creating a DeployedIndex within it. Only non-empty Indexes can be deployed.

Args:
  indexEndpoint: string, Required. The name of the IndexEndpoint resource into which to deploy an Index. Format: `projects/{project}/locations/{location}/indexEndpoints/{index_endpoint}` (required)
  body: object, The request body.
    The object takes the form of:

{ # Request message for IndexEndpointService.DeployIndex.
  "deployedIndex": { # A deployment of an Index. IndexEndpoints contain one or more DeployedIndexes. # Required. The DeployedIndex to be created within the IndexEndpoint.
    "automaticResources": { # A description of resources that to large degree are decided by Vertex AI, and require only a modest additional configuration. Each Model supporting these resources documents its specific guidelines. # Optional. A description of resources that the DeployedIndex uses, which to large degree are decided by Vertex AI, and optionally allows only a modest additional configuration. If min_replica_count is not set, the default value is 2 (we don't provide SLA when min_replica_count=1). If max_replica_count is not set, the default value is min_replica_count. The max allowed replica count is 1000.
      "maxReplicaCount": 42, # Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, a no upper bound for scaling under heavy traffic will be assume, though Vertex AI may be unable to scale beyond certain replica number.
      "minReplicaCount": 42, # Immutable. The minimum number of replicas this DeployedModel will be always deployed on. If traffic against it increases, it may dynamically be deployed onto more replicas up to max_replica_count, and as traffic decreases, some of these extra replicas may be freed. If the requested value is too large, the deployment will error.
    },
    "createTime": "A String", # Output only. Timestamp when the DeployedIndex was created.
    "dedicatedResources": { # A description of resources that are dedicated to a DeployedModel, and that need a higher degree of manual configuration. # Optional. A description of resources that are dedicated to the DeployedIndex, and that need a higher degree of manual configuration. The field min_replica_count must be set to a value strictly greater than 0, or else validation will fail. We don't provide SLA when min_replica_count=1. If max_replica_count is not set, the default value is min_replica_count. The max allowed replica count is 1000. Available machine types for SMALL shard: e2-standard-2 and all machine types available for MEDIUM and LARGE shard. Available machine types for MEDIUM shard: e2-standard-16 and all machine types available for LARGE shard. Available machine types for LARGE shard: e2-highmem-16, n2d-standard-32. n1-standard-16 and n1-standard-32 are still available, but we recommend e2-standard-16 and e2-highmem-16 for cost efficiency.
      "autoscalingMetricSpecs": [ # Immutable. The metric specifications that overrides a resource utilization metric (CPU utilization, accelerator's duty cycle, and so on) target value (default to 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, the autoscaling will be based on both CPU utilization and accelerator's duty cycle metrics and scale up when either metrics exceeds its target value while scale down if both metrics are under their target value. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, the autoscaling will be based on CPU utilization metric only with default target value 60 if not explicitly set. For example, in the case of Online Prediction, if you want to override target CPU utilization to 80, you should set autoscaling_metric_specs.metric_name to `aiplatform.googleapis.com/prediction/online/cpu/utilization` and autoscaling_metric_specs.target to `80`.
        { # The metric specification that defines the target resource utilization (CPU utilization, accelerator's duty cycle, and so on) for calculating the desired replica count.
          "metricName": "A String", # Required. The resource metric name. Supported metrics: * For Online Prediction: * `aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle` * `aiplatform.googleapis.com/prediction/online/cpu/utilization`
          "target": 42, # The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided.
        },
      ],
      "machineSpec": { # Specification of a single machine. # Required. Immutable. The specification of a single machine used by the prediction.
        "acceleratorCount": 42, # The number of accelerators to attach to the machine.
        "acceleratorType": "A String", # Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
        "machineType": "A String", # Immutable. The type of the machine. See the [list of machine types supported for prediction](https://cloud.google.com/vertex-ai/docs/predictions/configure-compute#machine-types) See the [list of machine types supported for custom training](https://cloud.google.com/vertex-ai/docs/training/configure-compute#machine-types). For DeployedModel this field is optional, and the default value is `n1-standard-2`. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
        "reservationAffinity": { # A ReservationAffinity can be used to configure a Vertex AI resource (e.g., a DeployedModel) to draw its Compute Engine resources from a Shared Reservation, or exclusively from on-demand capacity. # Optional. Immutable. Configuration controlling how this resource pool consumes reservation.
          "key": "A String", # Optional. Corresponds to the label key of a reservation resource. To target a SPECIFIC_RESERVATION by name, use `compute.googleapis.com/reservation-name` as the key and specify the name of your reservation as its value.
          "reservationAffinityType": "A String", # Required. Specifies the reservation affinity type.
          "values": [ # Optional. Corresponds to the label values of a reservation resource. This must be the full resource name of the reservation.
            "A String",
          ],
        },
        "tpuTopology": "A String", # Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
      },
      "maxReplicaCount": 42, # Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, will use min_replica_count as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).
      "minReplicaCount": 42, # Required. Immutable. The minimum number of machine replicas this DeployedModel will be always deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.
      "requiredReplicaCount": 42, # Optional. Number of required available replicas for the deployment to succeed. This field is only needed when partial model deployment/mutation is desired. If set, the model deploy/mutate operation will succeed once available_replica_count reaches required_replica_count, and the rest of the replicas will be retried. If not set, the default required_replica_count will be min_replica_count.
      "spot": True or False, # Optional. If true, schedule the deployment workload on [spot VMs](https://cloud.google.com/kubernetes-engine/docs/concepts/spot-vms).
    },
    "deployedIndexAuthConfig": { # Used to set up the auth on the DeployedIndex's private endpoint. # Optional. If set, the authentication is enabled for the private endpoint.
      "authProvider": { # Configuration for an authentication provider, including support for [JSON Web Token (JWT)](https://tools.ietf.org/html/draft-ietf-oauth-json-web-token-32). # Defines the authentication provider that the DeployedIndex uses.
        "allowedIssuers": [ # A list of allowed JWT issuers. Each entry must be a valid Google service account, in the following format: `service-account-name@project-id.iam.gserviceaccount.com`
          "A String",
        ],
        "audiences": [ # The list of JWT [audiences](https://tools.ietf.org/html/draft-ietf-oauth-json-web-token-32#section-4.1.3). that are allowed to access. A JWT containing any of these audiences will be accepted.
          "A String",
        ],
      },
    },
    "deploymentGroup": "A String", # Optional. The deployment group can be no longer than 64 characters (eg: 'test', 'prod'). If not set, we will use the 'default' deployment group. Creating `deployment_groups` with `reserved_ip_ranges` is a recommended practice when the peered network has multiple peering ranges. This creates your deployments from predictable IP spaces for easier traffic administration. Also, one deployment_group (except 'default') can only be used with the same reserved_ip_ranges which means if the deployment_group has been used with reserved_ip_ranges: [a, b, c], using it with [a, b] or [d, e] is disallowed. Note: we only support up to 5 deployment groups(not including 'default').
    "displayName": "A String", # The display name of the DeployedIndex. If not provided upon creation, the Index's display_name is used.
    "enableAccessLogging": True or False, # Optional. If true, private endpoint's access logs are sent to Cloud Logging. These logs are like standard server access logs, containing information like timestamp and latency for each MatchRequest. Note that logs may incur a cost, especially if the deployed index receives a high queries per second rate (QPS). Estimate your costs before enabling this option.
    "id": "A String", # Required. The user specified ID of the DeployedIndex. The ID can be up to 128 characters long and must start with a letter and only contain letters, numbers, and underscores. The ID must be unique within the project it is created in.
    "index": "A String", # Required. The name of the Index this is the deployment of. We may refer to this Index as the DeployedIndex's "original" Index.
    "indexSyncTime": "A String", # Output only. The DeployedIndex may depend on various data on its original Index. Additionally when certain changes to the original Index are being done (e.g. when what the Index contains is being changed) the DeployedIndex may be asynchronously updated in the background to reflect these changes. If this timestamp's value is at least the Index.update_time of the original Index, it means that this DeployedIndex and the original Index are in sync. If this timestamp is older, then to see which updates this DeployedIndex already contains (and which it does not), one must list the operations that are running on the original Index. Only the successfully completed Operations with update_time equal or before this sync time are contained in this DeployedIndex.
    "privateEndpoints": { # IndexPrivateEndpoints proto is used to provide paths for users to send requests via private endpoints (e.g. private service access, private service connect). To send request via private service access, use match_grpc_address. To send request via private service connect, use service_attachment. # Output only. Provides paths for users to send requests directly to the deployed index services running on Cloud via private services access. This field is populated if network is configured.
      "matchGrpcAddress": "A String", # Output only. The ip address used to send match gRPC requests.
      "pscAutomatedEndpoints": [ # Output only. PscAutomatedEndpoints is populated if private service connect is enabled if PscAutomatedConfig is set.
        { # PscAutomatedEndpoints defines the output of the forwarding rule automatically created by each PscAutomationConfig.
          "matchAddress": "A String", # Ip Address created by the automated forwarding rule.
          "network": "A String", # Corresponding network in pscAutomationConfigs.
          "projectId": "A String", # Corresponding project_id in pscAutomationConfigs
        },
      ],
      "serviceAttachment": "A String", # Output only. The name of the service attachment resource. Populated if private service connect is enabled.
    },
    "pscAutomationConfigs": [ # Optional. If set for PSC deployed index, PSC connection will be automatically created after deployment is done and the endpoint information is populated in private_endpoints.psc_automated_endpoints.
      { # PSC config that is used to automatically create forwarding rule via ServiceConnectionMap.
        "network": "A String", # Required. The full name of the Google Compute Engine [network](https://cloud.google.com/compute/docs/networks-and-firewalls#networks). [Format](https://cloud.google.com/compute/docs/reference/rest/v1/networks/insert): `projects/{project}/global/networks/{network}`. Where {project} is a project number, as in '12345', and {network} is network name.
        "projectId": "A String", # Required. Project id used to create forwarding rule.
      },
    ],
    "reservedIpRanges": [ # Optional. A list of reserved ip ranges under the VPC network that can be used for this DeployedIndex. If set, we will deploy the index within the provided ip ranges. Otherwise, the index might be deployed to any ip ranges under the provided VPC network. The value should be the name of the address (https://cloud.google.com/compute/docs/reference/rest/v1/addresses) Example: ['vertex-ai-ip-range']. For more information about subnets and network IP ranges, please see https://cloud.google.com/vpc/docs/subnets#manually_created_subnet_ip_ranges.
      "A String",
    ],
  },
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # This resource represents a long-running operation that is the result of a network API call.
  "done": True or False, # If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available.
  "error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # The error result of the operation in case of failure or cancellation.
    "code": 42, # The status code, which should be an enum value of google.rpc.Code.
    "details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
      {
        "a_key": "", # Properties of the object. Contains field @type with type URL.
      },
    ],
    "message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
  },
  "metadata": { # Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
    "a_key": "", # Properties of the object. Contains field @type with type URL.
  },
  "name": "A String", # The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`.
  "response": { # The normal, successful response of the operation. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
    "a_key": "", # Properties of the object. Contains field @type with type URL.
  },
}
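
For illustration only, a deployIndex request that deploys an existing Index onto dedicated resources might look like the sketch below; the index name, deployed index ID, and machine sizing are placeholder assumptions chosen from the field descriptions above:

operation = index_endpoints.deployIndex(
    indexEndpoint="projects/my-project/locations/us-central1/indexEndpoints/1234567890",
    body={
        "deployedIndex": {
            "id": "my_deployed_index",  # letters, numbers, and underscores only
            "index": "projects/my-project/locations/us-central1/indexes/9876543210",
            "displayName": "my-deployed-index",
            "dedicatedResources": {
                "machineSpec": {"machineType": "e2-standard-16"},
                "minReplicaCount": 2,
                "maxReplicaCount": 2,
            },
        },
    },
).execute()
# The returned operation can be polled with operations().get() as shown earlier.
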
findNeighbors(indexEndpoint, body=None, x__xgafv=None)
Finds the nearest neighbors of each vector within the request.

Args:
  indexEndpoint: string, Required. The name of the index endpoint. Format: `projects/{project}/locations/{location}/indexEndpoints/{index_endpoint}` (required)
  body: object, The request body.
    The object takes the form of:

{ # The request message for MatchService.FindNeighbors.
  "deployedIndexId": "A String", # The ID of the DeployedIndex that will serve the request. This request is sent to a specific IndexEndpoint, as per the IndexEndpoint.network. That IndexEndpoint also has IndexEndpoint.deployed_indexes, and each such index has a DeployedIndex.id field. The value of the field below must equal one of the DeployedIndex.id fields of the IndexEndpoint that is being called for this request.
  "queries": [ # The list of queries.
    { # A query to find a number of the nearest neighbors (most similar vectors) of a vector.
      "approximateNeighborCount": 42, # The number of neighbors to find via approximate search before exact reordering is performed. If not set, the default value from scam config is used; if set, this value must be > 0.
      "datapoint": { # A datapoint of Index. # Required. The datapoint/vector whose nearest neighbors should be searched for.
        "crowdingTag": { # Crowding tag is a constraint on a neighbor list produced by nearest neighbor search requiring that no more than some value k' of the k neighbors returned have the same value of crowding_attribute. # Optional. CrowdingTag of the datapoint, the number of neighbors to return in each crowding can be configured during query.
          "crowdingAttribute": "A String", # The attribute value used for crowding. The maximum number of neighbors to return per crowding attribute value (per_crowding_attribute_num_neighbors) is configured per-query. This field is ignored if per_crowding_attribute_num_neighbors is larger than the total number of neighbors to return for a given query.
        },
        "datapointId": "A String", # Required. Unique identifier of the datapoint.
        "featureVector": [ # Required. Feature embedding vector for dense index. An array of numbers with the length of [NearestNeighborSearchConfig.dimensions].
          3.14,
        ],
        "numericRestricts": [ # Optional. List of Restrict of the datapoint, used to perform "restricted searches" where boolean rule are used to filter the subset of the database eligible for matching. This uses numeric comparisons.
          { # This field allows restricts to be based on numeric comparisons rather than categorical tokens.
            "namespace": "A String", # The namespace of this restriction. e.g.: cost.
            "op": "A String", # This MUST be specified for queries and must NOT be specified for datapoints.
            "valueDouble": 3.14, # Represents 64 bit float.
            "valueFloat": 3.14, # Represents 32 bit float.
            "valueInt": "A String", # Represents 64 bit integer.
          },
        ],
        "restricts": [ # Optional. List of Restrict of the datapoint, used to perform "restricted searches" where boolean rule are used to filter the subset of the database eligible for matching. This uses categorical tokens. See: https://cloud.google.com/vertex-ai/docs/matching-engine/filtering
          { # Restriction of a datapoint which describe its attributes(tokens) from each of several attribute categories(namespaces).
            "allowList": [ # The attributes to allow in this namespace. e.g.: 'red'
              "A String",
            ],
            "denyList": [ # The attributes to deny in this namespace. e.g.: 'blue'
              "A String",
            ],
            "namespace": "A String", # The namespace of this restriction. e.g.: color.
          },
        ],
        "sparseEmbedding": { # Feature embedding vector for sparse index. An array of numbers whose values are located in the specified dimensions. # Optional. Feature embedding vector for sparse index.
          "dimensions": [ # Required. The list of indexes for the embedding values of the sparse vector.
            "A String",
          ],
          "values": [ # Required. The list of embedding values of the sparse vector.
            3.14,
          ],
        },
      },
      "fractionLeafNodesToSearchOverride": 3.14, # The fraction of the number of leaves to search, set at query time allows user to tune search performance. This value increase result in both search accuracy and latency increase. The value should be between 0.0 and 1.0. If not set or set to 0.0, query uses the default value specified in NearestNeighborSearchConfig.TreeAHConfig.fraction_leaf_nodes_to_search.
      "neighborCount": 42, # The number of nearest neighbors to be retrieved from database for each query. If not set, will use the default from the service configuration (https://cloud.google.com/vertex-ai/docs/matching-engine/configuring-indexes#nearest-neighbor-search-config).
      "perCrowdingAttributeNeighborCount": 42, # Crowding is a constraint on a neighbor list produced by nearest neighbor search requiring that no more than some value k' of the k neighbors returned have the same value of crowding_attribute. It's used for improving result diversity. This field is the maximum number of matches with the same crowding tag.
      "rrf": { # Parameters for RRF algorithm that combines search results. # Optional. Represents RRF algorithm that combines search results.
        "alpha": 3.14, # Required. Users can provide an alpha value to give more weight to dense vs sparse results. For example, if the alpha is 0, we only return sparse and if the alpha is 1, we only return dense.
      },
    },
  ],
  "returnFullDatapoint": True or False, # If set to true, the full datapoints (including all vector values and restricts) of the nearest neighbors are returned. Note that returning full datapoint will significantly increase the latency and cost of the query.
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # The response message for MatchService.FindNeighbors.
  "nearestNeighbors": [ # The nearest neighbors of the query datapoints.
    { # Nearest neighbors for one query.
      "id": "A String", # The ID of the query datapoint.
      "neighbors": [ # All its neighbors.
        { # A neighbor of the query vector.
          "datapoint": { # A datapoint of Index. # The datapoint of the neighbor. Note that full datapoints are returned only when "return_full_datapoint" is set to true. Otherwise, only the "datapoint_id" and "crowding_tag" fields are populated.
            "crowdingTag": { # Crowding tag is a constraint on a neighbor list produced by nearest neighbor search requiring that no more than some value k' of the k neighbors returned have the same value of crowding_attribute. # Optional. CrowdingTag of the datapoint, the number of neighbors to return in each crowding can be configured during query.
              "crowdingAttribute": "A String", # The attribute value used for crowding. The maximum number of neighbors to return per crowding attribute value (per_crowding_attribute_num_neighbors) is configured per-query. This field is ignored if per_crowding_attribute_num_neighbors is larger than the total number of neighbors to return for a given query.
            },
            "datapointId": "A String", # Required. Unique identifier of the datapoint.
            "featureVector": [ # Required. Feature embedding vector for dense index. An array of numbers with the length of [NearestNeighborSearchConfig.dimensions].
              3.14,
            ],
            "numericRestricts": [ # Optional. List of Restrict of the datapoint, used to perform "restricted searches" where boolean rule are used to filter the subset of the database eligible for matching. This uses numeric comparisons.
              { # This field allows restricts to be based on numeric comparisons rather than categorical tokens.
                "namespace": "A String", # The namespace of this restriction. e.g.: cost.
                "op": "A String", # This MUST be specified for queries and must NOT be specified for datapoints.
                "valueDouble": 3.14, # Represents 64 bit float.
                "valueFloat": 3.14, # Represents 32 bit float.
                "valueInt": "A String", # Represents 64 bit integer.
              },
            ],
            "restricts": [ # Optional. List of Restrict of the datapoint, used to perform "restricted searches" where boolean rule are used to filter the subset of the database eligible for matching. This uses categorical tokens. See: https://cloud.google.com/vertex-ai/docs/matching-engine/filtering
              { # Restriction of a datapoint which describe its attributes(tokens) from each of several attribute categories(namespaces).
                "allowList": [ # The attributes to allow in this namespace. e.g.: 'red'
                  "A String",
                ],
                "denyList": [ # The attributes to deny in this namespace. e.g.: 'blue'
                  "A String",
                ],
                "namespace": "A String", # The namespace of this restriction. e.g.: color.
              },
            ],
            "sparseEmbedding": { # Feature embedding vector for sparse index. An array of numbers whose values are located in the specified dimensions. # Optional. Feature embedding vector for sparse index.
              "dimensions": [ # Required. The list of indexes for the embedding values of the sparse vector.
                "A String",
              ],
              "values": [ # Required. The list of embedding values of the sparse vector.
                3.14,
              ],
            },
          },
          "distance": 3.14, # The distance between the neighbor and the dense embedding query.
          "sparseDistance": 3.14, # The distance between the neighbor and the query sparse_embedding.
        },
      ],
    },
  ],
}
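
A minimal query sketch follows; the deployed index ID and vector values are placeholders, and the vector length must match the index's configured dimensions. This assumes the endpoint is reachable through the standard regional API surface (deployments behind a private endpoint may require a different host):

response = index_endpoints.findNeighbors(
    indexEndpoint="projects/my-project/locations/us-central1/indexEndpoints/1234567890",
    body={
        "deployedIndexId": "my_deployed_index",
        "returnFullDatapoint": False,
        "queries": [
            {
                "neighborCount": 10,
                "datapoint": {
                    "datapointId": "query-0",
                    "featureVector": [0.1, 0.2, 0.3, 0.4],  # placeholder vector
                },
            },
        ],
    },
).execute()

for result in response.get("nearestNeighbors", []):
    for neighbor in result.get("neighbors", []):
        print(neighbor["datapoint"]["datapointId"], neighbor["distance"])
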
get(name, x__xgafv=None)
Gets an IndexEndpoint.

Args:
  name: string, Required. The name of the IndexEndpoint resource. Format: `projects/{project}/locations/{location}/indexEndpoints/{index_endpoint}` (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # An IndexEndpoint is a resource into which Indexes are deployed. An IndexEndpoint can have multiple DeployedIndexes.
  "createTime": "A String", # Output only. Timestamp when this IndexEndpoint was created.
  "deployedIndexes": [ # Output only. The indexes deployed in this endpoint.
    { # A deployment of an Index. IndexEndpoints contain one or more DeployedIndexes.
      "automaticResources": { # A description of resources that to large degree are decided by Vertex AI, and require only a modest additional configuration. Each Model supporting these resources documents its specific guidelines. # Optional. A description of resources that the DeployedIndex uses, which to large degree are decided by Vertex AI, and optionally allows only a modest additional configuration. If min_replica_count is not set, the default value is 2 (we don't provide SLA when min_replica_count=1). If max_replica_count is not set, the default value is min_replica_count. The max allowed replica count is 1000.
        "maxReplicaCount": 42, # Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, a no upper bound for scaling under heavy traffic will be assume, though Vertex AI may be unable to scale beyond certain replica number.
        "minReplicaCount": 42, # Immutable. The minimum number of replicas this DeployedModel will be always deployed on. If traffic against it increases, it may dynamically be deployed onto more replicas up to max_replica_count, and as traffic decreases, some of these extra replicas may be freed. If the requested value is too large, the deployment will error.
      },
      "createTime": "A String", # Output only. Timestamp when the DeployedIndex was created.
      "dedicatedResources": { # A description of resources that are dedicated to a DeployedModel, and that need a higher degree of manual configuration. # Optional. A description of resources that are dedicated to the DeployedIndex, and that need a higher degree of manual configuration. The field min_replica_count must be set to a value strictly greater than 0, or else validation will fail. We don't provide SLA when min_replica_count=1. If max_replica_count is not set, the default value is min_replica_count. The max allowed replica count is 1000. Available machine types for SMALL shard: e2-standard-2 and all machine types available for MEDIUM and LARGE shard. Available machine types for MEDIUM shard: e2-standard-16 and all machine types available for LARGE shard. Available machine types for LARGE shard: e2-highmem-16, n2d-standard-32. n1-standard-16 and n1-standard-32 are still available, but we recommend e2-standard-16 and e2-highmem-16 for cost efficiency.
        "autoscalingMetricSpecs": [ # Immutable. The metric specifications that overrides a resource utilization metric (CPU utilization, accelerator's duty cycle, and so on) target value (default to 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, the autoscaling will be based on both CPU utilization and accelerator's duty cycle metrics and scale up when either metrics exceeds its target value while scale down if both metrics are under their target value. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, the autoscaling will be based on CPU utilization metric only with default target value 60 if not explicitly set. For example, in the case of Online Prediction, if you want to override target CPU utilization to 80, you should set autoscaling_metric_specs.metric_name to `aiplatform.googleapis.com/prediction/online/cpu/utilization` and autoscaling_metric_specs.target to `80`.
          { # The metric specification that defines the target resource utilization (CPU utilization, accelerator's duty cycle, and so on) for calculating the desired replica count.
            "metricName": "A String", # Required. The resource metric name. Supported metrics: * For Online Prediction: * `aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle` * `aiplatform.googleapis.com/prediction/online/cpu/utilization`
            "target": 42, # The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided.
          },
        ],
        "machineSpec": { # Specification of a single machine. # Required. Immutable. The specification of a single machine used by the prediction.
          "acceleratorCount": 42, # The number of accelerators to attach to the machine.
          "acceleratorType": "A String", # Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
          "machineType": "A String", # Immutable. The type of the machine. See the [list of machine types supported for prediction](https://cloud.google.com/vertex-ai/docs/predictions/configure-compute#machine-types) See the [list of machine types supported for custom training](https://cloud.google.com/vertex-ai/docs/training/configure-compute#machine-types). For DeployedModel this field is optional, and the default value is `n1-standard-2`. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
          "reservationAffinity": { # A ReservationAffinity can be used to configure a Vertex AI resource (e.g., a DeployedModel) to draw its Compute Engine resources from a Shared Reservation, or exclusively from on-demand capacity. # Optional. Immutable. Configuration controlling how this resource pool consumes reservation.
            "key": "A String", # Optional. Corresponds to the label key of a reservation resource. To target a SPECIFIC_RESERVATION by name, use `compute.googleapis.com/reservation-name` as the key and specify the name of your reservation as its value.
            "reservationAffinityType": "A String", # Required. Specifies the reservation affinity type.
            "values": [ # Optional. Corresponds to the label values of a reservation resource. This must be the full resource name of the reservation.
              "A String",
            ],
          },
          "tpuTopology": "A String", # Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
        },
        "maxReplicaCount": 42, # Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, will use min_replica_count as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).
        "minReplicaCount": 42, # Required. Immutable. The minimum number of machine replicas this DeployedModel will be always deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.
        "requiredReplicaCount": 42, # Optional. Number of required available replicas for the deployment to succeed. This field is only needed when partial model deployment/mutation is desired. If set, the model deploy/mutate operation will succeed once available_replica_count reaches required_replica_count, and the rest of the replicas will be retried. If not set, the default required_replica_count will be min_replica_count.
        "spot": True or False, # Optional. If true, schedule the deployment workload on [spot VMs](https://cloud.google.com/kubernetes-engine/docs/concepts/spot-vms).
      },
      "deployedIndexAuthConfig": { # Used to set up the auth on the DeployedIndex's private endpoint. # Optional. If set, the authentication is enabled for the private endpoint.
        "authProvider": { # Configuration for an authentication provider, including support for [JSON Web Token (JWT)](https://tools.ietf.org/html/draft-ietf-oauth-json-web-token-32). # Defines the authentication provider that the DeployedIndex uses.
          "allowedIssuers": [ # A list of allowed JWT issuers. Each entry must be a valid Google service account, in the following format: `service-account-name@project-id.iam.gserviceaccount.com`
            "A String",
          ],
          "audiences": [ # The list of JWT [audiences](https://tools.ietf.org/html/draft-ietf-oauth-json-web-token-32#section-4.1.3). that are allowed to access. A JWT containing any of these audiences will be accepted.
            "A String",
          ],
        },
      },
      "deploymentGroup": "A String", # Optional. The deployment group can be no longer than 64 characters (eg: 'test', 'prod'). If not set, we will use the 'default' deployment group. Creating `deployment_groups` with `reserved_ip_ranges` is a recommended practice when the peered network has multiple peering ranges. This creates your deployments from predictable IP spaces for easier traffic administration. Also, one deployment_group (except 'default') can only be used with the same reserved_ip_ranges which means if the deployment_group has been used with reserved_ip_ranges: [a, b, c], using it with [a, b] or [d, e] is disallowed. Note: we only support up to 5 deployment groups(not including 'default').
      "displayName": "A String", # The display name of the DeployedIndex. If not provided upon creation, the Index's display_name is used.
      "enableAccessLogging": True or False, # Optional. If true, private endpoint's access logs are sent to Cloud Logging. These logs are like standard server access logs, containing information like timestamp and latency for each MatchRequest. Note that logs may incur a cost, especially if the deployed index receives a high queries per second rate (QPS). Estimate your costs before enabling this option.
      "id": "A String", # Required. The user specified ID of the DeployedIndex. The ID can be up to 128 characters long and must start with a letter and only contain letters, numbers, and underscores. The ID must be unique within the project it is created in.
      "index": "A String", # Required. The name of the Index this is the deployment of. We may refer to this Index as the DeployedIndex's "original" Index.
      "indexSyncTime": "A String", # Output only. The DeployedIndex may depend on various data on its original Index. Additionally when certain changes to the original Index are being done (e.g. when what the Index contains is being changed) the DeployedIndex may be asynchronously updated in the background to reflect these changes. If this timestamp's value is at least the Index.update_time of the original Index, it means that this DeployedIndex and the original Index are in sync. If this timestamp is older, then to see which updates this DeployedIndex already contains (and which it does not), one must list the operations that are running on the original Index. Only the successfully completed Operations with update_time equal or before this sync time are contained in this DeployedIndex.
      "privateEndpoints": { # IndexPrivateEndpoints proto is used to provide paths for users to send requests via private endpoints (e.g. private service access, private service connect). To send request via private service access, use match_grpc_address. To send request via private service connect, use service_attachment. # Output only. Provides paths for users to send requests directly to the deployed index services running on Cloud via private services access. This field is populated if network is configured.
        "matchGrpcAddress": "A String", # Output only. The ip address used to send match gRPC requests.
        "pscAutomatedEndpoints": [ # Output only. PscAutomatedEndpoints is populated if private service connect is enabled if PscAutomatedConfig is set.
          { # PscAutomatedEndpoints defines the output of the forwarding rule automatically created by each PscAutomationConfig.
            "matchAddress": "A String", # Ip Address created by the automated forwarding rule.
            "network": "A String", # Corresponding network in pscAutomationConfigs.
            "projectId": "A String", # Corresponding project_id in pscAutomationConfigs
          },
        ],
        "serviceAttachment": "A String", # Output only. The name of the service attachment resource. Populated if private service connect is enabled.
      },
      "pscAutomationConfigs": [ # Optional. If set for PSC deployed index, PSC connection will be automatically created after deployment is done and the endpoint information is populated in private_endpoints.psc_automated_endpoints.
        { # PSC config that is used to automatically create forwarding rule via ServiceConnectionMap.
          "network": "A String", # Required. The full name of the Google Compute Engine [network](https://cloud.google.com/compute/docs/networks-and-firewalls#networks). [Format](https://cloud.google.com/compute/docs/reference/rest/v1/networks/insert): `projects/{project}/global/networks/{network}`. Where {project} is a project number, as in '12345', and {network} is network name.
          "projectId": "A String", # Required. Project id used to create forwarding rule.
        },
      ],
      "reservedIpRanges": [ # Optional. A list of reserved ip ranges under the VPC network that can be used for this DeployedIndex. If set, we will deploy the index within the provided ip ranges. Otherwise, the index might be deployed to any ip ranges under the provided VPC network. The value should be the name of the address (https://cloud.google.com/compute/docs/reference/rest/v1/addresses) Example: ['vertex-ai-ip-range']. For more information about subnets and network IP ranges, please see https://cloud.google.com/vpc/docs/subnets#manually_created_subnet_ip_ranges.
        "A String",
      ],
    },
  ],
  "description": "A String", # The description of the IndexEndpoint.
  "displayName": "A String", # Required. The display name of the IndexEndpoint. The name can be up to 128 characters long and can consist of any UTF-8 characters.
  "enablePrivateServiceConnect": True or False, # Optional. Deprecated: If true, expose the IndexEndpoint via private service connect. Only one of the fields, network or enable_private_service_connect, can be set.
  "encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Immutable. Customer-managed encryption key spec for an IndexEndpoint. If set, this IndexEndpoint and all sub-resources of this IndexEndpoint will be secured by this key.
    "kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created.
  },
  "etag": "A String", # Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
  "labels": { # The labels with user-defined metadata to organize your IndexEndpoints. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
    "a_key": "A String",
  },
  "name": "A String", # Output only. The resource name of the IndexEndpoint.
  "network": "A String", # Optional. The full name of the Google Compute Engine [network](https://cloud.google.com/compute/docs/networks-and-firewalls#networks) to which the IndexEndpoint should be peered. Private services access must already be configured for the network. If left unspecified, the Endpoint is not peered with any network. network and private_service_connect_config are mutually exclusive. [Format](https://cloud.google.com/compute/docs/reference/rest/v1/networks/insert): `projects/{project}/global/networks/{network}`. Where {project} is a project number, as in '12345', and {network} is network name.
  "privateServiceConnectConfig": { # Represents configuration for private service connect. # Optional. Configuration for private service connect. network and private_service_connect_config are mutually exclusive.
    "enablePrivateServiceConnect": True or False, # Required. If true, expose the IndexEndpoint via private service connect.
    "projectAllowlist": [ # A list of Projects from which the forwarding rule will target the service attachment.
      "A String",
    ],
    "serviceAttachment": "A String", # Output only. The name of the generated service attachment resource. This is only populated if the endpoint is deployed with PrivateServiceConnect.
  },
  "publicEndpointDomainName": "A String", # Output only. If public_endpoint_enabled is true, this field will be populated with the domain name to use for this index endpoint.
  "publicEndpointEnabled": True or False, # Optional. If true, the deployed index will be accessible through public endpoint.
  "satisfiesPzi": True or False, # Output only. Reserved for future use.
  "satisfiesPzs": True or False, # Output only. Reserved for future use.
  "updateTime": "A String", # Output only. Timestamp when this IndexEndpoint was last updated. This timestamp is not updated when the endpoint's DeployedIndexes are updated, e.g. due to updates of the original Indexes they are the deployments of.
}
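
Example (illustrative sketch, not part of the generated reference): the usage sketches under the method entries below assume a discovery-based client built roughly as follows. The project, location, and API version shown are placeholders; Vertex AI is served from regional endpoints, so the api_endpoint must match the location you use, and the version string should match the API version this page documents.

  from googleapiclient import discovery

  PROJECT = "my-project"    # placeholder
  LOCATION = "us-central1"  # placeholder

  # Build the Vertex AI (aiplatform) client against the regional endpoint.
  service = discovery.build(
      "aiplatform",
      "v1",
      client_options={"api_endpoint": f"https://{LOCATION}-aiplatform.googleapis.com"},
  )
  index_endpoints = service.projects().locations().indexEndpoints()
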
list(parent, filter=None, pageSize=None, pageToken=None, readMask=None, x__xgafv=None)
Lists IndexEndpoints in a Location.

Args:
  parent: string, Required. The resource name of the Location from which to list the IndexEndpoints. Format: `projects/{project}/locations/{location}` (required)
  filter: string, Optional. An expression for filtering the results of the request. For field names, both snake_case and camelCase are supported. * `index_endpoint` supports = and !=. `index_endpoint` represents the IndexEndpoint ID, i.e. the last segment of the IndexEndpoint's resource name. * `display_name` supports =, != and regex() (uses [re2](https://github.com/google/re2/wiki/Syntax) syntax). * `labels` supports general map functions, that is: `labels.key=value` - key:value equality; `labels.key:*` or `labels:key` - key existence. A key including a space must be quoted, e.g. `labels."a key"`. Some examples: * `index_endpoint="1"` * `display_name="myDisplayName"` * `regex(display_name, "^A")` -> the display name starts with an A. * `labels.myKey="myValue"`
  pageSize: integer, Optional. The standard list page size.
  pageToken: string, Optional. The standard list page token. Typically obtained via ListIndexEndpointsResponse.next_page_token of the previous IndexEndpointService.ListIndexEndpoints call.
  readMask: string, Optional. Mask specifying which fields to read.
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Response message for IndexEndpointService.ListIndexEndpoints.
  "indexEndpoints": [ # List of IndexEndpoints in the requested page.
    { # Indexes are deployed into it. An IndexEndpoint can have multiple DeployedIndexes.
      "createTime": "A String", # Output only. Timestamp when this IndexEndpoint was created.
      "deployedIndexes": [ # Output only. The indexes deployed in this endpoint.
        { # A deployment of an Index. IndexEndpoints contain one or more DeployedIndexes.
          "automaticResources": { # A description of resources that to large degree are decided by Vertex AI, and require only a modest additional configuration. Each Model supporting these resources documents its specific guidelines. # Optional. A description of resources that the DeployedIndex uses, which to large degree are decided by Vertex AI, and optionally allows only a modest additional configuration. If min_replica_count is not set, the default value is 2 (we don't provide SLA when min_replica_count=1). If max_replica_count is not set, the default value is min_replica_count. The max allowed replica count is 1000.
            "maxReplicaCount": 42, # Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, a no upper bound for scaling under heavy traffic will be assume, though Vertex AI may be unable to scale beyond certain replica number.
            "minReplicaCount": 42, # Immutable. The minimum number of replicas this DeployedModel will be always deployed on. If traffic against it increases, it may dynamically be deployed onto more replicas up to max_replica_count, and as traffic decreases, some of these extra replicas may be freed. If the requested value is too large, the deployment will error.
          },
          "createTime": "A String", # Output only. Timestamp when the DeployedIndex was created.
          "dedicatedResources": { # A description of resources that are dedicated to a DeployedModel, and that need a higher degree of manual configuration. # Optional. A description of resources that are dedicated to the DeployedIndex, and that need a higher degree of manual configuration. The field min_replica_count must be set to a value strictly greater than 0, or else validation will fail. We don't provide SLA when min_replica_count=1. If max_replica_count is not set, the default value is min_replica_count. The max allowed replica count is 1000. Available machine types for SMALL shard: e2-standard-2 and all machine types available for MEDIUM and LARGE shard. Available machine types for MEDIUM shard: e2-standard-16 and all machine types available for LARGE shard. Available machine types for LARGE shard: e2-highmem-16, n2d-standard-32. n1-standard-16 and n1-standard-32 are still available, but we recommend e2-standard-16 and e2-highmem-16 for cost efficiency.
            "autoscalingMetricSpecs": [ # Immutable. The metric specifications that overrides a resource utilization metric (CPU utilization, accelerator's duty cycle, and so on) target value (default to 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, the autoscaling will be based on both CPU utilization and accelerator's duty cycle metrics and scale up when either metrics exceeds its target value while scale down if both metrics are under their target value. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, the autoscaling will be based on CPU utilization metric only with default target value 60 if not explicitly set. For example, in the case of Online Prediction, if you want to override target CPU utilization to 80, you should set autoscaling_metric_specs.metric_name to `aiplatform.googleapis.com/prediction/online/cpu/utilization` and autoscaling_metric_specs.target to `80`.
              { # The metric specification that defines the target resource utilization (CPU utilization, accelerator's duty cycle, and so on) for calculating the desired replica count.
                "metricName": "A String", # Required. The resource metric name. Supported metrics: * For Online Prediction: * `aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle` * `aiplatform.googleapis.com/prediction/online/cpu/utilization`
                "target": 42, # The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided.
              },
            ],
            "machineSpec": { # Specification of a single machine. # Required. Immutable. The specification of a single machine used by the prediction.
              "acceleratorCount": 42, # The number of accelerators to attach to the machine.
              "acceleratorType": "A String", # Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
              "machineType": "A String", # Immutable. The type of the machine. See the [list of machine types supported for prediction](https://cloud.google.com/vertex-ai/docs/predictions/configure-compute#machine-types) See the [list of machine types supported for custom training](https://cloud.google.com/vertex-ai/docs/training/configure-compute#machine-types). For DeployedModel this field is optional, and the default value is `n1-standard-2`. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
              "reservationAffinity": { # A ReservationAffinity can be used to configure a Vertex AI resource (e.g., a DeployedModel) to draw its Compute Engine resources from a Shared Reservation, or exclusively from on-demand capacity. # Optional. Immutable. Configuration controlling how this resource pool consumes reservation.
                "key": "A String", # Optional. Corresponds to the label key of a reservation resource. To target a SPECIFIC_RESERVATION by name, use `compute.googleapis.com/reservation-name` as the key and specify the name of your reservation as its value.
                "reservationAffinityType": "A String", # Required. Specifies the reservation affinity type.
                "values": [ # Optional. Corresponds to the label values of a reservation resource. This must be the full resource name of the reservation.
                  "A String",
                ],
              },
              "tpuTopology": "A String", # Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
            },
            "maxReplicaCount": 42, # Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, will use min_replica_count as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).
            "minReplicaCount": 42, # Required. Immutable. The minimum number of machine replicas this DeployedModel will be always deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.
            "requiredReplicaCount": 42, # Optional. Number of required available replicas for the deployment to succeed. This field is only needed when partial model deployment/mutation is desired. If set, the model deploy/mutate operation will succeed once available_replica_count reaches required_replica_count, and the rest of the replicas will be retried. If not set, the default required_replica_count will be min_replica_count.
            "spot": True or False, # Optional. If true, schedule the deployment workload on [spot VMs](https://cloud.google.com/kubernetes-engine/docs/concepts/spot-vms).
          },
          "deployedIndexAuthConfig": { # Used to set up the auth on the DeployedIndex's private endpoint. # Optional. If set, the authentication is enabled for the private endpoint.
            "authProvider": { # Configuration for an authentication provider, including support for [JSON Web Token (JWT)](https://tools.ietf.org/html/draft-ietf-oauth-json-web-token-32). # Defines the authentication provider that the DeployedIndex uses.
              "allowedIssuers": [ # A list of allowed JWT issuers. Each entry must be a valid Google service account, in the following format: `service-account-name@project-id.iam.gserviceaccount.com`
                "A String",
              ],
              "audiences": [ # The list of JWT [audiences](https://tools.ietf.org/html/draft-ietf-oauth-json-web-token-32#section-4.1.3). that are allowed to access. A JWT containing any of these audiences will be accepted.
                "A String",
              ],
            },
          },
          "deploymentGroup": "A String", # Optional. The deployment group can be no longer than 64 characters (eg: 'test', 'prod'). If not set, we will use the 'default' deployment group. Creating `deployment_groups` with `reserved_ip_ranges` is a recommended practice when the peered network has multiple peering ranges. This creates your deployments from predictable IP spaces for easier traffic administration. Also, one deployment_group (except 'default') can only be used with the same reserved_ip_ranges which means if the deployment_group has been used with reserved_ip_ranges: [a, b, c], using it with [a, b] or [d, e] is disallowed. Note: we only support up to 5 deployment groups(not including 'default').
          "displayName": "A String", # The display name of the DeployedIndex. If not provided upon creation, the Index's display_name is used.
          "enableAccessLogging": True or False, # Optional. If true, private endpoint's access logs are sent to Cloud Logging. These logs are like standard server access logs, containing information like timestamp and latency for each MatchRequest. Note that logs may incur a cost, especially if the deployed index receives a high queries per second rate (QPS). Estimate your costs before enabling this option.
          "id": "A String", # Required. The user specified ID of the DeployedIndex. The ID can be up to 128 characters long and must start with a letter and only contain letters, numbers, and underscores. The ID must be unique within the project it is created in.
          "index": "A String", # Required. The name of the Index this is the deployment of. We may refer to this Index as the DeployedIndex's "original" Index.
          "indexSyncTime": "A String", # Output only. The DeployedIndex may depend on various data on its original Index. Additionally when certain changes to the original Index are being done (e.g. when what the Index contains is being changed) the DeployedIndex may be asynchronously updated in the background to reflect these changes. If this timestamp's value is at least the Index.update_time of the original Index, it means that this DeployedIndex and the original Index are in sync. If this timestamp is older, then to see which updates this DeployedIndex already contains (and which it does not), one must list the operations that are running on the original Index. Only the successfully completed Operations with update_time equal or before this sync time are contained in this DeployedIndex.
          "privateEndpoints": { # IndexPrivateEndpoints proto is used to provide paths for users to send requests via private endpoints (e.g. private service access, private service connect). To send request via private service access, use match_grpc_address. To send request via private service connect, use service_attachment. # Output only. Provides paths for users to send requests directly to the deployed index services running on Cloud via private services access. This field is populated if network is configured.
            "matchGrpcAddress": "A String", # Output only. The ip address used to send match gRPC requests.
            "pscAutomatedEndpoints": [ # Output only. PscAutomatedEndpoints is populated if private service connect is enabled if PscAutomatedConfig is set.
              { # PscAutomatedEndpoints defines the output of the forwarding rule automatically created by each PscAutomationConfig.
                "matchAddress": "A String", # Ip Address created by the automated forwarding rule.
                "network": "A String", # Corresponding network in pscAutomationConfigs.
                "projectId": "A String", # Corresponding project_id in pscAutomationConfigs
              },
            ],
            "serviceAttachment": "A String", # Output only. The name of the service attachment resource. Populated if private service connect is enabled.
          },
          "pscAutomationConfigs": [ # Optional. If set for PSC deployed index, PSC connection will be automatically created after deployment is done and the endpoint information is populated in private_endpoints.psc_automated_endpoints.
            { # PSC config that is used to automatically create forwarding rule via ServiceConnectionMap.
              "network": "A String", # Required. The full name of the Google Compute Engine [network](https://cloud.google.com/compute/docs/networks-and-firewalls#networks). [Format](https://cloud.google.com/compute/docs/reference/rest/v1/networks/insert): `projects/{project}/global/networks/{network}`. Where {project} is a project number, as in '12345', and {network} is network name.
              "projectId": "A String", # Required. Project id used to create forwarding rule.
            },
          ],
          "reservedIpRanges": [ # Optional. A list of reserved ip ranges under the VPC network that can be used for this DeployedIndex. If set, we will deploy the index within the provided ip ranges. Otherwise, the index might be deployed to any ip ranges under the provided VPC network. The value should be the name of the address (https://cloud.google.com/compute/docs/reference/rest/v1/addresses) Example: ['vertex-ai-ip-range']. For more information about subnets and network IP ranges, please see https://cloud.google.com/vpc/docs/subnets#manually_created_subnet_ip_ranges.
            "A String",
          ],
        },
      ],
      "description": "A String", # The description of the IndexEndpoint.
      "displayName": "A String", # Required. The display name of the IndexEndpoint. The name can be up to 128 characters long and can consist of any UTF-8 characters.
      "enablePrivateServiceConnect": True or False, # Optional. Deprecated: If true, expose the IndexEndpoint via private service connect. Only one of the fields, network or enable_private_service_connect, can be set.
      "encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Immutable. Customer-managed encryption key spec for an IndexEndpoint. If set, this IndexEndpoint and all sub-resources of this IndexEndpoint will be secured by this key.
        "kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created.
      },
      "etag": "A String", # Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
      "labels": { # The labels with user-defined metadata to organize your IndexEndpoints. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
        "a_key": "A String",
      },
      "name": "A String", # Output only. The resource name of the IndexEndpoint.
      "network": "A String", # Optional. The full name of the Google Compute Engine [network](https://cloud.google.com/compute/docs/networks-and-firewalls#networks) to which the IndexEndpoint should be peered. Private services access must already be configured for the network. If left unspecified, the Endpoint is not peered with any network. network and private_service_connect_config are mutually exclusive. [Format](https://cloud.google.com/compute/docs/reference/rest/v1/networks/insert): `projects/{project}/global/networks/{network}`. Where {project} is a project number, as in '12345', and {network} is network name.
      "privateServiceConnectConfig": { # Represents configuration for private service connect. # Optional. Configuration for private service connect. network and private_service_connect_config are mutually exclusive.
        "enablePrivateServiceConnect": True or False, # Required. If true, expose the IndexEndpoint via private service connect.
        "projectAllowlist": [ # A list of Projects from which the forwarding rule will target the service attachment.
          "A String",
        ],
        "serviceAttachment": "A String", # Output only. The name of the generated service attachment resource. This is only populated if the endpoint is deployed with PrivateServiceConnect.
      },
      "publicEndpointDomainName": "A String", # Output only. If public_endpoint_enabled is true, this field will be populated with the domain name to use for this index endpoint.
      "publicEndpointEnabled": True or False, # Optional. If true, the deployed index will be accessible through public endpoint.
      "satisfiesPzi": True or False, # Output only. Reserved for future use.
      "satisfiesPzs": True or False, # Output only. Reserved for future use.
      "updateTime": "A String", # Output only. Timestamp when this IndexEndpoint was last updated. This timestamp is not updated when the endpoint's DeployedIndexes are updated, e.g. due to updates of the original Indexes they are the deployments of.
    },
  ],
  "nextPageToken": "A String", # A token to retrieve next page of results. Pass to ListIndexEndpointsRequest.page_token to obtain that page.
}
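
A minimal usage sketch for list(), assuming the `service` client from the earlier sketch; the display-name filter value is a placeholder:

  parent = f"projects/{PROJECT}/locations/{LOCATION}"
  response = (
      service.projects()
      .locations()
      .indexEndpoints()
      .list(parent=parent, filter='display_name="myDisplayName"', pageSize=10)
      .execute()
  )
  for endpoint in response.get("indexEndpoints", []):
      # Each entry is an IndexEndpoint dict shaped like the schema above.
      print(endpoint["name"], endpoint.get("displayName"))
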
list_next()
Retrieves the next page of results.

        Args:
          previous_request: The request for the previous page. (required)
          previous_response: The response from the request for the previous page. (required)

        Returns:
          A request object that you can call 'execute()' on to request the next
          page. Returns None if there are no more items in the collection.
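
A pagination sketch that combines list() with list_next(), assuming the `service` client and `parent` string from the earlier sketches:

  request = service.projects().locations().indexEndpoints().list(parent=parent)
  while request is not None:
      response = request.execute()
      for endpoint in response.get("indexEndpoints", []):
          print(endpoint["name"])
      # list_next() returns None once there are no more pages.
      request = service.projects().locations().indexEndpoints().list_next(
          previous_request=request, previous_response=response
      )
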
        
mutateDeployedIndex(indexEndpoint, body=None, x__xgafv=None)
Update an existing DeployedIndex under an IndexEndpoint.

Args:
  indexEndpoint: string, Required. The name of the IndexEndpoint resource into which to deploy an Index. Format: `projects/{project}/locations/{location}/indexEndpoints/{index_endpoint}` (required)
  body: object, The request body.
    The object takes the form of:

{ # A deployment of an Index. IndexEndpoints contain one or more DeployedIndexes.
  "automaticResources": { # A description of resources that to large degree are decided by Vertex AI, and require only a modest additional configuration. Each Model supporting these resources documents its specific guidelines. # Optional. A description of resources that the DeployedIndex uses, which to large degree are decided by Vertex AI, and optionally allows only a modest additional configuration. If min_replica_count is not set, the default value is 2 (we don't provide SLA when min_replica_count=1). If max_replica_count is not set, the default value is min_replica_count. The max allowed replica count is 1000.
    "maxReplicaCount": 42, # Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, a no upper bound for scaling under heavy traffic will be assume, though Vertex AI may be unable to scale beyond certain replica number.
    "minReplicaCount": 42, # Immutable. The minimum number of replicas this DeployedModel will be always deployed on. If traffic against it increases, it may dynamically be deployed onto more replicas up to max_replica_count, and as traffic decreases, some of these extra replicas may be freed. If the requested value is too large, the deployment will error.
  },
  "createTime": "A String", # Output only. Timestamp when the DeployedIndex was created.
  "dedicatedResources": { # A description of resources that are dedicated to a DeployedModel, and that need a higher degree of manual configuration. # Optional. A description of resources that are dedicated to the DeployedIndex, and that need a higher degree of manual configuration. The field min_replica_count must be set to a value strictly greater than 0, or else validation will fail. We don't provide SLA when min_replica_count=1. If max_replica_count is not set, the default value is min_replica_count. The max allowed replica count is 1000. Available machine types for SMALL shard: e2-standard-2 and all machine types available for MEDIUM and LARGE shard. Available machine types for MEDIUM shard: e2-standard-16 and all machine types available for LARGE shard. Available machine types for LARGE shard: e2-highmem-16, n2d-standard-32. n1-standard-16 and n1-standard-32 are still available, but we recommend e2-standard-16 and e2-highmem-16 for cost efficiency.
    "autoscalingMetricSpecs": [ # Immutable. The metric specifications that overrides a resource utilization metric (CPU utilization, accelerator's duty cycle, and so on) target value (default to 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, the autoscaling will be based on both CPU utilization and accelerator's duty cycle metrics and scale up when either metrics exceeds its target value while scale down if both metrics are under their target value. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, the autoscaling will be based on CPU utilization metric only with default target value 60 if not explicitly set. For example, in the case of Online Prediction, if you want to override target CPU utilization to 80, you should set autoscaling_metric_specs.metric_name to `aiplatform.googleapis.com/prediction/online/cpu/utilization` and autoscaling_metric_specs.target to `80`.
      { # The metric specification that defines the target resource utilization (CPU utilization, accelerator's duty cycle, and so on) for calculating the desired replica count.
        "metricName": "A String", # Required. The resource metric name. Supported metrics: * For Online Prediction: * `aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle` * `aiplatform.googleapis.com/prediction/online/cpu/utilization`
        "target": 42, # The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided.
      },
    ],
    "machineSpec": { # Specification of a single machine. # Required. Immutable. The specification of a single machine used by the prediction.
      "acceleratorCount": 42, # The number of accelerators to attach to the machine.
      "acceleratorType": "A String", # Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
      "machineType": "A String", # Immutable. The type of the machine. See the [list of machine types supported for prediction](https://cloud.google.com/vertex-ai/docs/predictions/configure-compute#machine-types) See the [list of machine types supported for custom training](https://cloud.google.com/vertex-ai/docs/training/configure-compute#machine-types). For DeployedModel this field is optional, and the default value is `n1-standard-2`. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
      "reservationAffinity": { # A ReservationAffinity can be used to configure a Vertex AI resource (e.g., a DeployedModel) to draw its Compute Engine resources from a Shared Reservation, or exclusively from on-demand capacity. # Optional. Immutable. Configuration controlling how this resource pool consumes reservation.
        "key": "A String", # Optional. Corresponds to the label key of a reservation resource. To target a SPECIFIC_RESERVATION by name, use `compute.googleapis.com/reservation-name` as the key and specify the name of your reservation as its value.
        "reservationAffinityType": "A String", # Required. Specifies the reservation affinity type.
        "values": [ # Optional. Corresponds to the label values of a reservation resource. This must be the full resource name of the reservation.
          "A String",
        ],
      },
      "tpuTopology": "A String", # Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
    },
    "maxReplicaCount": 42, # Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, will use min_replica_count as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).
    "minReplicaCount": 42, # Required. Immutable. The minimum number of machine replicas this DeployedModel will be always deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.
    "requiredReplicaCount": 42, # Optional. Number of required available replicas for the deployment to succeed. This field is only needed when partial model deployment/mutation is desired. If set, the model deploy/mutate operation will succeed once available_replica_count reaches required_replica_count, and the rest of the replicas will be retried. If not set, the default required_replica_count will be min_replica_count.
    "spot": True or False, # Optional. If true, schedule the deployment workload on [spot VMs](https://cloud.google.com/kubernetes-engine/docs/concepts/spot-vms).
  },
  "deployedIndexAuthConfig": { # Used to set up the auth on the DeployedIndex's private endpoint. # Optional. If set, the authentication is enabled for the private endpoint.
    "authProvider": { # Configuration for an authentication provider, including support for [JSON Web Token (JWT)](https://tools.ietf.org/html/draft-ietf-oauth-json-web-token-32). # Defines the authentication provider that the DeployedIndex uses.
      "allowedIssuers": [ # A list of allowed JWT issuers. Each entry must be a valid Google service account, in the following format: `service-account-name@project-id.iam.gserviceaccount.com`
        "A String",
      ],
      "audiences": [ # The list of JWT [audiences](https://tools.ietf.org/html/draft-ietf-oauth-json-web-token-32#section-4.1.3). that are allowed to access. A JWT containing any of these audiences will be accepted.
        "A String",
      ],
    },
  },
  "deploymentGroup": "A String", # Optional. The deployment group can be no longer than 64 characters (eg: 'test', 'prod'). If not set, we will use the 'default' deployment group. Creating `deployment_groups` with `reserved_ip_ranges` is a recommended practice when the peered network has multiple peering ranges. This creates your deployments from predictable IP spaces for easier traffic administration. Also, one deployment_group (except 'default') can only be used with the same reserved_ip_ranges which means if the deployment_group has been used with reserved_ip_ranges: [a, b, c], using it with [a, b] or [d, e] is disallowed. Note: we only support up to 5 deployment groups(not including 'default').
  "displayName": "A String", # The display name of the DeployedIndex. If not provided upon creation, the Index's display_name is used.
  "enableAccessLogging": True or False, # Optional. If true, private endpoint's access logs are sent to Cloud Logging. These logs are like standard server access logs, containing information like timestamp and latency for each MatchRequest. Note that logs may incur a cost, especially if the deployed index receives a high queries per second rate (QPS). Estimate your costs before enabling this option.
  "id": "A String", # Required. The user specified ID of the DeployedIndex. The ID can be up to 128 characters long and must start with a letter and only contain letters, numbers, and underscores. The ID must be unique within the project it is created in.
  "index": "A String", # Required. The name of the Index this is the deployment of. We may refer to this Index as the DeployedIndex's "original" Index.
  "indexSyncTime": "A String", # Output only. The DeployedIndex may depend on various data on its original Index. Additionally when certain changes to the original Index are being done (e.g. when what the Index contains is being changed) the DeployedIndex may be asynchronously updated in the background to reflect these changes. If this timestamp's value is at least the Index.update_time of the original Index, it means that this DeployedIndex and the original Index are in sync. If this timestamp is older, then to see which updates this DeployedIndex already contains (and which it does not), one must list the operations that are running on the original Index. Only the successfully completed Operations with update_time equal or before this sync time are contained in this DeployedIndex.
  "privateEndpoints": { # IndexPrivateEndpoints proto is used to provide paths for users to send requests via private endpoints (e.g. private service access, private service connect). To send request via private service access, use match_grpc_address. To send request via private service connect, use service_attachment. # Output only. Provides paths for users to send requests directly to the deployed index services running on Cloud via private services access. This field is populated if network is configured.
    "matchGrpcAddress": "A String", # Output only. The ip address used to send match gRPC requests.
    "pscAutomatedEndpoints": [ # Output only. PscAutomatedEndpoints is populated if private service connect is enabled if PscAutomatedConfig is set.
      { # PscAutomatedEndpoints defines the output of the forwarding rule automatically created by each PscAutomationConfig.
        "matchAddress": "A String", # Ip Address created by the automated forwarding rule.
        "network": "A String", # Corresponding network in pscAutomationConfigs.
        "projectId": "A String", # Corresponding project_id in pscAutomationConfigs
      },
    ],
    "serviceAttachment": "A String", # Output only. The name of the service attachment resource. Populated if private service connect is enabled.
  },
  "pscAutomationConfigs": [ # Optional. If set for PSC deployed index, PSC connection will be automatically created after deployment is done and the endpoint information is populated in private_endpoints.psc_automated_endpoints.
    { # PSC config that is used to automatically create forwarding rule via ServiceConnectionMap.
      "network": "A String", # Required. The full name of the Google Compute Engine [network](https://cloud.google.com/compute/docs/networks-and-firewalls#networks). [Format](https://cloud.google.com/compute/docs/reference/rest/v1/networks/insert): `projects/{project}/global/networks/{network}`. Where {project} is a project number, as in '12345', and {network} is network name.
      "projectId": "A String", # Required. Project id used to create forwarding rule.
    },
  ],
  "reservedIpRanges": [ # Optional. A list of reserved ip ranges under the VPC network that can be used for this DeployedIndex. If set, we will deploy the index within the provided ip ranges. Otherwise, the index might be deployed to any ip ranges under the provided VPC network. The value should be the name of the address (https://cloud.google.com/compute/docs/reference/rest/v1/addresses) Example: ['vertex-ai-ip-range']. For more information about subnets and network IP ranges, please see https://cloud.google.com/vpc/docs/subnets#manually_created_subnet_ip_ranges.
    "A String",
  ],
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # This resource represents a long-running operation that is the result of a network API call.
  "done": True or False, # If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available.
  "error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # The error result of the operation in case of failure or cancellation.
    "code": 42, # The status code, which should be an enum value of google.rpc.Code.
    "details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
      {
        "a_key": "", # Properties of the object. Contains field @type with type URL.
      },
    ],
    "message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
  },
  "metadata": { # Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
    "a_key": "", # Properties of the object. Contains field @type with type URL.
  },
  "name": "A String", # The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`.
  "response": { # The normal, successful response of the operation. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
    "a_key": "", # Properties of the object. Contains field @type with type URL.
  },
}
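
An illustrative sketch of mutating a DeployedIndex's replica bounds, assuming the `service` client from the earlier sketch; the IndexEndpoint number and DeployedIndex ID are placeholders, and the body carries the required id plus only the fields being changed (here automaticResources):

  index_endpoint = (
      f"projects/{PROJECT}/locations/{LOCATION}/indexEndpoints/1234567890"  # placeholder
  )
  body = {
      "id": "my_deployed_index",  # placeholder DeployedIndex ID
      "automaticResources": {"minReplicaCount": 2, "maxReplicaCount": 10},
  }
  operation = (
      service.projects()
      .locations()
      .indexEndpoints()
      .mutateDeployedIndex(indexEndpoint=index_endpoint, body=body)
      .execute()
  )
  print(operation["name"])  # long-running Operation; poll it via the operations resource
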
patch(name, body=None, updateMask=None, x__xgafv=None)
Updates an IndexEndpoint.

Args:
  name: string, Output only. The resource name of the IndexEndpoint. (required)
  body: object, The request body.
    The object takes the form of:

{ # Indexes are deployed into it. An IndexEndpoint can have multiple DeployedIndexes.
  "createTime": "A String", # Output only. Timestamp when this IndexEndpoint was created.
  "deployedIndexes": [ # Output only. The indexes deployed in this endpoint.
    { # A deployment of an Index. IndexEndpoints contain one or more DeployedIndexes.
      "automaticResources": { # A description of resources that to large degree are decided by Vertex AI, and require only a modest additional configuration. Each Model supporting these resources documents its specific guidelines. # Optional. A description of resources that the DeployedIndex uses, which to large degree are decided by Vertex AI, and optionally allows only a modest additional configuration. If min_replica_count is not set, the default value is 2 (we don't provide SLA when min_replica_count=1). If max_replica_count is not set, the default value is min_replica_count. The max allowed replica count is 1000.
        "maxReplicaCount": 42, # Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, a no upper bound for scaling under heavy traffic will be assume, though Vertex AI may be unable to scale beyond certain replica number.
        "minReplicaCount": 42, # Immutable. The minimum number of replicas this DeployedModel will be always deployed on. If traffic against it increases, it may dynamically be deployed onto more replicas up to max_replica_count, and as traffic decreases, some of these extra replicas may be freed. If the requested value is too large, the deployment will error.
      },
      "createTime": "A String", # Output only. Timestamp when the DeployedIndex was created.
      "dedicatedResources": { # A description of resources that are dedicated to a DeployedModel, and that need a higher degree of manual configuration. # Optional. A description of resources that are dedicated to the DeployedIndex, and that need a higher degree of manual configuration. The field min_replica_count must be set to a value strictly greater than 0, or else validation will fail. We don't provide SLA when min_replica_count=1. If max_replica_count is not set, the default value is min_replica_count. The max allowed replica count is 1000. Available machine types for SMALL shard: e2-standard-2 and all machine types available for MEDIUM and LARGE shard. Available machine types for MEDIUM shard: e2-standard-16 and all machine types available for LARGE shard. Available machine types for LARGE shard: e2-highmem-16, n2d-standard-32. n1-standard-16 and n1-standard-32 are still available, but we recommend e2-standard-16 and e2-highmem-16 for cost efficiency.
        "autoscalingMetricSpecs": [ # Immutable. The metric specifications that overrides a resource utilization metric (CPU utilization, accelerator's duty cycle, and so on) target value (default to 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, the autoscaling will be based on both CPU utilization and accelerator's duty cycle metrics and scale up when either metrics exceeds its target value while scale down if both metrics are under their target value. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, the autoscaling will be based on CPU utilization metric only with default target value 60 if not explicitly set. For example, in the case of Online Prediction, if you want to override target CPU utilization to 80, you should set autoscaling_metric_specs.metric_name to `aiplatform.googleapis.com/prediction/online/cpu/utilization` and autoscaling_metric_specs.target to `80`.
          { # The metric specification that defines the target resource utilization (CPU utilization, accelerator's duty cycle, and so on) for calculating the desired replica count.
            "metricName": "A String", # Required. The resource metric name. Supported metrics: * For Online Prediction: * `aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle` * `aiplatform.googleapis.com/prediction/online/cpu/utilization`
            "target": 42, # The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided.
          },
        ],
        "machineSpec": { # Specification of a single machine. # Required. Immutable. The specification of a single machine used by the prediction.
          "acceleratorCount": 42, # The number of accelerators to attach to the machine.
          "acceleratorType": "A String", # Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
          "machineType": "A String", # Immutable. The type of the machine. See the [list of machine types supported for prediction](https://cloud.google.com/vertex-ai/docs/predictions/configure-compute#machine-types) See the [list of machine types supported for custom training](https://cloud.google.com/vertex-ai/docs/training/configure-compute#machine-types). For DeployedModel this field is optional, and the default value is `n1-standard-2`. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
          "reservationAffinity": { # A ReservationAffinity can be used to configure a Vertex AI resource (e.g., a DeployedModel) to draw its Compute Engine resources from a Shared Reservation, or exclusively from on-demand capacity. # Optional. Immutable. Configuration controlling how this resource pool consumes reservation.
            "key": "A String", # Optional. Corresponds to the label key of a reservation resource. To target a SPECIFIC_RESERVATION by name, use `compute.googleapis.com/reservation-name` as the key and specify the name of your reservation as its value.
            "reservationAffinityType": "A String", # Required. Specifies the reservation affinity type.
            "values": [ # Optional. Corresponds to the label values of a reservation resource. This must be the full resource name of the reservation.
              "A String",
            ],
          },
          "tpuTopology": "A String", # Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
        },
        "maxReplicaCount": 42, # Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, will use min_replica_count as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).
        "minReplicaCount": 42, # Required. Immutable. The minimum number of machine replicas this DeployedModel will be always deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.
        "requiredReplicaCount": 42, # Optional. Number of required available replicas for the deployment to succeed. This field is only needed when partial model deployment/mutation is desired. If set, the model deploy/mutate operation will succeed once available_replica_count reaches required_replica_count, and the rest of the replicas will be retried. If not set, the default required_replica_count will be min_replica_count.
        "spot": True or False, # Optional. If true, schedule the deployment workload on [spot VMs](https://cloud.google.com/kubernetes-engine/docs/concepts/spot-vms).
      },
      "deployedIndexAuthConfig": { # Used to set up the auth on the DeployedIndex's private endpoint. # Optional. If set, the authentication is enabled for the private endpoint.
        "authProvider": { # Configuration for an authentication provider, including support for [JSON Web Token (JWT)](https://tools.ietf.org/html/draft-ietf-oauth-json-web-token-32). # Defines the authentication provider that the DeployedIndex uses.
          "allowedIssuers": [ # A list of allowed JWT issuers. Each entry must be a valid Google service account, in the following format: `service-account-name@project-id.iam.gserviceaccount.com`
            "A String",
          ],
          "audiences": [ # The list of JWT [audiences](https://tools.ietf.org/html/draft-ietf-oauth-json-web-token-32#section-4.1.3). that are allowed to access. A JWT containing any of these audiences will be accepted.
            "A String",
          ],
        },
      },
      "deploymentGroup": "A String", # Optional. The deployment group can be no longer than 64 characters (eg: 'test', 'prod'). If not set, we will use the 'default' deployment group. Creating `deployment_groups` with `reserved_ip_ranges` is a recommended practice when the peered network has multiple peering ranges. This creates your deployments from predictable IP spaces for easier traffic administration. Also, one deployment_group (except 'default') can only be used with the same reserved_ip_ranges which means if the deployment_group has been used with reserved_ip_ranges: [a, b, c], using it with [a, b] or [d, e] is disallowed. Note: we only support up to 5 deployment groups(not including 'default').
      "displayName": "A String", # The display name of the DeployedIndex. If not provided upon creation, the Index's display_name is used.
      "enableAccessLogging": True or False, # Optional. If true, private endpoint's access logs are sent to Cloud Logging. These logs are like standard server access logs, containing information like timestamp and latency for each MatchRequest. Note that logs may incur a cost, especially if the deployed index receives a high queries per second rate (QPS). Estimate your costs before enabling this option.
      "id": "A String", # Required. The user specified ID of the DeployedIndex. The ID can be up to 128 characters long and must start with a letter and only contain letters, numbers, and underscores. The ID must be unique within the project it is created in.
      "index": "A String", # Required. The name of the Index this is the deployment of. We may refer to this Index as the DeployedIndex's "original" Index.
      "indexSyncTime": "A String", # Output only. The DeployedIndex may depend on various data on its original Index. Additionally when certain changes to the original Index are being done (e.g. when what the Index contains is being changed) the DeployedIndex may be asynchronously updated in the background to reflect these changes. If this timestamp's value is at least the Index.update_time of the original Index, it means that this DeployedIndex and the original Index are in sync. If this timestamp is older, then to see which updates this DeployedIndex already contains (and which it does not), one must list the operations that are running on the original Index. Only the successfully completed Operations with update_time equal or before this sync time are contained in this DeployedIndex.
      "privateEndpoints": { # IndexPrivateEndpoints proto is used to provide paths for users to send requests via private endpoints (e.g. private service access, private service connect). To send request via private service access, use match_grpc_address. To send request via private service connect, use service_attachment. # Output only. Provides paths for users to send requests directly to the deployed index services running on Cloud via private services access. This field is populated if network is configured.
        "matchGrpcAddress": "A String", # Output only. The ip address used to send match gRPC requests.
        "pscAutomatedEndpoints": [ # Output only. PscAutomatedEndpoints is populated if private service connect is enabled if PscAutomatedConfig is set.
          { # PscAutomatedEndpoints defines the output of the forwarding rule automatically created by each PscAutomationConfig.
            "matchAddress": "A String", # Ip Address created by the automated forwarding rule.
            "network": "A String", # Corresponding network in pscAutomationConfigs.
            "projectId": "A String", # Corresponding project_id in pscAutomationConfigs
          },
        ],
        "serviceAttachment": "A String", # Output only. The name of the service attachment resource. Populated if private service connect is enabled.
      },
      "pscAutomationConfigs": [ # Optional. If set for PSC deployed index, PSC connection will be automatically created after deployment is done and the endpoint information is populated in private_endpoints.psc_automated_endpoints.
        { # PSC config that is used to automatically create forwarding rule via ServiceConnectionMap.
          "network": "A String", # Required. The full name of the Google Compute Engine [network](https://cloud.google.com/compute/docs/networks-and-firewalls#networks). [Format](https://cloud.google.com/compute/docs/reference/rest/v1/networks/insert): `projects/{project}/global/networks/{network}`. Where {project} is a project number, as in '12345', and {network} is network name.
          "projectId": "A String", # Required. Project id used to create forwarding rule.
        },
      ],
      "reservedIpRanges": [ # Optional. A list of reserved ip ranges under the VPC network that can be used for this DeployedIndex. If set, we will deploy the index within the provided ip ranges. Otherwise, the index might be deployed to any ip ranges under the provided VPC network. The value should be the name of the address (https://cloud.google.com/compute/docs/reference/rest/v1/addresses) Example: ['vertex-ai-ip-range']. For more information about subnets and network IP ranges, please see https://cloud.google.com/vpc/docs/subnets#manually_created_subnet_ip_ranges.
        "A String",
      ],
    },
  ],
  "description": "A String", # The description of the IndexEndpoint.
  "displayName": "A String", # Required. The display name of the IndexEndpoint. The name can be up to 128 characters long and can consist of any UTF-8 characters.
  "enablePrivateServiceConnect": True or False, # Optional. Deprecated: If true, expose the IndexEndpoint via private service connect. Only one of the fields, network or enable_private_service_connect, can be set.
  "encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Immutable. Customer-managed encryption key spec for an IndexEndpoint. If set, this IndexEndpoint and all sub-resources of this IndexEndpoint will be secured by this key.
    "kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created.
  },
  "etag": "A String", # Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
  "labels": { # The labels with user-defined metadata to organize your IndexEndpoints. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
    "a_key": "A String",
  },
  "name": "A String", # Output only. The resource name of the IndexEndpoint.
  "network": "A String", # Optional. The full name of the Google Compute Engine [network](https://cloud.google.com/compute/docs/networks-and-firewalls#networks) to which the IndexEndpoint should be peered. Private services access must already be configured for the network. If left unspecified, the Endpoint is not peered with any network. network and private_service_connect_config are mutually exclusive. [Format](https://cloud.google.com/compute/docs/reference/rest/v1/networks/insert): `projects/{project}/global/networks/{network}`. Where {project} is a project number, as in '12345', and {network} is network name.
  "privateServiceConnectConfig": { # Represents configuration for private service connect. # Optional. Configuration for private service connect. network and private_service_connect_config are mutually exclusive.
    "enablePrivateServiceConnect": True or False, # Required. If true, expose the IndexEndpoint via private service connect.
    "projectAllowlist": [ # A list of Projects from which the forwarding rule will target the service attachment.
      "A String",
    ],
    "serviceAttachment": "A String", # Output only. The name of the generated service attachment resource. This is only populated if the endpoint is deployed with PrivateServiceConnect.
  },
  "publicEndpointDomainName": "A String", # Output only. If public_endpoint_enabled is true, this field will be populated with the domain name to use for this index endpoint.
  "publicEndpointEnabled": True or False, # Optional. If true, the deployed index will be accessible through public endpoint.
  "satisfiesPzi": True or False, # Output only. Reserved for future use.
  "satisfiesPzs": True or False, # Output only. Reserved for future use.
  "updateTime": "A String", # Output only. Timestamp when this IndexEndpoint was last updated. This timestamp is not updated when the endpoint's DeployedIndexes are updated, e.g. due to updates of the original Indexes they are the deployments of.
}

  updateMask: string, Required. The update mask applies to the resource. See google.protobuf.FieldMask.
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Indexes are deployed into it. An IndexEndpoint can have multiple DeployedIndexes.
  "createTime": "A String", # Output only. Timestamp when this IndexEndpoint was created.
  "deployedIndexes": [ # Output only. The indexes deployed in this endpoint.
    { # A deployment of an Index. IndexEndpoints contain one or more DeployedIndexes.
      "automaticResources": { # A description of resources that to large degree are decided by Vertex AI, and require only a modest additional configuration. Each Model supporting these resources documents its specific guidelines. # Optional. A description of resources that the DeployedIndex uses, which to large degree are decided by Vertex AI, and optionally allows only a modest additional configuration. If min_replica_count is not set, the default value is 2 (we don't provide SLA when min_replica_count=1). If max_replica_count is not set, the default value is min_replica_count. The max allowed replica count is 1000.
        "maxReplicaCount": 42, # Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, a no upper bound for scaling under heavy traffic will be assume, though Vertex AI may be unable to scale beyond certain replica number.
        "minReplicaCount": 42, # Immutable. The minimum number of replicas this DeployedModel will be always deployed on. If traffic against it increases, it may dynamically be deployed onto more replicas up to max_replica_count, and as traffic decreases, some of these extra replicas may be freed. If the requested value is too large, the deployment will error.
      },
      "createTime": "A String", # Output only. Timestamp when the DeployedIndex was created.
      "dedicatedResources": { # A description of resources that are dedicated to a DeployedModel, and that need a higher degree of manual configuration. # Optional. A description of resources that are dedicated to the DeployedIndex, and that need a higher degree of manual configuration. The field min_replica_count must be set to a value strictly greater than 0, or else validation will fail. We don't provide SLA when min_replica_count=1. If max_replica_count is not set, the default value is min_replica_count. The max allowed replica count is 1000. Available machine types for SMALL shard: e2-standard-2 and all machine types available for MEDIUM and LARGE shard. Available machine types for MEDIUM shard: e2-standard-16 and all machine types available for LARGE shard. Available machine types for LARGE shard: e2-highmem-16, n2d-standard-32. n1-standard-16 and n1-standard-32 are still available, but we recommend e2-standard-16 and e2-highmem-16 for cost efficiency.
        "autoscalingMetricSpecs": [ # Immutable. The metric specifications that overrides a resource utilization metric (CPU utilization, accelerator's duty cycle, and so on) target value (default to 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, the autoscaling will be based on both CPU utilization and accelerator's duty cycle metrics and scale up when either metrics exceeds its target value while scale down if both metrics are under their target value. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, the autoscaling will be based on CPU utilization metric only with default target value 60 if not explicitly set. For example, in the case of Online Prediction, if you want to override target CPU utilization to 80, you should set autoscaling_metric_specs.metric_name to `aiplatform.googleapis.com/prediction/online/cpu/utilization` and autoscaling_metric_specs.target to `80`.
          { # The metric specification that defines the target resource utilization (CPU utilization, accelerator's duty cycle, and so on) for calculating the desired replica count.
            "metricName": "A String", # Required. The resource metric name. Supported metrics: * For Online Prediction: * `aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle` * `aiplatform.googleapis.com/prediction/online/cpu/utilization`
            "target": 42, # The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided.
          },
        ],
        "machineSpec": { # Specification of a single machine. # Required. Immutable. The specification of a single machine used by the prediction.
          "acceleratorCount": 42, # The number of accelerators to attach to the machine.
          "acceleratorType": "A String", # Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
          "machineType": "A String", # Immutable. The type of the machine. See the [list of machine types supported for prediction](https://cloud.google.com/vertex-ai/docs/predictions/configure-compute#machine-types) See the [list of machine types supported for custom training](https://cloud.google.com/vertex-ai/docs/training/configure-compute#machine-types). For DeployedModel this field is optional, and the default value is `n1-standard-2`. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
          "reservationAffinity": { # A ReservationAffinity can be used to configure a Vertex AI resource (e.g., a DeployedModel) to draw its Compute Engine resources from a Shared Reservation, or exclusively from on-demand capacity. # Optional. Immutable. Configuration controlling how this resource pool consumes reservation.
            "key": "A String", # Optional. Corresponds to the label key of a reservation resource. To target a SPECIFIC_RESERVATION by name, use `compute.googleapis.com/reservation-name` as the key and specify the name of your reservation as its value.
            "reservationAffinityType": "A String", # Required. Specifies the reservation affinity type.
            "values": [ # Optional. Corresponds to the label values of a reservation resource. This must be the full resource name of the reservation.
              "A String",
            ],
          },
          "tpuTopology": "A String", # Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
        },
        "maxReplicaCount": 42, # Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, will use min_replica_count as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).
        "minReplicaCount": 42, # Required. Immutable. The minimum number of machine replicas this DeployedModel will be always deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.
        "requiredReplicaCount": 42, # Optional. Number of required available replicas for the deployment to succeed. This field is only needed when partial model deployment/mutation is desired. If set, the model deploy/mutate operation will succeed once available_replica_count reaches required_replica_count, and the rest of the replicas will be retried. If not set, the default required_replica_count will be min_replica_count.
        "spot": True or False, # Optional. If true, schedule the deployment workload on [spot VMs](https://cloud.google.com/kubernetes-engine/docs/concepts/spot-vms).
      },
      "deployedIndexAuthConfig": { # Used to set up the auth on the DeployedIndex's private endpoint. # Optional. If set, the authentication is enabled for the private endpoint.
        "authProvider": { # Configuration for an authentication provider, including support for [JSON Web Token (JWT)](https://tools.ietf.org/html/draft-ietf-oauth-json-web-token-32). # Defines the authentication provider that the DeployedIndex uses.
          "allowedIssuers": [ # A list of allowed JWT issuers. Each entry must be a valid Google service account, in the following format: `service-account-name@project-id.iam.gserviceaccount.com`
            "A String",
          ],
          "audiences": [ # The list of JWT [audiences](https://tools.ietf.org/html/draft-ietf-oauth-json-web-token-32#section-4.1.3). that are allowed to access. A JWT containing any of these audiences will be accepted.
            "A String",
          ],
        },
      },
      "deploymentGroup": "A String", # Optional. The deployment group can be no longer than 64 characters (eg: 'test', 'prod'). If not set, we will use the 'default' deployment group. Creating `deployment_groups` with `reserved_ip_ranges` is a recommended practice when the peered network has multiple peering ranges. This creates your deployments from predictable IP spaces for easier traffic administration. Also, one deployment_group (except 'default') can only be used with the same reserved_ip_ranges which means if the deployment_group has been used with reserved_ip_ranges: [a, b, c], using it with [a, b] or [d, e] is disallowed. Note: we only support up to 5 deployment groups(not including 'default').
      "displayName": "A String", # The display name of the DeployedIndex. If not provided upon creation, the Index's display_name is used.
      "enableAccessLogging": True or False, # Optional. If true, private endpoint's access logs are sent to Cloud Logging. These logs are like standard server access logs, containing information like timestamp and latency for each MatchRequest. Note that logs may incur a cost, especially if the deployed index receives a high queries per second rate (QPS). Estimate your costs before enabling this option.
      "id": "A String", # Required. The user specified ID of the DeployedIndex. The ID can be up to 128 characters long and must start with a letter and only contain letters, numbers, and underscores. The ID must be unique within the project it is created in.
      "index": "A String", # Required. The name of the Index this is the deployment of. We may refer to this Index as the DeployedIndex's "original" Index.
      "indexSyncTime": "A String", # Output only. The DeployedIndex may depend on various data on its original Index. Additionally when certain changes to the original Index are being done (e.g. when what the Index contains is being changed) the DeployedIndex may be asynchronously updated in the background to reflect these changes. If this timestamp's value is at least the Index.update_time of the original Index, it means that this DeployedIndex and the original Index are in sync. If this timestamp is older, then to see which updates this DeployedIndex already contains (and which it does not), one must list the operations that are running on the original Index. Only the successfully completed Operations with update_time equal or before this sync time are contained in this DeployedIndex.
      "privateEndpoints": { # IndexPrivateEndpoints proto is used to provide paths for users to send requests via private endpoints (e.g. private service access, private service connect). To send request via private service access, use match_grpc_address. To send request via private service connect, use service_attachment. # Output only. Provides paths for users to send requests directly to the deployed index services running on Cloud via private services access. This field is populated if network is configured.
        "matchGrpcAddress": "A String", # Output only. The ip address used to send match gRPC requests.
        "pscAutomatedEndpoints": [ # Output only. PscAutomatedEndpoints is populated if private service connect is enabled if PscAutomatedConfig is set.
          { # PscAutomatedEndpoints defines the output of the forwarding rule automatically created by each PscAutomationConfig.
            "matchAddress": "A String", # Ip Address created by the automated forwarding rule.
            "network": "A String", # Corresponding network in pscAutomationConfigs.
            "projectId": "A String", # Corresponding project_id in pscAutomationConfigs
          },
        ],
        "serviceAttachment": "A String", # Output only. The name of the service attachment resource. Populated if private service connect is enabled.
      },
      "pscAutomationConfigs": [ # Optional. If set for PSC deployed index, PSC connection will be automatically created after deployment is done and the endpoint information is populated in private_endpoints.psc_automated_endpoints.
        { # PSC config that is used to automatically create forwarding rule via ServiceConnectionMap.
          "network": "A String", # Required. The full name of the Google Compute Engine [network](https://cloud.google.com/compute/docs/networks-and-firewalls#networks). [Format](https://cloud.google.com/compute/docs/reference/rest/v1/networks/insert): `projects/{project}/global/networks/{network}`. Where {project} is a project number, as in '12345', and {network} is network name.
          "projectId": "A String", # Required. Project id used to create forwarding rule.
        },
      ],
      "reservedIpRanges": [ # Optional. A list of reserved ip ranges under the VPC network that can be used for this DeployedIndex. If set, we will deploy the index within the provided ip ranges. Otherwise, the index might be deployed to any ip ranges under the provided VPC network. The value should be the name of the address (https://cloud.google.com/compute/docs/reference/rest/v1/addresses) Example: ['vertex-ai-ip-range']. For more information about subnets and network IP ranges, please see https://cloud.google.com/vpc/docs/subnets#manually_created_subnet_ip_ranges.
        "A String",
      ],
    },
  ],
  "description": "A String", # The description of the IndexEndpoint.
  "displayName": "A String", # Required. The display name of the IndexEndpoint. The name can be up to 128 characters long and can consist of any UTF-8 characters.
  "enablePrivateServiceConnect": True or False, # Optional. Deprecated: If true, expose the IndexEndpoint via private service connect. Only one of the fields, network or enable_private_service_connect, can be set.
  "encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Immutable. Customer-managed encryption key spec for an IndexEndpoint. If set, this IndexEndpoint and all sub-resources of this IndexEndpoint will be secured by this key.
    "kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created.
  },
  "etag": "A String", # Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
  "labels": { # The labels with user-defined metadata to organize your IndexEndpoints. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
    "a_key": "A String",
  },
  "name": "A String", # Output only. The resource name of the IndexEndpoint.
  "network": "A String", # Optional. The full name of the Google Compute Engine [network](https://cloud.google.com/compute/docs/networks-and-firewalls#networks) to which the IndexEndpoint should be peered. Private services access must already be configured for the network. If left unspecified, the Endpoint is not peered with any network. network and private_service_connect_config are mutually exclusive. [Format](https://cloud.google.com/compute/docs/reference/rest/v1/networks/insert): `projects/{project}/global/networks/{network}`. Where {project} is a project number, as in '12345', and {network} is network name.
  "privateServiceConnectConfig": { # Represents configuration for private service connect. # Optional. Configuration for private service connect. network and private_service_connect_config are mutually exclusive.
    "enablePrivateServiceConnect": True or False, # Required. If true, expose the IndexEndpoint via private service connect.
    "projectAllowlist": [ # A list of Projects from which the forwarding rule will target the service attachment.
      "A String",
    ],
    "serviceAttachment": "A String", # Output only. The name of the generated service attachment resource. This is only populated if the endpoint is deployed with PrivateServiceConnect.
  },
  "publicEndpointDomainName": "A String", # Output only. If public_endpoint_enabled is true, this field will be populated with the domain name to use for this index endpoint.
  "publicEndpointEnabled": True or False, # Optional. If true, the deployed index will be accessible through public endpoint.
  "satisfiesPzi": True or False, # Output only. Reserved for future use.
  "satisfiesPzs": True or False, # Output only. Reserved for future use.
  "updateTime": "A String", # Output only. Timestamp when this IndexEndpoint was last updated. This timestamp is not updated when the endpoint's DeployedIndexes are updated, e.g. due to updates of the original Indexes they are the deployments of.
}
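
The following is a minimal sketch (not part of the generated reference) of calling this patch method with the google-api-python-client; the project, location, endpoint ID, field values, and FieldMask paths below are placeholder assumptions, and credential setup is omitted.

from googleapiclient.discovery import build

# Build the Vertex AI (aiplatform) discovery client; application default credentials are assumed.
aiplatform = build("aiplatform", "v1")

name = "projects/my-project/locations/us-central1/indexEndpoints/1234567890"
body = {
    "displayName": "my-index-endpoint",  # new display name
    "labels": {"env": "test"},           # replaces the user-defined labels
}

# updateMask lists the fields to change; fields not listed are left untouched.
response = (
    aiplatform.projects()
    .locations()
    .indexEndpoints()
    .patch(name=name, body=body, updateMask="displayName,labels")
    .execute()
)
print(response["name"], response.get("updateTime"))
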
readIndexDatapoints(indexEndpoint, body=None, x__xgafv=None)
Reads the datapoints/vectors of the given IDs. A maximum of 1000 datapoints can be retrieved in a batch.

Args:
  indexEndpoint: string, Required. The name of the index endpoint. Format: `projects/{project}/locations/{location}/indexEndpoints/{index_endpoint}` (required)
  body: object, The request body.
    The object takes the form of:

{ # The request message for MatchService.ReadIndexDatapoints.
  "deployedIndexId": "A String", # The ID of the DeployedIndex that will serve the request.
  "ids": [ # IDs of the datapoints to be searched for.
    "A String",
  ],
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # The response message for MatchService.ReadIndexDatapoints.
  "datapoints": [ # The result list of datapoints.
    { # A datapoint of Index.
      "crowdingTag": { # Crowding tag is a constraint on a neighbor list produced by nearest neighbor search requiring that no more than some value k' of the k neighbors returned have the same value of crowding_attribute. # Optional. CrowdingTag of the datapoint, the number of neighbors to return in each crowding can be configured during query.
        "crowdingAttribute": "A String", # The attribute value used for crowding. The maximum number of neighbors to return per crowding attribute value (per_crowding_attribute_num_neighbors) is configured per-query. This field is ignored if per_crowding_attribute_num_neighbors is larger than the total number of neighbors to return for a given query.
      },
      "datapointId": "A String", # Required. Unique identifier of the datapoint.
      "featureVector": [ # Required. Feature embedding vector for dense index. An array of numbers with the length of [NearestNeighborSearchConfig.dimensions].
        3.14,
      ],
      "numericRestricts": [ # Optional. List of Restrict of the datapoint, used to perform "restricted searches" where boolean rule are used to filter the subset of the database eligible for matching. This uses numeric comparisons.
        { # This field allows restricts to be based on numeric comparisons rather than categorical tokens.
          "namespace": "A String", # The namespace of this restriction. e.g.: cost.
          "op": "A String", # This MUST be specified for queries and must NOT be specified for datapoints.
          "valueDouble": 3.14, # Represents 64 bit float.
          "valueFloat": 3.14, # Represents 32 bit float.
          "valueInt": "A String", # Represents 64 bit integer.
        },
      ],
      "restricts": [ # Optional. List of Restrict of the datapoint, used to perform "restricted searches" where boolean rule are used to filter the subset of the database eligible for matching. This uses categorical tokens. See: https://cloud.google.com/vertex-ai/docs/matching-engine/filtering
        { # Restriction of a datapoint which describe its attributes(tokens) from each of several attribute categories(namespaces).
          "allowList": [ # The attributes to allow in this namespace. e.g.: 'red'
            "A String",
          ],
          "denyList": [ # The attributes to deny in this namespace. e.g.: 'blue'
            "A String",
          ],
          "namespace": "A String", # The namespace of this restriction. e.g.: color.
        },
      ],
      "sparseEmbedding": { # Feature embedding vector for sparse index. An array of numbers whose values are located in the specified dimensions. # Optional. Feature embedding vector for sparse index.
        "dimensions": [ # Required. The list of indexes for the embedding values of the sparse vector.
          "A String",
        ],
        "values": [ # Required. The list of embedding values of the sparse vector.
          3.14,
        ],
      },
    },
  ],
}
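
The following is a minimal sketch (not part of the generated reference) of calling readIndexDatapoints; the endpoint name, deployed index ID, datapoint IDs, and the regional api_endpoint are placeholder assumptions, since the host to use depends on how the IndexEndpoint is deployed (for example, a public endpoint domain name versus a private endpoint).

from googleapiclient.discovery import build

# The service host for read/match calls is assumed to be the regional endpoint here.
aiplatform = build(
    "aiplatform",
    "v1",
    client_options={"api_endpoint": "https://us-central1-aiplatform.googleapis.com"},
)

index_endpoint = "projects/my-project/locations/us-central1/indexEndpoints/1234567890"
body = {
    "deployedIndexId": "my_deployed_index",
    "ids": ["datapoint-1", "datapoint-2"],  # at most 1000 IDs per batch
}

response = (
    aiplatform.projects()
    .locations()
    .indexEndpoints()
    .readIndexDatapoints(indexEndpoint=index_endpoint, body=body)
    .execute()
)
for datapoint in response.get("datapoints", []):
    print(datapoint["datapointId"], len(datapoint.get("featureVector", [])))
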
undeployIndex(indexEndpoint, body=None, x__xgafv=None)
Undeploys an Index from an IndexEndpoint, removing a DeployedIndex from it, and freeing all resources it's using.

Args:
  indexEndpoint: string, Required. The name of the IndexEndpoint resource from which to undeploy an Index. Format: `projects/{project}/locations/{location}/indexEndpoints/{index_endpoint}` (required)
  body: object, The request body.
    The object takes the form of:

{ # Request message for IndexEndpointService.UndeployIndex.
  "deployedIndexId": "A String", # Required. The ID of the DeployedIndex to be undeployed from the IndexEndpoint.
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # This resource represents a long-running operation that is the result of a network API call.
  "done": True or False, # If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available.
  "error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # The error result of the operation in case of failure or cancellation.
    "code": 42, # The status code, which should be an enum value of google.rpc.Code.
    "details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
      {
        "a_key": "", # Properties of the object. Contains field @type with type URL.
      },
    ],
    "message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
  },
  "metadata": { # Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
    "a_key": "", # Properties of the object. Contains field @type with type URL.
  },
  "name": "A String", # The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`.
  "response": { # The normal, successful response of the operation. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
    "a_key": "", # Properties of the object. Contains field @type with type URL.
  },
}
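
The following is a minimal sketch (not part of the generated reference) of undeploying a DeployedIndex and polling the returned long-running operation through the indexEndpoints operations resource; all names and IDs below are placeholder assumptions.

import time

from googleapiclient.discovery import build

aiplatform = build("aiplatform", "v1")

index_endpoint = "projects/my-project/locations/us-central1/indexEndpoints/1234567890"
body = {"deployedIndexId": "my_deployed_index"}

operation = (
    aiplatform.projects()
    .locations()
    .indexEndpoints()
    .undeployIndex(indexEndpoint=index_endpoint, body=body)
    .execute()
)

# Poll the operation until done is true, then surface any error result.
operations = aiplatform.projects().locations().indexEndpoints().operations()
while not operation.get("done"):
    time.sleep(10)
    operation = operations.get(name=operation["name"]).execute()

if "error" in operation:
    raise RuntimeError(operation["error"].get("message", "undeployIndex failed"))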