batchPredictionJobs()
Returns the batchPredictionJobs Resource.
customJobs()
Returns the customJobs Resource.
dataLabelingJobs()
Returns the dataLabelingJobs Resource.
datasets()
Returns the datasets Resource.
deploymentResourcePools()
Returns the deploymentResourcePools Resource.
endpoints()
Returns the endpoints Resource.
featureGroups()
Returns the featureGroups Resource.
featureOnlineStores()
Returns the featureOnlineStores Resource.
featurestores()
Returns the featurestores Resource.
hyperparameterTuningJobs()
Returns the hyperparameterTuningJobs Resource.
indexEndpoints()
Returns the indexEndpoints Resource.
indexes()
Returns the indexes Resource.
metadataStores()
Returns the metadataStores Resource.
migratableResources()
Returns the migratableResources Resource.
modelDeploymentMonitoringJobs()
Returns the modelDeploymentMonitoringJobs Resource.
models()
Returns the models Resource.
nasJobs()
Returns the nasJobs Resource.
notebookExecutionJobs()
Returns the notebookExecutionJobs Resource.
notebookRuntimeTemplates()
Returns the notebookRuntimeTemplates Resource.
notebookRuntimes()
Returns the notebookRuntimes Resource.
operations()
Returns the operations Resource.
persistentResources()
Returns the persistentResources Resource.
pipelineJobs()
Returns the pipelineJobs Resource.
publishers()
Returns the publishers Resource.
schedules()
Returns the schedules Resource.
specialistPools()
Returns the specialistPools Resource.
studies()
Returns the studies Resource.
tensorboards()
Returns the tensorboards Resource.
trainingPipelines()
Returns the trainingPipelines Resource.
tuningJobs()
Returns the tuningJobs Resource.
close()
Close httplib2 connections.
evaluateInstances(location, body=None, x__xgafv=None)
Evaluates instances based on a given metric.
get(name, x__xgafv=None)
Gets information about a location.
list(name, filter=None, pageSize=None, pageToken=None, x__xgafv=None)
Lists information about the supported locations for this service.
list_next()
Retrieves the next page of results.
close()
Close httplib2 connections.
evaluateInstances(location, body=None, x__xgafv=None)
Evaluates instances based on a given metric.

Args:
  location: string, Required. The resource name of the Location to evaluate the instances. Format: `projects/{project}/locations/{location}` (required)
  body: object, The request body.
    The object takes the form of:

{ # Request message for EvaluationService.EvaluateInstances.
  "bleuInput": { # Input for bleu metric. # Instances and metric spec for bleu metric.
    "instances": [ # Required. Repeated bleu instances.
      { # Spec for bleu instance.
        "prediction": "A String", # Required. Output of the evaluated model.
        "reference": "A String", # Required. Ground truth used to compare against the prediction.
      },
    ],
    "metricSpec": { # Spec for bleu score metric - calculates the precision of n-grams in the prediction as compared to reference - returns a score ranging between 0 to 1. # Required. Spec for bleu score metric.
      "useEffectiveOrder": True or False, # Optional. Whether to use_effective_order to compute bleu score.
    },
  },
  "coherenceInput": { # Input for coherence metric. # Input for coherence metric.
    "instance": { # Spec for coherence instance. # Required. Coherence instance.
      "prediction": "A String", # Required. Output of the evaluated model.
    },
    "metricSpec": { # Spec for coherence score metric. # Required. Spec for coherence score metric.
      "version": 42, # Optional. Which version to use for evaluation.
    },
  },
  "exactMatchInput": { # Input for exact match metric. # Auto metric instances. Instances and metric spec for exact match metric.
    "instances": [ # Required. Repeated exact match instances.
      { # Spec for exact match instance.
        "prediction": "A String", # Required. Output of the evaluated model.
        "reference": "A String", # Required. Ground truth used to compare against the prediction.
      },
    ],
    "metricSpec": { # Spec for exact match metric - returns 1 if prediction and reference exactly matches, otherwise 0. # Required. Spec for exact match metric.
    },
  },
  "fluencyInput": { # Input for fluency metric. # LLM-based metric instance. General text generation metrics, applicable to other categories. Input for fluency metric.
    "instance": { # Spec for fluency instance. # Required. Fluency instance.
      "prediction": "A String", # Required. Output of the evaluated model.
    },
    "metricSpec": { # Spec for fluency score metric. # Required. Spec for fluency score metric.
      "version": 42, # Optional. Which version to use for evaluation.
    },
  },
  "fulfillmentInput": { # Input for fulfillment metric. # Input for fulfillment metric.
    "instance": { # Spec for fulfillment instance. # Required. Fulfillment instance.
      "instruction": "A String", # Required. Inference instruction prompt to compare prediction with.
      "prediction": "A String", # Required. Output of the evaluated model.
    },
    "metricSpec": { # Spec for fulfillment metric. # Required. Spec for fulfillment score metric.
      "version": 42, # Optional. Which version to use for evaluation.
    },
  },
  "groundednessInput": { # Input for groundedness metric. # Input for groundedness metric.
    "instance": { # Spec for groundedness instance. # Required. Groundedness instance.
      "context": "A String", # Required. Background information provided in context used to compare against the prediction.
      "prediction": "A String", # Required. Output of the evaluated model.
    },
    "metricSpec": { # Spec for groundedness metric. # Required. Spec for groundedness metric.
      "version": 42, # Optional. Which version to use for evaluation.
    },
  },
  "pairwiseMetricInput": { # Input for pairwise metric. # Input for pairwise metric.
    "instance": { # Pairwise metric instance. Usually one instance corresponds to one row in an evaluation dataset. # Required. Pairwise metric instance.
      "jsonInstance": "A String", # Instance specified as a json string. String key-value pairs are expected in the json_instance to render PairwiseMetricSpec.instance_prompt_template.
    },
    "metricSpec": { # Spec for pairwise metric. # Required. Spec for pairwise metric.
      "metricPromptTemplate": "A String", # Required. Metric prompt template for pairwise metric.
    },
  },
  "pairwiseQuestionAnsweringQualityInput": { # Input for pairwise question answering quality metric. # Input for pairwise question answering quality metric.
    "instance": { # Spec for pairwise question answering quality instance. # Required. Pairwise question answering quality instance.
      "baselinePrediction": "A String", # Required. Output of the baseline model.
      "context": "A String", # Required. Text to answer the question.
      "instruction": "A String", # Required. Question Answering prompt for LLM.
      "prediction": "A String", # Required. Output of the candidate model.
      "reference": "A String", # Optional. Ground truth used to compare against the prediction.
    },
    "metricSpec": { # Spec for pairwise question answering quality score metric. # Required. Spec for pairwise question answering quality score metric.
      "useReference": True or False, # Optional. Whether to use instance.reference to compute question answering quality.
      "version": 42, # Optional. Which version to use for evaluation.
    },
  },
  "pairwiseSummarizationQualityInput": { # Input for pairwise summarization quality metric. # Input for pairwise summarization quality metric.
    "instance": { # Spec for pairwise summarization quality instance. # Required. Pairwise summarization quality instance.
      "baselinePrediction": "A String", # Required. Output of the baseline model.
      "context": "A String", # Required. Text to be summarized.
      "instruction": "A String", # Required. Summarization prompt for LLM.
      "prediction": "A String", # Required. Output of the candidate model.
      "reference": "A String", # Optional. Ground truth used to compare against the prediction.
    },
    "metricSpec": { # Spec for pairwise summarization quality score metric. # Required. Spec for pairwise summarization quality score metric.
      "useReference": True or False, # Optional. Whether to use instance.reference to compute pairwise summarization quality.
      "version": 42, # Optional. Which version to use for evaluation.
    },
  },
  "pointwiseMetricInput": { # Input for pointwise metric. # Input for pointwise metric.
    "instance": { # Pointwise metric instance. Usually one instance corresponds to one row in an evaluation dataset. # Required. Pointwise metric instance.
      "jsonInstance": "A String", # Instance specified as a json string. String key-value pairs are expected in the json_instance to render PointwiseMetricSpec.instance_prompt_template.
    },
    "metricSpec": { # Spec for pointwise metric. # Required. Spec for pointwise metric.
      "metricPromptTemplate": "A String", # Required. Metric prompt template for pointwise metric.
    },
  },
  "questionAnsweringCorrectnessInput": { # Input for question answering correctness metric. # Input for question answering correctness metric.
    "instance": { # Spec for question answering correctness instance. # Required. Question answering correctness instance.
      "context": "A String", # Optional. Text provided as context to answer the question.
      "instruction": "A String", # Required. The question asked and other instruction in the inference prompt.
      "prediction": "A String", # Required. Output of the evaluated model.
      "reference": "A String", # Optional. Ground truth used to compare against the prediction.
    },
    "metricSpec": { # Spec for question answering correctness metric. # Required. Spec for question answering correctness score metric.
      "useReference": True or False, # Optional. Whether to use instance.reference to compute question answering correctness.
      "version": 42, # Optional. Which version to use for evaluation.
    },
  },
  "questionAnsweringHelpfulnessInput": { # Input for question answering helpfulness metric. # Input for question answering helpfulness metric.
    "instance": { # Spec for question answering helpfulness instance. # Required. Question answering helpfulness instance.
      "context": "A String", # Optional. Text provided as context to answer the question.
      "instruction": "A String", # Required. The question asked and other instruction in the inference prompt.
      "prediction": "A String", # Required. Output of the evaluated model.
      "reference": "A String", # Optional. Ground truth used to compare against the prediction.
    },
    "metricSpec": { # Spec for question answering helpfulness metric. # Required. Spec for question answering helpfulness score metric.
      "useReference": True or False, # Optional. Whether to use instance.reference to compute question answering helpfulness.
      "version": 42, # Optional. Which version to use for evaluation.
    },
  },
  "questionAnsweringQualityInput": { # Input for question answering quality metric. # Input for question answering quality metric.
    "instance": { # Spec for question answering quality instance. # Required. Question answering quality instance.
      "context": "A String", # Required. Text to answer the question.
      "instruction": "A String", # Required. Question Answering prompt for LLM.
      "prediction": "A String", # Required. Output of the evaluated model.
      "reference": "A String", # Optional. Ground truth used to compare against the prediction.
    },
    "metricSpec": { # Spec for question answering quality score metric. # Required. Spec for question answering quality score metric.
      "useReference": True or False, # Optional. Whether to use instance.reference to compute question answering quality.
      "version": 42, # Optional. Which version to use for evaluation.
    },
  },
  "questionAnsweringRelevanceInput": { # Input for question answering relevance metric. # Input for question answering relevance metric.
    "instance": { # Spec for question answering relevance instance. # Required. Question answering relevance instance.
      "context": "A String", # Optional. Text provided as context to answer the question.
      "instruction": "A String", # Required. The question asked and other instruction in the inference prompt.
      "prediction": "A String", # Required. Output of the evaluated model.
      "reference": "A String", # Optional. Ground truth used to compare against the prediction.
    },
    "metricSpec": { # Spec for question answering relevance metric. # Required. Spec for question answering relevance score metric.
      "useReference": True or False, # Optional. Whether to use instance.reference to compute question answering relevance.
      "version": 42, # Optional. Which version to use for evaluation.
    },
  },
  "rougeInput": { # Input for rouge metric. # Instances and metric spec for rouge metric.
    "instances": [ # Required. Repeated rouge instances.
      { # Spec for rouge instance.
        "prediction": "A String", # Required. Output of the evaluated model.
        "reference": "A String", # Required. Ground truth used to compare against the prediction.
      },
    ],
    "metricSpec": { # Spec for rouge score metric - calculates the recall of n-grams in prediction as compared to reference - returns a score ranging between 0 and 1. # Required. Spec for rouge score metric.
      "rougeType": "A String", # Optional. Supported rouge types are rougen[1-9], rougeL, and rougeLsum.
      "splitSummaries": True or False, # Optional. Whether to split summaries while using rougeLsum.
      "useStemmer": True or False, # Optional. Whether to use stemmer to compute rouge score.
    },
  },
  "safetyInput": { # Input for safety metric. # Input for safety metric.
    "instance": { # Spec for safety instance. # Required. Safety instance.
      "prediction": "A String", # Required. Output of the evaluated model.
    },
    "metricSpec": { # Spec for safety metric. # Required. Spec for safety metric.
      "version": 42, # Optional. Which version to use for evaluation.
    },
  },
  "summarizationHelpfulnessInput": { # Input for summarization helpfulness metric. # Input for summarization helpfulness metric.
    "instance": { # Spec for summarization helpfulness instance. # Required. Summarization helpfulness instance.
      "context": "A String", # Required. Text to be summarized.
      "instruction": "A String", # Optional. Summarization prompt for LLM.
      "prediction": "A String", # Required. Output of the evaluated model.
      "reference": "A String", # Optional. Ground truth used to compare against the prediction.
    },
    "metricSpec": { # Spec for summarization helpfulness score metric. # Required. Spec for summarization helpfulness score metric.
      "useReference": True or False, # Optional. Whether to use instance.reference to compute summarization helpfulness.
      "version": 42, # Optional. Which version to use for evaluation.
    },
  },
  "summarizationQualityInput": { # Input for summarization quality metric. # Input for summarization quality metric.
    "instance": { # Spec for summarization quality instance. # Required. Summarization quality instance.
      "context": "A String", # Required. Text to be summarized.
      "instruction": "A String", # Required. Summarization prompt for LLM.
      "prediction": "A String", # Required. Output of the evaluated model.
      "reference": "A String", # Optional. Ground truth used to compare against the prediction.
    },
    "metricSpec": { # Spec for summarization quality score metric. # Required. Spec for summarization quality score metric.
      "useReference": True or False, # Optional. Whether to use instance.reference to compute summarization quality.
      "version": 42, # Optional. Which version to use for evaluation.
    },
  },
  "summarizationVerbosityInput": { # Input for summarization verbosity metric. # Input for summarization verbosity metric.
    "instance": { # Spec for summarization verbosity instance. # Required. Summarization verbosity instance.
      "context": "A String", # Required. Text to be summarized.
      "instruction": "A String", # Optional. Summarization prompt for LLM.
      "prediction": "A String", # Required. Output of the evaluated model.
      "reference": "A String", # Optional. Ground truth used to compare against the prediction.
    },
    "metricSpec": { # Spec for summarization verbosity score metric. # Required. Spec for summarization verbosity score metric.
      "useReference": True or False, # Optional. Whether to use instance.reference to compute summarization verbosity.
      "version": 42, # Optional. Which version to use for evaluation.
    },
  },
  "toolCallValidInput": { # Input for tool call valid metric. # Tool call metric instances. Input for tool call valid metric.
    "instances": [ # Required. Repeated tool call valid instances.
      { # Spec for tool call valid instance.
        "prediction": "A String", # Required. Output of the evaluated model.
        "reference": "A String", # Required. Ground truth used to compare against the prediction.
      },
    ],
    "metricSpec": { # Spec for tool call valid metric. # Required. Spec for tool call valid metric.
    },
  },
  "toolNameMatchInput": { # Input for tool name match metric. # Input for tool name match metric.
    "instances": [ # Required. Repeated tool name match instances.
      { # Spec for tool name match instance.
        "prediction": "A String", # Required. Output of the evaluated model.
        "reference": "A String", # Required. Ground truth used to compare against the prediction.
      },
    ],
    "metricSpec": { # Spec for tool name match metric. # Required. Spec for tool name match metric.
    },
  },
  "toolParameterKeyMatchInput": { # Input for tool parameter key match metric. # Input for tool parameter key match metric.
    "instances": [ # Required. Repeated tool parameter key match instances.
      { # Spec for tool parameter key match instance.
        "prediction": "A String", # Required. Output of the evaluated model.
        "reference": "A String", # Required. Ground truth used to compare against the prediction.
      },
    ],
    "metricSpec": { # Spec for tool parameter key match metric. # Required. Spec for tool parameter key match metric.
    },
  },
  "toolParameterKvMatchInput": { # Input for tool parameter key value match metric. # Input for tool parameter key value match metric.
    "instances": [ # Required. Repeated tool parameter key value match instances.
      { # Spec for tool parameter key value match instance.
        "prediction": "A String", # Required. Output of the evaluated model.
        "reference": "A String", # Required. Ground truth used to compare against the prediction.
      },
    ],
    "metricSpec": { # Spec for tool parameter key value match metric. # Required. Spec for tool parameter key value match metric.
      "useStrictStringMatch": True or False, # Optional. Whether to use STRICT string match on parameter values.
    },
  },
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Response message for EvaluationService.EvaluateInstances.
      "bleuResults": { # Results for bleu metric. # Results for bleu metric.
        "bleuMetricValues": [ # Output only. Bleu metric values.
          { # Bleu metric value for an instance.
            "score": 3.14, # Output only. Bleu score.
          },
        ],
      },
      "coherenceResult": { # Spec for coherence result. # Result for coherence metric.
        "confidence": 3.14, # Output only. Confidence for coherence score.
        "explanation": "A String", # Output only. Explanation for coherence score.
        "score": 3.14, # Output only. Coherence score.
      },
      "exactMatchResults": { # Results for exact match metric. # Auto metric evaluation results. Results for exact match metric.
        "exactMatchMetricValues": [ # Output only. Exact match metric values.
          { # Exact match metric value for an instance.
            "score": 3.14, # Output only. Exact match score.
          },
        ],
      },
      "fluencyResult": { # Spec for fluency result. # LLM-based metric evaluation result. General text generation metrics, applicable to other categories. Result for fluency metric.
        "confidence": 3.14, # Output only. Confidence for fluency score.
        "explanation": "A String", # Output only. Explanation for fluency score.
        "score": 3.14, # Output only. Fluency score.
      },
      "fulfillmentResult": { # Spec for fulfillment result. # Result for fulfillment metric.
        "confidence": 3.14, # Output only. Confidence for fulfillment score.
        "explanation": "A String", # Output only. Explanation for fulfillment score.
        "score": 3.14, # Output only. Fulfillment score.
      },
      "groundednessResult": { # Spec for groundedness result. # Result for groundedness metric.
        "confidence": 3.14, # Output only. Confidence for groundedness score.
        "explanation": "A String", # Output only. Explanation for groundedness score.
        "score": 3.14, # Output only. Groundedness score.
      },
      "pairwiseMetricResult": { # Spec for pairwise metric result. # Result for pairwise metric.
        "explanation": "A String", # Output only. Explanation for pairwise metric score.
        "pairwiseChoice": "A String", # Output only. Pairwise metric choice.
      },
      "pairwiseQuestionAnsweringQualityResult": { # Spec for pairwise question answering quality result. # Result for pairwise question answering quality metric.
        "confidence": 3.14, # Output only. Confidence for question answering quality score.
        "explanation": "A String", # Output only. Explanation for question answering quality score.
        "pairwiseChoice": "A String", # Output only. Pairwise question answering prediction choice.
      },
      "pairwiseSummarizationQualityResult": { # Spec for pairwise summarization quality result. # Result for pairwise summarization quality metric.
        "confidence": 3.14, # Output only. Confidence for summarization quality score.
        "explanation": "A String", # Output only. Explanation for summarization quality score.
        "pairwiseChoice": "A String", # Output only. Pairwise summarization prediction choice.
      },
      "pointwiseMetricResult": { # Spec for pointwise metric result. # Generic metrics. Result for pointwise metric.
        "explanation": "A String", # Output only. Explanation for pointwise metric score.
        "score": 3.14, # Output only. Pointwise metric score.
      },
      "questionAnsweringCorrectnessResult": { # Spec for question answering correctness result. # Result for question answering correctness metric.
        "confidence": 3.14, # Output only. Confidence for question answering correctness score.
        "explanation": "A String", # Output only. Explanation for question answering correctness score.
        "score": 3.14, # Output only. Question Answering Correctness score.
      },
      "questionAnsweringHelpfulnessResult": { # Spec for question answering helpfulness result. # Result for question answering helpfulness metric.
        "confidence": 3.14, # Output only. Confidence for question answering helpfulness score.
        "explanation": "A String", # Output only. Explanation for question answering helpfulness score.
        "score": 3.14, # Output only. Question Answering Helpfulness score.
      },
      "questionAnsweringQualityResult": { # Spec for question answering quality result. # Question answering only metrics. Result for question answering quality metric.
        "confidence": 3.14, # Output only. Confidence for question answering quality score.
        "explanation": "A String", # Output only. Explanation for question answering quality score.
        "score": 3.14, # Output only. Question Answering Quality score.
      },
      "questionAnsweringRelevanceResult": { # Spec for question answering relevance result. # Result for question answering relevance metric.
        "confidence": 3.14, # Output only. Confidence for question answering relevance score.
        "explanation": "A String", # Output only. Explanation for question answering relevance score.
        "score": 3.14, # Output only. Question Answering Relevance score.
      },
      "rougeResults": { # Results for rouge metric. # Results for rouge metric.
        "rougeMetricValues": [ # Output only. Rouge metric values.
          { # Rouge metric value for an instance.
            "score": 3.14, # Output only. Rouge score.
          },
        ],
      },
      "safetyResult": { # Spec for safety result. # Result for safety metric.
        "confidence": 3.14, # Output only. Confidence for safety score.
        "explanation": "A String", # Output only. Explanation for safety score.
        "score": 3.14, # Output only. Safety score.
      },
      "summarizationHelpfulnessResult": { # Spec for summarization helpfulness result. # Result for summarization helpfulness metric.
        "confidence": 3.14, # Output only. Confidence for summarization helpfulness score.
        "explanation": "A String", # Output only. Explanation for summarization helpfulness score.
        "score": 3.14, # Output only. Summarization Helpfulness score.
      },
      "summarizationQualityResult": { # Spec for summarization quality result. # Summarization only metrics. Result for summarization quality metric.
        "confidence": 3.14, # Output only. Confidence for summarization quality score.
        "explanation": "A String", # Output only. Explanation for summarization quality score.
        "score": 3.14, # Output only. Summarization Quality score.
      },
      "summarizationVerbosityResult": { # Spec for summarization verbosity result. # Result for summarization verbosity metric.
        "confidence": 3.14, # Output only. Confidence for summarization verbosity score.
        "explanation": "A String", # Output only. Explanation for summarization verbosity score.
        "score": 3.14, # Output only. Summarization Verbosity score.
      },
      "toolCallValidResults": { # Results for tool call valid metric. # Tool call metrics. Results for tool call valid metric.
        "toolCallValidMetricValues": [ # Output only. Tool call valid metric values.
          { # Tool call valid metric value for an instance.
            "score": 3.14, # Output only. Tool call valid score.
          },
        ],
      },
      "toolNameMatchResults": { # Results for tool name match metric. # Results for tool name match metric.
        "toolNameMatchMetricValues": [ # Output only. Tool name match metric values.
          { # Tool name match metric value for an instance.
            "score": 3.14, # Output only. Tool name match score.
          },
        ],
      },
      "toolParameterKeyMatchResults": { # Results for tool parameter key match metric. # Results for tool parameter key match metric.
        "toolParameterKeyMatchMetricValues": [ # Output only. Tool parameter key match metric values.
          { # Tool parameter key match metric value for an instance.
            "score": 3.14, # Output only. Tool parameter key match score.
          },
        ],
      },
      "toolParameterKvMatchResults": { # Results for tool parameter key value match metric. # Results for tool parameter key value match metric.
        "toolParameterKvMatchMetricValues": [ # Output only. Tool parameter key value match metric values.
          { # Tool parameter key value match metric value for an instance.
            "score": 3.14, # Output only. Tool parameter key value match score.
          },
        ],
      },
    }
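Because the generated client accepts the request body as a plain Python dict, it can be assembled and validated before calling the service. The sketch below builds an `exactMatchInput` body under stated assumptions: `example-project` and `us-central1` are placeholder values, and `build_exact_match_request` is a hypothetical helper, not part of the library.

```python
# Sketch: assemble an EvaluateInstances request body for the exact match
# metric. Project/location values are placeholders, not real resources.
def build_exact_match_request(predictions, references):
    """Pair each prediction with its reference and wrap the pairs in the
    exactMatchInput shape documented above."""
    return {
        "exactMatchInput": {
            "instances": [
                {"prediction": p, "reference": r}
                for p, r in zip(predictions, references)
            ],
            "metricSpec": {},  # exact match takes no options
        }
    }

body = build_exact_match_request(["Paris", "Lyon"], ["Paris", "Marseille"])
location = "projects/example-project/locations/us-central1"
# With an authenticated discovery client, this would be sent as:
#   service.projects().locations().evaluateInstances(
#       location=location, body=body).execute()
print(len(body["exactMatchInput"]["instances"]))  # 2
```

The same pattern applies to any of the other metric inputs; only the top-level key and the instance fields change.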
get(name, x__xgafv=None)
Gets information about a location.

Args:
  name: string, Resource name for the location. (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A resource that represents a Google Cloud location.
      "displayName": "A String", # The friendly name for this location, typically a nearby city name. For example, "Tokyo".
      "labels": { # Cross-service attributes for the location. For example {"cloud.googleapis.com/region": "us-east1"}
        "a_key": "A String",
      },
      "locationId": "A String", # The canonical id for this location. For example: `"us-east1"`.
      "metadata": { # Service-specific metadata. For example the available capacity at the given location.
        "a_key": "", # Properties of the object. Contains field @type with type URL.
      },
      "name": "A String", # Resource name for the location, which may vary between implementations. For example: `"projects/example-project/locations/us-east1"`
    }
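The Location resource comes back as an ordinary dict, so reading it is plain dictionary access. A minimal sketch, assuming a sample response shaped like the one documented above (the field values here are made up, and `describe_location` is a hypothetical helper):

```python
# Sketch: summarize a Location resource returned by get(). The sample
# dict below mirrors the documented shape with made-up values.
def describe_location(location):
    """Format a Location resource as 'locationId (displayName)'."""
    return f"{location['locationId']} ({location.get('displayName', 'unknown')})"

loc = {
    "name": "projects/example-project/locations/us-east1",
    "locationId": "us-east1",
    "displayName": "South Carolina",
    "labels": {"cloud.googleapis.com/region": "us-east1"},
}
print(describe_location(loc))  # us-east1 (South Carolina)
```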
list(name, filter=None, pageSize=None, pageToken=None, x__xgafv=None)
Lists information about the supported locations for this service.

Args:
  name: string, The resource that owns the locations collection, if applicable. (required)
  filter: string, A filter to narrow down results to a preferred subset. The filtering language accepts strings like `"displayName=tokyo"`, and is documented in more detail in [AIP-160](https://google.aip.dev/160).
  pageSize: integer, The maximum number of results to return. If not set, the service selects a default.
  pageToken: string, A page token received from the `next_page_token` field in the response. Send that page token to receive the subsequent page.
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # The response message for Locations.ListLocations.
      "locations": [ # A list of locations that matches the specified filter in the request.
        { # A resource that represents a Google Cloud location.
          "displayName": "A String", # The friendly name for this location, typically a nearby city name. For example, "Tokyo".
          "labels": { # Cross-service attributes for the location. For example {"cloud.googleapis.com/region": "us-east1"}
            "a_key": "A String",
          },
          "locationId": "A String", # The canonical id for this location. For example: `"us-east1"`.
          "metadata": { # Service-specific metadata. For example the available capacity at the given location.
            "a_key": "", # Properties of the object. Contains field @type with type URL.
          },
          "name": "A String", # Resource name for the location, which may vary between implementations. For example: `"projects/example-project/locations/us-east1"`
        },
      ],
      "nextPageToken": "A String", # The standard List next-page token.
    }
list_next()
Retrieves the next page of results.

Args:
  previous_request: The request for the previous page. (required)
  previous_response: The response from the request for the previous page. (required)

Returns:
  A request object that you can call 'execute()' on to request the next page. Returns None if there are no more items in the collection.
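The list()/list_next() pair implements standard `nextPageToken` pagination: keep issuing requests until the token disappears. The sketch below simulates that loop with an in-memory page store so it runs without credentials or network access; `paginate` and the `pages` dict are illustrative stand-ins, not library APIs.

```python
# Sketch: the nextPageToken pagination loop behind list()/list_next(),
# driven by a fake in-memory "API" instead of a live client.
def paginate(pages):
    """Yield every location, following nextPageToken until it is absent,
    mirroring the loop the generated client's list_next() drives."""
    token = None
    while True:
        response = pages[token]           # stand-in for request.execute()
        yield from response.get("locations", [])
        token = response.get("nextPageToken")
        if token is None:                 # list_next() would return None here
            break

pages = {
    None: {"locations": [{"locationId": "us-east1"}], "nextPageToken": "p2"},
    "p2": {"locations": [{"locationId": "us-central1"}]},
}
print([loc["locationId"] for loc in paginate(pages)])  # ['us-east1', 'us-central1']
```

With the real client, the equivalent loop is `request = locations.list(name=...)`, then repeatedly `response = request.execute()` and `request = locations.list_next(request, response)` until `list_next` returns None.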