agents()
Returns the agents Resource.
apps()
Returns the apps Resource.
batchPredictionJobs()
Returns the batchPredictionJobs Resource.
cachedContents()
Returns the cachedContents Resource.
customJobs()
Returns the customJobs Resource.
dataLabelingJobs()
Returns the dataLabelingJobs Resource.
datasets()
Returns the datasets Resource.
deploymentResourcePools()
Returns the deploymentResourcePools Resource.
edgeDevices()
Returns the edgeDevices Resource.
endpoints()
Returns the endpoints Resource.
evaluationTasks()
Returns the evaluationTasks Resource.
exampleStores()
Returns the exampleStores Resource.
extensionControllers()
Returns the extensionControllers Resource.
extensions()
Returns the extensions Resource.
featureGroups()
Returns the featureGroups Resource.
featureOnlineStores()
Returns the featureOnlineStores Resource.
featurestores()
Returns the featurestores Resource.
hyperparameterTuningJobs()
Returns the hyperparameterTuningJobs Resource.
indexEndpoints()
Returns the indexEndpoints Resource.
indexes()
Returns the indexes Resource.
metadataStores()
Returns the metadataStores Resource.
migratableResources()
Returns the migratableResources Resource.
modelDeploymentMonitoringJobs()
Returns the modelDeploymentMonitoringJobs Resource.
modelMonitors()
Returns the modelMonitors Resource.
models()
Returns the models Resource.
nasJobs()
Returns the nasJobs Resource.
notebookExecutionJobs()
Returns the notebookExecutionJobs Resource.
notebookRuntimeTemplates()
Returns the notebookRuntimeTemplates Resource.
notebookRuntimes()
Returns the notebookRuntimes Resource.
operations()
Returns the operations Resource.
persistentResources()
Returns the persistentResources Resource.
pipelineJobs()
Returns the pipelineJobs Resource.
publishers()
Returns the publishers Resource.
ragCorpora()
Returns the ragCorpora Resource.
reasoningEngines()
Returns the reasoningEngines Resource.
schedules()
Returns the schedules Resource.
solvers()
Returns the solvers Resource.
specialistPools()
Returns the specialistPools Resource.
studies()
Returns the studies Resource.
tensorboards()
Returns the tensorboards Resource.
trainingPipelines()
Returns the trainingPipelines Resource.
tuningJobs()
Returns the tuningJobs Resource.
augmentPrompt(parent, body=None, x__xgafv=None)
Given an input prompt, returns an augmented prompt from the Vertex RAG store to guide the LLM towards generating grounded responses.
close()
Close httplib2 connections.
corroborateContent(parent, body=None, x__xgafv=None)
Given an input text, it returns a score that evaluates the factuality of the text. It also extracts and returns claims from the text and provides supporting facts.
evaluateInstances(location, body=None, x__xgafv=None)
Evaluates instances based on a given metric.
get(name, x__xgafv=None)
Gets information about a location.
list(name, filter=None, pageSize=None, pageToken=None, x__xgafv=None)
Lists information about the supported locations for this service.
list_next(previous_request=None, previous_response=None)
Retrieves the next page of results.
retrieveContexts(parent, body=None, x__xgafv=None)
Retrieves relevant contexts for a query.
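The `list` / `list_next` pair above follows the generated client's standard paging protocol: `list()` returns a request object, `execute()` returns one page, and `list_next()` returns the request for the following page (or `None` when there is no `nextPageToken`). A minimal sketch of that loop, using small stand-in classes (`FakePage`, `FakeLocationsApi` are hypothetical, used here only so the loop runs offline without credentials in place of `service.projects().locations()`):

```python
class FakePage:
    """Stand-in for an HttpRequest whose execute() returns one page of results."""
    def __init__(self, payload):
        self._payload = payload

    def execute(self):
        return self._payload


class FakeLocationsApi:
    """Stand-in exposing list()/list_next() with the generated client's paging protocol."""
    def __init__(self, pages):
        self._pages = pages

    def list(self, name, pageSize=None, pageToken=None):
        return FakePage(self._pages[0])

    def list_next(self, previous_request=None, previous_response=None):
        # The real client returns None when the response has no nextPageToken.
        token = previous_response.get("nextPageToken")
        if not token:
            return None
        return FakePage(self._pages[int(token)])


def all_locations(locations_api, name):
    """Collect every location across pages using the list/list_next loop."""
    results = []
    request = locations_api.list(name=name)
    while request is not None:
        response = request.execute()
        results.extend(response.get("locations", []))
        request = locations_api.list_next(request, response)
    return results


# Two fake pages chained by a nextPageToken.
pages = [
    {"locations": [{"locationId": "us-central1"}], "nextPageToken": "1"},
    {"locations": [{"locationId": "europe-west4"}]},
]
ids = [loc["locationId"] for loc in all_locations(FakeLocationsApi(pages), "projects/my-project")]
print(ids)  # -> ['us-central1', 'europe-west4']
```

With a real service object the loop body is identical; only `FakeLocationsApi` is replaced by `service.projects().locations()`.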
augmentPrompt(parent, body=None, x__xgafv=None)
Given an input prompt, returns an augmented prompt from the Vertex RAG store to guide the LLM towards generating grounded responses.

Args:
  parent: string, Required. The resource name of the Location from which to augment prompt. The user must have permission to make a call in the project. Format: `projects/{project}/locations/{location}`. (required)
  body: object, The request body.
    The object takes the form of:

{ # Request message for AugmentPrompt.
  "contents": [ # Optional. Input content to augment; only text format is supported for now.
    { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn.
      "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types.
        { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` or `file_data` field is filled with raw bytes.
          "codeExecutionResult": { # Result of executing the [ExecutableCode]. Always follows a `part` containing the [ExecutableCode]. # Optional. Result of executing the [ExecutableCode].
            "outcome": "A String", # Required. Outcome of the code execution.
            "output": "A String", # Optional. Contains stdout when code execution is successful, stderr or another description otherwise.
          },
          "executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [FunctionDeclaration] tool and [FunctionCallingConfig] mode is set to [Mode.CODE]. # Optional. Code generated by the model that is meant to be executed.
            "code": "A String", # Required. The code to be executed.
            "language": "A String", # Required. Programming language of the `code`.
          },
          "fileData": { # URI based data. # Optional. URI based data.
            "fileUri": "A String", # Required. URI.
            "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
          },
          "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values.
            "args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
              "a_key": "", # Properties of the object.
            },
            "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name].
          },
          "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function; it is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model.
            "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
            "response": { # Required. The function response in JSON object format. Use the "output" key to specify function output and the "error" key to specify error details (if any). If the "output" and "error" keys are not specified, then the whole "response" is treated as function output.
              "a_key": "", # Properties of the object.
            },
          },
          "inlineData": { # Content blob. It's preferred to send as text directly rather than raw bytes. # Optional. Inlined bytes data.
            "data": "A String", # Required. Raw bytes.
            "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
          },
          "text": "A String", # Optional. Text part (can be code).
          "videoMetadata": { # Metadata describing the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
            "endOffset": "A String", # Optional. The end offset of the video.
            "startOffset": "A String", # Optional. The start offset of the video.
          },
        },
      ],
      "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations; otherwise it can be left blank or unset.
    },
  ],
  "model": { # Metadata of the backend deployed model. # Optional. Metadata of the backend deployed model.
    "model": "A String", # Optional. The model to which the user will send the augmented prompt for content generation.
    "modelVersion": "A String", # Optional. The model version of the backend deployed model.
  },
  "vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Optional. Retrieves contexts from the Vertex RagStore.
    "ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
      "A String",
    ],
    "ragResources": [ # Optional. The representation of the rag source. It can be used to specify a corpus only, or rag files. Currently only one corpus, or multiple files from one corpus, is supported. In the future we may open up multiple-corpora support.
      { # The definition of the Rag resource.
        "ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
        "ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in the rag_corpus field.
          "A String",
        ],
      },
    ],
    "similarityTopK": 42, # Optional. Number of top-k results to return from the selected corpora.
    "vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
  },
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

{ # Response message for AugmentPrompt.
  "augmentedPrompt": [ # Augmented prompt; only text format is supported for now.
    { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn.
      "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types.
        { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` or `file_data` field is filled with raw bytes.
          "codeExecutionResult": { # Result of executing the [ExecutableCode]. Always follows a `part` containing the [ExecutableCode]. # Optional. Result of executing the [ExecutableCode].
            "outcome": "A String", # Required. Outcome of the code execution.
            "output": "A String", # Optional. Contains stdout when code execution is successful, stderr or another description otherwise.
          },
          "executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [FunctionDeclaration] tool and [FunctionCallingConfig] mode is set to [Mode.CODE]. # Optional. Code generated by the model that is meant to be executed.
            "code": "A String", # Required. The code to be executed.
            "language": "A String", # Required. Programming language of the `code`.
          },
          "fileData": { # URI based data. # Optional. URI based data.
            "fileUri": "A String", # Required. URI.
            "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
          },
          "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values.
            "args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
              "a_key": "", # Properties of the object.
            },
            "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name].
          },
          "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function; it is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model.
            "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
            "response": { # Required. The function response in JSON object format. Use the "output" key to specify function output and the "error" key to specify error details (if any). If the "output" and "error" keys are not specified, then the whole "response" is treated as function output.
              "a_key": "", # Properties of the object.
            },
          },
          "inlineData": { # Content blob. It's preferred to send as text directly rather than raw bytes. # Optional. Inlined bytes data.
            "data": "A String", # Required. Raw bytes.
            "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
          },
          "text": "A String", # Optional. Text part (can be code).
          "videoMetadata": { # Metadata describing the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
            "endOffset": "A String", # Optional. The end offset of the video.
            "startOffset": "A String", # Optional. The start offset of the video.
          },
        },
      ],
      "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations; otherwise it can be left blank or unset.
    },
  ],
  "facts": [ # Retrieved facts from RAG data sources.
    { # The fact used in grounding.
      "query": "A String", # Query that is used to retrieve this fact.
      "summary": "A String", # If present, the summary/snippet of the fact.
      "title": "A String", # If present, it refers to the title of this fact.
      "uri": "A String", # If present, this uri links to the source of the fact.
      "vectorDistance": 3.14, # If present, the distance between the query vector and this fact vector.
    },
  ],
}
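A minimal request body for `augmentPrompt`, built from the schema above. All concrete values (the project, location, corpus id, and prompt text) are hypothetical placeholders; the body itself is a plain dict, so the sketch runs offline, and the actual API call is left commented out because it requires credentials and a real RAG corpus:

```python
# Hypothetical parent; must follow projects/{project}/locations/{location}.
parent = "projects/my-project/locations/us-central1"

body = {
    # Only text parts are supported for now, per the request schema.
    "contents": [
        {
            "role": "user",
            "parts": [{"text": "What is our refund policy?"}],
        }
    ],
    "vertexRagStore": {
        # rag_resources is preferred over the deprecated ragCorpora field.
        "ragResources": [
            {
                # Hypothetical corpus; format projects/.../ragCorpora/{rag_corpus}.
                "ragCorpus": f"{parent}/ragCorpora/1234567890",
            }
        ],
        "similarityTopK": 5,              # return the 5 closest contexts
        "vectorDistanceThreshold": 0.6,   # drop results farther than this
    },
}

# With credentials configured, the call would look roughly like this
# (commented out so the sketch runs offline; the API version is an assumption):
#
# from googleapiclient.discovery import build
# service = build("aiplatform", "v1beta1")
# response = service.projects().locations().augmentPrompt(
#     parent=parent, body=body).execute()
# augmented = response.get("augmentedPrompt", [])
# facts = response.get("facts", [])
```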
close()
Close httplib2 connections.
corroborateContent(parent, body=None, x__xgafv=None)
Given an input text, returns a score that evaluates the factuality of the text. It also extracts and returns claims from the text and provides supporting facts.

Args:
  parent: string, Required. The resource name of the Location from which to corroborate text. The user must have permission to make a call in the project. Format: `projects/{project}/locations/{location}`. (required)
  body: object, The request body.
    The object takes the form of:

{ # Request message for CorroborateContent.
  "content": { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn. # Optional. Input content to corroborate; only text format is supported for now.
    "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types.
      { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` or `file_data` field is filled with raw bytes.
        "codeExecutionResult": { # Result of executing the [ExecutableCode]. Always follows a `part` containing the [ExecutableCode]. # Optional. Result of executing the [ExecutableCode].
          "outcome": "A String", # Required. Outcome of the code execution.
          "output": "A String", # Optional. Contains stdout when code execution is successful, stderr or another description otherwise.
        },
        "executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [FunctionDeclaration] tool and [FunctionCallingConfig] mode is set to [Mode.CODE]. # Optional. Code generated by the model that is meant to be executed.
          "code": "A String", # Required. The code to be executed.
          "language": "A String", # Required. Programming language of the `code`.
        },
        "fileData": { # URI based data. # Optional. URI based data.
          "fileUri": "A String", # Required. URI.
          "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
        },
        "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values.
          "args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
            "a_key": "", # Properties of the object.
          },
          "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name].
        },
        "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function; it is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model.
          "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
          "response": { # Required. The function response in JSON object format. Use the "output" key to specify function output and the "error" key to specify error details (if any). If the "output" and "error" keys are not specified, then the whole "response" is treated as function output.
            "a_key": "", # Properties of the object.
          },
        },
        "inlineData": { # Content blob. It's preferred to send as text directly rather than raw bytes. # Optional. Inlined bytes data.
          "data": "A String", # Required. Raw bytes.
          "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
        },
        "text": "A String", # Optional. Text part (can be code).
        "videoMetadata": { # Metadata describing the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
          "endOffset": "A String", # Optional. The end offset of the video.
          "startOffset": "A String", # Optional. The start offset of the video.
        },
      },
    ],
    "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations; otherwise it can be left blank or unset.
  },
  "facts": [ # Optional. Facts used to generate the text can also be used to corroborate the text.
    { # The fact used in grounding.
      "query": "A String", # Query that is used to retrieve this fact.
      "summary": "A String", # If present, the summary/snippet of the fact.
      "title": "A String", # If present, it refers to the title of this fact.
      "uri": "A String", # If present, this uri links to the source of the fact.
      "vectorDistance": 3.14, # If present, the distance between the query vector and this fact vector.
    },
  ],
  "parameters": { # Parameters that can be overridden per request. # Optional. Parameters that can be set to override default settings per request.
    "citationThreshold": 3.14, # Optional. Only return claims with a citation score larger than the threshold.
  },
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

{ # Response message for CorroborateContent.
  "claims": [ # Claims that are extracted from the input content and facts that support the claims.
    { # Claim that is extracted from the input text and facts that support it.
      "endIndex": 42, # Index in the input text where the claim ends (exclusive).
      "factIndexes": [ # Indexes of the facts supporting this claim.
        42,
      ],
      "score": 3.14, # Confidence score of this corroboration.
      "startIndex": 42, # Index in the input text where the claim starts (inclusive).
    },
  ],
  "corroborationScore": 3.14, # Confidence score of the corroborated content. Value is in [0, 1], where 1 is the most confident.
}
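A minimal `corroborateContent` request body assembled from the schema above: the text to check, the facts to check it against, and a citation-threshold override. The parent, text, and facts are hypothetical; the body is a plain dict so the sketch runs offline, with the credentialed call left commented out:

```python
# Hypothetical parent; must follow projects/{project}/locations/{location}.
parent = "projects/my-project/locations/us-central1"

body = {
    # Only text format is supported for now, per the request schema.
    "content": {
        "role": "user",
        "parts": [{"text": "The store offers refunds within 30 days."}],
    },
    # Hypothetical supporting facts, e.g. the facts returned by augmentPrompt.
    "facts": [
        {
            "query": "refund policy",
            "title": "Refund policy",
            "summary": "Refunds are accepted within 30 days of purchase.",
        }
    ],
    # Only claims with citation score above this threshold are returned.
    "parameters": {"citationThreshold": 0.6},
}

# With credentials configured (commented out so the sketch runs offline):
#
# response = service.projects().locations().corroborateContent(
#     parent=parent, body=body).execute()
#
# The response would carry per-claim start/end indexes into the input text,
# the indexes of the supporting facts, and an overall corroborationScore
# in [0, 1].
```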
evaluateInstances(location, body=None, x__xgafv=None)
Evaluates instances based on a given metric.

Args:
  location: string, Required. The resource name of the Location to evaluate the instances. Format: `projects/{project}/locations/{location}` (required)
  body: object, The request body.
    The object takes the form of:

{ # Request message for EvaluationService.EvaluateInstances.
  "bleuInput": { # Input for bleu metric. # Instances and metric spec for bleu metric.
    "instances": [ # Required. Repeated bleu instances.
      { # Spec for bleu instance.
        "prediction": "A String", # Required. Output of the evaluated model.
        "reference": "A String", # Required. Ground truth used to compare against the prediction.
      },
    ],
    "metricSpec": { # Spec for bleu score metric - calculates the precision of n-grams in the prediction as compared to the reference - returns a score ranging between 0 and 1. # Required. Spec for bleu score metric.
      "useEffectiveOrder": True or False, # Optional. Whether to use_effective_order to compute the bleu score.
    },
  },
  "coherenceInput": { # Input for coherence metric. # Input for coherence metric.
    "instance": { # Spec for coherence instance. # Required. Coherence instance.
      "prediction": "A String", # Required. Output of the evaluated model.
    },
    "metricSpec": { # Spec for coherence score metric. # Required. Spec for coherence score metric.
      "version": 42, # Optional. Which version to use for evaluation.
    },
  },
  "exactMatchInput": { # Input for exact match metric. # Auto metric instances. Instances and metric spec for exact match metric.
    "instances": [ # Required. Repeated exact match instances.
      { # Spec for exact match instance.
        "prediction": "A String", # Required. Output of the evaluated model.
        "reference": "A String", # Required. Ground truth used to compare against the prediction.
      },
    ],
    "metricSpec": { # Spec for exact match metric - returns 1 if prediction and reference match exactly, otherwise 0. # Required. Spec for exact match metric.
    },
  },
  "fluencyInput": { # Input for fluency metric. # LLM-based metric instance. General text generation metrics, applicable to other categories. Input for fluency metric.
    "instance": { # Spec for fluency instance. # Required. Fluency instance.
      "prediction": "A String", # Required. Output of the evaluated model.
    },
    "metricSpec": { # Spec for fluency score metric. # Required. Spec for fluency score metric.
      "version": 42, # Optional. Which version to use for evaluation.
    },
  },
  "fulfillmentInput": { # Input for fulfillment metric. # Input for fulfillment metric.
    "instance": { # Spec for fulfillment instance. # Required. Fulfillment instance.
      "instruction": "A String", # Required. Inference instruction prompt to compare the prediction with.
      "prediction": "A String", # Required. Output of the evaluated model.
    },
    "metricSpec": { # Spec for fulfillment metric. # Required. Spec for fulfillment score metric.
      "version": 42, # Optional. Which version to use for evaluation.
    },
  },
  "groundednessInput": { # Input for groundedness metric. # Input for groundedness metric.
    "instance": { # Spec for groundedness instance. # Required. Groundedness instance.
      "context": "A String", # Required. Background information provided in context, used to compare against the prediction.
      "prediction": "A String", # Required. Output of the evaluated model.
    },
    "metricSpec": { # Spec for groundedness metric. # Required. Spec for groundedness metric.
      "version": 42, # Optional. Which version to use for evaluation.
    },
  },
  "pairwiseMetricInput": { # Input for pairwise metric. # Input for pairwise metric.
    "instance": { # Pairwise metric instance. Usually one instance corresponds to one row in an evaluation dataset. # Required. Pairwise metric instance.
      "jsonInstance": "A String", # Instance specified as a json string. String key-value pairs are expected in the json_instance to render PairwiseMetricSpec.instance_prompt_template.
    },
    "metricSpec": { # Spec for pairwise metric. # Required. Spec for pairwise metric.
      "metricPromptTemplate": "A String", # Required. Metric prompt template for pairwise metric.
    },
  },
  "pairwiseQuestionAnsweringQualityInput": { # Input for pairwise question answering quality metric. # Input for pairwise question answering quality metric.
    "instance": { # Spec for pairwise question answering quality instance. # Required. Pairwise question answering quality instance.
      "baselinePrediction": "A String", # Required. Output of the baseline model.
      "context": "A String", # Required. Text to answer the question.
      "instruction": "A String", # Required. Question Answering prompt for LLM.
      "prediction": "A String", # Required. Output of the candidate model.
      "reference": "A String", # Optional. Ground truth used to compare against the prediction.
    },
    "metricSpec": { # Spec for pairwise question answering quality score metric. # Required. Spec for pairwise question answering quality score metric.
      "useReference": True or False, # Optional. Whether to use instance.reference to compute question answering quality.
      "version": 42, # Optional. Which version to use for evaluation.
    },
  },
  "pairwiseSummarizationQualityInput": { # Input for pairwise summarization quality metric. # Input for pairwise summarization quality metric.
    "instance": { # Spec for pairwise summarization quality instance. # Required. Pairwise summarization quality instance.
      "baselinePrediction": "A String", # Required. Output of the baseline model.
      "context": "A String", # Required. Text to be summarized.
      "instruction": "A String", # Required. Summarization prompt for LLM.
      "prediction": "A String", # Required. Output of the candidate model.
      "reference": "A String", # Optional. Ground truth used to compare against the prediction.
    },
    "metricSpec": { # Spec for pairwise summarization quality score metric. # Required. Spec for pairwise summarization quality score metric.
      "useReference": True or False, # Optional. Whether to use instance.reference to compute pairwise summarization quality.
      "version": 42, # Optional. Which version to use for evaluation.
    },
  },
  "pointwiseMetricInput": { # Input for pointwise metric. # Input for pointwise metric.
    "instance": { # Pointwise metric instance. Usually one instance corresponds to one row in an evaluation dataset. # Required. Pointwise metric instance.
      "jsonInstance": "A String", # Instance specified as a json string. String key-value pairs are expected in the json_instance to render PointwiseMetricSpec.instance_prompt_template.
    },
    "metricSpec": { # Spec for pointwise metric. # Required. Spec for pointwise metric.
      "metricPromptTemplate": "A String", # Required. Metric prompt template for pointwise metric.
    },
  },
  "questionAnsweringCorrectnessInput": { # Input for question answering correctness metric. # Input for question answering correctness metric.
    "instance": { # Spec for question answering correctness instance. # Required. Question answering correctness instance.
      "context": "A String", # Optional. Text provided as context to answer the question.
      "instruction": "A String", # Required. The question asked and other instructions in the inference prompt.
      "prediction": "A String", # Required. Output of the evaluated model.
      "reference": "A String", # Optional. Ground truth used to compare against the prediction.
    },
    "metricSpec": { # Spec for question answering correctness metric. # Required. Spec for question answering correctness score metric.
      "useReference": True or False, # Optional. Whether to use instance.reference to compute question answering correctness.
      "version": 42, # Optional. Which version to use for evaluation.
    },
  },
  "questionAnsweringHelpfulnessInput": { # Input for question answering helpfulness metric. # Input for question answering helpfulness metric.
    "instance": { # Spec for question answering helpfulness instance. # Required. Question answering helpfulness instance.
      "context": "A String", # Optional. Text provided as context to answer the question.
      "instruction": "A String", # Required. The question asked and other instructions in the inference prompt.
      "prediction": "A String", # Required. Output of the evaluated model.
      "reference": "A String", # Optional. Ground truth used to compare against the prediction.
    },
    "metricSpec": { # Spec for question answering helpfulness metric. # Required. Spec for question answering helpfulness score metric.
      "useReference": True or False, # Optional. Whether to use instance.reference to compute question answering helpfulness.
      "version": 42, # Optional. Which version to use for evaluation.
    },
  },
  "questionAnsweringQualityInput": { # Input for question answering quality metric. # Input for question answering quality metric.
    "instance": { # Spec for question answering quality instance. # Required. Question answering quality instance.
      "context": "A String", # Required. Text to answer the question.
      "instruction": "A String", # Required. Question Answering prompt for LLM.
      "prediction": "A String", # Required. Output of the evaluated model.
      "reference": "A String", # Optional. Ground truth used to compare against the prediction.
    },
    "metricSpec": { # Spec for question answering quality score metric. # Required. Spec for question answering quality score metric.
      "useReference": True or False, # Optional. Whether to use instance.reference to compute question answering quality.
      "version": 42, # Optional. Which version to use for evaluation.
    },
  },
  "questionAnsweringRelevanceInput": { # Input for question answering relevance metric. # Input for question answering relevance metric.
    "instance": { # Spec for question answering relevance instance. # Required. Question answering relevance instance.
      "context": "A String", # Optional. Text provided as context to answer the question.
      "instruction": "A String", # Required. The question asked and other instructions in the inference prompt.
      "prediction": "A String", # Required. Output of the evaluated model.
      "reference": "A String", # Optional. Ground truth used to compare against the prediction.
    },
    "metricSpec": { # Spec for question answering relevance metric. # Required. Spec for question answering relevance score metric.
      "useReference": True or False, # Optional. Whether to use instance.reference to compute question answering relevance.
      "version": 42, # Optional. Which version to use for evaluation.
    },
  },
  "rougeInput": { # Input for rouge metric. # Instances and metric spec for rouge metric.
    "instances": [ # Required. Repeated rouge instances.
      { # Spec for rouge instance.
        "prediction": "A String", # Required. Output of the evaluated model.
        "reference": "A String", # Required. Ground truth used to compare against the prediction.
      },
    ],
    "metricSpec": { # Spec for rouge score metric - calculates the recall of n-grams in the prediction as compared to the reference - returns a score ranging between 0 and 1. # Required. Spec for rouge score metric.
      "rougeType": "A String", # Optional. Supported rouge types are rougen[1-9], rougeL, and rougeLsum.
      "splitSummaries": True or False, # Optional. Whether to split summaries while using rougeLsum.
      "useStemmer": True or False, # Optional. Whether to use a stemmer to compute the rouge score.
    },
  },
  "safetyInput": { # Input for safety metric. # Input for safety metric.
    "instance": { # Spec for safety instance. # Required. Safety instance.
      "prediction": "A String", # Required. Output of the evaluated model.
    },
    "metricSpec": { # Spec for safety metric. # Required. Spec for safety metric.
      "version": 42, # Optional. Which version to use for evaluation.
    },
  },
  "summarizationHelpfulnessInput": { # Input for summarization helpfulness metric. # Input for summarization helpfulness metric.
    "instance": { # Spec for summarization helpfulness instance. # Required. Summarization helpfulness instance.
      "context": "A String", # Required. Text to be summarized.
      "instruction": "A String", # Optional. Summarization prompt for LLM.
      "prediction": "A String", # Required. Output of the evaluated model.
      "reference": "A String", # Optional. Ground truth used to compare against the prediction.
    },
    "metricSpec": { # Spec for summarization helpfulness score metric. # Required. Spec for summarization helpfulness score metric.
      "useReference": True or False, # Optional. Whether to use instance.reference to compute summarization helpfulness.
      "version": 42, # Optional. Which version to use for evaluation.
    },
  },
  "summarizationQualityInput": { # Input for summarization quality metric. # Input for summarization quality metric.
    "instance": { # Spec for summarization quality instance. # Required. Summarization quality instance.
      "context": "A String", # Required. Text to be summarized.
      "instruction": "A String", # Required. Summarization prompt for LLM.
      "prediction": "A String", # Required. Output of the evaluated model.
      "reference": "A String", # Optional. Ground truth used to compare against the prediction.
    },
    "metricSpec": { # Spec for summarization quality score metric. # Required. Spec for summarization quality score metric.
      "useReference": True or False, # Optional. Whether to use instance.reference to compute summarization quality.
      "version": 42, # Optional. Which version to use for evaluation.
    },
  },
  "summarizationVerbosityInput": { # Input for summarization verbosity metric. # Input for summarization verbosity metric.
    "instance": { # Spec for summarization verbosity instance. # Required. Summarization verbosity instance.
      "context": "A String", # Required. Text to be summarized.
      "instruction": "A String", # Optional. Summarization prompt for LLM.
      "prediction": "A String", # Required. Output of the evaluated model.
      "reference": "A String", # Optional. Ground truth used to compare against the prediction.
    },
    "metricSpec": { # Spec for summarization verbosity score metric. # Required. Spec for summarization verbosity score metric.
      "useReference": True or False, # Optional. Whether to use instance.reference to compute summarization verbosity.
      "version": 42, # Optional. Which version to use for evaluation.
    },
  },
  "toolCallValidInput": { # Input for tool call valid metric. # Tool call metric instances. Input for tool call valid metric.
    "instances": [ # Required. Repeated tool call valid instances.
      { # Spec for tool call valid instance.
        "prediction": "A String", # Required. Output of the evaluated model.
        "reference": "A String", # Required. Ground truth used to compare against the prediction.
      },
    ],
    "metricSpec": { # Spec for tool call valid metric. # Required. Spec for tool call valid metric.
    },
  },
  "toolNameMatchInput": { # Input for tool name match metric. # Input for tool name match metric.
    "instances": [ # Required. Repeated tool name match instances.
      { # Spec for tool name match instance.
        "prediction": "A String", # Required. Output of the evaluated model.
        "reference": "A String", # Required. Ground truth used to compare against the prediction.
      },
    ],
    "metricSpec": { # Spec for tool name match metric. # Required. Spec for tool name match metric.
    },
  },
  "toolParameterKeyMatchInput": { # Input for tool parameter key match metric. # Input for tool parameter key match metric.
    "instances": [ # Required. Repeated tool parameter key match instances.
      { # Spec for tool parameter key match instance.
        "prediction": "A String", # Required. Output of the evaluated model.
        "reference": "A String", # Required. Ground truth used to compare against the prediction.
      },
    ],
    "metricSpec": { # Spec for tool parameter key match metric. # Required. Spec for tool parameter key match metric.
    },
  },
  "toolParameterKvMatchInput": { # Input for tool parameter key value match metric. # Input for tool parameter key value match metric.
    "instances": [ # Required. Repeated tool parameter key value match instances.
      { # Spec for tool parameter key value match instance.
        "prediction": "A String", # Required. Output of the evaluated model.
        "reference": "A String", # Required. Ground truth used to compare against the prediction.
      },
    ],
    "metricSpec": { # Spec for tool parameter key value match metric. # Required.
Spec for tool parameter key value match metric. "useStrictStringMatch": True or False, # Optional. Whether to use STRICT string match on parameter values. }, }, } x__xgafv: string, V1 error format. Allowed values 1 - v1 error format 2 - v2 error format Returns: An object of the form: { # Response message for EvaluationService.EvaluateInstances. "bleuResults": { # Results for bleu metric. # Results for bleu metric. "bleuMetricValues": [ # Output only. Bleu metric values. { # Bleu metric value for an instance. "score": 3.14, # Output only. Bleu score. }, ], }, "coherenceResult": { # Spec for coherence result. # Result for coherence metric. "confidence": 3.14, # Output only. Confidence for coherence score. "explanation": "A String", # Output only. Explanation for coherence score. "score": 3.14, # Output only. Coherence score. }, "exactMatchResults": { # Results for exact match metric. # Auto metric evaluation results. Results for exact match metric. "exactMatchMetricValues": [ # Output only. Exact match metric values. { # Exact match metric value for an instance. "score": 3.14, # Output only. Exact match score. }, ], }, "fluencyResult": { # Spec for fluency result. # LLM-based metric evaluation result. General text generation metrics, applicable to other categories. Result for fluency metric. "confidence": 3.14, # Output only. Confidence for fluency score. "explanation": "A String", # Output only. Explanation for fluency score. "score": 3.14, # Output only. Fluency score. }, "fulfillmentResult": { # Spec for fulfillment result. # Result for fulfillment metric. "confidence": 3.14, # Output only. Confidence for fulfillment score. "explanation": "A String", # Output only. Explanation for fulfillment score. "score": 3.14, # Output only. Fulfillment score. }, "groundednessResult": { # Spec for groundedness result. # Result for groundedness metric. "confidence": 3.14, # Output only. Confidence for groundedness score. "explanation": "A String", # Output only. 
Explanation for groundedness score. "score": 3.14, # Output only. Groundedness score. }, "pairwiseMetricResult": { # Spec for pairwise metric result. # Result for pairwise metric. "explanation": "A String", # Output only. Explanation for pairwise metric score. "pairwiseChoice": "A String", # Output only. Pairwise metric choice. }, "pairwiseQuestionAnsweringQualityResult": { # Spec for pairwise question answering quality result. # Result for pairwise question answering quality metric. "confidence": 3.14, # Output only. Confidence for question answering quality score. "explanation": "A String", # Output only. Explanation for question answering quality score. "pairwiseChoice": "A String", # Output only. Pairwise question answering prediction choice. }, "pairwiseSummarizationQualityResult": { # Spec for pairwise summarization quality result. # Result for pairwise summarization quality metric. "confidence": 3.14, # Output only. Confidence for summarization quality score. "explanation": "A String", # Output only. Explanation for summarization quality score. "pairwiseChoice": "A String", # Output only. Pairwise summarization prediction choice. }, "pointwiseMetricResult": { # Spec for pointwise metric result. # Generic metrics. Result for pointwise metric. "explanation": "A String", # Output only. Explanation for pointwise metric score. "score": 3.14, # Output only. Pointwise metric score. }, "questionAnsweringCorrectnessResult": { # Spec for question answering correctness result. # Result for question answering correctness metric. "confidence": 3.14, # Output only. Confidence for question answering correctness score. "explanation": "A String", # Output only. Explanation for question answering correctness score. "score": 3.14, # Output only. Question Answering Correctness score. }, "questionAnsweringHelpfulnessResult": { # Spec for question answering helpfulness result. # Result for question answering helpfulness metric. "confidence": 3.14, # Output only. 
Confidence for question answering helpfulness score. "explanation": "A String", # Output only. Explanation for question answering helpfulness score. "score": 3.14, # Output only. Question Answering Helpfulness score. }, "questionAnsweringQualityResult": { # Spec for question answering quality result. # Question answering only metrics. Result for question answering quality metric. "confidence": 3.14, # Output only. Confidence for question answering quality score. "explanation": "A String", # Output only. Explanation for question answering quality score. "score": 3.14, # Output only. Question Answering Quality score. }, "questionAnsweringRelevanceResult": { # Spec for question answering relevance result. # Result for question answering relevance metric. "confidence": 3.14, # Output only. Confidence for question answering relevance score. "explanation": "A String", # Output only. Explanation for question answering relevance score. "score": 3.14, # Output only. Question Answering Relevance score. }, "rougeResults": { # Results for rouge metric. # Results for rouge metric. "rougeMetricValues": [ # Output only. Rouge metric values. { # Rouge metric value for an instance. "score": 3.14, # Output only. Rouge score. }, ], }, "safetyResult": { # Spec for safety result. # Result for safety metric. "confidence": 3.14, # Output only. Confidence for safety score. "explanation": "A String", # Output only. Explanation for safety score. "score": 3.14, # Output only. Safety score. }, "summarizationHelpfulnessResult": { # Spec for summarization helpfulness result. # Result for summarization helpfulness metric. "confidence": 3.14, # Output only. Confidence for summarization helpfulness score. "explanation": "A String", # Output only. Explanation for summarization helpfulness score. "score": 3.14, # Output only. Summarization Helpfulness score. }, "summarizationQualityResult": { # Spec for summarization quality result. # Summarization only metrics. 
Result for summarization quality metric. "confidence": 3.14, # Output only. Confidence for summarization quality score. "explanation": "A String", # Output only. Explanation for summarization quality score. "score": 3.14, # Output only. Summarization Quality score. }, "summarizationVerbosityResult": { # Spec for summarization verbosity result. # Result for summarization verbosity metric. "confidence": 3.14, # Output only. Confidence for summarization verbosity score. "explanation": "A String", # Output only. Explanation for summarization verbosity score. "score": 3.14, # Output only. Summarization Verbosity score. }, "toolCallValidResults": { # Results for tool call valid metric. # Tool call metrics. Results for tool call valid metric. "toolCallValidMetricValues": [ # Output only. Tool call valid metric values. { # Tool call valid metric value for an instance. "score": 3.14, # Output only. Tool call valid score. }, ], }, "toolNameMatchResults": { # Results for tool name match metric. # Results for tool name match metric. "toolNameMatchMetricValues": [ # Output only. Tool name match metric values. { # Tool name match metric value for an instance. "score": 3.14, # Output only. Tool name match score. }, ], }, "toolParameterKeyMatchResults": { # Results for tool parameter key match metric. # Results for tool parameter key match metric. "toolParameterKeyMatchMetricValues": [ # Output only. Tool parameter key match metric values. { # Tool parameter key match metric value for an instance. "score": 3.14, # Output only. Tool parameter key match score. }, ], }, "toolParameterKvMatchResults": { # Results for tool parameter key value match metric. # Results for tool parameter key value match metric. "toolParameterKvMatchMetricValues": [ # Output only. Tool parameter key value match metric values. { # Tool parameter key value match metric value for an instance. "score": 3.14, # Output only. Tool parameter key value match score. }, ], }, }
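As a hedged sketch of how a request body for `evaluateInstances` can be assembled in Python: the helper below builds the `rougeInput` portion of the body following the schema above. The function name, sample strings, and the commented-out call pattern (service object, project, and location) are illustrative assumptions, not part of the generated API surface.

```python
def build_rouge_request(predictions, references, rouge_type="rougeL"):
    """Pair each prediction with its reference and attach a rouge metric spec,
    following the rougeInput schema for EvaluationService.EvaluateInstances."""
    return {
        "rougeInput": {
            "instances": [
                {"prediction": p, "reference": r}
                for p, r in zip(predictions, references)
            ],
            "metricSpec": {"rougeType": rouge_type, "useStemmer": True},
        }
    }

body = build_rouge_request(["The cat sat."], ["A cat sat on the mat."])
# With an authorized discovery client, the call would look roughly like
# (placeholder project/location; not executed here):
# response = service.projects().locations().evaluateInstances(
#     location="projects/my-project/locations/us-central1", body=body).execute()
```

Only the fields the chosen metric requires need to be populated; the other `*Input` keys in the schema are alternatives, one per metric.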
get(name, x__xgafv=None)
Gets information about a location.

Args:
  name: string, Resource name for the location. (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A resource that represents a Google Cloud location.
      "displayName": "A String", # The friendly name for this location, typically a nearby city name. For example, "Tokyo".
      "labels": { # Cross-service attributes for the location. For example {"cloud.googleapis.com/region": "us-east1"}
        "a_key": "A String",
      },
      "locationId": "A String", # The canonical id for this location. For example: `"us-east1"`.
      "metadata": { # Service-specific metadata. For example the available capacity at the given location.
        "a_key": "", # Properties of the object. Contains field @type with type URL.
      },
      "name": "A String", # Resource name for the location, which may vary between implementations. For example: `"projects/example-project/locations/us-east1"`
    }
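A small sketch of working with the Location resource that `get()` returns. The sample dict mirrors the schema above with illustrative values; the helper function is an assumption for demonstration, not part of the client library.

```python
def location_id_from_name(name):
    """Derive the short location id from a full Location resource name.
    Resource names look like "projects/{project}/locations/{location}"."""
    parts = name.split("/")
    if len(parts) >= 4 and parts[0] == "projects" and parts[2] == "locations":
        return parts[3]
    raise ValueError(f"unexpected location resource name: {name!r}")

# Illustrative response shape, per the schema above (values are made up):
location = {
    "name": "projects/example-project/locations/us-east1",
    "locationId": "us-east1",
    "displayName": "A String",
}
assert location_id_from_name(location["name"]) == location["locationId"]
```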
list(name, filter=None, pageSize=None, pageToken=None, x__xgafv=None)
Lists information about the supported locations for this service.

Args:
  name: string, The resource that owns the locations collection, if applicable. (required)
  filter: string, A filter to narrow down results to a preferred subset. The filtering language accepts strings like `"displayName=tokyo"`, and is documented in more detail in [AIP-160](https://google.aip.dev/160).
  pageSize: integer, The maximum number of results to return. If not set, the service selects a default.
  pageToken: string, A page token received from the `next_page_token` field in the response. Send that page token to receive the subsequent page.
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # The response message for Locations.ListLocations.
      "locations": [ # A list of locations that matches the specified filter in the request.
        { # A resource that represents a Google Cloud location.
          "displayName": "A String", # The friendly name for this location, typically a nearby city name. For example, "Tokyo".
          "labels": { # Cross-service attributes for the location. For example {"cloud.googleapis.com/region": "us-east1"}
            "a_key": "A String",
          },
          "locationId": "A String", # The canonical id for this location. For example: `"us-east1"`.
          "metadata": { # Service-specific metadata. For example the available capacity at the given location.
            "a_key": "", # Properties of the object. Contains field @type with type URL.
          },
          "name": "A String", # Resource name for the location, which may vary between implementations. For example: `"projects/example-project/locations/us-east1"`
        },
      ],
      "nextPageToken": "A String", # The standard List next-page token.
    }
list_next()
Retrieves the next page of results.

Args:
  previous_request: The request for the previous page. (required)
  previous_response: The response from the request for the previous page. (required)

Returns:
  A request object that you can call 'execute()' on to request the next page. Returns None if there are no more items in the collection.
retrieveContexts(parent, body=None, x__xgafv=None)
Retrieves relevant contexts for a query.

Args:
  parent: string, Required. The resource name of the Location from which to retrieve RagContexts. The user must have permission to make a call in the project. Format: `projects/{project}/locations/{location}`. (required)
  body: object, The request body.
    The object takes the form of:

{ # Request message for VertexRagService.RetrieveContexts.
  "query": { # A query to retrieve relevant contexts. # Required. Single RAG retrieve query.
    "ranking": { # Configurations for hybrid search results ranking. # Optional. Configurations for hybrid search results ranking.
      "alpha": 3.14, # Optional. Alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], where 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5, which balances sparse and dense vector search equally.
    },
    "similarityTopK": 42, # Optional. The number of contexts to retrieve.
    "text": "A String", # Optional. The query in text format to get relevant contexts.
  },
  "vertexRagStore": { # The data source for Vertex RagStore. # The data source for Vertex RagStore.
    "ragCorpora": [ # Optional. Deprecated. Please use rag_resources to specify the data source.
      "A String",
    ],
    "ragResources": [ # Optional. The representation of the rag source. It can be used to specify a corpus only, or rag files. Currently only one corpus, or multiple files from one corpus, is supported. In the future we may open up multiple-corpora support.
      { # The definition of the Rag resource.
        "ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
        "ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in the rag_corpus field.
          "A String",
        ],
      },
    ],
    "vectorDistanceThreshold": 3.14, # Optional. Only return contexts with vector distance smaller than the threshold.
  },
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Response message for VertexRagService.RetrieveContexts.
      "contexts": { # Relevant contexts for one query. # The contexts of the query.
        "contexts": [ # All its contexts.
          { # A context of the query.
            "distance": 3.14, # The distance between the query dense embedding vector and the context text vector.
            "sourceUri": "A String", # If the file is imported from Cloud Storage or Google Drive, source_uri will be the original file URI in Cloud Storage or Google Drive; if the file is uploaded, source_uri will be the file display name.
            "sparseDistance": 3.14, # The distance between the query sparse embedding vector and the context text vector.
            "text": "A String", # The text chunk.
          },
        ],
      },
    }
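A hedged sketch tying the request and response schemas above together: one helper builds a `retrieveContexts` body for a single corpus (using `ragResources` rather than the deprecated `ragCorpora`), and another ranks the returned chunks by dense vector distance. The function names, corpus path, and sample response are illustrative assumptions.

```python
def build_retrieve_request(corpus, query, top_k=10, alpha=0.5):
    """Build a RetrieveContexts request body querying one RagCorpus.
    alpha in [0, 1] blends sparse (0) and dense (1) vector search."""
    return {
        "query": {
            "text": query,
            "similarityTopK": top_k,
            "ranking": {"alpha": alpha},
        },
        "vertexRagStore": {"ragResources": [{"ragCorpus": corpus}]},
    }

def closest_contexts(response, limit=3):
    """Sort retrieved chunks by ascending dense vector distance
    (smaller distance = closer to the query embedding)."""
    contexts = response.get("contexts", {}).get("contexts", [])
    return sorted(contexts, key=lambda c: c.get("distance", float("inf")))[:limit]

req = build_retrieve_request(
    "projects/my-project/locations/us-central1/ragCorpora/my-corpus",  # placeholder
    "What is hybrid search?",
    top_k=5,
)
```

The request would then be passed as `body=` to `retrieveContexts(parent=..., body=req)` on an authorized client, and the response fed to `closest_contexts` to pick the best chunks for grounding.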