close()
Close httplib2 connections.
create(parent, body=None, qaQuestionId=None, x__xgafv=None)
Creates a QaQuestion.
delete(name, x__xgafv=None)
Deletes a QaQuestion.
get(name, x__xgafv=None)
Gets a QaQuestion.
list(parent, pageSize=None, pageToken=None, x__xgafv=None)
Lists QaQuestions.
list_next()
Retrieves the next page of results.
patch(name, body=None, updateMask=None, x__xgafv=None)
Updates a QaQuestion.
close()
Close httplib2 connections.
create(parent, body=None, qaQuestionId=None, x__xgafv=None)
Creates a QaQuestion.

Args:
  parent: string, Required. The parent resource of the QaQuestion. (required)
  body: object, The request body.
    The object takes the form of:

{ # A single question to be scored by the Insights QA feature.
  "abbreviation": "A String", # Short, descriptive string, used in the UI where it's not practical to display the full question body. E.g., "Greeting".
  "answerChoices": [ # A list of valid answers to the question, which the LLM must choose from.
    { # Message representing a possible answer to the question.
      "boolValue": True or False, # Boolean value.
      "key": "A String", # A short string used as an identifier.
      "naValue": True or False, # A value of "Not Applicable (N/A)". If provided, this field may only be set to `true`. If a question receives this answer, it will be excluded from any score calculations.
      "numValue": 3.14, # Numerical value.
      "score": 3.14, # Numerical score of the answer, used for generating the overall score of a QaScorecardResult. If the answer uses na_value, this field is unused.
      "strValue": "A String", # String value.
    },
  ],
  "answerInstructions": "A String", # Instructions describing how to determine the answer.
  "createTime": "A String", # Output only. The time at which this question was created.
  "metrics": { # A wrapper representing metrics calculated against a test set on an LLM that was fine-tuned for this question. # Metrics of the underlying tuned LLM over a holdout/test set while fine-tuning the underlying LLM for the given question. This field will be populated only if the question is part of a scorecard revision that has been tuned.
    "accuracy": 3.14, # Output only. Accuracy of the model. Measures the percentage of correct answers the model gave on the test set.
  },
  "name": "A String", # Identifier. The resource name of the question. Format: projects/{project}/locations/{location}/qaScorecards/{qa_scorecard}/revisions/{revision}/qaQuestions/{qa_question}
  "order": 42, # Defines the order of the question within its parent scorecard revision.
  "questionBody": "A String", # Question text. E.g., "Did the agent greet the customer?"
  "tags": [ # User-defined list of arbitrary tags for the question. Used for grouping/organization and for weighting the score of each question.
    "A String",
  ],
  "tuningMetadata": { # Metadata about the tuning operation for the question. Will only be set if a scorecard containing this question has been tuned. # Metadata about the tuning operation for the question. This field will be populated only if the question is part of a scorecard revision that has been tuned.
    "datasetValidationWarnings": [ # A list of any applicable data validation warnings about the question's feedback labels.
      "A String",
    ],
    "totalValidLabelCount": "A String", # Total number of valid labels provided for the question at the time of tuning.
    "tuningError": "A String", # Error status of the tuning operation for the question. Will only be set if the tuning operation failed.
  },
  "updateTime": "A String", # Output only. The most recent time at which the question was updated.
}

  qaQuestionId: string, Optional. A unique ID for the new question. This ID will become the final component of the question's resource name. If no ID is specified, a server-generated ID will be used. This value should be 4-64 characters and must match the regular expression `^[a-z0-9-]{4,64}$`. Valid characters are `a-z`, `0-9`, and `-`.
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A single question to be scored by the Insights QA feature.
      "abbreviation": "A String", # Short, descriptive string, used in the UI where it's not practical to display the full question body. E.g., "Greeting".
      "answerChoices": [ # A list of valid answers to the question, which the LLM must choose from.
        { # Message representing a possible answer to the question.
          "boolValue": True or False, # Boolean value.
          "key": "A String", # A short string used as an identifier.
          "naValue": True or False, # A value of "Not Applicable (N/A)". If provided, this field may only be set to `true`. If a question receives this answer, it will be excluded from any score calculations.
          "numValue": 3.14, # Numerical value.
          "score": 3.14, # Numerical score of the answer, used for generating the overall score of a QaScorecardResult. If the answer uses na_value, this field is unused.
          "strValue": "A String", # String value.
        },
      ],
      "answerInstructions": "A String", # Instructions describing how to determine the answer.
      "createTime": "A String", # Output only. The time at which this question was created.
      "metrics": { # A wrapper representing metrics calculated against a test set on an LLM that was fine-tuned for this question. # Metrics of the underlying tuned LLM over a holdout/test set while fine-tuning the underlying LLM for the given question. This field will be populated only if the question is part of a scorecard revision that has been tuned.
        "accuracy": 3.14, # Output only. Accuracy of the model. Measures the percentage of correct answers the model gave on the test set.
      },
      "name": "A String", # Identifier. The resource name of the question. Format: projects/{project}/locations/{location}/qaScorecards/{qa_scorecard}/revisions/{revision}/qaQuestions/{qa_question}
      "order": 42, # Defines the order of the question within its parent scorecard revision.
      "questionBody": "A String", # Question text. E.g., "Did the agent greet the customer?"
      "tags": [ # User-defined list of arbitrary tags for the question. Used for grouping/organization and for weighting the score of each question.
        "A String",
      ],
      "tuningMetadata": { # Metadata about the tuning operation for the question. Will only be set if a scorecard containing this question has been tuned. # Metadata about the tuning operation for the question. This field will be populated only if the question is part of a scorecard revision that has been tuned.
        "datasetValidationWarnings": [ # A list of any applicable data validation warnings about the question's feedback labels.
          "A String",
        ],
        "totalValidLabelCount": "A String", # Total number of valid labels provided for the question at the time of tuning.
        "tuningError": "A String", # Error status of the tuning operation for the question. Will only be set if the tuning operation failed.
      },
      "updateTime": "A String", # Output only. The most recent time at which the question was updated.
    }
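A minimal sketch of building a request body for `create()`. The project, location, scorecard, revision, and question values are hypothetical placeholders; the commented-out client calls assume the standard `googleapiclient.discovery.build` entry point for this API and are not executed here.

```python
import re

# Hypothetical parent resource; replace with your own project/location/scorecard.
parent = ("projects/my-project/locations/us-central1/"
          "qaScorecards/my-scorecard/revisions/latest")

# A minimal QaQuestion body using only fields documented above.
qa_question_body = {
    "abbreviation": "Greeting",
    "questionBody": "Did the agent greet the customer?",
    "answerInstructions": ("Answer yes only if the agent greets the customer "
                           "within the first two turns of the conversation."),
    "answerChoices": [
        {"key": "yes", "boolValue": True, "score": 1.0},
        {"key": "no", "boolValue": False, "score": 0.0},
        {"key": "na", "naValue": True},  # excluded from score calculations
    ],
}

# qaQuestionId must be 4-64 characters matching ^[a-z0-9-]{4,64}$.
qa_question_id = "greeting-check"
assert re.fullmatch(r"[a-z0-9-]{4,64}", qa_question_id)

# With an authenticated client the call would look like this (not executed here):
# from googleapiclient.discovery import build
# client = build("contactcenterinsights", "v1")
# request = (client.projects().locations().qaScorecards().revisions()
#            .qaQuestions().create(parent=parent, body=qa_question_body,
#                                  qaQuestionId=qa_question_id))
# response = request.execute()
```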
delete(name, x__xgafv=None)
Deletes a QaQuestion.

Args:
  name: string, Required. The name of the QaQuestion to delete. (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); }
    }
get(name, x__xgafv=None)
Gets a QaQuestion.

Args:
  name: string, Required. The name of the QaQuestion to get. (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A single question to be scored by the Insights QA feature.
      "abbreviation": "A String", # Short, descriptive string, used in the UI where it's not practical to display the full question body. E.g., "Greeting".
      "answerChoices": [ # A list of valid answers to the question, which the LLM must choose from.
        { # Message representing a possible answer to the question.
          "boolValue": True or False, # Boolean value.
          "key": "A String", # A short string used as an identifier.
          "naValue": True or False, # A value of "Not Applicable (N/A)". If provided, this field may only be set to `true`. If a question receives this answer, it will be excluded from any score calculations.
          "numValue": 3.14, # Numerical value.
          "score": 3.14, # Numerical score of the answer, used for generating the overall score of a QaScorecardResult. If the answer uses na_value, this field is unused.
          "strValue": "A String", # String value.
        },
      ],
      "answerInstructions": "A String", # Instructions describing how to determine the answer.
      "createTime": "A String", # Output only. The time at which this question was created.
      "metrics": { # A wrapper representing metrics calculated against a test set on an LLM that was fine-tuned for this question. # Metrics of the underlying tuned LLM over a holdout/test set while fine-tuning the underlying LLM for the given question. This field will be populated only if the question is part of a scorecard revision that has been tuned.
        "accuracy": 3.14, # Output only. Accuracy of the model. Measures the percentage of correct answers the model gave on the test set.
      },
      "name": "A String", # Identifier. The resource name of the question. Format: projects/{project}/locations/{location}/qaScorecards/{qa_scorecard}/revisions/{revision}/qaQuestions/{qa_question}
      "order": 42, # Defines the order of the question within its parent scorecard revision.
      "questionBody": "A String", # Question text. E.g., "Did the agent greet the customer?"
      "tags": [ # User-defined list of arbitrary tags for the question. Used for grouping/organization and for weighting the score of each question.
        "A String",
      ],
      "tuningMetadata": { # Metadata about the tuning operation for the question. Will only be set if a scorecard containing this question has been tuned. # Metadata about the tuning operation for the question. This field will be populated only if the question is part of a scorecard revision that has been tuned.
        "datasetValidationWarnings": [ # A list of any applicable data validation warnings about the question's feedback labels.
          "A String",
        ],
        "totalValidLabelCount": "A String", # Total number of valid labels provided for the question at the time of tuning.
        "tuningError": "A String", # Error status of the tuning operation for the question. Will only be set if the tuning operation failed.
      },
      "updateTime": "A String", # Output only. The most recent time at which the question was updated.
    }
list(parent, pageSize=None, pageToken=None, x__xgafv=None)
Lists QaQuestions.

Args:
  parent: string, Required. The parent resource of the questions. (required)
  pageSize: integer, Optional. The maximum number of questions to return in the response. If the value is zero, the service will select a default size. A call might return fewer objects than requested. A non-empty `next_page_token` in the response indicates that more data is available.
  pageToken: string, Optional. The value returned by the last `ListQaQuestionsResponse`. This value indicates that this is a continuation of a prior `ListQaQuestions` call and that the system should return the next page of data.
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # The response from a ListQaQuestions request.
      "nextPageToken": "A String", # A token, which can be sent as `page_token` to retrieve the next page. If this field is omitted, there are no subsequent pages.
      "qaQuestions": [ # The QaQuestions under the parent.
        { # A single question to be scored by the Insights QA feature.
          "abbreviation": "A String", # Short, descriptive string, used in the UI where it's not practical to display the full question body. E.g., "Greeting".
          "answerChoices": [ # A list of valid answers to the question, which the LLM must choose from.
            { # Message representing a possible answer to the question.
              "boolValue": True or False, # Boolean value.
              "key": "A String", # A short string used as an identifier.
              "naValue": True or False, # A value of "Not Applicable (N/A)". If provided, this field may only be set to `true`. If a question receives this answer, it will be excluded from any score calculations.
              "numValue": 3.14, # Numerical value.
              "score": 3.14, # Numerical score of the answer, used for generating the overall score of a QaScorecardResult. If the answer uses na_value, this field is unused.
              "strValue": "A String", # String value.
            },
          ],
          "answerInstructions": "A String", # Instructions describing how to determine the answer.
          "createTime": "A String", # Output only. The time at which this question was created.
          "metrics": { # A wrapper representing metrics calculated against a test set on an LLM that was fine-tuned for this question. # Metrics of the underlying tuned LLM over a holdout/test set while fine-tuning the underlying LLM for the given question. This field will be populated only if the question is part of a scorecard revision that has been tuned.
            "accuracy": 3.14, # Output only. Accuracy of the model. Measures the percentage of correct answers the model gave on the test set.
          },
          "name": "A String", # Identifier. The resource name of the question. Format: projects/{project}/locations/{location}/qaScorecards/{qa_scorecard}/revisions/{revision}/qaQuestions/{qa_question}
          "order": 42, # Defines the order of the question within its parent scorecard revision.
          "questionBody": "A String", # Question text. E.g., "Did the agent greet the customer?"
          "tags": [ # User-defined list of arbitrary tags for the question. Used for grouping/organization and for weighting the score of each question.
            "A String",
          ],
          "tuningMetadata": { # Metadata about the tuning operation for the question. Will only be set if a scorecard containing this question has been tuned. # Metadata about the tuning operation for the question. This field will be populated only if the question is part of a scorecard revision that has been tuned.
            "datasetValidationWarnings": [ # A list of any applicable data validation warnings about the question's feedback labels.
              "A String",
            ],
            "totalValidLabelCount": "A String", # Total number of valid labels provided for the question at the time of tuning.
            "tuningError": "A String", # Error status of the tuning operation for the question. Will only be set if the tuning operation failed.
          },
          "updateTime": "A String", # Output only. The most recent time at which the question was updated.
        },
      ],
    }
list_next()
Retrieves the next page of results.

Args:
  previous_request: The request for the previous page. (required)
  previous_response: The response from the request for the previous page. (required)

Returns:
  A request object that you can call 'execute()' on to request the next page. Returns None if there are no more items in the collection.
patch(name, body=None, updateMask=None, x__xgafv=None)
Updates a QaQuestion.

Args:
  name: string, Identifier. The resource name of the question. Format: projects/{project}/locations/{location}/qaScorecards/{qa_scorecard}/revisions/{revision}/qaQuestions/{qa_question} (required)
  body: object, The request body.
    The object takes the form of:

{ # A single question to be scored by the Insights QA feature.
  "abbreviation": "A String", # Short, descriptive string, used in the UI where it's not practical to display the full question body. E.g., "Greeting".
  "answerChoices": [ # A list of valid answers to the question, which the LLM must choose from.
    { # Message representing a possible answer to the question.
      "boolValue": True or False, # Boolean value.
      "key": "A String", # A short string used as an identifier.
      "naValue": True or False, # A value of "Not Applicable (N/A)". If provided, this field may only be set to `true`. If a question receives this answer, it will be excluded from any score calculations.
      "numValue": 3.14, # Numerical value.
      "score": 3.14, # Numerical score of the answer, used for generating the overall score of a QaScorecardResult. If the answer uses na_value, this field is unused.
      "strValue": "A String", # String value.
    },
  ],
  "answerInstructions": "A String", # Instructions describing how to determine the answer.
  "createTime": "A String", # Output only. The time at which this question was created.
  "metrics": { # A wrapper representing metrics calculated against a test set on an LLM that was fine-tuned for this question. # Metrics of the underlying tuned LLM over a holdout/test set while fine-tuning the underlying LLM for the given question. This field will be populated only if the question is part of a scorecard revision that has been tuned.
    "accuracy": 3.14, # Output only. Accuracy of the model. Measures the percentage of correct answers the model gave on the test set.
  },
  "name": "A String", # Identifier. The resource name of the question. Format: projects/{project}/locations/{location}/qaScorecards/{qa_scorecard}/revisions/{revision}/qaQuestions/{qa_question}
  "order": 42, # Defines the order of the question within its parent scorecard revision.
  "questionBody": "A String", # Question text. E.g., "Did the agent greet the customer?"
  "tags": [ # User-defined list of arbitrary tags for the question. Used for grouping/organization and for weighting the score of each question.
    "A String",
  ],
  "tuningMetadata": { # Metadata about the tuning operation for the question. Will only be set if a scorecard containing this question has been tuned. # Metadata about the tuning operation for the question. This field will be populated only if the question is part of a scorecard revision that has been tuned.
    "datasetValidationWarnings": [ # A list of any applicable data validation warnings about the question's feedback labels.
      "A String",
    ],
    "totalValidLabelCount": "A String", # Total number of valid labels provided for the question at the time of tuning.
    "tuningError": "A String", # Error status of the tuning operation for the question. Will only be set if the tuning operation failed.
  },
  "updateTime": "A String", # Output only. The most recent time at which the question was updated.
}

  updateMask: string, Required. The list of fields to be updated. All possible fields can be updated by passing `*`, or a subset of the following updateable fields can be provided:
    * `abbreviation`
    * `answer_choices`
    * `answer_instructions`
    * `order`
    * `question_body`
    * `tags`
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A single question to be scored by the Insights QA feature.
      "abbreviation": "A String", # Short, descriptive string, used in the UI where it's not practical to display the full question body. E.g., "Greeting".
      "answerChoices": [ # A list of valid answers to the question, which the LLM must choose from.
        { # Message representing a possible answer to the question.
          "boolValue": True or False, # Boolean value.
          "key": "A String", # A short string used as an identifier.
          "naValue": True or False, # A value of "Not Applicable (N/A)". If provided, this field may only be set to `true`. If a question receives this answer, it will be excluded from any score calculations.
          "numValue": 3.14, # Numerical value.
          "score": 3.14, # Numerical score of the answer, used for generating the overall score of a QaScorecardResult. If the answer uses na_value, this field is unused.
          "strValue": "A String", # String value.
        },
      ],
      "answerInstructions": "A String", # Instructions describing how to determine the answer.
      "createTime": "A String", # Output only. The time at which this question was created.
      "metrics": { # A wrapper representing metrics calculated against a test set on an LLM that was fine-tuned for this question. # Metrics of the underlying tuned LLM over a holdout/test set while fine-tuning the underlying LLM for the given question. This field will be populated only if the question is part of a scorecard revision that has been tuned.
        "accuracy": 3.14, # Output only. Accuracy of the model. Measures the percentage of correct answers the model gave on the test set.
      },
      "name": "A String", # Identifier. The resource name of the question. Format: projects/{project}/locations/{location}/qaScorecards/{qa_scorecard}/revisions/{revision}/qaQuestions/{qa_question}
      "order": 42, # Defines the order of the question within its parent scorecard revision.
      "questionBody": "A String", # Question text. E.g., "Did the agent greet the customer?"
      "tags": [ # User-defined list of arbitrary tags for the question. Used for grouping/organization and for weighting the score of each question.
        "A String",
      ],
      "tuningMetadata": { # Metadata about the tuning operation for the question. Will only be set if a scorecard containing this question has been tuned. # Metadata about the tuning operation for the question. This field will be populated only if the question is part of a scorecard revision that has been tuned.
        "datasetValidationWarnings": [ # A list of any applicable data validation warnings about the question's feedback labels.
          "A String",
        ],
        "totalValidLabelCount": "A String", # Total number of valid labels provided for the question at the time of tuning.
        "tuningError": "A String", # Error status of the tuning operation for the question. Will only be set if the tuning operation failed.
      },
      "updateTime": "A String", # Output only. The most recent time at which the question was updated.
    }
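The `updateMask` values use snake_case field paths while the JSON body uses camelCase keys. A small sketch of deriving the mask from the body being sent, so the two stay in sync; the field values and the commented-out client call are hypothetical placeholders.

```python
import re

# Fields to change on an existing question; keys are the camelCase body
# fields documented above, values are hypothetical new contents.
updates = {
    "questionBody": "Did the agent greet the customer by name?",
    "tags": ["greeting", "politeness"],
    "order": 2,
}

def to_snake(field):
    """Convert a camelCase body field name to a snake_case updateMask path."""
    return re.sub(r"(?<!^)(?=[A-Z])", "_", field).lower()

# updateMask is a comma-separated list of snake_case paths; pass "*" instead
# to update all mutable fields at once.
update_mask = ",".join(to_snake(k) for k in updates)

# With an authenticated client the call would look like this (not executed here):
# request = qa_questions.patch(
#     name=("projects/my-project/locations/us-central1/qaScorecards/"
#           "my-scorecard/revisions/latest/qaQuestions/greeting-check"),
#     body=updates,
#     updateMask=update_mask,
# )
# updated = request.execute()
```

Only the documented updateable fields (`abbreviation`, `answer_choices`, `answer_instructions`, `order`, `question_body`, `tags`) may appear in the mask; output-only fields such as `createTime` are rejected.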