qaQuestions()
Returns the qaQuestions Resource.
close()
Close httplib2 connections.
create(parent, body=None, qaScorecardRevisionId=None, x__xgafv=None)
Creates a QaScorecardRevision.
delete(name, force=None, x__xgafv=None)
Deletes a QaScorecardRevision.
deploy(name, body=None, x__xgafv=None)
Deploy a QaScorecardRevision.
get(name, x__xgafv=None)
Gets a QaScorecardRevision.
list(parent, filter=None, pageSize=None, pageToken=None, x__xgafv=None)
Lists all revisions under the parent QaScorecard.
list_next()
Retrieves the next page of results.
tuneQaScorecardRevision(parent, body=None, x__xgafv=None)
Fine tune one or more QaModels.
undeploy(name, body=None, x__xgafv=None)
Undeploy a QaScorecardRevision.
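The short sketches after each method below assume a client has been built once, roughly as follows. This is a minimal sketch, not the definitive setup: the service name `contactcenterinsights`, version `v1`, the `cloud-platform` scope, and the `projects().locations().qaScorecards().revisions()` resource path are assumptions inferred from the resource names on this page, and the resource IDs are hypothetical placeholders.

    import google.auth
    from googleapiclient.discovery import build

    # Application Default Credentials; the cloud-platform scope is an assumption.
    credentials, _ = google.auth.default(
        scopes=["https://www.googleapis.com/auth/cloud-platform"])

    # Assumed service name/version and resource path; adjust to the service this page was generated for.
    service = build("contactcenterinsights", "v1", credentials=credentials)
    revisions = service.projects().locations().qaScorecards().revisions()

    # Hypothetical resource names reused in the sketches below.
    parent = "projects/my-project/locations/us-central1/qaScorecards/my-scorecard"
    revision_name = parent + "/revisions/my-revision"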
close()
Close httplib2 connections.
create(parent, body=None, qaScorecardRevisionId=None, x__xgafv=None)
Creates a QaScorecardRevision.

Args:
  parent: string, Required. The parent resource of the QaScorecardRevision. (required)
  body: object, The request body.
    The object takes the form of:

{ # A revision of a QaScorecard. Modifying published scorecard fields would invalidate existing scorecard results - the questions may have changed, or the score weighting will make existing scores impossible to understand. So changes must create a new revision, rather than modifying the existing resource.
  "alternateIds": [ # Output only. Alternative IDs for this revision of the scorecard, e.g., `latest`.
    "A String",
  ],
  "createTime": "A String", # Output only. The timestamp that the revision was created.
  "name": "A String", # Identifier. The name of the scorecard revision. Format: projects/{project}/locations/{location}/qaScorecards/{qa_scorecard}/revisions/{revision}
  "snapshot": { # A QaScorecard represents a collection of questions to be scored during analysis. # The snapshot of the scorecard at the time of this revision's creation.
    "createTime": "A String", # Output only. The time at which this scorecard was created.
    "description": "A String", # A text description explaining the intent of the scorecard.
    "displayName": "A String", # The user-specified display name of the scorecard.
    "name": "A String", # Identifier. The scorecard name. Format: projects/{project}/locations/{location}/qaScorecards/{qa_scorecard}
    "updateTime": "A String", # Output only. The most recent time at which the scorecard was updated.
  },
  "state": "A String", # Output only. State of the scorecard revision, indicating whether it's ready to be used in analysis.
}

  qaScorecardRevisionId: string, Optional. A unique ID for the new QaScorecardRevision. This ID will become the final component of the QaScorecardRevision's resource name. If no ID is specified, a server-generated ID will be used. This value should be 4-64 characters and must match the regular expression `^[a-z0-9-]{4,64}$`. Valid characters are `a-z-`.
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A revision of a QaScorecard. Modifying published scorecard fields would invalidate existing scorecard results - the questions may have changed, or the score weighting will make existing scores impossible to understand. So changes must create a new revision, rather than modifying the existing resource.
      "alternateIds": [ # Output only. Alternative IDs for this revision of the scorecard, e.g., `latest`.
        "A String",
      ],
      "createTime": "A String", # Output only. The timestamp that the revision was created.
      "name": "A String", # Identifier. The name of the scorecard revision. Format: projects/{project}/locations/{location}/qaScorecards/{qa_scorecard}/revisions/{revision}
      "snapshot": { # A QaScorecard represents a collection of questions to be scored during analysis. # The snapshot of the scorecard at the time of this revision's creation.
        "createTime": "A String", # Output only. The time at which this scorecard was created.
        "description": "A String", # A text description explaining the intent of the scorecard.
        "displayName": "A String", # The user-specified display name of the scorecard.
        "name": "A String", # Identifier. The scorecard name. Format: projects/{project}/locations/{location}/qaScorecards/{qa_scorecard}
        "updateTime": "A String", # Output only. The most recent time at which the scorecard was updated.
      },
      "state": "A String", # Output only. State of the scorecard revision, indicating whether it's ready to be used in analysis.
    }
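For example, a minimal create() sketch using the `revisions` collection and the hypothetical `parent` from the setup sketch above; the revision ID is a placeholder and the body is left empty because the fields shown above are largely output only or populated by the server:

    request = revisions.create(
        parent=parent,
        qaScorecardRevisionId="my-revision",  # optional; omit to let the server generate an ID
        body={},  # server populates state, createTime, snapshot, etc.
    )
    revision = request.execute()
    print(revision["name"], revision.get("state"))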
delete(name, force=None, x__xgafv=None)
Deletes a QaScorecardRevision.

Args:
  name: string, Required. The name of the QaScorecardRevision to delete. (required)
  force: boolean, Optional. If set to true, all of this QaScorecardRevision's child resources will also be deleted. Otherwise, the request will only succeed if it has none.
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); }
    }
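A sketch of deleting a revision, reusing the hypothetical `revision_name` from the setup sketch; per the Args above, `force=True` also deletes child resources:

    revisions.delete(
        name=revision_name,
        force=True,  # also delete child resources; otherwise the call fails if any exist
    ).execute()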
deploy(name, body=None, x__xgafv=None)
Deploy a QaScorecardRevision.

Args:
  name: string, Required. The name of the QaScorecardRevision to deploy. (required)
  body: object, The request body.
    The object takes the form of:

{ # The request to deploy a QaScorecardRevision
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A revision of a QaScorecard. Modifying published scorecard fields would invalidate existing scorecard results - the questions may have changed, or the score weighting will make existing scores impossible to understand. So changes must create a new revision, rather than modifying the existing resource.
      "alternateIds": [ # Output only. Alternative IDs for this revision of the scorecard, e.g., `latest`.
        "A String",
      ],
      "createTime": "A String", # Output only. The timestamp that the revision was created.
      "name": "A String", # Identifier. The name of the scorecard revision. Format: projects/{project}/locations/{location}/qaScorecards/{qa_scorecard}/revisions/{revision}
      "snapshot": { # A QaScorecard represents a collection of questions to be scored during analysis. # The snapshot of the scorecard at the time of this revision's creation.
        "createTime": "A String", # Output only. The time at which this scorecard was created.
        "description": "A String", # A text description explaining the intent of the scorecard.
        "displayName": "A String", # The user-specified display name of the scorecard.
        "name": "A String", # Identifier. The scorecard name. Format: projects/{project}/locations/{location}/qaScorecards/{qa_scorecard}
        "updateTime": "A String", # Output only. The most recent time at which the scorecard was updated.
      },
      "state": "A String", # Output only. State of the scorecard revision, indicating whether it's ready to be used in analysis.
    }
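A sketch of deploying a revision; the request body shown above has no settable fields, so an empty object is passed:

    deployed = revisions.deploy(
        name=revision_name,
        body={},  # the deploy request message carries no fields
    ).execute()
    print(deployed.get("state"))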
get(name, x__xgafv=None)
Gets a QaScorecardRevision.

Args:
  name: string, Required. The name of the QaScorecardRevision to get. (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A revision of a QaScorecard. Modifying published scorecard fields would invalidate existing scorecard results - the questions may have changed, or the score weighting will make existing scores impossible to understand. So changes must create a new revision, rather than modifying the existing resource.
      "alternateIds": [ # Output only. Alternative IDs for this revision of the scorecard, e.g., `latest`.
        "A String",
      ],
      "createTime": "A String", # Output only. The timestamp that the revision was created.
      "name": "A String", # Identifier. The name of the scorecard revision. Format: projects/{project}/locations/{location}/qaScorecards/{qa_scorecard}/revisions/{revision}
      "snapshot": { # A QaScorecard represents a collection of questions to be scored during analysis. # The snapshot of the scorecard at the time of this revision's creation.
        "createTime": "A String", # Output only. The time at which this scorecard was created.
        "description": "A String", # A text description explaining the intent of the scorecard.
        "displayName": "A String", # The user-specified display name of the scorecard.
        "name": "A String", # Identifier. The scorecard name. Format: projects/{project}/locations/{location}/qaScorecards/{qa_scorecard}
        "updateTime": "A String", # Output only. The most recent time at which the scorecard was updated.
      },
      "state": "A String", # Output only. State of the scorecard revision, indicating whether it's ready to be used in analysis.
    }
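A sketch of fetching a single revision by its hypothetical name:

    revision = revisions.get(name=revision_name).execute()
    print(revision["name"], revision.get("state"), revision.get("alternateIds", []))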
list(parent, filter=None, pageSize=None, pageToken=None, x__xgafv=None)
Lists all revisions under the parent QaScorecard.

Args:
  parent: string, Required. The parent resource of the scorecard revisions. To list all revisions of all scorecards, substitute the QaScorecard ID with a '-' character. (required)
  filter: string, Optional. A filter to reduce results to a specific subset. Useful for querying scorecard revisions with specific properties.
  pageSize: integer, Optional. The maximum number of scorecard revisions to return in the response. If the value is zero, the service will select a default size. A call might return fewer objects than requested. A non-empty `next_page_token` in the response indicates that more data is available.
  pageToken: string, Optional. The value returned by the last `ListQaScorecardRevisionsResponse`. This value indicates that this is a continuation of a prior `ListQaScorecardRevisions` call and that the system should return the next page of data.
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # The response from a ListQaScorecardRevisions request.
      "nextPageToken": "A String", # A token, which can be sent as `page_token` to retrieve the next page. If this field is omitted, there are no subsequent pages.
      "qaScorecardRevisions": [ # The QaScorecards under the parent.
        { # A revision of a QaScorecard. Modifying published scorecard fields would invalidate existing scorecard results - the questions may have changed, or the score weighting will make existing scores impossible to understand. So changes must create a new revision, rather than modifying the existing resource.
          "alternateIds": [ # Output only. Alternative IDs for this revision of the scorecard, e.g., `latest`.
            "A String",
          ],
          "createTime": "A String", # Output only. The timestamp that the revision was created.
          "name": "A String", # Identifier. The name of the scorecard revision. Format: projects/{project}/locations/{location}/qaScorecards/{qa_scorecard}/revisions/{revision}
          "snapshot": { # A QaScorecard represents a collection of questions to be scored during analysis. # The snapshot of the scorecard at the time of this revision's creation.
            "createTime": "A String", # Output only. The time at which this scorecard was created.
            "description": "A String", # A text description explaining the intent of the scorecard.
            "displayName": "A String", # The user-specified display name of the scorecard.
            "name": "A String", # Identifier. The scorecard name. Format: projects/{project}/locations/{location}/qaScorecards/{qa_scorecard}
            "updateTime": "A String", # Output only. The most recent time at which the scorecard was updated.
          },
          "state": "A String", # Output only. State of the scorecard revision, indicating whether it's ready to be used in analysis.
        },
      ],
    }
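A sketch of a single list() call; per the Args above, substituting '-' for the QaScorecard ID lists revisions across all scorecards (the project and location IDs are hypothetical):

    response = revisions.list(
        parent="projects/my-project/locations/us-central1/qaScorecards/-",
        pageSize=50,
    ).execute()
    for rev in response.get("qaScorecardRevisions", []):
        print(rev["name"], rev.get("state"))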
list_next()
Retrieves the next page of results.

Args:
  previous_request: The request for the previous page. (required)
  previous_response: The response from the request for the previous page. (required)

Returns:
  A request object that you can call 'execute()' on to request the next page. Returns None if there are no more items in the collection.
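A sketch of paging through all results by chaining list() and list_next(), using the hypothetical `parent` from the setup sketch:

    request = revisions.list(parent=parent, pageSize=50)
    while request is not None:
        response = request.execute()
        for rev in response.get("qaScorecardRevisions", []):
            print(rev["name"])
        # list_next returns None once the response has no nextPageToken.
        request = revisions.list_next(previous_request=request, previous_response=response)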
tuneQaScorecardRevision(parent, body=None, x__xgafv=None)
Fine tune one or more QaModels.

Args:
  parent: string, Required. The parent resource for the new fine tuning job instance. (required)
  body: object, The request body.
    The object takes the form of:

{ # Request for TuneQaScorecardRevision endpoint.
  "filter": "A String", # Required. Filter for selecting the feedback labels that need to be used for training. This filter can be used to limit the feedback labels used for tuning to, for example, feedback labels created or updated within a specific time window.
  "validateOnly": True or False, # Optional. Run in validate-only mode; no fine tuning will actually run. Data quality validations, such as checks of the training data distribution, will run. Even when set to false, the data quality validations will still run, but once the validations complete the fine tuning will proceed, if applicable.
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # This resource represents a long-running operation that is the result of a network API call.
      "done": True or False, # If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available.
      "error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # The error result of the operation in case of failure or cancellation.
        "code": 42, # The status code, which should be an enum value of google.rpc.Code.
        "details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
          {
            "a_key": "", # Properties of the object. Contains field @type with type URL.
          },
        ],
        "message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
      },
      "metadata": { # Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
        "a_key": "", # Properties of the object. Contains field @type with type URL.
      },
      "name": "A String", # The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`.
      "response": { # The normal, successful response of the operation. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
        "a_key": "", # Properties of the object. Contains field @type with type URL.
      },
    }
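A sketch of starting a tuning job in validate-only mode and checking the returned long-running operation. The feedback-label filter string is a hypothetical placeholder, passing the revision name as `parent` is an assumption, and polling relies on the service exposing the standard projects.locations.operations.get method:

    operation = revisions.tuneQaScorecardRevision(
        parent=revision_name,  # assumed: the QaScorecardRevision whose models are tuned
        body={
            "filter": "labeling_task=my-task",  # hypothetical filter over feedback labels
            "validateOnly": True,               # data-quality validation only, no tuning
        },
    ).execute()

    # Poll the long-running operation (assumes a standard operations resource).
    op = service.projects().locations().operations().get(name=operation["name"]).execute()
    print(op.get("done", False), op.get("error"))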
undeploy(name, body=None, x__xgafv=None)
Undeploy a QaScorecardRevision.

Args:
  name: string, Required. The name of the QaScorecardRevision to undeploy. (required)
  body: object, The request body.
    The object takes the form of:

{ # The request to undeploy a QaScorecardRevision
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A revision of a QaScorecard. Modifying published scorecard fields would invalidate existing scorecard results - the questions may have changed, or the score weighting will make existing scores impossible to understand. So changes must create a new revision, rather than modifying the existing resource.
      "alternateIds": [ # Output only. Alternative IDs for this revision of the scorecard, e.g., `latest`.
        "A String",
      ],
      "createTime": "A String", # Output only. The timestamp that the revision was created.
      "name": "A String", # Identifier. The name of the scorecard revision. Format: projects/{project}/locations/{location}/qaScorecards/{qa_scorecard}/revisions/{revision}
      "snapshot": { # A QaScorecard represents a collection of questions to be scored during analysis. # The snapshot of the scorecard at the time of this revision's creation.
        "createTime": "A String", # Output only. The time at which this scorecard was created.
        "description": "A String", # A text description explaining the intent of the scorecard.
        "displayName": "A String", # The user-specified display name of the scorecard.
        "name": "A String", # Identifier. The scorecard name. Format: projects/{project}/locations/{location}/qaScorecards/{qa_scorecard}
        "updateTime": "A String", # Output only. The most recent time at which the scorecard was updated.
      },
      "state": "A String", # Output only. State of the scorecard revision, indicating whether it's ready to be used in analysis.
    }
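A sketch of undeploying a revision; like deploy, the request body shown above has no settable fields:

    undeployed = revisions.undeploy(
        name=revision_name,
        body={},  # the undeploy request message carries no fields
    ).execute()
    print(undeployed.get("state"))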