close()
Close httplib2 connections.
get(name, x__xgafv=None)
Retrieves the specified conversation dataset.
importConversationData(name, body=None, x__xgafv=None)
Import data into the specified conversation dataset. Note that it is not allowed to import data to a conversation dataset that already has data in it. This method is a [long-running operation](https://cloud.google.com/dialogflow/es/docs/how/long-running-operations). The returned `Operation` type has the following method-specific fields: - `metadata`: ImportConversationDataOperationMetadata - `response`: ImportConversationDataOperationResponse
list(parent, pageSize=None, pageToken=None, x__xgafv=None)
Returns the list of all conversation datasets in the specified project and location.
list_next()
Retrieves the next page of results.
close()
Close httplib2 connections.
get(name, x__xgafv=None)
Retrieves the specified conversation dataset.

Args:
  name: string, Required. The conversation dataset to retrieve. Format: `projects//locations//conversationDatasets/` (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Represents a conversation dataset that a user imports raw data into. The data inside ConversationDataset can not be changed after ImportConversationData finishes (and calling ImportConversationData on a dataset that already has data is not allowed).
      "conversationCount": "A String", # Output only. The number of conversations this conversation dataset contains.
      "conversationInfo": { # Represents metadata of a conversation. # Output only. Metadata set during conversation data import.
        "languageCode": "A String", # Optional. The language code of the conversation data within this dataset. See https://cloud.google.com/apis/design/standard_fields for more information. Supports all UTF-8 languages.
      },
      "createTime": "A String", # Output only. Creation time of this dataset.
      "description": "A String", # Optional. The description of the dataset. Maximum of 10000 bytes.
      "displayName": "A String", # Required. The display name of the dataset. Maximum of 64 bytes.
      "inputConfig": { # Represents the configuration of importing a set of conversation files in Google Cloud Storage. # Output only. Input configurations set during conversation data import.
        "gcsSource": { # Google Cloud Storage location for the inputs. # The Cloud Storage URI has the form gs:////agent*.json. Wildcards are allowed and will be expanded into all matched JSON files, which will be read as one conversation per file.
          "uris": [ # Required. Google Cloud Storage URIs for the inputs. A URI is of the form: `gs://bucket/object-prefix-or-name` Whether a prefix or name is used depends on the use case.
            "A String",
          ],
        },
      },
      "name": "A String", # Output only. ConversationDataset resource name. Format: `projects//locations//conversationDatasets/`
      "satisfiesPzi": True or False, # Output only. A read only boolean field reflecting Zone Isolation status of the dataset.
      "satisfiesPzs": True or False, # Output only. A read only boolean field reflecting Zone Separation status of the dataset.
    }
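A minimal usage sketch for `get()`. The project, location, and dataset IDs below are placeholders, and the commented-out call assumes a `service` object already built with `googleapiclient.discovery.build("dialogflow", "v2")` and valid credentials:

```python
# Hypothetical IDs; substitute your own project, location, and dataset.
project_id = "my-project"
location = "global"
dataset_id = "my-dataset"

# The `name` argument follows the resource-name format shown above.
name = f"projects/{project_id}/locations/{location}/conversationDatasets/{dataset_id}"

# With a built service object (requires credentials, so shown commented out):
# dataset = service.projects().locations().conversationDatasets().get(name=name).execute()
# dataset["displayName"] and dataset["conversationCount"] are returned as strings.
```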
importConversationData(name, body=None, x__xgafv=None)
Import data into the specified conversation dataset. Note that it is not allowed to import data to a conversation dataset that already has data in it. This method is a [long-running operation](https://cloud.google.com/dialogflow/es/docs/how/long-running-operations). The returned `Operation` type has the following method-specific fields:

- `metadata`: ImportConversationDataOperationMetadata
- `response`: ImportConversationDataOperationResponse

Args:
  name: string, Required. Dataset resource name. Format: `projects//locations//conversationDatasets/` (required)
  body: object, The request body.
    The object takes the form of:

{ # The request message for ConversationDatasets.ImportConversationData.
  "inputConfig": { # Represents the configuration of importing a set of conversation files in Google Cloud Storage. # Required. Configuration describing where to import data from.
    "gcsSource": { # Google Cloud Storage location for the inputs. # The Cloud Storage URI has the form gs:////agent*.json. Wildcards are allowed and will be expanded into all matched JSON files, which will be read as one conversation per file.
      "uris": [ # Required. Google Cloud Storage URIs for the inputs. A URI is of the form: `gs://bucket/object-prefix-or-name` Whether a prefix or name is used depends on the use case.
        "A String",
      ],
    },
  },
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # This resource represents a long-running operation that is the result of a network API call.
      "done": True or False, # If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available.
      "error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # The error result of the operation in case of failure or cancellation.
        "code": 42, # The status code, which should be an enum value of google.rpc.Code.
        "details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
          {
            "a_key": "", # Properties of the object. Contains field @type with type URL.
          },
        ],
        "message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
      },
      "metadata": { # Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
        "a_key": "", # Properties of the object. Contains field @type with type URL.
      },
      "name": "A String", # The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`.
      "response": { # The normal, successful response of the operation. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
        "a_key": "", # Properties of the object. Contains field @type with type URL.
      },
    }
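A sketch of assembling the `importConversationData` request body. The bucket name and object prefix are hypothetical; remember that the target dataset must still be empty when the import starts:

```python
# Request body for importConversationData. The gs:// URI is a placeholder;
# wildcards are expanded, and each matched JSON file is read as one conversation.
body = {
    "inputConfig": {
        "gcsSource": {
            "uris": ["gs://my-bucket/conversations/*.json"],  # hypothetical URI
        }
    }
}

# With a built service object the call returns a long-running Operation
# (shown commented out because a real call needs credentials):
# op = service.projects().locations().conversationDatasets().importConversationData(
#     name=name, body=body).execute()
# Poll the operation named op["name"] until its "done" field is True.
```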
list(parent, pageSize=None, pageToken=None, x__xgafv=None)
Returns the list of all conversation datasets in the specified project and location.

Args:
  parent: string, Required. The project and location name to list all conversation datasets for. Format: `projects//locations/` (required)
  pageSize: integer, Optional. Maximum number of conversation datasets to return in a single page. By default 100 and at most 1000.
  pageToken: string, Optional. The next_page_token value returned from a previous list request.
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # The response message for ConversationDatasets.ListConversationDatasets.
      "conversationDatasets": [ # The list of datasets to return.
        { # Represents a conversation dataset that a user imports raw data into. The data inside ConversationDataset can not be changed after ImportConversationData finishes (and calling ImportConversationData on a dataset that already has data is not allowed).
          "conversationCount": "A String", # Output only. The number of conversations this conversation dataset contains.
          "conversationInfo": { # Represents metadata of a conversation. # Output only. Metadata set during conversation data import.
            "languageCode": "A String", # Optional. The language code of the conversation data within this dataset. See https://cloud.google.com/apis/design/standard_fields for more information. Supports all UTF-8 languages.
          },
          "createTime": "A String", # Output only. Creation time of this dataset.
          "description": "A String", # Optional. The description of the dataset. Maximum of 10000 bytes.
          "displayName": "A String", # Required. The display name of the dataset. Maximum of 64 bytes.
          "inputConfig": { # Represents the configuration of importing a set of conversation files in Google Cloud Storage. # Output only. Input configurations set during conversation data import.
            "gcsSource": { # Google Cloud Storage location for the inputs. # The Cloud Storage URI has the form gs:////agent*.json. Wildcards are allowed and will be expanded into all matched JSON files, which will be read as one conversation per file.
              "uris": [ # Required. Google Cloud Storage URIs for the inputs. A URI is of the form: `gs://bucket/object-prefix-or-name` Whether a prefix or name is used depends on the use case.
                "A String",
              ],
            },
          },
          "name": "A String", # Output only. ConversationDataset resource name. Format: `projects//locations//conversationDatasets/`
          "satisfiesPzi": True or False, # Output only. A read only boolean field reflecting Zone Isolation status of the dataset.
          "satisfiesPzs": True or False, # Output only. A read only boolean field reflecting Zone Separation status of the dataset.
        },
      ],
      "nextPageToken": "A String", # The token to use to retrieve the next page of results, or empty if there are no more results in the list.
    }
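A minimal single-page usage sketch for `list()`. The project and location IDs are placeholders, and the commented-out call assumes a built, authenticated `service` object:

```python
# Hypothetical IDs; substitute your own project and location.
project_id = "my-project"
location = "global"

# The `parent` argument follows the format shown above.
parent = f"projects/{project_id}/locations/{location}"

# With a built service object (requires credentials, so shown commented out):
# response = service.projects().locations().conversationDatasets().list(
#     parent=parent, pageSize=50).execute()
# for ds in response.get("conversationDatasets", []):
#     print(ds["name"], ds.get("displayName"))
```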
list_next()
Retrieves the next page of results.

Args:
  previous_request: The request for the previous page. (required)
  previous_response: The response from the request for the previous page. (required)

Returns:
  A request object that you can call 'execute()' on to request the next page. Returns None if there are no more items in the collection.
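The `list()`/`list_next()` pair supports the standard pagination loop sketched below. The only assumption beyond the documented methods is that `datasets` stands in for the collection object returned by `service.projects().locations().conversationDatasets()`:

```python
def iter_conversation_datasets(datasets, parent, page_size=100):
    """Yield every conversation dataset across all pages.

    `datasets` is the conversationDatasets collection object;
    `parent` has the form `projects//locations/` as documented above.
    """
    request = datasets.list(parent=parent, pageSize=page_size)
    while request is not None:
        response = request.execute()
        for dataset in response.get("conversationDatasets", []):
            yield dataset
        # list_next() returns None once there are no more pages.
        request = datasets.list_next(request, response)
```

Because the loop keys off `list_next()` returning `None`, it never inspects `nextPageToken` directly and works for any page size.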