Dialogflow API . projects . locations . conversations . messages

Instance Methods

batchCreate(parent, body=None, x__xgafv=None)

Batch ingests messages into a conversation. Customers can use this RPC to ingest historical messages into a conversation.

close()

Close httplib2 connections.

list(parent, filter=None, pageSize=None, pageToken=None, x__xgafv=None)

Lists messages that belong to a given conversation. `messages` are ordered by `create_time` in descending order. To fetch updates without duplication, send a request with the filter `create_time_epoch_microseconds > [first item's create_time of previous request]` and an empty page_token.

list_next()

Retrieves the next page of results.
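
The method sketches on this page use the google-api-python-client discovery interface. The setup below is a minimal sketch, assuming Application Default Credentials are configured; PROJECT_ID, LOCATION_ID, and CONVERSATION_ID are placeholders, not real resource names.

  from googleapiclient.discovery import build

  # Credentials are resolved via Application Default Credentials
  # (e.g. the GOOGLE_APPLICATION_CREDENTIALS environment variable).
  service = build("dialogflow", "v2")

  # Resource handle for the methods documented on this page.
  messages = service.projects().locations().conversations().messages()

  parent = "projects/PROJECT_ID/locations/LOCATION_ID/conversations/CONVERSATION_ID"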

Method Details

batchCreate(parent, body=None, x__xgafv=None)
Batch ingests messages into a conversation. Customers can use this RPC to ingest historical messages into a conversation.

Args:
  parent: string, Required. Resource identifier of the conversation in which to create the message. Format: `projects//locations//conversations/`. (required)
  body: object, The request body.
    The object takes the form of:

{ # The request message for Conversations.BatchCreateMessages.
  "requests": [ # Required. A maximum of 300 messages can be created in a batch. CreateMessageRequest.message.send_time is required. All created messages will have identical Message.create_time.
    { # The request message to create one Message. Currently it is only used in BatchCreateMessagesRequest.
      "message": { # Represents a message posted into a conversation. # Required. The message to create. Message.participant is required.
        "content": "A String", # Required. The message content.
        "createTime": "A String", # Output only. The time when the message was created in Contact Center AI.
        "languageCode": "A String", # Optional. The message language. This should be a [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag. Example: "en-US".
        "messageAnnotation": { # Represents the result of annotation for the message. # Output only. The annotation for the message.
          "containEntities": True or False, # Required. Indicates whether the text message contains entities.
          "parts": [ # Optional. The collection of annotated message parts ordered by their position in the message. You can recover the annotated message by concatenating [AnnotatedMessagePart.text].
            { # Represents a part of a message possibly annotated with an entity. The part can be an entity or purely a part of the message between two entities or message start/end.
              "entityType": "A String", # Optional. The [Dialogflow system entity type](https://cloud.google.com/dialogflow/docs/reference/system-entities) of this message part. If this is empty, Dialogflow could not annotate the phrase part with a system entity.
              "formattedValue": "", # Optional. The [Dialogflow system entity formatted value ](https://cloud.google.com/dialogflow/docs/reference/system-entities) of this message part. For example for a system entity of type `@sys.unit-currency`, this may contain: { "amount": 5, "currency": "USD" }
              "text": "A String", # Required. A part of a message possibly annotated with an entity.
            },
          ],
        },
        "name": "A String", # Optional. The unique identifier of the message. Format: `projects//locations//conversations//messages/`.
        "participant": "A String", # Output only. The participant that sends this message.
        "participantRole": "A String", # Output only. The role of the participant.
        "responseMessages": [ # Optional. Automated agent responses.
          { # Response messages from an automated agent.
            "endInteraction": { # Indicates that interaction with the Dialogflow agent has ended. # A signal that indicates the interaction with the Dialogflow agent has ended.
            },
            "liveAgentHandoff": { # Indicates that the conversation should be handed off to a human agent. Dialogflow only uses this to determine which conversations were handed off to a human agent for measurement purposes. What else to do with this signal is up to you and your handoff procedures. You may set this, for example: * In the entry fulfillment of a CX Page if entering the page indicates something went extremely wrong in the conversation. * In a webhook response when you determine that the customer issue can only be handled by a human. # Hands off conversation to a live agent.
              "metadata": { # Custom metadata for your handoff procedure. Dialogflow doesn't impose any structure on this.
                "a_key": "", # Properties of the object.
              },
            },
            "mixedAudio": { # Represents an audio message that is composed of both segments synthesized from the Dialogflow agent prompts and ones hosted externally at the specified URIs. # An audio response message composed of both the synthesized Dialogflow agent responses and the audios hosted in places known to the client.
              "segments": [ # Segments this audio response is composed of.
                { # Represents one segment of audio.
                  "allowPlaybackInterruption": True or False, # Whether the playback of this segment can be interrupted by the end user's speech and the client should then start the next Dialogflow request.
                  "audio": "A String", # Raw audio synthesized from the Dialogflow agent's response using the output config specified in the request.
                  "uri": "A String", # Client-specific URI that points to an audio clip accessible to the client.
                },
              ],
            },
            "payload": { # Returns a response containing a custom, platform-specific payload.
              "a_key": "", # Properties of the object.
            },
            "telephonyTransferCall": { # Represents the signal that telles the client to transfer the phone call connected to the agent to a third-party endpoint. # A signal that the client should transfer the phone call connected to this agent to a third-party endpoint.
              "phoneNumber": "A String", # Transfer the call to a phone number in [E.164 format](https://en.wikipedia.org/wiki/E.164).
              "sipUri": "A String", # Transfer the call to a SIP endpoint.
            },
            "text": { # The text response message. # Returns a text response.
              "text": [ # A collection of text response variants. If multiple variants are defined, only one text response variant is returned at runtime.
                "A String",
              ],
            },
          },
        ],
        "sendTime": "A String", # Optional. The time when the message was sent.
        "sentimentAnalysis": { # The result of sentiment analysis. Sentiment analysis inspects user input and identifies the prevailing subjective opinion, especially to determine a user's attitude as positive, negative, or neutral. For Participants.DetectIntent, it needs to be configured in DetectIntentRequest.query_params. For Participants.StreamingDetectIntent, it needs to be configured in StreamingDetectIntentRequest.query_params. And for Participants.AnalyzeContent and Participants.StreamingAnalyzeContent, it needs to be configured in ConversationProfile.human_agent_assistant_config # Output only. The sentiment analysis result for the message.
          "queryTextSentiment": { # The sentiment, such as positive/negative feeling or association, for a unit of analysis, such as the query text. See: https://cloud.google.com/natural-language/docs/basics#interpreting_sentiment_analysis_values for how to interpret the result. # The sentiment analysis result for `query_text`.
            "magnitude": 3.14, # A non-negative number in the [0, +inf) range, which represents the absolute magnitude of sentiment, regardless of score (positive or negative).
            "score": 3.14, # Sentiment score between -1.0 (negative sentiment) and 1.0 (positive sentiment).
          },
        },
      },
      "parent": "A String", # Required. Resource identifier of the conversation to create message. Format: `projects//locations//conversations/`.
    },
  ],
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # The response message for Conversations.BatchCreateMessages.
  "messages": [ # Messages created.
    { # Represents a message posted into a conversation.
      "content": "A String", # Required. The message content.
      "createTime": "A String", # Output only. The time when the message was created in Contact Center AI.
      "languageCode": "A String", # Optional. The message language. This should be a [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag. Example: "en-US".
      "messageAnnotation": { # Represents the result of annotation for the message. # Output only. The annotation for the message.
        "containEntities": True or False, # Required. Indicates whether the text message contains entities.
        "parts": [ # Optional. The collection of annotated message parts ordered by their position in the message. You can recover the annotated message by concatenating [AnnotatedMessagePart.text].
          { # Represents a part of a message possibly annotated with an entity. The part can be an entity or purely a part of the message between two entities or message start/end.
            "entityType": "A String", # Optional. The [Dialogflow system entity type](https://cloud.google.com/dialogflow/docs/reference/system-entities) of this message part. If this is empty, Dialogflow could not annotate the phrase part with a system entity.
            "formattedValue": "", # Optional. The [Dialogflow system entity formatted value ](https://cloud.google.com/dialogflow/docs/reference/system-entities) of this message part. For example for a system entity of type `@sys.unit-currency`, this may contain: { "amount": 5, "currency": "USD" }
            "text": "A String", # Required. A part of a message possibly annotated with an entity.
          },
        ],
      },
      "name": "A String", # Optional. The unique identifier of the message. Format: `projects//locations//conversations//messages/`.
      "participant": "A String", # Output only. The participant that sends this message.
      "participantRole": "A String", # Output only. The role of the participant.
      "responseMessages": [ # Optional. Automated agent responses.
        { # Response messages from an automated agent.
          "endInteraction": { # Indicates that interaction with the Dialogflow agent has ended. # A signal that indicates the interaction with the Dialogflow agent has ended.
          },
          "liveAgentHandoff": { # Indicates that the conversation should be handed off to a human agent. Dialogflow only uses this to determine which conversations were handed off to a human agent for measurement purposes. What else to do with this signal is up to you and your handoff procedures. You may set this, for example: * In the entry fulfillment of a CX Page if entering the page indicates something went extremely wrong in the conversation. * In a webhook response when you determine that the customer issue can only be handled by a human. # Hands off conversation to a live agent.
            "metadata": { # Custom metadata for your handoff procedure. Dialogflow doesn't impose any structure on this.
              "a_key": "", # Properties of the object.
            },
          },
          "mixedAudio": { # Represents an audio message that is composed of both segments synthesized from the Dialogflow agent prompts and ones hosted externally at the specified URIs. # An audio response message composed of both the synthesized Dialogflow agent responses and the audios hosted in places known to the client.
            "segments": [ # Segments this audio response is composed of.
              { # Represents one segment of audio.
                "allowPlaybackInterruption": True or False, # Whether the playback of this segment can be interrupted by the end user's speech and the client should then start the next Dialogflow request.
                "audio": "A String", # Raw audio synthesized from the Dialogflow agent's response using the output config specified in the request.
                "uri": "A String", # Client-specific URI that points to an audio clip accessible to the client.
              },
            ],
          },
          "payload": { # Returns a response containing a custom, platform-specific payload.
            "a_key": "", # Properties of the object.
          },
          "telephonyTransferCall": { # Represents the signal that telles the client to transfer the phone call connected to the agent to a third-party endpoint. # A signal that the client should transfer the phone call connected to this agent to a third-party endpoint.
            "phoneNumber": "A String", # Transfer the call to a phone number in [E.164 format](https://en.wikipedia.org/wiki/E.164).
            "sipUri": "A String", # Transfer the call to a SIP endpoint.
          },
          "text": { # The text response message. # Returns a text response.
            "text": [ # A collection of text response variants. If multiple variants are defined, only one text response variant is returned at runtime.
              "A String",
            ],
          },
        },
      ],
      "sendTime": "A String", # Optional. The time when the message was sent.
      "sentimentAnalysis": { # The result of sentiment analysis. Sentiment analysis inspects user input and identifies the prevailing subjective opinion, especially to determine a user's attitude as positive, negative, or neutral. For Participants.DetectIntent, it needs to be configured in DetectIntentRequest.query_params. For Participants.StreamingDetectIntent, it needs to be configured in StreamingDetectIntentRequest.query_params. And for Participants.AnalyzeContent and Participants.StreamingAnalyzeContent, it needs to be configured in ConversationProfile.human_agent_assistant_config # Output only. The sentiment analysis result for the message.
        "queryTextSentiment": { # The sentiment, such as positive/negative feeling or association, for a unit of analysis, such as the query text. See: https://cloud.google.com/natural-language/docs/basics#interpreting_sentiment_analysis_values for how to interpret the result. # The sentiment analysis result for `query_text`.
          "magnitude": 3.14, # A non-negative number in the [0, +inf) range, which represents the absolute magnitude of sentiment, regardless of score (positive or negative).
          "score": 3.14, # Sentiment score between -1.0 (negative sentiment) and 1.0 (positive sentiment).
        },
      },
    },
  ],
}
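
A minimal sketch of a batchCreate call, reusing `messages` and `parent` from the setup sketch near the top of this page; the content and timestamps are illustrative, and PARTICIPANT_ID is a placeholder. Per the request schema above, each message needs `sendTime` and `participant` set.

  body = {
      "requests": [
          {
              "parent": parent,
              "message": {
                  "content": "Hi, I need help with my order.",
                  # Required by this RPC even though the field is listed as
                  # output only in the message schema.
                  "participant": parent + "/participants/PARTICIPANT_ID",
                  "sendTime": "2024-01-15T01:30:15.010Z",  # required
                  "languageCode": "en-US",
              },
          },
      ],
  }

  response = messages.batchCreate(parent=parent, body=body).execute()
  for msg in response.get("messages", []):
      print(msg["name"], msg.get("createTime"))
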
close()
Close httplib2 connections.

list(parent, filter=None, pageSize=None, pageToken=None, x__xgafv=None)
Lists messages that belong to a given conversation. `messages` are ordered by `create_time` in descending order. To fetch updates without duplication, send a request with the filter `create_time_epoch_microseconds > [first item's create_time of previous request]` and an empty page_token.

Args:
  parent: string, Required. The name of the conversation to list messages for. Format: `projects//locations//conversations/` (required)
  filter: string, Optional. Filter on message fields. Currently, predicates on `create_time` and `create_time_epoch_microseconds` are supported. `create_time` only supports millisecond accuracy. E.g., `create_time_epoch_microseconds > 1551790877964485` or `create_time > "2017-01-15T01:30:15.01Z"`. For more information about filtering, see [API Filtering](https://aip.dev/160).
  pageSize: integer, Optional. The maximum number of items to return in a single page. By default 100 and at most 1000.
  pageToken: string, Optional. The next_page_token value returned from a previous list request.
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # The response message for Conversations.ListMessages.
  "messages": [ # Required. The list of messages. There will be a maximum number of items returned based on the page_size field in the request. `messages` is sorted by `create_time` in descending order.
    { # Represents a message posted into a conversation.
      "content": "A String", # Required. The message content.
      "createTime": "A String", # Output only. The time when the message was created in Contact Center AI.
      "languageCode": "A String", # Optional. The message language. This should be a [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag. Example: "en-US".
      "messageAnnotation": { # Represents the result of annotation for the message. # Output only. The annotation for the message.
        "containEntities": True or False, # Required. Indicates whether the text message contains entities.
        "parts": [ # Optional. The collection of annotated message parts ordered by their position in the message. You can recover the annotated message by concatenating [AnnotatedMessagePart.text].
          { # Represents a part of a message possibly annotated with an entity. The part can be an entity or purely a part of the message between two entities or message start/end.
            "entityType": "A String", # Optional. The [Dialogflow system entity type](https://cloud.google.com/dialogflow/docs/reference/system-entities) of this message part. If this is empty, Dialogflow could not annotate the phrase part with a system entity.
            "formattedValue": "", # Optional. The [Dialogflow system entity formatted value ](https://cloud.google.com/dialogflow/docs/reference/system-entities) of this message part. For example for a system entity of type `@sys.unit-currency`, this may contain: { "amount": 5, "currency": "USD" }
            "text": "A String", # Required. A part of a message possibly annotated with an entity.
          },
        ],
      },
      "name": "A String", # Optional. The unique identifier of the message. Format: `projects//locations//conversations//messages/`.
      "participant": "A String", # Output only. The participant that sends this message.
      "participantRole": "A String", # Output only. The role of the participant.
      "responseMessages": [ # Optional. Automated agent responses.
        { # Response messages from an automated agent.
          "endInteraction": { # Indicates that interaction with the Dialogflow agent has ended. # A signal that indicates the interaction with the Dialogflow agent has ended.
          },
          "liveAgentHandoff": { # Indicates that the conversation should be handed off to a human agent. Dialogflow only uses this to determine which conversations were handed off to a human agent for measurement purposes. What else to do with this signal is up to you and your handoff procedures. You may set this, for example: * In the entry fulfillment of a CX Page if entering the page indicates something went extremely wrong in the conversation. * In a webhook response when you determine that the customer issue can only be handled by a human. # Hands off conversation to a live agent.
            "metadata": { # Custom metadata for your handoff procedure. Dialogflow doesn't impose any structure on this.
              "a_key": "", # Properties of the object.
            },
          },
          "mixedAudio": { # Represents an audio message that is composed of both segments synthesized from the Dialogflow agent prompts and ones hosted externally at the specified URIs. # An audio response message composed of both the synthesized Dialogflow agent responses and the audios hosted in places known to the client.
            "segments": [ # Segments this audio response is composed of.
              { # Represents one segment of audio.
                "allowPlaybackInterruption": True or False, # Whether the playback of this segment can be interrupted by the end user's speech and the client should then start the next Dialogflow request.
                "audio": "A String", # Raw audio synthesized from the Dialogflow agent's response using the output config specified in the request.
                "uri": "A String", # Client-specific URI that points to an audio clip accessible to the client.
              },
            ],
          },
          "payload": { # Returns a response containing a custom, platform-specific payload.
            "a_key": "", # Properties of the object.
          },
          "telephonyTransferCall": { # Represents the signal that telles the client to transfer the phone call connected to the agent to a third-party endpoint. # A signal that the client should transfer the phone call connected to this agent to a third-party endpoint.
            "phoneNumber": "A String", # Transfer the call to a phone number in [E.164 format](https://en.wikipedia.org/wiki/E.164).
            "sipUri": "A String", # Transfer the call to a SIP endpoint.
          },
          "text": { # The text response message. # Returns a text response.
            "text": [ # A collection of text response variants. If multiple variants are defined, only one text response variant is returned at runtime.
              "A String",
            ],
          },
        },
      ],
      "sendTime": "A String", # Optional. The time when the message was sent.
      "sentimentAnalysis": { # The result of sentiment analysis. Sentiment analysis inspects user input and identifies the prevailing subjective opinion, especially to determine a user's attitude as positive, negative, or neutral. For Participants.DetectIntent, it needs to be configured in DetectIntentRequest.query_params. For Participants.StreamingDetectIntent, it needs to be configured in StreamingDetectIntentRequest.query_params. And for Participants.AnalyzeContent and Participants.StreamingAnalyzeContent, it needs to be configured in ConversationProfile.human_agent_assistant_config # Output only. The sentiment analysis result for the message.
        "queryTextSentiment": { # The sentiment, such as positive/negative feeling or association, for a unit of analysis, such as the query text. See: https://cloud.google.com/natural-language/docs/basics#interpreting_sentiment_analysis_values for how to interpret the result. # The sentiment analysis result for `query_text`.
          "magnitude": 3.14, # A non-negative number in the [0, +inf) range, which represents the absolute magnitude of sentiment, regardless of score (positive or negative).
          "score": 3.14, # Sentiment score between -1.0 (negative sentiment) and 1.0 (positive sentiment).
        },
      },
    },
  ],
  "nextPageToken": "A String", # Optional. Token to retrieve the next page of results, or empty if there are no more results in the list.
}
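
A sketch of the incremental-fetch pattern described above, reusing `messages` and `parent` from the setup sketch; the epoch-microseconds literal is the example value from the filter documentation, standing in for the first item's create_time of the previous response.

  # Initial fetch: messages are returned newest first.
  response = messages.list(parent=parent, pageSize=100).execute()

  # Later, fetch only updates: filter on the previous first item's
  # create_time (in epoch microseconds) and send no page token.
  updates = messages.list(
      parent=parent,
      filter="create_time_epoch_microseconds > 1551790877964485",
  ).execute()
  for msg in updates.get("messages", []):
      print(msg.get("createTime"), msg.get("content"))
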
list_next()
Retrieves the next page of results.

Args:
  previous_request: The request for the previous page. (required)
  previous_response: The response from the request for the previous page. (required)

Returns:
  A request object that you can call 'execute()' on to request the next page. Returns None if there are no more items in the collection.
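
A sketch of draining every page with list_next, reusing the `messages` resource and `parent` from the setup sketch above; list_next returns None once the collection is exhausted, which terminates the loop.

  request = messages.list(parent=parent, pageSize=100)
  while request is not None:
      response = request.execute()
      for msg in response.get("messages", []):
          print(msg["name"])
      request = messages.list_next(previous_request=request,
                                   previous_response=response)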