Returns the examples Resource.
Returns the versions Resource.
Close httplib2 connections.
create(parent, body=None, x__xgafv=None)
Creates a playbook in a specified agent.
Deletes a specified playbook.
export(name, body=None, x__xgafv=None)
Exports the specified playbook to a binary file. Note that resources (e.g. examples, tools) that the playbook references will also be exported.
Retrieves the specified Playbook.
import_(parent, body=None, x__xgafv=None)
Imports the specified playbook to the specified agent from a binary file.
list(parent, pageSize=None, pageToken=None, x__xgafv=None)
Returns a list of playbooks in the specified agent.
Retrieves the next page of results.
patch(name, body=None, updateMask=None, x__xgafv=None)
Updates the specified Playbook.
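A minimal usage sketch (not part of the generated reference) showing how this resource is typically reached through the google-api-python-client discovery interface. The project, location, and agent IDs below are placeholders, and application-default credentials are assumed:

  from googleapiclient.discovery import build

  # Build the Dialogflow v3beta1 service and navigate to the playbooks resource.
  service = build("dialogflow", "v3beta1")
  playbooks = service.projects().locations().agents().playbooks()

  # List playbooks under a (hypothetical) agent, following pagination.
  parent = "projects/my-project/locations/global/agents/my-agent-id"
  request = playbooks.list(parent=parent, pageSize=50)
  while request is not None:
      response = request.execute()
      for playbook in response.get("playbooks", []):
          print(playbook["name"], playbook.get("displayName"))
      request = playbooks.list_next(previous_request=request, previous_response=response)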
close()
Close httplib2 connections.
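Before the full create() documentation that follows, a small illustrative sketch of a create() request body, reusing the playbooks resource and parent from the sketch above. Field names come from the Playbook schema documented below; the only top-level request fields marked Required there are displayName and goal, and the instruction steps shown here are placeholder text:

  # Illustrative only: a minimal Playbook with a goal and two plain-text steps.
  body = {
      "displayName": "Order status lookup",
      "goal": "Help the user check the status of an existing order.",
      "instruction": {
          "steps": [
              {"text": "Ask the user for their order number."},
              {"text": "Look up the order and summarize its current status."},
          ],
      },
  }
  playbook = playbooks.create(parent=parent, body=body).execute()
  print(playbook["name"])  # Server-assigned resource name of the new playbook.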
create(parent, body=None, x__xgafv=None)
Creates a playbook in a specified agent.

Args:
  parent: string, Required. The agent to create a playbook for. Format: `projects//locations//agents/`. (required)
  body: object, The request body.
    The object takes the form of:

{ # Playbook is the basic building block to instruct the LLM how to execute a certain task. A playbook consists of a goal to accomplish, an optional list of step by step instructions (the step instruction may refer to names of the custom or default plugin tools to use) to perform the task, a list of contextual input data to be passed in at the beginning of the invocation, and a list of output parameters to store the playbook result. "createTime": "A String", # Output only. The timestamp of initial playbook creation. "displayName": "A String", # Required. The human-readable name of the playbook, unique within an agent. "goal": "A String", # Required. High level description of the goal the playbook intends to accomplish. A goal should be concise since it's visible to other playbooks that may reference this playbook. "handlers": [ # Optional. A list of registered handlers to execute based on the specified triggers. { # Handler can be used to define custom logic to be executed based on the user-specified triggers. "eventHandler": { # A handler that is triggered by the specified event. # A handler triggered by event. "condition": "A String", # Optional. The condition that must be satisfied to trigger this handler. "event": "A String", # Required. The name of the event that triggers this handler. "fulfillment": { # A fulfillment can do one or more of the following actions at the same time: * Generate rich message responses. * Set parameter values. * Call the webhook. Fulfillments can be called at various stages in the Page or Form lifecycle. For example, when a DetectIntentRequest drives a session to enter a new page, the page's entry fulfillment can add a static response to the QueryResult in the returning DetectIntentResponse, call the webhook (for example, to load user data from a database), or both. # Required. The fulfillment to call when the event occurs. "advancedSettings": { # Hierarchical advanced settings for agent/flow/page/fulfillment/parameter. Settings exposed at lower level overrides the settings exposed at higher level. Overriding occurs at the sub-setting level. For example, the playback_interruption_settings at fulfillment level only overrides the playback_interruption_settings at the agent level, leaving other settings at the agent level unchanged. DTMF settings does not override each other. DTMF settings set at different levels define DTMF detections running in parallel. Hierarchy: Agent->Flow->Page->Fulfillment/Parameter. # Hierarchical advanced settings for this fulfillment. The settings exposed at the lower level overrides the settings exposed at the higher level. "audioExportGcsDestination": { # Google Cloud Storage location for a Dialogflow operation that writes or exports objects (e.g. exported agent or transcripts) outside of Dialogflow. # If present, incoming audio is exported by Dialogflow to the configured Google Cloud Storage destination. Exposed at the following levels: - Agent level - Flow level "uri": "A String", # Required. The Google Cloud Storage URI for the exported objects. A URI is of the form: `gs://bucket/object-name-or-prefix` Whether a full object name, or just a prefix, its usage depends on the Dialogflow operation. }, "dtmfSettings": { # Define behaviors for DTMF (dual tone multi frequency). # Settings for DTMF.
Exposed at the following levels: - Agent level - Flow level - Page level - Parameter level. "enabled": True or False, # If true, incoming audio is processed for DTMF (dual tone multi frequency) events. For example, if the caller presses a button on their telephone keypad and DTMF processing is enabled, Dialogflow will detect the event (e.g. a "3" was pressed) in the incoming audio and pass the event to the bot to drive business logic (e.g. when 3 is pressed, return the account balance). "endpointingTimeoutDuration": "A String", # Endpoint timeout setting for matching dtmf input to regex. "finishDigit": "A String", # The digit that terminates a DTMF digit sequence. "interdigitTimeoutDuration": "A String", # Interdigit timeout setting for matching dtmf input to regex. "maxDigits": 42, # Max length of DTMF digits. }, "loggingSettings": { # Define behaviors on logging. # Settings for logging. Settings for Dialogflow History, Contact Center messages, StackDriver logs, and speech logging. Exposed at the following levels: - Agent level. "enableConsentBasedRedaction": True or False, # Enables consent-based end-user input redaction, if true, a pre-defined session parameter `$session.params.conversation-redaction` will be used to determine if the utterance should be redacted. "enableInteractionLogging": True or False, # Enables DF Interaction logging. "enableStackdriverLogging": True or False, # Enables Google Cloud Logging. }, "speechSettings": { # Define behaviors of speech to text detection. # Settings for speech to text detection. Exposed at the following levels: - Agent level - Flow level - Page level - Parameter level "endpointerSensitivity": 42, # Sensitivity of the speech model that detects the end of speech. Scale from 0 to 100. "models": { # Mapping from language to Speech-to-Text model. The mapped Speech-to-Text model will be selected for requests from its corresponding language. For more information, see [Speech models](https://cloud.google.com/dialogflow/cx/docs/concept/speech-models). "a_key": "A String", }, "noSpeechTimeout": "A String", # Timeout before detecting no speech. "useTimeoutBasedEndpointing": True or False, # Use timeout based endpointing, interpreting endpointer sensitivity as seconds of timeout value. }, }, "conditionalCases": [ # Conditional cases for this fulfillment. { # A list of cascading if-else conditions. Cases are mutually exclusive. The first one with a matching condition is selected, all the rest ignored. "cases": [ # A list of cascading if-else conditions. { # Each case has a Boolean condition. When it is evaluated to be True, the corresponding messages will be selected and evaluated recursively. "caseContent": [ # A list of case content. { # The list of messages or conditional cases to activate for this case. "additionalCases": # Object with schema name: GoogleCloudDialogflowCxV3beta1FulfillmentConditionalCases # Additional cases to be evaluated. "message": { # Represents a response message that can be returned by a conversational agent. Response messages are also used for output audio synthesis. The approach is as follows: * If at least one OutputAudioText response is present, then all OutputAudioText responses are linearly concatenated, and the result is used for output audio synthesis. * If the OutputAudioText responses are a mixture of text and SSML, then the concatenated result is treated as SSML; otherwise, the result is treated as either text or SSML as appropriate. 
The agent designer should ideally use either text or SSML consistently throughout the bot design. * Otherwise, all Text responses are linearly concatenated, and the result is used for output audio synthesis. This approach allows for more sophisticated user experience scenarios, where the text displayed to the user may differ from what is heard. # Returned message. "channel": "A String", # The channel which the response is associated with. Clients can specify the channel via QueryParameters.channel, and only associated channel response will be returned. "conversationSuccess": { # Indicates that the conversation succeeded, i.e., the bot handled the issue that the customer talked to it about. Dialogflow only uses this to determine which conversations should be counted as successful and doesn't process the metadata in this message in any way. Note that Dialogflow also considers conversations that get to the conversation end page as successful even if they don't return ConversationSuccess. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates that the conversation succeeded. * In a webhook response when you determine that you handled the customer issue. # Indicates that the conversation succeeded. "metadata": { # Custom metadata. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. }, }, "endInteraction": { # Indicates that interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. A signal that indicates the interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only when the conversation reaches `END_SESSION` page. It is not supposed to be defined by the user. It's guaranteed that there is at most one such message in each response. }, "knowledgeInfoCard": { # Represents info card response. If the response contains generative knowledge prediction, Dialogflow will return a payload with Infobot Messenger compatible info card. Otherwise, the info card response is skipped. # Represents info card for knowledge answers, to be better rendered in Dialogflow Messenger. }, "liveAgentHandoff": { # Indicates that the conversation should be handed off to a live agent. Dialogflow only uses this to determine which conversations were handed off to a human agent for measurement purposes. What else to do with this signal is up to you and your handoff procedures. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates something went extremely wrong in the conversation. * In a webhook response when you determine that the customer issue can only be handled by a human. # Hands off conversation to a human agent. "metadata": { # Custom metadata for your handoff procedure. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. }, }, "mixedAudio": { # Represents an audio message that is composed of both segments synthesized from the Dialogflow agent prompts and ones hosted externally at the specified URIs. The external URIs are specified via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. An audio response message composed of both the synthesized Dialogflow agent responses and responses defined via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. "segments": [ # Segments this audio response is composed of. 
{ # Represents one segment of audio. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this segment can be interrupted by the end user's speech and the client should then start the next Dialogflow request. "audio": "A String", # Raw audio synthesized from the Dialogflow agent's response using the output config specified in the request. "uri": "A String", # Client-specific URI that points to an audio clip accessible to the client. Dialogflow does not impose any validation on it. }, ], }, "outputAudioText": { # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "ssml": "A String", # The SSML text to be synthesized. For more information, see [SSML](/speech/text-to-speech/docs/ssml). "text": "A String", # The raw text to be synthesized. }, "payload": { # Returns a response containing a custom, platform-specific payload. "a_key": "", # Properties of the object. }, "playAudio": { # Specifies an audio clip to be played by the client as part of the response. # Signal that the client should play an audio clip hosted at a client-specific URI. Dialogflow uses this to construct mixed_audio. However, Dialogflow itself does not try to read or process the URI in any way. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "audioUri": "A String", # Required. URI of the audio clip. Dialogflow does not impose any validation on this value. It is specific to the client that reads it. }, "telephonyTransferCall": { # Represents the signal that telles the client to transfer the phone call connected to the agent to a third-party endpoint. # A signal that the client should transfer the phone call connected to this agent to a third-party endpoint. "phoneNumber": "A String", # Transfer the call to a phone number in [E.164 format](https://en.wikipedia.org/wiki/E.164). }, "text": { # The text response message. # Returns a text response. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "text": [ # Required. A collection of text response variants. If multiple variants are defined, only one text response variant is returned at runtime. "A String", ], }, "toolCall": { # Represents a call of a specific tool's action with the specified inputs. # Returns the definition of a tool call that should be executed by the client. "action": "A String", # Required. The name of the tool's action associated with this call. "inputParameters": { # Optional. The action's input parameters. "a_key": "", # Properties of the object. }, "tool": "A String", # Required. The tool associated with this call. Format: `projects//locations//agents//tools/`. }, }, }, ], "condition": "A String", # The condition to activate and select this case. Empty means the condition is always true. The condition is evaluated against form parameters or session parameters. 
See the [conditions reference](https://cloud.google.com/dialogflow/cx/docs/reference/condition). }, ], }, ], "enableGenerativeFallback": True or False, # If the flag is true, the agent will utilize LLM to generate a text response. If LLM generation fails, the defined responses in the fulfillment will be respected. This flag is only useful for fulfillments associated with no-match event handlers. "generators": [ # A list of Generators to be called during this fulfillment. { # Generator settings used by the LLM to generate a text response. "generator": "A String", # Required. The generator to call. Format: `projects//locations//agents//generators/`. "inputParameters": { # Map from placeholder parameter in the Generator to corresponding session parameters. By default, Dialogflow uses the session parameter with the same name to fill in the generator template. e.g. If there is a placeholder parameter `city` in the Generator, Dialogflow default to fill in the `$city` with `$session.params.city`. However, you may choose to fill `$city` with `$session.params.desination-city`. - Map key: parameter ID - Map value: session parameter name "a_key": "A String", }, "outputParameter": "A String", # Required. Output parameter which should contain the generator response. }, ], "messages": [ # The list of rich message responses to present to the user. { # Represents a response message that can be returned by a conversational agent. Response messages are also used for output audio synthesis. The approach is as follows: * If at least one OutputAudioText response is present, then all OutputAudioText responses are linearly concatenated, and the result is used for output audio synthesis. * If the OutputAudioText responses are a mixture of text and SSML, then the concatenated result is treated as SSML; otherwise, the result is treated as either text or SSML as appropriate. The agent designer should ideally use either text or SSML consistently throughout the bot design. * Otherwise, all Text responses are linearly concatenated, and the result is used for output audio synthesis. This approach allows for more sophisticated user experience scenarios, where the text displayed to the user may differ from what is heard. "channel": "A String", # The channel which the response is associated with. Clients can specify the channel via QueryParameters.channel, and only associated channel response will be returned. "conversationSuccess": { # Indicates that the conversation succeeded, i.e., the bot handled the issue that the customer talked to it about. Dialogflow only uses this to determine which conversations should be counted as successful and doesn't process the metadata in this message in any way. Note that Dialogflow also considers conversations that get to the conversation end page as successful even if they don't return ConversationSuccess. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates that the conversation succeeded. * In a webhook response when you determine that you handled the customer issue. # Indicates that the conversation succeeded. "metadata": { # Custom metadata. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. }, }, "endInteraction": { # Indicates that interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. A signal that indicates the interaction with the Dialogflow agent has ended. 
This message is generated by Dialogflow only when the conversation reaches `END_SESSION` page. It is not supposed to be defined by the user. It's guaranteed that there is at most one such message in each response. }, "knowledgeInfoCard": { # Represents info card response. If the response contains generative knowledge prediction, Dialogflow will return a payload with Infobot Messenger compatible info card. Otherwise, the info card response is skipped. # Represents info card for knowledge answers, to be better rendered in Dialogflow Messenger. }, "liveAgentHandoff": { # Indicates that the conversation should be handed off to a live agent. Dialogflow only uses this to determine which conversations were handed off to a human agent for measurement purposes. What else to do with this signal is up to you and your handoff procedures. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates something went extremely wrong in the conversation. * In a webhook response when you determine that the customer issue can only be handled by a human. # Hands off conversation to a human agent. "metadata": { # Custom metadata for your handoff procedure. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. }, }, "mixedAudio": { # Represents an audio message that is composed of both segments synthesized from the Dialogflow agent prompts and ones hosted externally at the specified URIs. The external URIs are specified via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. An audio response message composed of both the synthesized Dialogflow agent responses and responses defined via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. "segments": [ # Segments this audio response is composed of. { # Represents one segment of audio. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this segment can be interrupted by the end user's speech and the client should then start the next Dialogflow request. "audio": "A String", # Raw audio synthesized from the Dialogflow agent's response using the output config specified in the request. "uri": "A String", # Client-specific URI that points to an audio clip accessible to the client. Dialogflow does not impose any validation on it. }, ], }, "outputAudioText": { # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "ssml": "A String", # The SSML text to be synthesized. For more information, see [SSML](/speech/text-to-speech/docs/ssml). "text": "A String", # The raw text to be synthesized. }, "payload": { # Returns a response containing a custom, platform-specific payload. "a_key": "", # Properties of the object. }, "playAudio": { # Specifies an audio clip to be played by the client as part of the response. # Signal that the client should play an audio clip hosted at a client-specific URI. Dialogflow uses this to construct mixed_audio. However, Dialogflow itself does not try to read or process the URI in any way. 
"allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "audioUri": "A String", # Required. URI of the audio clip. Dialogflow does not impose any validation on this value. It is specific to the client that reads it. }, "telephonyTransferCall": { # Represents the signal that telles the client to transfer the phone call connected to the agent to a third-party endpoint. # A signal that the client should transfer the phone call connected to this agent to a third-party endpoint. "phoneNumber": "A String", # Transfer the call to a phone number in [E.164 format](https://en.wikipedia.org/wiki/E.164). }, "text": { # The text response message. # Returns a text response. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "text": [ # Required. A collection of text response variants. If multiple variants are defined, only one text response variant is returned at runtime. "A String", ], }, "toolCall": { # Represents a call of a specific tool's action with the specified inputs. # Returns the definition of a tool call that should be executed by the client. "action": "A String", # Required. The name of the tool's action associated with this call. "inputParameters": { # Optional. The action's input parameters. "a_key": "", # Properties of the object. }, "tool": "A String", # Required. The tool associated with this call. Format: `projects//locations//agents//tools/`. }, }, ], "returnPartialResponses": True or False, # Whether Dialogflow should return currently queued fulfillment response messages in streaming APIs. If a webhook is specified, it happens before Dialogflow invokes webhook. Warning: 1) This flag only affects streaming API. Responses are still queued and returned once in non-streaming API. 2) The flag can be enabled in any fulfillment but only the first 3 partial responses will be returned. You may only want to apply it to fulfillments that have slow webhooks. "setParameterActions": [ # Set parameter values before executing the webhook. { # Setting a parameter value. "parameter": "A String", # Display name of the parameter. "value": "", # The new value of the parameter. A null value clears the parameter. }, ], "tag": "A String", # The value of this field will be populated in the WebhookRequest `fulfillmentInfo.tag` field by Dialogflow when the associated webhook is called. The tag is typically used by the webhook service to identify which fulfillment is being called, but it could be used for other purposes. This field is required if `webhook` is specified. "webhook": "A String", # The webhook to call. Format: `projects//locations//agents//webhooks/`. }, }, "lifecycleHandler": { # A handler that is triggered on the specific lifecycle_stage of the playbook execution. # A handler triggered during specific lifecycle of the playbook execution. "condition": "A String", # Optional. The condition that must be satisfied to trigger this handler. "fulfillment": { # A fulfillment can do one or more of the following actions at the same time: * Generate rich message responses. * Set parameter values. * Call the webhook. Fulfillments can be called at various stages in the Page or Form lifecycle. 
For example, when a DetectIntentRequest drives a session to enter a new page, the page's entry fulfillment can add a static response to the QueryResult in the returning DetectIntentResponse, call the webhook (for example, to load user data from a database), or both. # Required. The fulfillment to call when this handler is triggered. "advancedSettings": { # Hierarchical advanced settings for agent/flow/page/fulfillment/parameter. Settings exposed at lower level overrides the settings exposed at higher level. Overriding occurs at the sub-setting level. For example, the playback_interruption_settings at fulfillment level only overrides the playback_interruption_settings at the agent level, leaving other settings at the agent level unchanged. DTMF settings does not override each other. DTMF settings set at different levels define DTMF detections running in parallel. Hierarchy: Agent->Flow->Page->Fulfillment/Parameter. # Hierarchical advanced settings for this fulfillment. The settings exposed at the lower level overrides the settings exposed at the higher level. "audioExportGcsDestination": { # Google Cloud Storage location for a Dialogflow operation that writes or exports objects (e.g. exported agent or transcripts) outside of Dialogflow. # If present, incoming audio is exported by Dialogflow to the configured Google Cloud Storage destination. Exposed at the following levels: - Agent level - Flow level "uri": "A String", # Required. The Google Cloud Storage URI for the exported objects. A URI is of the form: `gs://bucket/object-name-or-prefix` Whether a full object name, or just a prefix, its usage depends on the Dialogflow operation. }, "dtmfSettings": { # Define behaviors for DTMF (dual tone multi frequency). # Settings for DTMF. Exposed at the following levels: - Agent level - Flow level - Page level - Parameter level. "enabled": True or False, # If true, incoming audio is processed for DTMF (dual tone multi frequency) events. For example, if the caller presses a button on their telephone keypad and DTMF processing is enabled, Dialogflow will detect the event (e.g. a "3" was pressed) in the incoming audio and pass the event to the bot to drive business logic (e.g. when 3 is pressed, return the account balance). "endpointingTimeoutDuration": "A String", # Endpoint timeout setting for matching dtmf input to regex. "finishDigit": "A String", # The digit that terminates a DTMF digit sequence. "interdigitTimeoutDuration": "A String", # Interdigit timeout setting for matching dtmf input to regex. "maxDigits": 42, # Max length of DTMF digits. }, "loggingSettings": { # Define behaviors on logging. # Settings for logging. Settings for Dialogflow History, Contact Center messages, StackDriver logs, and speech logging. Exposed at the following levels: - Agent level. "enableConsentBasedRedaction": True or False, # Enables consent-based end-user input redaction, if true, a pre-defined session parameter `$session.params.conversation-redaction` will be used to determine if the utterance should be redacted. "enableInteractionLogging": True or False, # Enables DF Interaction logging. "enableStackdriverLogging": True or False, # Enables Google Cloud Logging. }, "speechSettings": { # Define behaviors of speech to text detection. # Settings for speech to text detection. Exposed at the following levels: - Agent level - Flow level - Page level - Parameter level "endpointerSensitivity": 42, # Sensitivity of the speech model that detects the end of speech. Scale from 0 to 100. 
"models": { # Mapping from language to Speech-to-Text model. The mapped Speech-to-Text model will be selected for requests from its corresponding language. For more information, see [Speech models](https://cloud.google.com/dialogflow/cx/docs/concept/speech-models). "a_key": "A String", }, "noSpeechTimeout": "A String", # Timeout before detecting no speech. "useTimeoutBasedEndpointing": True or False, # Use timeout based endpointing, interpreting endpointer sensitivity as seconds of timeout value. }, }, "conditionalCases": [ # Conditional cases for this fulfillment. { # A list of cascading if-else conditions. Cases are mutually exclusive. The first one with a matching condition is selected, all the rest ignored. "cases": [ # A list of cascading if-else conditions. { # Each case has a Boolean condition. When it is evaluated to be True, the corresponding messages will be selected and evaluated recursively. "caseContent": [ # A list of case content. { # The list of messages or conditional cases to activate for this case. "additionalCases": # Object with schema name: GoogleCloudDialogflowCxV3beta1FulfillmentConditionalCases # Additional cases to be evaluated. "message": { # Represents a response message that can be returned by a conversational agent. Response messages are also used for output audio synthesis. The approach is as follows: * If at least one OutputAudioText response is present, then all OutputAudioText responses are linearly concatenated, and the result is used for output audio synthesis. * If the OutputAudioText responses are a mixture of text and SSML, then the concatenated result is treated as SSML; otherwise, the result is treated as either text or SSML as appropriate. The agent designer should ideally use either text or SSML consistently throughout the bot design. * Otherwise, all Text responses are linearly concatenated, and the result is used for output audio synthesis. This approach allows for more sophisticated user experience scenarios, where the text displayed to the user may differ from what is heard. # Returned message. "channel": "A String", # The channel which the response is associated with. Clients can specify the channel via QueryParameters.channel, and only associated channel response will be returned. "conversationSuccess": { # Indicates that the conversation succeeded, i.e., the bot handled the issue that the customer talked to it about. Dialogflow only uses this to determine which conversations should be counted as successful and doesn't process the metadata in this message in any way. Note that Dialogflow also considers conversations that get to the conversation end page as successful even if they don't return ConversationSuccess. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates that the conversation succeeded. * In a webhook response when you determine that you handled the customer issue. # Indicates that the conversation succeeded. "metadata": { # Custom metadata. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. }, }, "endInteraction": { # Indicates that interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. A signal that indicates the interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only when the conversation reaches `END_SESSION` page. It is not supposed to be defined by the user. 
It's guaranteed that there is at most one such message in each response. }, "knowledgeInfoCard": { # Represents info card response. If the response contains generative knowledge prediction, Dialogflow will return a payload with Infobot Messenger compatible info card. Otherwise, the info card response is skipped. # Represents info card for knowledge answers, to be better rendered in Dialogflow Messenger. }, "liveAgentHandoff": { # Indicates that the conversation should be handed off to a live agent. Dialogflow only uses this to determine which conversations were handed off to a human agent for measurement purposes. What else to do with this signal is up to you and your handoff procedures. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates something went extremely wrong in the conversation. * In a webhook response when you determine that the customer issue can only be handled by a human. # Hands off conversation to a human agent. "metadata": { # Custom metadata for your handoff procedure. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. }, }, "mixedAudio": { # Represents an audio message that is composed of both segments synthesized from the Dialogflow agent prompts and ones hosted externally at the specified URIs. The external URIs are specified via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. An audio response message composed of both the synthesized Dialogflow agent responses and responses defined via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. "segments": [ # Segments this audio response is composed of. { # Represents one segment of audio. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this segment can be interrupted by the end user's speech and the client should then start the next Dialogflow request. "audio": "A String", # Raw audio synthesized from the Dialogflow agent's response using the output config specified in the request. "uri": "A String", # Client-specific URI that points to an audio clip accessible to the client. Dialogflow does not impose any validation on it. }, ], }, "outputAudioText": { # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "ssml": "A String", # The SSML text to be synthesized. For more information, see [SSML](/speech/text-to-speech/docs/ssml). "text": "A String", # The raw text to be synthesized. }, "payload": { # Returns a response containing a custom, platform-specific payload. "a_key": "", # Properties of the object. }, "playAudio": { # Specifies an audio clip to be played by the client as part of the response. # Signal that the client should play an audio clip hosted at a client-specific URI. Dialogflow uses this to construct mixed_audio. However, Dialogflow itself does not try to read or process the URI in any way. "allowPlaybackInterruption": True or False, # Output only. 
Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "audioUri": "A String", # Required. URI of the audio clip. Dialogflow does not impose any validation on this value. It is specific to the client that reads it. }, "telephonyTransferCall": { # Represents the signal that telles the client to transfer the phone call connected to the agent to a third-party endpoint. # A signal that the client should transfer the phone call connected to this agent to a third-party endpoint. "phoneNumber": "A String", # Transfer the call to a phone number in [E.164 format](https://en.wikipedia.org/wiki/E.164). }, "text": { # The text response message. # Returns a text response. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "text": [ # Required. A collection of text response variants. If multiple variants are defined, only one text response variant is returned at runtime. "A String", ], }, "toolCall": { # Represents a call of a specific tool's action with the specified inputs. # Returns the definition of a tool call that should be executed by the client. "action": "A String", # Required. The name of the tool's action associated with this call. "inputParameters": { # Optional. The action's input parameters. "a_key": "", # Properties of the object. }, "tool": "A String", # Required. The tool associated with this call. Format: `projects//locations//agents//tools/`. }, }, }, ], "condition": "A String", # The condition to activate and select this case. Empty means the condition is always true. The condition is evaluated against form parameters or session parameters. See the [conditions reference](https://cloud.google.com/dialogflow/cx/docs/reference/condition). }, ], }, ], "enableGenerativeFallback": True or False, # If the flag is true, the agent will utilize LLM to generate a text response. If LLM generation fails, the defined responses in the fulfillment will be respected. This flag is only useful for fulfillments associated with no-match event handlers. "generators": [ # A list of Generators to be called during this fulfillment. { # Generator settings used by the LLM to generate a text response. "generator": "A String", # Required. The generator to call. Format: `projects//locations//agents//generators/`. "inputParameters": { # Map from placeholder parameter in the Generator to corresponding session parameters. By default, Dialogflow uses the session parameter with the same name to fill in the generator template. e.g. If there is a placeholder parameter `city` in the Generator, Dialogflow default to fill in the `$city` with `$session.params.city`. However, you may choose to fill `$city` with `$session.params.desination-city`. - Map key: parameter ID - Map value: session parameter name "a_key": "A String", }, "outputParameter": "A String", # Required. Output parameter which should contain the generator response. }, ], "messages": [ # The list of rich message responses to present to the user. { # Represents a response message that can be returned by a conversational agent. Response messages are also used for output audio synthesis. The approach is as follows: * If at least one OutputAudioText response is present, then all OutputAudioText responses are linearly concatenated, and the result is used for output audio synthesis. 
* If the OutputAudioText responses are a mixture of text and SSML, then the concatenated result is treated as SSML; otherwise, the result is treated as either text or SSML as appropriate. The agent designer should ideally use either text or SSML consistently throughout the bot design. * Otherwise, all Text responses are linearly concatenated, and the result is used for output audio synthesis. This approach allows for more sophisticated user experience scenarios, where the text displayed to the user may differ from what is heard. "channel": "A String", # The channel which the response is associated with. Clients can specify the channel via QueryParameters.channel, and only associated channel response will be returned. "conversationSuccess": { # Indicates that the conversation succeeded, i.e., the bot handled the issue that the customer talked to it about. Dialogflow only uses this to determine which conversations should be counted as successful and doesn't process the metadata in this message in any way. Note that Dialogflow also considers conversations that get to the conversation end page as successful even if they don't return ConversationSuccess. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates that the conversation succeeded. * In a webhook response when you determine that you handled the customer issue. # Indicates that the conversation succeeded. "metadata": { # Custom metadata. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. }, }, "endInteraction": { # Indicates that interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. A signal that indicates the interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only when the conversation reaches `END_SESSION` page. It is not supposed to be defined by the user. It's guaranteed that there is at most one such message in each response. }, "knowledgeInfoCard": { # Represents info card response. If the response contains generative knowledge prediction, Dialogflow will return a payload with Infobot Messenger compatible info card. Otherwise, the info card response is skipped. # Represents info card for knowledge answers, to be better rendered in Dialogflow Messenger. }, "liveAgentHandoff": { # Indicates that the conversation should be handed off to a live agent. Dialogflow only uses this to determine which conversations were handed off to a human agent for measurement purposes. What else to do with this signal is up to you and your handoff procedures. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates something went extremely wrong in the conversation. * In a webhook response when you determine that the customer issue can only be handled by a human. # Hands off conversation to a human agent. "metadata": { # Custom metadata for your handoff procedure. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. }, }, "mixedAudio": { # Represents an audio message that is composed of both segments synthesized from the Dialogflow agent prompts and ones hosted externally at the specified URIs. The external URIs are specified via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. 
An audio response message composed of both the synthesized Dialogflow agent responses and responses defined via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. "segments": [ # Segments this audio response is composed of. { # Represents one segment of audio. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this segment can be interrupted by the end user's speech and the client should then start the next Dialogflow request. "audio": "A String", # Raw audio synthesized from the Dialogflow agent's response using the output config specified in the request. "uri": "A String", # Client-specific URI that points to an audio clip accessible to the client. Dialogflow does not impose any validation on it. }, ], }, "outputAudioText": { # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "ssml": "A String", # The SSML text to be synthesized. For more information, see [SSML](/speech/text-to-speech/docs/ssml). "text": "A String", # The raw text to be synthesized. }, "payload": { # Returns a response containing a custom, platform-specific payload. "a_key": "", # Properties of the object. }, "playAudio": { # Specifies an audio clip to be played by the client as part of the response. # Signal that the client should play an audio clip hosted at a client-specific URI. Dialogflow uses this to construct mixed_audio. However, Dialogflow itself does not try to read or process the URI in any way. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "audioUri": "A String", # Required. URI of the audio clip. Dialogflow does not impose any validation on this value. It is specific to the client that reads it. }, "telephonyTransferCall": { # Represents the signal that telles the client to transfer the phone call connected to the agent to a third-party endpoint. # A signal that the client should transfer the phone call connected to this agent to a third-party endpoint. "phoneNumber": "A String", # Transfer the call to a phone number in [E.164 format](https://en.wikipedia.org/wiki/E.164). }, "text": { # The text response message. # Returns a text response. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "text": [ # Required. A collection of text response variants. If multiple variants are defined, only one text response variant is returned at runtime. "A String", ], }, "toolCall": { # Represents a call of a specific tool's action with the specified inputs. # Returns the definition of a tool call that should be executed by the client. "action": "A String", # Required. The name of the tool's action associated with this call. "inputParameters": { # Optional. The action's input parameters. "a_key": "", # Properties of the object. }, "tool": "A String", # Required. The tool associated with this call. 
Format: `projects//locations//agents//tools/`. }, }, ], "returnPartialResponses": True or False, # Whether Dialogflow should return currently queued fulfillment response messages in streaming APIs. If a webhook is specified, it happens before Dialogflow invokes webhook. Warning: 1) This flag only affects streaming API. Responses are still queued and returned once in non-streaming API. 2) The flag can be enabled in any fulfillment but only the first 3 partial responses will be returned. You may only want to apply it to fulfillments that have slow webhooks. "setParameterActions": [ # Set parameter values before executing the webhook. { # Setting a parameter value. "parameter": "A String", # Display name of the parameter. "value": "", # The new value of the parameter. A null value clears the parameter. }, ], "tag": "A String", # The value of this field will be populated in the WebhookRequest `fulfillmentInfo.tag` field by Dialogflow when the associated webhook is called. The tag is typically used by the webhook service to identify which fulfillment is being called, but it could be used for other purposes. This field is required if `webhook` is specified. "webhook": "A String", # The webhook to call. Format: `projects//locations//agents//webhooks/`. }, "lifecycleStage": "A String", # Required. The name of the lifecycle stage that triggers this handler. Supported values: * `playbook-start` * `pre-action-selection` * `pre-action-execution` }, }, ], "inputParameterDefinitions": [ # Optional. Defined structured input parameters for this playbook. { # Defines the properties of a parameter. Used to define parameters used in the agent and the input / output parameters for each fulfillment. "description": "A String", # Human-readable description of the parameter. Limited to 300 characters. "name": "A String", # Required. Name of parameter. "type": "A String", # Type of parameter. "typeSchema": { # Encapsulates different type schema variations: either a reference to an a schema that's already defined by a tool, or an inline definition. # Optional. Type schema of parameter. "inlineSchema": { # A type schema object that's specified inline. # Set if this is an inline schema definition. "items": # Object with schema name: GoogleCloudDialogflowCxV3beta1TypeSchema # Schema of the elements if this is an ARRAY type. "type": "A String", # Data type of the schema. }, "schemaReference": { # A reference to the schema of an existing tool. # Set if this is a schema reference. "schema": "A String", # The name of the schema. "tool": "A String", # The tool that contains this schema definition. Format: `projects//locations//agents//tools/`. }, }, }, ], "instruction": { # Message of the Instruction of the playbook. # Instruction to accomplish target goal. "guidelines": "A String", # General guidelines for the playbook. These are unstructured instructions that are not directly part of the goal, e.g. "Always be polite". It's valid for this text to be long and used instead of steps altogether. "steps": [ # Ordered list of step by step execution instructions to accomplish target goal. { # Message of single step execution. "steps": [ # Sub-processing needed to execute the current step. # Object with schema name: GoogleCloudDialogflowCxV3beta1PlaybookStep ], "text": "A String", # Step instruction in text format. }, ], }, "llmModelSettings": { # Settings for LLM models. # Optional. Llm model settings for the playbook. "model": "A String", # The selected LLM model. "promptText": "A String", # The custom prompt to use. 
}, "name": "A String", # The unique identifier of the playbook. Format: `projects//locations//agents//playbooks/`. "outputParameterDefinitions": [ # Optional. Defined structured output parameters for this playbook. { # Defines the properties of a parameter. Used to define parameters used in the agent and the input / output parameters for each fulfillment. "description": "A String", # Human-readable description of the parameter. Limited to 300 characters. "name": "A String", # Required. Name of parameter. "type": "A String", # Type of parameter. "typeSchema": { # Encapsulates different type schema variations: either a reference to an a schema that's already defined by a tool, or an inline definition. # Optional. Type schema of parameter. "inlineSchema": { # A type schema object that's specified inline. # Set if this is an inline schema definition. "items": # Object with schema name: GoogleCloudDialogflowCxV3beta1TypeSchema # Schema of the elements if this is an ARRAY type. "type": "A String", # Data type of the schema. }, "schemaReference": { # A reference to the schema of an existing tool. # Set if this is a schema reference. "schema": "A String", # The name of the schema. "tool": "A String", # The tool that contains this schema definition. Format: `projects//locations//agents//tools/`. }, }, }, ], "playbookType": "A String", # Optional. Type of the playbook. "referencedFlows": [ # Output only. The resource name of flows referenced by the current playbook in the instructions. "A String", ], "referencedPlaybooks": [ # Output only. The resource name of other playbooks referenced by the current playbook in the instructions. "A String", ], "referencedTools": [ # Optional. The resource name of tools referenced by the current playbook in the instructions. If not provided explicitly, they are will be implied using the tool being referenced in goal and steps. "A String", ], "speechSettings": { # Define behaviors of speech to text detection. # Optional. Playbook level Settings for speech to text detection. "endpointerSensitivity": 42, # Sensitivity of the speech model that detects the end of speech. Scale from 0 to 100. "models": { # Mapping from language to Speech-to-Text model. The mapped Speech-to-Text model will be selected for requests from its corresponding language. For more information, see [Speech models](https://cloud.google.com/dialogflow/cx/docs/concept/speech-models). "a_key": "A String", }, "noSpeechTimeout": "A String", # Timeout before detecting no speech. "useTimeoutBasedEndpointing": True or False, # Use timeout based endpointing, interpreting endpointer sensitivity as seconds of timeout value. }, "tokenCount": "A String", # Output only. Estimated number of tokes current playbook takes when sent to the LLM. "updateTime": "A String", # Output only. Last time the playbook version was updated. } x__xgafv: string, V1 error format. Allowed values 1 - v1 error format 2 - v2 error format Returns: An object of the form: { # Playbook is the basic building block to instruct the LLM how to execute a certain task. A playbook consists of a goal to accomplish, an optional list of step by step instructions (the step instruction may refers to name of the custom or default plugin tools to use) to perform the task, a list of contextual input data to be passed in at the beginning of the invoked, and a list of output parameters to store the playbook result. "createTime": "A String", # Output only. The timestamp of initial playbook creation. "displayName": "A String", # Required. 
The human-readable name of the playbook, unique within an agent. "goal": "A String", # Required. High level description of the goal the playbook intend to accomplish. A goal should be concise since it's visible to other playbooks that may reference this playbook. "handlers": [ # Optional. A list of registered handlers to execute based on the specified triggers. { # Handler can be used to define custom logic to be executed based on the user-specified triggers. "eventHandler": { # A handler that is triggered by the specified event. # A handler triggered by event. "condition": "A String", # Optional. The condition that must be satisfied to trigger this handler. "event": "A String", # Required. The name of the event that triggers this handler. "fulfillment": { # A fulfillment can do one or more of the following actions at the same time: * Generate rich message responses. * Set parameter values. * Call the webhook. Fulfillments can be called at various stages in the Page or Form lifecycle. For example, when a DetectIntentRequest drives a session to enter a new page, the page's entry fulfillment can add a static response to the QueryResult in the returning DetectIntentResponse, call the webhook (for example, to load user data from a database), or both. # Required. The fulfillment to call when the event occurs. "advancedSettings": { # Hierarchical advanced settings for agent/flow/page/fulfillment/parameter. Settings exposed at lower level overrides the settings exposed at higher level. Overriding occurs at the sub-setting level. For example, the playback_interruption_settings at fulfillment level only overrides the playback_interruption_settings at the agent level, leaving other settings at the agent level unchanged. DTMF settings does not override each other. DTMF settings set at different levels define DTMF detections running in parallel. Hierarchy: Agent->Flow->Page->Fulfillment/Parameter. # Hierarchical advanced settings for this fulfillment. The settings exposed at the lower level overrides the settings exposed at the higher level. "audioExportGcsDestination": { # Google Cloud Storage location for a Dialogflow operation that writes or exports objects (e.g. exported agent or transcripts) outside of Dialogflow. # If present, incoming audio is exported by Dialogflow to the configured Google Cloud Storage destination. Exposed at the following levels: - Agent level - Flow level "uri": "A String", # Required. The Google Cloud Storage URI for the exported objects. A URI is of the form: `gs://bucket/object-name-or-prefix` Whether a full object name, or just a prefix, its usage depends on the Dialogflow operation. }, "dtmfSettings": { # Define behaviors for DTMF (dual tone multi frequency). # Settings for DTMF. Exposed at the following levels: - Agent level - Flow level - Page level - Parameter level. "enabled": True or False, # If true, incoming audio is processed for DTMF (dual tone multi frequency) events. For example, if the caller presses a button on their telephone keypad and DTMF processing is enabled, Dialogflow will detect the event (e.g. a "3" was pressed) in the incoming audio and pass the event to the bot to drive business logic (e.g. when 3 is pressed, return the account balance). "endpointingTimeoutDuration": "A String", # Endpoint timeout setting for matching dtmf input to regex. "finishDigit": "A String", # The digit that terminates a DTMF digit sequence. "interdigitTimeoutDuration": "A String", # Interdigit timeout setting for matching dtmf input to regex. 
"maxDigits": 42, # Max length of DTMF digits. }, "loggingSettings": { # Define behaviors on logging. # Settings for logging. Settings for Dialogflow History, Contact Center messages, StackDriver logs, and speech logging. Exposed at the following levels: - Agent level. "enableConsentBasedRedaction": True or False, # Enables consent-based end-user input redaction, if true, a pre-defined session parameter `$session.params.conversation-redaction` will be used to determine if the utterance should be redacted. "enableInteractionLogging": True or False, # Enables DF Interaction logging. "enableStackdriverLogging": True or False, # Enables Google Cloud Logging. }, "speechSettings": { # Define behaviors of speech to text detection. # Settings for speech to text detection. Exposed at the following levels: - Agent level - Flow level - Page level - Parameter level "endpointerSensitivity": 42, # Sensitivity of the speech model that detects the end of speech. Scale from 0 to 100. "models": { # Mapping from language to Speech-to-Text model. The mapped Speech-to-Text model will be selected for requests from its corresponding language. For more information, see [Speech models](https://cloud.google.com/dialogflow/cx/docs/concept/speech-models). "a_key": "A String", }, "noSpeechTimeout": "A String", # Timeout before detecting no speech. "useTimeoutBasedEndpointing": True or False, # Use timeout based endpointing, interpreting endpointer sensitivity as seconds of timeout value. }, }, "conditionalCases": [ # Conditional cases for this fulfillment. { # A list of cascading if-else conditions. Cases are mutually exclusive. The first one with a matching condition is selected, all the rest ignored. "cases": [ # A list of cascading if-else conditions. { # Each case has a Boolean condition. When it is evaluated to be True, the corresponding messages will be selected and evaluated recursively. "caseContent": [ # A list of case content. { # The list of messages or conditional cases to activate for this case. "additionalCases": # Object with schema name: GoogleCloudDialogflowCxV3beta1FulfillmentConditionalCases # Additional cases to be evaluated. "message": { # Represents a response message that can be returned by a conversational agent. Response messages are also used for output audio synthesis. The approach is as follows: * If at least one OutputAudioText response is present, then all OutputAudioText responses are linearly concatenated, and the result is used for output audio synthesis. * If the OutputAudioText responses are a mixture of text and SSML, then the concatenated result is treated as SSML; otherwise, the result is treated as either text or SSML as appropriate. The agent designer should ideally use either text or SSML consistently throughout the bot design. * Otherwise, all Text responses are linearly concatenated, and the result is used for output audio synthesis. This approach allows for more sophisticated user experience scenarios, where the text displayed to the user may differ from what is heard. # Returned message. "channel": "A String", # The channel which the response is associated with. Clients can specify the channel via QueryParameters.channel, and only associated channel response will be returned. "conversationSuccess": { # Indicates that the conversation succeeded, i.e., the bot handled the issue that the customer talked to it about. Dialogflow only uses this to determine which conversations should be counted as successful and doesn't process the metadata in this message in any way. 
Note that Dialogflow also considers conversations that get to the conversation end page as successful even if they don't return ConversationSuccess. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates that the conversation succeeded. * In a webhook response when you determine that you handled the customer issue. # Indicates that the conversation succeeded. "metadata": { # Custom metadata. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. }, }, "endInteraction": { # Indicates that interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. A signal that indicates the interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only when the conversation reaches `END_SESSION` page. It is not supposed to be defined by the user. It's guaranteed that there is at most one such message in each response. }, "knowledgeInfoCard": { # Represents info card response. If the response contains generative knowledge prediction, Dialogflow will return a payload with Infobot Messenger compatible info card. Otherwise, the info card response is skipped. # Represents info card for knowledge answers, to be better rendered in Dialogflow Messenger. }, "liveAgentHandoff": { # Indicates that the conversation should be handed off to a live agent. Dialogflow only uses this to determine which conversations were handed off to a human agent for measurement purposes. What else to do with this signal is up to you and your handoff procedures. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates something went extremely wrong in the conversation. * In a webhook response when you determine that the customer issue can only be handled by a human. # Hands off conversation to a human agent. "metadata": { # Custom metadata for your handoff procedure. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. }, }, "mixedAudio": { # Represents an audio message that is composed of both segments synthesized from the Dialogflow agent prompts and ones hosted externally at the specified URIs. The external URIs are specified via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. An audio response message composed of both the synthesized Dialogflow agent responses and responses defined via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. "segments": [ # Segments this audio response is composed of. { # Represents one segment of audio. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this segment can be interrupted by the end user's speech and the client should then start the next Dialogflow request. "audio": "A String", # Raw audio synthesized from the Dialogflow agent's response using the output config specified in the request. "uri": "A String", # Client-specific URI that points to an audio clip accessible to the client. Dialogflow does not impose any validation on it. }, ], }, "outputAudioText": { # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. 
"allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "ssml": "A String", # The SSML text to be synthesized. For more information, see [SSML](/speech/text-to-speech/docs/ssml). "text": "A String", # The raw text to be synthesized. }, "payload": { # Returns a response containing a custom, platform-specific payload. "a_key": "", # Properties of the object. }, "playAudio": { # Specifies an audio clip to be played by the client as part of the response. # Signal that the client should play an audio clip hosted at a client-specific URI. Dialogflow uses this to construct mixed_audio. However, Dialogflow itself does not try to read or process the URI in any way. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "audioUri": "A String", # Required. URI of the audio clip. Dialogflow does not impose any validation on this value. It is specific to the client that reads it. }, "telephonyTransferCall": { # Represents the signal that telles the client to transfer the phone call connected to the agent to a third-party endpoint. # A signal that the client should transfer the phone call connected to this agent to a third-party endpoint. "phoneNumber": "A String", # Transfer the call to a phone number in [E.164 format](https://en.wikipedia.org/wiki/E.164). }, "text": { # The text response message. # Returns a text response. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "text": [ # Required. A collection of text response variants. If multiple variants are defined, only one text response variant is returned at runtime. "A String", ], }, "toolCall": { # Represents a call of a specific tool's action with the specified inputs. # Returns the definition of a tool call that should be executed by the client. "action": "A String", # Required. The name of the tool's action associated with this call. "inputParameters": { # Optional. The action's input parameters. "a_key": "", # Properties of the object. }, "tool": "A String", # Required. The tool associated with this call. Format: `projects//locations//agents//tools/`. }, }, }, ], "condition": "A String", # The condition to activate and select this case. Empty means the condition is always true. The condition is evaluated against form parameters or session parameters. See the [conditions reference](https://cloud.google.com/dialogflow/cx/docs/reference/condition). }, ], }, ], "enableGenerativeFallback": True or False, # If the flag is true, the agent will utilize LLM to generate a text response. If LLM generation fails, the defined responses in the fulfillment will be respected. This flag is only useful for fulfillments associated with no-match event handlers. "generators": [ # A list of Generators to be called during this fulfillment. { # Generator settings used by the LLM to generate a text response. "generator": "A String", # Required. The generator to call. Format: `projects//locations//agents//generators/`. "inputParameters": { # Map from placeholder parameter in the Generator to corresponding session parameters. By default, Dialogflow uses the session parameter with the same name to fill in the generator template. e.g. 
If there is a placeholder parameter `city` in the Generator, Dialogflow default to fill in the `$city` with `$session.params.city`. However, you may choose to fill `$city` with `$session.params.desination-city`. - Map key: parameter ID - Map value: session parameter name "a_key": "A String", }, "outputParameter": "A String", # Required. Output parameter which should contain the generator response. }, ], "messages": [ # The list of rich message responses to present to the user. { # Represents a response message that can be returned by a conversational agent. Response messages are also used for output audio synthesis. The approach is as follows: * If at least one OutputAudioText response is present, then all OutputAudioText responses are linearly concatenated, and the result is used for output audio synthesis. * If the OutputAudioText responses are a mixture of text and SSML, then the concatenated result is treated as SSML; otherwise, the result is treated as either text or SSML as appropriate. The agent designer should ideally use either text or SSML consistently throughout the bot design. * Otherwise, all Text responses are linearly concatenated, and the result is used for output audio synthesis. This approach allows for more sophisticated user experience scenarios, where the text displayed to the user may differ from what is heard. "channel": "A String", # The channel which the response is associated with. Clients can specify the channel via QueryParameters.channel, and only associated channel response will be returned. "conversationSuccess": { # Indicates that the conversation succeeded, i.e., the bot handled the issue that the customer talked to it about. Dialogflow only uses this to determine which conversations should be counted as successful and doesn't process the metadata in this message in any way. Note that Dialogflow also considers conversations that get to the conversation end page as successful even if they don't return ConversationSuccess. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates that the conversation succeeded. * In a webhook response when you determine that you handled the customer issue. # Indicates that the conversation succeeded. "metadata": { # Custom metadata. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. }, }, "endInteraction": { # Indicates that interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. A signal that indicates the interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only when the conversation reaches `END_SESSION` page. It is not supposed to be defined by the user. It's guaranteed that there is at most one such message in each response. }, "knowledgeInfoCard": { # Represents info card response. If the response contains generative knowledge prediction, Dialogflow will return a payload with Infobot Messenger compatible info card. Otherwise, the info card response is skipped. # Represents info card for knowledge answers, to be better rendered in Dialogflow Messenger. }, "liveAgentHandoff": { # Indicates that the conversation should be handed off to a live agent. Dialogflow only uses this to determine which conversations were handed off to a human agent for measurement purposes. What else to do with this signal is up to you and your handoff procedures. 
You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates something went extremely wrong in the conversation. * In a webhook response when you determine that the customer issue can only be handled by a human. # Hands off conversation to a human agent. "metadata": { # Custom metadata for your handoff procedure. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. }, }, "mixedAudio": { # Represents an audio message that is composed of both segments synthesized from the Dialogflow agent prompts and ones hosted externally at the specified URIs. The external URIs are specified via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. An audio response message composed of both the synthesized Dialogflow agent responses and responses defined via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. "segments": [ # Segments this audio response is composed of. { # Represents one segment of audio. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this segment can be interrupted by the end user's speech and the client should then start the next Dialogflow request. "audio": "A String", # Raw audio synthesized from the Dialogflow agent's response using the output config specified in the request. "uri": "A String", # Client-specific URI that points to an audio clip accessible to the client. Dialogflow does not impose any validation on it. }, ], }, "outputAudioText": { # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "ssml": "A String", # The SSML text to be synthesized. For more information, see [SSML](/speech/text-to-speech/docs/ssml). "text": "A String", # The raw text to be synthesized. }, "payload": { # Returns a response containing a custom, platform-specific payload. "a_key": "", # Properties of the object. }, "playAudio": { # Specifies an audio clip to be played by the client as part of the response. # Signal that the client should play an audio clip hosted at a client-specific URI. Dialogflow uses this to construct mixed_audio. However, Dialogflow itself does not try to read or process the URI in any way. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "audioUri": "A String", # Required. URI of the audio clip. Dialogflow does not impose any validation on this value. It is specific to the client that reads it. }, "telephonyTransferCall": { # Represents the signal that telles the client to transfer the phone call connected to the agent to a third-party endpoint. # A signal that the client should transfer the phone call connected to this agent to a third-party endpoint. "phoneNumber": "A String", # Transfer the call to a phone number in [E.164 format](https://en.wikipedia.org/wiki/E.164). }, "text": { # The text response message. # Returns a text response. 
"allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "text": [ # Required. A collection of text response variants. If multiple variants are defined, only one text response variant is returned at runtime. "A String", ], }, "toolCall": { # Represents a call of a specific tool's action with the specified inputs. # Returns the definition of a tool call that should be executed by the client. "action": "A String", # Required. The name of the tool's action associated with this call. "inputParameters": { # Optional. The action's input parameters. "a_key": "", # Properties of the object. }, "tool": "A String", # Required. The tool associated with this call. Format: `projects//locations//agents//tools/`. }, }, ], "returnPartialResponses": True or False, # Whether Dialogflow should return currently queued fulfillment response messages in streaming APIs. If a webhook is specified, it happens before Dialogflow invokes webhook. Warning: 1) This flag only affects streaming API. Responses are still queued and returned once in non-streaming API. 2) The flag can be enabled in any fulfillment but only the first 3 partial responses will be returned. You may only want to apply it to fulfillments that have slow webhooks. "setParameterActions": [ # Set parameter values before executing the webhook. { # Setting a parameter value. "parameter": "A String", # Display name of the parameter. "value": "", # The new value of the parameter. A null value clears the parameter. }, ], "tag": "A String", # The value of this field will be populated in the WebhookRequest `fulfillmentInfo.tag` field by Dialogflow when the associated webhook is called. The tag is typically used by the webhook service to identify which fulfillment is being called, but it could be used for other purposes. This field is required if `webhook` is specified. "webhook": "A String", # The webhook to call. Format: `projects//locations//agents//webhooks/`. }, }, "lifecycleHandler": { # A handler that is triggered on the specific lifecycle_stage of the playbook execution. # A handler triggered during specific lifecycle of the playbook execution. "condition": "A String", # Optional. The condition that must be satisfied to trigger this handler. "fulfillment": { # A fulfillment can do one or more of the following actions at the same time: * Generate rich message responses. * Set parameter values. * Call the webhook. Fulfillments can be called at various stages in the Page or Form lifecycle. For example, when a DetectIntentRequest drives a session to enter a new page, the page's entry fulfillment can add a static response to the QueryResult in the returning DetectIntentResponse, call the webhook (for example, to load user data from a database), or both. # Required. The fulfillment to call when this handler is triggered. "advancedSettings": { # Hierarchical advanced settings for agent/flow/page/fulfillment/parameter. Settings exposed at lower level overrides the settings exposed at higher level. Overriding occurs at the sub-setting level. For example, the playback_interruption_settings at fulfillment level only overrides the playback_interruption_settings at the agent level, leaving other settings at the agent level unchanged. DTMF settings does not override each other. DTMF settings set at different levels define DTMF detections running in parallel. Hierarchy: Agent->Flow->Page->Fulfillment/Parameter. 
# Hierarchical advanced settings for this fulfillment. The settings exposed at the lower level overrides the settings exposed at the higher level. "audioExportGcsDestination": { # Google Cloud Storage location for a Dialogflow operation that writes or exports objects (e.g. exported agent or transcripts) outside of Dialogflow. # If present, incoming audio is exported by Dialogflow to the configured Google Cloud Storage destination. Exposed at the following levels: - Agent level - Flow level "uri": "A String", # Required. The Google Cloud Storage URI for the exported objects. A URI is of the form: `gs://bucket/object-name-or-prefix` Whether a full object name, or just a prefix, its usage depends on the Dialogflow operation. }, "dtmfSettings": { # Define behaviors for DTMF (dual tone multi frequency). # Settings for DTMF. Exposed at the following levels: - Agent level - Flow level - Page level - Parameter level. "enabled": True or False, # If true, incoming audio is processed for DTMF (dual tone multi frequency) events. For example, if the caller presses a button on their telephone keypad and DTMF processing is enabled, Dialogflow will detect the event (e.g. a "3" was pressed) in the incoming audio and pass the event to the bot to drive business logic (e.g. when 3 is pressed, return the account balance). "endpointingTimeoutDuration": "A String", # Endpoint timeout setting for matching dtmf input to regex. "finishDigit": "A String", # The digit that terminates a DTMF digit sequence. "interdigitTimeoutDuration": "A String", # Interdigit timeout setting for matching dtmf input to regex. "maxDigits": 42, # Max length of DTMF digits. }, "loggingSettings": { # Define behaviors on logging. # Settings for logging. Settings for Dialogflow History, Contact Center messages, StackDriver logs, and speech logging. Exposed at the following levels: - Agent level. "enableConsentBasedRedaction": True or False, # Enables consent-based end-user input redaction, if true, a pre-defined session parameter `$session.params.conversation-redaction` will be used to determine if the utterance should be redacted. "enableInteractionLogging": True or False, # Enables DF Interaction logging. "enableStackdriverLogging": True or False, # Enables Google Cloud Logging. }, "speechSettings": { # Define behaviors of speech to text detection. # Settings for speech to text detection. Exposed at the following levels: - Agent level - Flow level - Page level - Parameter level "endpointerSensitivity": 42, # Sensitivity of the speech model that detects the end of speech. Scale from 0 to 100. "models": { # Mapping from language to Speech-to-Text model. The mapped Speech-to-Text model will be selected for requests from its corresponding language. For more information, see [Speech models](https://cloud.google.com/dialogflow/cx/docs/concept/speech-models). "a_key": "A String", }, "noSpeechTimeout": "A String", # Timeout before detecting no speech. "useTimeoutBasedEndpointing": True or False, # Use timeout based endpointing, interpreting endpointer sensitivity as seconds of timeout value. }, }, "conditionalCases": [ # Conditional cases for this fulfillment. { # A list of cascading if-else conditions. Cases are mutually exclusive. The first one with a matching condition is selected, all the rest ignored. "cases": [ # A list of cascading if-else conditions. { # Each case has a Boolean condition. When it is evaluated to be True, the corresponding messages will be selected and evaluated recursively. "caseContent": [ # A list of case content. 
{ # The list of messages or conditional cases to activate for this case. "additionalCases": # Object with schema name: GoogleCloudDialogflowCxV3beta1FulfillmentConditionalCases # Additional cases to be evaluated. "message": { # Represents a response message that can be returned by a conversational agent. Response messages are also used for output audio synthesis. The approach is as follows: * If at least one OutputAudioText response is present, then all OutputAudioText responses are linearly concatenated, and the result is used for output audio synthesis. * If the OutputAudioText responses are a mixture of text and SSML, then the concatenated result is treated as SSML; otherwise, the result is treated as either text or SSML as appropriate. The agent designer should ideally use either text or SSML consistently throughout the bot design. * Otherwise, all Text responses are linearly concatenated, and the result is used for output audio synthesis. This approach allows for more sophisticated user experience scenarios, where the text displayed to the user may differ from what is heard. # Returned message. "channel": "A String", # The channel which the response is associated with. Clients can specify the channel via QueryParameters.channel, and only associated channel response will be returned. "conversationSuccess": { # Indicates that the conversation succeeded, i.e., the bot handled the issue that the customer talked to it about. Dialogflow only uses this to determine which conversations should be counted as successful and doesn't process the metadata in this message in any way. Note that Dialogflow also considers conversations that get to the conversation end page as successful even if they don't return ConversationSuccess. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates that the conversation succeeded. * In a webhook response when you determine that you handled the customer issue. # Indicates that the conversation succeeded. "metadata": { # Custom metadata. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. }, }, "endInteraction": { # Indicates that interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. A signal that indicates the interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only when the conversation reaches `END_SESSION` page. It is not supposed to be defined by the user. It's guaranteed that there is at most one such message in each response. }, "knowledgeInfoCard": { # Represents info card response. If the response contains generative knowledge prediction, Dialogflow will return a payload with Infobot Messenger compatible info card. Otherwise, the info card response is skipped. # Represents info card for knowledge answers, to be better rendered in Dialogflow Messenger. }, "liveAgentHandoff": { # Indicates that the conversation should be handed off to a live agent. Dialogflow only uses this to determine which conversations were handed off to a human agent for measurement purposes. What else to do with this signal is up to you and your handoff procedures. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates something went extremely wrong in the conversation. * In a webhook response when you determine that the customer issue can only be handled by a human. # Hands off conversation to a human agent. 
"metadata": { # Custom metadata for your handoff procedure. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. }, }, "mixedAudio": { # Represents an audio message that is composed of both segments synthesized from the Dialogflow agent prompts and ones hosted externally at the specified URIs. The external URIs are specified via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. An audio response message composed of both the synthesized Dialogflow agent responses and responses defined via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. "segments": [ # Segments this audio response is composed of. { # Represents one segment of audio. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this segment can be interrupted by the end user's speech and the client should then start the next Dialogflow request. "audio": "A String", # Raw audio synthesized from the Dialogflow agent's response using the output config specified in the request. "uri": "A String", # Client-specific URI that points to an audio clip accessible to the client. Dialogflow does not impose any validation on it. }, ], }, "outputAudioText": { # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "ssml": "A String", # The SSML text to be synthesized. For more information, see [SSML](/speech/text-to-speech/docs/ssml). "text": "A String", # The raw text to be synthesized. }, "payload": { # Returns a response containing a custom, platform-specific payload. "a_key": "", # Properties of the object. }, "playAudio": { # Specifies an audio clip to be played by the client as part of the response. # Signal that the client should play an audio clip hosted at a client-specific URI. Dialogflow uses this to construct mixed_audio. However, Dialogflow itself does not try to read or process the URI in any way. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "audioUri": "A String", # Required. URI of the audio clip. Dialogflow does not impose any validation on this value. It is specific to the client that reads it. }, "telephonyTransferCall": { # Represents the signal that telles the client to transfer the phone call connected to the agent to a third-party endpoint. # A signal that the client should transfer the phone call connected to this agent to a third-party endpoint. "phoneNumber": "A String", # Transfer the call to a phone number in [E.164 format](https://en.wikipedia.org/wiki/E.164). }, "text": { # The text response message. # Returns a text response. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "text": [ # Required. A collection of text response variants. 
If multiple variants are defined, only one text response variant is returned at runtime. "A String", ], }, "toolCall": { # Represents a call of a specific tool's action with the specified inputs. # Returns the definition of a tool call that should be executed by the client. "action": "A String", # Required. The name of the tool's action associated with this call. "inputParameters": { # Optional. The action's input parameters. "a_key": "", # Properties of the object. }, "tool": "A String", # Required. The tool associated with this call. Format: `projects//locations//agents//tools/`. }, }, }, ], "condition": "A String", # The condition to activate and select this case. Empty means the condition is always true. The condition is evaluated against form parameters or session parameters. See the [conditions reference](https://cloud.google.com/dialogflow/cx/docs/reference/condition). }, ], }, ], "enableGenerativeFallback": True or False, # If the flag is true, the agent will utilize LLM to generate a text response. If LLM generation fails, the defined responses in the fulfillment will be respected. This flag is only useful for fulfillments associated with no-match event handlers. "generators": [ # A list of Generators to be called during this fulfillment. { # Generator settings used by the LLM to generate a text response. "generator": "A String", # Required. The generator to call. Format: `projects//locations//agents//generators/`. "inputParameters": { # Map from placeholder parameter in the Generator to corresponding session parameters. By default, Dialogflow uses the session parameter with the same name to fill in the generator template. e.g. If there is a placeholder parameter `city` in the Generator, Dialogflow default to fill in the `$city` with `$session.params.city`. However, you may choose to fill `$city` with `$session.params.desination-city`. - Map key: parameter ID - Map value: session parameter name "a_key": "A String", }, "outputParameter": "A String", # Required. Output parameter which should contain the generator response. }, ], "messages": [ # The list of rich message responses to present to the user. { # Represents a response message that can be returned by a conversational agent. Response messages are also used for output audio synthesis. The approach is as follows: * If at least one OutputAudioText response is present, then all OutputAudioText responses are linearly concatenated, and the result is used for output audio synthesis. * If the OutputAudioText responses are a mixture of text and SSML, then the concatenated result is treated as SSML; otherwise, the result is treated as either text or SSML as appropriate. The agent designer should ideally use either text or SSML consistently throughout the bot design. * Otherwise, all Text responses are linearly concatenated, and the result is used for output audio synthesis. This approach allows for more sophisticated user experience scenarios, where the text displayed to the user may differ from what is heard. "channel": "A String", # The channel which the response is associated with. Clients can specify the channel via QueryParameters.channel, and only associated channel response will be returned. "conversationSuccess": { # Indicates that the conversation succeeded, i.e., the bot handled the issue that the customer talked to it about. Dialogflow only uses this to determine which conversations should be counted as successful and doesn't process the metadata in this message in any way. 
Note that Dialogflow also considers conversations that get to the conversation end page as successful even if they don't return ConversationSuccess. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates that the conversation succeeded. * In a webhook response when you determine that you handled the customer issue. # Indicates that the conversation succeeded. "metadata": { # Custom metadata. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. }, }, "endInteraction": { # Indicates that interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. A signal that indicates the interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only when the conversation reaches `END_SESSION` page. It is not supposed to be defined by the user. It's guaranteed that there is at most one such message in each response. }, "knowledgeInfoCard": { # Represents info card response. If the response contains generative knowledge prediction, Dialogflow will return a payload with Infobot Messenger compatible info card. Otherwise, the info card response is skipped. # Represents info card for knowledge answers, to be better rendered in Dialogflow Messenger. }, "liveAgentHandoff": { # Indicates that the conversation should be handed off to a live agent. Dialogflow only uses this to determine which conversations were handed off to a human agent for measurement purposes. What else to do with this signal is up to you and your handoff procedures. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates something went extremely wrong in the conversation. * In a webhook response when you determine that the customer issue can only be handled by a human. # Hands off conversation to a human agent. "metadata": { # Custom metadata for your handoff procedure. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. }, }, "mixedAudio": { # Represents an audio message that is composed of both segments synthesized from the Dialogflow agent prompts and ones hosted externally at the specified URIs. The external URIs are specified via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. An audio response message composed of both the synthesized Dialogflow agent responses and responses defined via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. "segments": [ # Segments this audio response is composed of. { # Represents one segment of audio. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this segment can be interrupted by the end user's speech and the client should then start the next Dialogflow request. "audio": "A String", # Raw audio synthesized from the Dialogflow agent's response using the output config specified in the request. "uri": "A String", # Client-specific URI that points to an audio clip accessible to the client. Dialogflow does not impose any validation on it. }, ], }, "outputAudioText": { # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. 
"allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "ssml": "A String", # The SSML text to be synthesized. For more information, see [SSML](/speech/text-to-speech/docs/ssml). "text": "A String", # The raw text to be synthesized. }, "payload": { # Returns a response containing a custom, platform-specific payload. "a_key": "", # Properties of the object. }, "playAudio": { # Specifies an audio clip to be played by the client as part of the response. # Signal that the client should play an audio clip hosted at a client-specific URI. Dialogflow uses this to construct mixed_audio. However, Dialogflow itself does not try to read or process the URI in any way. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "audioUri": "A String", # Required. URI of the audio clip. Dialogflow does not impose any validation on this value. It is specific to the client that reads it. }, "telephonyTransferCall": { # Represents the signal that telles the client to transfer the phone call connected to the agent to a third-party endpoint. # A signal that the client should transfer the phone call connected to this agent to a third-party endpoint. "phoneNumber": "A String", # Transfer the call to a phone number in [E.164 format](https://en.wikipedia.org/wiki/E.164). }, "text": { # The text response message. # Returns a text response. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "text": [ # Required. A collection of text response variants. If multiple variants are defined, only one text response variant is returned at runtime. "A String", ], }, "toolCall": { # Represents a call of a specific tool's action with the specified inputs. # Returns the definition of a tool call that should be executed by the client. "action": "A String", # Required. The name of the tool's action associated with this call. "inputParameters": { # Optional. The action's input parameters. "a_key": "", # Properties of the object. }, "tool": "A String", # Required. The tool associated with this call. Format: `projects//locations//agents//tools/`. }, }, ], "returnPartialResponses": True or False, # Whether Dialogflow should return currently queued fulfillment response messages in streaming APIs. If a webhook is specified, it happens before Dialogflow invokes webhook. Warning: 1) This flag only affects streaming API. Responses are still queued and returned once in non-streaming API. 2) The flag can be enabled in any fulfillment but only the first 3 partial responses will be returned. You may only want to apply it to fulfillments that have slow webhooks. "setParameterActions": [ # Set parameter values before executing the webhook. { # Setting a parameter value. "parameter": "A String", # Display name of the parameter. "value": "", # The new value of the parameter. A null value clears the parameter. }, ], "tag": "A String", # The value of this field will be populated in the WebhookRequest `fulfillmentInfo.tag` field by Dialogflow when the associated webhook is called. The tag is typically used by the webhook service to identify which fulfillment is being called, but it could be used for other purposes. 
This field is required if `webhook` is specified. "webhook": "A String", # The webhook to call. Format: `projects//locations//agents//webhooks/`. }, "lifecycleStage": "A String", # Required. The name of the lifecycle stage that triggers this handler. Supported values: * `playbook-start` * `pre-action-selection` * `pre-action-execution` }, }, ], "inputParameterDefinitions": [ # Optional. Defined structured input parameters for this playbook. { # Defines the properties of a parameter. Used to define parameters used in the agent and the input / output parameters for each fulfillment. "description": "A String", # Human-readable description of the parameter. Limited to 300 characters. "name": "A String", # Required. Name of parameter. "type": "A String", # Type of parameter. "typeSchema": { # Encapsulates different type schema variations: either a reference to an a schema that's already defined by a tool, or an inline definition. # Optional. Type schema of parameter. "inlineSchema": { # A type schema object that's specified inline. # Set if this is an inline schema definition. "items": # Object with schema name: GoogleCloudDialogflowCxV3beta1TypeSchema # Schema of the elements if this is an ARRAY type. "type": "A String", # Data type of the schema. }, "schemaReference": { # A reference to the schema of an existing tool. # Set if this is a schema reference. "schema": "A String", # The name of the schema. "tool": "A String", # The tool that contains this schema definition. Format: `projects//locations//agents//tools/`. }, }, }, ], "instruction": { # Message of the Instruction of the playbook. # Instruction to accomplish target goal. "guidelines": "A String", # General guidelines for the playbook. These are unstructured instructions that are not directly part of the goal, e.g. "Always be polite". It's valid for this text to be long and used instead of steps altogether. "steps": [ # Ordered list of step by step execution instructions to accomplish target goal. { # Message of single step execution. "steps": [ # Sub-processing needed to execute the current step. # Object with schema name: GoogleCloudDialogflowCxV3beta1PlaybookStep ], "text": "A String", # Step instruction in text format. }, ], }, "llmModelSettings": { # Settings for LLM models. # Optional. Llm model settings for the playbook. "model": "A String", # The selected LLM model. "promptText": "A String", # The custom prompt to use. }, "name": "A String", # The unique identifier of the playbook. Format: `projects//locations//agents//playbooks/`. "outputParameterDefinitions": [ # Optional. Defined structured output parameters for this playbook. { # Defines the properties of a parameter. Used to define parameters used in the agent and the input / output parameters for each fulfillment. "description": "A String", # Human-readable description of the parameter. Limited to 300 characters. "name": "A String", # Required. Name of parameter. "type": "A String", # Type of parameter. "typeSchema": { # Encapsulates different type schema variations: either a reference to an a schema that's already defined by a tool, or an inline definition. # Optional. Type schema of parameter. "inlineSchema": { # A type schema object that's specified inline. # Set if this is an inline schema definition. "items": # Object with schema name: GoogleCloudDialogflowCxV3beta1TypeSchema # Schema of the elements if this is an ARRAY type. "type": "A String", # Data type of the schema. }, "schemaReference": { # A reference to the schema of an existing tool. 
# Set if this is a schema reference. "schema": "A String", # The name of the schema. "tool": "A String", # The tool that contains this schema definition. Format: `projects//locations//agents//tools/`. }, }, }, ], "playbookType": "A String", # Optional. Type of the playbook. "referencedFlows": [ # Output only. The resource name of flows referenced by the current playbook in the instructions. "A String", ], "referencedPlaybooks": [ # Output only. The resource name of other playbooks referenced by the current playbook in the instructions. "A String", ], "referencedTools": [ # Optional. The resource name of tools referenced by the current playbook in the instructions. If not provided explicitly, they will be inferred from the tools referenced in the goal and steps. "A String", ], "speechSettings": { # Define behaviors of speech to text detection. # Optional. Playbook-level settings for speech-to-text detection. "endpointerSensitivity": 42, # Sensitivity of the speech model that detects the end of speech. Scale from 0 to 100. "models": { # Mapping from language to Speech-to-Text model. The mapped Speech-to-Text model will be selected for requests from its corresponding language. For more information, see [Speech models](https://cloud.google.com/dialogflow/cx/docs/concept/speech-models). "a_key": "A String", }, "noSpeechTimeout": "A String", # Timeout before detecting no speech. "useTimeoutBasedEndpointing": True or False, # Use timeout-based endpointing, interpreting endpointer sensitivity as seconds of timeout value. }, "tokenCount": "A String", # Output only. Estimated number of tokens the current playbook takes when sent to the LLM. "updateTime": "A String", # Output only. Last time the playbook version was updated. }
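For orientation, a minimal sketch of calling create() with the google-api-python-client is shown below. It assumes Application Default Credentials and uses placeholder project, location, and agent IDs; only the required fields plus a short instruction list from the schema above are populated, and the client is built with build('dialogflow', 'v3beta1').

    from googleapiclient.discovery import build

    # Build the Dialogflow CX v3beta1 client (uses Application Default Credentials).
    service = build('dialogflow', 'v3beta1')

    # Placeholder agent resource; substitute your own project, location, and agent IDs.
    parent = 'projects/my-project/locations/global/agents/my-agent-id'

    # Only displayName, goal, and a short instruction list are set in this sketch.
    body = {
        'displayName': 'Order status playbook',
        'goal': 'Help the user look up the status of an existing order.',
        'instruction': {
            'steps': [
                {'text': 'Ask the user for their order number.'},
                {'text': 'Look up the order and summarize its current status.'},
            ],
        },
    }

    playbook = service.projects().locations().agents().playbooks().create(
        parent=parent, body=body).execute()
    print(playbook['name'])  # projects/.../agents/.../playbooks/<generated ID>

The returned dictionary has the Playbook form documented above, including output-only fields such as createTime and tokenCount.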
delete(name, x__xgafv=None)
Deletes a specified playbook. Args: name: string, Required. The name of the playbook to delete. Format: `projects//locations//agents//playbooks/`. (required) x__xgafv: string, V1 error format. Allowed values 1 - v1 error format 2 - v2 error format Returns: An object of the form: { # A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); } }
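A minimal sketch of calling delete(), again with placeholder resource IDs; a successful call returns an empty object, matching the Empty response documented above.

    from googleapiclient.discovery import build

    service = build('dialogflow', 'v3beta1')

    # Placeholder playbook resource name.
    name = ('projects/my-project/locations/global/agents/my-agent-id'
            '/playbooks/my-playbook-id')

    # Returns {} (google.protobuf.Empty) on success.
    service.projects().locations().agents().playbooks().delete(name=name).execute()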
export(name, body=None, x__xgafv=None)
Exports the specified playbook to a binary file. Note that resources (e.g. examples, tools) that the playbook references will also be exported. Args: name: string, Required. The name of the playbook to export. Format: `projects//locations//agents//playbooks/`. (required) body: object, The request body. The object takes the form of: { # The request message for Playbooks.ExportPlaybook. "dataFormat": "A String", # Optional. The data format of the exported agent. If not specified, `BLOB` is assumed. "playbookUri": "A String", # Optional. The [Google Cloud Storage](https://cloud.google.com/storage/docs/) URI to export the playbook to. The format of this URI must be `gs:///`. If left unspecified, the serialized playbook is returned inline. Dialogflow performs a write operation for the Cloud Storage object on the caller's behalf, so your request authentication must have write permissions for the object. For more information, see [Dialogflow access control](https://cloud.google.com/dialogflow/cx/docs/concept/access-control#storage). } x__xgafv: string, V1 error format. Allowed values 1 - v1 error format 2 - v2 error format Returns: An object of the form: { # This resource represents a long-running operation that is the result of a network API call. "done": True or False, # If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available. "error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # The error result of the operation in case of failure or cancellation. "code": 42, # The status code, which should be an enum value of google.rpc.Code. "details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use. { "a_key": "", # Properties of the object. Contains field @type with type URL. }, ], "message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client. }, "metadata": { # Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any. "a_key": "", # Properties of the object. Contains field @type with type URL. }, "name": "A String", # The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`. "response": { # The normal, successful response of the operation. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. 
For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`. "a_key": "", # Properties of the object. Contains field @type with type URL. }, }
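Because export() returns a long-running operation, a caller typically polls it until done is true, at which point either response or error is populated. The sketch below assumes a placeholder playbook resource and a Cloud Storage object the caller has write access to, and polls through the API's projects.locations.operations resource.

    import time
    from googleapiclient.discovery import build

    service = build('dialogflow', 'v3beta1')

    # Placeholder playbook resource name.
    name = ('projects/my-project/locations/global/agents/my-agent-id'
            '/playbooks/my-playbook-id')

    operation = service.projects().locations().agents().playbooks().export(
        name=name,
        body={'playbookUri': 'gs://my-bucket/exports/my-playbook'},  # placeholder bucket/object
    ).execute()

    # Poll until the operation completes; `error` or `response` is then set.
    while not operation.get('done'):
        time.sleep(5)
        operation = service.projects().locations().operations().get(
            name=operation['name']).execute()

    if 'error' in operation:
        raise RuntimeError(operation['error'].get('message', 'export failed'))

If playbookUri is omitted from the request body, the serialized playbook is instead returned inline in the operation's response, as noted above.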
get(name, x__xgafv=None)
Retrieves the specified Playbook. Args: name: string, Required. The name of the playbook. Format: `projects//locations//agents//playbooks/`. (required) x__xgafv: string, V1 error format. Allowed values 1 - v1 error format 2 - v2 error format Returns: An object of the form: { # Playbook is the basic building block to instruct the LLM how to execute a certain task. A playbook consists of a goal to accomplish, an optional list of step by step instructions (the step instruction may refers to name of the custom or default plugin tools to use) to perform the task, a list of contextual input data to be passed in at the beginning of the invoked, and a list of output parameters to store the playbook result. "createTime": "A String", # Output only. The timestamp of initial playbook creation. "displayName": "A String", # Required. The human-readable name of the playbook, unique within an agent. "goal": "A String", # Required. High level description of the goal the playbook intend to accomplish. A goal should be concise since it's visible to other playbooks that may reference this playbook. "handlers": [ # Optional. A list of registered handlers to execute based on the specified triggers. { # Handler can be used to define custom logic to be executed based on the user-specified triggers. "eventHandler": { # A handler that is triggered by the specified event. # A handler triggered by event. "condition": "A String", # Optional. The condition that must be satisfied to trigger this handler. "event": "A String", # Required. The name of the event that triggers this handler. "fulfillment": { # A fulfillment can do one or more of the following actions at the same time: * Generate rich message responses. * Set parameter values. * Call the webhook. Fulfillments can be called at various stages in the Page or Form lifecycle. For example, when a DetectIntentRequest drives a session to enter a new page, the page's entry fulfillment can add a static response to the QueryResult in the returning DetectIntentResponse, call the webhook (for example, to load user data from a database), or both. # Required. The fulfillment to call when the event occurs. "advancedSettings": { # Hierarchical advanced settings for agent/flow/page/fulfillment/parameter. Settings exposed at lower level overrides the settings exposed at higher level. Overriding occurs at the sub-setting level. For example, the playback_interruption_settings at fulfillment level only overrides the playback_interruption_settings at the agent level, leaving other settings at the agent level unchanged. DTMF settings does not override each other. DTMF settings set at different levels define DTMF detections running in parallel. Hierarchy: Agent->Flow->Page->Fulfillment/Parameter. # Hierarchical advanced settings for this fulfillment. The settings exposed at the lower level overrides the settings exposed at the higher level. "audioExportGcsDestination": { # Google Cloud Storage location for a Dialogflow operation that writes or exports objects (e.g. exported agent or transcripts) outside of Dialogflow. # If present, incoming audio is exported by Dialogflow to the configured Google Cloud Storage destination. Exposed at the following levels: - Agent level - Flow level "uri": "A String", # Required. The Google Cloud Storage URI for the exported objects. A URI is of the form: `gs://bucket/object-name-or-prefix` Whether a full object name, or just a prefix, its usage depends on the Dialogflow operation. 
}, "dtmfSettings": { # Define behaviors for DTMF (dual tone multi frequency). # Settings for DTMF. Exposed at the following levels: - Agent level - Flow level - Page level - Parameter level. "enabled": True or False, # If true, incoming audio is processed for DTMF (dual tone multi frequency) events. For example, if the caller presses a button on their telephone keypad and DTMF processing is enabled, Dialogflow will detect the event (e.g. a "3" was pressed) in the incoming audio and pass the event to the bot to drive business logic (e.g. when 3 is pressed, return the account balance). "endpointingTimeoutDuration": "A String", # Endpoint timeout setting for matching dtmf input to regex. "finishDigit": "A String", # The digit that terminates a DTMF digit sequence. "interdigitTimeoutDuration": "A String", # Interdigit timeout setting for matching dtmf input to regex. "maxDigits": 42, # Max length of DTMF digits. }, "loggingSettings": { # Define behaviors on logging. # Settings for logging. Settings for Dialogflow History, Contact Center messages, StackDriver logs, and speech logging. Exposed at the following levels: - Agent level. "enableConsentBasedRedaction": True or False, # Enables consent-based end-user input redaction, if true, a pre-defined session parameter `$session.params.conversation-redaction` will be used to determine if the utterance should be redacted. "enableInteractionLogging": True or False, # Enables DF Interaction logging. "enableStackdriverLogging": True or False, # Enables Google Cloud Logging. }, "speechSettings": { # Define behaviors of speech to text detection. # Settings for speech to text detection. Exposed at the following levels: - Agent level - Flow level - Page level - Parameter level "endpointerSensitivity": 42, # Sensitivity of the speech model that detects the end of speech. Scale from 0 to 100. "models": { # Mapping from language to Speech-to-Text model. The mapped Speech-to-Text model will be selected for requests from its corresponding language. For more information, see [Speech models](https://cloud.google.com/dialogflow/cx/docs/concept/speech-models). "a_key": "A String", }, "noSpeechTimeout": "A String", # Timeout before detecting no speech. "useTimeoutBasedEndpointing": True or False, # Use timeout based endpointing, interpreting endpointer sensitivity as seconds of timeout value. }, }, "conditionalCases": [ # Conditional cases for this fulfillment. { # A list of cascading if-else conditions. Cases are mutually exclusive. The first one with a matching condition is selected, all the rest ignored. "cases": [ # A list of cascading if-else conditions. { # Each case has a Boolean condition. When it is evaluated to be True, the corresponding messages will be selected and evaluated recursively. "caseContent": [ # A list of case content. { # The list of messages or conditional cases to activate for this case. "additionalCases": # Object with schema name: GoogleCloudDialogflowCxV3beta1FulfillmentConditionalCases # Additional cases to be evaluated. "message": { # Represents a response message that can be returned by a conversational agent. Response messages are also used for output audio synthesis. The approach is as follows: * If at least one OutputAudioText response is present, then all OutputAudioText responses are linearly concatenated, and the result is used for output audio synthesis. 
* If the OutputAudioText responses are a mixture of text and SSML, then the concatenated result is treated as SSML; otherwise, the result is treated as either text or SSML as appropriate. The agent designer should ideally use either text or SSML consistently throughout the bot design. * Otherwise, all Text responses are linearly concatenated, and the result is used for output audio synthesis. This approach allows for more sophisticated user experience scenarios, where the text displayed to the user may differ from what is heard. # Returned message. "channel": "A String", # The channel which the response is associated with. Clients can specify the channel via QueryParameters.channel, and only associated channel response will be returned. "conversationSuccess": { # Indicates that the conversation succeeded, i.e., the bot handled the issue that the customer talked to it about. Dialogflow only uses this to determine which conversations should be counted as successful and doesn't process the metadata in this message in any way. Note that Dialogflow also considers conversations that get to the conversation end page as successful even if they don't return ConversationSuccess. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates that the conversation succeeded. * In a webhook response when you determine that you handled the customer issue. # Indicates that the conversation succeeded. "metadata": { # Custom metadata. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. }, }, "endInteraction": { # Indicates that interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. A signal that indicates the interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only when the conversation reaches `END_SESSION` page. It is not supposed to be defined by the user. It's guaranteed that there is at most one such message in each response. }, "knowledgeInfoCard": { # Represents info card response. If the response contains generative knowledge prediction, Dialogflow will return a payload with Infobot Messenger compatible info card. Otherwise, the info card response is skipped. # Represents info card for knowledge answers, to be better rendered in Dialogflow Messenger. }, "liveAgentHandoff": { # Indicates that the conversation should be handed off to a live agent. Dialogflow only uses this to determine which conversations were handed off to a human agent for measurement purposes. What else to do with this signal is up to you and your handoff procedures. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates something went extremely wrong in the conversation. * In a webhook response when you determine that the customer issue can only be handled by a human. # Hands off conversation to a human agent. "metadata": { # Custom metadata for your handoff procedure. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. }, }, "mixedAudio": { # Represents an audio message that is composed of both segments synthesized from the Dialogflow agent prompts and ones hosted externally at the specified URIs. The external URIs are specified via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. 
An audio response message composed of both the synthesized Dialogflow agent responses and responses defined via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. "segments": [ # Segments this audio response is composed of. { # Represents one segment of audio. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this segment can be interrupted by the end user's speech and the client should then start the next Dialogflow request. "audio": "A String", # Raw audio synthesized from the Dialogflow agent's response using the output config specified in the request. "uri": "A String", # Client-specific URI that points to an audio clip accessible to the client. Dialogflow does not impose any validation on it. }, ], }, "outputAudioText": { # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "ssml": "A String", # The SSML text to be synthesized. For more information, see [SSML](/speech/text-to-speech/docs/ssml). "text": "A String", # The raw text to be synthesized. }, "payload": { # Returns a response containing a custom, platform-specific payload. "a_key": "", # Properties of the object. }, "playAudio": { # Specifies an audio clip to be played by the client as part of the response. # Signal that the client should play an audio clip hosted at a client-specific URI. Dialogflow uses this to construct mixed_audio. However, Dialogflow itself does not try to read or process the URI in any way. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "audioUri": "A String", # Required. URI of the audio clip. Dialogflow does not impose any validation on this value. It is specific to the client that reads it. }, "telephonyTransferCall": { # Represents the signal that telles the client to transfer the phone call connected to the agent to a third-party endpoint. # A signal that the client should transfer the phone call connected to this agent to a third-party endpoint. "phoneNumber": "A String", # Transfer the call to a phone number in [E.164 format](https://en.wikipedia.org/wiki/E.164). }, "text": { # The text response message. # Returns a text response. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "text": [ # Required. A collection of text response variants. If multiple variants are defined, only one text response variant is returned at runtime. "A String", ], }, "toolCall": { # Represents a call of a specific tool's action with the specified inputs. # Returns the definition of a tool call that should be executed by the client. "action": "A String", # Required. The name of the tool's action associated with this call. "inputParameters": { # Optional. The action's input parameters. "a_key": "", # Properties of the object. }, "tool": "A String", # Required. The tool associated with this call. 
Format: `projects//locations//agents//tools/`. }, }, }, ], "condition": "A String", # The condition to activate and select this case. Empty means the condition is always true. The condition is evaluated against form parameters or session parameters. See the [conditions reference](https://cloud.google.com/dialogflow/cx/docs/reference/condition). }, ], }, ], "enableGenerativeFallback": True or False, # If the flag is true, the agent will utilize LLM to generate a text response. If LLM generation fails, the defined responses in the fulfillment will be respected. This flag is only useful for fulfillments associated with no-match event handlers. "generators": [ # A list of Generators to be called during this fulfillment. { # Generator settings used by the LLM to generate a text response. "generator": "A String", # Required. The generator to call. Format: `projects//locations//agents//generators/`. "inputParameters": { # Map from placeholder parameter in the Generator to corresponding session parameters. By default, Dialogflow uses the session parameter with the same name to fill in the generator template. e.g. If there is a placeholder parameter `city` in the Generator, Dialogflow default to fill in the `$city` with `$session.params.city`. However, you may choose to fill `$city` with `$session.params.desination-city`. - Map key: parameter ID - Map value: session parameter name "a_key": "A String", }, "outputParameter": "A String", # Required. Output parameter which should contain the generator response. }, ], "messages": [ # The list of rich message responses to present to the user. { # Represents a response message that can be returned by a conversational agent. Response messages are also used for output audio synthesis. The approach is as follows: * If at least one OutputAudioText response is present, then all OutputAudioText responses are linearly concatenated, and the result is used for output audio synthesis. * If the OutputAudioText responses are a mixture of text and SSML, then the concatenated result is treated as SSML; otherwise, the result is treated as either text or SSML as appropriate. The agent designer should ideally use either text or SSML consistently throughout the bot design. * Otherwise, all Text responses are linearly concatenated, and the result is used for output audio synthesis. This approach allows for more sophisticated user experience scenarios, where the text displayed to the user may differ from what is heard. "channel": "A String", # The channel which the response is associated with. Clients can specify the channel via QueryParameters.channel, and only associated channel response will be returned. "conversationSuccess": { # Indicates that the conversation succeeded, i.e., the bot handled the issue that the customer talked to it about. Dialogflow only uses this to determine which conversations should be counted as successful and doesn't process the metadata in this message in any way. Note that Dialogflow also considers conversations that get to the conversation end page as successful even if they don't return ConversationSuccess. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates that the conversation succeeded. * In a webhook response when you determine that you handled the customer issue. # Indicates that the conversation succeeded. "metadata": { # Custom metadata. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. 
}, }, "endInteraction": { # Indicates that interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. A signal that indicates the interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only when the conversation reaches `END_SESSION` page. It is not supposed to be defined by the user. It's guaranteed that there is at most one such message in each response. }, "knowledgeInfoCard": { # Represents info card response. If the response contains generative knowledge prediction, Dialogflow will return a payload with Infobot Messenger compatible info card. Otherwise, the info card response is skipped. # Represents info card for knowledge answers, to be better rendered in Dialogflow Messenger. }, "liveAgentHandoff": { # Indicates that the conversation should be handed off to a live agent. Dialogflow only uses this to determine which conversations were handed off to a human agent for measurement purposes. What else to do with this signal is up to you and your handoff procedures. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates something went extremely wrong in the conversation. * In a webhook response when you determine that the customer issue can only be handled by a human. # Hands off conversation to a human agent. "metadata": { # Custom metadata for your handoff procedure. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. }, }, "mixedAudio": { # Represents an audio message that is composed of both segments synthesized from the Dialogflow agent prompts and ones hosted externally at the specified URIs. The external URIs are specified via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. An audio response message composed of both the synthesized Dialogflow agent responses and responses defined via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. "segments": [ # Segments this audio response is composed of. { # Represents one segment of audio. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this segment can be interrupted by the end user's speech and the client should then start the next Dialogflow request. "audio": "A String", # Raw audio synthesized from the Dialogflow agent's response using the output config specified in the request. "uri": "A String", # Client-specific URI that points to an audio clip accessible to the client. Dialogflow does not impose any validation on it. }, ], }, "outputAudioText": { # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "ssml": "A String", # The SSML text to be synthesized. For more information, see [SSML](/speech/text-to-speech/docs/ssml). "text": "A String", # The raw text to be synthesized. }, "payload": { # Returns a response containing a custom, platform-specific payload. "a_key": "", # Properties of the object. 
}, "playAudio": { # Specifies an audio clip to be played by the client as part of the response. # Signal that the client should play an audio clip hosted at a client-specific URI. Dialogflow uses this to construct mixed_audio. However, Dialogflow itself does not try to read or process the URI in any way. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "audioUri": "A String", # Required. URI of the audio clip. Dialogflow does not impose any validation on this value. It is specific to the client that reads it. }, "telephonyTransferCall": { # Represents the signal that telles the client to transfer the phone call connected to the agent to a third-party endpoint. # A signal that the client should transfer the phone call connected to this agent to a third-party endpoint. "phoneNumber": "A String", # Transfer the call to a phone number in [E.164 format](https://en.wikipedia.org/wiki/E.164). }, "text": { # The text response message. # Returns a text response. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "text": [ # Required. A collection of text response variants. If multiple variants are defined, only one text response variant is returned at runtime. "A String", ], }, "toolCall": { # Represents a call of a specific tool's action with the specified inputs. # Returns the definition of a tool call that should be executed by the client. "action": "A String", # Required. The name of the tool's action associated with this call. "inputParameters": { # Optional. The action's input parameters. "a_key": "", # Properties of the object. }, "tool": "A String", # Required. The tool associated with this call. Format: `projects//locations//agents//tools/`. }, }, ], "returnPartialResponses": True or False, # Whether Dialogflow should return currently queued fulfillment response messages in streaming APIs. If a webhook is specified, it happens before Dialogflow invokes webhook. Warning: 1) This flag only affects streaming API. Responses are still queued and returned once in non-streaming API. 2) The flag can be enabled in any fulfillment but only the first 3 partial responses will be returned. You may only want to apply it to fulfillments that have slow webhooks. "setParameterActions": [ # Set parameter values before executing the webhook. { # Setting a parameter value. "parameter": "A String", # Display name of the parameter. "value": "", # The new value of the parameter. A null value clears the parameter. }, ], "tag": "A String", # The value of this field will be populated in the WebhookRequest `fulfillmentInfo.tag` field by Dialogflow when the associated webhook is called. The tag is typically used by the webhook service to identify which fulfillment is being called, but it could be used for other purposes. This field is required if `webhook` is specified. "webhook": "A String", # The webhook to call. Format: `projects//locations//agents//webhooks/`. }, }, "lifecycleHandler": { # A handler that is triggered on the specific lifecycle_stage of the playbook execution. # A handler triggered during specific lifecycle of the playbook execution. "condition": "A String", # Optional. The condition that must be satisfied to trigger this handler. 
"fulfillment": { # A fulfillment can do one or more of the following actions at the same time: * Generate rich message responses. * Set parameter values. * Call the webhook. Fulfillments can be called at various stages in the Page or Form lifecycle. For example, when a DetectIntentRequest drives a session to enter a new page, the page's entry fulfillment can add a static response to the QueryResult in the returning DetectIntentResponse, call the webhook (for example, to load user data from a database), or both. # Required. The fulfillment to call when this handler is triggered. "advancedSettings": { # Hierarchical advanced settings for agent/flow/page/fulfillment/parameter. Settings exposed at lower level overrides the settings exposed at higher level. Overriding occurs at the sub-setting level. For example, the playback_interruption_settings at fulfillment level only overrides the playback_interruption_settings at the agent level, leaving other settings at the agent level unchanged. DTMF settings does not override each other. DTMF settings set at different levels define DTMF detections running in parallel. Hierarchy: Agent->Flow->Page->Fulfillment/Parameter. # Hierarchical advanced settings for this fulfillment. The settings exposed at the lower level overrides the settings exposed at the higher level. "audioExportGcsDestination": { # Google Cloud Storage location for a Dialogflow operation that writes or exports objects (e.g. exported agent or transcripts) outside of Dialogflow. # If present, incoming audio is exported by Dialogflow to the configured Google Cloud Storage destination. Exposed at the following levels: - Agent level - Flow level "uri": "A String", # Required. The Google Cloud Storage URI for the exported objects. A URI is of the form: `gs://bucket/object-name-or-prefix` Whether a full object name, or just a prefix, its usage depends on the Dialogflow operation. }, "dtmfSettings": { # Define behaviors for DTMF (dual tone multi frequency). # Settings for DTMF. Exposed at the following levels: - Agent level - Flow level - Page level - Parameter level. "enabled": True or False, # If true, incoming audio is processed for DTMF (dual tone multi frequency) events. For example, if the caller presses a button on their telephone keypad and DTMF processing is enabled, Dialogflow will detect the event (e.g. a "3" was pressed) in the incoming audio and pass the event to the bot to drive business logic (e.g. when 3 is pressed, return the account balance). "endpointingTimeoutDuration": "A String", # Endpoint timeout setting for matching dtmf input to regex. "finishDigit": "A String", # The digit that terminates a DTMF digit sequence. "interdigitTimeoutDuration": "A String", # Interdigit timeout setting for matching dtmf input to regex. "maxDigits": 42, # Max length of DTMF digits. }, "loggingSettings": { # Define behaviors on logging. # Settings for logging. Settings for Dialogflow History, Contact Center messages, StackDriver logs, and speech logging. Exposed at the following levels: - Agent level. "enableConsentBasedRedaction": True or False, # Enables consent-based end-user input redaction, if true, a pre-defined session parameter `$session.params.conversation-redaction` will be used to determine if the utterance should be redacted. "enableInteractionLogging": True or False, # Enables DF Interaction logging. "enableStackdriverLogging": True or False, # Enables Google Cloud Logging. }, "speechSettings": { # Define behaviors of speech to text detection. 
# Settings for speech to text detection. Exposed at the following levels: - Agent level - Flow level - Page level - Parameter level "endpointerSensitivity": 42, # Sensitivity of the speech model that detects the end of speech. Scale from 0 to 100. "models": { # Mapping from language to Speech-to-Text model. The mapped Speech-to-Text model will be selected for requests from its corresponding language. For more information, see [Speech models](https://cloud.google.com/dialogflow/cx/docs/concept/speech-models). "a_key": "A String", }, "noSpeechTimeout": "A String", # Timeout before detecting no speech. "useTimeoutBasedEndpointing": True or False, # Use timeout based endpointing, interpreting endpointer sensitivity as seconds of timeout value. }, }, "conditionalCases": [ # Conditional cases for this fulfillment. { # A list of cascading if-else conditions. Cases are mutually exclusive. The first one with a matching condition is selected, all the rest ignored. "cases": [ # A list of cascading if-else conditions. { # Each case has a Boolean condition. When it is evaluated to be True, the corresponding messages will be selected and evaluated recursively. "caseContent": [ # A list of case content. { # The list of messages or conditional cases to activate for this case. "additionalCases": # Object with schema name: GoogleCloudDialogflowCxV3beta1FulfillmentConditionalCases # Additional cases to be evaluated. "message": { # Represents a response message that can be returned by a conversational agent. Response messages are also used for output audio synthesis. The approach is as follows: * If at least one OutputAudioText response is present, then all OutputAudioText responses are linearly concatenated, and the result is used for output audio synthesis. * If the OutputAudioText responses are a mixture of text and SSML, then the concatenated result is treated as SSML; otherwise, the result is treated as either text or SSML as appropriate. The agent designer should ideally use either text or SSML consistently throughout the bot design. * Otherwise, all Text responses are linearly concatenated, and the result is used for output audio synthesis. This approach allows for more sophisticated user experience scenarios, where the text displayed to the user may differ from what is heard. # Returned message. "channel": "A String", # The channel which the response is associated with. Clients can specify the channel via QueryParameters.channel, and only associated channel response will be returned. "conversationSuccess": { # Indicates that the conversation succeeded, i.e., the bot handled the issue that the customer talked to it about. Dialogflow only uses this to determine which conversations should be counted as successful and doesn't process the metadata in this message in any way. Note that Dialogflow also considers conversations that get to the conversation end page as successful even if they don't return ConversationSuccess. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates that the conversation succeeded. * In a webhook response when you determine that you handled the customer issue. # Indicates that the conversation succeeded. "metadata": { # Custom metadata. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. }, }, "endInteraction": { # Indicates that interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. 
A signal that indicates the interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only when the conversation reaches `END_SESSION` page. It is not supposed to be defined by the user. It's guaranteed that there is at most one such message in each response. }, "knowledgeInfoCard": { # Represents info card response. If the response contains generative knowledge prediction, Dialogflow will return a payload with Infobot Messenger compatible info card. Otherwise, the info card response is skipped. # Represents info card for knowledge answers, to be better rendered in Dialogflow Messenger. }, "liveAgentHandoff": { # Indicates that the conversation should be handed off to a live agent. Dialogflow only uses this to determine which conversations were handed off to a human agent for measurement purposes. What else to do with this signal is up to you and your handoff procedures. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates something went extremely wrong in the conversation. * In a webhook response when you determine that the customer issue can only be handled by a human. # Hands off conversation to a human agent. "metadata": { # Custom metadata for your handoff procedure. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. }, }, "mixedAudio": { # Represents an audio message that is composed of both segments synthesized from the Dialogflow agent prompts and ones hosted externally at the specified URIs. The external URIs are specified via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. An audio response message composed of both the synthesized Dialogflow agent responses and responses defined via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. "segments": [ # Segments this audio response is composed of. { # Represents one segment of audio. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this segment can be interrupted by the end user's speech and the client should then start the next Dialogflow request. "audio": "A String", # Raw audio synthesized from the Dialogflow agent's response using the output config specified in the request. "uri": "A String", # Client-specific URI that points to an audio clip accessible to the client. Dialogflow does not impose any validation on it. }, ], }, "outputAudioText": { # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "ssml": "A String", # The SSML text to be synthesized. For more information, see [SSML](/speech/text-to-speech/docs/ssml). "text": "A String", # The raw text to be synthesized. }, "payload": { # Returns a response containing a custom, platform-specific payload. "a_key": "", # Properties of the object. }, "playAudio": { # Specifies an audio clip to be played by the client as part of the response. # Signal that the client should play an audio clip hosted at a client-specific URI. Dialogflow uses this to construct mixed_audio. 
However, Dialogflow itself does not try to read or process the URI in any way. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "audioUri": "A String", # Required. URI of the audio clip. Dialogflow does not impose any validation on this value. It is specific to the client that reads it. }, "telephonyTransferCall": { # Represents the signal that telles the client to transfer the phone call connected to the agent to a third-party endpoint. # A signal that the client should transfer the phone call connected to this agent to a third-party endpoint. "phoneNumber": "A String", # Transfer the call to a phone number in [E.164 format](https://en.wikipedia.org/wiki/E.164). }, "text": { # The text response message. # Returns a text response. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "text": [ # Required. A collection of text response variants. If multiple variants are defined, only one text response variant is returned at runtime. "A String", ], }, "toolCall": { # Represents a call of a specific tool's action with the specified inputs. # Returns the definition of a tool call that should be executed by the client. "action": "A String", # Required. The name of the tool's action associated with this call. "inputParameters": { # Optional. The action's input parameters. "a_key": "", # Properties of the object. }, "tool": "A String", # Required. The tool associated with this call. Format: `projects//locations//agents//tools/`. }, }, }, ], "condition": "A String", # The condition to activate and select this case. Empty means the condition is always true. The condition is evaluated against form parameters or session parameters. See the [conditions reference](https://cloud.google.com/dialogflow/cx/docs/reference/condition). }, ], }, ], "enableGenerativeFallback": True or False, # If the flag is true, the agent will utilize LLM to generate a text response. If LLM generation fails, the defined responses in the fulfillment will be respected. This flag is only useful for fulfillments associated with no-match event handlers. "generators": [ # A list of Generators to be called during this fulfillment. { # Generator settings used by the LLM to generate a text response. "generator": "A String", # Required. The generator to call. Format: `projects//locations//agents//generators/`. "inputParameters": { # Map from placeholder parameter in the Generator to corresponding session parameters. By default, Dialogflow uses the session parameter with the same name to fill in the generator template. e.g. If there is a placeholder parameter `city` in the Generator, Dialogflow default to fill in the `$city` with `$session.params.city`. However, you may choose to fill `$city` with `$session.params.desination-city`. - Map key: parameter ID - Map value: session parameter name "a_key": "A String", }, "outputParameter": "A String", # Required. Output parameter which should contain the generator response. }, ], "messages": [ # The list of rich message responses to present to the user. { # Represents a response message that can be returned by a conversational agent. Response messages are also used for output audio synthesis. 
The approach is as follows: * If at least one OutputAudioText response is present, then all OutputAudioText responses are linearly concatenated, and the result is used for output audio synthesis. * If the OutputAudioText responses are a mixture of text and SSML, then the concatenated result is treated as SSML; otherwise, the result is treated as either text or SSML as appropriate. The agent designer should ideally use either text or SSML consistently throughout the bot design. * Otherwise, all Text responses are linearly concatenated, and the result is used for output audio synthesis. This approach allows for more sophisticated user experience scenarios, where the text displayed to the user may differ from what is heard. "channel": "A String", # The channel which the response is associated with. Clients can specify the channel via QueryParameters.channel, and only associated channel response will be returned. "conversationSuccess": { # Indicates that the conversation succeeded, i.e., the bot handled the issue that the customer talked to it about. Dialogflow only uses this to determine which conversations should be counted as successful and doesn't process the metadata in this message in any way. Note that Dialogflow also considers conversations that get to the conversation end page as successful even if they don't return ConversationSuccess. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates that the conversation succeeded. * In a webhook response when you determine that you handled the customer issue. # Indicates that the conversation succeeded. "metadata": { # Custom metadata. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. }, }, "endInteraction": { # Indicates that interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. A signal that indicates the interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only when the conversation reaches `END_SESSION` page. It is not supposed to be defined by the user. It's guaranteed that there is at most one such message in each response. }, "knowledgeInfoCard": { # Represents info card response. If the response contains generative knowledge prediction, Dialogflow will return a payload with Infobot Messenger compatible info card. Otherwise, the info card response is skipped. # Represents info card for knowledge answers, to be better rendered in Dialogflow Messenger. }, "liveAgentHandoff": { # Indicates that the conversation should be handed off to a live agent. Dialogflow only uses this to determine which conversations were handed off to a human agent for measurement purposes. What else to do with this signal is up to you and your handoff procedures. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates something went extremely wrong in the conversation. * In a webhook response when you determine that the customer issue can only be handled by a human. # Hands off conversation to a human agent. "metadata": { # Custom metadata for your handoff procedure. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. }, }, "mixedAudio": { # Represents an audio message that is composed of both segments synthesized from the Dialogflow agent prompts and ones hosted externally at the specified URIs. The external URIs are specified via play_audio. 
This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. An audio response message composed of both the synthesized Dialogflow agent responses and responses defined via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. "segments": [ # Segments this audio response is composed of. { # Represents one segment of audio. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this segment can be interrupted by the end user's speech and the client should then start the next Dialogflow request. "audio": "A String", # Raw audio synthesized from the Dialogflow agent's response using the output config specified in the request. "uri": "A String", # Client-specific URI that points to an audio clip accessible to the client. Dialogflow does not impose any validation on it. }, ], }, "outputAudioText": { # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "ssml": "A String", # The SSML text to be synthesized. For more information, see [SSML](/speech/text-to-speech/docs/ssml). "text": "A String", # The raw text to be synthesized. }, "payload": { # Returns a response containing a custom, platform-specific payload. "a_key": "", # Properties of the object. }, "playAudio": { # Specifies an audio clip to be played by the client as part of the response. # Signal that the client should play an audio clip hosted at a client-specific URI. Dialogflow uses this to construct mixed_audio. However, Dialogflow itself does not try to read or process the URI in any way. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "audioUri": "A String", # Required. URI of the audio clip. Dialogflow does not impose any validation on this value. It is specific to the client that reads it. }, "telephonyTransferCall": { # Represents the signal that telles the client to transfer the phone call connected to the agent to a third-party endpoint. # A signal that the client should transfer the phone call connected to this agent to a third-party endpoint. "phoneNumber": "A String", # Transfer the call to a phone number in [E.164 format](https://en.wikipedia.org/wiki/E.164). }, "text": { # The text response message. # Returns a text response. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "text": [ # Required. A collection of text response variants. If multiple variants are defined, only one text response variant is returned at runtime. "A String", ], }, "toolCall": { # Represents a call of a specific tool's action with the specified inputs. # Returns the definition of a tool call that should be executed by the client. "action": "A String", # Required. The name of the tool's action associated with this call. "inputParameters": { # Optional. The action's input parameters. 
"a_key": "", # Properties of the object. }, "tool": "A String", # Required. The tool associated with this call. Format: `projects//locations//agents//tools/`. }, }, ], "returnPartialResponses": True or False, # Whether Dialogflow should return currently queued fulfillment response messages in streaming APIs. If a webhook is specified, it happens before Dialogflow invokes webhook. Warning: 1) This flag only affects streaming API. Responses are still queued and returned once in non-streaming API. 2) The flag can be enabled in any fulfillment but only the first 3 partial responses will be returned. You may only want to apply it to fulfillments that have slow webhooks. "setParameterActions": [ # Set parameter values before executing the webhook. { # Setting a parameter value. "parameter": "A String", # Display name of the parameter. "value": "", # The new value of the parameter. A null value clears the parameter. }, ], "tag": "A String", # The value of this field will be populated in the WebhookRequest `fulfillmentInfo.tag` field by Dialogflow when the associated webhook is called. The tag is typically used by the webhook service to identify which fulfillment is being called, but it could be used for other purposes. This field is required if `webhook` is specified. "webhook": "A String", # The webhook to call. Format: `projects//locations//agents//webhooks/`. }, "lifecycleStage": "A String", # Required. The name of the lifecycle stage that triggers this handler. Supported values: * `playbook-start` * `pre-action-selection` * `pre-action-execution` }, }, ], "inputParameterDefinitions": [ # Optional. Defined structured input parameters for this playbook. { # Defines the properties of a parameter. Used to define parameters used in the agent and the input / output parameters for each fulfillment. "description": "A String", # Human-readable description of the parameter. Limited to 300 characters. "name": "A String", # Required. Name of parameter. "type": "A String", # Type of parameter. "typeSchema": { # Encapsulates different type schema variations: either a reference to an a schema that's already defined by a tool, or an inline definition. # Optional. Type schema of parameter. "inlineSchema": { # A type schema object that's specified inline. # Set if this is an inline schema definition. "items": # Object with schema name: GoogleCloudDialogflowCxV3beta1TypeSchema # Schema of the elements if this is an ARRAY type. "type": "A String", # Data type of the schema. }, "schemaReference": { # A reference to the schema of an existing tool. # Set if this is a schema reference. "schema": "A String", # The name of the schema. "tool": "A String", # The tool that contains this schema definition. Format: `projects//locations//agents//tools/`. }, }, }, ], "instruction": { # Message of the Instruction of the playbook. # Instruction to accomplish target goal. "guidelines": "A String", # General guidelines for the playbook. These are unstructured instructions that are not directly part of the goal, e.g. "Always be polite". It's valid for this text to be long and used instead of steps altogether. "steps": [ # Ordered list of step by step execution instructions to accomplish target goal. { # Message of single step execution. "steps": [ # Sub-processing needed to execute the current step. # Object with schema name: GoogleCloudDialogflowCxV3beta1PlaybookStep ], "text": "A String", # Step instruction in text format. }, ], }, "llmModelSettings": { # Settings for LLM models. # Optional. Llm model settings for the playbook. 
"model": "A String", # The selected LLM model. "promptText": "A String", # The custom prompt to use. }, "name": "A String", # The unique identifier of the playbook. Format: `projects//locations//agents//playbooks/`. "outputParameterDefinitions": [ # Optional. Defined structured output parameters for this playbook. { # Defines the properties of a parameter. Used to define parameters used in the agent and the input / output parameters for each fulfillment. "description": "A String", # Human-readable description of the parameter. Limited to 300 characters. "name": "A String", # Required. Name of parameter. "type": "A String", # Type of parameter. "typeSchema": { # Encapsulates different type schema variations: either a reference to an a schema that's already defined by a tool, or an inline definition. # Optional. Type schema of parameter. "inlineSchema": { # A type schema object that's specified inline. # Set if this is an inline schema definition. "items": # Object with schema name: GoogleCloudDialogflowCxV3beta1TypeSchema # Schema of the elements if this is an ARRAY type. "type": "A String", # Data type of the schema. }, "schemaReference": { # A reference to the schema of an existing tool. # Set if this is a schema reference. "schema": "A String", # The name of the schema. "tool": "A String", # The tool that contains this schema definition. Format: `projects//locations//agents//tools/`. }, }, }, ], "playbookType": "A String", # Optional. Type of the playbook. "referencedFlows": [ # Output only. The resource name of flows referenced by the current playbook in the instructions. "A String", ], "referencedPlaybooks": [ # Output only. The resource name of other playbooks referenced by the current playbook in the instructions. "A String", ], "referencedTools": [ # Optional. The resource name of tools referenced by the current playbook in the instructions. If not provided explicitly, they are will be implied using the tool being referenced in goal and steps. "A String", ], "speechSettings": { # Define behaviors of speech to text detection. # Optional. Playbook level Settings for speech to text detection. "endpointerSensitivity": 42, # Sensitivity of the speech model that detects the end of speech. Scale from 0 to 100. "models": { # Mapping from language to Speech-to-Text model. The mapped Speech-to-Text model will be selected for requests from its corresponding language. For more information, see [Speech models](https://cloud.google.com/dialogflow/cx/docs/concept/speech-models). "a_key": "A String", }, "noSpeechTimeout": "A String", # Timeout before detecting no speech. "useTimeoutBasedEndpointing": True or False, # Use timeout based endpointing, interpreting endpointer sensitivity as seconds of timeout value. }, "tokenCount": "A String", # Output only. Estimated number of tokes current playbook takes when sent to the LLM. "updateTime": "A String", # Output only. Last time the playbook version was updated. }
import_(parent, body=None, x__xgafv=None)
Imports the specified playbook to the specified agent from a binary file. Args: parent: string, Required. The agent to import the playbook into. Format: `projects//locations//agents/`. (required) body: object, The request body. The object takes the form of: { # The request message for Playbooks.ImportPlaybook. "importStrategy": { # The playbook import strategy used for resource conflict resolution associated with an ImportPlaybookRequest. # Optional. Specifies the import strategy used when resolving resource conflicts. "mainPlaybookImportStrategy": "A String", # Optional. Specifies the import strategy used when resolving conflicts with the main playbook. If not specified, 'CREATE_NEW' is assumed. "nestedResourceImportStrategy": "A String", # Optional. Specifies the import strategy used when resolving referenced playbook/flow conflicts. If not specified, 'CREATE_NEW' is assumed. "toolImportStrategy": "A String", # Optional. Specifies the import strategy used when resolving tool conflicts. If not specified, 'CREATE_NEW' is assumed. This will be applied after the main playbook and nested resource import strategies, meaning if the playbook that references the tool is skipped, the tool will also be skipped. }, "playbookContent": "A String", # Uncompressed raw byte content for playbook. "playbookUri": "A String", # [Dialogflow access control] (https://cloud.google.com/dialogflow/cx/docs/concept/access-control#storage). } x__xgafv: string, V1 error format. Allowed values 1 - v1 error format 2 - v2 error format Returns: An object of the form: { # This resource represents a long-running operation that is the result of a network API call. "done": True or False, # If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available. "error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # The error result of the operation in case of failure or cancellation. "code": 42, # The status code, which should be an enum value of google.rpc.Code. "details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use. { "a_key": "", # Properties of the object. Contains field @type with type URL. }, ], "message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client. }, "metadata": { # Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any. "a_key": "", # Properties of the object. Contains field @type with type URL. }, "name": "A String", # The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`. "response": { # The normal, successful response of the operation. 
If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`. "a_key": "", # Properties of the object. Contains field @type with type URL. }, }
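Because import_() returns a long-running operation, a client typically polls the operations resource until `done` is true. The sketch below reuses the client setup shown earlier; the file name and resource IDs are placeholders, and it assumes the raw-byte `playbookContent` field is supplied as a base64-encoded string (the usual JSON encoding for bytes fields). Supplying `playbookUri` instead of `playbookContent` is the Cloud Storage alternative.

import base64
import time

from googleapiclient.discovery import build

service = build("dialogflow", "v3beta1")
parent = "projects/my-project/locations/global/agents/my-agent-id"

# Read a previously exported playbook and base64-encode it for the JSON body.
with open("playbook_export.bin", "rb") as fh:
    playbook_content = base64.b64encode(fh.read()).decode("utf-8")

body = {
    "playbookContent": playbook_content,
    "importStrategy": {"mainPlaybookImportStrategy": "CREATE_NEW"},
}

operation = (
    service.projects().locations().agents().playbooks()
    .import_(parent=parent, body=body)
    .execute()
)

# Poll the operations resource exposed by the same API until the import
# completes; `error` or `response` is then populated as described above.
while not operation.get("done"):
    time.sleep(2)
    operation = (
        service.projects().locations().operations()
        .get(name=operation["name"])
        .execute()
    )

if "error" in operation:
    raise RuntimeError(operation["error"].get("message", "Playbook import failed"))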
list(parent, pageSize=None, pageToken=None, x__xgafv=None)
Returns a list of playbooks in the specified agent. Args: parent: string, Required. The agent to list playbooks from. Format: `projects//locations//agents/`. (required) pageSize: integer, The maximum number of items to return in a single page. By default 100 and at most 1000. pageToken: string, The next_page_token value returned from a previous list request. x__xgafv: string, V1 error format. Allowed values 1 - v1 error format 2 - v2 error format Returns: An object of the form: { # The response message for Playbooks.ListPlaybooks. "nextPageToken": "A String", # Token to retrieve the next page of results, or empty if there are no more results in the list. "playbooks": [ # The list of playbooks. There will be a maximum number of items returned based on the page_size field in the request. { # Playbook is the basic building block to instruct the LLM how to execute a certain task. A playbook consists of a goal to accomplish, an optional list of step by step instructions (the step instruction may refers to name of the custom or default plugin tools to use) to perform the task, a list of contextual input data to be passed in at the beginning of the invoked, and a list of output parameters to store the playbook result. "createTime": "A String", # Output only. The timestamp of initial playbook creation. "displayName": "A String", # Required. The human-readable name of the playbook, unique within an agent. "goal": "A String", # Required. High level description of the goal the playbook intend to accomplish. A goal should be concise since it's visible to other playbooks that may reference this playbook. "handlers": [ # Optional. A list of registered handlers to execute based on the specified triggers. { # Handler can be used to define custom logic to be executed based on the user-specified triggers. "eventHandler": { # A handler that is triggered by the specified event. # A handler triggered by event. "condition": "A String", # Optional. The condition that must be satisfied to trigger this handler. "event": "A String", # Required. The name of the event that triggers this handler. "fulfillment": { # A fulfillment can do one or more of the following actions at the same time: * Generate rich message responses. * Set parameter values. * Call the webhook. Fulfillments can be called at various stages in the Page or Form lifecycle. For example, when a DetectIntentRequest drives a session to enter a new page, the page's entry fulfillment can add a static response to the QueryResult in the returning DetectIntentResponse, call the webhook (for example, to load user data from a database), or both. # Required. The fulfillment to call when the event occurs. "advancedSettings": { # Hierarchical advanced settings for agent/flow/page/fulfillment/parameter. Settings exposed at lower level overrides the settings exposed at higher level. Overriding occurs at the sub-setting level. For example, the playback_interruption_settings at fulfillment level only overrides the playback_interruption_settings at the agent level, leaving other settings at the agent level unchanged. DTMF settings does not override each other. DTMF settings set at different levels define DTMF detections running in parallel. Hierarchy: Agent->Flow->Page->Fulfillment/Parameter. # Hierarchical advanced settings for this fulfillment. The settings exposed at the lower level overrides the settings exposed at the higher level. "audioExportGcsDestination": { # Google Cloud Storage location for a Dialogflow operation that writes or exports objects (e.g. 
exported agent or transcripts) outside of Dialogflow. # If present, incoming audio is exported by Dialogflow to the configured Google Cloud Storage destination. Exposed at the following levels: - Agent level - Flow level "uri": "A String", # Required. The Google Cloud Storage URI for the exported objects. A URI is of the form: `gs://bucket/object-name-or-prefix` Whether a full object name, or just a prefix, its usage depends on the Dialogflow operation. }, "dtmfSettings": { # Define behaviors for DTMF (dual tone multi frequency). # Settings for DTMF. Exposed at the following levels: - Agent level - Flow level - Page level - Parameter level. "enabled": True or False, # If true, incoming audio is processed for DTMF (dual tone multi frequency) events. For example, if the caller presses a button on their telephone keypad and DTMF processing is enabled, Dialogflow will detect the event (e.g. a "3" was pressed) in the incoming audio and pass the event to the bot to drive business logic (e.g. when 3 is pressed, return the account balance). "endpointingTimeoutDuration": "A String", # Endpoint timeout setting for matching dtmf input to regex. "finishDigit": "A String", # The digit that terminates a DTMF digit sequence. "interdigitTimeoutDuration": "A String", # Interdigit timeout setting for matching dtmf input to regex. "maxDigits": 42, # Max length of DTMF digits. }, "loggingSettings": { # Define behaviors on logging. # Settings for logging. Settings for Dialogflow History, Contact Center messages, StackDriver logs, and speech logging. Exposed at the following levels: - Agent level. "enableConsentBasedRedaction": True or False, # Enables consent-based end-user input redaction, if true, a pre-defined session parameter `$session.params.conversation-redaction` will be used to determine if the utterance should be redacted. "enableInteractionLogging": True or False, # Enables DF Interaction logging. "enableStackdriverLogging": True or False, # Enables Google Cloud Logging. }, "speechSettings": { # Define behaviors of speech to text detection. # Settings for speech to text detection. Exposed at the following levels: - Agent level - Flow level - Page level - Parameter level "endpointerSensitivity": 42, # Sensitivity of the speech model that detects the end of speech. Scale from 0 to 100. "models": { # Mapping from language to Speech-to-Text model. The mapped Speech-to-Text model will be selected for requests from its corresponding language. For more information, see [Speech models](https://cloud.google.com/dialogflow/cx/docs/concept/speech-models). "a_key": "A String", }, "noSpeechTimeout": "A String", # Timeout before detecting no speech. "useTimeoutBasedEndpointing": True or False, # Use timeout based endpointing, interpreting endpointer sensitivity as seconds of timeout value. }, }, "conditionalCases": [ # Conditional cases for this fulfillment. { # A list of cascading if-else conditions. Cases are mutually exclusive. The first one with a matching condition is selected, all the rest ignored. "cases": [ # A list of cascading if-else conditions. { # Each case has a Boolean condition. When it is evaluated to be True, the corresponding messages will be selected and evaluated recursively. "caseContent": [ # A list of case content. { # The list of messages or conditional cases to activate for this case. "additionalCases": # Object with schema name: GoogleCloudDialogflowCxV3beta1FulfillmentConditionalCases # Additional cases to be evaluated. 
"message": { # Represents a response message that can be returned by a conversational agent. Response messages are also used for output audio synthesis. The approach is as follows: * If at least one OutputAudioText response is present, then all OutputAudioText responses are linearly concatenated, and the result is used for output audio synthesis. * If the OutputAudioText responses are a mixture of text and SSML, then the concatenated result is treated as SSML; otherwise, the result is treated as either text or SSML as appropriate. The agent designer should ideally use either text or SSML consistently throughout the bot design. * Otherwise, all Text responses are linearly concatenated, and the result is used for output audio synthesis. This approach allows for more sophisticated user experience scenarios, where the text displayed to the user may differ from what is heard. # Returned message. "channel": "A String", # The channel which the response is associated with. Clients can specify the channel via QueryParameters.channel, and only associated channel response will be returned. "conversationSuccess": { # Indicates that the conversation succeeded, i.e., the bot handled the issue that the customer talked to it about. Dialogflow only uses this to determine which conversations should be counted as successful and doesn't process the metadata in this message in any way. Note that Dialogflow also considers conversations that get to the conversation end page as successful even if they don't return ConversationSuccess. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates that the conversation succeeded. * In a webhook response when you determine that you handled the customer issue. # Indicates that the conversation succeeded. "metadata": { # Custom metadata. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. }, }, "endInteraction": { # Indicates that interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. A signal that indicates the interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only when the conversation reaches `END_SESSION` page. It is not supposed to be defined by the user. It's guaranteed that there is at most one such message in each response. }, "knowledgeInfoCard": { # Represents info card response. If the response contains generative knowledge prediction, Dialogflow will return a payload with Infobot Messenger compatible info card. Otherwise, the info card response is skipped. # Represents info card for knowledge answers, to be better rendered in Dialogflow Messenger. }, "liveAgentHandoff": { # Indicates that the conversation should be handed off to a live agent. Dialogflow only uses this to determine which conversations were handed off to a human agent for measurement purposes. What else to do with this signal is up to you and your handoff procedures. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates something went extremely wrong in the conversation. * In a webhook response when you determine that the customer issue can only be handled by a human. # Hands off conversation to a human agent. "metadata": { # Custom metadata for your handoff procedure. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. 
}, }, "mixedAudio": { # Represents an audio message that is composed of both segments synthesized from the Dialogflow agent prompts and ones hosted externally at the specified URIs. The external URIs are specified via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. An audio response message composed of both the synthesized Dialogflow agent responses and responses defined via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. "segments": [ # Segments this audio response is composed of. { # Represents one segment of audio. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this segment can be interrupted by the end user's speech and the client should then start the next Dialogflow request. "audio": "A String", # Raw audio synthesized from the Dialogflow agent's response using the output config specified in the request. "uri": "A String", # Client-specific URI that points to an audio clip accessible to the client. Dialogflow does not impose any validation on it. }, ], }, "outputAudioText": { # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "ssml": "A String", # The SSML text to be synthesized. For more information, see [SSML](/speech/text-to-speech/docs/ssml). "text": "A String", # The raw text to be synthesized. }, "payload": { # Returns a response containing a custom, platform-specific payload. "a_key": "", # Properties of the object. }, "playAudio": { # Specifies an audio clip to be played by the client as part of the response. # Signal that the client should play an audio clip hosted at a client-specific URI. Dialogflow uses this to construct mixed_audio. However, Dialogflow itself does not try to read or process the URI in any way. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "audioUri": "A String", # Required. URI of the audio clip. Dialogflow does not impose any validation on this value. It is specific to the client that reads it. }, "telephonyTransferCall": { # Represents the signal that telles the client to transfer the phone call connected to the agent to a third-party endpoint. # A signal that the client should transfer the phone call connected to this agent to a third-party endpoint. "phoneNumber": "A String", # Transfer the call to a phone number in [E.164 format](https://en.wikipedia.org/wiki/E.164). }, "text": { # The text response message. # Returns a text response. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "text": [ # Required. A collection of text response variants. If multiple variants are defined, only one text response variant is returned at runtime. "A String", ], }, "toolCall": { # Represents a call of a specific tool's action with the specified inputs. 
# Returns the definition of a tool call that should be executed by the client. "action": "A String", # Required. The name of the tool's action associated with this call. "inputParameters": { # Optional. The action's input parameters. "a_key": "", # Properties of the object. }, "tool": "A String", # Required. The tool associated with this call. Format: `projects//locations//agents//tools/`. }, }, }, ], "condition": "A String", # The condition to activate and select this case. Empty means the condition is always true. The condition is evaluated against form parameters or session parameters. See the [conditions reference](https://cloud.google.com/dialogflow/cx/docs/reference/condition). }, ], }, ], "enableGenerativeFallback": True or False, # If the flag is true, the agent will utilize the LLM to generate a text response. If LLM generation fails, the defined responses in the fulfillment will be respected. This flag is only useful for fulfillments associated with no-match event handlers. "generators": [ # A list of Generators to be called during this fulfillment. { # Generator settings used by the LLM to generate a text response. "generator": "A String", # Required. The generator to call. Format: `projects//locations//agents//generators/`. "inputParameters": { # Map from placeholder parameter in the Generator to corresponding session parameters. By default, Dialogflow uses the session parameter with the same name to fill in the generator template. e.g. If there is a placeholder parameter `city` in the Generator, Dialogflow defaults to filling in `$city` with `$session.params.city`. However, you may choose to fill `$city` with `$session.params.destination-city`. - Map key: parameter ID - Map value: session parameter name "a_key": "A String", }, "outputParameter": "A String", # Required. Output parameter which should contain the generator response. }, ], "messages": [ # The list of rich message responses to present to the user. { # Represents a response message that can be returned by a conversational agent. Response messages are also used for output audio synthesis. The approach is as follows: * If at least one OutputAudioText response is present, then all OutputAudioText responses are linearly concatenated, and the result is used for output audio synthesis. * If the OutputAudioText responses are a mixture of text and SSML, then the concatenated result is treated as SSML; otherwise, the result is treated as either text or SSML as appropriate. The agent designer should ideally use either text or SSML consistently throughout the bot design. * Otherwise, all Text responses are linearly concatenated, and the result is used for output audio synthesis. This approach allows for more sophisticated user experience scenarios, where the text displayed to the user may differ from what is heard. "channel": "A String", # The channel which the response is associated with. Clients can specify the channel via QueryParameters.channel, and only associated channel response will be returned. "conversationSuccess": { # Indicates that the conversation succeeded, i.e., the bot handled the issue that the customer talked to it about. Dialogflow only uses this to determine which conversations should be counted as successful and doesn't process the metadata in this message in any way. Note that Dialogflow also considers conversations that get to the conversation end page as successful even if they don't return ConversationSuccess.
You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates that the conversation succeeded. * In a webhook response when you determine that you handled the customer issue. # Indicates that the conversation succeeded. "metadata": { # Custom metadata. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. }, }, "endInteraction": { # Indicates that interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. A signal that indicates the interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only when the conversation reaches `END_SESSION` page. It is not supposed to be defined by the user. It's guaranteed that there is at most one such message in each response. }, "knowledgeInfoCard": { # Represents info card response. If the response contains generative knowledge prediction, Dialogflow will return a payload with Infobot Messenger compatible info card. Otherwise, the info card response is skipped. # Represents info card for knowledge answers, to be better rendered in Dialogflow Messenger. }, "liveAgentHandoff": { # Indicates that the conversation should be handed off to a live agent. Dialogflow only uses this to determine which conversations were handed off to a human agent for measurement purposes. What else to do with this signal is up to you and your handoff procedures. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates something went extremely wrong in the conversation. * In a webhook response when you determine that the customer issue can only be handled by a human. # Hands off conversation to a human agent. "metadata": { # Custom metadata for your handoff procedure. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. }, }, "mixedAudio": { # Represents an audio message that is composed of both segments synthesized from the Dialogflow agent prompts and ones hosted externally at the specified URIs. The external URIs are specified via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. An audio response message composed of both the synthesized Dialogflow agent responses and responses defined via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. "segments": [ # Segments this audio response is composed of. { # Represents one segment of audio. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this segment can be interrupted by the end user's speech and the client should then start the next Dialogflow request. "audio": "A String", # Raw audio synthesized from the Dialogflow agent's response using the output config specified in the request. "uri": "A String", # Client-specific URI that points to an audio clip accessible to the client. Dialogflow does not impose any validation on it. }, ], }, "outputAudioText": { # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. "allowPlaybackInterruption": True or False, # Output only. 
Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "ssml": "A String", # The SSML text to be synthesized. For more information, see [SSML](/speech/text-to-speech/docs/ssml). "text": "A String", # The raw text to be synthesized. }, "payload": { # Returns a response containing a custom, platform-specific payload. "a_key": "", # Properties of the object. }, "playAudio": { # Specifies an audio clip to be played by the client as part of the response. # Signal that the client should play an audio clip hosted at a client-specific URI. Dialogflow uses this to construct mixed_audio. However, Dialogflow itself does not try to read or process the URI in any way. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "audioUri": "A String", # Required. URI of the audio clip. Dialogflow does not impose any validation on this value. It is specific to the client that reads it. }, "telephonyTransferCall": { # Represents the signal that telles the client to transfer the phone call connected to the agent to a third-party endpoint. # A signal that the client should transfer the phone call connected to this agent to a third-party endpoint. "phoneNumber": "A String", # Transfer the call to a phone number in [E.164 format](https://en.wikipedia.org/wiki/E.164). }, "text": { # The text response message. # Returns a text response. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "text": [ # Required. A collection of text response variants. If multiple variants are defined, only one text response variant is returned at runtime. "A String", ], }, "toolCall": { # Represents a call of a specific tool's action with the specified inputs. # Returns the definition of a tool call that should be executed by the client. "action": "A String", # Required. The name of the tool's action associated with this call. "inputParameters": { # Optional. The action's input parameters. "a_key": "", # Properties of the object. }, "tool": "A String", # Required. The tool associated with this call. Format: `projects//locations//agents//tools/`. }, }, ], "returnPartialResponses": True or False, # Whether Dialogflow should return currently queued fulfillment response messages in streaming APIs. If a webhook is specified, it happens before Dialogflow invokes webhook. Warning: 1) This flag only affects streaming API. Responses are still queued and returned once in non-streaming API. 2) The flag can be enabled in any fulfillment but only the first 3 partial responses will be returned. You may only want to apply it to fulfillments that have slow webhooks. "setParameterActions": [ # Set parameter values before executing the webhook. { # Setting a parameter value. "parameter": "A String", # Display name of the parameter. "value": "", # The new value of the parameter. A null value clears the parameter. }, ], "tag": "A String", # The value of this field will be populated in the WebhookRequest `fulfillmentInfo.tag` field by Dialogflow when the associated webhook is called. The tag is typically used by the webhook service to identify which fulfillment is being called, but it could be used for other purposes. This field is required if `webhook` is specified. 
"webhook": "A String", # The webhook to call. Format: `projects//locations//agents//webhooks/`. }, }, "lifecycleHandler": { # A handler that is triggered on the specific lifecycle_stage of the playbook execution. # A handler triggered during specific lifecycle of the playbook execution. "condition": "A String", # Optional. The condition that must be satisfied to trigger this handler. "fulfillment": { # A fulfillment can do one or more of the following actions at the same time: * Generate rich message responses. * Set parameter values. * Call the webhook. Fulfillments can be called at various stages in the Page or Form lifecycle. For example, when a DetectIntentRequest drives a session to enter a new page, the page's entry fulfillment can add a static response to the QueryResult in the returning DetectIntentResponse, call the webhook (for example, to load user data from a database), or both. # Required. The fulfillment to call when this handler is triggered. "advancedSettings": { # Hierarchical advanced settings for agent/flow/page/fulfillment/parameter. Settings exposed at lower level overrides the settings exposed at higher level. Overriding occurs at the sub-setting level. For example, the playback_interruption_settings at fulfillment level only overrides the playback_interruption_settings at the agent level, leaving other settings at the agent level unchanged. DTMF settings does not override each other. DTMF settings set at different levels define DTMF detections running in parallel. Hierarchy: Agent->Flow->Page->Fulfillment/Parameter. # Hierarchical advanced settings for this fulfillment. The settings exposed at the lower level overrides the settings exposed at the higher level. "audioExportGcsDestination": { # Google Cloud Storage location for a Dialogflow operation that writes or exports objects (e.g. exported agent or transcripts) outside of Dialogflow. # If present, incoming audio is exported by Dialogflow to the configured Google Cloud Storage destination. Exposed at the following levels: - Agent level - Flow level "uri": "A String", # Required. The Google Cloud Storage URI for the exported objects. A URI is of the form: `gs://bucket/object-name-or-prefix` Whether a full object name, or just a prefix, its usage depends on the Dialogflow operation. }, "dtmfSettings": { # Define behaviors for DTMF (dual tone multi frequency). # Settings for DTMF. Exposed at the following levels: - Agent level - Flow level - Page level - Parameter level. "enabled": True or False, # If true, incoming audio is processed for DTMF (dual tone multi frequency) events. For example, if the caller presses a button on their telephone keypad and DTMF processing is enabled, Dialogflow will detect the event (e.g. a "3" was pressed) in the incoming audio and pass the event to the bot to drive business logic (e.g. when 3 is pressed, return the account balance). "endpointingTimeoutDuration": "A String", # Endpoint timeout setting for matching dtmf input to regex. "finishDigit": "A String", # The digit that terminates a DTMF digit sequence. "interdigitTimeoutDuration": "A String", # Interdigit timeout setting for matching dtmf input to regex. "maxDigits": 42, # Max length of DTMF digits. }, "loggingSettings": { # Define behaviors on logging. # Settings for logging. Settings for Dialogflow History, Contact Center messages, StackDriver logs, and speech logging. Exposed at the following levels: - Agent level. 
"enableConsentBasedRedaction": True or False, # Enables consent-based end-user input redaction, if true, a pre-defined session parameter `$session.params.conversation-redaction` will be used to determine if the utterance should be redacted. "enableInteractionLogging": True or False, # Enables DF Interaction logging. "enableStackdriverLogging": True or False, # Enables Google Cloud Logging. }, "speechSettings": { # Define behaviors of speech to text detection. # Settings for speech to text detection. Exposed at the following levels: - Agent level - Flow level - Page level - Parameter level "endpointerSensitivity": 42, # Sensitivity of the speech model that detects the end of speech. Scale from 0 to 100. "models": { # Mapping from language to Speech-to-Text model. The mapped Speech-to-Text model will be selected for requests from its corresponding language. For more information, see [Speech models](https://cloud.google.com/dialogflow/cx/docs/concept/speech-models). "a_key": "A String", }, "noSpeechTimeout": "A String", # Timeout before detecting no speech. "useTimeoutBasedEndpointing": True or False, # Use timeout based endpointing, interpreting endpointer sensitivity as seconds of timeout value. }, }, "conditionalCases": [ # Conditional cases for this fulfillment. { # A list of cascading if-else conditions. Cases are mutually exclusive. The first one with a matching condition is selected, all the rest ignored. "cases": [ # A list of cascading if-else conditions. { # Each case has a Boolean condition. When it is evaluated to be True, the corresponding messages will be selected and evaluated recursively. "caseContent": [ # A list of case content. { # The list of messages or conditional cases to activate for this case. "additionalCases": # Object with schema name: GoogleCloudDialogflowCxV3beta1FulfillmentConditionalCases # Additional cases to be evaluated. "message": { # Represents a response message that can be returned by a conversational agent. Response messages are also used for output audio synthesis. The approach is as follows: * If at least one OutputAudioText response is present, then all OutputAudioText responses are linearly concatenated, and the result is used for output audio synthesis. * If the OutputAudioText responses are a mixture of text and SSML, then the concatenated result is treated as SSML; otherwise, the result is treated as either text or SSML as appropriate. The agent designer should ideally use either text or SSML consistently throughout the bot design. * Otherwise, all Text responses are linearly concatenated, and the result is used for output audio synthesis. This approach allows for more sophisticated user experience scenarios, where the text displayed to the user may differ from what is heard. # Returned message. "channel": "A String", # The channel which the response is associated with. Clients can specify the channel via QueryParameters.channel, and only associated channel response will be returned. "conversationSuccess": { # Indicates that the conversation succeeded, i.e., the bot handled the issue that the customer talked to it about. Dialogflow only uses this to determine which conversations should be counted as successful and doesn't process the metadata in this message in any way. Note that Dialogflow also considers conversations that get to the conversation end page as successful even if they don't return ConversationSuccess. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates that the conversation succeeded. 
* In a webhook response when you determine that you handled the customer issue. # Indicates that the conversation succeeded. "metadata": { # Custom metadata. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. }, }, "endInteraction": { # Indicates that interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. A signal that indicates the interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only when the conversation reaches `END_SESSION` page. It is not supposed to be defined by the user. It's guaranteed that there is at most one such message in each response. }, "knowledgeInfoCard": { # Represents info card response. If the response contains generative knowledge prediction, Dialogflow will return a payload with Infobot Messenger compatible info card. Otherwise, the info card response is skipped. # Represents info card for knowledge answers, to be better rendered in Dialogflow Messenger. }, "liveAgentHandoff": { # Indicates that the conversation should be handed off to a live agent. Dialogflow only uses this to determine which conversations were handed off to a human agent for measurement purposes. What else to do with this signal is up to you and your handoff procedures. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates something went extremely wrong in the conversation. * In a webhook response when you determine that the customer issue can only be handled by a human. # Hands off conversation to a human agent. "metadata": { # Custom metadata for your handoff procedure. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. }, }, "mixedAudio": { # Represents an audio message that is composed of both segments synthesized from the Dialogflow agent prompts and ones hosted externally at the specified URIs. The external URIs are specified via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. An audio response message composed of both the synthesized Dialogflow agent responses and responses defined via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. "segments": [ # Segments this audio response is composed of. { # Represents one segment of audio. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this segment can be interrupted by the end user's speech and the client should then start the next Dialogflow request. "audio": "A String", # Raw audio synthesized from the Dialogflow agent's response using the output config specified in the request. "uri": "A String", # Client-specific URI that points to an audio clip accessible to the client. Dialogflow does not impose any validation on it. }, ], }, "outputAudioText": { # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "ssml": "A String", # The SSML text to be synthesized. 
For more information, see [SSML](/speech/text-to-speech/docs/ssml). "text": "A String", # The raw text to be synthesized. }, "payload": { # Returns a response containing a custom, platform-specific payload. "a_key": "", # Properties of the object. }, "playAudio": { # Specifies an audio clip to be played by the client as part of the response. # Signal that the client should play an audio clip hosted at a client-specific URI. Dialogflow uses this to construct mixed_audio. However, Dialogflow itself does not try to read or process the URI in any way. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then start the next Dialogflow request. "audioUri": "A String", # Required. URI of the audio clip. Dialogflow does not impose any validation on this value. It is specific to the client that reads it. }, "telephonyTransferCall": { # Represents the signal that tells the client to transfer the phone call connected to the agent to a third-party endpoint. # A signal that the client should transfer the phone call connected to this agent to a third-party endpoint. "phoneNumber": "A String", # Transfer the call to a phone number in [E.164 format](https://en.wikipedia.org/wiki/E.164). }, "text": { # The text response message. # Returns a text response. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then start the next Dialogflow request. "text": [ # Required. A collection of text response variants. If multiple variants are defined, only one text response variant is returned at runtime. "A String", ], }, "toolCall": { # Represents a call of a specific tool's action with the specified inputs. # Returns the definition of a tool call that should be executed by the client. "action": "A String", # Required. The name of the tool's action associated with this call. "inputParameters": { # Optional. The action's input parameters. "a_key": "", # Properties of the object. }, "tool": "A String", # Required. The tool associated with this call. Format: `projects//locations//agents//tools/`. }, }, }, ], "condition": "A String", # The condition to activate and select this case. Empty means the condition is always true. The condition is evaluated against form parameters or session parameters. See the [conditions reference](https://cloud.google.com/dialogflow/cx/docs/reference/condition). }, ], }, ], "enableGenerativeFallback": True or False, # If the flag is true, the agent will utilize the LLM to generate a text response. If LLM generation fails, the defined responses in the fulfillment will be respected. This flag is only useful for fulfillments associated with no-match event handlers. "generators": [ # A list of Generators to be called during this fulfillment. { # Generator settings used by the LLM to generate a text response. "generator": "A String", # Required. The generator to call. Format: `projects//locations//agents//generators/`. "inputParameters": { # Map from placeholder parameter in the Generator to corresponding session parameters. By default, Dialogflow uses the session parameter with the same name to fill in the generator template. e.g. If there is a placeholder parameter `city` in the Generator, Dialogflow defaults to filling in `$city` with `$session.params.city`. However, you may choose to fill `$city` with `$session.params.destination-city`.
- Map key: parameter ID - Map value: session parameter name "a_key": "A String", }, "outputParameter": "A String", # Required. Output parameter which should contain the generator response. }, ], "messages": [ # The list of rich message responses to present to the user. { # Represents a response message that can be returned by a conversational agent. Response messages are also used for output audio synthesis. The approach is as follows: * If at least one OutputAudioText response is present, then all OutputAudioText responses are linearly concatenated, and the result is used for output audio synthesis. * If the OutputAudioText responses are a mixture of text and SSML, then the concatenated result is treated as SSML; otherwise, the result is treated as either text or SSML as appropriate. The agent designer should ideally use either text or SSML consistently throughout the bot design. * Otherwise, all Text responses are linearly concatenated, and the result is used for output audio synthesis. This approach allows for more sophisticated user experience scenarios, where the text displayed to the user may differ from what is heard. "channel": "A String", # The channel which the response is associated with. Clients can specify the channel via QueryParameters.channel, and only associated channel response will be returned. "conversationSuccess": { # Indicates that the conversation succeeded, i.e., the bot handled the issue that the customer talked to it about. Dialogflow only uses this to determine which conversations should be counted as successful and doesn't process the metadata in this message in any way. Note that Dialogflow also considers conversations that get to the conversation end page as successful even if they don't return ConversationSuccess. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates that the conversation succeeded. * In a webhook response when you determine that you handled the customer issue. # Indicates that the conversation succeeded. "metadata": { # Custom metadata. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. }, }, "endInteraction": { # Indicates that interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. A signal that indicates the interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only when the conversation reaches `END_SESSION` page. It is not supposed to be defined by the user. It's guaranteed that there is at most one such message in each response. }, "knowledgeInfoCard": { # Represents info card response. If the response contains generative knowledge prediction, Dialogflow will return a payload with Infobot Messenger compatible info card. Otherwise, the info card response is skipped. # Represents info card for knowledge answers, to be better rendered in Dialogflow Messenger. }, "liveAgentHandoff": { # Indicates that the conversation should be handed off to a live agent. Dialogflow only uses this to determine which conversations were handed off to a human agent for measurement purposes. What else to do with this signal is up to you and your handoff procedures. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates something went extremely wrong in the conversation. * In a webhook response when you determine that the customer issue can only be handled by a human. 
# Hands off conversation to a human agent. "metadata": { # Custom metadata for your handoff procedure. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. }, }, "mixedAudio": { # Represents an audio message that is composed of both segments synthesized from the Dialogflow agent prompts and ones hosted externally at the specified URIs. The external URIs are specified via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. An audio response message composed of both the synthesized Dialogflow agent responses and responses defined via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. "segments": [ # Segments this audio response is composed of. { # Represents one segment of audio. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this segment can be interrupted by the end user's speech and the client should then start the next Dialogflow request. "audio": "A String", # Raw audio synthesized from the Dialogflow agent's response using the output config specified in the request. "uri": "A String", # Client-specific URI that points to an audio clip accessible to the client. Dialogflow does not impose any validation on it. }, ], }, "outputAudioText": { # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "ssml": "A String", # The SSML text to be synthesized. For more information, see [SSML](/speech/text-to-speech/docs/ssml). "text": "A String", # The raw text to be synthesized. }, "payload": { # Returns a response containing a custom, platform-specific payload. "a_key": "", # Properties of the object. }, "playAudio": { # Specifies an audio clip to be played by the client as part of the response. # Signal that the client should play an audio clip hosted at a client-specific URI. Dialogflow uses this to construct mixed_audio. However, Dialogflow itself does not try to read or process the URI in any way. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "audioUri": "A String", # Required. URI of the audio clip. Dialogflow does not impose any validation on this value. It is specific to the client that reads it. }, "telephonyTransferCall": { # Represents the signal that telles the client to transfer the phone call connected to the agent to a third-party endpoint. # A signal that the client should transfer the phone call connected to this agent to a third-party endpoint. "phoneNumber": "A String", # Transfer the call to a phone number in [E.164 format](https://en.wikipedia.org/wiki/E.164). }, "text": { # The text response message. # Returns a text response. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "text": [ # Required. A collection of text response variants. 
If multiple variants are defined, only one text response variant is returned at runtime. "A String", ], }, "toolCall": { # Represents a call of a specific tool's action with the specified inputs. # Returns the definition of a tool call that should be executed by the client. "action": "A String", # Required. The name of the tool's action associated with this call. "inputParameters": { # Optional. The action's input parameters. "a_key": "", # Properties of the object. }, "tool": "A String", # Required. The tool associated with this call. Format: `projects//locations//agents//tools/`. }, }, ], "returnPartialResponses": True or False, # Whether Dialogflow should return currently queued fulfillment response messages in streaming APIs. If a webhook is specified, it happens before Dialogflow invokes the webhook. Warning: 1) This flag only affects streaming API. Responses are still queued and returned once in non-streaming API. 2) The flag can be enabled in any fulfillment but only the first 3 partial responses will be returned. You may only want to apply it to fulfillments that have slow webhooks. "setParameterActions": [ # Set parameter values before executing the webhook. { # Setting a parameter value. "parameter": "A String", # Display name of the parameter. "value": "", # The new value of the parameter. A null value clears the parameter. }, ], "tag": "A String", # The value of this field will be populated in the WebhookRequest `fulfillmentInfo.tag` field by Dialogflow when the associated webhook is called. The tag is typically used by the webhook service to identify which fulfillment is being called, but it could be used for other purposes. This field is required if `webhook` is specified. "webhook": "A String", # The webhook to call. Format: `projects//locations//agents//webhooks/`. }, "lifecycleStage": "A String", # Required. The name of the lifecycle stage that triggers this handler. Supported values: * `playbook-start` * `pre-action-selection` * `pre-action-execution` }, }, ], "inputParameterDefinitions": [ # Optional. Defined structured input parameters for this playbook. { # Defines the properties of a parameter. Used to define parameters used in the agent and the input / output parameters for each fulfillment. "description": "A String", # Human-readable description of the parameter. Limited to 300 characters. "name": "A String", # Required. Name of parameter. "type": "A String", # Type of parameter. "typeSchema": { # Encapsulates different type schema variations: either a reference to a schema that's already defined by a tool, or an inline definition. # Optional. Type schema of parameter. "inlineSchema": { # A type schema object that's specified inline. # Set if this is an inline schema definition. "items": # Object with schema name: GoogleCloudDialogflowCxV3beta1TypeSchema # Schema of the elements if this is an ARRAY type. "type": "A String", # Data type of the schema. }, "schemaReference": { # A reference to the schema of an existing tool. # Set if this is a schema reference. "schema": "A String", # The name of the schema. "tool": "A String", # The tool that contains this schema definition. Format: `projects//locations//agents//tools/`. }, }, }, ], "instruction": { # Message of the Instruction of the playbook. # Instruction to accomplish target goal. "guidelines": "A String", # General guidelines for the playbook. These are unstructured instructions that are not directly part of the goal, e.g. "Always be polite".
It's valid for this text to be long and used instead of steps altogether. "steps": [ # Ordered list of step by step execution instructions to accomplish target goal. { # Message of single step execution. "steps": [ # Sub-processing needed to execute the current step. # Object with schema name: GoogleCloudDialogflowCxV3beta1PlaybookStep ], "text": "A String", # Step instruction in text format. }, ], }, "llmModelSettings": { # Settings for LLM models. # Optional. LLM model settings for the playbook. "model": "A String", # The selected LLM model. "promptText": "A String", # The custom prompt to use. }, "name": "A String", # The unique identifier of the playbook. Format: `projects//locations//agents//playbooks/`. "outputParameterDefinitions": [ # Optional. Defined structured output parameters for this playbook. { # Defines the properties of a parameter. Used to define parameters used in the agent and the input / output parameters for each fulfillment. "description": "A String", # Human-readable description of the parameter. Limited to 300 characters. "name": "A String", # Required. Name of parameter. "type": "A String", # Type of parameter. "typeSchema": { # Encapsulates different type schema variations: either a reference to a schema that's already defined by a tool, or an inline definition. # Optional. Type schema of parameter. "inlineSchema": { # A type schema object that's specified inline. # Set if this is an inline schema definition. "items": # Object with schema name: GoogleCloudDialogflowCxV3beta1TypeSchema # Schema of the elements if this is an ARRAY type. "type": "A String", # Data type of the schema. }, "schemaReference": { # A reference to the schema of an existing tool. # Set if this is a schema reference. "schema": "A String", # The name of the schema. "tool": "A String", # The tool that contains this schema definition. Format: `projects//locations//agents//tools/`. }, }, }, ], "playbookType": "A String", # Optional. Type of the playbook. "referencedFlows": [ # Output only. The resource name of flows referenced by the current playbook in the instructions. "A String", ], "referencedPlaybooks": [ # Output only. The resource name of other playbooks referenced by the current playbook in the instructions. "A String", ], "referencedTools": [ # Optional. The resource name of tools referenced by the current playbook in the instructions. If not provided explicitly, they will be implied using the tools referenced in the goal and steps. "A String", ], "speechSettings": { # Define behaviors of speech to text detection. # Optional. Playbook-level settings for speech to text detection. "endpointerSensitivity": 42, # Sensitivity of the speech model that detects the end of speech. Scale from 0 to 100. "models": { # Mapping from language to Speech-to-Text model. The mapped Speech-to-Text model will be selected for requests from its corresponding language. For more information, see [Speech models](https://cloud.google.com/dialogflow/cx/docs/concept/speech-models). "a_key": "A String", }, "noSpeechTimeout": "A String", # Timeout before detecting no speech. "useTimeoutBasedEndpointing": True or False, # Use timeout based endpointing, interpreting endpointer sensitivity as seconds of timeout value. }, "tokenCount": "A String", # Output only. Estimated number of tokens the current playbook takes when sent to the LLM. "updateTime": "A String", # Output only. Last time the playbook version was updated. }, ], }
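For context, a minimal usage sketch of the list call above, assuming application default credentials, the standard google-api-python-client `build` helper, and placeholder project/location/agent IDs (substitute your own); the response is expected to carry the playbook objects shown above under a `playbooks` key:

    from googleapiclient.discovery import build

    # Placeholder parent resource; substitute your own project, location, and agent ID.
    parent = "projects/my-project/locations/global/agents/my-agent-id"

    service = build("dialogflow", "v3beta1")
    playbooks = service.projects().locations().agents().playbooks()

    # Fetch the first page of playbooks in the agent.
    response = playbooks.list(parent=parent, pageSize=20).execute()
    for playbook in response.get("playbooks", []):
        print(playbook["name"], playbook["displayName"])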
list_next()
Retrieves the next page of results. Args: previous_request: The request for the previous page. (required) previous_response: The response from the request for the previous page. (required) Returns: A request object that you can call 'execute()' on to request the next page. Returns None if there are no more items in the collection.
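A minimal pagination sketch combining list() with list_next(), reusing the placeholder `playbooks` collection and `parent` from the sketch above:

    # Iterate over every page of results; list_next() returns None once the
    # collection is exhausted.
    request = playbooks.list(parent=parent, pageSize=20)
    while request is not None:
        response = request.execute()
        for playbook in response.get("playbooks", []):
            print(playbook["name"])
        request = playbooks.list_next(previous_request=request, previous_response=response)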
patch(name, body=None, updateMask=None, x__xgafv=None)
Updates the specified Playbook. Args: name: string, The unique identifier of the playbook. Format: `projects//locations//agents//playbooks/`. (required) body: object, The request body. The object takes the form of: { # Playbook is the basic building block to instruct the LLM how to execute a certain task. A playbook consists of a goal to accomplish, an optional list of step by step instructions (the step instruction may refer to names of the custom or default plugin tools to use) to perform the task, a list of contextual input data to be passed in at the beginning of the invocation, and a list of output parameters to store the playbook result. "createTime": "A String", # Output only. The timestamp of initial playbook creation. "displayName": "A String", # Required. The human-readable name of the playbook, unique within an agent. "goal": "A String", # Required. High level description of the goal the playbook intends to accomplish. A goal should be concise since it's visible to other playbooks that may reference this playbook. "handlers": [ # Optional. A list of registered handlers to execute based on the specified triggers. { # Handler can be used to define custom logic to be executed based on the user-specified triggers. "eventHandler": { # A handler that is triggered by the specified event. # A handler triggered by event. "condition": "A String", # Optional. The condition that must be satisfied to trigger this handler. "event": "A String", # Required. The name of the event that triggers this handler. "fulfillment": { # A fulfillment can do one or more of the following actions at the same time: * Generate rich message responses. * Set parameter values. * Call the webhook. Fulfillments can be called at various stages in the Page or Form lifecycle. For example, when a DetectIntentRequest drives a session to enter a new page, the page's entry fulfillment can add a static response to the QueryResult in the returning DetectIntentResponse, call the webhook (for example, to load user data from a database), or both. # Required. The fulfillment to call when the event occurs. "advancedSettings": { # Hierarchical advanced settings for agent/flow/page/fulfillment/parameter. Settings exposed at a lower level override the settings exposed at a higher level. Overriding occurs at the sub-setting level. For example, the playback_interruption_settings at fulfillment level only overrides the playback_interruption_settings at the agent level, leaving other settings at the agent level unchanged. DTMF settings do not override each other. DTMF settings set at different levels define DTMF detections running in parallel. Hierarchy: Agent->Flow->Page->Fulfillment/Parameter. # Hierarchical advanced settings for this fulfillment. The settings exposed at the lower level override the settings exposed at the higher level. "audioExportGcsDestination": { # Google Cloud Storage location for a Dialogflow operation that writes or exports objects (e.g. exported agent or transcripts) outside of Dialogflow. # If present, incoming audio is exported by Dialogflow to the configured Google Cloud Storage destination. Exposed at the following levels: - Agent level - Flow level "uri": "A String", # Required. The Google Cloud Storage URI for the exported objects. A URI is of the form: `gs://bucket/object-name-or-prefix` Whether a full object name or just a prefix is used depends on the Dialogflow operation. }, "dtmfSettings": { # Define behaviors for DTMF (dual tone multi frequency). # Settings for DTMF. 
Exposed at the following levels: - Agent level - Flow level - Page level - Parameter level. "enabled": True or False, # If true, incoming audio is processed for DTMF (dual tone multi frequency) events. For example, if the caller presses a button on their telephone keypad and DTMF processing is enabled, Dialogflow will detect the event (e.g. a "3" was pressed) in the incoming audio and pass the event to the bot to drive business logic (e.g. when 3 is pressed, return the account balance). "endpointingTimeoutDuration": "A String", # Endpoint timeout setting for matching dtmf input to regex. "finishDigit": "A String", # The digit that terminates a DTMF digit sequence. "interdigitTimeoutDuration": "A String", # Interdigit timeout setting for matching dtmf input to regex. "maxDigits": 42, # Max length of DTMF digits. }, "loggingSettings": { # Define behaviors on logging. # Settings for logging. Settings for Dialogflow History, Contact Center messages, StackDriver logs, and speech logging. Exposed at the following levels: - Agent level. "enableConsentBasedRedaction": True or False, # Enables consent-based end-user input redaction, if true, a pre-defined session parameter `$session.params.conversation-redaction` will be used to determine if the utterance should be redacted. "enableInteractionLogging": True or False, # Enables DF Interaction logging. "enableStackdriverLogging": True or False, # Enables Google Cloud Logging. }, "speechSettings": { # Define behaviors of speech to text detection. # Settings for speech to text detection. Exposed at the following levels: - Agent level - Flow level - Page level - Parameter level "endpointerSensitivity": 42, # Sensitivity of the speech model that detects the end of speech. Scale from 0 to 100. "models": { # Mapping from language to Speech-to-Text model. The mapped Speech-to-Text model will be selected for requests from its corresponding language. For more information, see [Speech models](https://cloud.google.com/dialogflow/cx/docs/concept/speech-models). "a_key": "A String", }, "noSpeechTimeout": "A String", # Timeout before detecting no speech. "useTimeoutBasedEndpointing": True or False, # Use timeout based endpointing, interpreting endpointer sensitivity as seconds of timeout value. }, }, "conditionalCases": [ # Conditional cases for this fulfillment. { # A list of cascading if-else conditions. Cases are mutually exclusive. The first one with a matching condition is selected, all the rest ignored. "cases": [ # A list of cascading if-else conditions. { # Each case has a Boolean condition. When it is evaluated to be True, the corresponding messages will be selected and evaluated recursively. "caseContent": [ # A list of case content. { # The list of messages or conditional cases to activate for this case. "additionalCases": # Object with schema name: GoogleCloudDialogflowCxV3beta1FulfillmentConditionalCases # Additional cases to be evaluated. "message": { # Represents a response message that can be returned by a conversational agent. Response messages are also used for output audio synthesis. The approach is as follows: * If at least one OutputAudioText response is present, then all OutputAudioText responses are linearly concatenated, and the result is used for output audio synthesis. * If the OutputAudioText responses are a mixture of text and SSML, then the concatenated result is treated as SSML; otherwise, the result is treated as either text or SSML as appropriate. 
The agent designer should ideally use either text or SSML consistently throughout the bot design. * Otherwise, all Text responses are linearly concatenated, and the result is used for output audio synthesis. This approach allows for more sophisticated user experience scenarios, where the text displayed to the user may differ from what is heard. # Returned message. "channel": "A String", # The channel which the response is associated with. Clients can specify the channel via QueryParameters.channel, and only associated channel response will be returned. "conversationSuccess": { # Indicates that the conversation succeeded, i.e., the bot handled the issue that the customer talked to it about. Dialogflow only uses this to determine which conversations should be counted as successful and doesn't process the metadata in this message in any way. Note that Dialogflow also considers conversations that get to the conversation end page as successful even if they don't return ConversationSuccess. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates that the conversation succeeded. * In a webhook response when you determine that you handled the customer issue. # Indicates that the conversation succeeded. "metadata": { # Custom metadata. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. }, }, "endInteraction": { # Indicates that interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. A signal that indicates the interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only when the conversation reaches `END_SESSION` page. It is not supposed to be defined by the user. It's guaranteed that there is at most one such message in each response. }, "knowledgeInfoCard": { # Represents info card response. If the response contains generative knowledge prediction, Dialogflow will return a payload with Infobot Messenger compatible info card. Otherwise, the info card response is skipped. # Represents info card for knowledge answers, to be better rendered in Dialogflow Messenger. }, "liveAgentHandoff": { # Indicates that the conversation should be handed off to a live agent. Dialogflow only uses this to determine which conversations were handed off to a human agent for measurement purposes. What else to do with this signal is up to you and your handoff procedures. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates something went extremely wrong in the conversation. * In a webhook response when you determine that the customer issue can only be handled by a human. # Hands off conversation to a human agent. "metadata": { # Custom metadata for your handoff procedure. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. }, }, "mixedAudio": { # Represents an audio message that is composed of both segments synthesized from the Dialogflow agent prompts and ones hosted externally at the specified URIs. The external URIs are specified via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. An audio response message composed of both the synthesized Dialogflow agent responses and responses defined via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. "segments": [ # Segments this audio response is composed of. 
{ # Represents one segment of audio. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this segment can be interrupted by the end user's speech and the client should then start the next Dialogflow request. "audio": "A String", # Raw audio synthesized from the Dialogflow agent's response using the output config specified in the request. "uri": "A String", # Client-specific URI that points to an audio clip accessible to the client. Dialogflow does not impose any validation on it. }, ], }, "outputAudioText": { # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "ssml": "A String", # The SSML text to be synthesized. For more information, see [SSML](/speech/text-to-speech/docs/ssml). "text": "A String", # The raw text to be synthesized. }, "payload": { # Returns a response containing a custom, platform-specific payload. "a_key": "", # Properties of the object. }, "playAudio": { # Specifies an audio clip to be played by the client as part of the response. # Signal that the client should play an audio clip hosted at a client-specific URI. Dialogflow uses this to construct mixed_audio. However, Dialogflow itself does not try to read or process the URI in any way. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "audioUri": "A String", # Required. URI of the audio clip. Dialogflow does not impose any validation on this value. It is specific to the client that reads it. }, "telephonyTransferCall": { # Represents the signal that telles the client to transfer the phone call connected to the agent to a third-party endpoint. # A signal that the client should transfer the phone call connected to this agent to a third-party endpoint. "phoneNumber": "A String", # Transfer the call to a phone number in [E.164 format](https://en.wikipedia.org/wiki/E.164). }, "text": { # The text response message. # Returns a text response. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "text": [ # Required. A collection of text response variants. If multiple variants are defined, only one text response variant is returned at runtime. "A String", ], }, "toolCall": { # Represents a call of a specific tool's action with the specified inputs. # Returns the definition of a tool call that should be executed by the client. "action": "A String", # Required. The name of the tool's action associated with this call. "inputParameters": { # Optional. The action's input parameters. "a_key": "", # Properties of the object. }, "tool": "A String", # Required. The tool associated with this call. Format: `projects//locations//agents//tools/`. }, }, }, ], "condition": "A String", # The condition to activate and select this case. Empty means the condition is always true. The condition is evaluated against form parameters or session parameters. 
See the [conditions reference](https://cloud.google.com/dialogflow/cx/docs/reference/condition). }, ], }, ], "enableGenerativeFallback": True or False, # If the flag is true, the agent will utilize the LLM to generate a text response. If LLM generation fails, the defined responses in the fulfillment will be respected. This flag is only useful for fulfillments associated with no-match event handlers. "generators": [ # A list of Generators to be called during this fulfillment. { # Generator settings used by the LLM to generate a text response. "generator": "A String", # Required. The generator to call. Format: `projects//locations//agents//generators/`. "inputParameters": { # Map from placeholder parameter in the Generator to corresponding session parameters. By default, Dialogflow uses the session parameter with the same name to fill in the generator template. e.g. If there is a placeholder parameter `city` in the Generator, Dialogflow defaults to filling in `$city` with `$session.params.city`. However, you may choose to fill `$city` with `$session.params.destination-city`. - Map key: parameter ID - Map value: session parameter name "a_key": "A String", }, "outputParameter": "A String", # Required. Output parameter which should contain the generator response. }, ], "messages": [ # The list of rich message responses to present to the user. { # Represents a response message that can be returned by a conversational agent. Response messages are also used for output audio synthesis. The approach is as follows: * If at least one OutputAudioText response is present, then all OutputAudioText responses are linearly concatenated, and the result is used for output audio synthesis. * If the OutputAudioText responses are a mixture of text and SSML, then the concatenated result is treated as SSML; otherwise, the result is treated as either text or SSML as appropriate. The agent designer should ideally use either text or SSML consistently throughout the bot design. * Otherwise, all Text responses are linearly concatenated, and the result is used for output audio synthesis. This approach allows for more sophisticated user experience scenarios, where the text displayed to the user may differ from what is heard. "channel": "A String", # The channel which the response is associated with. Clients can specify the channel via QueryParameters.channel, and only associated channel response will be returned. "conversationSuccess": { # Indicates that the conversation succeeded, i.e., the bot handled the issue that the customer talked to it about. Dialogflow only uses this to determine which conversations should be counted as successful and doesn't process the metadata in this message in any way. Note that Dialogflow also considers conversations that get to the conversation end page as successful even if they don't return ConversationSuccess. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates that the conversation succeeded. * In a webhook response when you determine that you handled the customer issue. # Indicates that the conversation succeeded. "metadata": { # Custom metadata. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. }, }, "endInteraction": { # Indicates that interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. A signal that indicates the interaction with the Dialogflow agent has ended. 
This message is generated by Dialogflow only when the conversation reaches `END_SESSION` page. It is not supposed to be defined by the user. It's guaranteed that there is at most one such message in each response. }, "knowledgeInfoCard": { # Represents info card response. If the response contains generative knowledge prediction, Dialogflow will return a payload with Infobot Messenger compatible info card. Otherwise, the info card response is skipped. # Represents info card for knowledge answers, to be better rendered in Dialogflow Messenger. }, "liveAgentHandoff": { # Indicates that the conversation should be handed off to a live agent. Dialogflow only uses this to determine which conversations were handed off to a human agent for measurement purposes. What else to do with this signal is up to you and your handoff procedures. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates something went extremely wrong in the conversation. * In a webhook response when you determine that the customer issue can only be handled by a human. # Hands off conversation to a human agent. "metadata": { # Custom metadata for your handoff procedure. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. }, }, "mixedAudio": { # Represents an audio message that is composed of both segments synthesized from the Dialogflow agent prompts and ones hosted externally at the specified URIs. The external URIs are specified via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. An audio response message composed of both the synthesized Dialogflow agent responses and responses defined via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. "segments": [ # Segments this audio response is composed of. { # Represents one segment of audio. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this segment can be interrupted by the end user's speech and the client should then start the next Dialogflow request. "audio": "A String", # Raw audio synthesized from the Dialogflow agent's response using the output config specified in the request. "uri": "A String", # Client-specific URI that points to an audio clip accessible to the client. Dialogflow does not impose any validation on it. }, ], }, "outputAudioText": { # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "ssml": "A String", # The SSML text to be synthesized. For more information, see [SSML](/speech/text-to-speech/docs/ssml). "text": "A String", # The raw text to be synthesized. }, "payload": { # Returns a response containing a custom, platform-specific payload. "a_key": "", # Properties of the object. }, "playAudio": { # Specifies an audio clip to be played by the client as part of the response. # Signal that the client should play an audio clip hosted at a client-specific URI. Dialogflow uses this to construct mixed_audio. However, Dialogflow itself does not try to read or process the URI in any way. 
"allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "audioUri": "A String", # Required. URI of the audio clip. Dialogflow does not impose any validation on this value. It is specific to the client that reads it. }, "telephonyTransferCall": { # Represents the signal that telles the client to transfer the phone call connected to the agent to a third-party endpoint. # A signal that the client should transfer the phone call connected to this agent to a third-party endpoint. "phoneNumber": "A String", # Transfer the call to a phone number in [E.164 format](https://en.wikipedia.org/wiki/E.164). }, "text": { # The text response message. # Returns a text response. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "text": [ # Required. A collection of text response variants. If multiple variants are defined, only one text response variant is returned at runtime. "A String", ], }, "toolCall": { # Represents a call of a specific tool's action with the specified inputs. # Returns the definition of a tool call that should be executed by the client. "action": "A String", # Required. The name of the tool's action associated with this call. "inputParameters": { # Optional. The action's input parameters. "a_key": "", # Properties of the object. }, "tool": "A String", # Required. The tool associated with this call. Format: `projects//locations//agents//tools/`. }, }, ], "returnPartialResponses": True or False, # Whether Dialogflow should return currently queued fulfillment response messages in streaming APIs. If a webhook is specified, it happens before Dialogflow invokes webhook. Warning: 1) This flag only affects streaming API. Responses are still queued and returned once in non-streaming API. 2) The flag can be enabled in any fulfillment but only the first 3 partial responses will be returned. You may only want to apply it to fulfillments that have slow webhooks. "setParameterActions": [ # Set parameter values before executing the webhook. { # Setting a parameter value. "parameter": "A String", # Display name of the parameter. "value": "", # The new value of the parameter. A null value clears the parameter. }, ], "tag": "A String", # The value of this field will be populated in the WebhookRequest `fulfillmentInfo.tag` field by Dialogflow when the associated webhook is called. The tag is typically used by the webhook service to identify which fulfillment is being called, but it could be used for other purposes. This field is required if `webhook` is specified. "webhook": "A String", # The webhook to call. Format: `projects//locations//agents//webhooks/`. }, }, "lifecycleHandler": { # A handler that is triggered on the specific lifecycle_stage of the playbook execution. # A handler triggered during specific lifecycle of the playbook execution. "condition": "A String", # Optional. The condition that must be satisfied to trigger this handler. "fulfillment": { # A fulfillment can do one or more of the following actions at the same time: * Generate rich message responses. * Set parameter values. * Call the webhook. Fulfillments can be called at various stages in the Page or Form lifecycle. 
For example, when a DetectIntentRequest drives a session to enter a new page, the page's entry fulfillment can add a static response to the QueryResult in the returning DetectIntentResponse, call the webhook (for example, to load user data from a database), or both. # Required. The fulfillment to call when this handler is triggered. "advancedSettings": { # Hierarchical advanced settings for agent/flow/page/fulfillment/parameter. Settings exposed at lower level overrides the settings exposed at higher level. Overriding occurs at the sub-setting level. For example, the playback_interruption_settings at fulfillment level only overrides the playback_interruption_settings at the agent level, leaving other settings at the agent level unchanged. DTMF settings does not override each other. DTMF settings set at different levels define DTMF detections running in parallel. Hierarchy: Agent->Flow->Page->Fulfillment/Parameter. # Hierarchical advanced settings for this fulfillment. The settings exposed at the lower level overrides the settings exposed at the higher level. "audioExportGcsDestination": { # Google Cloud Storage location for a Dialogflow operation that writes or exports objects (e.g. exported agent or transcripts) outside of Dialogflow. # If present, incoming audio is exported by Dialogflow to the configured Google Cloud Storage destination. Exposed at the following levels: - Agent level - Flow level "uri": "A String", # Required. The Google Cloud Storage URI for the exported objects. A URI is of the form: `gs://bucket/object-name-or-prefix` Whether a full object name, or just a prefix, its usage depends on the Dialogflow operation. }, "dtmfSettings": { # Define behaviors for DTMF (dual tone multi frequency). # Settings for DTMF. Exposed at the following levels: - Agent level - Flow level - Page level - Parameter level. "enabled": True or False, # If true, incoming audio is processed for DTMF (dual tone multi frequency) events. For example, if the caller presses a button on their telephone keypad and DTMF processing is enabled, Dialogflow will detect the event (e.g. a "3" was pressed) in the incoming audio and pass the event to the bot to drive business logic (e.g. when 3 is pressed, return the account balance). "endpointingTimeoutDuration": "A String", # Endpoint timeout setting for matching dtmf input to regex. "finishDigit": "A String", # The digit that terminates a DTMF digit sequence. "interdigitTimeoutDuration": "A String", # Interdigit timeout setting for matching dtmf input to regex. "maxDigits": 42, # Max length of DTMF digits. }, "loggingSettings": { # Define behaviors on logging. # Settings for logging. Settings for Dialogflow History, Contact Center messages, StackDriver logs, and speech logging. Exposed at the following levels: - Agent level. "enableConsentBasedRedaction": True or False, # Enables consent-based end-user input redaction, if true, a pre-defined session parameter `$session.params.conversation-redaction` will be used to determine if the utterance should be redacted. "enableInteractionLogging": True or False, # Enables DF Interaction logging. "enableStackdriverLogging": True or False, # Enables Google Cloud Logging. }, "speechSettings": { # Define behaviors of speech to text detection. # Settings for speech to text detection. Exposed at the following levels: - Agent level - Flow level - Page level - Parameter level "endpointerSensitivity": 42, # Sensitivity of the speech model that detects the end of speech. Scale from 0 to 100. 
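# A hypothetical speechSettings value, shown only for illustration (the model name and duration format are assumptions): {"endpointerSensitivity": 30, "models": {"en-US": "telephony"}, "noSpeechTimeout": "5s", "useTimeoutBasedEndpointing": False}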
"models": { # Mapping from language to Speech-to-Text model. The mapped Speech-to-Text model will be selected for requests from its corresponding language. For more information, see [Speech models](https://cloud.google.com/dialogflow/cx/docs/concept/speech-models). "a_key": "A String", }, "noSpeechTimeout": "A String", # Timeout before detecting no speech. "useTimeoutBasedEndpointing": True or False, # Use timeout based endpointing, interpreting endpointer sensitivity as seconds of timeout value. }, }, "conditionalCases": [ # Conditional cases for this fulfillment. { # A list of cascading if-else conditions. Cases are mutually exclusive. The first one with a matching condition is selected, all the rest ignored. "cases": [ # A list of cascading if-else conditions. { # Each case has a Boolean condition. When it is evaluated to be True, the corresponding messages will be selected and evaluated recursively. "caseContent": [ # A list of case content. { # The list of messages or conditional cases to activate for this case. "additionalCases": # Object with schema name: GoogleCloudDialogflowCxV3beta1FulfillmentConditionalCases # Additional cases to be evaluated. "message": { # Represents a response message that can be returned by a conversational agent. Response messages are also used for output audio synthesis. The approach is as follows: * If at least one OutputAudioText response is present, then all OutputAudioText responses are linearly concatenated, and the result is used for output audio synthesis. * If the OutputAudioText responses are a mixture of text and SSML, then the concatenated result is treated as SSML; otherwise, the result is treated as either text or SSML as appropriate. The agent designer should ideally use either text or SSML consistently throughout the bot design. * Otherwise, all Text responses are linearly concatenated, and the result is used for output audio synthesis. This approach allows for more sophisticated user experience scenarios, where the text displayed to the user may differ from what is heard. # Returned message. "channel": "A String", # The channel which the response is associated with. Clients can specify the channel via QueryParameters.channel, and only associated channel response will be returned. "conversationSuccess": { # Indicates that the conversation succeeded, i.e., the bot handled the issue that the customer talked to it about. Dialogflow only uses this to determine which conversations should be counted as successful and doesn't process the metadata in this message in any way. Note that Dialogflow also considers conversations that get to the conversation end page as successful even if they don't return ConversationSuccess. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates that the conversation succeeded. * In a webhook response when you determine that you handled the customer issue. # Indicates that the conversation succeeded. "metadata": { # Custom metadata. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. }, }, "endInteraction": { # Indicates that interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. A signal that indicates the interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only when the conversation reaches `END_SESSION` page. It is not supposed to be defined by the user. 
It's guaranteed that there is at most one such message in each response. }, "knowledgeInfoCard": { # Represents info card response. If the response contains generative knowledge prediction, Dialogflow will return a payload with Infobot Messenger compatible info card. Otherwise, the info card response is skipped. # Represents info card for knowledge answers, to be better rendered in Dialogflow Messenger. }, "liveAgentHandoff": { # Indicates that the conversation should be handed off to a live agent. Dialogflow only uses this to determine which conversations were handed off to a human agent for measurement purposes. What else to do with this signal is up to you and your handoff procedures. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates something went extremely wrong in the conversation. * In a webhook response when you determine that the customer issue can only be handled by a human. # Hands off conversation to a human agent. "metadata": { # Custom metadata for your handoff procedure. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. }, }, "mixedAudio": { # Represents an audio message that is composed of both segments synthesized from the Dialogflow agent prompts and ones hosted externally at the specified URIs. The external URIs are specified via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. An audio response message composed of both the synthesized Dialogflow agent responses and responses defined via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. "segments": [ # Segments this audio response is composed of. { # Represents one segment of audio. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this segment can be interrupted by the end user's speech and the client should then start the next Dialogflow request. "audio": "A String", # Raw audio synthesized from the Dialogflow agent's response using the output config specified in the request. "uri": "A String", # Client-specific URI that points to an audio clip accessible to the client. Dialogflow does not impose any validation on it. }, ], }, "outputAudioText": { # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "ssml": "A String", # The SSML text to be synthesized. For more information, see [SSML](/speech/text-to-speech/docs/ssml). "text": "A String", # The raw text to be synthesized. }, "payload": { # Returns a response containing a custom, platform-specific payload. "a_key": "", # Properties of the object. }, "playAudio": { # Specifies an audio clip to be played by the client as part of the response. # Signal that the client should play an audio clip hosted at a client-specific URI. Dialogflow uses this to construct mixed_audio. However, Dialogflow itself does not try to read or process the URI in any way. "allowPlaybackInterruption": True or False, # Output only. 
Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "audioUri": "A String", # Required. URI of the audio clip. Dialogflow does not impose any validation on this value. It is specific to the client that reads it. }, "telephonyTransferCall": { # Represents the signal that telles the client to transfer the phone call connected to the agent to a third-party endpoint. # A signal that the client should transfer the phone call connected to this agent to a third-party endpoint. "phoneNumber": "A String", # Transfer the call to a phone number in [E.164 format](https://en.wikipedia.org/wiki/E.164). }, "text": { # The text response message. # Returns a text response. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "text": [ # Required. A collection of text response variants. If multiple variants are defined, only one text response variant is returned at runtime. "A String", ], }, "toolCall": { # Represents a call of a specific tool's action with the specified inputs. # Returns the definition of a tool call that should be executed by the client. "action": "A String", # Required. The name of the tool's action associated with this call. "inputParameters": { # Optional. The action's input parameters. "a_key": "", # Properties of the object. }, "tool": "A String", # Required. The tool associated with this call. Format: `projects//locations//agents//tools/`. }, }, }, ], "condition": "A String", # The condition to activate and select this case. Empty means the condition is always true. The condition is evaluated against form parameters or session parameters. See the [conditions reference](https://cloud.google.com/dialogflow/cx/docs/reference/condition). }, ], }, ], "enableGenerativeFallback": True or False, # If the flag is true, the agent will utilize LLM to generate a text response. If LLM generation fails, the defined responses in the fulfillment will be respected. This flag is only useful for fulfillments associated with no-match event handlers. "generators": [ # A list of Generators to be called during this fulfillment. { # Generator settings used by the LLM to generate a text response. "generator": "A String", # Required. The generator to call. Format: `projects//locations//agents//generators/`. "inputParameters": { # Map from placeholder parameter in the Generator to corresponding session parameters. By default, Dialogflow uses the session parameter with the same name to fill in the generator template. e.g. If there is a placeholder parameter `city` in the Generator, Dialogflow default to fill in the `$city` with `$session.params.city`. However, you may choose to fill `$city` with `$session.params.desination-city`. - Map key: parameter ID - Map value: session parameter name "a_key": "A String", }, "outputParameter": "A String", # Required. Output parameter which should contain the generator response. }, ], "messages": [ # The list of rich message responses to present to the user. { # Represents a response message that can be returned by a conversational agent. Response messages are also used for output audio synthesis. The approach is as follows: * If at least one OutputAudioText response is present, then all OutputAudioText responses are linearly concatenated, and the result is used for output audio synthesis. 
* If the OutputAudioText responses are a mixture of text and SSML, then the concatenated result is treated as SSML; otherwise, the result is treated as either text or SSML as appropriate. The agent designer should ideally use either text or SSML consistently throughout the bot design. * Otherwise, all Text responses are linearly concatenated, and the result is used for output audio synthesis. This approach allows for more sophisticated user experience scenarios, where the text displayed to the user may differ from what is heard. "channel": "A String", # The channel which the response is associated with. Clients can specify the channel via QueryParameters.channel, and only associated channel response will be returned. "conversationSuccess": { # Indicates that the conversation succeeded, i.e., the bot handled the issue that the customer talked to it about. Dialogflow only uses this to determine which conversations should be counted as successful and doesn't process the metadata in this message in any way. Note that Dialogflow also considers conversations that get to the conversation end page as successful even if they don't return ConversationSuccess. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates that the conversation succeeded. * In a webhook response when you determine that you handled the customer issue. # Indicates that the conversation succeeded. "metadata": { # Custom metadata. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. }, }, "endInteraction": { # Indicates that interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. A signal that indicates the interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only when the conversation reaches `END_SESSION` page. It is not supposed to be defined by the user. It's guaranteed that there is at most one such message in each response. }, "knowledgeInfoCard": { # Represents info card response. If the response contains generative knowledge prediction, Dialogflow will return a payload with Infobot Messenger compatible info card. Otherwise, the info card response is skipped. # Represents info card for knowledge answers, to be better rendered in Dialogflow Messenger. }, "liveAgentHandoff": { # Indicates that the conversation should be handed off to a live agent. Dialogflow only uses this to determine which conversations were handed off to a human agent for measurement purposes. What else to do with this signal is up to you and your handoff procedures. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates something went extremely wrong in the conversation. * In a webhook response when you determine that the customer issue can only be handled by a human. # Hands off conversation to a human agent. "metadata": { # Custom metadata for your handoff procedure. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. }, }, "mixedAudio": { # Represents an audio message that is composed of both segments synthesized from the Dialogflow agent prompts and ones hosted externally at the specified URIs. The external URIs are specified via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. 
An audio response message composed of both the synthesized Dialogflow agent responses and responses defined via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. "segments": [ # Segments this audio response is composed of. { # Represents one segment of audio. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this segment can be interrupted by the end user's speech and the client should then start the next Dialogflow request. "audio": "A String", # Raw audio synthesized from the Dialogflow agent's response using the output config specified in the request. "uri": "A String", # Client-specific URI that points to an audio clip accessible to the client. Dialogflow does not impose any validation on it. }, ], }, "outputAudioText": { # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "ssml": "A String", # The SSML text to be synthesized. For more information, see [SSML](/speech/text-to-speech/docs/ssml). "text": "A String", # The raw text to be synthesized. }, "payload": { # Returns a response containing a custom, platform-specific payload. "a_key": "", # Properties of the object. }, "playAudio": { # Specifies an audio clip to be played by the client as part of the response. # Signal that the client should play an audio clip hosted at a client-specific URI. Dialogflow uses this to construct mixed_audio. However, Dialogflow itself does not try to read or process the URI in any way. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "audioUri": "A String", # Required. URI of the audio clip. Dialogflow does not impose any validation on this value. It is specific to the client that reads it. }, "telephonyTransferCall": { # Represents the signal that telles the client to transfer the phone call connected to the agent to a third-party endpoint. # A signal that the client should transfer the phone call connected to this agent to a third-party endpoint. "phoneNumber": "A String", # Transfer the call to a phone number in [E.164 format](https://en.wikipedia.org/wiki/E.164). }, "text": { # The text response message. # Returns a text response. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "text": [ # Required. A collection of text response variants. If multiple variants are defined, only one text response variant is returned at runtime. "A String", ], }, "toolCall": { # Represents a call of a specific tool's action with the specified inputs. # Returns the definition of a tool call that should be executed by the client. "action": "A String", # Required. The name of the tool's action associated with this call. "inputParameters": { # Optional. The action's input parameters. "a_key": "", # Properties of the object. }, "tool": "A String", # Required. The tool associated with this call. 
Format: `projects//locations//agents//tools/`. }, }, ], "returnPartialResponses": True or False, # Whether Dialogflow should return currently queued fulfillment response messages in streaming APIs. If a webhook is specified, it happens before Dialogflow invokes the webhook. Warning: 1) This flag only affects the streaming API. Responses are still queued and returned once in the non-streaming API. 2) The flag can be enabled in any fulfillment but only the first 3 partial responses will be returned. You may only want to apply it to fulfillments that have slow webhooks. "setParameterActions": [ # Set parameter values before executing the webhook. { # Setting a parameter value. "parameter": "A String", # Display name of the parameter. "value": "", # The new value of the parameter. A null value clears the parameter. }, ], "tag": "A String", # The value of this field will be populated in the WebhookRequest `fulfillmentInfo.tag` field by Dialogflow when the associated webhook is called. The tag is typically used by the webhook service to identify which fulfillment is being called, but it could be used for other purposes. This field is required if `webhook` is specified. "webhook": "A String", # The webhook to call. Format: `projects//locations//agents//webhooks/`. }, "lifecycleStage": "A String", # Required. The name of the lifecycle stage that triggers this handler. Supported values: * `playbook-start` * `pre-action-selection` * `pre-action-execution` }, }, ], "inputParameterDefinitions": [ # Optional. Defined structured input parameters for this playbook. { # Defines the properties of a parameter. Used to define parameters used in the agent and the input / output parameters for each fulfillment. "description": "A String", # Human-readable description of the parameter. Limited to 300 characters. "name": "A String", # Required. Name of parameter. "type": "A String", # Type of parameter. "typeSchema": { # Encapsulates different type schema variations: either a reference to a schema that's already defined by a tool, or an inline definition. # Optional. Type schema of parameter. "inlineSchema": { # A type schema object that's specified inline. # Set if this is an inline schema definition. "items": # Object with schema name: GoogleCloudDialogflowCxV3beta1TypeSchema # Schema of the elements if this is an ARRAY type. "type": "A String", # Data type of the schema. }, "schemaReference": { # A reference to the schema of an existing tool. # Set if this is a schema reference. "schema": "A String", # The name of the schema. "tool": "A String", # The tool that contains this schema definition. Format: `projects//locations//agents//tools/`. }, }, }, ], "instruction": { # Message of the Instruction of the playbook. # Instruction to accomplish target goal. "guidelines": "A String", # General guidelines for the playbook. These are unstructured instructions that are not directly part of the goal, e.g. "Always be polite". It's valid for this text to be long and used instead of steps altogether. "steps": [ # Ordered list of step by step execution instructions to accomplish target goal. { # Message of a single step execution. "steps": [ # Sub-processing needed to execute the current step. # Object with schema name: GoogleCloudDialogflowCxV3beta1PlaybookStep ], "text": "A String", # Step instruction in text format. }, ], }, "llmModelSettings": { # Settings for LLM models. # Optional. LLM model settings for the playbook. "model": "A String", # The selected LLM model. "promptText": "A String", # The custom prompt to use.
}, "name": "A String", # The unique identifier of the playbook. Format: `projects//locations//agents//playbooks/`. "outputParameterDefinitions": [ # Optional. Defined structured output parameters for this playbook. { # Defines the properties of a parameter. Used to define parameters used in the agent and the input / output parameters for each fulfillment. "description": "A String", # Human-readable description of the parameter. Limited to 300 characters. "name": "A String", # Required. Name of parameter. "type": "A String", # Type of parameter. "typeSchema": { # Encapsulates different type schema variations: either a reference to an a schema that's already defined by a tool, or an inline definition. # Optional. Type schema of parameter. "inlineSchema": { # A type schema object that's specified inline. # Set if this is an inline schema definition. "items": # Object with schema name: GoogleCloudDialogflowCxV3beta1TypeSchema # Schema of the elements if this is an ARRAY type. "type": "A String", # Data type of the schema. }, "schemaReference": { # A reference to the schema of an existing tool. # Set if this is a schema reference. "schema": "A String", # The name of the schema. "tool": "A String", # The tool that contains this schema definition. Format: `projects//locations//agents//tools/`. }, }, }, ], "playbookType": "A String", # Optional. Type of the playbook. "referencedFlows": [ # Output only. The resource name of flows referenced by the current playbook in the instructions. "A String", ], "referencedPlaybooks": [ # Output only. The resource name of other playbooks referenced by the current playbook in the instructions. "A String", ], "referencedTools": [ # Optional. The resource name of tools referenced by the current playbook in the instructions. If not provided explicitly, they are will be implied using the tool being referenced in goal and steps. "A String", ], "speechSettings": { # Define behaviors of speech to text detection. # Optional. Playbook level Settings for speech to text detection. "endpointerSensitivity": 42, # Sensitivity of the speech model that detects the end of speech. Scale from 0 to 100. "models": { # Mapping from language to Speech-to-Text model. The mapped Speech-to-Text model will be selected for requests from its corresponding language. For more information, see [Speech models](https://cloud.google.com/dialogflow/cx/docs/concept/speech-models). "a_key": "A String", }, "noSpeechTimeout": "A String", # Timeout before detecting no speech. "useTimeoutBasedEndpointing": True or False, # Use timeout based endpointing, interpreting endpointer sensitivity as seconds of timeout value. }, "tokenCount": "A String", # Output only. Estimated number of tokes current playbook takes when sent to the LLM. "updateTime": "A String", # Output only. Last time the playbook version was updated. } updateMask: string, The mask to control which fields get updated. If the mask is not present, all fields will be updated. x__xgafv: string, V1 error format. Allowed values 1 - v1 error format 2 - v2 error format Returns: An object of the form: { # Playbook is the basic building block to instruct the LLM how to execute a certain task. A playbook consists of a goal to accomplish, an optional list of step by step instructions (the step instruction may refers to name of the custom or default plugin tools to use) to perform the task, a list of contextual input data to be passed in at the beginning of the invoked, and a list of output parameters to store the playbook result. 
"createTime": "A String", # Output only. The timestamp of initial playbook creation. "displayName": "A String", # Required. The human-readable name of the playbook, unique within an agent. "goal": "A String", # Required. High level description of the goal the playbook intend to accomplish. A goal should be concise since it's visible to other playbooks that may reference this playbook. "handlers": [ # Optional. A list of registered handlers to execute based on the specified triggers. { # Handler can be used to define custom logic to be executed based on the user-specified triggers. "eventHandler": { # A handler that is triggered by the specified event. # A handler triggered by event. "condition": "A String", # Optional. The condition that must be satisfied to trigger this handler. "event": "A String", # Required. The name of the event that triggers this handler. "fulfillment": { # A fulfillment can do one or more of the following actions at the same time: * Generate rich message responses. * Set parameter values. * Call the webhook. Fulfillments can be called at various stages in the Page or Form lifecycle. For example, when a DetectIntentRequest drives a session to enter a new page, the page's entry fulfillment can add a static response to the QueryResult in the returning DetectIntentResponse, call the webhook (for example, to load user data from a database), or both. # Required. The fulfillment to call when the event occurs. "advancedSettings": { # Hierarchical advanced settings for agent/flow/page/fulfillment/parameter. Settings exposed at lower level overrides the settings exposed at higher level. Overriding occurs at the sub-setting level. For example, the playback_interruption_settings at fulfillment level only overrides the playback_interruption_settings at the agent level, leaving other settings at the agent level unchanged. DTMF settings does not override each other. DTMF settings set at different levels define DTMF detections running in parallel. Hierarchy: Agent->Flow->Page->Fulfillment/Parameter. # Hierarchical advanced settings for this fulfillment. The settings exposed at the lower level overrides the settings exposed at the higher level. "audioExportGcsDestination": { # Google Cloud Storage location for a Dialogflow operation that writes or exports objects (e.g. exported agent or transcripts) outside of Dialogflow. # If present, incoming audio is exported by Dialogflow to the configured Google Cloud Storage destination. Exposed at the following levels: - Agent level - Flow level "uri": "A String", # Required. The Google Cloud Storage URI for the exported objects. A URI is of the form: `gs://bucket/object-name-or-prefix` Whether a full object name, or just a prefix, its usage depends on the Dialogflow operation. }, "dtmfSettings": { # Define behaviors for DTMF (dual tone multi frequency). # Settings for DTMF. Exposed at the following levels: - Agent level - Flow level - Page level - Parameter level. "enabled": True or False, # If true, incoming audio is processed for DTMF (dual tone multi frequency) events. For example, if the caller presses a button on their telephone keypad and DTMF processing is enabled, Dialogflow will detect the event (e.g. a "3" was pressed) in the incoming audio and pass the event to the bot to drive business logic (e.g. when 3 is pressed, return the account balance). "endpointingTimeoutDuration": "A String", # Endpoint timeout setting for matching dtmf input to regex. "finishDigit": "A String", # The digit that terminates a DTMF digit sequence. 
"interdigitTimeoutDuration": "A String", # Interdigit timeout setting for matching dtmf input to regex. "maxDigits": 42, # Max length of DTMF digits. }, "loggingSettings": { # Define behaviors on logging. # Settings for logging. Settings for Dialogflow History, Contact Center messages, StackDriver logs, and speech logging. Exposed at the following levels: - Agent level. "enableConsentBasedRedaction": True or False, # Enables consent-based end-user input redaction, if true, a pre-defined session parameter `$session.params.conversation-redaction` will be used to determine if the utterance should be redacted. "enableInteractionLogging": True or False, # Enables DF Interaction logging. "enableStackdriverLogging": True or False, # Enables Google Cloud Logging. }, "speechSettings": { # Define behaviors of speech to text detection. # Settings for speech to text detection. Exposed at the following levels: - Agent level - Flow level - Page level - Parameter level "endpointerSensitivity": 42, # Sensitivity of the speech model that detects the end of speech. Scale from 0 to 100. "models": { # Mapping from language to Speech-to-Text model. The mapped Speech-to-Text model will be selected for requests from its corresponding language. For more information, see [Speech models](https://cloud.google.com/dialogflow/cx/docs/concept/speech-models). "a_key": "A String", }, "noSpeechTimeout": "A String", # Timeout before detecting no speech. "useTimeoutBasedEndpointing": True or False, # Use timeout based endpointing, interpreting endpointer sensitivity as seconds of timeout value. }, }, "conditionalCases": [ # Conditional cases for this fulfillment. { # A list of cascading if-else conditions. Cases are mutually exclusive. The first one with a matching condition is selected, all the rest ignored. "cases": [ # A list of cascading if-else conditions. { # Each case has a Boolean condition. When it is evaluated to be True, the corresponding messages will be selected and evaluated recursively. "caseContent": [ # A list of case content. { # The list of messages or conditional cases to activate for this case. "additionalCases": # Object with schema name: GoogleCloudDialogflowCxV3beta1FulfillmentConditionalCases # Additional cases to be evaluated. "message": { # Represents a response message that can be returned by a conversational agent. Response messages are also used for output audio synthesis. The approach is as follows: * If at least one OutputAudioText response is present, then all OutputAudioText responses are linearly concatenated, and the result is used for output audio synthesis. * If the OutputAudioText responses are a mixture of text and SSML, then the concatenated result is treated as SSML; otherwise, the result is treated as either text or SSML as appropriate. The agent designer should ideally use either text or SSML consistently throughout the bot design. * Otherwise, all Text responses are linearly concatenated, and the result is used for output audio synthesis. This approach allows for more sophisticated user experience scenarios, where the text displayed to the user may differ from what is heard. # Returned message. "channel": "A String", # The channel which the response is associated with. Clients can specify the channel via QueryParameters.channel, and only associated channel response will be returned. "conversationSuccess": { # Indicates that the conversation succeeded, i.e., the bot handled the issue that the customer talked to it about. 
Dialogflow only uses this to determine which conversations should be counted as successful and doesn't process the metadata in this message in any way. Note that Dialogflow also considers conversations that get to the conversation end page as successful even if they don't return ConversationSuccess. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates that the conversation succeeded. * In a webhook response when you determine that you handled the customer issue. # Indicates that the conversation succeeded. "metadata": { # Custom metadata. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. }, }, "endInteraction": { # Indicates that interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. A signal that indicates the interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only when the conversation reaches `END_SESSION` page. It is not supposed to be defined by the user. It's guaranteed that there is at most one such message in each response. }, "knowledgeInfoCard": { # Represents info card response. If the response contains generative knowledge prediction, Dialogflow will return a payload with Infobot Messenger compatible info card. Otherwise, the info card response is skipped. # Represents info card for knowledge answers, to be better rendered in Dialogflow Messenger. }, "liveAgentHandoff": { # Indicates that the conversation should be handed off to a live agent. Dialogflow only uses this to determine which conversations were handed off to a human agent for measurement purposes. What else to do with this signal is up to you and your handoff procedures. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates something went extremely wrong in the conversation. * In a webhook response when you determine that the customer issue can only be handled by a human. # Hands off conversation to a human agent. "metadata": { # Custom metadata for your handoff procedure. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. }, }, "mixedAudio": { # Represents an audio message that is composed of both segments synthesized from the Dialogflow agent prompts and ones hosted externally at the specified URIs. The external URIs are specified via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. An audio response message composed of both the synthesized Dialogflow agent responses and responses defined via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. "segments": [ # Segments this audio response is composed of. { # Represents one segment of audio. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this segment can be interrupted by the end user's speech and the client should then start the next Dialogflow request. "audio": "A String", # Raw audio synthesized from the Dialogflow agent's response using the output config specified in the request. "uri": "A String", # Client-specific URI that points to an audio clip accessible to the client. Dialogflow does not impose any validation on it. }, ], }, "outputAudioText": { # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. 
# A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "ssml": "A String", # The SSML text to be synthesized. For more information, see [SSML](/speech/text-to-speech/docs/ssml). "text": "A String", # The raw text to be synthesized. }, "payload": { # Returns a response containing a custom, platform-specific payload. "a_key": "", # Properties of the object. }, "playAudio": { # Specifies an audio clip to be played by the client as part of the response. # Signal that the client should play an audio clip hosted at a client-specific URI. Dialogflow uses this to construct mixed_audio. However, Dialogflow itself does not try to read or process the URI in any way. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "audioUri": "A String", # Required. URI of the audio clip. Dialogflow does not impose any validation on this value. It is specific to the client that reads it. }, "telephonyTransferCall": { # Represents the signal that telles the client to transfer the phone call connected to the agent to a third-party endpoint. # A signal that the client should transfer the phone call connected to this agent to a third-party endpoint. "phoneNumber": "A String", # Transfer the call to a phone number in [E.164 format](https://en.wikipedia.org/wiki/E.164). }, "text": { # The text response message. # Returns a text response. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "text": [ # Required. A collection of text response variants. If multiple variants are defined, only one text response variant is returned at runtime. "A String", ], }, "toolCall": { # Represents a call of a specific tool's action with the specified inputs. # Returns the definition of a tool call that should be executed by the client. "action": "A String", # Required. The name of the tool's action associated with this call. "inputParameters": { # Optional. The action's input parameters. "a_key": "", # Properties of the object. }, "tool": "A String", # Required. The tool associated with this call. Format: `projects//locations//agents//tools/`. }, }, }, ], "condition": "A String", # The condition to activate and select this case. Empty means the condition is always true. The condition is evaluated against form parameters or session parameters. See the [conditions reference](https://cloud.google.com/dialogflow/cx/docs/reference/condition). }, ], }, ], "enableGenerativeFallback": True or False, # If the flag is true, the agent will utilize LLM to generate a text response. If LLM generation fails, the defined responses in the fulfillment will be respected. This flag is only useful for fulfillments associated with no-match event handlers. "generators": [ # A list of Generators to be called during this fulfillment. { # Generator settings used by the LLM to generate a text response. "generator": "A String", # Required. The generator to call. Format: `projects//locations//agents//generators/`. 
"inputParameters": { # Map from placeholder parameter in the Generator to corresponding session parameters. By default, Dialogflow uses the session parameter with the same name to fill in the generator template. e.g. If there is a placeholder parameter `city` in the Generator, Dialogflow default to fill in the `$city` with `$session.params.city`. However, you may choose to fill `$city` with `$session.params.desination-city`. - Map key: parameter ID - Map value: session parameter name "a_key": "A String", }, "outputParameter": "A String", # Required. Output parameter which should contain the generator response. }, ], "messages": [ # The list of rich message responses to present to the user. { # Represents a response message that can be returned by a conversational agent. Response messages are also used for output audio synthesis. The approach is as follows: * If at least one OutputAudioText response is present, then all OutputAudioText responses are linearly concatenated, and the result is used for output audio synthesis. * If the OutputAudioText responses are a mixture of text and SSML, then the concatenated result is treated as SSML; otherwise, the result is treated as either text or SSML as appropriate. The agent designer should ideally use either text or SSML consistently throughout the bot design. * Otherwise, all Text responses are linearly concatenated, and the result is used for output audio synthesis. This approach allows for more sophisticated user experience scenarios, where the text displayed to the user may differ from what is heard. "channel": "A String", # The channel which the response is associated with. Clients can specify the channel via QueryParameters.channel, and only associated channel response will be returned. "conversationSuccess": { # Indicates that the conversation succeeded, i.e., the bot handled the issue that the customer talked to it about. Dialogflow only uses this to determine which conversations should be counted as successful and doesn't process the metadata in this message in any way. Note that Dialogflow also considers conversations that get to the conversation end page as successful even if they don't return ConversationSuccess. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates that the conversation succeeded. * In a webhook response when you determine that you handled the customer issue. # Indicates that the conversation succeeded. "metadata": { # Custom metadata. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. }, }, "endInteraction": { # Indicates that interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. A signal that indicates the interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only when the conversation reaches `END_SESSION` page. It is not supposed to be defined by the user. It's guaranteed that there is at most one such message in each response. }, "knowledgeInfoCard": { # Represents info card response. If the response contains generative knowledge prediction, Dialogflow will return a payload with Infobot Messenger compatible info card. Otherwise, the info card response is skipped. # Represents info card for knowledge answers, to be better rendered in Dialogflow Messenger. }, "liveAgentHandoff": { # Indicates that the conversation should be handed off to a live agent. 
Dialogflow only uses this to determine which conversations were handed off to a human agent for measurement purposes. What else to do with this signal is up to you and your handoff procedures. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates something went extremely wrong in the conversation. * In a webhook response when you determine that the customer issue can only be handled by a human. # Hands off conversation to a human agent. "metadata": { # Custom metadata for your handoff procedure. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. }, }, "mixedAudio": { # Represents an audio message that is composed of both segments synthesized from the Dialogflow agent prompts and ones hosted externally at the specified URIs. The external URIs are specified via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. An audio response message composed of both the synthesized Dialogflow agent responses and responses defined via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. "segments": [ # Segments this audio response is composed of. { # Represents one segment of audio. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this segment can be interrupted by the end user's speech and the client should then start the next Dialogflow request. "audio": "A String", # Raw audio synthesized from the Dialogflow agent's response using the output config specified in the request. "uri": "A String", # Client-specific URI that points to an audio clip accessible to the client. Dialogflow does not impose any validation on it. }, ], }, "outputAudioText": { # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then start the next Dialogflow request. "ssml": "A String", # The SSML text to be synthesized. For more information, see [SSML](/speech/text-to-speech/docs/ssml). "text": "A String", # The raw text to be synthesized. }, "payload": { # Returns a response containing a custom, platform-specific payload. "a_key": "", # Properties of the object. }, "playAudio": { # Specifies an audio clip to be played by the client as part of the response. # Signal that the client should play an audio clip hosted at a client-specific URI. Dialogflow uses this to construct mixed_audio. However, Dialogflow itself does not try to read or process the URI in any way. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then start the next Dialogflow request. "audioUri": "A String", # Required. URI of the audio clip. Dialogflow does not impose any validation on this value. It is specific to the client that reads it. }, "telephonyTransferCall": { # Represents the signal that tells the client to transfer the phone call connected to the agent to a third-party endpoint. # A signal that the client should transfer the phone call connected to this agent to a third-party endpoint. 
"phoneNumber": "A String", # Transfer the call to a phone number in [E.164 format](https://en.wikipedia.org/wiki/E.164). }, "text": { # The text response message. # Returns a text response. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then start the next Dialogflow request. "text": [ # Required. A collection of text response variants. If multiple variants are defined, only one text response variant is returned at runtime. "A String", ], }, "toolCall": { # Represents a call of a specific tool's action with the specified inputs. # Returns the definition of a tool call that should be executed by the client. "action": "A String", # Required. The name of the tool's action associated with this call. "inputParameters": { # Optional. The action's input parameters. "a_key": "", # Properties of the object. }, "tool": "A String", # Required. The tool associated with this call. Format: `projects//locations//agents//tools/`. }, }, ], "returnPartialResponses": True or False, # Whether Dialogflow should return currently queued fulfillment response messages in streaming APIs. If a webhook is specified, it happens before Dialogflow invokes the webhook. Warning: 1) This flag only affects the streaming API. Responses are still queued and returned once in the non-streaming API. 2) The flag can be enabled in any fulfillment but only the first 3 partial responses will be returned. You may only want to apply it to fulfillments that have slow webhooks. "setParameterActions": [ # Set parameter values before executing the webhook. { # Setting a parameter value. "parameter": "A String", # Display name of the parameter. "value": "", # The new value of the parameter. A null value clears the parameter. }, ], "tag": "A String", # The value of this field will be populated in the WebhookRequest `fulfillmentInfo.tag` field by Dialogflow when the associated webhook is called. The tag is typically used by the webhook service to identify which fulfillment is being called, but it could be used for other purposes. This field is required if `webhook` is specified. "webhook": "A String", # The webhook to call. Format: `projects//locations//agents//webhooks/`. }, }, "lifecycleHandler": { # A handler that is triggered on the specific lifecycle_stage of the playbook execution. # A handler triggered during a specific lifecycle stage of the playbook execution. "condition": "A String", # Optional. The condition that must be satisfied to trigger this handler. "fulfillment": { # A fulfillment can do one or more of the following actions at the same time: * Generate rich message responses. * Set parameter values. * Call the webhook. Fulfillments can be called at various stages in the Page or Form lifecycle. For example, when a DetectIntentRequest drives a session to enter a new page, the page's entry fulfillment can add a static response to the QueryResult in the returning DetectIntentResponse, call the webhook (for example, to load user data from a database), or both. # Required. The fulfillment to call when this handler is triggered. "advancedSettings": { # Hierarchical advanced settings for agent/flow/page/fulfillment/parameter. Settings exposed at a lower level override the settings exposed at a higher level. Overriding occurs at the sub-setting level. For example, the playback_interruption_settings at fulfillment level only overrides the playback_interruption_settings at the agent level, leaving other settings at the agent level unchanged. 
DTMF settings do not override each other. DTMF settings set at different levels define DTMF detections running in parallel. Hierarchy: Agent->Flow->Page->Fulfillment/Parameter. # Hierarchical advanced settings for this fulfillment. The settings exposed at the lower level override the settings exposed at the higher level. "audioExportGcsDestination": { # Google Cloud Storage location for a Dialogflow operation that writes or exports objects (e.g. exported agent or transcripts) outside of Dialogflow. # If present, incoming audio is exported by Dialogflow to the configured Google Cloud Storage destination. Exposed at the following levels: - Agent level - Flow level "uri": "A String", # Required. The Google Cloud Storage URI for the exported objects. A URI is of the form: `gs://bucket/object-name-or-prefix` Whether a full object name or just a prefix is used depends on the Dialogflow operation. }, "dtmfSettings": { # Define behaviors for DTMF (dual tone multi frequency). # Settings for DTMF. Exposed at the following levels: - Agent level - Flow level - Page level - Parameter level. "enabled": True or False, # If true, incoming audio is processed for DTMF (dual tone multi frequency) events. For example, if the caller presses a button on their telephone keypad and DTMF processing is enabled, Dialogflow will detect the event (e.g. a "3" was pressed) in the incoming audio and pass the event to the bot to drive business logic (e.g. when 3 is pressed, return the account balance). "endpointingTimeoutDuration": "A String", # Endpoint timeout setting for matching dtmf input to regex. "finishDigit": "A String", # The digit that terminates a DTMF digit sequence. "interdigitTimeoutDuration": "A String", # Interdigit timeout setting for matching dtmf input to regex. "maxDigits": 42, # Max length of DTMF digits. }, "loggingSettings": { # Define behaviors on logging. # Settings for logging. Settings for Dialogflow History, Contact Center messages, StackDriver logs, and speech logging. Exposed at the following levels: - Agent level. "enableConsentBasedRedaction": True or False, # Enables consent-based end-user input redaction. If true, a pre-defined session parameter `$session.params.conversation-redaction` will be used to determine if the utterance should be redacted. "enableInteractionLogging": True or False, # Enables DF Interaction logging. "enableStackdriverLogging": True or False, # Enables Google Cloud Logging. }, "speechSettings": { # Define behaviors of speech to text detection. # Settings for speech to text detection. Exposed at the following levels: - Agent level - Flow level - Page level - Parameter level "endpointerSensitivity": 42, # Sensitivity of the speech model that detects the end of speech. Scale from 0 to 100. "models": { # Mapping from language to Speech-to-Text model. The mapped Speech-to-Text model will be selected for requests from its corresponding language. For more information, see [Speech models](https://cloud.google.com/dialogflow/cx/docs/concept/speech-models). "a_key": "A String", }, "noSpeechTimeout": "A String", # Timeout before detecting no speech. "useTimeoutBasedEndpointing": True or False, # Use timeout based endpointing, interpreting endpointer sensitivity as seconds of timeout value. }, }, "conditionalCases": [ # Conditional cases for this fulfillment. { # A list of cascading if-else conditions. Cases are mutually exclusive. The first one with a matching condition is selected; all the rest are ignored. "cases": [ # A list of cascading if-else conditions. 
{ # Each case has a Boolean condition. When it is evaluated to be True, the corresponding messages will be selected and evaluated recursively. "caseContent": [ # A list of case content. { # The list of messages or conditional cases to activate for this case. "additionalCases": # Object with schema name: GoogleCloudDialogflowCxV3beta1FulfillmentConditionalCases # Additional cases to be evaluated. "message": { # Represents a response message that can be returned by a conversational agent. Response messages are also used for output audio synthesis. The approach is as follows: * If at least one OutputAudioText response is present, then all OutputAudioText responses are linearly concatenated, and the result is used for output audio synthesis. * If the OutputAudioText responses are a mixture of text and SSML, then the concatenated result is treated as SSML; otherwise, the result is treated as either text or SSML as appropriate. The agent designer should ideally use either text or SSML consistently throughout the bot design. * Otherwise, all Text responses are linearly concatenated, and the result is used for output audio synthesis. This approach allows for more sophisticated user experience scenarios, where the text displayed to the user may differ from what is heard. # Returned message. "channel": "A String", # The channel which the response is associated with. Clients can specify the channel via QueryParameters.channel, and only associated channel response will be returned. "conversationSuccess": { # Indicates that the conversation succeeded, i.e., the bot handled the issue that the customer talked to it about. Dialogflow only uses this to determine which conversations should be counted as successful and doesn't process the metadata in this message in any way. Note that Dialogflow also considers conversations that get to the conversation end page as successful even if they don't return ConversationSuccess. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates that the conversation succeeded. * In a webhook response when you determine that you handled the customer issue. # Indicates that the conversation succeeded. "metadata": { # Custom metadata. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. }, }, "endInteraction": { # Indicates that interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. A signal that indicates the interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only when the conversation reaches `END_SESSION` page. It is not supposed to be defined by the user. It's guaranteed that there is at most one such message in each response. }, "knowledgeInfoCard": { # Represents info card response. If the response contains generative knowledge prediction, Dialogflow will return a payload with Infobot Messenger compatible info card. Otherwise, the info card response is skipped. # Represents info card for knowledge answers, to be better rendered in Dialogflow Messenger. }, "liveAgentHandoff": { # Indicates that the conversation should be handed off to a live agent. Dialogflow only uses this to determine which conversations were handed off to a human agent for measurement purposes. What else to do with this signal is up to you and your handoff procedures. 
You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates something went extremely wrong in the conversation. * In a webhook response when you determine that the customer issue can only be handled by a human. # Hands off conversation to a human agent. "metadata": { # Custom metadata for your handoff procedure. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. }, }, "mixedAudio": { # Represents an audio message that is composed of both segments synthesized from the Dialogflow agent prompts and ones hosted externally at the specified URIs. The external URIs are specified via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. An audio response message composed of both the synthesized Dialogflow agent responses and responses defined via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. "segments": [ # Segments this audio response is composed of. { # Represents one segment of audio. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this segment can be interrupted by the end user's speech and the client should then start the next Dialogflow request. "audio": "A String", # Raw audio synthesized from the Dialogflow agent's response using the output config specified in the request. "uri": "A String", # Client-specific URI that points to an audio clip accessible to the client. Dialogflow does not impose any validation on it. }, ], }, "outputAudioText": { # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then start the next Dialogflow request. "ssml": "A String", # The SSML text to be synthesized. For more information, see [SSML](/speech/text-to-speech/docs/ssml). "text": "A String", # The raw text to be synthesized. }, "payload": { # Returns a response containing a custom, platform-specific payload. "a_key": "", # Properties of the object. }, "playAudio": { # Specifies an audio clip to be played by the client as part of the response. # Signal that the client should play an audio clip hosted at a client-specific URI. Dialogflow uses this to construct mixed_audio. However, Dialogflow itself does not try to read or process the URI in any way. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then start the next Dialogflow request. "audioUri": "A String", # Required. URI of the audio clip. Dialogflow does not impose any validation on this value. It is specific to the client that reads it. }, "telephonyTransferCall": { # Represents the signal that tells the client to transfer the phone call connected to the agent to a third-party endpoint. # A signal that the client should transfer the phone call connected to this agent to a third-party endpoint. "phoneNumber": "A String", # Transfer the call to a phone number in [E.164 format](https://en.wikipedia.org/wiki/E.164). }, "text": { # The text response message. # Returns a text response. 
"allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then start the next Dialogflow request. "text": [ # Required. A collection of text response variants. If multiple variants are defined, only one text response variant is returned at runtime. "A String", ], }, "toolCall": { # Represents a call of a specific tool's action with the specified inputs. # Returns the definition of a tool call that should be executed by the client. "action": "A String", # Required. The name of the tool's action associated with this call. "inputParameters": { # Optional. The action's input parameters. "a_key": "", # Properties of the object. }, "tool": "A String", # Required. The tool associated with this call. Format: `projects//locations//agents//tools/`. }, }, }, ], "condition": "A String", # The condition to activate and select this case. Empty means the condition is always true. The condition is evaluated against form parameters or session parameters. See the [conditions reference](https://cloud.google.com/dialogflow/cx/docs/reference/condition). }, ], }, ], "enableGenerativeFallback": True or False, # If the flag is true, the agent will utilize the LLM to generate a text response. If LLM generation fails, the defined responses in the fulfillment will be respected. This flag is only useful for fulfillments associated with no-match event handlers. "generators": [ # A list of Generators to be called during this fulfillment. { # Generator settings used by the LLM to generate a text response. "generator": "A String", # Required. The generator to call. Format: `projects//locations//agents//generators/`. "inputParameters": { # Map from placeholder parameter in the Generator to corresponding session parameters. By default, Dialogflow uses the session parameter with the same name to fill in the generator template. e.g. If there is a placeholder parameter `city` in the Generator, Dialogflow defaults to filling in `$city` with `$session.params.city`. However, you may choose to fill `$city` with `$session.params.destination-city`. - Map key: parameter ID - Map value: session parameter name "a_key": "A String", }, "outputParameter": "A String", # Required. Output parameter which should contain the generator response. }, ], "messages": [ # The list of rich message responses to present to the user. { # Represents a response message that can be returned by a conversational agent. Response messages are also used for output audio synthesis. The approach is as follows: * If at least one OutputAudioText response is present, then all OutputAudioText responses are linearly concatenated, and the result is used for output audio synthesis. * If the OutputAudioText responses are a mixture of text and SSML, then the concatenated result is treated as SSML; otherwise, the result is treated as either text or SSML as appropriate. The agent designer should ideally use either text or SSML consistently throughout the bot design. * Otherwise, all Text responses are linearly concatenated, and the result is used for output audio synthesis. This approach allows for more sophisticated user experience scenarios, where the text displayed to the user may differ from what is heard. "channel": "A String", # The channel which the response is associated with. Clients can specify the channel via QueryParameters.channel, and only the associated channel response will be returned. 
"conversationSuccess": { # Indicates that the conversation succeeded, i.e., the bot handled the issue that the customer talked to it about. Dialogflow only uses this to determine which conversations should be counted as successful and doesn't process the metadata in this message in any way. Note that Dialogflow also considers conversations that get to the conversation end page as successful even if they don't return ConversationSuccess. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates that the conversation succeeded. * In a webhook response when you determine that you handled the customer issue. # Indicates that the conversation succeeded. "metadata": { # Custom metadata. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. }, }, "endInteraction": { # Indicates that interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. A signal that indicates the interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only when the conversation reaches `END_SESSION` page. It is not supposed to be defined by the user. It's guaranteed that there is at most one such message in each response. }, "knowledgeInfoCard": { # Represents info card response. If the response contains generative knowledge prediction, Dialogflow will return a payload with Infobot Messenger compatible info card. Otherwise, the info card response is skipped. # Represents info card for knowledge answers, to be better rendered in Dialogflow Messenger. }, "liveAgentHandoff": { # Indicates that the conversation should be handed off to a live agent. Dialogflow only uses this to determine which conversations were handed off to a human agent for measurement purposes. What else to do with this signal is up to you and your handoff procedures. You may set this, for example: * In the entry_fulfillment of a Page if entering the page indicates something went extremely wrong in the conversation. * In a webhook response when you determine that the customer issue can only be handled by a human. # Hands off conversation to a human agent. "metadata": { # Custom metadata for your handoff procedure. Dialogflow doesn't impose any structure on this. "a_key": "", # Properties of the object. }, }, "mixedAudio": { # Represents an audio message that is composed of both segments synthesized from the Dialogflow agent prompts and ones hosted externally at the specified URIs. The external URIs are specified via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. # Output only. An audio response message composed of both the synthesized Dialogflow agent responses and responses defined via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user. "segments": [ # Segments this audio response is composed of. { # Represents one segment of audio. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this segment can be interrupted by the end user's speech and the client should then start the next Dialogflow request. "audio": "A String", # Raw audio synthesized from the Dialogflow agent's response using the output config specified in the request. "uri": "A String", # Client-specific URI that points to an audio clip accessible to the client. Dialogflow does not impose any validation on it. 
}, ], }, "outputAudioText": { # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. # A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "ssml": "A String", # The SSML text to be synthesized. For more information, see [SSML](/speech/text-to-speech/docs/ssml). "text": "A String", # The raw text to be synthesized. }, "payload": { # Returns a response containing a custom, platform-specific payload. "a_key": "", # Properties of the object. }, "playAudio": { # Specifies an audio clip to be played by the client as part of the response. # Signal that the client should play an audio clip hosted at a client-specific URI. Dialogflow uses this to construct mixed_audio. However, Dialogflow itself does not try to read or process the URI in any way. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "audioUri": "A String", # Required. URI of the audio clip. Dialogflow does not impose any validation on this value. It is specific to the client that reads it. }, "telephonyTransferCall": { # Represents the signal that telles the client to transfer the phone call connected to the agent to a third-party endpoint. # A signal that the client should transfer the phone call connected to this agent to a third-party endpoint. "phoneNumber": "A String", # Transfer the call to a phone number in [E.164 format](https://en.wikipedia.org/wiki/E.164). }, "text": { # The text response message. # Returns a text response. "allowPlaybackInterruption": True or False, # Output only. Whether the playback of this message can be interrupted by the end user's speech and the client can then starts the next Dialogflow request. "text": [ # Required. A collection of text response variants. If multiple variants are defined, only one text response variant is returned at runtime. "A String", ], }, "toolCall": { # Represents a call of a specific tool's action with the specified inputs. # Returns the definition of a tool call that should be executed by the client. "action": "A String", # Required. The name of the tool's action associated with this call. "inputParameters": { # Optional. The action's input parameters. "a_key": "", # Properties of the object. }, "tool": "A String", # Required. The tool associated with this call. Format: `projects//locations//agents//tools/`. }, }, ], "returnPartialResponses": True or False, # Whether Dialogflow should return currently queued fulfillment response messages in streaming APIs. If a webhook is specified, it happens before Dialogflow invokes webhook. Warning: 1) This flag only affects streaming API. Responses are still queued and returned once in non-streaming API. 2) The flag can be enabled in any fulfillment but only the first 3 partial responses will be returned. You may only want to apply it to fulfillments that have slow webhooks. "setParameterActions": [ # Set parameter values before executing the webhook. { # Setting a parameter value. "parameter": "A String", # Display name of the parameter. "value": "", # The new value of the parameter. 
A null value clears the parameter. }, ], "tag": "A String", # The value of this field will be populated in the WebhookRequest `fulfillmentInfo.tag` field by Dialogflow when the associated webhook is called. The tag is typically used by the webhook service to identify which fulfillment is being called, but it could be used for other purposes. This field is required if `webhook` is specified. "webhook": "A String", # The webhook to call. Format: `projects//locations//agents//webhooks/`. }, "lifecycleStage": "A String", # Required. The name of the lifecycle stage that triggers this handler. Supported values: * `playbook-start` * `pre-action-selection` * `pre-action-execution` }, }, ], "inputParameterDefinitions": [ # Optional. Defined structured input parameters for this playbook. { # Defines the properties of a parameter. Used to define parameters used in the agent and the input / output parameters for each fulfillment. "description": "A String", # Human-readable description of the parameter. Limited to 300 characters. "name": "A String", # Required. Name of parameter. "type": "A String", # Type of parameter. "typeSchema": { # Encapsulates different type schema variations: either a reference to an a schema that's already defined by a tool, or an inline definition. # Optional. Type schema of parameter. "inlineSchema": { # A type schema object that's specified inline. # Set if this is an inline schema definition. "items": # Object with schema name: GoogleCloudDialogflowCxV3beta1TypeSchema # Schema of the elements if this is an ARRAY type. "type": "A String", # Data type of the schema. }, "schemaReference": { # A reference to the schema of an existing tool. # Set if this is a schema reference. "schema": "A String", # The name of the schema. "tool": "A String", # The tool that contains this schema definition. Format: `projects//locations//agents//tools/`. }, }, }, ], "instruction": { # Message of the Instruction of the playbook. # Instruction to accomplish target goal. "guidelines": "A String", # General guidelines for the playbook. These are unstructured instructions that are not directly part of the goal, e.g. "Always be polite". It's valid for this text to be long and used instead of steps altogether. "steps": [ # Ordered list of step by step execution instructions to accomplish target goal. { # Message of single step execution. "steps": [ # Sub-processing needed to execute the current step. # Object with schema name: GoogleCloudDialogflowCxV3beta1PlaybookStep ], "text": "A String", # Step instruction in text format. }, ], }, "llmModelSettings": { # Settings for LLM models. # Optional. Llm model settings for the playbook. "model": "A String", # The selected LLM model. "promptText": "A String", # The custom prompt to use. }, "name": "A String", # The unique identifier of the playbook. Format: `projects//locations//agents//playbooks/`. "outputParameterDefinitions": [ # Optional. Defined structured output parameters for this playbook. { # Defines the properties of a parameter. Used to define parameters used in the agent and the input / output parameters for each fulfillment. "description": "A String", # Human-readable description of the parameter. Limited to 300 characters. "name": "A String", # Required. Name of parameter. "type": "A String", # Type of parameter. "typeSchema": { # Encapsulates different type schema variations: either a reference to an a schema that's already defined by a tool, or an inline definition. # Optional. Type schema of parameter. 
"inlineSchema": { # A type schema object that's specified inline. # Set if this is an inline schema definition. "items": # Object with schema name: GoogleCloudDialogflowCxV3beta1TypeSchema # Schema of the elements if this is an ARRAY type. "type": "A String", # Data type of the schema. }, "schemaReference": { # A reference to the schema of an existing tool. # Set if this is a schema reference. "schema": "A String", # The name of the schema. "tool": "A String", # The tool that contains this schema definition. Format: `projects//locations//agents//tools/`. }, }, }, ], "playbookType": "A String", # Optional. Type of the playbook. "referencedFlows": [ # Output only. The resource name of flows referenced by the current playbook in the instructions. "A String", ], "referencedPlaybooks": [ # Output only. The resource name of other playbooks referenced by the current playbook in the instructions. "A String", ], "referencedTools": [ # Optional. The resource name of tools referenced by the current playbook in the instructions. If not provided explicitly, they are will be implied using the tool being referenced in goal and steps. "A String", ], "speechSettings": { # Define behaviors of speech to text detection. # Optional. Playbook level Settings for speech to text detection. "endpointerSensitivity": 42, # Sensitivity of the speech model that detects the end of speech. Scale from 0 to 100. "models": { # Mapping from language to Speech-to-Text model. The mapped Speech-to-Text model will be selected for requests from its corresponding language. For more information, see [Speech models](https://cloud.google.com/dialogflow/cx/docs/concept/speech-models). "a_key": "A String", }, "noSpeechTimeout": "A String", # Timeout before detecting no speech. "useTimeoutBasedEndpointing": True or False, # Use timeout based endpointing, interpreting endpointer sensitivity as seconds of timeout value. }, "tokenCount": "A String", # Output only. Estimated number of tokes current playbook takes when sent to the LLM. "updateTime": "A String", # Output only. Last time the playbook version was updated. }