Dialogflow API . projects . locations . statelessSuggestion

Instance Methods

close()

Close httplib2 connections.

generate(parent, body=None, x__xgafv=None)

Generates and returns a suggestion for a conversation that does not have a resource created for it.

Method Details

close()
Close httplib2 connections.
generate(parent, body=None, x__xgafv=None)
Generates and returns a suggestion for a conversation that does not have a resource created for it.

Args:
  parent: string, Required. The parent resource to charge for the Suggestion's generation. Format: `projects//locations/`. (required)
  body: object, The request body.
    The object takes the form of:

{ # The request message for Conversations.GenerateStatelessSuggestion.
  "conversationContext": { # Context of the conversation, including transcripts. # Optional. Context of the conversation, including transcripts.
    "messageEntries": [ # Optional. List of message transcripts in the conversation.
      { # Represents a message entry of a conversation.
        "createTime": "A String", # Optional. Create time of the message entry.
        "languageCode": "A String", # Optional. The language of the text. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes.
        "role": "A String", # Optional. Participant role of the message.
        "text": "A String", # Optional. Transcript content of the message.
      },
    ],
  },
  "generator": { # LLM generator. # Uncreated generator. It should be a complete generator that includes all information about the generator.
    "createTime": "A String", # Output only. Creation time of this generator.
    "description": "A String", # Optional. Human readable description of the generator.
    "inferenceParameter": { # The parameters of inference. # Optional. Inference parameters for this generator.
      "maxOutputTokens": 42, # Optional. Maximum number of the output tokens for the generator.
      "temperature": 3.14, # Optional. Controls the randomness of LLM predictions. Low temperature = less random. High temperature = more random. If unset (or 0), uses a default value of 0.
      "topK": 42, # Optional. Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means that the next token is selected from among the 3 most probable tokens (using temperature). For each token selection step, the top K tokens with the highest probabilities are sampled. Then tokens are further filtered based on topP with the final token selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses. Acceptable value is [1, 40], default to 40.
      "topP": 3.14, # Optional. Top-p changes how the model selects tokens for output. Tokens are selected from most K (see topK parameter) probable to least until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have a probability of 0.3, 0.2, and 0.1 and the top-p value is 0.5, then the model will select either A or B as the next token (using temperature) and doesn't consider C. The default top-p value is 0.95. Specify a lower value for less random responses and a higher value for more random responses. Acceptable value is [0.0, 1.0], default to 0.95.
    },
    "name": "A String", # Output only. Identifier. The resource name of the generator. Format: `projects//locations//generators/`
    "summarizationContext": { # Summarization context that customer can configure. # Input of prebuilt Summarization feature.
      "fewShotExamples": [ # Optional. List of few shot examples.
        { # Providing examples in the generator (i.e. building a few-shot generator) helps convey the desired format of the LLM response.
          "conversationContext": { # Context of the conversation, including transcripts. # Optional. Conversation transcripts.
            "messageEntries": [ # Optional. List of message transcripts in the conversation.
              { # Represents a message entry of a conversation.
                "createTime": "A String", # Optional. Create time of the message entry.
                "languageCode": "A String", # Optional. The language of the text. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes.
                "role": "A String", # Optional. Participant role of the message.
                "text": "A String", # Optional. Transcript content of the message.
              },
            ],
          },
          "extraInfo": { # Optional. Key is the placeholder field name in input, value is the value of the placeholder. E.g. instruction contains "@price", and ingested data has <"price", "10">
            "a_key": "A String",
          },
          "output": { # Suggestion generated using a Generator. # Required. Example output of the model.
            "summarySuggestion": { # Suggested summary of the conversation. # Optional. Suggested summary.
              "summarySections": [ # Required. All the parts of generated summary.
                { # A component of the generated summary.
                  "section": "A String", # Required. Name of the section.
                  "summary": "A String", # Required. Summary text for the section.
                },
              ],
            },
          },
          "summarizationSectionList": { # List of summarization sections. # Summarization sections.
            "summarizationSections": [ # Optional. Summarization sections.
              { # Represents the section of summarization.
                "definition": "A String", # Optional. Definition of the section, for example, "what the customer needs help with or has question about."
                "key": "A String", # Optional. Name of the section, for example, "situation".
                "type": "A String", # Optional. Type of the summarization section.
              },
            ],
          },
        },
      ],
      "outputLanguageCode": "A String", # Optional. The target language of the generated summary. The language code for conversation will be used if this field is empty. Supported 2.0 and later versions.
      "summarizationSections": [ # Optional. List of sections. Note it contains both predefined section sand customer defined sections.
        { # Represents the section of summarization.
          "definition": "A String", # Optional. Definition of the section, for example, "what the customer needs help with or has question about."
          "key": "A String", # Optional. Name of the section, for example, "situation".
          "type": "A String", # Optional. Type of the summarization section.
        },
      ],
      "version": "A String", # Optional. Version of the feature. If not set, default to latest version. Current candidates are ["1.0"].
    },
    "triggerEvent": "A String", # Optional. The trigger event of the generator. It defines when the generator is triggered in a conversation.
    "updateTime": "A String", # Output only. Update time of this generator.
  },
  "generatorName": "A String", # The resource name of the existing created generator. Format: `projects//locations//generators/`
  "triggerEvents": [ # Optional. A list of trigger events. Generator will be triggered only if it's trigger event is included here.
    "A String",
  ],
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format
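
For reference, a request body matching the schema above could be assembled in Python roughly as follows. This is a minimal sketch for the prebuilt summarization feature with an inline generator; the field names come from the schema, while the message texts, roles, section keys, enum values, and inference parameters are illustrative assumptions rather than required values.

# A hypothetical GenerateStatelessSuggestion request body (values are placeholders).
body = {
    "conversationContext": {
        "messageEntries": [
            {"role": "END_USER", "text": "My order never arrived.", "languageCode": "en-US"},
            {"role": "HUMAN_AGENT", "text": "Sorry about that, let me check the tracking.", "languageCode": "en-US"},
        ],
    },
    "generator": {
        "description": "Inline summarization generator (illustrative).",
        "inferenceParameter": {
            "maxOutputTokens": 256,  # assumed token budget
            "temperature": 0.2,      # lower temperature for more deterministic summaries
            "topK": 40,
            "topP": 0.95,
        },
        "summarizationContext": {
            "outputLanguageCode": "en-US",
            "summarizationSections": [
                # "type" values are illustrative; consult the SummarizationSection type enum.
                {"key": "situation", "definition": "What the customer needs help with.", "type": "SITUATION"},
                {"key": "resolution", "definition": "How the agent resolved the issue.", "type": "RESOLUTION"},
            ],
        },
    },
}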

Returns:
  An object of the form:

    { # The response message for Conversations.GenerateStatelessSuggestion.
  "generatorSuggestion": { # Suggestion generated using a Generator. # Required. Generated suggestion for a conversation.
    "summarySuggestion": { # Suggested summary of the conversation. # Optional. Suggested summary.
      "summarySections": [ # Required. All the parts of generated summary.
        { # A component of the generated summary.
          "section": "A String", # Required. Name of the section.
          "summary": "A String", # Required. Summary text for the section.
        },
      ],
    },
  },
}
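
As a usage sketch, assuming the google-api-python-client discovery client for the v2 surface, Application Default Credentials, and an already-created generator (the project, location, and generator IDs below are placeholders), a call might look like this:

from googleapiclient.discovery import build

# Build the Dialogflow client via the discovery service (uses default credentials).
service = build("dialogflow", "v2")

parent = "projects/my-project/locations/global"  # placeholder project/location
body = {
    "generatorName": "projects/my-project/locations/global/generators/my-generator",  # placeholder
    "conversationContext": {
        "messageEntries": [
            {"role": "END_USER", "text": "I need to reset my password.", "languageCode": "en-US"},
        ],
    },
}

response = (
    service.projects()
    .locations()
    .statelessSuggestion()
    .generate(parent=parent, body=body)
    .execute()
)

# For a summarization generator, the suggestion carries summarySections.
summary = response["generatorSuggestion"].get("summarySuggestion", {})
for section in summary.get("summarySections", []):
    print(section["section"], "->", section["summary"])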