Configuration parameters for model interactions.

interface GenerationConfig {
    max_output_tokens?: number;
    seed?: number;
    speech_config?: GeminiNextGenAPIClient.Interactions.SpeechConfig[];
    stop_sequences?: string[];
    temperature?: number;
    thinking_level?: GeminiNextGenAPIClient.Interactions.ThinkingLevel;
    thinking_summaries?: "auto" | "none";
    tool_choice?: GeminiNextGenAPIClient.Interactions.ToolChoice;
    top_p?: number;
}
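
Every field is optional, so a GenerationConfig is normally built as a partial object literal and attached to an interaction request. The sketch below is illustrative only: the import path is an assumption about the SDK's module layout, and the values are examples rather than recommended defaults.

// Assumed import path; adjust to wherever the SDK actually exports this type.
import type { GenerationConfig } from "gemini-nextgen-api-client";

// Cap the response length and stop generation early at "END".
const config: GenerationConfig = {
    max_output_tokens: 1024,
    stop_sequences: ["END"],
};

// The object is then attached to an interaction request (for example as a
// generation_config field); the exact request shape is defined elsewhere in the SDK.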

Properties

max_output_tokens?: number

The maximum number of tokens to include in the response.

seed?: number

Seed used in decoding for reproducibility.

speech_config?: GeminiNextGenAPIClient.Interactions.SpeechConfig[]

Configuration for speech interaction.

stop_sequences?: string[]

A list of character sequences that will stop output generation.

temperature?: number

Controls the randomness of the output.

thinking_level?: GeminiNextGenAPIClient.Interactions.ThinkingLevel

The level of thought tokens that the model should generate.

thinking_summaries?: "auto" | "none"

Whether to include thought summaries in the response.
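
A minimal sketch combining the two thinking-related fields. thinking_level takes a value from GeminiNextGenAPIClient.Interactions.ThinkingLevel, which is documented separately, so only the summary behaviour is set here; the GenerationConfig import is assumed to be the same as in the earlier sketch.

// Include summaries of the model's thinking alongside the answer.
const withSummaries: GenerationConfig = {
    thinking_summaries: "auto",
};

// Omit thought summaries from the response.
const withoutSummaries: GenerationConfig = {
    thinking_summaries: "none",
};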

tool_choice?: GeminiNextGenAPIClient.Interactions.ToolChoice

The tool choice for the interaction.

top_p?: number

The maximum cumulative probability of tokens to consider when sampling.
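
The sampling parameters work together: temperature and top_p shape the token distribution that is sampled from, while seed is intended to make repeated runs reproducible. Below is a hedged sketch of two contrasting configurations, again assuming the GenerationConfig import from the first sketch and using illustrative values rather than recommended defaults.

// Low-variance, reproducible decoding.
const deterministic: GenerationConfig = {
    temperature: 0.2, // low randomness
    seed: 1234,       // intended to reproduce the same output across runs
};

// More exploratory sampling for open-ended tasks.
const creative: GenerationConfig = {
    temperature: 1.0, // flatter distribution, more varied output
    top_p: 0.95,      // sample only from the top 95% of probability mass
};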