Interface GenerationConfig

Configuration options that control how the model generates and formats its output.

interface GenerationConfig {
    audioTimestamp?: boolean;
    candidateCount?: number;
    enableAffectiveDialog?: boolean;
    enableEnhancedCivicAnswers?: boolean;
    frequencyPenalty?: number;
    logprobs?: number;
    maxOutputTokens?: number;
    mediaResolution?: MediaResolution;
    modelSelectionConfig?: ModelSelectionConfig;
    presencePenalty?: number;
    responseJsonSchema?: unknown;
    responseLogprobs?: boolean;
    responseMimeType?: string;
    responseModalities?: Modality[];
    responseSchema?: Schema;
    routingConfig?: GenerationConfigRoutingConfig;
    seed?: number;
    speechConfig?: SpeechConfig;
    stopSequences?: string[];
    temperature?: number;
    thinkingConfig?: ThinkingConfig;
    topK?: number;
    topP?: number;
}
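A minimal usage sketch is shown below. The `GenerationConfigSketch` interface is a hypothetical local mirror of a few fields for illustration only; in real code the `GenerationConfig` type is imported from the SDK package.

```typescript
// Hypothetical local mirror of a subset of GenerationConfig,
// declared here only so the example is self-contained.
interface GenerationConfigSketch {
  candidateCount?: number;
  maxOutputTokens?: number;
  temperature?: number;
  topK?: number;
  topP?: number;
  stopSequences?: string[];
  seed?: number;
}

// A deterministic-leaning configuration: low temperature, a fixed
// seed, and an explicit cap on output length.
const config: GenerationConfigSketch = {
  temperature: 0.2,
  topP: 0.95,
  maxOutputTokens: 1024,
  stopSequences: ["\n\n"],
  seed: 42,
};
```

All fields are optional; any field left unset falls back to the model's default.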

Properties

audioTimestamp?: boolean

Optional. If enabled, audio timestamps will be included in the request to the model. This field is not supported in Gemini API.

candidateCount?: number

Optional. Number of candidates to generate.

enableAffectiveDialog?: boolean

Optional. If enabled, the model will detect emotions and adapt its responses accordingly. This field is not supported in Gemini API.

enableEnhancedCivicAnswers?: boolean

Optional. Enables enhanced civic answers. It may not be available for all models. This field is not supported in Vertex AI.

frequencyPenalty?: number

Optional. Frequency penalty. Positive values penalize tokens in proportion to how often they have already appeared in the generated text, reducing verbatim repetition.

logprobs?: number

Optional. Number of top candidate tokens to return log probabilities for at each generation step. Only meaningful when responseLogprobs is enabled.

maxOutputTokens?: number

Optional. The maximum number of output tokens to generate per message.

mediaResolution?: MediaResolution

Optional. If specified, this media resolution will be used for input media.

modelSelectionConfig?: ModelSelectionConfig

Optional. Config for model selection.

presencePenalty?: number

Optional. Presence penalty. Positive values penalize tokens that have already appeared in the generated text, regardless of how often, encouraging the model to introduce new content.

responseJsonSchema?: unknown

Output schema of the generated response. This is an alternative to response_schema that accepts JSON Schema.

responseLogprobs?: boolean

Optional. If true, log probability results are exported in the response.

responseMimeType?: string

Optional. Output response MIME type of the generated candidate text. Supported MIME types:
- text/plain (default): text output.
- application/json: JSON response in the candidates.

The model needs to be prompted to output the appropriate response type; otherwise the behavior is undefined. This is a preview feature.

responseModalities?: Modality[]

Optional. The modalities of the response.

responseSchema?: Schema

Optional. The Schema object allows the definition of input and output data types. These types can be objects, but also primitives and arrays. It represents a select subset of an OpenAPI 3.0 schema object. If set, a compatible responseMimeType must also be set. Compatible MIME type: application/json (schema for JSON response).
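The pairing described above can be sketched as follows. The `SchemaSketch` and `StructuredOutputConfig` types are hypothetical local mirrors for illustration; the real `Schema` type comes from the SDK, and the uppercase type names follow the API's schema-type convention.

```typescript
// Hypothetical local mirror of a subset of Schema, declared here
// only so the example is self-contained.
interface SchemaSketch {
  type: string;
  properties?: Record<string, SchemaSketch>;
  items?: SchemaSketch;
  required?: string[];
}

interface StructuredOutputConfig {
  responseMimeType?: string;
  responseSchema?: SchemaSketch;
}

// responseSchema requires a compatible responseMimeType;
// application/json is the compatible choice.
const structured: StructuredOutputConfig = {
  responseMimeType: "application/json",
  responseSchema: {
    type: "OBJECT",
    properties: {
      name: { type: "STRING" },
      age: { type: "INTEGER" },
    },
    required: ["name"],
  },
};
```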

routingConfig?: GenerationConfigRoutingConfig

Optional. Routing configuration. This field is not supported in Gemini API.

seed?: number

Optional. Random seed. When fixed, repeated identical requests are more likely to return the same response, though determinism is not guaranteed.

speechConfig?: SpeechConfig

Optional. The speech generation config.

stopSequences?: string[]

Optional. A list of character sequences that stop output generation when produced. The stop sequence itself is not included in the response.
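The effect can be illustrated with a small sketch of the truncation the service applies (a simplified model, not the SDK's implementation): generation ends at the first occurrence of any stop sequence, which is excluded from the output.

```typescript
// Truncate text at the earliest occurrence of any stop sequence,
// excluding the stop sequence itself from the result.
function applyStopSequences(text: string, stops: string[]): string {
  let cut = text.length;
  for (const stop of stops) {
    const idx = text.indexOf(stop);
    if (idx !== -1 && idx < cut) cut = idx;
  }
  return text.slice(0, cut);
}

const truncated = applyStopSequences("Answer: 42\nEND\nextra", ["END"]);
// truncated === "Answer: 42\n"
```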

temperature?: number

Optional. Controls the randomness of predictions. Lower values make output more deterministic; higher values make it more diverse.

thinkingConfig?: ThinkingConfig

Optional. Config for thinking features. An error will be returned if this field is set for models that don't support thinking.

topK?: number

Optional. If specified, top-k sampling will be used: at each step the model samples only from the k most probable tokens.

topP?: number

Optional. If specified, nucleus sampling will be used: at each step the model samples from the smallest set of tokens whose cumulative probability is at least topP.
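What topP does can be sketched with a toy nucleus filter over an explicit probability distribution (an illustration of the idea, not the model's actual sampler):

```typescript
// Nucleus (top-p) filtering: keep the smallest set of highest-probability
// tokens whose cumulative probability reaches p, zero out the rest,
// then renormalize the survivors to sum to 1.
function topPFilter(probs: number[], p: number): number[] {
  // Pair each probability with its original index and sort descending.
  const order = probs
    .map((prob, i) => ({ prob, i }))
    .sort((a, b) => b.prob - a.prob);
  const kept: number[] = new Array(probs.length).fill(0);
  let cumulative = 0;
  for (const { prob, i } of order) {
    kept[i] = prob;
    cumulative += prob;
    if (cumulative >= p) break; // smallest prefix covering p
  }
  const total = kept.reduce((a, b) => a + b, 0);
  return kept.map((x) => x / total);
}

// With p = 0.85, only the two most likely tokens survive
// (0.6 + 0.3 covers the threshold); the tail is dropped.
const filtered = topPFilter([0.6, 0.3, 0.08, 0.02], 0.85);
```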