Class GenerationConfig.Builder

    • Constructor Detail

      • GenerationConfig.Builder

        GenerationConfig.Builder()
    • Method Detail

      • responseJsonSchema

         abstract GenerationConfig.Builder responseJsonSchema(Object responseJsonSchema)

        Setter for responseJsonSchema.

        responseJsonSchema: Output schema of the generated response. This is an alternative to `response_schema` that accepts [JSON Schema](https://json-schema.org/).
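
        For illustration, a plain JSON Schema can be built as nested `Map`s and passed to this setter. This is a sketch only: the `GenerationConfig.builder()` entry point and the accepted schema shape are assumptions based on this reference, not a definitive recipe.

```java
import java.util.List;
import java.util.Map;

public class ResponseJsonSchemaExample {
    // A plain JSON Schema (https://json-schema.org/) expressed as nested Maps.
    static Map<String, Object> recipeSchema() {
        return Map.of(
            "type", "object",
            "properties", Map.of(
                "name", Map.of("type", "string"),
                "servings", Map.of("type", "integer", "minimum", 1)),
            "required", List.of("name"));
    }

    public static void main(String[] args) {
        Map<String, Object> schema = recipeSchema();
        // Hypothetical usage; GenerationConfig.builder() is assumed:
        // GenerationConfig config =
        //     GenerationConfig.builder().responseJsonSchema(schema).build();
        System.out.println(schema.get("type")); // object
    }
}
```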

      • audioTimestamp

         abstract GenerationConfig.Builder audioTimestamp(boolean audioTimestamp)

        Setter for audioTimestamp.

        audioTimestamp: Optional. If enabled, audio timestamps will be included in the request to the model. This can be useful for synchronizing audio with other modalities in the response. This field is not supported in the Gemini API.

      • candidateCount

         abstract GenerationConfig.Builder candidateCount(Integer candidateCount)

        Setter for candidateCount.

        candidateCount: Optional. The number of candidate responses to generate. A higher `candidate_count` can provide more options to choose from, but it also consumes more resources. This can be useful for generating a variety of responses and selecting the best one.

      • enableAffectiveDialog

         abstract GenerationConfig.Builder enableAffectiveDialog(boolean enableAffectiveDialog)

        Setter for enableAffectiveDialog.

        enableAffectiveDialog: Optional. If enabled, the model will detect emotions and adapt its responses accordingly. For example, if the model detects that the user is frustrated, it may provide a more empathetic response. This field is not supported in the Gemini API.

      • frequencyPenalty

         abstract GenerationConfig.Builder frequencyPenalty(Float frequencyPenalty)

        Setter for frequencyPenalty.

        frequencyPenalty: Optional. Penalizes tokens based on their frequency in the generated text. A positive value helps reduce the repetition of words and phrases. Valid values range from -2.0 to 2.0.

      • logprobs

         abstract GenerationConfig.Builder logprobs(Integer logprobs)

        Setter for logprobs.

        logprobs: Optional. The number of top log probabilities to return for each token. This can be used to see which other tokens were considered likely candidates for a given position. A higher value will return more options, but it will also increase the size of the response.

      • maxOutputTokens

         abstract GenerationConfig.Builder maxOutputTokens(Integer maxOutputTokens)

        Setter for maxOutputTokens.

        maxOutputTokens: Optional. The maximum number of tokens to generate in the response. A token is approximately four characters. The default value varies by model. This parameter can be used to control the length of the generated text and prevent overly long responses.
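
        The four-characters-per-token figure is only a heuristic, but it supports a quick budget check before choosing a `maxOutputTokens` value; exact counts vary by model and tokenizer. A minimal sketch:

```java
public class TokenBudget {
    // Rough estimate based on the ~4 characters-per-token heuristic above.
    // Real counts vary by model and tokenizer; use a token-counting API,
    // if available, for exact numbers.
    static int estimateTokens(String text) {
        return (int) Math.ceil(text.length() / 4.0);
    }

    public static void main(String[] args) {
        System.out.println(estimateTokens("Summarize this article in two sentences.")); // 10
    }
}
```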

      • mediaResolution

         abstract GenerationConfig.Builder mediaResolution(MediaResolution mediaResolution)

        Setter for mediaResolution.

        mediaResolution: Optional. The token resolution at which input media content is sampled. This is used to control the trade-off between the quality of the response and the number of tokens used to represent the media. A higher resolution allows the model to perceive more detail, which can lead to a more nuanced response, but it will also use more tokens. This does not affect the image dimensions sent to the model.

      • mediaResolution

        @CanIgnoreReturnValue() GenerationConfig.Builder mediaResolution(MediaResolution.Known knownType)

        Setter for mediaResolution given a known enum.

        mediaResolution: Optional. The token resolution at which input media content is sampled. This is used to control the trade-off between the quality of the response and the number of tokens used to represent the media. A higher resolution allows the model to perceive more detail, which can lead to a more nuanced response, but it will also use more tokens. This does not affect the image dimensions sent to the model.

      • mediaResolution

        @CanIgnoreReturnValue() GenerationConfig.Builder mediaResolution(String mediaResolution)

        Setter for mediaResolution given a string.

        mediaResolution: Optional. The token resolution at which input media content is sampled. This is used to control the trade-off between the quality of the response and the number of tokens used to represent the media. A higher resolution allows the model to perceive more detail, which can lead to a more nuanced response, but it will also use more tokens. This does not affect the image dimensions sent to the model.

      • presencePenalty

         abstract GenerationConfig.Builder presencePenalty(Float presencePenalty)

        Setter for presencePenalty.

        presencePenalty: Optional. Penalizes tokens that have already appeared in the generated text. A positive value encourages the model to generate more diverse and less repetitive text. Valid values range from -2.0 to 2.0.
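
        Taken together, the two penalty setters can be sketched as follows. This is illustrative only; the `GenerationConfig.builder()` entry point is assumed from this reference.

```java
// Sketch, not a definitive recipe; both values must lie in [-2.0, 2.0].
GenerationConfig config =
    GenerationConfig.builder()
        .frequencyPenalty(0.5f)  // damp frequently repeated tokens
        .presencePenalty(0.3f)   // nudge the model toward new topics
        .build();
```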

      • responseLogprobs

         abstract GenerationConfig.Builder responseLogprobs(boolean responseLogprobs)

        Setter for responseLogprobs.

        responseLogprobs: Optional. If set to true, the log probabilities of the output tokens are returned. Log probabilities are the logarithm of the probability of a token appearing in the output. A higher log probability means the token is more likely to be generated. This can be useful for analyzing the model's confidence in its own output and for debugging.

      • responseMimeType

         abstract GenerationConfig.Builder responseMimeType(String responseMimeType)

        Setter for responseMimeType.

        responseMimeType: Optional. The IANA standard MIME type of the response. The model will generate output that conforms to this MIME type. Supported values include 'text/plain' (default) and 'application/json'. The model needs to be prompted to output the appropriate response type; otherwise, the behavior is undefined. This is a preview feature.

      • responseModalities

         abstract GenerationConfig.Builder responseModalities(List<Modality> responseModalities)

        Setter for responseModalities.

        responseModalities: Optional. The modalities of the response. The model will generate a response that includes all the specified modalities. For example, if this is set to `[TEXT, IMAGE]`, the response will include both text and an image.

      • responseModalities

        @CanIgnoreReturnValue() GenerationConfig.Builder responseModalities(Array<Modality> responseModalities)

        Setter for responseModalities.

        responseModalities: Optional. The modalities of the response. The model will generate a response that includes all the specified modalities. For example, if this is set to `[TEXT, IMAGE]`, the response will include both text and an image.

      • responseModalities

        @CanIgnoreReturnValue() GenerationConfig.Builder responseModalities(Array<String> responseModalities)

        Setter for responseModalities given a varargs of strings.

        responseModalities: Optional. The modalities of the response. The model will generate a response that includes all the specified modalities. For example, if this is set to `[TEXT, IMAGE]`, the response will include both text and an image.

      • responseModalities

        @CanIgnoreReturnValue() GenerationConfig.Builder responseModalities(Array<Modality.Known> knownTypes)

        Setter for responseModalities given a varargs of known enums.

        responseModalities: Optional. The modalities of the response. The model will generate a response that includes all the specified modalities. For example, if this is set to `[TEXT, IMAGE]`, the response will include both text and an image.

      • responseModalitiesFromKnown

        @CanIgnoreReturnValue() GenerationConfig.Builder responseModalitiesFromKnown(List<Modality.Known> knownTypes)

        Setter for responseModalities given a list of known enums.

        responseModalities: Optional. The modalities of the response. The model will generate a response that includes all the specified modalities. For example, if this is set to `[TEXT, IMAGE]`, the response will include both text and an image.

      • responseModalitiesFromString

        @CanIgnoreReturnValue() GenerationConfig.Builder responseModalitiesFromString(List<String> responseModalities)

        Setter for responseModalities given a list of strings.

        responseModalities: Optional. The modalities of the response. The model will generate a response that includes all the specified modalities. For example, if this is set to `[TEXT, IMAGE]`, the response will include both text and an image.
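
        A sketch of the string-based varargs overload, assuming the `GenerationConfig.builder()` entry point; the modality names follow the `[TEXT, IMAGE]` example above.

```java
// Request a response containing both text and an image.
GenerationConfig config =
    GenerationConfig.builder()
        .responseModalities("TEXT", "IMAGE")
        .build();
```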

      • responseSchema

         abstract GenerationConfig.Builder responseSchema(Schema responseSchema)

        Setter for responseSchema.

        responseSchema: Optional. Lets you specify a schema for the model's response, ensuring that the output conforms to a particular structure. This is useful for generating structured data such as JSON. The schema is a subset of the [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). When this field is set, you must also set `response_mime_type` to `application/json`.

      • responseSchema

        @CanIgnoreReturnValue() GenerationConfig.Builder responseSchema(Schema.Builder responseSchemaBuilder)

        Setter for responseSchema builder.

        responseSchema: Optional. Lets you specify a schema for the model's response, ensuring that the output conforms to a particular structure. This is useful for generating structured data such as JSON. The schema is a subset of the [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). When this field is set, you must also set `response_mime_type` to `application/json`.
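
        A sketch combining `responseSchema` with the required `responseMimeType`. `Schema.builder()` and its `type(...)` setter are assumptions based on this reference's naming conventions and may differ from the actual `Schema` API.

```java
// Structured JSON output: setting a response schema requires
// response_mime_type to be application/json.
GenerationConfig config =
    GenerationConfig.builder()
        .responseMimeType("application/json")
        .responseSchema(Schema.builder().type("OBJECT"))  // builder overload above
        .build();
```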

      • seed

         abstract GenerationConfig.Builder seed(Integer seed)

        Setter for seed.

        seed: Optional. A seed for the random number generator. Setting a seed makes the model's output mostly deterministic: for a given prompt and parameters (such as temperature and top_p), the model will produce the same response on every run. However, absolute determinism is not guaranteed. This is different from parameters like `temperature`, which control the *level* of randomness; `seed` ensures that the "random" choices the model makes are repeated on every run, which makes it essential for testing and reproducible results.
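
        For example (builder entry point assumed from this reference), a fixed seed makes runs repeatable while `temperature` still sets the level of randomness being reproduced:

```java
// Same prompt + same parameters + same seed => (mostly) the same output.
GenerationConfig config =
    GenerationConfig.builder()
        .seed(42)
        .temperature(1.0f)  // randomness level is unchanged;
                            // the seed just makes it repeatable
        .build();
```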

      • stopSequences

         abstract GenerationConfig.Builder stopSequences(List<String> stopSequences)

        Setter for stopSequences.

        stopSequences: Optional. A list of character sequences that will stop the model from generating further tokens. If a stop sequence is generated, the output will end at that point. This is useful for controlling the length and structure of the output. For example, you can use ["\n", "###"] to stop generation at a new line or a specific marker.

      • stopSequences

        @CanIgnoreReturnValue() GenerationConfig.Builder stopSequences(Array<String> stopSequences)

        Setter for stopSequences.

        stopSequences: Optional. A list of character sequences that will stop the model from generating further tokens. If a stop sequence is generated, the output will end at that point. This is useful for controlling the length and structure of the output. For example, you can use ["\n", "###"] to stop generation at a new line or a specific marker.
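
        To illustrate the semantics only: output ends at the earliest occurrence of any stop sequence. This is a conceptual sketch, not the SDK's implementation, and whether the stop sequence itself is kept in the output is model-dependent; this sketch drops it.

```java
import java.util.List;

public class StopSequenceDemo {
    // Truncates text at the earliest occurrence of any stop sequence.
    static String truncateAtStop(String text, List<String> stops) {
        int cut = text.length();
        for (String stop : stops) {
            int i = text.indexOf(stop);
            if (i >= 0 && i < cut) cut = i;
        }
        return text.substring(0, cut);
    }

    public static void main(String[] args) {
        String generated = "Step 1\n### Step 2";
        System.out.println(truncateAtStop(generated, List.of("\n", "###"))); // Step 1
    }
}
```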

      • temperature

         abstract GenerationConfig.Builder temperature(Float temperature)

        Setter for temperature.

        temperature: Optional. Controls the randomness of the output. A higher temperature results in more creative and diverse responses, while a lower temperature makes the output more predictable and focused. The valid range is (0.0, 2.0].

      • thinkingConfig

         abstract GenerationConfig.Builder thinkingConfig(ThinkingConfig thinkingConfig)

        Setter for thinkingConfig.

        thinkingConfig: Optional. Configuration for thinking features. An error will be returned if this field is set for models that don't support thinking.

      • thinkingConfig

        @CanIgnoreReturnValue() GenerationConfig.Builder thinkingConfig(ThinkingConfig.Builder thinkingConfigBuilder)

        Setter for thinkingConfig builder.

        thinkingConfig: Optional. Configuration for thinking features. An error will be returned if this field is set for models that don't support thinking.

      • topK

         abstract GenerationConfig.Builder topK(Float topK)

        Setter for topK.

        topK: Optional. Specifies the top-k sampling threshold. The model considers only the k most probable tokens for the next token. This can be useful for generating more coherent and less random text. For example, a `top_k` of 40 means the model chooses the next token from the 40 most likely tokens.

      • topP

         abstract GenerationConfig.Builder topP(Float topP)

        Setter for topP.

        topP: Optional. Specifies the nucleus sampling threshold. The model considers only the smallest set of tokens whose cumulative probability is at least `top_p`. This helps generate more diverse and less repetitive responses. For example, a `top_p` of 0.9 means the model samples from the smallest set of tokens whose cumulative probability reaches 0.9. It is recommended to adjust either `temperature` or `top_p`, but not both.
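
        The selection rule can be illustrated with a toy distribution: keep the highest-probability tokens until their cumulative probability reaches `top_p`. This is a conceptual sketch of nucleus sampling, not the SDK's implementation.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class NucleusDemo {
    // Returns the smallest prefix of tokens (already sorted by probability,
    // highest first) whose cumulative probability reaches topP.
    static Map<String, Double> nucleus(Map<String, Double> sorted, double topP) {
        Map<String, Double> kept = new LinkedHashMap<>();
        double cum = 0.0;
        for (Map.Entry<String, Double> e : sorted.entrySet()) {
            kept.put(e.getKey(), e.getValue());
            cum += e.getValue();
            if (cum >= topP) break;
        }
        return kept;
    }

    public static void main(String[] args) {
        Map<String, Double> probs = new LinkedHashMap<>();
        probs.put("the", 0.5);
        probs.put("a", 0.3);
        probs.put("an", 0.15);
        probs.put("this", 0.05);
        System.out.println(nucleus(probs, 0.9).keySet()); // [the, a, an]
    }
}
```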

      • enableEnhancedCivicAnswers

         abstract GenerationConfig.Builder enableEnhancedCivicAnswers(boolean enableEnhancedCivicAnswers)

        Setter for enableEnhancedCivicAnswers.

        enableEnhancedCivicAnswers: Optional. Enables enhanced civic answers. It may not be available for all models. This field is not supported in Vertex AI.