Package com.google.genai.types
Class GenerateContentConfig
java.lang.Object
com.google.genai.JsonSerializable
com.google.genai.types.GenerateContentConfig
Optional model configuration parameters.
For more information, see `Content generation parameters <https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/content-generation-parameters>`_.
Nested Class Summary

static class GenerateContentConfig.Builder
    Builder for GenerateContentConfig.
Constructor Summary

GenerateContentConfig()
Method Summary

audioTimestamp()
    If enabled, audio timestamp will be included in the request to the model.
abstract Optional<AutomaticFunctionCallingConfig> automaticFunctionCalling()
    The configuration for automatic function calling.
builder()
    Instantiates a builder for GenerateContentConfig.
cachedContent()
    Resource name of a context cache that can be used in subsequent requests.
candidateCount()
    Number of response variations to return.
frequencyPenalty()
    Positive values penalize tokens that repeatedly appear in the generated text, increasing the probability of generating more diverse content.
static GenerateContentConfig fromJson(String)
    Deserializes a JSON string to a GenerateContentConfig object.
abstract Optional<HttpOptions> httpOptions()
    Used to override HTTP request options.
labels()
    Labels with user-defined metadata to break down billed charges.
logprobs()
    Number of top candidate tokens to return the log probabilities for at each generation step.
maxOutputTokens()
    Maximum number of tokens that can be generated in the response.
abstract Optional<MediaResolution> mediaResolution()
    If specified, the media resolution specified will be used.
abstract Optional<ModelSelectionConfig> modelSelectionConfig()
    Configuration for model selection.
presencePenalty()
    Positive values penalize tokens that already appear in the generated text, increasing the probability of generating more diverse content.
responseJsonSchema()
    Optional. Output schema of the generated response.
responseLogprobs()
    Whether to return the log probabilities of the tokens that were chosen by the model at each step.
responseMimeType()
    Output response mimetype of the generated candidate text.
responseModalities()
    The requested modalities of the response.
responseSchema()
    The `Schema` object allows the definition of input and output data types.
abstract Optional<GenerationConfigRoutingConfig> routingConfig()
    Configuration for model router requests.
abstract Optional<List<SafetySetting>> safetySettings()
    Safety settings in the request to block unsafe content in the response.
seed()
    When ``seed`` is fixed to a specific number, the model makes a best effort to provide the same response for repeated requests.
shouldReturnHttpResponse()
    If true, the raw HTTP response will be returned in the 'sdk_http_response' field.
abstract Optional<SpeechConfig> speechConfig()
    The speech generation configuration.
stopSequences()
    List of strings that tells the model to stop generating text if one of the strings is encountered in the response.
systemInstruction()
    Instructions for the model to steer it toward better performance.
temperature()
    Value that controls the degree of randomness in token selection.
abstract Optional<ThinkingConfig> thinkingConfig()
    The thinking features configuration.
abstract GenerateContentConfig.Builder toBuilder()
    Creates a builder with the same values as this instance.
abstract Optional<ToolConfig> toolConfig()
    Associates model output to a specific function call.
tools()
    Code that enables the system to interact with external systems to perform an action outside of the knowledge and scope of the model.
topK()
    For each token selection step, the ``top_k`` tokens with the highest probabilities are sampled.
topP()
    Tokens are selected from the most to least probable until the sum of their probabilities equals this value.

Methods inherited from class com.google.genai.JsonSerializable
stringToJsonNode, toJson
Constructor Details
GenerateContentConfig
public GenerateContentConfig()
Method Details
httpOptions
Used to override HTTP request options.
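A minimal sketch of overriding HTTP options through the builder; the HttpOptions builder fields shown here (baseUrl, apiVersion, headers) and all values are assumptions for illustration, not a definitive API reference.

    import com.google.genai.types.GenerateContentConfig;
    import com.google.genai.types.HttpOptions;
    import java.util.Map;

    class HttpOptionsSketch {
      // Route requests through a specific API version and attach an extra header.
      // The HttpOptions fields used (baseUrl, apiVersion, headers) are assumptions
      // to verify against the HttpOptions documentation; values are illustrative.
      static GenerateContentConfig withHttpOverrides() {
        return GenerateContentConfig.builder()
            .httpOptions(HttpOptions.builder()
                .baseUrl("https://generativelanguage.googleapis.com")
                .apiVersion("v1beta")
                .headers(Map.of("x-example-header", "example-value"))
                .build())
            .build();
      }
    }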
shouldReturnHttpResponse
If true, the raw HTTP response will be returned in the 'sdk_http_response' field.
systemInstruction
Instructions for the model to steer it toward better performance. For example, "Answer as concisely as possible" or "Don't use technical terms in your response".
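A minimal sketch of supplying a system instruction, assuming the builder setter mirrors this accessor and that the instruction is wrapped as a Content via Content.fromParts and Part.fromText; the wording is illustrative.

    import com.google.genai.types.Content;
    import com.google.genai.types.GenerateContentConfig;
    import com.google.genai.types.Part;

    class SystemInstructionSketch {
      // Steer the model with a system instruction (wording is illustrative).
      static GenerateContentConfig concise() {
        return GenerateContentConfig.builder()
            .systemInstruction(Content.fromParts(Part.fromText("Answer as concisely as possible.")))
            .build();
      }
    }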
temperature
Value that controls the degree of randomness in token selection. Lower temperatures are good for prompts that require a less open-ended or creative response, while higher temperatures can lead to more diverse or creative results.
topP
Tokens are selected from the most to least probable until the sum of their probabilities equals this value. Use a lower value for less random responses and a higher value for more random responses.
topK
For each token selection step, the ``top_k`` tokens with the highest probabilities are sampled. Then tokens are further filtered based on ``top_p`` with the final token selected using temperature sampling. Use a lower number for less random responses and a higher number for more random responses.
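The three sampling parameters above (temperature, topP, topK) are typically set together on the builder; a minimal sketch with illustrative values, assuming the setters mirror the accessor names.

    import com.google.genai.types.GenerateContentConfig;

    class SamplingConfigSketch {
      // Conservative sampling: low temperature, tight nucleus, small candidate pool.
      // Numeric values are illustrative, not recommendations.
      static GenerateContentConfig conservative() {
        return GenerateContentConfig.builder()
            .temperature(0.2f) // less random token selection
            .topP(0.8f)        // sample from the smallest token set whose probabilities sum to 0.8
            .topK(20f)         // consider only the 20 most probable tokens per step
            .build();
      }
    }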
candidateCount
Number of response variations to return.
maxOutputTokens
Maximum number of tokens that can be generated in the response.
stopSequences
List of strings that tells the model to stop generating text if one of the strings is encountered in the response.
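A sketch combining the output-length controls above; the numbers and stop string are illustrative, and the list setter is assumed to accept a java.util.List.

    import com.google.genai.types.GenerateContentConfig;
    import java.util.List;

    class OutputLimitsSketch {
      // Cap the response length and stop early on an illustrative sentinel string.
      static GenerateContentConfig limited() {
        return GenerateContentConfig.builder()
            .candidateCount(1)                       // a single response variation
            .maxOutputTokens(256)                    // hard cap on generated tokens
            .stopSequences(List.of("END_OF_ANSWER")) // illustrative stop string
            .build();
      }
    }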
responseLogprobs
Whether to return the log probabilities of the tokens that were chosen by the model at each step.
logprobs
Number of top candidate tokens to return the log probabilities for at each generation step.
presencePenalty
Positive values penalize tokens that already appear in the generated text, increasing the probability of generating more diverse content.
frequencyPenalty
Positive values penalize tokens that repeatedly appear in the generated text, increasing the probability of generating more diverse content.
seed
When ``seed`` is fixed to a specific number, the model makes a best effort to provide the same response for repeated requests. By default, a random number is used.
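A sketch combining the determinism, log-probability, and penalty options above, assuming builder setters named after these accessors; all values are illustrative.

    import com.google.genai.types.GenerateContentConfig;

    class DeterminismAndLogprobsSketch {
      // Best-effort reproducibility plus token-level log probabilities.
      static GenerateContentConfig config() {
        return GenerateContentConfig.builder()
            .seed(12345)            // fixed seed: best-effort repeatable responses
            .responseLogprobs(true) // return log probabilities of the chosen tokens
            .logprobs(5)            // also return the top 5 candidates per step
            .presencePenalty(0.5f)  // discourage tokens that have already appeared
            .frequencyPenalty(0.5f) // discourage tokens that appear repeatedly
            .build();
      }
    }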
responseMimeType
Output response mimetype of the generated candidate text. Supported mimetypes: `text/plain` (default): text output; `application/json`: JSON response in the candidates. The model needs to be prompted to output the appropriate response type, otherwise the behavior is undefined. This is a preview feature.
responseSchema
The `Schema` object allows the definition of input and output data types. These types can be objects, but also primitives and arrays. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). If set, a compatible response_mime_type must also be set. Compatible mimetypes: `application/json`: Schema for JSON response.
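A sketch of constrained JSON output using responseSchema together with responseMimeType; the Schema builder fields used (type, properties, required) and the string form of the type values are assumptions to verify against the Schema documentation.

    import com.google.genai.types.GenerateContentConfig;
    import com.google.genai.types.Schema;
    import java.util.List;
    import java.util.Map;

    class JsonResponseSchemaSketch {
      // Constrain the response to JSON matching a small object schema (a subset of OpenAPI 3.0).
      static GenerateContentConfig recipeConfig() {
        Schema recipe = Schema.builder()
            .type("OBJECT") // assumed string overload of the type setter
            .properties(Map.of(
                "recipeName", Schema.builder().type("STRING").build(),
                "servings", Schema.builder().type("INTEGER").build()))
            .required(List.of("recipeName"))
            .build();
        return GenerateContentConfig.builder()
            .responseMimeType("application/json") // a compatible response_mime_type is required
            .responseSchema(recipe)
            .build();
      }
    }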
responseJsonSchema
Optional. Output schema of the generated response. This is an alternative to `response_schema` that accepts [JSON Schema](https://json-schema.org/). If set, `response_schema` must be omitted, but `response_mime_type` is required. While the full JSON Schema may be sent, not all features are supported. Specifically, only the following properties are supported: `$id`, `$defs`, `$ref`, `$anchor`, `type`, `format`, `title`, `description`, `enum` (for strings and numbers), `items`, `prefixItems`, `minItems`, `maxItems`, `minimum`, `maximum`, `anyOf`, `oneOf` (interpreted the same as `anyOf`), `properties`, `additionalProperties`, and `required`. The non-standard `propertyOrdering` property may also be set. Cyclic references are unrolled to a limited degree and, as such, may only be used within non-required properties. (Nullable properties are not sufficient.) If `$ref` is set on a sub-schema, no other properties may be set, except for those whose names begin with `$`.
routingConfig
Configuration for model router requests.
modelSelectionConfig
Configuration for model selection.
safetySettings
Safety settings in the request to block unsafe content in the response.
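A sketch of per-category safety thresholds; the category and threshold strings are common Gemini API values, and the string overloads of the SafetySetting setters are an assumption.

    import com.google.genai.types.GenerateContentConfig;
    import com.google.genai.types.SafetySetting;
    import java.util.List;

    class SafetySettingsSketch {
      // Tighten blocking for two harm categories; values are illustrative.
      static GenerateContentConfig strictSafety() {
        return GenerateContentConfig.builder()
            .safetySettings(List.of(
                SafetySetting.builder()
                    .category("HARM_CATEGORY_HATE_SPEECH")
                    .threshold("BLOCK_LOW_AND_ABOVE")
                    .build(),
                SafetySetting.builder()
                    .category("HARM_CATEGORY_DANGEROUS_CONTENT")
                    .threshold("BLOCK_ONLY_HIGH")
                    .build()))
            .build();
      }
    }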
tools
Code that enables the system to interact with external systems to perform an action outside of the knowledge and scope of the model.
toolConfig
Associates model output to a specific function call.
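A sketch that registers the tools and toolConfig fields above for function calling; the FunctionDeclaration, Tool, ToolConfig, and FunctionCallingConfig builder calls and the "ANY" mode string are assumptions, and getWeather is a hypothetical function.

    import com.google.genai.types.FunctionCallingConfig;
    import com.google.genai.types.FunctionDeclaration;
    import com.google.genai.types.GenerateContentConfig;
    import com.google.genai.types.Schema;
    import com.google.genai.types.Tool;
    import com.google.genai.types.ToolConfig;
    import java.util.List;
    import java.util.Map;

    class FunctionToolSketch {
      // Declare a hypothetical getWeather function and force the model to call a function.
      static GenerateContentConfig withFunctionTool() {
        FunctionDeclaration getWeather = FunctionDeclaration.builder()
            .name("getWeather") // hypothetical function name
            .description("Returns the current weather for a city.")
            .parameters(Schema.builder()
                .type("OBJECT") // assumed string overload of the type setter
                .properties(Map.of("city", Schema.builder().type("STRING").build()))
                .required(List.of("city"))
                .build())
            .build();
        return GenerateContentConfig.builder()
            .tools(List.of(Tool.builder().functionDeclarations(List.of(getWeather)).build()))
            .toolConfig(ToolConfig.builder()
                .functionCallingConfig(FunctionCallingConfig.builder()
                    .mode("ANY") // assumed string overload; "AUTO" lets the model decide
                    .build())
                .build())
            .build();
      }
    }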
labels
Labels with user-defined metadata to break down billed charges.
cachedContent
Resource name of a context cache that can be used in subsequent requests.
responseModalities
The requested modalities of the response. Represents the set of modalities that the model can return.
mediaResolution
If specified, this media resolution will be used.
speechConfig
The speech generation configuration.
audioTimestamp
If enabled, audio timestamp will be included in the request to the model.
automaticFunctionCalling
The configuration for automatic function calling.
thinkingConfig
The thinking features configuration.
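A sketch enabling thinking features, assuming ThinkingConfig exposes includeThoughts and thinkingBudget builder setters; the budget value is illustrative and only thinking-capable models honor these settings.

    import com.google.genai.types.GenerateContentConfig;
    import com.google.genai.types.ThinkingConfig;

    class ThinkingConfigSketch {
      // Request thought summaries and cap the tokens spent on thinking.
      static GenerateContentConfig withThinking() {
        return GenerateContentConfig.builder()
            .thinkingConfig(ThinkingConfig.builder()
                .includeThoughts(true) // assumed field name: return thought summaries
                .thinkingBudget(1024)  // assumed field name: illustrative thinking token budget
                .build())
            .build();
      }
    }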
builder
Instantiates a builder for GenerateContentConfig.
toBuilder
Creates a builder with the same values as this instance.
fromJson
Deserializes a JSON string to a GenerateContentConfig object.
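Putting the lifecycle methods on this page together: builder(), toBuilder(), and a JSON round trip via toJson() (inherited from JsonSerializable) and fromJson(String). The wrapper class, model name, and no-argument Client setup are illustrative assumptions; client.models.generateContent is assumed to accept a model name, a prompt string, and this config.

    import com.google.genai.Client;
    import com.google.genai.types.GenerateContentConfig;
    import com.google.genai.types.GenerateContentResponse;

    public class GenerateContentConfigLifecycle {
      public static void main(String[] args) {
        // Build a base configuration.
        GenerateContentConfig base = GenerateContentConfig.builder()
            .temperature(0.3f)
            .maxOutputTokens(512)
            .build();

        // toBuilder(): copy the existing values and override one of them.
        GenerateContentConfig creative = base.toBuilder()
            .temperature(0.9f)
            .build();

        // toJson() / fromJson(): round-trip the config through its JSON form.
        String json = creative.toJson();
        GenerateContentConfig restored = GenerateContentConfig.fromJson(json);

        // Use the config in a request; the model name is illustrative and the no-arg Client
        // is assumed to read credentials from the environment.
        Client client = new Client();
        GenerateContentResponse response =
            client.models.generateContent("gemini-2.0-flash", "Tell me a short story.", restored);
        System.out.println(response.text());
      }
    }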