Optional abortSignal: Abort signal which can be used to cancel the request.
NOTE: AbortSignal is a client-only operation. Using it to cancel an operation will not cancel the request in the service. You will still be charged usage for any applicable operations.
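A minimal cancellation sketch, assuming the @google/genai GoogleGenAI client and its models.generateContent call shape; the model name and timeout are illustrative:

```ts
import { GoogleGenAI } from '@google/genai';

// Sketch only: client construction and model name are assumptions,
// not prescribed by this reference.
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

async function generateWithTimeout(): Promise<void> {
  const controller = new AbortController();
  // Cancel the client-side request after 5 seconds. The service may still
  // process (and bill) the operation.
  const timer = setTimeout(() => controller.abort(), 5_000);
  try {
    const response = await ai.models.generateContent({
      model: 'gemini-2.0-flash',
      contents: 'Summarize the plot of Hamlet in two sentences.',
      config: { abortSignal: controller.signal },
    });
    console.log(response.text);
  } finally {
    clearTimeout(timer);
  }
}

generateWithTimeout().catch(console.error);
```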
Optional audioTimestamp: If enabled, audio timestamps will be included in the request to the model.
Optional automaticFunctionCalling: The configuration for automatic function calling.
Optional cachedContent: Resource name of a context cache that can be used in subsequent requests.
Optional candidateCount: Number of response variations to return.
Optional frequencyPenalty: Positive values penalize tokens that repeatedly appear in the generated text, increasing the probability of generating more diverse content.
Optional httpOptions: Used to override HTTP request options.
Optional imageConfig: The image generation configuration.
Optional labels: Labels with user-defined metadata to break down billed charges.
Optional logprobs: Number of top candidate tokens to return the log probabilities for at each generation step.
Optional maxOutputTokens: Maximum number of tokens that can be generated in the response.
Optional mediaResolution: If specified, the given media resolution will be used.
Optional modelSelectionConfig: Configuration for model selection.
Optional presencePenalty: Positive values penalize tokens that already appear in the generated text, increasing the probability of generating more diverse content.
Optional responseJsonSchema: Output schema of the generated response. This is an
alternative to response_schema that accepts JSON Schema. If set, response_schema
must be omitted, but response_mime_type is required. While the full JSON Schema
may be sent, not all features are supported. Specifically, only the following
properties are supported: $id, $defs, $ref, $anchor, type, format, title,
description, enum (for strings and numbers), items, prefixItems, minItems,
maxItems, minimum, maximum, anyOf, oneOf (interpreted the same as anyOf),
properties, additionalProperties, and required. The non-standard propertyOrdering
property may also be set. Cyclic references are unrolled to a limited degree and,
as such, may only be used within non-required properties. (Nullable properties
are not sufficient.) If $ref is set on a sub-schema, no other properties, except
for those starting with a $, may be set.
Optional responseLogprobs: Whether to return the log probabilities of the tokens that were chosen by the model at each step.
Optional responseMimeType: Output response MIME type of the generated candidate text. Supported MIME types:
text/plain (default): Text output.
application/json: JSON response in the candidates.
The model needs to be prompted to output the appropriate response type;
otherwise the behavior is undefined. This is a preview feature.
Optional responseModalities: The requested modalities of the response. Represents the set of modalities that the model can return.
Optional responseSchema: The Schema object allows the definition of input and output data types.
These types can be objects, but also primitives and arrays.
Represents a select subset of an OpenAPI 3.0 schema
object.
If set, a compatible response_mime_type must also be set.
Compatible MIME types: application/json (schema for JSON response).
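As a sketch of how these fields combine, assuming the same client shape as above and the Type enum exported by @google/genai; the schema itself is illustrative:

```ts
import { GoogleGenAI, Type } from '@google/genai';

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

const response = await ai.models.generateContent({
  model: 'gemini-2.0-flash',
  contents: 'List three primary colors with their hex codes.',
  config: {
    // responseSchema requires a compatible responseMimeType.
    responseMimeType: 'application/json',
    responseSchema: {
      type: Type.ARRAY,
      items: {
        type: Type.OBJECT,
        properties: {
          name: { type: Type.STRING },
          hex: { type: Type.STRING },
        },
        required: ['name', 'hex'],
      },
    },
    // Alternatively, responseJsonSchema accepts standard JSON Schema;
    // set one or the other, not both.
  },
});

// The candidate text should now parse as JSON that matches the schema.
console.log(JSON.parse(response.text ?? '[]'));
```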
Optional routingConfig: Configuration for model router requests.
Optional safetySettings: Safety settings in the request to block unsafe content in the response.
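For illustration, one plausible shape, assuming the HarmCategory and HarmBlockThreshold enums exported by @google/genai; the thresholds are examples, not recommendations:

```ts
import { HarmBlockThreshold, HarmCategory } from '@google/genai';

// Passed as config: { safetySettings, ... } alongside other options.
const safetySettings = [
  {
    category: HarmCategory.HARM_CATEGORY_HARASSMENT,
    threshold: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
  },
  {
    category: HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
    threshold: HarmBlockThreshold.BLOCK_ONLY_HIGH,
  },
];
```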
Optional seed: When seed is fixed to a specific number, the model makes a best
effort to provide the same response for repeated requests. By default, a
random number is used.
Optional speechConfig: The speech generation configuration.
Optional stopSequences: List of strings that tells the model to stop generating text if one of the strings is encountered in the response.
Optional systemInstruction: Instructions for the model to steer it toward better performance. For example, "Answer as concisely as possible" or "Don't use technical terms in your response".
Optional temperature: Value that controls the degree of randomness in token selection. Lower temperatures are good for prompts that require a less open-ended or creative response, while higher temperatures can lead to more diverse or creative results.
Optional thinkingConfig: The thinking features configuration.
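A hedged sketch; the thinkingBudget and includeThoughts fields are assumptions about the ThinkingConfig type and only apply to models that support thinking:

```ts
const config = {
  thinkingConfig: {
    thinkingBudget: 1024,   // assumed field: token budget reserved for reasoning
    includeThoughts: false, // assumed field: whether thought summaries are returned
  },
};
```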
Optional toolConfig: Associates model output to a specific function call.
Optional tools: Code that enables the system to interact with external systems to perform an action outside of the knowledge and scope of the model.
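A sketch of declaring a callable function and constraining the model to it, assuming the Type and FunctionCallingConfigMode enums from @google/genai; getWeather is a hypothetical function the calling code would implement:

```ts
import { FunctionCallingConfigMode, Type } from '@google/genai';

const config = {
  tools: [
    {
      functionDeclarations: [
        {
          // Hypothetical function; the caller executes it and returns the
          // result to the model in a follow-up turn.
          name: 'getWeather',
          description: 'Returns the current weather for a city.',
          parameters: {
            type: Type.OBJECT,
            properties: { city: { type: Type.STRING } },
            required: ['city'],
          },
        },
      ],
    },
  ],
  // toolConfig associates the output with a specific function call.
  toolConfig: {
    functionCallingConfig: {
      mode: FunctionCallingConfigMode.ANY,
      allowedFunctionNames: ['getWeather'],
    },
  },
};
```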
Optional topK: For each token selection step, the top_k tokens with the
highest probabilities are sampled. Then tokens are further filtered based
on top_p with the final token selected using temperature sampling. Use
a lower number for less random responses and a higher number for more
random responses.
Optional topP: Tokens are selected from the most to least probable until the sum of their probabilities equals this value. Use a lower value for less random responses and a higher value for more random responses.
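Taken together, the sampling-related fields might be set like this; all values are illustrative, not recommendations:

```ts
const config = {
  temperature: 0.2,        // lower values -> less random token selection
  topP: 0.9,               // nucleus-sampling probability mass cutoff
  topK: 40,                // candidate pool size at each step
  seed: 42,                // best-effort reproducibility across requests
  candidateCount: 1,       // number of response variations to return
  maxOutputTokens: 512,    // cap on generated tokens
  stopSequences: ['END'],  // stop generation if this string appears
  systemInstruction: 'Answer as concisely as possible.',
};
```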
Optional model configuration parameters.
For more information, see
Content generation parameters (https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/content-generation-parameters).