Optional
abortSignal
Abort signal which can be used to cancel the request.
NOTE: AbortSignal is a client-only operation. Using it to cancel an operation will not cancel the request in the service. You will still be charged usage for any applicable operations.
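A minimal sketch of client-side cancellation with an AbortController, assuming the @google/genai TypeScript SDK; the model name, prompt, and five-second timeout are placeholders. As the note above says, aborting cancels only the client call.

```ts
import { GoogleGenAI } from '@google/genai';

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

// Abort the client-side request after five seconds.
// The service-side operation is not cancelled and may still be billed.
const controller = new AbortController();
const timer = setTimeout(() => controller.abort(), 5_000);

try {
  const response = await ai.models.generateContent({
    model: 'gemini-2.5-flash', // placeholder model name
    contents: 'Write a long essay about tide pools.',
    config: { abortSignal: controller.signal },
  });
  console.log(response.text);
} finally {
  clearTimeout(timer);
}
```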
Optional
audioTimestamp
If enabled, audio timestamp will be included in the request to the model.
Optional
automaticFunctionCalling
The configuration for automatic function calling.
Optional
cachedContent
Resource name of a context cache that can be used in subsequent requests.
Optional
candidateCount
Number of response variations to return.
Optional
frequencyPenalty
Positive values penalize tokens that repeatedly appear in the generated text, increasing the probability of generating more diverse content.
Optional
httpOptions
Used to override HTTP request options.
Optional
labels
Labels with user-defined metadata to break down billed charges.
Optional
logprobs
Number of top candidate tokens to return the log probabilities for at each generation step.
Optional
maxOutputTokens
Maximum number of tokens that can be generated in the response.
Optional
mediaResolution
If specified, this media resolution will be used.
Optional
modelSelectionConfig
Configuration for model selection.
Optional
presencePenalty
Positive values penalize tokens that already appear in the generated text, increasing the probability of generating more diverse content.
Optional
responseJsonSchema
Optional. Output schema of the generated response. This is an alternative to response_schema that accepts JSON Schema. If set, response_schema must be omitted, but response_mime_type is required. While the full JSON Schema may be sent, not all features are supported. Specifically, only the following properties are supported:
- $id
- $defs
- $ref
- $anchor
- type
- format
- title
- description
- enum (for strings and numbers)
- items
- prefixItems
- minItems
- maxItems
- minimum
- maximum
- anyOf
- oneOf (interpreted the same as anyOf)
- properties
- additionalProperties
- required
The non-standard propertyOrdering property may also be set. Cyclic references are unrolled to a limited degree and, as such, may only be used within non-required properties. (Nullable properties are not sufficient.) If $ref is set on a sub-schema, no other properties, except for those starting with a $, may be set.
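To make the constraints above concrete, here is a minimal sketch of structured output via JSON Schema, assuming the @google/genai TypeScript SDK, where this field is exposed as responseJsonSchema and the mimetype as responseMimeType; the model name, prompt, and schema are illustrative placeholders, not part of this reference.

```ts
import { GoogleGenAI } from '@google/genai';

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

const response = await ai.models.generateContent({
  model: 'gemini-2.5-flash', // placeholder model name
  contents: 'List three primary colors with a short note on each.',
  config: {
    responseMimeType: 'application/json', // required when a JSON schema is set
    responseJsonSchema: {
      type: 'object',
      properties: {
        colors: {
          type: 'array',
          items: {
            type: 'object',
            properties: {
              name: { type: 'string' },
              note: { type: 'string' },
            },
            required: ['name', 'note'],
            // Non-standard keyword noted above: controls key ordering in the output.
            propertyOrdering: ['name', 'note'],
          },
        },
      },
      required: ['colors'],
    },
  },
});

console.log(JSON.parse(response.text ?? '{}'));
```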
Optional
responseLogprobs
Whether to return the log probabilities of the tokens that were chosen by the model at each step.
Optional
responseMimeType
Output response mimetype of the generated candidate text. Supported mimetypes:
- text/plain: (default) Text output.
- application/json: JSON response in the candidates.
The model needs to be prompted to output the appropriate response type, otherwise the behavior is undefined.
This is a preview feature.
Optional
responseModalities
The requested modalities of the response. Represents the set of modalities that the model can return.
Optional
responseSchema
The Schema object allows the definition of input and output data types. These types can be objects, but also primitives and arrays. Represents a select subset of an OpenAPI 3.0 schema object.
If set, a compatible response_mime_type must also be set. Compatible mimetypes: application/json: Schema for JSON response.
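As a companion to the JSON Schema variant above, a minimal sketch of responseSchema using the SDK's Schema/Type representation, again assuming the @google/genai TypeScript SDK; the model name, prompt, and fields are placeholders.

```ts
import { GoogleGenAI, Type } from '@google/genai';

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

const response = await ai.models.generateContent({
  model: 'gemini-2.5-flash', // placeholder model name
  contents: 'Extract the origin and destination cities from: "She flew from Lima to Tokyo."',
  config: {
    responseMimeType: 'application/json', // a compatible mimetype is required
    responseSchema: {
      type: Type.OBJECT,
      properties: {
        origin: { type: Type.STRING },
        destination: { type: Type.STRING },
      },
      required: ['origin', 'destination'],
    },
  },
});

console.log(JSON.parse(response.text ?? '{}'));
```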
Optional
routingConfig
Configuration for model router requests.
Optional
safetySettings
Safety settings in the request to block unsafe content in the response.
Optional
seed
When seed is fixed to a specific number, the model makes a best effort to provide the same response for repeated requests. By default, a random number is used.
Optional
speechConfig
The speech generation configuration.
Optional
stopSequences
List of strings that tells the model to stop generating text if one of the strings is encountered in the response.
Optional
systemInstruction
Instructions for the model to steer it toward better performance. For example, "Answer as concisely as possible" or "Don't use technical terms in your response".
Optional
temperature
Value that controls the degree of randomness in token selection. Lower temperatures are good for prompts that require a less open-ended or creative response, while higher temperatures can lead to more diverse or creative results.
Optional
thinkingConfig
The thinking features configuration.
Optional
toolConfig
Associates model output to a specific function call.
Optional
tools
Code that enables the system to interact with external systems to perform an action outside of the knowledge and scope of the model.
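A sketch of how tools and toolConfig might be combined for function calling, assuming the @google/genai TypeScript SDK and its FunctionCallingConfigMode enum; the get_weather declaration, model name, and prompt are hypothetical.

```ts
import { GoogleGenAI, FunctionCallingConfigMode, Type } from '@google/genai';

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

// Hypothetical function declaration the model is allowed to call.
const getWeather = {
  name: 'get_weather',
  description: 'Returns the current weather for a city.',
  parameters: {
    type: Type.OBJECT,
    properties: { city: { type: Type.STRING } },
    required: ['city'],
  },
};

const response = await ai.models.generateContent({
  model: 'gemini-2.5-flash', // placeholder model name
  contents: 'What is the weather in Zurich?',
  config: {
    tools: [{ functionDeclarations: [getWeather] }],
    toolConfig: {
      functionCallingConfig: {
        mode: FunctionCallingConfigMode.ANY, // require a function call
        allowedFunctionNames: ['get_weather'],
      },
    },
  },
});

// The model's proposed call(s); executing them is up to the caller.
console.log(response.functionCalls);
```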
Optional
topK
For each token selection step, the top_k tokens with the highest probabilities are sampled. Then tokens are further filtered based on top_p, with the final token selected using temperature sampling. Use a lower number for less random responses and a higher number for more random responses.
Optional
topP
Tokens are selected from the most to least probable until the sum of their probabilities equals this value. Use a lower value for less random responses and a higher value for more random responses.
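To show how the sampling-related parameters above interact in practice, a small sketch assuming the @google/genai TypeScript SDK; the specific values are illustrative, not recommendations.

```ts
import { GoogleGenAI } from '@google/genai';

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

const response = await ai.models.generateContent({
  model: 'gemini-2.5-flash', // placeholder model name
  contents: 'Write a one-line tagline for a coffee shop.',
  config: {
    temperature: 0.9,        // more randomness in token selection
    topK: 40,                // sample from the 40 most likely tokens per step
    topP: 0.95,              // then keep tokens until cumulative probability reaches 0.95
    candidateCount: 2,       // return two response variations
    seed: 42,                // best-effort reproducibility across requests
    maxOutputTokens: 64,     // cap the response length
    stopSequences: ['\n\n'], // stop early if a blank line is generated
  },
});

console.log(response.text); // text of the first candidate
```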
Optional model configuration parameters.
For more information, see Content generation parameters: https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/content-generation-parameters
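As an end-to-end illustration of where this configuration object is supplied, a minimal request sketch assuming the @google/genai TypeScript SDK; the model name, prompt, and safety thresholds are placeholder choices, not defaults from this reference.

```ts
import { GoogleGenAI, HarmCategory, HarmBlockThreshold } from '@google/genai';

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

const response = await ai.models.generateContent({
  model: 'gemini-2.5-flash', // placeholder model name
  contents: 'Summarize the water cycle for a ten-year-old.',
  config: {
    systemInstruction: 'Answer as concisely as possible.',
    temperature: 0.4,
    maxOutputTokens: 200,
    safetySettings: [
      {
        category: HarmCategory.HARM_CATEGORY_HARASSMENT,
        threshold: HarmBlockThreshold.BLOCK_ONLY_HIGH,
      },
    ],
  },
});

console.log(response.text);
```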