audioTimestamp (Optional): If enabled, audio timestamp will be included in the request to the model.
cachedContent (Optional): Resource name of a context cache that can be used in subsequent requests.
candidateCount (Optional): Number of response variations to return.
frequencyPenalty (Optional): Positive values penalize tokens that repeatedly appear in the generated text, increasing the probability of generating more diverse content.
httpOptions (Optional): Used to override HTTP request options.
labels (Optional): Labels with user-defined metadata to break down billed charges.
logprobs (Optional): Number of top candidate tokens to return the log probabilities for at each generation step.
maxOutputTokens (Optional): Maximum number of tokens that can be generated in the response.
mediaResolution (Optional): If specified, the given media resolution will be used for input media.
presencePenalty (Optional): Positive values penalize tokens that already appear in the generated text, increasing the probability of generating more diverse content.
responseLogprobs (Optional): Whether to return the log probabilities of the tokens that were chosen by the model at each step.
responseMimeType (Optional): Output response media type of the generated candidate text.
responseModalities (Optional): The requested modalities of the response. Represents the set of modalities that the model can return.
responseSchema (Optional): Schema that the generated candidate text must adhere to (see the structured-output example after this list).
routingConfig (Optional): Configuration for model router requests.
safetySettings (Optional): Safety settings in the request to block unsafe content in the response (see the safety-settings example after this list).
seed (Optional): When seed is fixed to a specific number, the model makes a best effort to provide the same response for repeated requests. By default, a random number is used.
speechConfig (Optional): The speech generation configuration.
stopSequences (Optional): List of strings that tells the model to stop generating text if one of the strings is encountered in the response.
systemInstruction (Optional): Instructions for the model to steer it toward better performance. For example, "Answer as concisely as possible" or "Don't use technical terms in your response".
temperature (Optional): Value that controls the degree of randomness in token selection. Lower temperatures are good for prompts that require a less open-ended or creative response, while higher temperatures can lead to more diverse or creative results.
thinkingConfig (Optional): The thinking features configuration.
toolConfig (Optional): Associates model output to a specific function call.
tools (Optional): Code that enables the system to interact with external systems to perform an action outside of the knowledge and scope of the model (see the function-calling example after this list).
topK (Optional): For each token selection step, the top_k tokens with the highest probabilities are sampled. Then tokens are further filtered based on top_p, with the final token selected using temperature sampling. Use a lower number for less random responses and a higher number for more random responses.
topP (Optional): Tokens are selected from the most to least probable until the sum of their probabilities equals this value. Use a lower value for less random responses and a higher value for more random responses (the sampling parameters are shown together in an example after this list).
Optional model configuration parameters. For more information, see Content generation parameters (https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/content-generation-parameters).
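The sampling parameters above (temperature, topK, topP, seed, and related fields) are easiest to understand together. Below is a minimal sketch assuming the Google Gen AI JavaScript/TypeScript SDK (@google/genai); the model name, prompt, environment variable, and parameter values are illustrative placeholders, not recommendations.

    import { GoogleGenAI } from "@google/genai";

    // Assumed setup: an API key in the GEMINI_API_KEY environment variable.
    const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

    async function main() {
      const response = await ai.models.generateContent({
        model: "gemini-2.0-flash",          // placeholder model name
        contents: "Write a haiku about autumn.",
        config: {
          systemInstruction: "Answer as concisely as possible.",
          temperature: 0.4,                 // lower values reduce randomness in token selection
          topK: 40,                         // sample from the 40 highest-probability tokens per step
          topP: 0.9,                        // then filter by cumulative probability
          maxOutputTokens: 128,             // cap on generated tokens
          stopSequences: ["END"],           // stop generating if this string appears
          seed: 12345,                      // best-effort reproducibility across requests
          candidateCount: 1,                // number of response variations to return
        },
      });
      console.log(response.text);
    }

    main();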
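responseMimeType and responseSchema are typically used together to request structured output. A minimal sketch under the same SDK assumption; the schema and prompt are hypothetical:

    import { GoogleGenAI, Type } from "@google/genai";

    const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

    async function main() {
      const response = await ai.models.generateContent({
        model: "gemini-2.0-flash",
        contents: "List two popular cookie recipes.",
        config: {
          responseMimeType: "application/json", // request JSON rather than free text
          responseSchema: {                     // generated text must adhere to this schema
            type: Type.ARRAY,
            items: {
              type: Type.OBJECT,
              properties: {
                recipeName: { type: Type.STRING },
                calories: { type: Type.NUMBER },
              },
              required: ["recipeName"],
            },
          },
        },
      });
      console.log(JSON.parse(response.text ?? "[]"));
    }

    main();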
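tools declares the functions the model may call, while toolConfig constrains how the model chooses among them. A sketch under the same SDK assumption; the getWeather declaration is hypothetical:

    import { GoogleGenAI, Type, FunctionCallingConfigMode } from "@google/genai";

    const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

    async function main() {
      const response = await ai.models.generateContent({
        model: "gemini-2.0-flash",
        contents: "What's the weather like in Paris?",
        config: {
          tools: [{
            // Hypothetical function declaration, for illustration only.
            functionDeclarations: [{
              name: "getWeather",
              description: "Returns the current weather for a city.",
              parameters: {
                type: Type.OBJECT,
                properties: { city: { type: Type.STRING } },
                required: ["city"],
              },
            }],
          }],
          toolConfig: {
            // Associate the output with a function call: ANY forces a call,
            // restricted here to the single declared function.
            functionCallingConfig: {
              mode: FunctionCallingConfigMode.ANY,
              allowedFunctionNames: ["getWeather"],
            },
          },
        },
      });
      console.log(response.functionCalls); // e.g. [{ name: "getWeather", args: { city: "Paris" } }]
    }

    main();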
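safetySettings takes a list of category/threshold pairs. A sketch under the same SDK assumption, using the HarmCategory and HarmBlockThreshold enums; the chosen thresholds are illustrative, not recommendations:

    import { GoogleGenAI, HarmCategory, HarmBlockThreshold } from "@google/genai";

    const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

    async function main() {
      const response = await ai.models.generateContent({
        model: "gemini-2.0-flash",
        contents: "Tell me a story about a heated debate.",
        config: {
          safetySettings: [
            {
              category: HarmCategory.HARM_CATEGORY_HARASSMENT,
              threshold: HarmBlockThreshold.BLOCK_ONLY_HIGH,        // block only high-severity content
            },
            {
              category: HarmCategory.HARM_CATEGORY_HATE_SPEECH,
              threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE, // stricter threshold
            },
          ],
        },
      });
      console.log(response.text);
    }

    main();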