Interface LiveConnectConfig

Session config for the API connection.

Properties

abortSignal?: AbortSignal

Abort signal which can be used to cancel the request.

NOTE: AbortSignal is a client-only operation. Using it to cancel an operation will not cancel the request in the service. You will still be charged usage for any applicable operations.
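A minimal sketch of client-side cancellation, assuming the config object is later passed to the SDK's connect call (not shown here). `AbortController` is built into Node 16+ and modern browsers:

```typescript
// Client-side cancellation sketch; the config shape follows this interface.
const controller = new AbortController();

const config = {
  abortSignal: controller.signal,
};

// Cancel on the client. The service-side operation is NOT cancelled and
// may still incur usage charges.
controller.abort();

console.log(config.abortSignal.aborted); // true
```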

contextWindowCompression?: ContextWindowCompressionConfig

Configures context window compression mechanism.

If included, the server will compress the context window to fit into the given length.
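A configuration sketch, assuming `ContextWindowCompressionConfig` exposes a `triggerTokens` field and a `slidingWindow` with `targetTokens` (token counts given as strings), as in published Live API examples; treat the field names and values as assumptions, not a definitive shape:

```typescript
// Sketch: context window compression via a sliding window (field names
// and values are assumptions, not confirmed by this reference).
const config = {
  contextWindowCompression: {
    triggerTokens: "25600",                   // compress once context reaches this size
    slidingWindow: { targetTokens: "12800" }, // keep roughly this many tokens
  },
};
```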

enableAffectiveDialog?: boolean

If enabled, the model will detect emotions and adapt its responses accordingly.

generationConfig?: GenerationConfig

The generation configuration for the session.

httpOptions?: HttpOptions

Used to override HTTP request options.

inputAudioTranscription?: AudioTranscriptionConfig

If set, enables transcription of the audio input. The transcription is produced in the language of the input audio.

maxOutputTokens?: number

Maximum number of tokens that can be generated in the response.

mediaResolution?: MediaResolution

If specified, the given media resolution will be used.

outputAudioTranscription?: AudioTranscriptionConfig

If set, enables transcription of the audio output. The transcription is produced in the language code specified for the output audio.
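A sketch enabling both transcriptions. `AudioTranscriptionConfig` is assumed here to carry no required fields, with the presence of the (possibly empty) object enabling the feature:

```typescript
// Sketch: enabling input and output audio transcription.
// Assumption: an empty AudioTranscriptionConfig object enables the feature.
const config = {
  inputAudioTranscription: {},  // transcribed in the input audio's language
  outputAudioTranscription: {}, // transcribed in the configured output language code
};
```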

proactivity?: ProactivityConfig

Configures the proactivity of the model. This allows the model to respond proactively to the input and to ignore irrelevant input.

realtimeInputConfig?: RealtimeInputConfig

Configures the realtime input behavior in BidiGenerateContent.

responseModalities?: Modality[]

The requested modalities of the response. Represents the set of modalities that the model can return. Defaults to AUDIO if not specified.
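A sketch requesting an audio-only response. In the SDK the values come from the `Modality` enum; plain strings stand in for it here:

```typescript
// Sketch: requesting a single response modality. Omitting the field
// entirely would also default to AUDIO, per the description above.
const config = {
  responseModalities: ["AUDIO"],
};

console.log(config.responseModalities.length); // 1
```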

seed?: number

When seed is fixed to a specific number, the model makes a best effort to provide the same response for repeated requests. By default, a random number is used.

sessionResumption?: SessionResumptionConfig

Configures session resumption mechanism.

If included, the server will send SessionResumptionUpdate messages.
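A resumption sketch, assuming `SessionResumptionConfig` carries a `handle` field holding the token received in an earlier SessionResumptionUpdate message; the field name and the handle value are assumptions for illustration:

```typescript
// Sketch: resuming a session from a previously received handle.
// Both the `handle` field and its value are hypothetical here.
const previousHandle = "resume-token-from-last-update";

const config = {
  sessionResumption: { handle: previousHandle },
};
```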

speechConfig?: SpeechConfig

The speech generation configuration.

systemInstruction?: ContentUnion

The user provided system instructions for the model. Note: only text should be used in parts, and the content of each part will be rendered as a separate paragraph.
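A sketch of a text-only system instruction using the parts form of `ContentUnion`; the `parts`/`text` shape follows the common SDK content types, and the instruction strings are illustrative:

```typescript
// Sketch: a text-only system instruction. Each part's text is treated
// as its own paragraph, per the note above.
const config = {
  systemInstruction: {
    parts: [
      { text: "You are a concise voice assistant." },
      { text: "Answer in at most two sentences." }, // separate paragraph
    ],
  },
};

console.log(config.systemInstruction.parts.length); // 2
```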

temperature?: number

Value that controls the degree of randomness in token selection. Lower temperatures are good for prompts that require a less open-ended or creative response, while higher temperatures can lead to more diverse or creative results.

tools?: ToolListUnion

A list of Tools the model may use to generate the next response.

A Tool is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside the knowledge and scope of the model.

topK?: number

For each token selection step, the top_k tokens with the highest probabilities are sampled. Then tokens are further filtered based on top_p with the final token selected using temperature sampling. Use a lower number for less random responses and a higher number for more random responses.

topP?: number

Tokens are selected from the most to least probable until the sum of their probabilities equals this value. Use a lower value for less random responses and a higher value for more random responses.
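The sampling-related properties above combine as in the following sketch; the values are illustrative, not recommendations:

```typescript
// Sketch: deterministic-leaning sampling settings. Per the descriptions
// above: top-K filters candidates first, then top-P, then temperature
// governs the final token selection.
const config = {
  seed: 42,             // best-effort reproducibility across repeated requests
  temperature: 0.2,     // low randomness in token selection
  topK: 20,             // consider only the 20 most probable tokens
  topP: 0.8,            // then keep tokens until cumulative probability reaches 0.8
  maxOutputTokens: 256, // cap on generated response length
};
```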