Class Session (Experimental)

Represents a connection to the API.

Constructors

  • new Session (Experimental)

    Parameters

    • conn: WebSocket
    • apiClient: ApiClient

    Returns Session

Properties

conn: WebSocket

Methods

  • close (Experimental)

    Terminates the WebSocket connection.

    Returns void

    const session = await ai.live.connect({
      model: 'gemini-2.0-flash-exp',
      config: {
        responseModalities: [Modality.AUDIO],
      },
    });

    session.close();
  • sendClientContent (Experimental)

    Send a message over the established connection.

    Parameters

    • params: SessionSendClientContentParameters

      Contains two optional properties, turns and turnComplete.

      • turns will be converted to a Content[]
      • turnComplete: true indicates that you are done sending content and expect a response.

    Returns void

    There are two ways to send messages to the live API: sendClientContent and sendRealtimeInput.

    sendClientContent messages are added to the model context in order. Having a conversation using sendClientContent messages is roughly equivalent to using Chat.sendMessageStream, except that the state of the chat history is stored on the API server instead of locally.

    Because of sendClientContent's ordering guarantee, the model cannot respond as quickly to sendClientContent messages as it can to sendRealtimeInput messages. The difference is largest when sending objects that need significant preprocessing time (typically images).

    sendClientContent sends a Content[], which offers more options than the Blob sent by sendRealtimeInput.

    So the main use-cases for sendClientContent over sendRealtimeInput are:

    • Sending anything that can't be represented as a Blob (text, sendClientContent({turns: "Hello?"})).
    • Managing turns when not using audio input and voice activity detection. (sendClientContent({turnComplete:true}) or the short form sendClientContent())
    • Prefilling a conversation context
      sendClientContent({
        turns: [
          {role: 'user', parts: [...]},
          {role: 'user', parts: [...]},
          ...
        ]
      })
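As a sketch of the `turns` handling described above (a bare string or an array of Content objects are both accepted, per the examples), the conversion to `Content[]` might look like the following. `toTurns` is a hypothetical helper for illustration, not an SDK export, and the `Content`/`Part` interfaces here are simplified stand-ins for the SDK's types:

```typescript
// Simplified stand-ins for the SDK's Content/Part types.
interface Part { text?: string }
interface Content { role: string; parts: Part[] }

// Hypothetical helper: normalize a `turns` value to Content[],
// as the docs say sendClientContent does internally.
function toTurns(turns: string | Content | Content[]): Content[] {
  if (typeof turns === 'string') {
    // A bare string becomes a single user turn.
    return [{ role: 'user', parts: [{ text: turns }] }];
  }
  return Array.isArray(turns) ? turns : [turns];
}

const normalized = toTurns('Hello?');
console.log(normalized.length, normalized[0].role); // 1 'user'
```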
  • sendRealtimeInput (Experimental)

    Send a realtime message over the established connection.

    Parameters

    Returns void

    Use sendRealtimeInput for realtime audio chunks and video frames (images).

    With sendRealtimeInput, the API will respond to audio automatically based on voice activity detection (VAD).

    sendRealtimeInput is optimized for responsiveness at the expense of deterministic ordering guarantees. Audio and video tokens are added to the context when they become available.

    Note: the call signature expects a Blob object, but only a subset of audio and image MIME types are allowed.
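As an illustrative sketch of preparing a realtime audio chunk: the Blob shape used here ({ data: base64 string, mimeType }) and the `audio/pcm;rate=16000` MIME type are assumptions drawn from common Live API usage, not guarantees of this SDK's types; verify both against the type definitions:

```typescript
// Sketch: package a raw 16 kHz PCM chunk into the base64 Blob shape
// ({ data, mimeType }) assumed above. Only a subset of audio and
// image MIME types are accepted; check the API docs for the list.
function pcmChunkToBlob(chunk: Uint8Array): { data: string; mimeType: string } {
  const data = Buffer.from(chunk).toString('base64');
  return { data, mimeType: 'audio/pcm;rate=16000' };
}

const blob = pcmChunkToBlob(new Uint8Array([0, 0, 255, 127]));
// Pass the resulting Blob to sendRealtimeInput (see call signature above).
```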

  • sendFunctionResponse (Experimental)

    Send a function response message over the established connection.

    Parameters

    Returns void

    Use sendFunctionResponse to reply to LiveServerToolCall from the server.

    Use LiveConnectConfig#tools to configure the callable functions.
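A sketch of assembling a reply to a LiveServerToolCall. The FunctionResponse field names used here (id, name, response) and the wrapping { functionResponses: [...] } parameter shape are assumptions for illustration; check the SDK's types before relying on them:

```typescript
// Simplified stand-in for the SDK's FunctionResponse type (assumed shape).
interface FunctionResponse {
  id?: string;                       // echoes the id from the server's tool call
  name: string;                      // the function that was called
  response: Record<string, unknown>; // the function's result
}

// Hypothetical helper: wrap a function result for the reply.
function buildFunctionResponse(id: string, name: string, result: unknown): FunctionResponse {
  return { id, name, response: { result } };
}

const reply = buildFunctionResponse('call-1', 'getWeather', { tempC: 21 });
// Assumed parameter shape, not confirmed by this reference:
// session.sendFunctionResponse({ functionResponses: [reply] });
```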
