Google Gen AI SDK¶
https://github.com/googleapis/python-genai
google-genai is an initial Python client library for interacting with Google’s Generative AI APIs.
Google Gen AI Python SDK provides an interface for developers to integrate Google’s generative models into their Python applications. It supports the Gemini Developer API and Vertex AI APIs.
Installation¶
pip install google-genai
Imports¶
from google import genai
from google.genai import types
Create a client¶
Run one of the following code blocks to create a client for either the Gemini Developer API or Vertex AI. You can then switch clients and rerun the examples to see how they behave under each API.
# Only run this block for Gemini Developer API
client = genai.Client(api_key='GEMINI_API_KEY')
# Only run this block for Vertex AI API
client = genai.Client(
vertexai=True, project='your-project-id', location='us-central1'
)
(Optional) Using environment variables:
You can create a client by configuring the necessary environment variables. The required configuration depends on whether you’re using the Gemini Developer API or the Gemini API in Vertex AI.
Gemini Developer API: Set GOOGLE_API_KEY as shown below:
export GOOGLE_API_KEY='your-api-key'
Gemini API in Vertex AI: Set GOOGLE_GENAI_USE_VERTEXAI, GOOGLE_CLOUD_PROJECT and GOOGLE_CLOUD_LOCATION, as shown below:
export GOOGLE_GENAI_USE_VERTEXAI=true
export GOOGLE_CLOUD_PROJECT='your-project-id'
export GOOGLE_CLOUD_LOCATION='us-central1'
client = genai.Client()
API Selection¶
By default, the SDK uses the beta API endpoints provided by Google to support preview features in the APIs. The stable API endpoints can be selected by setting the API version to v1.
To set the API version, use http_options. For example, to set the API version to v1 for Vertex AI:
client = genai.Client(
vertexai=True,
project='your-project-id',
location='us-central1',
http_options=types.HttpOptions(api_version='v1')
)
To set the API version to v1alpha for the Gemini Developer API:
# Only run this block for Gemini Developer API
client = genai.Client(
api_key='GEMINI_API_KEY',
http_options=types.HttpOptions(api_version='v1alpha')
)
Types¶
Parameter types can be specified as either dictionaries (TypedDict) or Pydantic models. Pydantic model types are available in the types module.
Models¶
The client.models module exposes model inferencing and model getters.
Generate Content¶
with text content¶
response = client.models.generate_content(
model='gemini-2.0-flash-001', contents='Why is the sky blue?'
)
print(response.text)
with uploaded file (Gemini Developer API only)¶
Download the file in your console:
!wget -q https://storage.googleapis.com/generativeai-downloads/data/a11.txt
Then run the following Python code:
file = client.files.upload(file='a11.txt')
response = client.models.generate_content(
model='gemini-2.0-flash-001',
contents=['Could you summarize this file?', file]
)
print(response.text)
How to structure contents argument for generate_content¶
The SDK always converts the inputs to the contents argument into list[types.Content]. The following shows some common ways to provide your inputs.
Provide a list[types.Content]¶
This is the canonical way to provide contents; the SDK will not do any conversion.
Provide a types.Content instance¶
contents = types.Content(
role='user',
parts=[types.Part.from_text(text='Why is the sky blue?')]
)
The SDK converts this to:
[
types.Content(
role='user',
parts=[types.Part.from_text(text='Why is the sky blue?')]
)
]
Provide a string¶
contents='Why is the sky blue?'
The SDK will assume this is a text part, and it converts this into the following:
[
types.UserContent(
parts=[
types.Part.from_text(text='Why is the sky blue?')
]
)
]
Here types.UserContent is a subclass of types.Content whose role field is fixed to user.
Provide a list of strings¶
contents=['Why is the sky blue?', 'Why is the cloud white?']
The SDK assumes these are two text parts and converts them into a single content, like the following:
[
types.UserContent(
parts=[
types.Part.from_text(text='Why is the sky blue?'),
types.Part.from_text(text='Why is the cloud white?'),
]
)
]
Again, types.UserContent is a subclass of types.Content; its role field is fixed to user.
Provide a function call part¶
contents = types.Part.from_function_call(
name='get_weather_by_location',
args={'location': 'Boston'}
)
The SDK converts a function call part to a content with a model role:
[
types.ModelContent(
parts=[
types.Part.from_function_call(
name='get_weather_by_location',
args={'location': 'Boston'}
)
]
)
]
Here types.ModelContent is a subclass of types.Content whose role field is fixed to model.
Provide a list of function call parts¶
contents = [
types.Part.from_function_call(
name='get_weather_by_location',
args={'location': 'Boston'}
),
types.Part.from_function_call(
name='get_weather_by_location',
args={'location': 'New York'}
),
]
The SDK converts a list of function call parts into a single content with a model role:
[
types.ModelContent(
parts=[
types.Part.from_function_call(
name='get_weather_by_location',
args={'location': 'Boston'}
),
types.Part.from_function_call(
name='get_weather_by_location',
args={'location': 'New York'}
)
]
)
]
Again, types.ModelContent is a subclass of types.Content; its role field is fixed to model.
Provide a non-function-call part¶
contents = types.Part.from_uri(
    file_uri='gs://generativeai-downloads/images/scones.jpg',
    mime_type='image/jpeg',
)
The SDK converts all non-function-call parts into a content with a user role:
[
    types.UserContent(parts=[
        types.Part.from_uri(
            file_uri='gs://generativeai-downloads/images/scones.jpg',
            mime_type='image/jpeg',
        )
    ])
]
Provide a list of non-function-call parts¶
contents = [
    types.Part.from_text(text='What is this image about?'),
    types.Part.from_uri(
        file_uri='gs://generativeai-downloads/images/scones.jpg',
        mime_type='image/jpeg',
    ),
]
The SDK converts the list of parts into a single content with a user role:
[
    types.UserContent(
        parts=[
            types.Part.from_text(text='What is this image about?'),
            types.Part.from_uri(
                file_uri='gs://generativeai-downloads/images/scones.jpg',
                mime_type='image/jpeg',
            ),
        ]
    )
]
Mix types in contents¶
You can also provide a list of types.ContentUnion. The SDK leaves types.Content items as they are, groups consecutive non-function-call parts into a single types.UserContent, and groups consecutive function call parts into a single types.ModelContent.
If you put a list within a list, the inner list can only contain types.PartUnion items; the SDK converts the inner list into a single types.UserContent.
System Instructions and Other Configs¶
The output of the model can be influenced by several optional settings available in generate_content’s config parameter. For example, the variability and length of the output can be influenced by the temperature and max_output_tokens respectively.
response = client.models.generate_content(
model='gemini-2.0-flash-001',
contents='high',
config=types.GenerateContentConfig(
system_instruction='I say high, you say low',
max_output_tokens=3,
temperature=0.3,
),
)
print(response.text)
Typed Config¶
All API methods support Pydantic types for parameters as well as
dictionaries. You can get the type from google.genai.types
.
response = client.models.generate_content(
model='gemini-2.0-flash-001',
contents=types.Part.from_text(text='Why is the sky blue?'),
config=types.GenerateContentConfig(
temperature=0,
top_p=0.95,
top_k=20,
candidate_count=1,
seed=5,
max_output_tokens=100,
stop_sequences=['STOP!'],
presence_penalty=0.0,
frequency_penalty=0.0,
),
)
print(response.text)
List Base Models¶
To retrieve tuned models, see: List Tuned Models
for model in client.models.list():
print(model)
pager = client.models.list(config={'page_size': 10})
print(pager.page_size)
print(pager[0])
pager.next_page()
print(pager[0])
async for job in await client.aio.models.list():
print(job)
async_pager = await client.aio.models.list(config={'page_size': 10})
print(async_pager.page_size)
print(async_pager[0])
await async_pager.next_page()
print(async_pager[0])
Safety Settings¶
response = client.models.generate_content(
model='gemini-2.0-flash-001',
contents='Say something bad.',
config=types.GenerateContentConfig(
safety_settings=[
types.SafetySetting(
category='HARM_CATEGORY_HATE_SPEECH',
threshold='BLOCK_ONLY_HIGH',
)
]
),
)
print(response.text)
Function Calling¶
You can pass a Python function directly as a tool; by default it is automatically invoked, and the function response is passed back to the model.
def get_current_weather(location: str) -> str:
"""Returns the current weather.
Args:
location: The city and state, e.g. San Francisco, CA
"""
return 'sunny'
response = client.models.generate_content(
model='gemini-2.0-flash-001',
contents='What is the weather like in Boston?',
config=types.GenerateContentConfig(
tools=[get_current_weather],
),
)
print(response.text)
If you pass in a Python function as a tool directly and do not want automatic function calling, you can disable it in the config. With automatic function calling disabled, you will get a list of function call parts in the response instead of the final text.
If you don’t want to use the automatic function support, you can manually declare the function and invoke it.
The following example shows how to declare a function and pass it as a tool. Then you will receive a function call part in the response.
function = types.FunctionDeclaration(
name='get_current_weather',
description='Get the current weather in a given location',
parameters=types.Schema(
type='OBJECT',
properties={
'location': types.Schema(
type='STRING',
description='The city and state, e.g. San Francisco, CA',
),
},
required=['location'],
),
)
tool = types.Tool(function_declarations=[function])
response = client.models.generate_content(
model='gemini-2.0-flash-001',
contents='What is the weather like in Boston?',
config=types.GenerateContentConfig(
tools=[tool],
),
)
print(response.function_calls[0])
After you receive a function call part from the model, you can invoke the function, build a function response, and pass it back to the model. The following example shows a simple function invocation.
user_prompt_content = types.Content(
role='user',
parts=[types.Part.from_text(text='What is the weather like in Boston?')],
)
function_call_part = response.function_calls[0]
function_call_content = response.candidates[0].content
try:
    function_result = get_current_weather(**function_call_part.args)
    function_response = {'result': function_result}
except Exception as e:
    # instead of raising the exception, you can let the model handle it
    function_response = {'error': str(e)}
function_response_part = types.Part.from_function_response(
name=function_call_part.name,
response=function_response,
)
function_response_content = types.Content(
role='tool', parts=[function_response_part]
)
response = client.models.generate_content(
model='gemini-2.0-flash-001',
contents=[
user_prompt_content,
function_call_content,
function_response_content,
],
config=types.GenerateContentConfig(
tools=[tool],
),
)
print(response.text)
If you configure the function calling mode to ANY, the model will always return function call parts. If you also pass a Python function as a tool, by default the SDK performs automatic function calling until the number of remote calls exceeds the maximum for automatic function calling (which defaults to 10).
If you’d like to disable automatic function calling in ANY mode:
def get_current_weather(location: str) -> str:
"""Returns the current weather.
Args:
location: The city and state, e.g. San Francisco, CA
"""
return "sunny"
response = client.models.generate_content(
model="gemini-2.0-flash-001",
contents="What is the weather like in Boston?",
config=types.GenerateContentConfig(
tools=[get_current_weather],
automatic_function_calling=types.AutomaticFunctionCallingConfig(
disable=True
),
tool_config=types.ToolConfig(
function_calling_config=types.FunctionCallingConfig(mode='ANY')
),
),
)
If you’d like to set x number of automatic function call turns, you can configure the maximum remote calls to be x + 1. Assuming you prefer 1 turn of automatic function calling:
def get_current_weather(location: str) -> str:
"""Returns the current weather.
Args:
location: The city and state, e.g. San Francisco, CA
"""
return "sunny"
response = client.models.generate_content(
model="gemini-2.0-flash-001",
contents="What is the weather like in Boston?",
config=types.GenerateContentConfig(
tools=[get_current_weather],
automatic_function_calling=types.AutomaticFunctionCallingConfig(
maximum_remote_calls=2
),
tool_config=types.ToolConfig(
function_calling_config=types.FunctionCallingConfig(mode='ANY')
),
),
)
JSON Response Schema¶
Schemas can be provided as Pydantic Models.
from pydantic import BaseModel
class CountryInfo(BaseModel):
name: str
population: int
capital: str
continent: str
gdp: int
official_language: str
total_area_sq_mi: int
response = client.models.generate_content(
model='gemini-2.0-flash-001',
contents='Give me information for the United States.',
config=types.GenerateContentConfig(
response_mime_type='application/json',
response_schema=CountryInfo,
),
)
print(response.text)
response = client.models.generate_content(
model='gemini-2.0-flash-001',
contents='Give me information for the United States.',
config=types.GenerateContentConfig(
response_mime_type='application/json',
response_schema={
'required': [
'name',
'population',
'capital',
'continent',
'gdp',
'official_language',
'total_area_sq_mi',
],
'properties': {
'name': {'type': 'STRING'},
'population': {'type': 'INTEGER'},
'capital': {'type': 'STRING'},
'continent': {'type': 'STRING'},
'gdp': {'type': 'INTEGER'},
'official_language': {'type': 'STRING'},
'total_area_sq_mi': {'type': 'INTEGER'},
},
'type': 'OBJECT',
},
),
)
print(response.text)
Enum Response Schema¶
You can set response_mime_type to ‘text/x.enum’ to return one of those enum values as the response.
from enum import Enum
class InstrumentEnum(Enum):
PERCUSSION = 'Percussion'
STRING = 'String'
WOODWIND = 'Woodwind'
BRASS = 'Brass'
KEYBOARD = 'Keyboard'
response = client.models.generate_content(
model='gemini-2.0-flash-001',
contents='What instrument plays multiple notes at once?',
config={
'response_mime_type': 'text/x.enum',
'response_schema': InstrumentEnum,
},
)
print(response.text)
You can also set response_mime_type to ‘application/json’; the response will be identical but wrapped in quotes as a JSON string.
class InstrumentEnum(Enum):
PERCUSSION = 'Percussion'
STRING = 'String'
WOODWIND = 'Woodwind'
BRASS = 'Brass'
KEYBOARD = 'Keyboard'
response = client.models.generate_content(
model='gemini-2.0-flash-001',
contents='What instrument plays multiple notes at once?',
config={
'response_mime_type': 'application/json',
'response_schema': InstrumentEnum,
},
)
print(response.text)
Streaming¶
for chunk in client.models.generate_content_stream(
model='gemini-2.0-flash-001', contents='Tell me a story in 300 words.'
):
print(chunk.text, end='')
If your image is stored in Google Cloud Storage, you can use the from_uri class method to create a Part object.
for chunk in client.models.generate_content_stream(
model='gemini-2.0-flash-001',
contents=[
'What is this image about?',
types.Part.from_uri(
file_uri='gs://generativeai-downloads/images/scones.jpg',
mime_type='image/jpeg',
),
],
):
print(chunk.text, end='')
If your image is stored in your local file system, you can read it in as bytes data and use the from_bytes class method to create a Part object.
YOUR_IMAGE_PATH = 'your_image_path'
YOUR_IMAGE_MIME_TYPE = 'your_image_mime_type'
with open(YOUR_IMAGE_PATH, 'rb') as f:
image_bytes = f.read()
for chunk in client.models.generate_content_stream(
model='gemini-2.0-flash-001',
contents=[
'What is this image about?',
types.Part.from_bytes(data=image_bytes, mime_type=YOUR_IMAGE_MIME_TYPE),
],
):
print(chunk.text, end='')
Async¶
client.aio exposes all the analogous async methods that are available on client. For example, client.aio.models.generate_content is the async version of client.models.generate_content.
response = await client.aio.models.generate_content(
model='gemini-2.0-flash-001', contents='Tell me a story in 300 words.'
)
print(response.text)
Streaming¶
async for chunk in await client.aio.models.generate_content_stream(
model='gemini-2.0-flash-001', contents='Tell me a story in 300 words.'
):
print(chunk.text, end='')
Count Tokens and Compute Tokens¶
response = client.models.count_tokens(
model='gemini-2.0-flash-001',
contents='why is the sky blue?',
)
print(response)
Compute tokens is only supported in Vertex AI.
response = client.models.compute_tokens(
model='gemini-2.0-flash-001',
contents='why is the sky blue?',
)
print(response)
Async¶
response = await client.aio.models.count_tokens(
model='gemini-2.0-flash-001',
contents='why is the sky blue?',
)
print(response)
Embed Content¶
response = client.models.embed_content(
model='text-embedding-004',
contents='why is the sky blue?',
)
print(response)
# multiple contents with config
response = client.models.embed_content(
model='text-embedding-004',
contents=['why is the sky blue?', 'What is your age?'],
config=types.EmbedContentConfig(output_dimensionality=10),
)
print(response)
Imagen¶
Support for image generation in the Gemini Developer API is behind an allowlist.
# Generate Image
response1 = client.models.generate_images(
model='imagen-3.0-generate-002',
prompt='An umbrella in the foreground, and a rainy night sky in the background',
config=types.GenerateImagesConfig(
number_of_images=1,
include_rai_reason=True,
output_mime_type='image/jpeg',
),
)
response1.generated_images[0].image.show()
Upscaling images is only supported in Vertex AI.
# Upscale the generated image from above
response2 = client.models.upscale_image(
model='imagen-3.0-generate-002',
image=response1.generated_images[0].image,
upscale_factor='x2',
config=types.UpscaleImageConfig(
include_rai_reason=True,
output_mime_type='image/jpeg',
),
)
response2.generated_images[0].image.show()
Editing images uses a separate model from generation and upscaling, and is only supported in Vertex AI.
# Edit the generated image from above
from google.genai.types import RawReferenceImage, MaskReferenceImage
raw_ref_image = RawReferenceImage(
reference_id=1,
reference_image=response1.generated_images[0].image,
)
# Model computes a mask of the background
mask_ref_image = MaskReferenceImage(
reference_id=2,
config=types.MaskReferenceConfig(
mask_mode='MASK_MODE_BACKGROUND',
mask_dilation=0,
),
)
response3 = client.models.edit_image(
model='imagen-3.0-capability-001',
prompt='Sunlight and clear sky',
reference_images=[raw_ref_image, mask_ref_image],
config=types.EditImageConfig(
edit_mode='EDIT_MODE_INPAINT_INSERTION',
number_of_images=1,
include_rai_reason=True,
output_mime_type='image/jpeg',
),
)
response3.generated_images[0].image.show()
Veo¶
Support for video generation in Vertex AI and the Gemini Developer API is behind an allowlist.
# Create operation
operation = client.models.generate_videos(
model='veo-2.0-generate-001',
prompt='A neon hologram of a cat driving at top speed',
config=types.GenerateVideosConfig(
number_of_videos=1,
fps=24,
duration_seconds=5,
enhance_prompt=True,
),
)
import time

# Poll the operation until the video is ready
while not operation.done:
    time.sleep(20)
    operation = client.operations.get(operation)
video = operation.result.generated_videos[0].video
video.show()
Chats¶
Create a chat session to start a multi-turn conversation with the model.
Send Message¶
chat = client.chats.create(model='gemini-2.0-flash-001')
response = chat.send_message('tell me a story')
print(response.text)
Streaming¶
chat = client.chats.create(model='gemini-2.0-flash-001')
for chunk in chat.send_message_stream('tell me a story'):
print(chunk.text, end='')
Async¶
chat = client.aio.chats.create(model='gemini-2.0-flash-001')
response = await chat.send_message('tell me a story')
print(response.text)
Async Streaming¶
chat = client.aio.chats.create(model='gemini-2.0-flash-001')
async for chunk in await chat.send_message_stream('tell me a story'):
print(chunk.text, end='')
Files¶
Files are only supported in Gemini Developer API.
gsutil cp gs://cloud-samples-data/generative-ai/pdf/2312.11805v3.pdf .
gsutil cp gs://cloud-samples-data/generative-ai/pdf/2403.05530.pdf .
Upload¶
file1 = client.files.upload(file='2312.11805v3.pdf')
file2 = client.files.upload(file='2403.05530.pdf')
print(file1)
print(file2)
Get¶
file1 = client.files.upload(file='2312.11805v3.pdf')
file_info = client.files.get(name=file1.name)
Delete¶
file3 = client.files.upload(file='2312.11805v3.pdf')
client.files.delete(name=file3.name)
Caches¶
client.caches contains the control plane APIs for cached content.
Create¶
if client.vertexai:
file_uris = [
'gs://cloud-samples-data/generative-ai/pdf/2312.11805v3.pdf',
'gs://cloud-samples-data/generative-ai/pdf/2403.05530.pdf',
]
else:
file_uris = [file1.uri, file2.uri]
cached_content = client.caches.create(
model='gemini-1.5-pro-002',
config=types.CreateCachedContentConfig(
contents=[
types.Content(
role='user',
parts=[
types.Part.from_uri(
file_uri=file_uris[0], mime_type='application/pdf'
),
types.Part.from_uri(
file_uri=file_uris[1],
mime_type='application/pdf',
),
],
)
],
system_instruction='What is the sum of the two pdfs?',
display_name='test cache',
ttl='3600s',
),
)
Get¶
cached_content = client.caches.get(name=cached_content.name)
Generate Content¶
response = client.models.generate_content(
model='gemini-1.5-pro-002',
contents='Summarize the pdfs',
config=types.GenerateContentConfig(
cached_content=cached_content.name,
),
)
print(response.text)
Tunings¶
client.tunings contains tuning job APIs and supports supervised fine-tuning through tune.
Tune¶
Vertex AI supports tuning from a GCS source; the Gemini Developer API supports tuning from inline examples.
if client.vertexai:
model = 'gemini-1.5-pro-002'
training_dataset = types.TuningDataset(
gcs_uri='gs://cloud-samples-data/ai-platform/generative_ai/gemini-1_5/text/sft_train_data.jsonl',
)
else:
model = 'models/gemini-1.0-pro-001'
training_dataset = types.TuningDataset(
examples=[
types.TuningExample(
text_input=f'Input text {i}',
output=f'Output text {i}',
)
for i in range(5)
],
)
tuning_job = client.tunings.tune(
base_model=model,
training_dataset=training_dataset,
config=types.CreateTuningJobConfig(
epoch_count=1, tuned_model_display_name='test_dataset_examples model'
),
)
print(tuning_job)
Get Tuning Job¶
tuning_job = client.tunings.get(name=tuning_job.name)
print(tuning_job)
import time
running_states = set(
[
'JOB_STATE_PENDING',
'JOB_STATE_RUNNING',
]
)
while tuning_job.state in running_states:
print(tuning_job.state)
tuning_job = client.tunings.get(name=tuning_job.name)
time.sleep(10)
response = client.models.generate_content(
model=tuning_job.tuned_model.endpoint,
contents='why is the sky blue?',
)
print(response.text)
Get Tuned Model¶
tuned_model = client.models.get(model=tuning_job.tuned_model.model)
print(tuned_model)
List Tuned Models¶
To retrieve base models, see: List Base Models
for model in client.models.list(config={'page_size': 10, 'query_base': False}):
print(model)
pager = client.models.list(config={'page_size': 10, 'query_base': False})
print(pager.page_size)
print(pager[0])
pager.next_page()
print(pager[0])
async for job in await client.aio.models.list(config={'page_size': 10, 'query_base': False}):
print(job)
async_pager = await client.aio.models.list(config={'page_size': 10, 'query_base': False})
print(async_pager.page_size)
print(async_pager[0])
await async_pager.next_page()
print(async_pager[0])
Update Tuned Model¶
model = pager[0]
model = client.models.update(
model=model.name,
config=types.UpdateModelConfig(
display_name='my tuned model', description='my tuned model description'
),
)
print(model)
List Tuning Jobs¶
for job in client.tunings.list(config={'page_size': 10}):
print(job)
pager = client.tunings.list(config={'page_size': 10})
print(pager.page_size)
print(pager[0])
pager.next_page()
print(pager[0])
async for job in await client.aio.tunings.list(config={'page_size': 10}):
print(job)
async_pager = await client.aio.tunings.list(config={'page_size': 10})
print(async_pager.page_size)
print(async_pager[0])
await async_pager.next_page()
print(async_pager[0])
Batch Prediction¶
Only supported in Vertex AI.
Create¶
# Specify model and source file only, destination and job display name will be auto-populated
job = client.batches.create(
model='gemini-1.5-flash-002',
src='bq://my-project.my-dataset.my-table',
)
job
# Get a job by name
job = client.batches.get(name=job.name)
job.state
completed_states = set(
[
'JOB_STATE_SUCCEEDED',
'JOB_STATE_FAILED',
'JOB_STATE_CANCELLED',
'JOB_STATE_PAUSED',
]
)
while job.state not in completed_states:
print(job.state)
job = client.batches.get(name=job.name)
time.sleep(30)
job
List¶
for job in client.batches.list(config=types.ListBatchJobsConfig(page_size=10)):
print(job)
pager = client.batches.list(config=types.ListBatchJobsConfig(page_size=10))
print(pager.page_size)
print(pager[0])
pager.next_page()
print(pager[0])
async for job in await client.aio.batches.list(
config=types.ListBatchJobsConfig(page_size=10)
):
print(job)
async_pager = await client.aio.batches.list(
config=types.ListBatchJobsConfig(page_size=10)
)
print(async_pager.page_size)
print(async_pager[0])
await async_pager.next_page()
print(async_pager[0])
Delete¶
# Delete the job resource
delete_job = client.batches.delete(name=job.name)
delete_job
Error Handling¶
To handle errors raised by the model, the SDK provides the APIError class (defined in https://github.com/googleapis/python-genai/blob/main/google/genai/errors.py).
from google.genai import errors

try:
    client.models.generate_content(
        model="invalid-model-name",
        contents="What is your name?",
    )
except errors.APIError as e:
    print(e.code)  # 404
    print(e.message)
Reference¶
- Submodules
- genai.client module
- genai.batches module
- genai.caches module
- genai.chats module
- genai.files module
- genai.live module
- genai.models module
AsyncModels
AsyncModels.compute_tokens()
AsyncModels.count_tokens()
AsyncModels.delete()
AsyncModels.edit_image()
AsyncModels.embed_content()
AsyncModels.generate_content()
AsyncModels.generate_content_stream()
AsyncModels.generate_images()
AsyncModels.generate_videos()
AsyncModels.get()
AsyncModels.list()
AsyncModels.update()
AsyncModels.upscale_image()
Models
- genai.tunings module
- genai.types module
AdapterSize
AutomaticFunctionCallingConfig
AutomaticFunctionCallingConfigDict
BatchJob
BatchJobDestination
BatchJobDestinationDict
BatchJobDict
BatchJobSource
BatchJobSourceDict
Blob
BlobDict
BlockedReason
CachedContent
CachedContentDict
CachedContentUsageMetadata
CachedContentUsageMetadataDict
CancelBatchJobConfig
CancelBatchJobConfigDict
Candidate
CandidateDict
Citation
CitationDict
CitationMetadata
CitationMetadataDict
CodeExecutionResult
CodeExecutionResultDict
ComputeTokensConfig
ComputeTokensConfigDict
ComputeTokensResponse
ComputeTokensResponseDict
Content
ContentDict
ContentEmbedding
ContentEmbeddingDict
ContentEmbeddingStatistics
ContentEmbeddingStatisticsDict
ControlReferenceConfig
ControlReferenceConfigDict
ControlReferenceImage
ControlReferenceImageDict
ControlReferenceType
CountTokensConfig
CountTokensConfigDict
CountTokensResponse
CountTokensResponseDict
CreateBatchJobConfig
CreateBatchJobConfigDict
CreateCachedContentConfig
CreateCachedContentConfigDict
CreateCachedContentConfigDict.contents
CreateCachedContentConfigDict.display_name
CreateCachedContentConfigDict.expire_time
CreateCachedContentConfigDict.http_options
CreateCachedContentConfigDict.system_instruction
CreateCachedContentConfigDict.tool_config
CreateCachedContentConfigDict.tools
CreateCachedContentConfigDict.ttl
CreateFileConfig
CreateFileConfigDict
CreateFileResponse
CreateFileResponseDict
CreateTuningJobConfig
CreateTuningJobConfig.adapter_size
CreateTuningJobConfig.batch_size
CreateTuningJobConfig.description
CreateTuningJobConfig.epoch_count
CreateTuningJobConfig.http_options
CreateTuningJobConfig.learning_rate
CreateTuningJobConfig.learning_rate_multiplier
CreateTuningJobConfig.tuned_model_display_name
CreateTuningJobConfig.validation_dataset
CreateTuningJobConfigDict
CreateTuningJobConfigDict.adapter_size
CreateTuningJobConfigDict.batch_size
CreateTuningJobConfigDict.description
CreateTuningJobConfigDict.epoch_count
CreateTuningJobConfigDict.http_options
CreateTuningJobConfigDict.learning_rate
CreateTuningJobConfigDict.learning_rate_multiplier
CreateTuningJobConfigDict.tuned_model_display_name
CreateTuningJobConfigDict.validation_dataset
DatasetDistribution
DatasetDistributionDict
DatasetDistributionDistributionBucket
DatasetDistributionDistributionBucketDict
DatasetStats
DatasetStats.total_billable_character_count
DatasetStats.total_tuning_character_count
DatasetStats.tuning_dataset_example_count
DatasetStats.tuning_step_count
DatasetStats.user_dataset_examples
DatasetStats.user_input_token_distribution
DatasetStats.user_message_per_example_distribution
DatasetStats.user_output_token_distribution
DatasetStatsDict
DatasetStatsDict.total_billable_character_count
DatasetStatsDict.total_tuning_character_count
DatasetStatsDict.tuning_dataset_example_count
DatasetStatsDict.tuning_step_count
DatasetStatsDict.user_dataset_examples
DatasetStatsDict.user_input_token_distribution
DatasetStatsDict.user_message_per_example_distribution
DatasetStatsDict.user_output_token_distribution
DeleteBatchJobConfig
DeleteBatchJobConfigDict
DeleteCachedContentConfig
DeleteCachedContentConfigDict
DeleteCachedContentResponse
DeleteCachedContentResponseDict
DeleteFileConfig
DeleteFileConfigDict
DeleteFileResponse
DeleteFileResponseDict
DeleteModelConfig
DeleteModelConfigDict
DeleteModelResponse
DeleteModelResponseDict
DeleteResourceJob
DeleteResourceJobDict
DeploymentResourcesType
DistillationDataStats
DistillationDataStatsDict
DistillationHyperParameters
DistillationHyperParametersDict
DistillationSpec
DistillationSpecDict
DownloadFileConfig
DownloadFileConfigDict
DynamicRetrievalConfig
DynamicRetrievalConfigDict
DynamicRetrievalConfigMode
EditImageConfig
EditImageConfig.aspect_ratio
EditImageConfig.base_steps
EditImageConfig.edit_mode
EditImageConfig.guidance_scale
EditImageConfig.http_options
EditImageConfig.include_rai_reason
EditImageConfig.include_safety_attributes
EditImageConfig.language
EditImageConfig.negative_prompt
EditImageConfig.number_of_images
EditImageConfig.output_compression_quality
EditImageConfig.output_gcs_uri
EditImageConfig.output_mime_type
EditImageConfig.person_generation
EditImageConfig.safety_filter_level
EditImageConfig.seed
EditImageConfigDict
EditImageConfigDict.aspect_ratio
EditImageConfigDict.base_steps
EditImageConfigDict.edit_mode
EditImageConfigDict.guidance_scale
EditImageConfigDict.http_options
EditImageConfigDict.include_rai_reason
EditImageConfigDict.include_safety_attributes
EditImageConfigDict.language
EditImageConfigDict.negative_prompt
EditImageConfigDict.number_of_images
EditImageConfigDict.output_compression_quality
EditImageConfigDict.output_gcs_uri
EditImageConfigDict.output_mime_type
EditImageConfigDict.person_generation
EditImageConfigDict.safety_filter_level
EditImageConfigDict.seed
EditImageResponse
EditImageResponseDict
EditMode
EmbedContentConfig
EmbedContentConfigDict
EmbedContentMetadata
EmbedContentMetadataDict
EmbedContentResponse
EmbedContentResponseDict
EncryptionSpec
EncryptionSpecDict
Endpoint
EndpointDict
ExecutableCode
ExecutableCodeDict
FetchPredictOperationConfig
FetchPredictOperationConfigDict
File
FileData
FileDataDict
FileDict
FileSource
FileState
FileStatus
FileStatusDict
FinishReason
FunctionCall
FunctionCallDict
FunctionCallingConfig
FunctionCallingConfigDict
FunctionCallingConfigMode
FunctionDeclaration
FunctionDeclarationDict
FunctionResponse
FunctionResponseDict
GenerateContentConfig
GenerateContentConfig.audio_timestamp
GenerateContentConfig.automatic_function_calling
GenerateContentConfig.cached_content
GenerateContentConfig.candidate_count
GenerateContentConfig.frequency_penalty
GenerateContentConfig.http_options
GenerateContentConfig.labels
GenerateContentConfig.logprobs
GenerateContentConfig.max_output_tokens
GenerateContentConfig.media_resolution
GenerateContentConfig.presence_penalty
GenerateContentConfig.response_logprobs
GenerateContentConfig.response_mime_type
GenerateContentConfig.response_modalities
GenerateContentConfig.response_schema
GenerateContentConfig.routing_config
GenerateContentConfig.safety_settings
GenerateContentConfig.seed
GenerateContentConfig.speech_config
GenerateContentConfig.stop_sequences
GenerateContentConfig.system_instruction
GenerateContentConfig.temperature
GenerateContentConfig.thinking_config
GenerateContentConfig.tool_config
GenerateContentConfig.tools
GenerateContentConfig.top_k
GenerateContentConfig.top_p
GenerateContentConfigDict
GenerateContentConfigDict.audio_timestamp
GenerateContentConfigDict.automatic_function_calling
GenerateContentConfigDict.cached_content
GenerateContentConfigDict.candidate_count
GenerateContentConfigDict.frequency_penalty
GenerateContentConfigDict.http_options
GenerateContentConfigDict.labels
GenerateContentConfigDict.logprobs
GenerateContentConfigDict.max_output_tokens
GenerateContentConfigDict.media_resolution
GenerateContentConfigDict.presence_penalty
GenerateContentConfigDict.response_logprobs
GenerateContentConfigDict.response_mime_type
GenerateContentConfigDict.response_modalities
GenerateContentConfigDict.response_schema
GenerateContentConfigDict.routing_config
GenerateContentConfigDict.safety_settings
GenerateContentConfigDict.seed
GenerateContentConfigDict.speech_config
GenerateContentConfigDict.stop_sequences
GenerateContentConfigDict.system_instruction
GenerateContentConfigDict.temperature
GenerateContentConfigDict.thinking_config
GenerateContentConfigDict.tool_config
GenerateContentConfigDict.tools
GenerateContentConfigDict.top_k
GenerateContentConfigDict.top_p
GenerateContentResponse
GenerateContentResponse.automatic_function_calling_history
GenerateContentResponse.candidates
GenerateContentResponse.create_time
GenerateContentResponse.model_version
GenerateContentResponse.parsed
GenerateContentResponse.prompt_feedback
GenerateContentResponse.response_id
GenerateContentResponse.usage_metadata
GenerateContentResponse.code_execution_result
GenerateContentResponse.executable_code
GenerateContentResponse.function_calls
GenerateContentResponse.text
GenerateContentResponseDict
GenerateContentResponsePromptFeedback
GenerateContentResponsePromptFeedbackDict
GenerateContentResponseUsageMetadata
GenerateContentResponseUsageMetadataDict
GenerateImagesConfig
GenerateImagesConfig.add_watermark
GenerateImagesConfig.aspect_ratio
GenerateImagesConfig.enhance_prompt
GenerateImagesConfig.guidance_scale
GenerateImagesConfig.http_options
GenerateImagesConfig.include_rai_reason
GenerateImagesConfig.include_safety_attributes
GenerateImagesConfig.language
GenerateImagesConfig.negative_prompt
GenerateImagesConfig.number_of_images
GenerateImagesConfig.output_compression_quality
GenerateImagesConfig.output_gcs_uri
GenerateImagesConfig.output_mime_type
GenerateImagesConfig.person_generation
GenerateImagesConfig.safety_filter_level
GenerateImagesConfig.seed
GenerateImagesConfigDict
GenerateImagesConfigDict.add_watermark
GenerateImagesConfigDict.aspect_ratio
GenerateImagesConfigDict.enhance_prompt
GenerateImagesConfigDict.guidance_scale
GenerateImagesConfigDict.http_options
GenerateImagesConfigDict.include_rai_reason
GenerateImagesConfigDict.include_safety_attributes
GenerateImagesConfigDict.language
GenerateImagesConfigDict.negative_prompt
GenerateImagesConfigDict.number_of_images
GenerateImagesConfigDict.output_compression_quality
GenerateImagesConfigDict.output_gcs_uri
GenerateImagesConfigDict.output_mime_type
GenerateImagesConfigDict.person_generation
GenerateImagesConfigDict.safety_filter_level
GenerateImagesConfigDict.seed
GenerateImagesResponse
GenerateImagesResponseDict
GenerateVideosConfig
GenerateVideosConfig.aspect_ratio
GenerateVideosConfig.duration_seconds
GenerateVideosConfig.enhance_prompt
GenerateVideosConfig.fps
GenerateVideosConfig.http_options
GenerateVideosConfig.negative_prompt
GenerateVideosConfig.number_of_videos
GenerateVideosConfig.output_gcs_uri
GenerateVideosConfig.person_generation
GenerateVideosConfig.pubsub_topic
GenerateVideosConfig.resolution
GenerateVideosConfig.seed
GenerateVideosConfigDict
GenerateVideosConfigDict.aspect_ratio
GenerateVideosConfigDict.duration_seconds
GenerateVideosConfigDict.enhance_prompt
GenerateVideosConfigDict.fps
GenerateVideosConfigDict.http_options
GenerateVideosConfigDict.negative_prompt
GenerateVideosConfigDict.number_of_videos
GenerateVideosConfigDict.output_gcs_uri
GenerateVideosConfigDict.person_generation
GenerateVideosConfigDict.pubsub_topic
GenerateVideosConfigDict.resolution
GenerateVideosConfigDict.seed
GenerateVideosOperation
GenerateVideosOperationDict
GenerateVideosResponse
GenerateVideosResponseDict
GeneratedImage
GeneratedImageDict
GeneratedVideo
GeneratedVideoDict
GenerationConfig
GenerationConfig.audio_timestamp
GenerationConfig.candidate_count
GenerationConfig.frequency_penalty
GenerationConfig.logprobs
GenerationConfig.max_output_tokens
GenerationConfig.presence_penalty
GenerationConfig.response_logprobs
GenerationConfig.response_mime_type
GenerationConfig.response_schema
GenerationConfig.routing_config
GenerationConfig.seed
GenerationConfig.stop_sequences
GenerationConfig.temperature
GenerationConfig.top_k
GenerationConfig.top_p
GenerationConfigDict
GenerationConfigDict.audio_timestamp
GenerationConfigDict.candidate_count
GenerationConfigDict.frequency_penalty
GenerationConfigDict.logprobs
GenerationConfigDict.max_output_tokens
GenerationConfigDict.presence_penalty
GenerationConfigDict.response_logprobs
GenerationConfigDict.response_mime_type
GenerationConfigDict.response_schema
GenerationConfigDict.routing_config
GenerationConfigDict.seed
GenerationConfigDict.stop_sequences
GenerationConfigDict.temperature
GenerationConfigDict.top_k
GenerationConfigDict.top_p
GenerationConfigRoutingConfig
GenerationConfigRoutingConfigAutoRoutingMode
GenerationConfigRoutingConfigAutoRoutingModeDict
GenerationConfigRoutingConfigDict
GenerationConfigRoutingConfigManualRoutingMode
GenerationConfigRoutingConfigManualRoutingModeDict
GetBatchJobConfig
GetBatchJobConfigDict
GetCachedContentConfig
GetCachedContentConfigDict
GetFileConfig
GetFileConfigDict
GetModelConfig
GetModelConfigDict
GetOperationConfig
GetOperationConfigDict
GetTuningJobConfig
GetTuningJobConfigDict
GoogleRpcStatus
GoogleRpcStatusDict
GoogleSearch
GoogleSearchDict
GoogleSearchRetrieval
GoogleSearchRetrievalDict
GoogleTypeDate
GoogleTypeDateDict
GroundingChunk
GroundingChunkDict
GroundingChunkRetrievedContext
GroundingChunkRetrievedContextDict
GroundingChunkWeb
GroundingChunkWebDict
GroundingMetadata
GroundingMetadataDict
GroundingSupport
GroundingSupportDict
HarmBlockMethod
HarmBlockThreshold
HarmCategory
HarmProbability
HarmSeverity
HttpOptions
HttpOptionsDict
Image
ImageDict
ImagePromptLanguage
JobError
JobErrorDict
JobState
JobState.JOB_STATE_CANCELLED
JobState.JOB_STATE_CANCELLING
JobState.JOB_STATE_EXPIRED
JobState.JOB_STATE_FAILED
JobState.JOB_STATE_PARTIALLY_SUCCEEDED
JobState.JOB_STATE_PAUSED
JobState.JOB_STATE_PENDING
JobState.JOB_STATE_QUEUED
JobState.JOB_STATE_RUNNING
JobState.JOB_STATE_SUCCEEDED
JobState.JOB_STATE_UNSPECIFIED
JobState.JOB_STATE_UPDATING
Language
ListBatchJobsConfig
ListBatchJobsConfigDict
ListBatchJobsResponse
ListBatchJobsResponseDict
ListCachedContentsConfig
ListCachedContentsConfigDict
ListCachedContentsResponse
ListCachedContentsResponseDict
ListFilesConfig
ListFilesConfigDict
ListFilesResponse
ListFilesResponseDict
ListModelsConfig
ListModelsConfigDict
ListModelsResponse
ListModelsResponseDict
ListTuningJobsConfig
ListTuningJobsConfigDict
ListTuningJobsResponse
ListTuningJobsResponseDict
LiveClientContent
LiveClientContentDict
LiveClientMessage
LiveClientMessageDict
LiveClientRealtimeInput
LiveClientRealtimeInputDict
LiveClientSetup
LiveClientSetupDict
LiveClientToolResponse
LiveClientToolResponseDict
LiveConnectConfig
LiveConnectConfigDict
LiveServerContent
LiveServerContentDict
LiveServerMessage
LiveServerMessageDict
LiveServerSetupComplete
LiveServerSetupCompleteDict
LiveServerToolCall
LiveServerToolCallCancellation
LiveServerToolCallCancellationDict
LiveServerToolCallDict
LogprobsResult
LogprobsResultCandidate
LogprobsResultCandidateDict
LogprobsResultDict
LogprobsResultTopCandidates
LogprobsResultTopCandidatesDict
MaskReferenceConfig
MaskReferenceConfigDict
MaskReferenceImage
MaskReferenceImageDict
MaskReferenceMode
MediaResolution
Modality
Mode
Model
ModelContent
ModelDict
Operation
OperationDict
Outcome
Part
Part.code_execution_result
Part.executable_code
Part.file_data
Part.function_call
Part.function_response
Part.inline_data
Part.text
Part.thought
Part.video_metadata
Part.from_bytes()
Part.from_code_execution_result()
Part.from_executable_code()
Part.from_function_call()
Part.from_function_response()
Part.from_text()
Part.from_uri()
Part.from_video_metadata()
PartDict
PartnerModelTuningSpec
PartnerModelTuningSpecDict
PersonGeneration
PrebuiltVoiceConfig
PrebuiltVoiceConfigDict
RawReferenceImage
RawReferenceImageDict
ReplayFile
ReplayFileDict
ReplayInteraction
ReplayInteractionDict
ReplayRequest
ReplayRequestDict
ReplayResponse
ReplayResponseDict
Retrieval
RetrievalDict
RetrievalMetadata
RetrievalMetadataDict
SafetyAttributes
SafetyAttributesDict
SafetyFilterLevel
SafetyRating
SafetyRatingDict
SafetySetting
SafetySettingDict
Schema
Schema.any_of
Schema.default
Schema.description
Schema.enum
Schema.example
Schema.format
Schema.items
Schema.max_items
Schema.max_length
Schema.max_properties
Schema.maximum
Schema.min_items
Schema.min_length
Schema.min_properties
Schema.minimum
Schema.nullable
Schema.pattern
Schema.properties
Schema.property_ordering
Schema.required
Schema.title
Schema.type
SchemaDict
SchemaDict.any_of
SchemaDict.default
SchemaDict.description
SchemaDict.enum
SchemaDict.example
SchemaDict.format
SchemaDict.max_items
SchemaDict.max_length
SchemaDict.max_properties
SchemaDict.maximum
SchemaDict.min_items
SchemaDict.min_length
SchemaDict.min_properties
SchemaDict.minimum
SchemaDict.nullable
SchemaDict.pattern
SchemaDict.properties
SchemaDict.property_ordering
SchemaDict.required
SchemaDict.title
SchemaDict.type
SearchEntryPoint
SearchEntryPointDict
Segment
SegmentDict
SpeechConfig
SpeechConfigDict
State
StyleReferenceConfig
StyleReferenceConfigDict
StyleReferenceImage
StyleReferenceImageDict
SubjectReferenceConfig
SubjectReferenceConfigDict
SubjectReferenceImage
SubjectReferenceImageDict
SubjectReferenceType
SupervisedHyperParameters
SupervisedHyperParametersDict
SupervisedTuningDataStats
SupervisedTuningDataStats.total_billable_character_count
SupervisedTuningDataStats.total_billable_token_count
SupervisedTuningDataStats.total_truncated_example_count
SupervisedTuningDataStats.total_tuning_character_count
SupervisedTuningDataStats.truncated_example_indices
SupervisedTuningDataStats.tuning_dataset_example_count
SupervisedTuningDataStats.tuning_step_count
SupervisedTuningDataStats.user_dataset_examples
SupervisedTuningDataStats.user_input_token_distribution
SupervisedTuningDataStats.user_message_per_example_distribution
SupervisedTuningDataStats.user_output_token_distribution
SupervisedTuningDataStatsDict
SupervisedTuningDataStatsDict.total_billable_character_count
SupervisedTuningDataStatsDict.total_billable_token_count
SupervisedTuningDataStatsDict.total_truncated_example_count
SupervisedTuningDataStatsDict.total_tuning_character_count
SupervisedTuningDataStatsDict.truncated_example_indices
SupervisedTuningDataStatsDict.tuning_dataset_example_count
SupervisedTuningDataStatsDict.tuning_step_count
SupervisedTuningDataStatsDict.user_dataset_examples
SupervisedTuningDataStatsDict.user_input_token_distribution
SupervisedTuningDataStatsDict.user_message_per_example_distribution
SupervisedTuningDataStatsDict.user_output_token_distribution
SupervisedTuningDatasetDistribution
SupervisedTuningDatasetDistribution.billable_sum
SupervisedTuningDatasetDistribution.buckets
SupervisedTuningDatasetDistribution.max
SupervisedTuningDatasetDistribution.mean
SupervisedTuningDatasetDistribution.median
SupervisedTuningDatasetDistribution.min
SupervisedTuningDatasetDistribution.p5
SupervisedTuningDatasetDistribution.p95
SupervisedTuningDatasetDistribution.sum
SupervisedTuningDatasetDistributionDatasetBucket
SupervisedTuningDatasetDistributionDatasetBucketDict
SupervisedTuningDatasetDistributionDict
SupervisedTuningDatasetDistributionDict.billable_sum
SupervisedTuningDatasetDistributionDict.buckets
SupervisedTuningDatasetDistributionDict.max
SupervisedTuningDatasetDistributionDict.mean
SupervisedTuningDatasetDistributionDict.median
SupervisedTuningDatasetDistributionDict.min
SupervisedTuningDatasetDistributionDict.p5
SupervisedTuningDatasetDistributionDict.p95
SupervisedTuningDatasetDistributionDict.sum
SupervisedTuningSpec
SupervisedTuningSpecDict
TestTableFile
TestTableFileDict
TestTableItem
TestTableItemDict
ThinkingConfig
ThinkingConfigDict
TokensInfo
TokensInfoDict
Tool
ToolCodeExecution
ToolCodeExecutionDict
ToolConfig
ToolConfigDict
ToolDict
TunedModel
TunedModelDict
TunedModelInfo
TunedModelInfoDict
TuningDataStats
TuningDataStatsDict
TuningDataset
TuningDatasetDict
TuningExample
TuningExampleDict
TuningJob
TuningJob.base_model
TuningJob.create_time
TuningJob.description
TuningJob.distillation_spec
TuningJob.encryption_spec
TuningJob.end_time
TuningJob.error
TuningJob.experiment
TuningJob.labels
TuningJob.name
TuningJob.partner_model_tuning_spec
TuningJob.pipeline_job
TuningJob.start_time
TuningJob.state
TuningJob.supervised_tuning_spec
TuningJob.tuned_model
TuningJob.tuned_model_display_name
TuningJob.tuning_data_stats
TuningJob.update_time
TuningJob.has_ended
TuningJob.has_succeeded
TuningJobDict
TuningJobDict.base_model
TuningJobDict.create_time
TuningJobDict.description
TuningJobDict.distillation_spec
TuningJobDict.encryption_spec
TuningJobDict.end_time
TuningJobDict.error
TuningJobDict.experiment
TuningJobDict.labels
TuningJobDict.name
TuningJobDict.partner_model_tuning_spec
TuningJobDict.pipeline_job
TuningJobDict.start_time
TuningJobDict.state
TuningJobDict.supervised_tuning_spec
TuningJobDict.tuned_model
TuningJobDict.tuned_model_display_name
TuningJobDict.tuning_data_stats
TuningJobDict.update_time
TuningValidationDataset
TuningValidationDatasetDict
Type
UpdateCachedContentConfig
UpdateCachedContentConfigDict
UpdateModelConfig
UpdateModelConfigDict
UploadFileConfig
UploadFileConfigDict
UpscaleImageConfig
UpscaleImageConfigDict
UpscaleImageParameters
UpscaleImageParametersDict
UpscaleImageResponse
UpscaleImageResponseDict
UserContent
VertexAISearch
VertexAISearchDict
VertexRagStore
VertexRagStoreDict
VertexRagStoreRagResource
VertexRagStoreRagResourceDict
Video
VideoDict
VideoMetadata
VideoMetadataDict
VoiceConfig
VoiceConfigDict