Generate text using the Vertex AI Gemini API
In this guide, you send a text prompt request to the Vertex AI Gemini API, then a multimodal request combining a prompt and an image, and view the responses.
Prerequisites
To complete this guide, you must have a Google Cloud project with the Vertex AI API enabled. You can use the Vertex AI setup guide to complete these steps.
Add the Vertex AI client library as a dependency
The Vertex AI client library includes many features, and compiling all of them is relatively slow. To speed up compilation, enable only the features you need:
cargo add google-cloud-aiplatform-v1 --no-default-features --features prediction-service
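This adds a dependency entry along the following lines to your Cargo.toml (the version shown here is illustrative; cargo add picks the latest release):

```toml
[dependencies]
# --no-default-features disables the default feature set;
# --features prediction-service enables only the prediction client.
google-cloud-aiplatform-v1 = { version = "0.2", default-features = false, features = ["prediction-service"] }
```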
Send a prompt to the Vertex AI Gemini API
First, initialize the client using the default settings:
use google_cloud_aiplatform_v1 as vertexai;
let client = vertexai::client::PredictionService::builder()
    .build()
    .await?;
Then build the model name. For simplicity, this example receives the project ID as an argument and uses a fixed location (global) and model ID (gemini-2.0-flash-001).
const MODEL: &str = "gemini-2.0-flash-001";
let model = format!("projects/{project_id}/locations/global/publishers/google/models/{MODEL}");
If you want to run this function in your own code, use the project ID (without any projects/ prefix) of the project you selected while going through the prerequisites.
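If you build this resource name in more than one place, a small helper keeps the format consistent. A minimal sketch (the model_name helper is illustrative, not part of the client library):

```rust
/// Builds the fully qualified Vertex AI model resource name.
/// Illustrative helper; not part of the client library.
fn model_name(project_id: &str, location: &str, model: &str) -> String {
    format!("projects/{project_id}/locations/{location}/publishers/google/models/{model}")
}

fn main() {
    let model = model_name("my-project", "global", "gemini-2.0-flash-001");
    println!("{model}");
}
```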
With the client initialized, you can send the request:
let response = client
    .generate_content()
    .set_model(&model)
    .set_contents([vertexai::model::Content::new().set_role("user").set_parts([
        vertexai::model::Part::new().set_text("What's a good name for a flower shop that specializes in selling bouquets of dried flowers?"),
    ])])
    .send()
    .await;
And then print the response. You can use the :#? format specifier to pretty-print the nested response objects:
println!("RESPONSE = {response:#?}");
See below for the complete code.
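Note that send().await returns a Result, so the Debug-print above shows either the response or the error. In your own code you may prefer to branch explicitly. A generic sketch (T and E stand in for the crate's concrete response and error types):

```rust
// Sketch: branching on the Result returned by `send().await`.
// `T` and `E` stand in for the crate's concrete response and error types.
fn report<T: std::fmt::Debug, E: std::fmt::Debug>(response: Result<T, E>) -> bool {
    match response {
        Ok(resp) => {
            println!("RESPONSE = {resp:#?}");
            true
        }
        Err(e) => {
            eprintln!("request failed: {e:?}");
            false
        }
    }
}

fn main() {
    let ok: Result<&str, String> = Ok("hello");
    let err: Result<&str, String> = Err("boom".to_string());
    assert!(report(ok));
    assert!(!report(err));
}
```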
Send a prompt and an image to the Vertex AI Gemini API
As in the previous example, initialize the client using the default settings:
use google_cloud_aiplatform_v1 as vertexai;
let client = vertexai::client::PredictionService::builder()
    .build()
    .await?;
And then build the model name:
const MODEL: &str = "gemini-2.0-flash-001";
let model = format!("projects/{project_id}/locations/global/publishers/google/models/{MODEL}");
The new request includes an image part:
vertexai::model::Part::new().set_file_data(
    vertexai::model::FileData::new()
        .set_mime_type("image/jpeg")
        .set_file_uri("gs://generativeai-downloads/images/scones.jpg"),
),
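The MIME type must match the file the URI points to. If you handle several image formats, a small lookup can avoid mismatches; an illustrative helper (not part of the client library):

```rust
/// Maps a file extension to a matching image MIME type.
/// Illustrative helper; not part of the client library.
fn image_mime_type(uri: &str) -> Option<&'static str> {
    // `rsplit` yields the text after the last '.' first.
    let ext = uri.rsplit('.').next()?;
    match ext.to_ascii_lowercase().as_str() {
        "jpg" | "jpeg" => Some("image/jpeg"),
        "png" => Some("image/png"),
        "webp" => Some("image/webp"),
        _ => None,
    }
}

fn main() {
    let uri = "gs://generativeai-downloads/images/scones.jpg";
    println!("{:?}", image_mime_type(uri));
}
```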
And the prompt part:
vertexai::model::Part::new().set_text("Describe this picture."),
Send the full request:
let response = client
    .generate_content()
    .set_model(&model)
    .set_contents(
        [vertexai::model::Content::new().set_role("user").set_parts([
            vertexai::model::Part::new().set_file_data(
                vertexai::model::FileData::new()
                    .set_mime_type("image/jpeg")
                    .set_file_uri("gs://generativeai-downloads/images/scones.jpg"),
            ),
            vertexai::model::Part::new().set_text("Describe this picture."),
        ])],
    )
    .send()
    .await;
As in the previous example, print the full response:
println!("RESPONSE = {response:#?}");
See below for the complete code.
Text prompt: complete code
pub async fn text_prompt(project_id: &str) -> crate::Result<()> {
    use google_cloud_aiplatform_v1 as vertexai;
    let client = vertexai::client::PredictionService::builder()
        .build()
        .await?;
    const MODEL: &str = "gemini-2.0-flash-001";
    let model = format!("projects/{project_id}/locations/global/publishers/google/models/{MODEL}");
    let response = client
        .generate_content()
        .set_model(&model)
        .set_contents([vertexai::model::Content::new().set_role("user").set_parts([
            vertexai::model::Part::new().set_text("What's a good name for a flower shop that specializes in selling bouquets of dried flowers?"),
        ])])
        .send()
        .await;
    println!("RESPONSE = {response:#?}");
    Ok(())
}
Prompt and image: complete code
pub async fn prompt_and_image(project_id: &str) -> crate::Result<()> {
    use google_cloud_aiplatform_v1 as vertexai;
    let client = vertexai::client::PredictionService::builder()
        .build()
        .await?;
    const MODEL: &str = "gemini-2.0-flash-001";
    let model = format!("projects/{project_id}/locations/global/publishers/google/models/{MODEL}");
    let response = client
        .generate_content()
        .set_model(&model)
        .set_contents(
            [vertexai::model::Content::new().set_role("user").set_parts([
                vertexai::model::Part::new().set_file_data(
                    vertexai::model::FileData::new()
                        .set_mime_type("image/jpeg")
                        .set_file_uri("gs://generativeai-downloads/images/scones.jpg"),
                ),
                vertexai::model::Part::new().set_text("Describe this picture."),
            ])],
        )
        .send()
        .await;
    println!("RESPONSE = {response:#?}");
    Ok(())
}