Generate text using the Vertex AI Gemini API
In this guide, you send a text-only prompt, and then a multimodal prompt that includes an image, to the Vertex AI Gemini API and view the responses.
Prerequisites
To complete this guide, you must have a Google Cloud project with the Vertex AI API enabled. You can use the Vertex AI setup guide to complete these steps.
Add the Vertex AI client library as a dependency
The Vertex AI client library includes many features, and compiling all of them is relatively slow. To speed up compilation, enable only the features you need:
cargo add google-cloud-aiplatform-v1 --no-default-features --features prediction-service
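After running the command, the dependency entry in Cargo.toml looks similar to the following. The version number shown here is illustrative; cargo add selects the latest published release:

```toml
[dependencies]
google-cloud-aiplatform-v1 = { version = "0.4", default-features = false, features = ["prediction-service"] }
```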
Send a prompt to the Vertex AI Gemini API
First, initialize the client using the default settings:
let client = PredictionService::builder().build().await?;
Then build the model name. For simplicity, this example receives the project ID as an argument, uses a fixed location (global), and hard-codes a specific version of the Gemini model.
const MODEL: &str = "gemini-2.5-flash";
let model = format!("projects/{project_id}/locations/global/publishers/google/models/{MODEL}");
If you want to run this function in your own code, use the project ID (without any projects/ prefix) of the project you selected while going through the prerequisites.
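As a standalone sketch of the resource-name construction above, where model_name is a hypothetical helper (not part of the client library) and my-project is a placeholder project ID:

```rust
const MODEL: &str = "gemini-2.5-flash";

// Hypothetical helper that builds the fully-qualified model resource name
// used by the request. `project_id` is the bare project ID, without any
// `projects/` prefix.
fn model_name(project_id: &str) -> String {
    format!("projects/{project_id}/locations/global/publishers/google/models/{MODEL}")
}

fn main() {
    // "my-project" is a placeholder; substitute your own project ID.
    println!("{}", model_name("my-project"));
    // → projects/my-project/locations/global/publishers/google/models/gemini-2.5-flash
}
```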
With the client initialized, you can send the request:
let response = client
    .generate_content()
    .set_model(&model)
    .set_contents([Content::new()
        .set_role("user")
        .set_parts([Part::new().set_text(
            "What's a good name for a flower shop that specializes in selling bouquets of dried flowers?",
        )])])
    .send()
    .await;
Then print the response. You can use the :#? format specifier to pretty-print the nested response objects:
println!("RESPONSE = {response:#?}");
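The :#? specifier is part of Rust's standard Debug formatting, not something specific to the client library. As a minimal standalone illustration (the Reply struct and its values are invented for this example):

```rust
// A made-up struct standing in for a nested response type.
#[derive(Debug)]
struct Reply {
    role: String,
    text: String,
}

fn main() {
    let reply = Reply {
        role: "model".to_string(),
        text: "Dried & True".to_string(),
    };
    // {:?} prints on one line; {:#?} adds line breaks and indentation:
    //
    // Reply {
    //     role: "model",
    //     text: "Dried & True",
    // }
    println!("{reply:#?}");
}
```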
Review the complete code.
Send a prompt and an image to the Vertex AI Gemini API
As in the previous example, initialize the client using the default settings:
let client = PredictionService::builder().build().await?;
And then build the model name:
const MODEL: &str = "gemini-2.5-flash";
let model = format!("projects/{project_id}/locations/global/publishers/google/models/{MODEL}");
The new request includes an image part:
Part::new().set_file_data(
    FileData::new()
        .set_mime_type("image/jpeg")
        .set_file_uri("gs://generativeai-downloads/images/scones.jpg"),
),
And the prompt part:
Part::new().set_text("Describe this picture."),
Send the full request:
let response = client
    .generate_content()
    .set_model(&model)
    .set_contents([Content::new().set_role("user").set_parts([
        Part::new().set_file_data(
            FileData::new()
                .set_mime_type("image/jpeg")
                .set_file_uri("gs://generativeai-downloads/images/scones.jpg"),
        ),
        Part::new().set_text("Describe this picture."),
    ])])
    .send()
    .await;
As in the previous example, print the full response:
println!("RESPONSE = {response:#?}");
Review the complete code.
Text prompt: complete code
use google_cloud_aiplatform_v1::client::PredictionService;
use google_cloud_aiplatform_v1::model::{Content, Part};

pub async fn sample(project_id: &str) -> anyhow::Result<()> {
    let client = PredictionService::builder().build().await?;

    const MODEL: &str = "gemini-2.5-flash";
    let model = format!("projects/{project_id}/locations/global/publishers/google/models/{MODEL}");

    let response = client
        .generate_content()
        .set_model(&model)
        .set_contents([Content::new()
            .set_role("user")
            .set_parts([Part::new().set_text(
                "What's a good name for a flower shop that specializes in selling bouquets of dried flowers?",
            )])])
        .send()
        .await;
    println!("RESPONSE = {response:#?}");

    Ok(())
}
Prompt and image: complete code
use google_cloud_aiplatform_v1::client::PredictionService;
use google_cloud_aiplatform_v1::model::{Content, FileData, Part};

pub async fn sample(project_id: &str) -> anyhow::Result<()> {
    let client = PredictionService::builder().build().await?;

    const MODEL: &str = "gemini-2.5-flash";
    let model = format!("projects/{project_id}/locations/global/publishers/google/models/{MODEL}");

    let response = client
        .generate_content()
        .set_model(&model)
        .set_contents([Content::new().set_role("user").set_parts([
            Part::new().set_file_data(
                FileData::new()
                    .set_mime_type("image/jpeg")
                    .set_file_uri("gs://generativeai-downloads/images/scones.jpg"),
            ),
            Part::new().set_text("Describe this picture."),
        ])])
        .send()
        .await;
    println!("RESPONSE = {response:#?}");

    Ok(())
}