Introduction

The Google Cloud Client Libraries for Rust are a collection of Rust crates for interacting with Google Cloud services.

This guide is organized as a series of small tutorials showing how to perform specific actions with the client libraries. Most Google Cloud services follow a set of guidelines, collectively known as the AIPs. This makes the client libraries more consistent from one service to the next. For example, the functions to delete or list resources almost always have the same interface.
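
As an illustration of that consistency, the sketch below shows the request-builder and paginator shape that listing methods share. It assumes a Secret Manager client named client and a project_id string, both introduced later in this guide:

// Listing secrets and listing locations use the same shape:
// a request builder, then a paginator over the results.
let mut secrets = client
    .list_secrets(format!("projects/{project_id}"))
    .paginator()
    .await
    .items();
let mut locations = client
    .list_locations(format!("projects/{project_id}"))
    .paginator()
    .await;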

Audience

This guide is intended for Rust developers who are familiar with the language and the Rust ecosystem. We assume you know how to use Rust and its supporting toolchain.

At the risk of being repetitive, most of the guides do not assume you have used any Google Cloud service or client library before, in Rust or any other language. However, the guides will refer you to service-specific tutorials to initialize their projects and services.

Service-specific documentation

These guides are not intended as tutorials for each service, or as extended guides on how to design Rust applications for Google Cloud. They are starting points to get you productive with the client libraries for Rust.

We recommend you read the service documentation at https://cloud.google.com to learn more about each service. If you need guidance on how to design your application for Google Cloud, the Cloud Architecture Center may have what you are looking for.

Reporting bugs

We welcome bug reports about the client libraries or their documentation. Please use GitHub Issues.

License

The client libraries source and their documentation are released under the Apache License, Version 2.0.

Setting up your development environment

Prepare your environment for Rust app development and deployment on Google Cloud by installing the following tools.

Install Rust

  1. To install Rust, see Getting Started.

  2. Confirm that you have the most recent version of Rust installed:

    cargo --version

Install an editor

The Getting Started guide links to popular editor plugins and IDEs, which provide the following features:

  • Fully integrated debugging capabilities
  • Syntax highlighting
  • Code completion

Install the Google Cloud CLI

The Google Cloud CLI is a set of tools for Google Cloud. It contains the gcloud and bq command-line tools used to access Compute Engine, Cloud Storage, BigQuery, and other services from the command line. You can run these tools interactively or in your automated scripts.

To install the gcloud CLI, see Installing the gcloud CLI.

Install the Cloud Client Libraries for Rust in a New Project

The [Cloud Client Libraries for Rust] are the idiomatic way for Rust developers to integrate with Google Cloud services, such as Secret Manager and Workflows.

For example, to use the package for an individual API, such as the Secret Manager API, do the following:

  1. Create a new Rust project:

    cargo new my-project
  2. Change your directory to the new project:

    cd my-project
  3. Add the [Secret Manager] client library to the new project:

    cargo add google-cloud-secretmanager-v1
  4. Add the tokio crate to the new project:

    cargo add tokio --features macros
  5. Edit src/main.rs in your project to use the Secret Manager client library:

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    use google_cloud_secretmanager_v1::client::SecretManagerService;

    let project_id = std::env::args().nth(1).unwrap();
    let client = SecretManagerService::new().await?;
    let mut items = client
        .list_secrets(format!("projects/{project_id}"))
        .paginator()
        .await
        .items();
    while let Some(item) = items.next().await {
        println!("{}", item?.name);
    }
    Ok(())
}
  6. Build your program:

    cargo build

    The program should build without errors.

Note: The source of the Cloud Client Libraries for Rust is on GitHub.

Running the program

  1. To use the Cloud Client Libraries in a local development environment, set up Application Default Credentials.

    gcloud auth application-default login

    For more information, see Authenticate for using client libraries.

  2. Run your program, replacing [PROJECT ID] with the ID of your project:

    cargo run [PROJECT ID]

What's next

Setting up Rust on Cloud Shell

Cloud Shell is a great environment to run small examples and tests.

Start up Cloud Shell

  1. Open https://shell.cloud.google.com to start a new shell.

  2. Select a project.

Configure Rust

  1. Cloud Shell comes with rustup pre-installed. You can use it to install and configure the default version of Rust:

    rustup default stable
  2. Confirm that you have the most recent version of Rust installed:

    cargo --version

Install Rust client libraries in Cloud Shell

  1. Create a new Rust project:

    cargo new my-project
  2. Change your directory to the new project:

    cd my-project
  3. Add the [Secret Manager] client library to the new project:

    cargo add google-cloud-secretmanager-v1
  4. Add the tokio crate to the new project:

    cargo add tokio --features macros
  5. Edit src/main.rs in your project to use the Secret Manager client library:

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    use google_cloud_secretmanager_v1::client::SecretManagerService;

    let project_id = std::env::args().nth(1).unwrap();
    let client = SecretManagerService::new().await?;
    let mut items = client
        .list_secrets(format!("projects/{project_id}"))
        .paginator()
        .await
        .items();
    while let Some(item) = items.next().await {
        println!("{}", item?.name);
    }
    Ok(())
}
  6. Run your program, replacing [PROJECT ID] with the ID of your project:

    cargo run [PROJECT ID]

How to initialize a client

The Google Cloud Client Libraries for Rust use "clients" as the main abstraction to interface with specific services. Clients are implemented as Rust structs, with methods corresponding to each RPC offered by the service. In other words, to use a Google Cloud service through the Rust client libraries, you first need to initialize a client.

Prerequisites

In this guide we will initialize a client and then use the client to make a simple RPC. To make this guide concrete, we will use the Secret Manager API. The same structure applies to any other service in Google Cloud.

We recommend you follow one of the "Getting Started" guides for Secret Manager before attempting to use the client library, such as how to Create a secret. These guides cover service-specific concepts in more detail, and provide more detailed instructions on project prerequisites than we can fit in this guide.

We also recommend you follow the instructions in the Authenticate for using client libraries guide. It shows how to log in and configure the Application Default Credentials used in this guide.

Dependencies

As is usual with Rust, you must declare the dependency in your Cargo.toml file. We use:

google-cloud-secretmanager-v1 = { version = "0.2", path = "../../src/generated/cloud/secretmanager/v1" }

The default initialization is designed to meet the requirements for most cases. You create a default client using new():

let client = SecretManagerService::new().await?;

Once successfully initialized, you can use this client to make RPCs:

let mut items = client
    .list_locations(format!("projects/{project_id}"))
    .paginator()
    .await;
while let Some(page) = items.next().await {
    let page = page?;
    for location in page.locations {
        println!("{}", location.name);
    }
}

Full program

Putting all this code together into a full program looks as follows:

pub type Result = std::result::Result<(), Box<dyn std::error::Error>>;

pub async fn initialize_client(project_id: &str) -> Result {
    use google_cloud_secretmanager_v1::client::SecretManagerService;

    // Initialize a client with the default configuration. This is an
    // asynchronous operation that may fail, as it requires acquiring an
    // access token.
    let client = SecretManagerService::new().await?;

    // Once initialized, use the client to make requests.
    let mut items = client
        .list_locations(format!("projects/{project_id}"))
        .paginator()
        .await;
    while let Some(page) = items.next().await {
        let page = page?;
        for location in page.locations {
            println!("{}", location.name);
        }
    }
    Ok(())
}
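
To run this function as a standalone program, you can drive it from an async main. The following is a minimal sketch, assuming the function and the Result alias above live in the same crate and the project ID is passed as the first command-line argument:

// A minimal, illustrative entry point for the function above.
#[tokio::main]
async fn main() -> Result {
    let project_id = std::env::args()
        .nth(1)
        .expect("usage: <program> PROJECT_ID");
    initialize_client(&project_id).await
}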

Working with long-running operations

Occasionally, an API may need to expose a method that takes a significant amount of time to complete. In these situations, it is often a poor user experience to simply block while the task runs; rather, it is better to return some kind of promise to the user and allow the user to check back in later.

The Google Cloud Client Libraries for Rust provide helpers to work with these long-running operations (LROs). This guide will show you how to start LROs and wait for their completion.

Prerequisites

The guide uses the Speech-To-Text V2 service to keep the code snippets concrete. The same ideas work for any other service using LROs.

We recommend you first follow one of the service guides, such as Transcribe speech to text by using the command line. These guides will cover critical topics such as ensuring your project has the API enabled, your account has the right permissions, and how to set up billing for your project (if needed). Skipping the service guides may result in problems that are hard to diagnose.

Dependencies

As is usual with Rust, you must declare the dependency in your Cargo.toml file. We use:

google-cloud-speech-v2 = { version = "0.2", path = "../../src/generated/cloud/speech/v2" }

Starting a long-running operation

To start a long-running operation, first initialize a client as usual and then make the RPC. But first, add some use declarations to avoid the long package names:

use google_cloud_longrunning as longrunning;
use google_cloud_speech_v2 as speech;

Now create the client:

let client = speech::client::Speech::new().await?;

We will use batch recognize in this example. While this is designed for long audio files, it works well with small files too.

In Rust, each request is represented by a method that returns a request builder. First, call the right method on the client to create the request builder. We will use the default recognizer (_) in the global region.

let operation = client
    .batch_recognize(format!(
        "projects/{project_id}/locations/global/recognizers/_"
    ))

Then initialize the request to use a publicly available audio file:

    .set_files([speech::model::BatchRecognizeFileMetadata::new()
        .set_uri("gs://cloud-samples-data/speech/hello.wav")])

Configure the request to return the transcripts inline:

    .set_recognition_output_config(
        speech::model::RecognitionOutputConfig::new()
            .set_inline_response_config(speech::model::InlineOutputConfig::new()),
    )

Then configure the service to transcribe to US English, using the short model and some other default configuration:

    .set_processing_strategy(
        speech::model::batch_recognize_request::ProcessingStrategy::DYNAMIC_BATCHING,
    )
    .set_config(
        speech::model::RecognitionConfig::new()
            .set_language_codes(["en-US"])
            .set_model("short")
            .set_auto_decoding_config(speech::model::AutoDetectDecodingConfig::new()),
    )

Then make the request and wait for Operation to be returned. This Operation acts as the promise to the result of the long-running request:

    .send()
    .await?;
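
The fields of the returned longrunning::model::Operation that matter for the rest of this guide are sketched below; the list is abridged to what this guide actually uses:

// Abridged view of the Operation fields used in this guide:
// - operation.name: the resource name used to poll for status
// - operation.done: true once the operation has finished
// - operation.metadata: optional progress information, as a type-erased message
// - operation.result: the final outcome, either an error or a response
println!("operation name={}, done={}", operation.name, operation.done);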

Finally, we need to poll this promise until it completes:

let response = manually_poll_lro(client, operation).await;
println!("LRO completed, response={response:?}");

We will examine the manually_poll_lro() function in the Manually polling a long-running operation section.

You can find the full function below.

Automatically polling a long-running operation

Spoiler: preparing the request is identical to how we started a long-running operation. The difference comes at the end. Instead of sending the request to get the Operation promise:

    .send()
    .await?;

we create a Poller and wait until it is done:

    .poller()
    .until_done()
    .await?;

Let's review the code step-by-step, without spoilers this time. First, we need to bring the Poller trait into scope via a use declaration:

use speech::Poller;

Then we initialize the client and prepare the request as before:

let client = speech::client::Speech::new().await?;
let response = client
    .batch_recognize(format!(
        "projects/{project_id}/locations/global/recognizers/_"
    ))
    .set_files([speech::model::BatchRecognizeFileMetadata::new()
        .set_uri("gs://cloud-samples-data/speech/hello.wav")])
    .set_recognition_output_config(
        speech::model::RecognitionOutputConfig::new()
            .set_inline_response_config(speech::model::InlineOutputConfig::new()),
    )
    .set_processing_strategy(
        speech::model::batch_recognize_request::ProcessingStrategy::DYNAMIC_BATCHING,
    )
    .set_config(
        speech::model::RecognitionConfig::new()
            .set_language_codes(["en-US"])
            .set_model("short")
            .set_auto_decoding_config(speech::model::AutoDetectDecodingConfig::new()),
    )

And then we poll until the operation is completed and print the result:

    .poller()
    .until_done()
    .await?;
println!("LRO completed, response={response:?}");

You can find the full function below.

Polling a long-running operation

While .until_done() is convenient, it omits some information: long-running operations may report partial progress via a "metadata" attribute. If your application requires such information, you need to use the poller directly:

let mut poller = client
    .batch_recognize(/* stuff */)
    /* more stuff */
    .poller();

Then use the poller in a loop:

while let Some(p) = poller.poll().await {
    match p {
        speech::PollingResult::Completed(r) => {
            println!("LRO completed, response={r:?}");
        }
        speech::PollingResult::InProgress(m) => {
            println!("LRO in progress, metadata={m:?}");
        }
        speech::PollingResult::PollingError(e) => {
            println!("Transient error polling the LRO: {e}");
        }
    }
    tokio::time::sleep(std::time::Duration::from_millis(500)).await;
}

Note how this loop explicitly waits before polling again. The polling period depends on the specific operation and its payload. You should consult the service documentation and/or experiment with your own data to determine a good value.

The poller uses a policy to determine what polling errors are transient and may resolve themselves. The Configuring polling policies chapter covers this topic in detail.

You can find the full function below.

Manually polling a long-running operation

In general, we recommend you use the previous two approaches in your application. Manually polling a long-running operation can be quite tedious, and it is easy to get the types involved wrong. If you do need to manually poll a long-running operation, this guide will walk you through the required steps. You may want to read the Operation message reference documentation, as some of the fields and types are used below.

Recall that we started the long-running operation using the client:

let mut operation = client
    .batch_recognize(/* stuff */)
    /* more stuff */
    .send()
    .await?;

We are going to start a loop to poll the operation. First we need to check whether the operation completed immediately; this is rare, but it does happen. The done field indicates whether the operation has completed:

let mut operation = operation;
loop {
    if operation.done {
        // ... extract the result, as shown in the next steps ...
    }
    // ... otherwise report progress, wait, and poll again, as shown below ...
}

In most cases, if the operation is done it contains a result. However, the field is optional because the service could return done as true with no result: maybe the operation deletes resources and a successful completion has no return value. In our example, with the Speech-to-Text service, we treat this as an error:

match &operation.result {
    None => {
        return Err("missing result for finished operation".into());
    }
    Some(r) => {
        // ... the result may be an error or a valid response, see below ...
    }
}

Assuming we have a result value, it may be either an error or a valid response: starting a long-running operation successfully does not guarantee that it will complete successfully. We need to check for both. First, check for errors:

longrunning::model::operation::Result::Error(e) => {
    Err(format!("{e:?}").into())
}

The error type is the Status message type. It does not implement the standard Error trait, so you need to convert it into a valid error yourself. You can use ServiceError::from to perform this conversion.

Assuming the result is successful, you need to extract the response type. You can find this type in the documentation for the LRO method, or by reading the service API documentation:

longrunning::model::operation::Result::Response(any) => {
    let response = any.try_into_message::<speech::model::BatchRecognizeResponse>()?;
    Ok(response)
}

Note that extraction of the value may fail if the type does not match what the service sent.
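
If you prefer to handle a mismatched payload explicitly rather than propagating the error with ?, a minimal sketch using the same any value could look like this:

// Illustrative explicit handling of a payload type mismatch.
match any.try_into_message::<speech::model::BatchRecognizeResponse>() {
    Ok(response) => Ok(response),
    Err(e) => Err(format!("the operation result is not a BatchRecognizeResponse: {e}").into()),
}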

All types in Google Cloud may add fields and branches in the future. While this is unlikely for a common type such as Operation, it happens frequently for most service messages. The Google Cloud Client Libraries for Rust mark all structs and enums as #[non_exhaustive] to signal that such changes are possible. This means you must handle the unexpected case:

_ => Err(format!("unexpected result branch {r:?}").into()),

If the operation has not completed, then it may contain some metadata. Some services just include initial information about the request, while other services include partial progress reports. You can choose to extract and report this metadata:

if let Some(any) = &operation.metadata {
    let metadata = any.try_into_message::<speech::model::OperationMetadata>()?;
    println!("LRO in progress, metadata={metadata:?}");
}

As the operation has not completed, you need to wait before polling again. Consider adjusting the polling period, maybe using a form of truncated exponential backoff. In this example we simply poll every 500ms:

tokio::time::sleep(std::time::Duration::from_millis(500)).await;

And then poll the operation to get its new status:

if let Ok(attempt) = client.get_operation(operation.name.clone()).send().await {
    operation = attempt;
}

For simplicity, we have chosen to ignore all errors. In your application you may choose to treat only a subset of the errors as non-recoverable, and you may want to limit the number of polling attempts if they keep failing.
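For example, a minimal sketch of a polling step that gives up after a few consecutive polling errors could look like the following. The attempt limit and the 500ms delay are arbitrary values chosen for illustration, and the fragment assumes it runs inside a function like manually_poll_lro() shown below:

let mut polling_errors = 0;
while !operation.done {
    tokio::time::sleep(std::time::Duration::from_millis(500)).await;
    match client.get_operation(operation.name.clone()).send().await {
        Ok(attempt) => {
            // A successful poll refreshes the local view of the operation.
            operation = attempt;
            polling_errors = 0;
        }
        Err(e) => {
            // Give up after a few consecutive polling errors.
            polling_errors += 1;
            if polling_errors >= 5 {
                return Err(format!("too many polling errors, last error: {e}").into());
            }
        }
    }
}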

You can find the full function below.

Starting a long-running operation: complete code

pub async fn start(project_id: &str) -> crate::Result<()> {
    let client = speech::client::Speech::new().await?;
    let operation = client
        .batch_recognize(format!(
            "projects/{project_id}/locations/global/recognizers/_"
        ))
        .set_files([speech::model::BatchRecognizeFileMetadata::new()
            .set_uri("gs://cloud-samples-data/speech/hello.wav")])
        .set_recognition_output_config(
            speech::model::RecognitionOutputConfig::new()
                .set_inline_response_config(speech::model::InlineOutputConfig::new()),
        )
        .set_processing_strategy(
            speech::model::batch_recognize_request::ProcessingStrategy::DYNAMIC_BATCHING,
        )
        .set_config(
            speech::model::RecognitionConfig::new()
                .set_language_codes(["en-US"])
                .set_model("short")
                .set_auto_decoding_config(speech::model::AutoDetectDecodingConfig::new()),
        )
        .send()
        .await?;
    println!("LRO started, response={operation:?}");
    let response = manually_poll_lro(client, operation).await;
    println!("LRO completed, response={response:?}");
    Ok(())
}

Automatically polling a long-running operation: complete code

use google_cloud_speech_v2 as speech;

pub async fn automatic(project_id: &str) -> crate::Result<()> {
    use speech::Poller;
    let client = speech::client::Speech::new().await?;
    let response = client
        .batch_recognize(format!(
            "projects/{project_id}/locations/global/recognizers/_"
        ))
        .set_files([speech::model::BatchRecognizeFileMetadata::new()
            .set_uri("gs://cloud-samples-data/speech/hello.wav")])
        .set_recognition_output_config(
            speech::model::RecognitionOutputConfig::new()
                .set_inline_response_config(speech::model::InlineOutputConfig::new()),
        )
        .set_processing_strategy(
            speech::model::batch_recognize_request::ProcessingStrategy::DYNAMIC_BATCHING,
        )
        .set_config(
            speech::model::RecognitionConfig::new()
                .set_language_codes(["en-US"])
                .set_model("short")
                .set_auto_decoding_config(speech::model::AutoDetectDecodingConfig::new()),
        )
        .poller()
        .until_done()
        .await?;
    println!("LRO completed, response={response:?}");
    Ok(())
}

Polling a long-running operation: complete code

use google_cloud_speech_v2 as speech;

pub async fn polling(project_id: &str) -> crate::Result<()> {
    use speech::Poller;
    let client = speech::client::Speech::new().await?;
    let mut poller = client
        .batch_recognize(format!(
            "projects/{project_id}/locations/global/recognizers/_"
        ))
        .set_files([speech::model::BatchRecognizeFileMetadata::new()
            .set_uri("gs://cloud-samples-data/speech/hello.wav")])
        .set_recognition_output_config(
            speech::model::RecognitionOutputConfig::new()
                .set_inline_response_config(speech::model::InlineOutputConfig::new()),
        )
        .set_processing_strategy(
            speech::model::batch_recognize_request::ProcessingStrategy::DYNAMIC_BATCHING,
        )
        .set_config(
            speech::model::RecognitionConfig::new()
                .set_language_codes(["en-US"])
                .set_model("short")
                .set_auto_decoding_config(speech::model::AutoDetectDecodingConfig::new()),
        )
        .poller();
    while let Some(p) = poller.poll().await {
        match p {
            speech::PollingResult::Completed(r) => {
                println!("LRO completed, response={r:?}");
            }
            speech::PollingResult::InProgress(m) => {
                println!("LRO in progress, metadata={m:?}");
            }
            speech::PollingResult::PollingError(e) => {
                println!("Transient error polling the LRO: {e}");
            }
        }
        tokio::time::sleep(std::time::Duration::from_millis(500)).await;
    }
    Ok(())
}

Manually polling a long-running operation: complete code

use google_cloud_longrunning as longrunning;
use google_cloud_speech_v2 as speech;

pub async fn manually_poll_lro(
    client: speech::client::Speech,
    operation: longrunning::model::Operation,
) -> crate::Result<speech::model::BatchRecognizeResponse> {
    let mut operation = operation;
    loop {
        if operation.done {
            match &operation.result {
                None => {
                    return Err("missing result for finished operation".into());
                }
                Some(r) => {
                    return match r {
                        longrunning::model::operation::Result::Error(e) => {
                            Err(format!("{e:?}").into())
                        }
                        longrunning::model::operation::Result::Response(any) => {
                            let response =
                                any.try_into_message::<speech::model::BatchRecognizeResponse>()?;
                            Ok(response)
                        }
                        _ => Err(format!("unexpected result branch {r:?}").into()),
                    };
                }
            }
        }
        if let Some(any) = &operation.metadata {
            let metadata = any.try_into_message::<speech::model::OperationMetadata>()?;
            println!("LRO in progress, metadata={metadata:?}");
        }
        tokio::time::sleep(std::time::Duration::from_millis(500)).await;
        if let Ok(attempt) = client.get_operation(operation.name.clone()).send().await {
            operation = attempt;
        }
    }
}

What's Next

Configuring polling policies

The Google Cloud Client Libraries for Rust provide helper functions to simplify waiting for and monitoring the progress of long-running operations (LROs). These helpers use policies to configure the polling frequency and to determine which polling errors are transient and may be ignored until the next polling event.

This guide will walk you through the configuration of these policies for all the long-running operations started by a client, or just for one specific request.

There are two different policies controlling the behavior of the LRO loops:

  • The polling backoff policy controls how long the loop waits before polling the status of an LRO that is still in progress.
  • The polling error policy controls what to do on a polling error. Some polling errors are unrecoverable, indicating that the operation was aborted or that the caller lacks the permissions needed to check the status of the LRO. Other polling errors are transient, indicating a temporary problem in the client network or the service.

Each one of these policies can be set independently, and each one can be set for all the LROs started on a client or changed for just one request.
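As a rough sketch of how the two policies fit together, both can be set on the same client. The sketch below assumes the speech alias for the google-cloud-speech-v2 crate used elsewhere in this guide, the gax types introduced in the following sections, and that the two ClientConfig setters can be chained as is typical for builders:

use google_cloud_gax::exponential_backoff::ExponentialBackoffBuilder;
use google_cloud_gax::options::ClientConfig;
use google_cloud_gax::polling_error_policy::Aip194Strict;
use google_cloud_gax::polling_error_policy::PollingErrorPolicyExt;
use google_cloud_speech_v2 as speech;
use std::time::Duration;

// Client options apply to every LRO started with this client, unless
// overridden on a specific request.
let client = speech::client::Speech::new_with_config(
    ClientConfig::default()
        .set_polling_backoff_policy(
            ExponentialBackoffBuilder::new()
                .with_initial_delay(Duration::from_millis(250))
                .with_maximum_delay(Duration::from_secs(10))
                .build()?,
        )
        .set_polling_error_policy(
            Aip194Strict
                .with_attempt_limit(100)
                .with_time_limit(Duration::from_secs(300)),
        ),
)
.await?;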

Prerequisites

The guide uses the Speech-To-Text V2 service to keep the code snippets concrete. The same ideas work for any other service using LROs.

We recommend you first follow one of the service guides, such as Transcribe speech to text by using the command line. These guides will cover critical topics such as ensuring your project has the API enabled, your account has the right permissions, and how to set up billing for your project (if needed). Skipping the service guides may result in problems that are hard to diagnose.

Dependencies

As is usual with Rust, you must declare the dependency in your Cargo.toml file. We use:

google-cloud-speech-v2 = { version = "0.2", path = "../../src/generated/cloud/speech/v2" }
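The path dependency above is specific to the client libraries repository. In your own application you would typically depend on the published crates instead; for example (crate versions on crates.io may differ):

cargo add google-cloud-speech-v2
cargo add google-cloud-gax

The google-cloud-gax crate provides the polling policy and backoff types used in the rest of this guide.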

Configuring the polling frequency for all requests in a client

If you are planning to use the same polling backoff policy for all (or even most) requests with the same client then consider setting this as a client option.

To configure the polling frequency you use a type implementing the PollingBackoffPolicy trait. The client libraries provide ExponentialBackoff:

use google_cloud_gax::exponential_backoff::ExponentialBackoffBuilder;
use google_cloud_gax::options::ClientConfig;

Then initialize the client with the configuration you want:

let client = speech::client::Speech::new_with_config(
    ClientConfig::default().set_polling_backoff_policy(
        ExponentialBackoffBuilder::new()
            .with_initial_delay(Duration::from_millis(250))
            .with_maximum_delay(Duration::from_secs(10))
            .build()?,
    ),
)
.await?;

Unless you override the policy with a per-request setting, this policy will be in effect for any long-running operation started with the client. In this example, if you make a call such as:

let mut operation = client
    .batch_recognize(/* stuff */)
    /* more stuff */
    .send()
    .await?;

The client library will first wait 500ms after the first polling attempt, then 1,000ms (1s) before the second attempt; subsequent attempts will wait 2s, 4s, and 8s, and all attempts after that will wait 10s.
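The exact first delay depends on how the gax backoff implementation applies the configured initial delay, but the shape of the schedule is simply a capped doubling sequence. The standalone sketch below is purely illustrative (it is not the library's implementation) and just prints such a schedule:

use std::time::Duration;

// Print a capped exponential backoff schedule: double the delay after
// each attempt until it reaches the configured maximum.
fn print_schedule(first_delay: Duration, maximum: Duration, attempts: u32) {
    let mut delay = first_delay;
    for attempt in 1..=attempts {
        println!("wait before poll #{attempt}: {delay:?}");
        delay = std::cmp::min(delay * 2, maximum);
    }
}

fn main() {
    // With a 500ms first wait and a 10s cap this prints 500ms, 1s, 2s, 4s, 8s, 10s, ...
    print_schedule(Duration::from_millis(500), Duration::from_secs(10), 8);
}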

See below for the complete code.

Configuring the polling frequency for a specific request

As described in the previous section, we need a type implementing the PollingBackoffPolicy trait to configure the polling frequency. We will also use ExponentialBackoff in this example:

use google_cloud_gax::exponential_backoff::ExponentialBackoffBuilder;
use std::time::Duration;

Configuring the request requires bringing a trait into scope:

use google_cloud_gax::options::RequestOptionsBuilder;

You create the request builder as usual:

let response = client
    .batch_recognize(format!(
        "projects/{project_id}/locations/global/recognizers/_"
    ))

And then configure the polling backoff policy:

    .with_polling_backoff_policy(
        ExponentialBackoffBuilder::new()
            .with_initial_delay(Duration::from_millis(250))
            .with_maximum_delay(Duration::from_secs(10))
            .build()?,
    )

You can issue this request as usual. For example:

    .poller()
    .until_done()
    .await?;
println!("LRO completed, response={response:?}");

See below for the complete code.

Configuring the retryable polling errors for all requests in a client

To configure the retryable errors we need to use a type implementing the PollingErrorPolicy trait. The client libraries provide a number of them; a conservative choice is Aip194Strict:

use google_cloud_gax::options::ClientConfig;
use google_cloud_gax::polling_error_policy::Aip194Strict;
use google_cloud_gax::polling_error_policy::PollingErrorPolicyExt;
use std::time::Duration;

If you are planning to use the same polling policy for all (or even most) requests with the same client then consider setting this as a client option.

Initialize the client with the configuration you want:

let client = speech::client::Speech::new_with_config(
    ClientConfig::default().set_polling_error_policy(
        Aip194Strict
            .with_attempt_limit(100)
            .with_time_limit(Duration::from_secs(300)),
    ),
)
.await?;

Unless you override the policy with a per-request setting, this policy will be in effect for any long-running operation started with the client. In this example, if you make a call such as:

let mut operation = client
    .batch_recognize(/* stuff */)
    /* more stuff */
    .send()
    .await?;

The client library will only treat UNAVAILABLE (see AIP-194) as a retryable error, and will stop polling after 100 attempts or 300 seconds, whichever comes first.

See below for the complete code.

Configuring the retryable polling errors for a specific request

To configure the retryable errors we need to use a type implementing the PollingErrorPolicy trait. The client libraries provide a number of them; a conservative choice is Aip194Strict:

use google_cloud_gax::polling_error_policy::Aip194Strict;
use google_cloud_gax::polling_error_policy::PollingErrorPolicyExt;
use std::time::Duration;

Configuring the request requires bringing a trait into scope:

use google_cloud_gax::options::RequestOptionsBuilder;

You create the request builder as usual:

let response = client
    .batch_recognize(format!(
        "projects/{project_id}/locations/global/recognizers/_"
    ))

And then configure the polling error policy:

    .with_polling_error_policy(
        Aip194Strict
            .with_attempt_limit(100)
            .with_time_limit(Duration::from_secs(300)),
    )

You can issue this request as usual. For example:

    .poller()
    .until_done()
    .await?;
println!("LRO completed, response={response:?}");

See below for the complete code.

Configuring the polling frequency for all requests in a client: complete code

pub async fn client_backoff(project_id: &str) -> crate::Result<()> {
    use google_cloud_gax::exponential_backoff::ExponentialBackoffBuilder;
    use google_cloud_gax::options::ClientConfig;
    use speech::Poller;
    use std::time::Duration;
    let client = speech::client::Speech::new_with_config(
        ClientConfig::default().set_polling_backoff_policy(
            ExponentialBackoffBuilder::new()
                .with_initial_delay(Duration::from_millis(250))
                .with_maximum_delay(Duration::from_secs(10))
                .build()?,
        ),
    )
    .await?;
    let response = client
        .batch_recognize(format!(
            "projects/{project_id}/locations/global/recognizers/_"
        ))
        .set_files([speech::model::BatchRecognizeFileMetadata::new()
            .set_uri("gs://cloud-samples-data/speech/hello.wav")])
        .set_recognition_output_config(
            speech::model::RecognitionOutputConfig::new()
                .set_inline_response_config(speech::model::InlineOutputConfig::new()),
        )
        .set_processing_strategy(
            speech::model::batch_recognize_request::ProcessingStrategy::DYNAMIC_BATCHING,
        )
        .set_config(
            speech::model::RecognitionConfig::new()
                .set_language_codes(["en-US"])
                .set_model("short")
                .set_auto_decoding_config(speech::model::AutoDetectDecodingConfig::new()),
        )
        .poller()
        .until_done()
        .await?;
    println!("LRO completed, response={response:?}");
    Ok(())
}

Configuring the polling frequency for a specific request: complete code

pub async fn rpc_backoff(project_id: &str) -> crate::Result<()> {
    use google_cloud_gax::exponential_backoff::ExponentialBackoffBuilder;
    use google_cloud_gax::options::RequestOptionsBuilder;
    use speech::Poller;
    use std::time::Duration;
    let client = speech::client::Speech::new().await?;
    let response = client
        .batch_recognize(format!(
            "projects/{project_id}/locations/global/recognizers/_"
        ))
        .with_polling_backoff_policy(
            ExponentialBackoffBuilder::new()
                .with_initial_delay(Duration::from_millis(250))
                .with_maximum_delay(Duration::from_secs(10))
                .build()?,
        )
        .set_files([speech::model::BatchRecognizeFileMetadata::new()
            .set_uri("gs://cloud-samples-data/speech/hello.wav")])
        .set_recognition_output_config(
            speech::model::RecognitionOutputConfig::new()
                .set_inline_response_config(speech::model::InlineOutputConfig::new()),
        )
        .set_processing_strategy(
            speech::model::batch_recognize_request::ProcessingStrategy::DYNAMIC_BATCHING,
        )
        .set_config(
            speech::model::RecognitionConfig::new()
                .set_language_codes(["en-US"])
                .set_model("short")
                .set_auto_decoding_config(speech::model::AutoDetectDecodingConfig::new()),
        )
        .poller()
        .until_done()
        .await?;
    println!("LRO completed, response={response:?}");
    Ok(())
}

Configuring the retryable polling errors for all requests in a client: complete code

pub async fn client_polling_errors(project_id: &str) -> crate::Result<()> {
    use google_cloud_gax::options::ClientConfig;
    use google_cloud_gax::polling_error_policy::Aip194Strict;
    use google_cloud_gax::polling_error_policy::PollingErrorPolicyExt;
    use speech::Poller;
    use std::time::Duration;
    // Only treat UNAVAILABLE (see AIP-194) as a transient polling error, and
    // stop polling after 100 attempts or 300 seconds, whichever comes first.
    let client = speech::client::Speech::new_with_config(
        ClientConfig::default().set_polling_error_policy(
            Aip194Strict
                .with_attempt_limit(100)
                .with_time_limit(Duration::from_secs(300)),
        ),
    )
    .await?;
    let response = client
        .batch_recognize(format!(
            "projects/{project_id}/locations/global/recognizers/_"
        ))
        .set_files([speech::model::BatchRecognizeFileMetadata::new()
            .set_uri("gs://cloud-samples-data/speech/hello.wav")])
        .set_recognition_output_config(
            speech::model::RecognitionOutputConfig::new()
                .set_inline_response_config(speech::model::InlineOutputConfig::new()),
        )
        .set_processing_strategy(
            speech::model::batch_recognize_request::ProcessingStrategy::DYNAMIC_BATCHING,
        )
        .set_config(
            speech::model::RecognitionConfig::new()
                .set_language_codes(["en-US"])
                .set_model("short")
                .set_auto_decoding_config(speech::model::AutoDetectDecodingConfig::new()),
        )
        .poller()
        .until_done()
        .await?;
    println!("LRO completed, response={response:?}");
    Ok(())
}

Configuring the retryable polling errors for a specific request: complete code

pub async fn rpc_polling_errors(project_id: &str) -> crate::Result<()> {
    use google_cloud_gax::options::RequestOptionsBuilder;
    use google_cloud_gax::polling_error_policy::Aip194Strict;
    use google_cloud_gax::polling_error_policy::PollingErrorPolicyExt;
    use speech::Poller;
    use std::time::Duration;
    let client = speech::client::Speech::new().await?;
    let response = client
        .batch_recognize(format!(
            "projects/{project_id}/locations/global/recognizers/_"
        ))
        // Apply the polling error policy to this request only.
        .with_polling_error_policy(
            Aip194Strict
                .with_attempt_limit(100)
                .with_time_limit(Duration::from_secs(300)),
        )
        .set_files([speech::model::BatchRecognizeFileMetadata::new()
            .set_uri("gs://cloud-samples-data/speech/hello.wav")])
        .set_recognition_output_config(
            speech::model::RecognitionOutputConfig::new()
                .set_inline_response_config(speech::model::InlineOutputConfig::new()),
        )
        .set_processing_strategy(
            speech::model::batch_recognize_request::ProcessingStrategy::DYNAMIC_BATCHING,
        )
        .set_config(
            speech::model::RecognitionConfig::new()
                .set_language_codes(["en-US"])
                .set_model("short")
                .set_auto_decoding_config(speech::model::AutoDetectDecodingConfig::new()),
        )
        .poller()
        .until_done()
        .await?;
    println!("LRO completed, response={response:?}");
    Ok(())
}