Working with long-running operations
Occasionally, an API may need to expose a method that takes a significant amount of time to complete. In these situations, it's often a poor user experience to simply block while the task runs. It's usually better to return some kind of promise to the user and allow the user to check back later.
The Google Cloud Client Libraries for Rust provide helpers to work with these long-running operations (LROs). This guide shows you how to start LROs and wait for their completion.
Prerequisites
The guide uses the Speech-to-Text V2 service to keep the code snippets concrete. The same ideas work for any other service that uses LROs.
We recommend that you first follow one of the service guides, such as Transcribe speech to text by using the command line. These guides cover critical topics such as ensuring your project has the API enabled, your account has the right permissions, and how to set up billing for your project (if needed). Skipping the service guides may result in problems that are hard to diagnose.
For complete setup instructions for the Rust libraries, see Setting up your development environment.
Dependencies
Declare Google Cloud dependencies in your Cargo.toml file:
cargo add google-cloud-speech-v2 google-cloud-lro google-cloud-longrunning
You'll also need several tokio features:
cargo add tokio --features full,macros
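The examples in this guide are async functions, so they need to run inside a Tokio runtime. The following is a minimal sketch of an entry point; the Result alias and the GOOGLE_CLOUD_PROJECT environment variable are assumptions made for illustration, not part of the client libraries.

// A minimal sketch of a program entry point for the async examples in this
// guide. The `Result` alias and the environment variable name are
// illustrative assumptions; adapt them to your own application.
type Result<T> = std::result::Result<T, Box<dyn std::error::Error + Send + Sync>>;

#[tokio::main]
async fn main() -> Result<()> {
    // In this sketch the project id comes from the environment.
    let _project_id = std::env::var("GOOGLE_CLOUD_PROJECT")?;
    // Call one of the functions shown later in this guide here, for example
    // `automatic(&_project_id).await?` once it is defined in your crate.
    Ok(())
}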
Starting a long-running operation
To start a long-running operation, you'll initialize a client and then make the RPC. But first, add some use declarations to avoid the long package names:
use google_cloud_longrunning as longrunning;
use google_cloud_speech_v2 as speech;
Now create the client:
use google_cloud_gax::retry_policy::Aip194Strict;
use google_cloud_gax::retry_policy::RetryPolicyExt;
use std::time::Duration;

let client = speech::client::Speech::builder()
    .with_retry_policy(
        Aip194Strict
            .with_attempt_limit(5)
            .with_time_limit(Duration::from_secs(30)),
    )
    .build()
    .await?;
You'll use batch recognize for this example. While this is designed for long audio files, it works well with small files too.
In the Rust client libraries, each request is represented by a method that returns a request builder. First, call the right method on the client to create the request builder. You'll use the default recognizer (_) in the global region.
let operation = client
.batch_recognize()
.set_recognizer(format!(
"projects/{project_id}/locations/global/recognizers/_"
))
Then initialize the request to use a publicly available audio file:
.set_files([speech::model::BatchRecognizeFileMetadata::new()
.set_uri("gs://cloud-samples-data/speech/hello.wav")])
Configure the request to return the transcripts inline:
.set_recognition_output_config(
speech::model::RecognitionOutputConfig::new()
.set_inline_response_config(speech::model::InlineOutputConfig::new()),
)
Then configure the service to transcribe to US English, using the short model and some other default configuration:
.set_processing_strategy(
speech::model::batch_recognize_request::ProcessingStrategy::DynamicBatching,
)
.set_config(
speech::model::RecognitionConfig::new()
.set_language_codes(["en-US"])
.set_model("short")
.set_auto_decoding_config(speech::model::AutoDetectDecodingConfig::new()),
)
Make the request and wait for an Operation to be returned. This Operation acts as a promise for the result of the long-running request:
.send()
.await?;
Finally, poll the promise until it completes:
let response = manually_poll_lro(client, operation).await;
println!("LRO completed, response={response:?}");
You'll examine the manually_poll_lro() function in the Manually polling a long-running operation section.
You can find the full function below.
Automatically polling a long-running operation
To configure automatic polling, you prepare the request just like you did to start a long-running operation. The difference comes at the end, where instead of sending the request to get the Operation promise:
.send()
.await?;
... you create a Poller and wait until it is done:
.poller()
.until_done()
.await?;
Let's review the code step-by-step.
First, bring the Poller trait into scope with a use declaration:
use google_cloud_lro::Poller;
Then initialize the client and prepare the request as before:
let response = client
.batch_recognize()
.set_recognizer(format!(
"projects/{project_id}/locations/global/recognizers/_"
))
.set_files([speech::model::BatchRecognizeFileMetadata::new()
.set_uri("gs://cloud-samples-data/speech/hello.wav")])
.set_recognition_output_config(
speech::model::RecognitionOutputConfig::new()
.set_inline_response_config(speech::model::InlineOutputConfig::new()),
)
.set_processing_strategy(
speech::model::batch_recognize_request::ProcessingStrategy::DynamicBatching,
)
.set_config(
speech::model::RecognitionConfig::new()
.set_language_codes(["en-US"])
.set_model("short")
.set_auto_decoding_config(speech::model::AutoDetectDecodingConfig::new()),
)
And then poll until the operation is completed and print the result:
.poller()
.until_done()
.await?;
println!("LRO completed, response={response:?}");
You can find the full function below.
Polling a long-running operation
While .until_done() is convenient, it omits some information: long-running operations may report partial progress via a "metadata" attribute. If your application requires such information, you need to use the poller directly:
let mut poller = client
.batch_recognize(/* stuff */)
/* more stuff */
.poller();
Then use the poller in a loop:
while let Some(p) = poller.poll().await {
    match p {
        PollingResult::Completed(r) => {
            println!("LRO completed, response={r:?}");
        }
        PollingResult::InProgress(m) => {
            println!("LRO in progress, metadata={m:?}");
        }
        PollingResult::PollingError(e) => {
            println!("Transient error polling the LRO: {e}");
        }
    }
    tokio::time::sleep(std::time::Duration::from_millis(500)).await;
}
Note how this loop explicitly waits before polling again. The polling period depends on the specific operation and its payload. You should consult the service documentation and/or experiment with your own data to determine a good value.
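If a fixed period does not fit your workload, a simple alternative is to grow the delay between polls. The following sketch doubles the delay up to a cap; the initial 500ms and the 30 second cap are arbitrary values chosen for illustration, not a recommendation for any particular service.

// One possible backoff schedule for the polling loop above: start at 500ms
// and double the delay after every poll, up to a 30 second cap.
use std::time::Duration;

let mut delay = Duration::from_millis(500);
let max_delay = Duration::from_secs(30);
while let Some(p) = poller.poll().await {
    // ... handle `p` exactly as in the loop above ...
    tokio::time::sleep(delay).await;
    delay = std::cmp::min(delay * 2, max_delay);
}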
The poller uses a policy to determine what polling errors are transient and may resolve themselves. The Configuring polling policies chapter covers this topic in detail.
You can find the full function below.
Manually polling a long-running operation
In general, we recommend that you use the previous two approaches in your application. Alternatively, you can manually poll a long-running operation, but this can be quite tedious, and it is easy to get the types wrong. If you do need to manually poll a long-running operation, this section walks you through the required steps. You may want to read the Operation message reference documentation, as some of the fields and types are used below.
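As a quick orientation, these are the Operation fields the rest of this section uses. The helper below is illustrative only, not part of the client library; it touches only fields that appear in the code that follows.

// Illustrative only: print the Operation fields used in this walkthrough.
fn describe(operation: &longrunning::model::Operation) {
    // The operation name, used later with `get_operation()` to poll for status.
    println!("name={}", operation.name);
    // `done` flips to true once the operation has finished.
    println!("done={}", operation.done);
    // Optional, service-specific progress information.
    println!("has metadata: {}", operation.metadata.is_some());
    // Present once the operation is done: either an error or a response.
    println!("has result: {}", operation.result.is_some());
}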
Recall that you started the long-running operation using the client:
let mut operation = client
.batch_recognize(/* stuff */)
/* more stuff */
.send()
.await?;
You are going to start a loop to poll the operation, and you need to check if the operation completed immediately (this is rare but does happen). The done field indicates if the operation completed:
let mut operation = operation;
loop {
    if operation.done {
        /* extract the result, see below */
    }
    /* report progress, wait, and poll again, see below */
}
In most cases, if the operation is done it contains a result. However, the field is optional because the service could return done as true and no result: maybe the operation deletes resources and a successful completion has no return value. In this example using the Speech-to-Text service, you can treat this as an error:
match &operation.result {
    None => {
        return Err("missing result for finished operation".into());
    }
    Some(r) => {
        /* handle the error or the response, see below */
    }
}
Starting a long-running operation successfully does not guarantee that it will complete successfully. The result may be an error or a valid response. You need to check for both. First check for errors:
Some(r) => {
    return match r {
        longrunning::model::operation::Result::Error(e) => {
            Err(format!("{e:?}").into())
        }
        /* handle the response and unexpected cases, see below */
    };
}
The error type is a Status message type. This does not implement the standard Error trait, so you need to manually convert it to a valid error. You can use Error::service to perform this conversion.
Assuming the result is successful, you need to extract the response type. You can find this type in the documentation for the LRO method, or by reading the service API documentation:
longrunning::model::operation::Result::Response(any) => {
    let response = any.to_msg::<speech::model::BatchRecognizeResponse>()?;
    Ok(response)
}
Note that extraction of the value may fail if the type does not match what the service sent.
All types in Google Cloud may add fields and branches in the future. While this is unlikely for a common type such as Operation, it happens frequently for most service messages. The Google Cloud Client Libraries for Rust mark all structs and enums as #[non_exhaustive] to signal that such changes are possible. You must therefore handle the unexpected case:
_ => Err(format!("unexpected result branch {r:?}").into()),
If the operation has not completed, then it may contain some metadata. Some services just include initial information about the request, while other services include partial progress reports. You can choose to extract and report this metadata:
if let Some(any) = &operation.metadata {
    let metadata = any.to_msg::<speech::model::OperationMetadata>()?;
    println!("LRO in progress, metadata={metadata:?}");
}
As the operation has not completed, you need to wait before polling again. Consider adjusting the polling period, maybe using a form of truncated exponential backoff. This example simply polls every 500ms:
tokio::time::sleep(std::time::Duration::from_millis(500)).await;
Then you can poll the operation to get its new status:
if let Ok(attempt) = client
    .get_operation()
    .set_name(&operation.name)
    .send()
    .await
{
    operation = attempt;
}
For simplicity, the example treats every polling error as recoverable and retries indefinitely. In your application you may choose to treat some of these errors as non-recoverable, and you may want to limit the number of polling attempts before giving up.
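For example, a variant of the manual polling loop might cap the number of consecutive polling failures before giving up. The sketch below is illustrative only and is not part of the guide's sample code: the function name, the MAX_POLL_FAILURES constant, and the error messages are arbitrary choices, and it reuses the speech and longrunning aliases and the crate::Result type shown throughout this guide.
pub async fn poll_with_limit(
    client: speech::client::Speech,
    mut operation: longrunning::model::Operation,
) -> crate::Result<speech::model::BatchRecognizeResponse> {
    // Illustrative limit on consecutive polling failures; tune for your workload.
    const MAX_POLL_FAILURES: u32 = 60;
    let mut failures = 0;
    while !operation.done {
        tokio::time::sleep(std::time::Duration::from_millis(500)).await;
        match client.get_operation().set_name(&operation.name).send().await {
            Ok(attempt) => {
                failures = 0;
                operation = attempt;
            }
            Err(e) => {
                // Count consecutive polling failures; a real application might
                // also abort immediately on errors it considers non-recoverable.
                failures += 1;
                if failures >= MAX_POLL_FAILURES {
                    return Err(format!("too many polling failures, last error: {e}").into());
                }
            }
        }
    }
    // The operation finished; unpack the result as in manually_poll_lro().
    match &operation.result {
        Some(longrunning::model::operation::Result::Response(any)) => {
            Ok(any.to_msg::<speech::model::BatchRecognizeResponse>()?)
        }
        Some(longrunning::model::operation::Result::Error(e)) => Err(format!("{e:?}").into()),
        _ => Err("missing or unexpected result for finished operation".into()),
    }
}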
You can find the full function below.
What's next
- To learn about customizing error handling and backoff periods for LROs, see Configuring polling policies.
- To learn how to simulate LROs in your unit tests, see How to write tests for long-running operations.
Starting a long-running operation: complete code
pub async fn start(project_id: &str) -> crate::Result<()> {
use google_cloud_gax::retry_policy::Aip194Strict;
use google_cloud_gax::retry_policy::RetryPolicyExt;
use std::time::Duration;
let client = speech::client::Speech::builder()
.with_retry_policy(
Aip194Strict
.with_attempt_limit(5)
.with_time_limit(Duration::from_secs(30)),
)
.build()
.await?;
let operation = client
.batch_recognize()
.set_recognizer(format!(
"projects/{project_id}/locations/global/recognizers/_"
))
.set_files([speech::model::BatchRecognizeFileMetadata::new()
.set_uri("gs://cloud-samples-data/speech/hello.wav")])
.set_recognition_output_config(
speech::model::RecognitionOutputConfig::new()
.set_inline_response_config(speech::model::InlineOutputConfig::new()),
)
.set_processing_strategy(
speech::model::batch_recognize_request::ProcessingStrategy::DynamicBatching,
)
.set_config(
speech::model::RecognitionConfig::new()
.set_language_codes(["en-US"])
.set_model("short")
.set_auto_decoding_config(speech::model::AutoDetectDecodingConfig::new()),
)
.send()
.await?;
println!("LRO started, response={operation:?}");
let response = manually_poll_lro(client, operation).await;
println!("LRO completed, response={response:?}");
Ok(())
}
Automatically polling a long-running operation: complete code
pub async fn automatic(project_id: &str) -> crate::Result<()> {
use google_cloud_gax::retry_policy::Aip194Strict;
use google_cloud_gax::retry_policy::RetryPolicyExt;
use std::time::Duration;
use google_cloud_lro::Poller;
let client = speech::client::Speech::builder()
.with_retry_policy(
Aip194Strict
.with_attempt_limit(5)
.with_time_limit(Duration::from_secs(30)),
)
.build()
.await?;
let response = client
.batch_recognize()
.set_recognizer(format!(
"projects/{project_id}/locations/global/recognizers/_"
))
.set_files([speech::model::BatchRecognizeFileMetadata::new()
.set_uri("gs://cloud-samples-data/speech/hello.wav")])
.set_recognition_output_config(
speech::model::RecognitionOutputConfig::new()
.set_inline_response_config(speech::model::InlineOutputConfig::new()),
)
.set_processing_strategy(
speech::model::batch_recognize_request::ProcessingStrategy::DynamicBatching,
)
.set_config(
speech::model::RecognitionConfig::new()
.set_language_codes(["en-US"])
.set_model("short")
.set_auto_decoding_config(speech::model::AutoDetectDecodingConfig::new()),
)
.poller()
.until_done()
.await?;
println!("LRO completed, response={response:?}");
Ok(())
}
Polling a long-running operation: complete code
pub async fn polling(project_id: &str) -> crate::Result<()> {
use google_cloud_gax::retry_policy::Aip194Strict;
use google_cloud_gax::retry_policy::RetryPolicyExt;
use std::time::Duration;
use google_cloud_lro::{Poller, PollingResult};
let client = speech::client::Speech::builder()
.with_retry_policy(
Aip194Strict
.with_attempt_limit(5)
.with_time_limit(Duration::from_secs(30)),
)
.build()
.await?;
let mut poller = client
.batch_recognize()
.set_recognizer(format!(
"projects/{project_id}/locations/global/recognizers/_"
))
.set_files([speech::model::BatchRecognizeFileMetadata::new()
.set_uri("gs://cloud-samples-data/speech/hello.wav")])
.set_recognition_output_config(
speech::model::RecognitionOutputConfig::new()
.set_inline_response_config(speech::model::InlineOutputConfig::new()),
)
.set_processing_strategy(
speech::model::batch_recognize_request::ProcessingStrategy::DynamicBatching,
)
.set_config(
speech::model::RecognitionConfig::new()
.set_language_codes(["en-US"])
.set_model("short")
.set_auto_decoding_config(speech::model::AutoDetectDecodingConfig::new()),
)
.poller();
while let Some(p) = poller.poll().await {
match p {
PollingResult::Completed(r) => {
println!("LRO completed, response={r:?}");
}
PollingResult::InProgress(m) => {
println!("LRO in progress, metadata={m:?}");
}
PollingResult::PollingError(e) => {
println!("Transient error polling the LRO: {e}");
}
}
tokio::time::sleep(std::time::Duration::from_millis(500)).await;
}
Ok(())
}
Manually polling a long-running operation: complete code
pub async fn manually_poll_lro(
client: speech::client::Speech,
operation: longrunning::model::Operation,
) -> crate::Result<speech::model::BatchRecognizeResponse> {
let mut operation = operation;
loop {
if operation.done {
match &operation.result {
None => {
return Err("missing result for finished operation".into());
}
Some(r) => {
return match r {
longrunning::model::operation::Result::Error(e) => {
Err(format!("{e:?}").into())
}
longrunning::model::operation::Result::Response(any) => {
let response = any.to_msg::<speech::model::BatchRecognizeResponse>()?;
Ok(response)
}
_ => Err(format!("unexpected result branch {r:?}").into()),
};
}
}
}
if let Some(any) = &operation.metadata {
let metadata = any.to_msg::<speech::model::OperationMetadata>()?;
println!("LRO in progress, metadata={metadata:?}");
}
tokio::time::sleep(std::time::Duration::from_millis(500)).await;
if let Ok(attempt) = client
.get_operation()
.set_name(&operation.name)
.send()
.await
{
operation = attempt;
}
}
}
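All of the examples in this guide assume a crate-level Result alias and a tokio runtime. As a reference point only, a minimal driver might look like the sketch below; the alias definition, the GOOGLE_CLOUD_PROJECT environment variable, and the choice to run the automatic() variant are assumptions for illustration, not part of the guide's sample.
// Hypothetical error alias; the guide's sample crate may define its own.
pub type Result<T> = std::result::Result<T, Box<dyn std::error::Error>>;

#[tokio::main]
async fn main() -> Result<()> {
    // Read the project ID from the environment (an illustrative choice).
    let project_id = std::env::var("GOOGLE_CLOUD_PROJECT")?;
    // Run whichever variant you want to try, for example the automatic poller:
    automatic(&project_id).await?;
    Ok(())
}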