Working with long-running operations
Occasionally, an API may need to expose a method that takes a significant amount of time to complete. In these situations, it is often a poor user experience to simply block while the task runs; rather, it is better to return some kind of promise to the user and allow the user to check back in later.
The Google Cloud Client Libraries for Rust provide helpers to work with these long-running operations (LROs). This guide will show you how to start LROs and wait for their completion.
Prerequisites
The guide uses the Speech-to-Text V2 service to keep the code snippets concrete. The same ideas work for any other service using LROs.
We recommend you first follow one of the service guides, such as Transcribe speech to text by using the command line. These guides cover critical topics such as ensuring the API is enabled for your project, that your account has the right permissions, and that billing is set up for your project (if needed). Skipping the service guides may result in problems that are hard to diagnose.
Dependencies
As usual with Rust, you must declare the dependency in your Cargo.toml file. We use:
google-cloud-speech-v2 = { version = "0.2", path = "../../src/generated/cloud/speech/v2" }
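The path shown above points into the client library repository itself. If you are building your own project, you would normally depend on the published crate instead. A minimal sketch of the [dependencies] section, with illustrative version numbers:

```toml
[dependencies]
# Published crate instead of the in-repo path dependency; versions are illustrative.
google-cloud-speech-v2 = "0.2"
# The polling examples in this guide also use tokio::time::sleep.
tokio = { version = "1", features = ["full"] }
```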
Starting a long-running operation
To start a long-running operation, initialize a client as usual and then make the RPC. First, add some use declarations to avoid spelling out the long package names:
use google_cloud_longrunning as longrunning;
use google_cloud_speech_v2 as speech;
Now create the client:
let client = speech::client::Speech::new().await?;
We will use batch recognize in this example. While this is designed for long audio files, it works well with small files too.
In Rust, each request is represented by a method that returns a request builder.
First, call the right method on the client to create the request builder. We will use the default recognizer (_) in the global region.
let operation = client
.batch_recognize(format!(
"projects/{project_id}/locations/global/recognizers/_"
))
Then initialize the request to use a publicly available audio file:
.set_files([speech::model::BatchRecognizeFileMetadata::new()
.set_uri("gs://cloud-samples-data/speech/hello.wav")])
Configure the request to return the transcripts inline:
.set_recognition_output_config(
speech::model::RecognitionOutputConfig::new()
.set_inline_response_config(speech::model::InlineOutputConfig::new()),
)
Then configure the service to transcribe to US English, using the short model and some other default configuration:
.set_processing_strategy(
speech::model::batch_recognize_request::ProcessingStrategy::DYNAMIC_BATCHING,
)
.set_config(
speech::model::RecognitionConfig::new()
.set_language_codes(["en-US"])
.set_model("short")
.set_auto_decoding_config(speech::model::AutoDetectDecodingConfig::new()),
)
Then make the request and wait for the Operation to be returned. This Operation acts as the promise for the result of the long-running request:
.send()
.await?;
Finally, we need to poll this promise until it completes:
let response = manually_poll_lro(client, operation).await;
println!("LRO completed, response={response:?}");
We will examine the manually_poll_lro() function in the Manually polling a long-running operation section.
You can find the full function below.
Automatically polling a long-running operation
Preparing the request is identical to how we started a long-running operation. The difference comes at the end: instead of sending the request to get the Operation promise:
.send()
.await?;
we create a Poller
and wait until it is done:
.poller()
.until_done()
.await?;
Let's review the code step by step. First, we need to bring the Poller trait into scope via a use declaration:
use speech::Poller;
Then we initialize the client and prepare the request as before:
let client = speech::client::Speech::new().await?;
let response = client
.batch_recognize(format!(
"projects/{project_id}/locations/global/recognizers/_"
))
.set_files([speech::model::BatchRecognizeFileMetadata::new()
.set_uri("gs://cloud-samples-data/speech/hello.wav")])
.set_recognition_output_config(
speech::model::RecognitionOutputConfig::new()
.set_inline_response_config(speech::model::InlineOutputConfig::new()),
)
.set_processing_strategy(
speech::model::batch_recognize_request::ProcessingStrategy::DYNAMIC_BATCHING,
)
.set_config(
speech::model::RecognitionConfig::new()
.set_language_codes(["en-US"])
.set_model("short")
.set_auto_decoding_config(speech::model::AutoDetectDecodingConfig::new()),
)
And then we poll until the operation is completed and print the result:
.poller()
.until_done()
.await?;
println!("LRO completed, response={response:?}");
You can find the full function below.
Polling a long-running operation
While .until_done()
is convenient, it omits some information: long-running
operations may report partial progress via a "metadata" attribute. If your
application requires such information, you need to use the poller directly:
let mut poller = client
.batch_recognize(/* stuff */)
/* more stuff */
.poller();
Then use the poller in a loop:
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// https://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use google_cloud_longrunning as longrunning;
use google_cloud_speech_v2 as speech;
pub async fn start(project_id: &str) -> crate::Result<()> {
let client = speech::client::Speech::new().await?;
let operation = client
.batch_recognize(format!(
"projects/{project_id}/locations/global/recognizers/_"
))
.set_files([speech::model::BatchRecognizeFileMetadata::new()
.set_uri("gs://cloud-samples-data/speech/hello.wav")])
.set_recognition_output_config(
speech::model::RecognitionOutputConfig::new()
.set_inline_response_config(speech::model::InlineOutputConfig::new()),
)
.set_processing_strategy(
speech::model::batch_recognize_request::ProcessingStrategy::DYNAMIC_BATCHING,
)
.set_config(
speech::model::RecognitionConfig::new()
.set_language_codes(["en-US"])
.set_model("short")
.set_auto_decoding_config(speech::model::AutoDetectDecodingConfig::new()),
)
.send()
.await?;
println!("LRO started, response={operation:?}");
let response = manually_poll_lro(client, operation).await;
println!("LRO completed, response={response:?}");
Ok(())
}
pub async fn automatic(project_id: &str) -> crate::Result<()> {
use speech::Poller;
let client = speech::client::Speech::new().await?;
let response = client
.batch_recognize(format!(
"projects/{project_id}/locations/global/recognizers/_"
))
.set_files([speech::model::BatchRecognizeFileMetadata::new()
.set_uri("gs://cloud-samples-data/speech/hello.wav")])
.set_recognition_output_config(
speech::model::RecognitionOutputConfig::new()
.set_inline_response_config(speech::model::InlineOutputConfig::new()),
)
.set_processing_strategy(
speech::model::batch_recognize_request::ProcessingStrategy::DYNAMIC_BATCHING,
)
.set_config(
speech::model::RecognitionConfig::new()
.set_language_codes(["en-US"])
.set_model("short")
.set_auto_decoding_config(speech::model::AutoDetectDecodingConfig::new()),
)
.poller()
.until_done()
.await?;
println!("LRO completed, response={response:?}");
Ok(())
}
pub async fn polling(project_id: &str) -> crate::Result<()> {
use speech::Poller;
let client = speech::client::Speech::new().await?;
let mut poller = client
.batch_recognize(format!(
"projects/{project_id}/locations/global/recognizers/_"
))
.set_files([speech::model::BatchRecognizeFileMetadata::new()
.set_uri("gs://cloud-samples-data/speech/hello.wav")])
.set_recognition_output_config(
speech::model::RecognitionOutputConfig::new()
.set_inline_response_config(speech::model::InlineOutputConfig::new()),
)
.set_processing_strategy(
speech::model::batch_recognize_request::ProcessingStrategy::DYNAMIC_BATCHING,
)
.set_config(
speech::model::RecognitionConfig::new()
.set_language_codes(["en-US"])
.set_model("short")
.set_auto_decoding_config(speech::model::AutoDetectDecodingConfig::new()),
)
.poller();
while let Some(p) = poller.poll().await {
match p {
speech::PollingResult::Completed(r) => {
println!("LRO completed, response={r:?}");
}
speech::PollingResult::InProgress(m) => {
println!("LRO in progress, metadata={m:?}");
}
speech::PollingResult::PollingError(e) => {
println!("Transient error polling the LRO: {e}");
}
}
tokio::time::sleep(std::time::Duration::from_millis(500)).await;
}
Ok(())
}
pub async fn manually_poll_lro(
client: speech::client::Speech,
operation: longrunning::model::Operation,
) -> crate::Result<speech::model::BatchRecognizeResponse> {
let mut operation = operation;
loop {
if operation.done {
match &operation.result {
None => {
return Err("missing result for finished operation".into());
}
Some(r) => {
return match r {
longrunning::model::operation::Result::Error(e) => {
Err(format!("{e:?}").into())
}
longrunning::model::operation::Result::Response(any) => {
let response =
any.try_into_message::<speech::model::BatchRecognizeResponse>()?;
Ok(response)
}
_ => Err(format!("unexpected result branch {r:?}").into()),
};
}
}
}
if let Some(any) = &operation.metadata {
let metadata = any.try_into_message::<speech::model::OperationMetadata>()?;
println!("LRO in progress, metadata={metadata:?}");
}
tokio::time::sleep(std::time::Duration::from_millis(500)).await;
if let Ok(attempt) = client.get_operation(operation.name.clone()).send().await {
operation = attempt;
}
}
}
Note how this loop explicitly waits before polling again. The polling period depends on the specific operation and its payload. You should consult the service documentation and/or experiment with your own data to determine a good value.
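The examples in this guide sleep for a fixed 500ms between polls. One common refinement is capped exponential backoff: wait a little longer after each attempt, up to a maximum. This sketch is a plain helper, not part of the client library; the base delay and cap are assumptions you should tune for your workload:

```rust
use std::time::Duration;

/// Compute the delay before the next poll using capped exponential
/// backoff. `attempt` is zero-based. The 500ms base and 10s cap are
/// illustrative values, not recommendations from the service.
fn poll_delay(attempt: u32) -> Duration {
    let base_ms: u64 = 500; // initial delay between polls
    let cap_ms: u64 = 10_000; // never wait more than 10s between polls
    let exp = attempt.min(16); // avoid u64 overflow in the shift
    Duration::from_millis(base_ms.saturating_mul(1u64 << exp).min(cap_ms))
}

fn main() {
    // The delays double on each attempt until they reach the cap.
    for attempt in 0..6 {
        println!("attempt {attempt}: wait {:?}", poll_delay(attempt));
    }
}
```

You would then replace the fixed `tokio::time::sleep(...)` call in the loop with `tokio::time::sleep(poll_delay(attempt)).await`, incrementing `attempt` on each iteration.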
The poller uses a policy to determine which polling errors are transient and may resolve themselves. The Configuring polling policies chapter covers this topic in detail.
The complete function is shown in the listing earlier in this guide.
Manually polling a long-running operation
In general, we recommend you use the previous two approaches in your application. Manually polling a long-running operation can be quite tedious, and it is easy to get the types involved wrong. If you do need to manually poll a long-running operation, this guide will walk you through the required steps. You may want to read the Operation message reference documentation, as some of its fields and types are used below.
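As a reading aid, the fields of the Operation message used below can be sketched like this. This is a simplified stand-in, not the generated types: the real types pack metadata and the response as Any messages and carry additional fields.

```rust
// Simplified sketch of google.longrunning.Operation. The generated Rust
// types wrap `metadata` and `Response` payloads in `Any` messages that
// must be decoded with try_into_message.
#[derive(Debug)]
struct Status {
    code: i32,       // canonical gRPC status code
    message: String, // human-readable error description
}

#[derive(Debug)]
enum OperationResult {
    Error(Status),     // the operation finished with an error
    Response(Vec<u8>), // the operation succeeded; payload must be decoded
}

#[derive(Debug)]
struct Operation {
    name: String,                    // used to poll via get_operation
    metadata: Option<Vec<u8>>,       // service-specific progress report
    done: bool,                      // true once the operation finished
    result: Option<OperationResult>, // only meaningful when done is true
}

fn main() {
    // A still-running operation: done is false and there is no result yet.
    let op = Operation {
        name: "projects/p/locations/global/operations/123".into(),
        metadata: None,
        done: false,
        result: None,
    };
    assert!(op.result.is_none());
    println!("operation {} done: {}", op.name, op.done);
}
```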
Recall that we started the long-running operation using the client:
let mut operation = client
.batch_recognize(/* stuff */)
/* more stuff */
.send()
.await?;
We are going to start a loop to poll the operation. First we need to check if the operation completed immediately; this is rare, but it does happen. The done field indicates whether the operation has completed:
let mut operation = operation;
loop {
    if operation.done {
        /* extract and return the result */
    }
    /* otherwise report progress, wait, and poll again */
}
In most cases, if the operation is done it contains a result. However, the field is optional because the service could return done as true with no result: maybe the operation deletes resources and a successful completion has no return value. In our example, with the Speech-to-Text service, we treat this as an error:
match &operation.result {
    None => {
        return Err("missing result for finished operation".into());
    }
    /* handle the `Some(...)` cases */
}
Assuming we have a result value, it may be an error or a valid response: starting a long-running operation successfully does not guarantee that it will complete successfully. We need to check for both. First, check for errors:
longrunning::model::operation::Result::Error(e) => {
    Err(format!("{e:?}").into())
}
The error type is a Status message. This does not implement the standard Error trait, so you need to convert it to a valid error yourself. You can use ServiceError::from to perform this conversion.
Assuming the result is successful, you need to extract the response type. You can find this type in the documentation for the LRO method, or by reading the service API documentation:
longrunning::model::operation::Result::Response(any) => {
    let response =
        any.try_into_message::<speech::model::BatchRecognizeResponse>()?;
    Ok(response)
}
Note that extraction of the value may fail if the type does not match what the service sent.
All types in Google Cloud may add fields and branches in the future. While this is unlikely for a common type such as Operation, it happens frequently for most service messages. The Google Cloud Client Libraries for Rust mark all structs and enums as #[non_exhaustive] to signal that such changes are possible. Because of this, you must handle the unexpected branch:
Some(r) => {
    return match r {
        longrunning::model::operation::Result::Error(e) => {
            Err(format!("{e:?}").into())
        }
        longrunning::model::operation::Result::Response(any) => {
            let response =
                any.try_into_message::<speech::model::BatchRecognizeResponse>()?;
            Ok(response)
        }
        _ => Err(format!("unexpected result branch {r:?}").into()),
    };
}
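To see why the wildcard arm matters, here is a minimal, self-contained sketch. The `library` module and `OperationResult` enum are hypothetical stand-ins for a library type marked `#[non_exhaustive]`; they are not part of the client libraries:

```rust
// A library enum marked #[non_exhaustive] may gain variants in future
// releases without a breaking change, so downstream crates must match
// with a wildcard arm.
mod library {
    #[non_exhaustive]
    #[derive(Debug)]
    pub enum OperationResult {
        Error(String),
        Response(String),
    }
}

// Within the defining crate the wildcard is technically unreachable
// (hence the allow); from another crate the compiler requires it.
#[allow(unreachable_patterns)]
fn describe(r: &library::OperationResult) -> String {
    match r {
        library::OperationResult::Error(e) => format!("error: {e}"),
        library::OperationResult::Response(p) => format!("response: {p}"),
        // Future-proofs the match against variants added later.
        _ => "unexpected branch".to_string(),
    }
}

fn main() {
    let ok = library::OperationResult::Response("transcript".to_string());
    println!("{}", describe(&ok));
}
```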
If the operation has not completed, then it may contain some metadata. Some services just include initial information about the request, while other services include partial progress reports. You can choose to extract and report this metadata:
if let Some(any) = &operation.metadata {
    let metadata = any.try_into_message::<speech::model::OperationMetadata>()?;
    println!("LRO in progress, metadata={metadata:?}");
}
If the operation has not completed, you need to wait before polling again. Consider adjusting the polling period, for example using a form of truncated exponential backoff. In this example we simply poll every 500ms:
tokio::time::sleep(std::time::Duration::from_millis(500)).await;
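A truncated exponential backoff schedule can be sketched as follows. The initial delay and cap are illustrative choices, not values mandated by the client libraries:

```rust
use std::cmp::min;
use std::time::Duration;

// Truncated exponential backoff: double the delay on every attempt,
// capped at `max` so the wait never grows without bound.
fn backoff_delay(attempt: u32, initial: Duration, max: Duration) -> Duration {
    min(initial.saturating_mul(2u32.saturating_pow(attempt)), max)
}

fn main() {
    let initial = Duration::from_millis(500);
    let max = Duration::from_secs(30);
    for attempt in 0..8 {
        println!(
            "attempt {attempt}: wait {:?}",
            backoff_delay(attempt, initial, max)
        );
    }
}
```

You would sleep for `backoff_delay(attempt, ...)` instead of a fixed 500ms before each `get_operation` call.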
And then poll the operation to get its new status:
if let Ok(attempt) = client.get_operation(operation.name.clone()).send().await {
    operation = attempt;
}
For simplicity, we have chosen to ignore all errors. In your application you may choose to treat only a subset of the errors as non-recoverable, and you may want to limit the number of polling attempts if they keep failing.
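One way to bound the polling attempts is a small failure budget that resets on success. The `PollBudget` type and its limits below are hypothetical, not part of the client libraries:

```rust
// Give up after a bounded number of consecutive polling failures,
// resetting the count whenever a poll succeeds.
struct PollBudget {
    consecutive_failures: u32,
    max_failures: u32,
}

impl PollBudget {
    fn new(max_failures: u32) -> Self {
        Self { consecutive_failures: 0, max_failures }
    }

    // Record the outcome of one polling attempt; returns `true` while
    // it is still worth polling again.
    fn record(&mut self, succeeded: bool) -> bool {
        if succeeded {
            self.consecutive_failures = 0;
        } else {
            self.consecutive_failures += 1;
        }
        self.consecutive_failures < self.max_failures
    }
}

fn main() {
    let mut budget = PollBudget::new(3);
    // Two failures, one success, then three failures in a row.
    let outcomes = [false, false, true, false, false, false];
    for (i, ok) in outcomes.iter().enumerate() {
        println!("attempt {i}: keep polling = {}", budget.record(*ok));
    }
}
```

In the manual loop, you would call `budget.record(...)` after each `get_operation` attempt and return an error once it reports `false`.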
You can find the full function below.
Starting a long-running operation: complete code
pub async fn start(project_id: &str) -> crate::Result<()> {
    let client = speech::client::Speech::new().await?;
    let operation = client
        .batch_recognize(format!(
            "projects/{project_id}/locations/global/recognizers/_"
        ))
        .set_files([speech::model::BatchRecognizeFileMetadata::new()
            .set_uri("gs://cloud-samples-data/speech/hello.wav")])
        .set_recognition_output_config(
            speech::model::RecognitionOutputConfig::new()
                .set_inline_response_config(speech::model::InlineOutputConfig::new()),
        )
        .set_processing_strategy(
            speech::model::batch_recognize_request::ProcessingStrategy::DYNAMIC_BATCHING,
        )
        .set_config(
            speech::model::RecognitionConfig::new()
                .set_language_codes(["en-US"])
                .set_model("short")
                .set_auto_decoding_config(speech::model::AutoDetectDecodingConfig::new()),
        )
        .send()
        .await?;
    println!("LRO started, response={operation:?}");
    let response = manually_poll_lro(client, operation).await;
    println!("LRO completed, response={response:?}");
    Ok(())
}
Automatically polling a long-running operation: complete code
pub async fn automatic(project_id: &str) -> crate::Result<()> {
    use speech::Poller;
    let client = speech::client::Speech::new().await?;
    let response = client
        .batch_recognize(format!(
            "projects/{project_id}/locations/global/recognizers/_"
        ))
        .set_files([speech::model::BatchRecognizeFileMetadata::new()
            .set_uri("gs://cloud-samples-data/speech/hello.wav")])
        .set_recognition_output_config(
            speech::model::RecognitionOutputConfig::new()
                .set_inline_response_config(speech::model::InlineOutputConfig::new()),
        )
        .set_processing_strategy(
            speech::model::batch_recognize_request::ProcessingStrategy::DYNAMIC_BATCHING,
        )
        .set_config(
            speech::model::RecognitionConfig::new()
                .set_language_codes(["en-US"])
                .set_model("short")
                .set_auto_decoding_config(speech::model::AutoDetectDecodingConfig::new()),
        )
        .poller()
        .until_done()
        .await?;
    println!("LRO completed, response={response:?}");
    Ok(())
}
Polling a long-running operation: complete code
pub async fn polling(project_id: &str) -> crate::Result<()> {
    use speech::Poller;
    let client = speech::client::Speech::new().await?;
    let mut poller = client
        .batch_recognize(format!(
            "projects/{project_id}/locations/global/recognizers/_"
        ))
        .set_files([speech::model::BatchRecognizeFileMetadata::new()
            .set_uri("gs://cloud-samples-data/speech/hello.wav")])
        .set_recognition_output_config(
            speech::model::RecognitionOutputConfig::new()
                .set_inline_response_config(speech::model::InlineOutputConfig::new()),
        )
        .set_processing_strategy(
            speech::model::batch_recognize_request::ProcessingStrategy::DYNAMIC_BATCHING,
        )
        .set_config(
            speech::model::RecognitionConfig::new()
                .set_language_codes(["en-US"])
                .set_model("short")
                .set_auto_decoding_config(speech::model::AutoDetectDecodingConfig::new()),
        )
        .poller();
    while let Some(p) = poller.poll().await {
        match p {
            speech::PollingResult::Completed(r) => {
                println!("LRO completed, response={r:?}");
            }
            speech::PollingResult::InProgress(m) => {
                println!("LRO in progress, metadata={m:?}");
            }
            speech::PollingResult::PollingError(e) => {
                println!("Transient error polling the LRO: {e}");
            }
        }
        tokio::time::sleep(std::time::Duration::from_millis(500)).await;
    }
    Ok(())
}
Manually polling a long-running operation: complete code
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// https://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use google_cloud_longrunning as longrunning;
use google_cloud_speech_v2 as speech;
pub async fn start(project_id: &str) -> crate::Result<()> {
let client = speech::client::Speech::new().await?;
let operation = client
.batch_recognize(format!(
"projects/{project_id}/locations/global/recognizers/_"
))
.set_files([speech::model::BatchRecognizeFileMetadata::new()
.set_uri("gs://cloud-samples-data/speech/hello.wav")])
.set_recognition_output_config(
speech::model::RecognitionOutputConfig::new()
.set_inline_response_config(speech::model::InlineOutputConfig::new()),
)
.set_processing_strategy(
speech::model::batch_recognize_request::ProcessingStrategy::DYNAMIC_BATCHING,
)
.set_config(
speech::model::RecognitionConfig::new()
.set_language_codes(["en-US"])
.set_model("short")
.set_auto_decoding_config(speech::model::AutoDetectDecodingConfig::new()),
)
.send()
.await?;
println!("LRO started, response={operation:?}");
let response = manually_poll_lro(client, operation).await;
println!("LRO completed, response={response:?}");
Ok(())
}
pub async fn automatic(project_id: &str) -> crate::Result<()> {
use speech::Poller;
let client = speech::client::Speech::new().await?;
let response = client
.batch_recognize(format!(
"projects/{project_id}/locations/global/recognizers/_"
))
.set_files([speech::model::BatchRecognizeFileMetadata::new()
.set_uri("gs://cloud-samples-data/speech/hello.wav")])
.set_recognition_output_config(
speech::model::RecognitionOutputConfig::new()
.set_inline_response_config(speech::model::InlineOutputConfig::new()),
)
.set_processing_strategy(
speech::model::batch_recognize_request::ProcessingStrategy::DYNAMIC_BATCHING,
)
.set_config(
speech::model::RecognitionConfig::new()
.set_language_codes(["en-US"])
.set_model("short")
.set_auto_decoding_config(speech::model::AutoDetectDecodingConfig::new()),
)
.poller()
.until_done()
.await?;
println!("LRO completed, response={response:?}");
Ok(())
}
pub async fn polling(project_id: &str) -> crate::Result<()> {
use speech::Poller;
let client = speech::client::Speech::new().await?;
let mut poller = client
.batch_recognize(format!(
"projects/{project_id}/locations/global/recognizers/_"
))
.set_files([speech::model::BatchRecognizeFileMetadata::new()
.set_uri("gs://cloud-samples-data/speech/hello.wav")])
.set_recognition_output_config(
speech::model::RecognitionOutputConfig::new()
.set_inline_response_config(speech::model::InlineOutputConfig::new()),
)
.set_processing_strategy(
speech::model::batch_recognize_request::ProcessingStrategy::DYNAMIC_BATCHING,
)
.set_config(
speech::model::RecognitionConfig::new()
.set_language_codes(["en-US"])
.set_model("short")
.set_auto_decoding_config(speech::model::AutoDetectDecodingConfig::new()),
)
.poller();
while let Some(p) = poller.poll().await {
match p {
speech::PollingResult::Completed(r) => {
println!("LRO completed, response={r:?}");
}
speech::PollingResult::InProgress(m) => {
println!("LRO in progress, metadata={m:?}");
}
speech::PollingResult::PollingError(e) => {
println!("Transient error polling the LRO: {e}");
}
}
        // Wait before polling again. Production code should prefer a polling
        // backoff policy over a fixed delay.
        tokio::time::sleep(std::time::Duration::from_millis(500)).await;
}
Ok(())
}
pub async fn manually_poll_lro(
    client: speech::client::Speech,
    mut operation: longrunning::model::Operation,
) -> crate::Result<speech::model::BatchRecognizeResponse> {
    loop {
        // A finished operation contains either an error or the response,
        // encoded in its `result` field.
        if operation.done {
match &operation.result {
None => {
return Err("missing result for finished operation".into());
}
Some(r) => {
return match r {
longrunning::model::operation::Result::Error(e) => {
Err(format!("{e:?}").into())
}
longrunning::model::operation::Result::Response(any) => {
let response =
any.try_into_message::<speech::model::BatchRecognizeResponse>()?;
Ok(response)
}
_ => Err(format!("unexpected result branch {r:?}").into()),
};
}
}
}
        // While the operation is still running, its metadata reports progress.
        if let Some(any) = &operation.metadata {
            let metadata = any.try_into_message::<speech::model::OperationMetadata>()?;
            println!("LRO in progress, metadata={metadata:?}");
        }
        tokio::time::sleep(std::time::Duration::from_millis(500)).await;
        // Treat errors in the polling request as transient: keep the previous
        // operation value and try again on the next iteration.
        if let Ok(attempt) = client.get_operation(operation.name.clone()).send().await {
            operation = attempt;
        }
}
}
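
The `manually_poll_lro` function above reduces to a small state machine: check `done`, report metadata, wait, then fetch the operation again. The following dependency-free sketch illustrates just that state machine; the `Operation` struct and `get_operation` function here are hypothetical stand-ins for the generated types, not part of any Google Cloud SDK:

```rust
// A dependency-free sketch of the polling state machine used by
// `manually_poll_lro`. `Operation` and `get_operation` are hypothetical
// stand-ins for the generated types, not part of any SDK.
#[derive(Clone, Debug, PartialEq)]
struct Operation {
    name: String,
    done: bool,
    // Ok(response) or Err(error message), mirroring the `result` oneof.
    result: Option<Result<String, String>>,
}

// Simulates the service: the operation completes on the third poll.
fn get_operation(name: &str, polls: &mut u32) -> Operation {
    *polls += 1;
    let done = *polls >= 3;
    Operation {
        name: name.to_string(),
        done,
        result: done.then(|| Ok("transcript".to_string())),
    }
}

fn poll_until_done(mut operation: Operation) -> Result<String, String> {
    let mut polls = 0;
    loop {
        if operation.done {
            // A finished operation must carry a result.
            return match operation.result {
                None => Err("missing result for finished operation".to_string()),
                Some(result) => result,
            };
        }
        // A real implementation sleeps here before polling again.
        operation = get_operation(&operation.name, &mut polls);
    }
}

fn main() {
    let start = Operation {
        name: "op/123".to_string(),
        done: false,
        result: None,
    };
    // prints: Ok("transcript")
    println!("{:?}", poll_until_done(start));
}
```

The `.poller()` helpers shown earlier implement this same loop for you, adding retry and backoff policies on top.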
What's Next
- Configuring polling policies describes how to customize error handling and backoff periods for LROs.