The Google Cloud Client Libraries for Rust is a collection of Rust crates to
interact with Google Cloud services.
This guide is organized as a series of small tutorials showing how to perform
specific actions with the client libraries. Most Google Cloud services follow a
series of guidelines, collectively known as the AIPs.
This makes the client libraries more consistent from one service to the next:
the functions to delete or list resources almost always have the same interface.
This guide is intended for Rust developers who are familiar with the language
and the Rust ecosystem. We will assume you know how to use Rust and its
supporting toolchain.
At the risk of being repetitive, most of the guides do not assume you have used
any Google Cloud service or client library before (in Rust or any other
language). However, the guides will refer you to service-specific tutorials to
initialize their projects and services.
These guides are not intended as tutorials for each service or as extended
guides on how to design Rust applications to work on Google Cloud. They are
starting points to get you productive with the client libraries for Rust.
We recommend you read the service documentation at https://cloud.google.com to
learn more about each service. If you need guidance on how to design your
application for Google Cloud, the Cloud Architecture Center may have what you
are looking for.
The Google Cloud CLI is a set of tools for Google Cloud. It contains the
gcloud and bq command-line tools used to access Compute Engine, Cloud Storage,
BigQuery, and other services from the command line. You can run these
tools interactively or in your automated scripts.
The [Cloud Client Libraries for Rust] is the idiomatic way for Rust developers
to integrate with Google Cloud services, such as Secret Manager and Workflows.
For example, to use the package for an individual API, such as the
Secret Manager API, do the following:
Create a new Rust project:
cargo new my-project
Change your directory to the new project:
cd my-project
Add the [Secret Manager] client library to the new project:
cargo add google-cloud-secretmanager-v1
The Google Cloud Client Libraries for Rust use "clients" as the main
abstraction to interface with specific services. Clients are implemented
as Rust structs, with methods corresponding to each RPC offered by the
service. In other words, to use a Google Cloud service with the Rust
client libraries, you must first initialize a client.
In this guide we will initialize a client and then use the client to make
a simple RPC. To make this guide concrete, we will use the
Secret Manager API. The same structure applies to any other service in
Google Cloud.
We recommend you follow one of the "Getting Started" guides for Secret Manager
before attempting to use the client library, such as how to Create a secret.
These guides cover service-specific concepts in more detail, and provide more
detailed instructions on project prerequisites than we can fit in this guide.
Putting all this code together into a full program looks as follows:
pub type Result = std::result::Result<(), Box<dyn std::error::Error>>;

pub async fn initialize_client(project_id: &str) -> Result {
    use google_cloud_secretmanager_v1::client::SecretManagerService;

    // Initialize a client with the default configuration. This is an
    // asynchronous operation that may fail, as it requires acquiring an
    // access token.
    let client = SecretManagerService::new().await?;

    // Once initialized, use the client to make requests.
    let mut items = client
        .list_locations(format!("projects/{project_id}"))
        .paginator()
        .await;
    while let Some(page) = items.next().await {
        let page = page?;
        for location in page.locations {
            println!("{}", location.name);
        }
    }
    Ok(())
}
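The pagination loop above follows a general pattern. As a rough illustration only, here is a simplified, std-only sketch of a pager; the `Paginator` name and shape are made up for this sketch and are not the real client API (which is asynchronous and fetches pages over the network):

```rust
use std::collections::VecDeque;

// Hypothetical pager holding pre-loaded pages of items.
struct Paginator<T> {
    pages: VecDeque<Vec<T>>,
}

impl<T> Paginator<T> {
    fn new(pages: Vec<Vec<T>>) -> Self {
        Self { pages: pages.into_iter().collect() }
    }

    // In the real libraries this is `async` and issues an RPC for the next
    // page; here it just pops a pre-loaded page.
    fn next(&mut self) -> Option<Vec<T>> {
        self.pages.pop_front()
    }
}

// Drain every page, collecting all items, mirroring the `while let` loop above.
fn collect_all(mut items: Paginator<String>) -> Vec<String> {
    let mut names = Vec::new();
    while let Some(page) = items.next() {
        for name in page {
            names.push(name);
        }
    }
    names
}
```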
Occasionally, an API may need to expose a method that takes a significant amount
of time to complete. In these situations, it is often a poor user experience to
simply block while the task runs; rather, it is better to return some kind of
promise to the user and allow the user to check back in later.
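To make the "promise" idea concrete, here is a minimal, std-only model of the shape such an operation takes. The types here are hypothetical simplifications, not the real longrunning API:

```rust
// Hypothetical, simplified model of a long-running operation "promise".
#[derive(Debug)]
struct Operation<R> {
    // The name is what a client would use to poll the service later.
    name: String,
    // Has the operation finished?
    done: bool,
    // Only meaningful once `done` is true.
    result: Option<Result<R, String>>,
}

// "Checking back later": either the promise is fulfilled (success or error),
// or the caller must wait and check again.
fn check<R: Clone>(op: &Operation<R>) -> Option<Result<R, String>> {
    // Touch `name` to mirror how a real poller would use it for the poll RPC.
    let _ = &op.name;
    if op.done { op.result.clone() } else { None }
}
```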
The Google Cloud Client Libraries for Rust provide helpers to work with these
long-running operations (LROs). This guide will show you how to start LROs and
wait for their completion.
The guide uses the Speech-To-Text V2 service to keep the code snippets
concrete. The same ideas work for any other service using LROs.
We recommend you first follow one of the service guides, such as
Transcribe speech to text by using the command line. These guides will cover
critical topics such as ensuring your project has the API enabled, your account
has the right permissions, and how to set up billing for your project (if
needed). Skipping the service guides may result in problems that are hard to
diagnose.
To start a long-running operation, first initialize a client as
usual and then make the RPC. But first, add some
use declarations to avoid the long package names:
use google_cloud_longrunning as longrunning;
use google_cloud_speech_v2 as speech;
Now create the client:
let client = speech::client::Speech::new().await?;
We will use batch recognize in this example. While this is designed for long
audio files, it works well with small files too.
In Rust, each request is represented by a method that returns a request builder.
First, call the right method on the client to create the request builder. We
will use the default recognizer (_) in the global region.
let operation = client
.batch_recognize(format!(
"projects/{project_id}/locations/global/recognizers/_"
))
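The request-builder pattern these methods follow can be sketched in a simplified, std-only form. The names below are illustrative stand-ins, not the real speech API:

```rust
// Hypothetical request type a builder would produce.
#[derive(Debug, Default, PartialEq)]
struct BatchRecognizeRequest {
    recognizer: String,
    files: Vec<String>,
}

// Each client method returns a builder; setters consume and return the
// builder so calls can be chained.
struct BatchRecognizeBuilder(BatchRecognizeRequest);

impl BatchRecognizeBuilder {
    fn new(recognizer: impl Into<String>) -> Self {
        Self(BatchRecognizeRequest {
            recognizer: recognizer.into(),
            ..Default::default()
        })
    }

    fn set_files<I, S>(mut self, files: I) -> Self
    where
        I: IntoIterator<Item = S>,
        S: Into<String>,
    {
        self.0.files = files.into_iter().map(Into::into).collect();
        self
    }

    // The real builders end with `.send().await` (or `.poller()`); this
    // sketch just returns the assembled request.
    fn build(self) -> BatchRecognizeRequest {
        self.0
    }
}
```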
Then initialize the request to use a publicly available audio file:
.set_files([speech::model::BatchRecognizeFileMetadata::new()
    .set_uri("gs://cloud-samples-data/speech/hello.wav")])
Spoiler: preparing the request is identical to how we started a long-running
operation. The difference comes at the end where, instead of sending the
request to get the Operation promise:
.send()
.await?;
we create a Poller and wait until it is done:
.poller()
.until_done()
.await?;
Let's review the code step-by-step, without spoilers this time. First, we need
to bring the Poller trait into scope via a use declaration:
use speech::Poller;
Then we initialize the client and prepare the request as before:
let client = speech::client::Speech::new().await?;
let response = client
    .batch_recognize(format!(
        "projects/{project_id}/locations/global/recognizers/_"
    ))
    // ... same request fields as before ...
    .poller()
    .until_done()
    .await?;
While .until_done() is convenient, it omits some information: long-running
operations may report partial progress via a "metadata" attribute. If your
application requires such information, you need to use the poller directly:
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     https://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

use google_cloud_longrunning as longrunning;
use google_cloud_speech_v2 as speech;

pub async fn start(project_id: &str) -> crate::Result<()> {
    let client = speech::client::Speech::new().await?;
    let operation = client
        .batch_recognize(format!(
            "projects/{project_id}/locations/global/recognizers/_"
        ))
        .set_files([speech::model::BatchRecognizeFileMetadata::new()
            .set_uri("gs://cloud-samples-data/speech/hello.wav")])
        .set_recognition_output_config(
            speech::model::RecognitionOutputConfig::new()
                .set_inline_response_config(speech::model::InlineOutputConfig::new()),
        )
        .set_processing_strategy(
            speech::model::batch_recognize_request::ProcessingStrategy::DYNAMIC_BATCHING,
        )
        .set_config(
            speech::model::RecognitionConfig::new()
                .set_language_codes(["en-US"])
                .set_model("short")
                .set_auto_decoding_config(speech::model::AutoDetectDecodingConfig::new()),
        )
        .send()
        .await?;
    println!("LRO started, response={operation:?}");
    let response = manually_poll_lro(client, operation).await;
    println!("LRO completed, response={response:?}");
    Ok(())
}

pub async fn automatic(project_id: &str) -> crate::Result<()> {
    use speech::Poller;
    let client = speech::client::Speech::new().await?;
    let response = client
        .batch_recognize(format!(
            "projects/{project_id}/locations/global/recognizers/_"
        ))
        .set_files([speech::model::BatchRecognizeFileMetadata::new()
            .set_uri("gs://cloud-samples-data/speech/hello.wav")])
        .set_recognition_output_config(
            speech::model::RecognitionOutputConfig::new()
                .set_inline_response_config(speech::model::InlineOutputConfig::new()),
        )
        .set_processing_strategy(
            speech::model::batch_recognize_request::ProcessingStrategy::DYNAMIC_BATCHING,
        )
        .set_config(
            speech::model::RecognitionConfig::new()
                .set_language_codes(["en-US"])
                .set_model("short")
                .set_auto_decoding_config(speech::model::AutoDetectDecodingConfig::new()),
        )
        .poller()
        .until_done()
        .await?;
    println!("LRO completed, response={response:?}");
    Ok(())
}

pub async fn polling(project_id: &str) -> crate::Result<()> {
    use speech::Poller;
    let client = speech::client::Speech::new().await?;
    let mut poller = client
        .batch_recognize(format!(
            "projects/{project_id}/locations/global/recognizers/_"
        ))
        .set_files([speech::model::BatchRecognizeFileMetadata::new()
            .set_uri("gs://cloud-samples-data/speech/hello.wav")])
        .set_recognition_output_config(
            speech::model::RecognitionOutputConfig::new()
                .set_inline_response_config(speech::model::InlineOutputConfig::new()),
        )
        .set_processing_strategy(
            speech::model::batch_recognize_request::ProcessingStrategy::DYNAMIC_BATCHING,
        )
        .set_config(
            speech::model::RecognitionConfig::new()
                .set_language_codes(["en-US"])
                .set_model("short")
                .set_auto_decoding_config(speech::model::AutoDetectDecodingConfig::new()),
        )
        .poller();
    while let Some(p) = poller.poll().await {
        match p {
            speech::PollingResult::Completed(r) => {
                println!("LRO completed, response={r:?}");
            }
            speech::PollingResult::InProgress(m) => {
                println!("LRO in progress, metadata={m:?}");
            }
            speech::PollingResult::PollingError(e) => {
                println!("Transient error polling the LRO: {e}");
            }
        }
        tokio::time::sleep(std::time::Duration::from_millis(500)).await;
    }
    Ok(())
}

pub async fn manually_poll_lro(
    client: speech::client::Speech,
    operation: longrunning::model::Operation,
) -> crate::Result<speech::model::BatchRecognizeResponse> {
    let mut operation = operation;
    loop {
        if operation.done {
            match &operation.result {
                None => {
                    return Err("missing result for finished operation".into());
                }
                Some(r) => {
                    return match r {
                        longrunning::model::operation::Result::Error(e) => {
                            Err(format!("{e:?}").into())
                        }
                        longrunning::model::operation::Result::Response(any) => {
                            let response =
                                any.try_into_message::<speech::model::BatchRecognizeResponse>()?;
                            Ok(response)
                        }
                        _ => Err(format!("unexpected result branch {r:?}").into()),
                    };
                }
            }
        }
        if let Some(any) = &operation.metadata {
            let metadata = any.try_into_message::<speech::model::OperationMetadata>()?;
            println!("LRO in progress, metadata={metadata:?}");
        }
        tokio::time::sleep(std::time::Duration::from_millis(500)).await;
        if let Ok(attempt) = client.get_operation(operation.name.clone()).send().await {
            operation = attempt;
        }
    }
}
Note how this loop explicitly waits before polling again. The polling period
depends on the specific operation and its payload. You should consult the
service documentation and/or experiment with your own data to determine a good
value.
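For example, a capped exponential backoff is a common (though not mandatory) choice of polling period. A small std-only helper to compute such a schedule might look like this; the function name and parameters are illustrative:

```rust
use std::time::Duration;

// Compute capped exponential-backoff delays: initial, initial * multiplier,
// initial * multiplier^2, ..., never exceeding `cap`.
fn backoff_schedule(
    initial: Duration,
    multiplier: u32,
    cap: Duration,
    attempts: usize,
) -> Vec<Duration> {
    let mut delays = Vec::with_capacity(attempts);
    let mut delay = initial;
    for _ in 0..attempts {
        delays.push(delay.min(cap));
        delay = delay.saturating_mul(multiplier);
    }
    delays
}
```

Each entry would then be the argument to tokio::time::sleep before the next poll, instead of the fixed 500ms used in the sample.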
The poller uses a policy to determine what polling errors are transient and may
resolve themselves. The Configuring polling policies chapter covers this topic
in detail.
In general, we recommend you use the previous two approaches in your
application. Manually polling a long-running operation can be quite tedious, and
it is easy to get the types involved wrong. If you do need to manually poll
a long-running operation, this guide will walk you through the required steps.
You may want to read the Operation message
reference documentation, as some of the fields and types are used below.
Recall that we started the long-running operation using the client:
let operation = client
    .batch_recognize(/* ... */)
    .send()
    .await?;
We are going to start a loop to poll the operation. We need to check whether
the operation completed immediately; this is rare, but it does happen. The done
field indicates whether the operation has completed:
let mut operation = operation;
loop {
    if operation.done {
        // ... handle the operation result here ...
    }
    // ... otherwise report progress, wait, and poll again ...
}
In most cases, if the operation is done it contains a result. However, the field
is optional because the service could return done as true with no result: maybe
the operation deletes resources and a successful completion has no return value.
In our example, with the Speech-to-Text service, we treat this as an error:
match &operation.result {
    None => {
        return Err("missing result for finished operation".into());
    }
    Some(r) => {
        // ... examine the error or extract the response ...
    }
}
Assuming we have a result value, it may be an error or a valid response:
starting a long-running operation successfully does not guarantee that it will
complete successfully. We need to check for both. First, check for errors:
longrunning::model::operation::Result::Error(e) => {
    Err(format!("{e:?}").into())
}
The error type is a Status message type. This does not
implement the standard Error trait, so you need to manually convert it to
a valid error. You can use ServiceError::from to perform this conversion.
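The shape of that conversion can be sketched in std-only form. Here ProtoStatus is a made-up stand-in for the real Status message, and this wrapper merely illustrates the spirit of what ServiceError::from does:

```rust
use std::error::Error;
use std::fmt;

// Hypothetical stand-in for the Status message found in the error branch.
#[derive(Debug)]
struct ProtoStatus {
    code: i32,
    message: String,
}

// A wrapper that implements std::error::Error so the status can flow
// through `Box<dyn Error>` like any other error.
#[derive(Debug)]
struct ServiceError(ProtoStatus);

impl fmt::Display for ServiceError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "service error {}: {}", self.0.code, self.0.message)
    }
}

impl Error for ServiceError {}

impl From<ProtoStatus> for ServiceError {
    fn from(status: ProtoStatus) -> Self {
        ServiceError(status)
    }
}
```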
Assuming the result is successful, you need to extract the response type. You
can find this type in the documentation for the LRO method, or by reading the
service API documentation:
longrunning::model::operation::Result::Response(any) => {
    let response =
        any.try_into_message::<speech::model::BatchRecognizeResponse>()?;
    Ok(response)
}
Note that extraction of the value may fail if the type does not match what the
service sent.
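This failure mode is analogous to downcasting with std::any::Any. Note this is only an analogy for illustration: the client libraries use a protobuf Any with try_into_message, not std's Any:

```rust
use std::any::Any;

// Extraction succeeds only when the stored type matches the requested one;
// asking for the wrong type yields None rather than a value.
fn extract_u32(value: &dyn Any) -> Option<u32> {
    value.downcast_ref::<u32>().copied()
}
```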
All types in Google Cloud may add fields and branches in the future. While this
is unlikely for a common type such as Operation, it happens frequently for
most service messages. The Google Cloud Client Libraries for Rust mark all
structs and enums as #[non_exhaustive] to signal that such changes are
possible. Your code must therefore handle the unexpected case:
return match r {
    longrunning::model::operation::Result::Error(e) => {
        Err(format!("{e:?}").into())
    }
    longrunning::model::operation::Result::Response(any) => {
        let response =
            any.try_into_message::<speech::model::BatchRecognizeResponse>()?;
        Ok(response)
    }
    _ => Err(format!("unexpected result branch {r:?}").into()),
};
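The effect of #[non_exhaustive] can be illustrated with a small, self-contained
sketch. The Status enum below is a stand-in for illustration, not a type from
the client libraries; downstream crates matching on such an enum must include a
wildcard arm, so variants added later do not break their build:

```rust
// A stand-in for a service result type. `#[non_exhaustive]` signals that
// variants may be added later; crates matching on it need a `_` arm.
#[non_exhaustive]
#[derive(Debug)]
pub enum Status {
    Ok(String),
    Error(String),
}

pub fn describe(status: &Status) -> String {
    match status {
        Status::Ok(msg) => format!("ok: {msg}"),
        Status::Error(e) => format!("error: {e}"),
        // Any variant added in the future lands here instead of
        // failing to compile.
        #[allow(unreachable_patterns)]
        _ => "unexpected branch".to_string(),
    }
}
```

Within the defining crate the wildcard arm is technically redundant (hence the
allow attribute), but in a downstream crate the compiler requires it.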
If the operation has not completed, then it may contain some metadata. Some
services just include initial information about the request, while other
services include partial progress reports. You can choose to extract and report
this metadata:
if let Some(any) = &operation.metadata {
    let metadata = any.try_into_message::<speech::model::OperationMetadata>()?;
    println!("LRO in progress, metadata={metadata:?}");
}
As the operation has not completed, you need to wait before polling again.
Consider adjusting the polling period, maybe using a form of truncated
exponential backoff. In this example we simply poll every 500ms:
tokio::time::sleep(std::time::Duration::from_millis(500)).await;
And then poll the operation to get its new status:
if let Ok(attempt) = client.get_operation(operation.name.clone()).send().await {
    operation = attempt;
}
For simplicity, we have chosen to ignore all errors. In your application you
may choose to treat only a subset of the errors as non-recoverable, and may
want to limit the number of polling attempts when errors persist.
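One way to bound the loop is to cap the number of polling attempts. The sketch
below uses a simulated poll function (an assumption for illustration, not the
client API) to show the shape of such a limit:

```rust
// Simulated poll: reports completion only on the fourth attempt.
fn poll(attempt: u32) -> Option<&'static str> {
    if attempt >= 3 { Some("transcript") } else { None }
}

// Give up after `max_attempts` polls instead of looping forever.
fn poll_with_limit(max_attempts: u32) -> Result<&'static str, String> {
    for attempt in 0..max_attempts {
        if let Some(response) = poll(attempt) {
            return Ok(response);
        }
        // A real loop would sleep (or back off) here before retrying.
    }
    Err(format!("not done after {max_attempts} attempts"))
}
```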
The Google Cloud Client Libraries for Rust provide helper functions to simplify
waiting for and monitoring the progress of long-running operations (LROs).
These helpers use policies to configure the polling frequency and to determine
which polling errors are transient and may be ignored until the next polling
event.
This guide will walk you through the configuration of these policies for all the
long-running operations started by a client, or just for one specific request.
There are two different policies controlling the behavior of the LRO loops:
The polling backoff policy controls how long the loop waits before polling
the status of an LRO that is still in progress.
The polling error policy controls what to do on a polling error. Some polling
errors are unrecoverable, and indicate that the operation was aborted or the
caller has no permissions to check the status of the LRO. Other polling errors
are transient, and indicate a temporary problem in the client network or the
service.
Each of these policies can be set independently, either for all the LROs
started by a client or for just one request.
The guide uses the Speech-to-Text V2 service to keep the code snippets
concrete. The same ideas work for any other service using LROs.
We recommend you first follow one of the service guides, such as
Transcribe speech to text by using the command line. These guides will cover
critical topics such as ensuring your project has the API enabled, your account
has the right permissions, and how to set up billing for your project (if
needed). Skipping the service guides may result in problems that are hard to
diagnose.
If you are planning to use the same polling backoff policy for all (or even
most) requests with the same client, then consider setting this as a client
option.
Unless you override the policy with a per-request setting, this policy will be
in effect for any long-running operation started with the client. In this
example, if you make a call such as:
The client library will wait 500ms after the first polling attempt, then
1,000ms (1s) after the second attempt, then 2s, 4s, and 8s after the following
attempts; every attempt after that will wait 10s.
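That schedule, doubling from the initial delay up to a cap, can be reproduced
with a small helper. This is a sketch of the arithmetic, not the library's
implementation:

```rust
use std::time::Duration;

// Delay before retry number `attempt` (0-based): start at `initial`,
// double each time, and never exceed `maximum`.
fn backoff_delay(initial: Duration, maximum: Duration, attempt: u32) -> Duration {
    initial
        .checked_mul(2u32.saturating_pow(attempt))
        .unwrap_or(maximum)
        .min(maximum)
}

// With initial = 500ms and maximum = 10s this yields:
// 500ms, 1s, 2s, 4s, 8s, 10s, 10s, ...
```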
As described in the previous section, we need a type implementing the
PollingBackoffPolicy trait to configure the polling frequency. We will also
use ExponentialBackoff in this example:
use google_cloud_gax::exponential_backoff::ExponentialBackoffBuilder;
use std::time::Duration;
The configuration of the request will require bringing a trait within scope:
use google_cloud_gax::options::RequestOptionsBuilder;
You create the request builder as usual:
let response = client
.batch_recognize(format!(
"projects/{project_id}/locations/global/recognizers/_"
))
To configure the retryable errors, we need a type implementing the
PollingErrorPolicy trait. The client libraries provide several implementations;
a conservative choice is Aip194Strict:
use google_cloud_gax::options::ClientConfig;
use google_cloud_gax::polling_error_policy::Aip194Strict;
use google_cloud_gax::polling_error_policy::PollingErrorPolicyExt;
use std::time::Duration;
If you are planning to use the same polling policy for all (or even most)
requests with the same client, then consider setting this as a client option.
Initialize the client with the configuration you want:
Unless you override the policy with a per-request setting, this policy will be
in effect for any long-running operation started with the client. In this
example, if you make a call such as:
The client library will only treat UNAVAILABLE (see AIP-194) as a retryable
error, and will stop polling after 100 attempts or 300 seconds, whichever comes
first.
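The decision such a policy makes can be modeled in a few lines of plain Rust.
This models the behavior described above; it is not the library's Aip194Strict
type, and the Code enum is a simplified stand-in for gRPC status codes:

```rust
use std::time::Duration;

// A reduced set of gRPC-style status codes for the sketch.
#[derive(PartialEq)]
enum Code {
    Unavailable,
    PermissionDenied,
}

// Keep polling only while the error is transient (AIP-194: UNAVAILABLE)
// and both the attempt and time limits still hold.
fn continue_polling(code: &Code, attempts: u32, elapsed: Duration) -> bool {
    *code == Code::Unavailable
        && attempts < 100
        && elapsed < Duration::from_secs(300)
}
```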
To configure the retryable errors, we need a type implementing the
PollingErrorPolicy trait. The client libraries provide several implementations;
a conservative choice is Aip194Strict:
use google_cloud_gax::polling_error_policy::Aip194Strict;
use google_cloud_gax::polling_error_policy::PollingErrorPolicyExt;
use std::time::Duration;
The configuration of the request will require bringing a trait within scope:
use google_cloud_gax::options::RequestOptionsBuilder;
You create the request builder as usual:
let response = client
.batch_recognize(format!(
"projects/{project_id}/locations/global/recognizers/_"
))