public interface StreamingRecognitionConfigOrBuilder
extends com.google.protobuf.MessageOrBuilder
| Modifier and Type | Method and Description |
|---|---|
| `RecognitionConfig` | `getConfig()` [Required] The `config` message provides information to the recognizer that specifies how to process the request. |
| `RecognitionConfigOrBuilder` | `getConfigOrBuilder()` [Required] The `config` message provides information to the recognizer that specifies how to process the request. |
| `boolean` | `getInterimResults()` [Optional] If `true`, interim results (tentative hypotheses) may be returned as they become available (these interim results are indicated with the `is_final=false` flag). |
| `boolean` | `getSingleUtterance()` [Optional] If `false` or omitted, the recognizer will perform continuous recognition (continuing to process audio even if the user pauses speaking) until the client closes the output stream (gRPC API) or until the maximum time limit has been reached. |
| `boolean` | `hasConfig()` [Required] The `config` message provides information to the recognizer that specifies how to process the request. |
Methods inherited from interface com.google.protobuf.MessageOrBuilder: findInitializationErrors, getAllFields, getDefaultInstanceForType, getDescriptorForType, getField, getInitializationErrorString, getOneofFieldDescriptor, getRepeatedField, getRepeatedFieldCount, getUnknownFields, hasField, hasOneof

boolean hasConfig()
[Required] The `config` message provides information to the recognizer that specifies how to process the request.
optional .google.cloud.speech.v1beta1.RecognitionConfig config = 1;

RecognitionConfig getConfig()
[Required] The `config` message provides information to the recognizer that specifies how to process the request.
optional .google.cloud.speech.v1beta1.RecognitionConfig config = 1;

RecognitionConfigOrBuilder getConfigOrBuilder()
[Required] The `config` message provides information to the recognizer that specifies how to process the request.
optional .google.cloud.speech.v1beta1.RecognitionConfig config = 1;

boolean getSingleUtterance()
[Optional] If `false` or omitted, the recognizer will perform continuous recognition (continuing to process audio even if the user pauses speaking) until the client closes the output stream (gRPC API) or until the maximum time limit has been reached. Multiple `StreamingRecognitionResult`s with the `is_final` flag set to `true` may be returned. If `true`, the recognizer will detect a single spoken utterance. When it detects that the user has paused or stopped speaking, it will return an `END_OF_UTTERANCE` event and cease recognition. It will return no more than one `StreamingRecognitionResult` with the `is_final` flag set to `true`.
optional bool single_utterance = 2;

boolean getInterimResults()
[Optional] If `true`, interim results (tentative hypotheses) may be returned as they become available (these interim results are indicated with the `is_final=false` flag). If `false` or omitted, only `is_final=true` result(s) are returned.
optional bool interim_results = 3;
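For orientation, the following is a minimal sketch of how these accessors are commonly exercised. It assumes the standard protobuf-java builder generated alongside this interface (the `newBuilder()` and `set*` calls shown are builder methods, not part of this interface) and that the generated classes live in the `com.google.cloud.speech.v1beta1` Java package; `RecognitionConfig.getDefaultInstance()` stands in here for a fully populated recognizer configuration.

```java
import com.google.cloud.speech.v1beta1.RecognitionConfig;
import com.google.cloud.speech.v1beta1.StreamingRecognitionConfig;
import com.google.cloud.speech.v1beta1.StreamingRecognitionConfigOrBuilder;

public class StreamingRecognitionConfigSketch {
  public static void main(String[] args) {
    // [Required] config: a placeholder here; in practice this carries the
    // recognizer settings (encoding, sample rate, and so on).
    RecognitionConfig recognitionConfig = RecognitionConfig.getDefaultInstance();

    // The built message implements StreamingRecognitionConfigOrBuilder, so the
    // getters documented above can be called on it (or on the builder itself).
    StreamingRecognitionConfigOrBuilder streamingConfig =
        StreamingRecognitionConfig.newBuilder()
            .setConfig(recognitionConfig)
            .setSingleUtterance(true)  // stop after one detected utterance
            .setInterimResults(true)   // allow is_final=false hypotheses
            .build();

    System.out.println("hasConfig: " + streamingConfig.hasConfig());                 // true
    System.out.println("single_utterance: " + streamingConfig.getSingleUtterance()); // true
    System.out.println("interim_results: " + streamingConfig.getInterimResults());   // true
  }
}
```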