public interface StreamingRecognizeRequestOrBuilder
extends com.google.protobuf.MessageOrBuilder
| Modifier and Type | Method and Description |
|---|---|
| com.google.protobuf.ByteString | getAudioContent()<br>The audio data to be recognized. |
| StreamingRecognitionConfig | getStreamingConfig()<br>The `streaming_config` message provides information to the recognizer that specifies how to process the request. |
| StreamingRecognitionConfigOrBuilder | getStreamingConfigOrBuilder()<br>The `streaming_config` message provides information to the recognizer that specifies how to process the request. |
| StreamingRecognizeRequest.StreamingRequestCase | getStreamingRequestCase() |
Methods inherited from interface com.google.protobuf.MessageOrBuilder: findInitializationErrors, getAllFields, getDefaultInstanceForType, getDescriptorForType, getField, getInitializationErrorString, getOneofFieldDescriptor, getRepeatedField, getRepeatedFieldCount, getUnknownFields, hasField, hasOneof

StreamingRecognitionConfig getStreamingConfig()

The `streaming_config` message provides information to the recognizer that specifies how to process the request. The first `StreamingRecognizeRequest` message must contain a `streaming_config` message.

`optional .google.cloud.speech.v1beta1.StreamingRecognitionConfig streaming_config = 1;`

StreamingRecognitionConfigOrBuilder getStreamingConfigOrBuilder()

The `streaming_config` message provides information to the recognizer that specifies how to process the request. The first `StreamingRecognizeRequest` message must contain a `streaming_config` message.
`optional .google.cloud.speech.v1beta1.StreamingRecognitionConfig streaming_config = 1;`

com.google.protobuf.ByteString getAudioContent()

The audio data to be recognized. Sequential chunks of audio data are sent in sequential `StreamingRecognizeRequest` messages. The first `StreamingRecognizeRequest` message must not contain `audio_content` data and all subsequent `StreamingRecognizeRequest` messages must contain `audio_content` data. The audio bytes must be encoded as specified in `RecognitionConfig`. Note: as with all bytes fields, protobuffers use a pure binary representation (not base64). See [audio limits](https://cloud.google.com/speech/limits#content).

`optional bytes audio_content = 2;`

StreamingRecognizeRequest.StreamingRequestCase getStreamingRequestCase()
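The method details above describe a oneof contract: the first `StreamingRecognizeRequest` carries only `streaming_config`, and every later request carries only `audio_content`. The following is a minimal sketch, not taken from the library's samples: the class name is illustrative, it relies on the standard protobuf-generated builder (`StreamingRecognizeRequest.newBuilder()`, `setStreamingConfig`, `setAudioContent`), and it uses a default `StreamingRecognitionConfig` purely as a placeholder where a real stream would populate an actual `RecognitionConfig`.

```java
import com.google.cloud.speech.v1beta1.StreamingRecognitionConfig;
import com.google.cloud.speech.v1beta1.StreamingRecognizeRequest;
import com.google.cloud.speech.v1beta1.StreamingRecognizeRequestOrBuilder;
import com.google.protobuf.ByteString;

public class StreamingRequestSketch {

  // Works for both built messages and live builders, since both implement
  // StreamingRecognizeRequestOrBuilder.
  static void describe(StreamingRecognizeRequestOrBuilder request) {
    switch (request.getStreamingRequestCase()) {
      case STREAMING_CONFIG:
        // Only the first request in a stream may carry streaming_config.
        StreamingRecognitionConfig config = request.getStreamingConfig();
        System.out.println("config request: " + config);
        break;
      case AUDIO_CONTENT:
        // Every subsequent request carries one chunk of raw audio bytes.
        ByteString audio = request.getAudioContent();
        System.out.println("audio chunk: " + audio.size() + " bytes");
        break;
      default:
        // Neither field of the streaming_request oneof is set.
        System.out.println("empty request");
    }
  }

  public static void main(String[] args) {
    // First request: streaming_config only, no audio_content.
    // (Placeholder config; a real stream would set a RecognitionConfig.)
    StreamingRecognizeRequest first =
        StreamingRecognizeRequest.newBuilder()
            .setStreamingConfig(StreamingRecognitionConfig.getDefaultInstance())
            .build();

    // Subsequent requests: audio_content only, one chunk per message,
    // encoded as specified in RecognitionConfig (raw bytes, not base64).
    StreamingRecognizeRequest chunk =
        StreamingRecognizeRequest.newBuilder()
            .setAudioContent(ByteString.copyFrom(new byte[] {0, 1, 2, 3}))
            .build();

    describe(first);
    describe(chunk);
  }
}
```

On the read side, `getStreamingRequestCase()` is the safe way to tell the two kinds of request apart before calling the corresponding getter.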