`public interface StreamingRecognizeRequestOrBuilder extends MessageOrBuilder`
| Modifier and Type | Method and Description |
|---|---|
| `ByteString` | `getAudioContent()` The audio data to be recognized. |
| `StreamingRecognitionConfig` | `getStreamingConfig()` Provides information to the recognizer that specifies how to process the request. |
| `StreamingRecognitionConfigOrBuilder` | `getStreamingConfigOrBuilder()` Provides information to the recognizer that specifies how to process the request. |
| `StreamingRecognizeRequest.StreamingRequestCase` | `getStreamingRequestCase()` |
| `boolean` | `hasStreamingConfig()` Provides information to the recognizer that specifies how to process the request. |
Methods inherited from interface `com.google.protobuf.MessageOrBuilder`: findInitializationErrors, getAllFields, getDefaultInstanceForType, getDescriptorForType, getField, getInitializationErrorString, getOneofFieldDescriptor, getRepeatedField, getRepeatedFieldCount, getUnknownFields, hasField, hasOneof

Methods inherited from interface `com.google.protobuf.MessageLiteOrBuilder`: isInitialized
`boolean hasStreamingConfig()`

Provides information to the recognizer that specifies how to process the request. The first `StreamingRecognizeRequest` message must contain a `streaming_config` message.

`.google.cloud.speech.v1.StreamingRecognitionConfig streaming_config = 1;`
`StreamingRecognitionConfig getStreamingConfig()`

Provides information to the recognizer that specifies how to process the request. The first `StreamingRecognizeRequest` message must contain a `streaming_config` message.

`.google.cloud.speech.v1.StreamingRecognitionConfig streaming_config = 1;`
`StreamingRecognitionConfigOrBuilder getStreamingConfigOrBuilder()`

Provides information to the recognizer that specifies how to process the request. The first `StreamingRecognizeRequest` message must contain a `streaming_config` message.

`.google.cloud.speech.v1.StreamingRecognitionConfig streaming_config = 1;`
`ByteString getAudioContent()`

The audio data to be recognized. Sequential chunks of audio data are sent in sequential `StreamingRecognizeRequest` messages. The first `StreamingRecognizeRequest` message must not contain `audio_content` data, and all subsequent `StreamingRecognizeRequest` messages must contain `audio_content` data. The audio bytes must be encoded as specified in `RecognitionConfig`. Note: as with all bytes fields, protocol buffers use a pure binary representation (not base64). See [audio limits](https://cloud.google.com/speech/limits#content).

`bytes audio_content = 2;`
`StreamingRecognizeRequest.StreamingRequestCase getStreamingRequestCase()`
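Since `streaming_config` and `audio_content` live in the same `streaming_request` oneof, only one can be set per message; `getStreamingRequestCase()` reports which. A minimal sketch (again assuming the generated v1 classes are available; the `StreamingRequestSequence` class name is illustrative):

```java
import com.google.cloud.speech.v1.StreamingRecognizeRequest;
import com.google.protobuf.ByteString;

public class StreamingRequestSequence {
    public static void main(String[] args) {
        // A subsequent request of a stream carries raw audio bytes (binary, not base64).
        StreamingRecognizeRequest audio = StreamingRecognizeRequest.newBuilder()
                .setAudioContent(ByteString.copyFrom(new byte[] {0x52, 0x49, 0x46, 0x46}))
                .build();

        // The oneof case reflects the field that was set; setting audio_content
        // means streaming_config is absent on this message.
        System.out.println(audio.getStreamingRequestCase()); // AUDIO_CONTENT
        System.out.println(audio.hasStreamingConfig());      // false
    }
}
```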