public static final class StreamingRecognitionConfig.Builder extends GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder> implements StreamingRecognitionConfigOrBuilder
Provides information to the recognizer that specifies how to process the request.

Protobuf type `google.cloud.speech.v1beta1.StreamingRecognitionConfig`
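As a quick orientation, the sketch below shows how this builder is typically assembled, assuming the generated `newBuilder()` factories and the v1beta1 `RecognitionConfig` setters (`setEncoding`, `setSampleRate`, `setLanguageCode`), which should be checked against the generated `RecognitionConfig` class:

```java
import com.google.cloud.speech.v1beta1.RecognitionConfig;
import com.google.cloud.speech.v1beta1.StreamingRecognitionConfig;

public class StreamingConfigExample {
  public static void main(String[] args) {
    // Assumed v1beta1 RecognitionConfig fields: encoding, sample_rate, language_code.
    RecognitionConfig recognitionConfig = RecognitionConfig.newBuilder()
        .setEncoding(RecognitionConfig.AudioEncoding.LINEAR16)
        .setSampleRate(16000)
        .setLanguageCode("en-US")
        .build();

    // Required config plus the two optional streaming flags documented on this page.
    StreamingRecognitionConfig streamingConfig = StreamingRecognitionConfig.newBuilder()
        .setConfig(recognitionConfig)
        .setInterimResults(true)     // emit is_final=false hypotheses as they arrive
        .setSingleUtterance(false)   // keep recognizing until the client closes the stream
        .build();

    System.out.println(streamingConfig);
  }
}
```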
| Modifier and Type | Method and Description |
|---|---|
| `StreamingRecognitionConfig.Builder` | `addRepeatedField(Descriptors.FieldDescriptor field, java.lang.Object value)` |
| `StreamingRecognitionConfig` | `build()` |
| `StreamingRecognitionConfig` | `buildPartial()` |
| `StreamingRecognitionConfig.Builder` | `clear()` |
| `StreamingRecognitionConfig.Builder` | `clearConfig()`<br>*Required* Provides information to the recognizer that specifies how to process the request. |
| `StreamingRecognitionConfig.Builder` | `clearField(Descriptors.FieldDescriptor field)` |
| `StreamingRecognitionConfig.Builder` | `clearInterimResults()`<br>*Optional* If `true`, interim results (tentative hypotheses) may be returned as they become available (these interim results are indicated with the `is_final=false` flag). |
| `StreamingRecognitionConfig.Builder` | `clearOneof(Descriptors.OneofDescriptor oneof)` |
| `StreamingRecognitionConfig.Builder` | `clearSingleUtterance()`<br>*Optional* If `false` or omitted, the recognizer will perform continuous recognition (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or until the maximum time limit has been reached. |
| `StreamingRecognitionConfig.Builder` | `clone()` |
| `RecognitionConfig` | `getConfig()`<br>*Required* Provides information to the recognizer that specifies how to process the request. |
| `RecognitionConfig.Builder` | `getConfigBuilder()`<br>*Required* Provides information to the recognizer that specifies how to process the request. |
| `RecognitionConfigOrBuilder` | `getConfigOrBuilder()`<br>*Required* Provides information to the recognizer that specifies how to process the request. |
| `StreamingRecognitionConfig` | `getDefaultInstanceForType()` |
| `static Descriptors.Descriptor` | `getDescriptor()` |
| `Descriptors.Descriptor` | `getDescriptorForType()` |
| `boolean` | `getInterimResults()`<br>*Optional* If `true`, interim results (tentative hypotheses) may be returned as they become available (these interim results are indicated with the `is_final=false` flag). |
| `boolean` | `getSingleUtterance()`<br>*Optional* If `false` or omitted, the recognizer will perform continuous recognition (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or until the maximum time limit has been reached. |
| `boolean` | `hasConfig()`<br>*Required* Provides information to the recognizer that specifies how to process the request. |
| `protected GeneratedMessageV3.FieldAccessorTable` | `internalGetFieldAccessorTable()` |
| `boolean` | `isInitialized()` |
| `StreamingRecognitionConfig.Builder` | `mergeConfig(RecognitionConfig value)`<br>*Required* Provides information to the recognizer that specifies how to process the request. |
| `StreamingRecognitionConfig.Builder` | `mergeFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)` |
| `StreamingRecognitionConfig.Builder` | `mergeFrom(Message other)` |
| `StreamingRecognitionConfig.Builder` | `mergeFrom(StreamingRecognitionConfig other)` |
| `StreamingRecognitionConfig.Builder` | `mergeUnknownFields(UnknownFieldSet unknownFields)` |
| `StreamingRecognitionConfig.Builder` | `setConfig(RecognitionConfig.Builder builderForValue)`<br>*Required* Provides information to the recognizer that specifies how to process the request. |
| `StreamingRecognitionConfig.Builder` | `setConfig(RecognitionConfig value)`<br>*Required* Provides information to the recognizer that specifies how to process the request. |
| `StreamingRecognitionConfig.Builder` | `setField(Descriptors.FieldDescriptor field, java.lang.Object value)` |
| `StreamingRecognitionConfig.Builder` | `setInterimResults(boolean value)`<br>*Optional* If `true`, interim results (tentative hypotheses) may be returned as they become available (these interim results are indicated with the `is_final=false` flag). |
| `StreamingRecognitionConfig.Builder` | `setRepeatedField(Descriptors.FieldDescriptor field, int index, java.lang.Object value)` |
| `StreamingRecognitionConfig.Builder` | `setSingleUtterance(boolean value)`<br>*Optional* If `false` or omitted, the recognizer will perform continuous recognition (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or until the maximum time limit has been reached. |
| `StreamingRecognitionConfig.Builder` | `setUnknownFields(UnknownFieldSet unknownFields)` |
Methods inherited from class `GeneratedMessageV3.Builder`: getAllFields, getField, getFieldBuilder, getOneofFieldDescriptor, getParentForChildren, getRepeatedField, getRepeatedFieldBuilder, getRepeatedFieldCount, getUnknownFields, hasField, hasOneof, internalGetMapField, internalGetMutableMapField, isClean, markClean, newBuilderForField, onBuilt, onChanged, setUnknownFieldsProto3

Methods inherited from class `AbstractMessage.Builder`: findInitializationErrors, getInitializationErrorString, internalMergeFrom, mergeDelimitedFrom, mergeDelimitedFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, newUninitializedMessageException, toString

Methods inherited from class `AbstractMessageLite.Builder`: addAll, addAll, mergeFrom, newUninitializedMessageException

Methods inherited from class `java.lang.Object`: equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait

Methods inherited from interface `MessageOrBuilder`: findInitializationErrors, getAllFields, getField, getInitializationErrorString, getOneofFieldDescriptor, getRepeatedField, getRepeatedFieldCount, getUnknownFields, hasField, hasOneof

Methods inherited from interface `MessageLite.Builder`: mergeFrom

public static final Descriptors.Descriptor getDescriptor()
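Because the builder also exposes protobuf's reflective API (`getDescriptor()`, `getDescriptorForType()`, `setField`, `getField`), fields can be manipulated by descriptor rather than through the typed setters. A minimal sketch using only the standard `com.google.protobuf.Descriptors` API:

```java
import com.google.cloud.speech.v1beta1.StreamingRecognitionConfig;
import com.google.protobuf.Descriptors;

public class DescriptorExample {
  public static void main(String[] args) {
    StreamingRecognitionConfig.Builder builder = StreamingRecognitionConfig.newBuilder();

    // Look up the interim_results field (bool interim_results = 3) on the message descriptor.
    Descriptors.FieldDescriptor interimResults =
        builder.getDescriptorForType().findFieldByName("interim_results");

    // Equivalent to builder.setInterimResults(true), driven by the descriptor instead.
    builder.setField(interimResults, true);

    System.out.println(builder.getField(interimResults)); // prints: true
  }
}
```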
protected GeneratedMessageV3.FieldAccessorTable internalGetFieldAccessorTable()
Overrides: `internalGetFieldAccessorTable` in class `GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>`

public StreamingRecognitionConfig.Builder clear()
Specified by: `clear` in interfaces `Message.Builder` and `MessageLite.Builder`. Overrides: `clear` in class `GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>`

public Descriptors.Descriptor getDescriptorForType()
Specified by: `getDescriptorForType` in interfaces `Message.Builder` and `MessageOrBuilder`. Overrides: `getDescriptorForType` in class `GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>`

public StreamingRecognitionConfig getDefaultInstanceForType()
Specified by: `getDefaultInstanceForType` in interfaces `MessageLiteOrBuilder` and `MessageOrBuilder`

public StreamingRecognitionConfig build()
Specified by: `build` in interfaces `Message.Builder` and `MessageLite.Builder`

public StreamingRecognitionConfig buildPartial()
Specified by: `buildPartial` in interfaces `Message.Builder` and `MessageLite.Builder`

public StreamingRecognitionConfig.Builder clone()
Specified by: `clone` in interfaces `Message.Builder` and `MessageLite.Builder`. Overrides: `clone` in class `GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>`

public StreamingRecognitionConfig.Builder setField(Descriptors.FieldDescriptor field, java.lang.Object value)
Specified by: `setField` in interface `Message.Builder`. Overrides: `setField` in class `GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>`

public StreamingRecognitionConfig.Builder clearField(Descriptors.FieldDescriptor field)
Specified by: `clearField` in interface `Message.Builder`. Overrides: `clearField` in class `GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>`

public StreamingRecognitionConfig.Builder clearOneof(Descriptors.OneofDescriptor oneof)
Specified by: `clearOneof` in interface `Message.Builder`. Overrides: `clearOneof` in class `GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>`

public StreamingRecognitionConfig.Builder setRepeatedField(Descriptors.FieldDescriptor field, int index, java.lang.Object value)
Specified by: `setRepeatedField` in interface `Message.Builder`. Overrides: `setRepeatedField` in class `GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>`

public StreamingRecognitionConfig.Builder addRepeatedField(Descriptors.FieldDescriptor field, java.lang.Object value)
Specified by: `addRepeatedField` in interface `Message.Builder`. Overrides: `addRepeatedField` in class `GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>`

public StreamingRecognitionConfig.Builder mergeFrom(Message other)
Specified by: `mergeFrom` in interface `Message.Builder`. Overrides: `mergeFrom` in class `AbstractMessage.Builder<StreamingRecognitionConfig.Builder>`

public StreamingRecognitionConfig.Builder mergeFrom(StreamingRecognitionConfig other)

public final boolean isInitialized()
Specified by: `isInitialized` in interface `MessageLiteOrBuilder`. Overrides: `isInitialized` in class `GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>`

public StreamingRecognitionConfig.Builder mergeFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry) throws java.io.IOException
Specified by: `mergeFrom` in interfaces `Message.Builder` and `MessageLite.Builder`. Overrides: `mergeFrom` in class `AbstractMessage.Builder<StreamingRecognitionConfig.Builder>`. Throws: `java.io.IOException`
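The `mergeFrom` overloads above can be used to rehydrate a previously serialized config into a fresh builder. A small sketch, using only standard protobuf classes (the byte array is produced in the same snippet, and `config` is left unset purely to keep the round trip short):

```java
import com.google.cloud.speech.v1beta1.StreamingRecognitionConfig;
import com.google.protobuf.CodedInputStream;
import com.google.protobuf.ExtensionRegistryLite;
import java.io.IOException;

public class MergeFromExample {
  public static void main(String[] args) throws IOException {
    // Serialize a config with interim results enabled.
    byte[] bytes = StreamingRecognitionConfig.newBuilder()
        .setInterimResults(true)
        .build()
        .toByteArray();

    // Merge the serialized bytes into a fresh builder and rebuild the message.
    StreamingRecognitionConfig parsed = StreamingRecognitionConfig.newBuilder()
        .mergeFrom(CodedInputStream.newInstance(bytes), ExtensionRegistryLite.getEmptyRegistry())
        .build();

    System.out.println(parsed.getInterimResults()); // true
  }
}
```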
public boolean hasConfig()
*Required* Provides information to the recognizer that specifies how to process the request.
`.google.cloud.speech.v1beta1.RecognitionConfig config = 1;`
Specified by: `hasConfig` in interface `StreamingRecognitionConfigOrBuilder`

public RecognitionConfig getConfig()
*Required* Provides information to the recognizer that specifies how to process the request.
`.google.cloud.speech.v1beta1.RecognitionConfig config = 1;`
Specified by: `getConfig` in interface `StreamingRecognitionConfigOrBuilder`

public StreamingRecognitionConfig.Builder setConfig(RecognitionConfig value)
*Required* Provides information to the recognizer that specifies how to process the request.
`.google.cloud.speech.v1beta1.RecognitionConfig config = 1;`

public StreamingRecognitionConfig.Builder setConfig(RecognitionConfig.Builder builderForValue)
*Required* Provides information to the recognizer that specifies how to process the request.
`.google.cloud.speech.v1beta1.RecognitionConfig config = 1;`

public StreamingRecognitionConfig.Builder mergeConfig(RecognitionConfig value)
*Required* Provides information to the recognizer that specifies how to process the request.
`.google.cloud.speech.v1beta1.RecognitionConfig config = 1;`

public StreamingRecognitionConfig.Builder clearConfig()
*Required* Provides information to the recognizer that specifies how to process the request.
`.google.cloud.speech.v1beta1.RecognitionConfig config = 1;`

public RecognitionConfig.Builder getConfigBuilder()
*Required* Provides information to the recognizer that specifies how to process the request.
`.google.cloud.speech.v1beta1.RecognitionConfig config = 1;`

public RecognitionConfigOrBuilder getConfigOrBuilder()
*Required* Provides information to the recognizer that specifies how to process the request.
`.google.cloud.speech.v1beta1.RecognitionConfig config = 1;`
Specified by: `getConfigOrBuilder` in interface `StreamingRecognitionConfigOrBuilder`
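The `config` accessors above can be combined: `getConfigBuilder()` returns a nested `RecognitionConfig.Builder` whose edits are visible through `getConfig()` on the parent. A brief sketch; the `setLanguageCode`/`getLanguageCode` accessors are assumed from the v1beta1 `RecognitionConfig` proto:

```java
import com.google.cloud.speech.v1beta1.RecognitionConfig;
import com.google.cloud.speech.v1beta1.StreamingRecognitionConfig;

public class ConfigFieldExample {
  public static void main(String[] args) {
    StreamingRecognitionConfig.Builder builder = StreamingRecognitionConfig.newBuilder()
        .setConfig(RecognitionConfig.getDefaultInstance());

    System.out.println(builder.hasConfig()); // true

    // Edit the nested RecognitionConfig in place through its linked builder.
    builder.getConfigBuilder().setLanguageCode("en-US"); // assumed v1beta1 field: language_code

    System.out.println(builder.getConfig().getLanguageCode()); // en-US
  }
}
```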
public boolean getSingleUtterance()
*Optional* If `false` or omitted, the recognizer will perform continuous recognition (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or until the maximum time limit has been reached. May return multiple `StreamingRecognitionResult`s with the `is_final` flag set to `true`. If `true`, the recognizer will detect a single spoken utterance. When it detects that the user has paused or stopped speaking, it will return an `END_OF_UTTERANCE` event and cease recognition. It will return no more than one `StreamingRecognitionResult` with the `is_final` flag set to `true`.
`bool single_utterance = 2;`
Specified by: `getSingleUtterance` in interface `StreamingRecognitionConfigOrBuilder`

public StreamingRecognitionConfig.Builder setSingleUtterance(boolean value)
*Optional* If `false` or omitted, the recognizer will perform continuous recognition (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or until the maximum time limit has been reached. May return multiple `StreamingRecognitionResult`s with the `is_final` flag set to `true`. If `true`, the recognizer will detect a single spoken utterance. When it detects that the user has paused or stopped speaking, it will return an `END_OF_UTTERANCE` event and cease recognition. It will return no more than one `StreamingRecognitionResult` with the `is_final` flag set to `true`.
`bool single_utterance = 2;`

public StreamingRecognitionConfig.Builder clearSingleUtterance()
*Optional* If `false` or omitted, the recognizer will perform continuous recognition (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or until the maximum time limit has been reached. May return multiple `StreamingRecognitionResult`s with the `is_final` flag set to `true`. If `true`, the recognizer will detect a single spoken utterance. When it detects that the user has paused or stopped speaking, it will return an `END_OF_UTTERANCE` event and cease recognition. It will return no more than one `StreamingRecognitionResult` with the `is_final` flag set to `true`.
`bool single_utterance = 2;`
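A short sketch of the `single_utterance` flag on the builder (only the flag itself is shown, not the streaming call that would consume the resulting config):

```java
import com.google.cloud.speech.v1beta1.StreamingRecognitionConfig;

public class SingleUtteranceExample {
  public static void main(String[] args) {
    // true: recognition ends after one utterance (an END_OF_UTTERANCE event is returned).
    StreamingRecognitionConfig oneShot = StreamingRecognitionConfig.newBuilder()
        .setSingleUtterance(true)
        .build();

    // Cleared (default false): continuous recognition until the input stream is closed.
    StreamingRecognitionConfig continuous = StreamingRecognitionConfig.newBuilder()
        .clearSingleUtterance()
        .build();

    System.out.println(oneShot.getSingleUtterance());    // true
    System.out.println(continuous.getSingleUtterance()); // false
  }
}
```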
public boolean getInterimResults()
*Optional* If `true`, interim results (tentative hypotheses) may be returned as they become available (these interim results are indicated with the `is_final=false` flag). If `false` or omitted, only `is_final=true` result(s) are returned.
`bool interim_results = 3;`
Specified by: `getInterimResults` in interface `StreamingRecognitionConfigOrBuilder`

public StreamingRecognitionConfig.Builder setInterimResults(boolean value)
*Optional* If `true`, interim results (tentative hypotheses) may be returned as they become available (these interim results are indicated with the `is_final=false` flag). If `false` or omitted, only `is_final=true` result(s) are returned.
`bool interim_results = 3;`

public StreamingRecognitionConfig.Builder clearInterimResults()
*Optional* If `true`, interim results (tentative hypotheses) may be returned as they become available (these interim results are indicated with the `is_final=false` flag). If `false` or omitted, only `is_final=true` result(s) are returned.
`bool interim_results = 3;`
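Since the builder implements `StreamingRecognitionConfigOrBuilder`, the interim-results flag (like the other fields) can be read through that interface before `build()` is ever called. A minimal sketch:

```java
import com.google.cloud.speech.v1beta1.StreamingRecognitionConfig;
import com.google.cloud.speech.v1beta1.StreamingRecognitionConfigOrBuilder;

public class InterimResultsExample {
  // Accepts either the Builder or a built StreamingRecognitionConfig.
  static void describe(StreamingRecognitionConfigOrBuilder config) {
    System.out.println("interim_results=" + config.getInterimResults());
  }

  public static void main(String[] args) {
    StreamingRecognitionConfig.Builder builder = StreamingRecognitionConfig.newBuilder()
        .setInterimResults(true);

    describe(builder);          // interim_results=true
    describe(builder.build());  // interim_results=true
  }
}
```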
public final StreamingRecognitionConfig.Builder setUnknownFields(UnknownFieldSet unknownFields)
Specified by: `setUnknownFields` in interface `Message.Builder`. Overrides: `setUnknownFields` in class `GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>`

public final StreamingRecognitionConfig.Builder mergeUnknownFields(UnknownFieldSet unknownFields)
Specified by: `mergeUnknownFields` in interface `Message.Builder`. Overrides: `mergeUnknownFields` in class `GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>`