public static final class StreamingRecognitionConfig.Builder extends GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder> implements StreamingRecognitionConfigOrBuilder
Provides information to the recognizer that specifies how to process the request.

Protobuf type `google.cloud.speech.v1beta1.StreamingRecognitionConfig`
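A minimal sketch of assembling a `StreamingRecognitionConfig` with this builder, assuming the nested `RecognitionConfig` is populated elsewhere to describe the audio (encoding, sample rate, language, and so on):

```java
import com.google.cloud.speech.v1beta1.RecognitionConfig;
import com.google.cloud.speech.v1beta1.StreamingRecognitionConfig;

public class StreamingConfigSketch {
  public static void main(String[] args) {
    // Placeholder: a real request would populate encoding, sample rate, language, etc.
    RecognitionConfig recognitionConfig = RecognitionConfig.newBuilder().build();

    StreamingRecognitionConfig streamingConfig =
        StreamingRecognitionConfig.newBuilder()
            .setConfig(recognitionConfig)   // required: how to process the audio
            .setInterimResults(true)        // optional: also return is_final=false hypotheses
            .setSingleUtterance(false)      // optional: keep recognizing until the stream closes
            .build();

    System.out.println(streamingConfig);
  }
}
```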
Modifier and Type | Method and Description
---|---
StreamingRecognitionConfig.Builder | addRepeatedField(Descriptors.FieldDescriptor field, java.lang.Object value)
StreamingRecognitionConfig | build()
StreamingRecognitionConfig | buildPartial()
StreamingRecognitionConfig.Builder | clear()
StreamingRecognitionConfig.Builder | clearConfig() *Required* Provides information to the recognizer that specifies how to process the request.
StreamingRecognitionConfig.Builder | clearField(Descriptors.FieldDescriptor field)
StreamingRecognitionConfig.Builder | clearInterimResults() *Optional* If `true`, interim results (tentative hypotheses) may be returned as they become available (these interim results are indicated with the `is_final=false` flag).
StreamingRecognitionConfig.Builder | clearOneof(Descriptors.OneofDescriptor oneof)
StreamingRecognitionConfig.Builder | clearSingleUtterance() *Optional* If `false` or omitted, the recognizer will perform continuous recognition (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or until the maximum time limit has been reached.
StreamingRecognitionConfig.Builder | clone()
RecognitionConfig | getConfig() *Required* Provides information to the recognizer that specifies how to process the request.
RecognitionConfig.Builder | getConfigBuilder() *Required* Provides information to the recognizer that specifies how to process the request.
RecognitionConfigOrBuilder | getConfigOrBuilder() *Required* Provides information to the recognizer that specifies how to process the request.
StreamingRecognitionConfig | getDefaultInstanceForType()
static Descriptors.Descriptor | getDescriptor()
Descriptors.Descriptor | getDescriptorForType()
boolean | getInterimResults() *Optional* If `true`, interim results (tentative hypotheses) may be returned as they become available (these interim results are indicated with the `is_final=false` flag).
boolean | getSingleUtterance() *Optional* If `false` or omitted, the recognizer will perform continuous recognition (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or until the maximum time limit has been reached.
boolean | hasConfig() *Required* Provides information to the recognizer that specifies how to process the request.
protected GeneratedMessageV3.FieldAccessorTable | internalGetFieldAccessorTable()
boolean | isInitialized()
StreamingRecognitionConfig.Builder | mergeConfig(RecognitionConfig value) *Required* Provides information to the recognizer that specifies how to process the request.
StreamingRecognitionConfig.Builder | mergeFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)
StreamingRecognitionConfig.Builder | mergeFrom(Message other)
StreamingRecognitionConfig.Builder | mergeFrom(StreamingRecognitionConfig other)
StreamingRecognitionConfig.Builder | mergeUnknownFields(UnknownFieldSet unknownFields)
StreamingRecognitionConfig.Builder | setConfig(RecognitionConfig.Builder builderForValue) *Required* Provides information to the recognizer that specifies how to process the request.
StreamingRecognitionConfig.Builder | setConfig(RecognitionConfig value) *Required* Provides information to the recognizer that specifies how to process the request.
StreamingRecognitionConfig.Builder | setField(Descriptors.FieldDescriptor field, java.lang.Object value)
StreamingRecognitionConfig.Builder | setInterimResults(boolean value) *Optional* If `true`, interim results (tentative hypotheses) may be returned as they become available (these interim results are indicated with the `is_final=false` flag).
StreamingRecognitionConfig.Builder | setRepeatedField(Descriptors.FieldDescriptor field, int index, java.lang.Object value)
StreamingRecognitionConfig.Builder | setSingleUtterance(boolean value) *Optional* If `false` or omitted, the recognizer will perform continuous recognition (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or until the maximum time limit has been reached.
StreamingRecognitionConfig.Builder | setUnknownFields(UnknownFieldSet unknownFields)
Methods inherited from class com.google.protobuf.GeneratedMessageV3.Builder: getAllFields, getField, getFieldBuilder, getOneofFieldDescriptor, getParentForChildren, getRepeatedField, getRepeatedFieldBuilder, getRepeatedFieldCount, getUnknownFields, hasField, hasOneof, internalGetMapField, internalGetMutableMapField, isClean, markClean, newBuilderForField, onBuilt, onChanged, setUnknownFieldsProto3

Methods inherited from class com.google.protobuf.AbstractMessage.Builder: findInitializationErrors, getInitializationErrorString, internalMergeFrom, mergeDelimitedFrom, mergeDelimitedFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, newUninitializedMessageException, toString

Methods inherited from class com.google.protobuf.AbstractMessageLite.Builder: addAll, addAll, mergeFrom, newUninitializedMessageException

Methods inherited from class java.lang.Object: equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait

Methods inherited from interface com.google.protobuf.MessageOrBuilder: findInitializationErrors, getAllFields, getField, getInitializationErrorString, getOneofFieldDescriptor, getRepeatedField, getRepeatedFieldCount, getUnknownFields, hasField, hasOneof

Methods inherited from interface com.google.protobuf.MessageLite.Builder: mergeFrom
public static final Descriptors.Descriptor getDescriptor()
protected GeneratedMessageV3.FieldAccessorTable internalGetFieldAccessorTable()
Overrides: internalGetFieldAccessorTable in class GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>
public StreamingRecognitionConfig.Builder clear()
Specified by: clear in interface Message.Builder
Specified by: clear in interface MessageLite.Builder
Overrides: clear in class GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>
public Descriptors.Descriptor getDescriptorForType()
Specified by: getDescriptorForType in interface Message.Builder
Specified by: getDescriptorForType in interface MessageOrBuilder
Overrides: getDescriptorForType in class GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>
public StreamingRecognitionConfig getDefaultInstanceForType()
Specified by: getDefaultInstanceForType in interface MessageLiteOrBuilder
Specified by: getDefaultInstanceForType in interface MessageOrBuilder
public StreamingRecognitionConfig build()
Specified by: build in interface Message.Builder
Specified by: build in interface MessageLite.Builder
public StreamingRecognitionConfig buildPartial()
Specified by: buildPartial in interface Message.Builder
Specified by: buildPartial in interface MessageLite.Builder
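As a sketch of how `build()` and `buildPartial()` differ in practice: `buildPartial()` returns the builder's current state without the initialization check that `build()` performs, so it can be paired with `isInitialized()` while a message is assembled in stages. A fragment, assuming the imports from the sketch above:

```java
StreamingRecognitionConfig.Builder builder = StreamingRecognitionConfig.newBuilder();

// Snapshot the builder's current state without the initialization check run by build().
StreamingRecognitionConfig partial = builder.buildPartial();

// build() is the usual path once the message is known to be complete.
if (builder.isInitialized()) {
  StreamingRecognitionConfig complete = builder.build();
}
```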
public StreamingRecognitionConfig.Builder clone()
Specified by: clone in interface Message.Builder
Specified by: clone in interface MessageLite.Builder
Overrides: clone in class GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>
public StreamingRecognitionConfig.Builder setField(Descriptors.FieldDescriptor field, java.lang.Object value)
Specified by: setField in interface Message.Builder
Overrides: setField in class GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>
public StreamingRecognitionConfig.Builder clearField(Descriptors.FieldDescriptor field)
Specified by: clearField in interface Message.Builder
Overrides: clearField in class GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>
public StreamingRecognitionConfig.Builder clearOneof(Descriptors.OneofDescriptor oneof)
Specified by: clearOneof in interface Message.Builder
Overrides: clearOneof in class GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>
public StreamingRecognitionConfig.Builder setRepeatedField(Descriptors.FieldDescriptor field, int index, java.lang.Object value)
Specified by: setRepeatedField in interface Message.Builder
Overrides: setRepeatedField in class GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>
public StreamingRecognitionConfig.Builder addRepeatedField(Descriptors.FieldDescriptor field, java.lang.Object value)
Specified by: addRepeatedField in interface Message.Builder
Overrides: addRepeatedField in class GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>
public StreamingRecognitionConfig.Builder mergeFrom(Message other)
Specified by: mergeFrom in interface Message.Builder
Overrides: mergeFrom in class AbstractMessage.Builder<StreamingRecognitionConfig.Builder>
public StreamingRecognitionConfig.Builder mergeFrom(StreamingRecognitionConfig other)
public final boolean isInitialized()
Specified by: isInitialized in interface MessageLiteOrBuilder
Overrides: isInitialized in class GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>
public StreamingRecognitionConfig.Builder mergeFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry) throws java.io.IOException
Specified by: mergeFrom in interface Message.Builder
Specified by: mergeFrom in interface MessageLite.Builder
Overrides: mergeFrom in class AbstractMessage.Builder<StreamingRecognitionConfig.Builder>
Throws: java.io.IOException
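A hedged sketch of using this overload to rebuild a config from serialized bytes; `ConfigParsing`, `parse`, and `serialized` are hypothetical names, and an empty extension registry is assumed:

```java
import java.io.IOException;

import com.google.cloud.speech.v1beta1.StreamingRecognitionConfig;
import com.google.protobuf.CodedInputStream;
import com.google.protobuf.ExtensionRegistryLite;

final class ConfigParsing {
  // Hypothetical helper: rebuild a StreamingRecognitionConfig from previously serialized bytes.
  static StreamingRecognitionConfig parse(byte[] serialized) throws IOException {
    return StreamingRecognitionConfig.newBuilder()
        .mergeFrom(CodedInputStream.newInstance(serialized),
                   ExtensionRegistryLite.getEmptyRegistry())
        .build();
  }
}
```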
public boolean hasConfig()
*Required* Provides information to the recognizer that specifies how to process the request.
.google.cloud.speech.v1beta1.RecognitionConfig config = 1;
Specified by: hasConfig in interface StreamingRecognitionConfigOrBuilder
public RecognitionConfig getConfig()
*Required* Provides information to the recognizer that specifies how to process the request.
.google.cloud.speech.v1beta1.RecognitionConfig config = 1;
Specified by: getConfig in interface StreamingRecognitionConfigOrBuilder
public StreamingRecognitionConfig.Builder setConfig(RecognitionConfig value)
*Required* Provides information to the recognizer that specifies how to process the request.
.google.cloud.speech.v1beta1.RecognitionConfig config = 1;
public StreamingRecognitionConfig.Builder setConfig(RecognitionConfig.Builder builderForValue)
*Required* Provides information to the recognizer that specifies how to process the request.
.google.cloud.speech.v1beta1.RecognitionConfig config = 1;
public StreamingRecognitionConfig.Builder mergeConfig(RecognitionConfig value)
*Required* Provides information to the recognizer that specifies how to process the request.
.google.cloud.speech.v1beta1.RecognitionConfig config = 1;
public StreamingRecognitionConfig.Builder clearConfig()
*Required* Provides information to the recognizer that specifies how to process the request.
.google.cloud.speech.v1beta1.RecognitionConfig config = 1;
public RecognitionConfig.Builder getConfigBuilder()
*Required* Provides information to the recognizer that specifies how to process the request.
.google.cloud.speech.v1beta1.RecognitionConfig config = 1;
public RecognitionConfigOrBuilder getConfigOrBuilder()
*Required* Provides information to the recognizer that specifies how to process the request.
.google.cloud.speech.v1beta1.RecognitionConfig config = 1;
Specified by: getConfigOrBuilder in interface StreamingRecognitionConfigOrBuilder
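A short sketch of working with the nested `config` field through these accessors; `ConfigFieldSketch` is a hypothetical wrapper and the `RecognitionConfig` contents are placeholders:

```java
import com.google.cloud.speech.v1beta1.RecognitionConfig;
import com.google.cloud.speech.v1beta1.StreamingRecognitionConfig;

final class ConfigFieldSketch {
  static StreamingRecognitionConfig example() {
    StreamingRecognitionConfig.Builder builder = StreamingRecognitionConfig.newBuilder();

    // setConfig accepts either a built RecognitionConfig or a RecognitionConfig.Builder.
    builder.setConfig(RecognitionConfig.newBuilder()); // placeholder; populate as needed

    // getConfigBuilder() exposes the nested builder for in-place edits;
    // mergeConfig() folds another RecognitionConfig into the current value,
    // and clearConfig() removes the field again.
    RecognitionConfig.Builder nested = builder.getConfigBuilder();

    if (builder.hasConfig()) {
      RecognitionConfig current = builder.getConfig();
    }
    return builder.build();
  }
}
```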
public boolean getSingleUtterance()
*Optional* If `false` or omitted, the recognizer will perform continuous recognition (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or until the maximum time limit has been reached. May return multiple `StreamingRecognitionResult`s with the `is_final` flag set to `true`. If `true`, the recognizer will detect a single spoken utterance. When it detects that the user has paused or stopped speaking, it will return an `END_OF_UTTERANCE` event and cease recognition. It will return no more than one `StreamingRecognitionResult` with the `is_final` flag set to `true`.
bool single_utterance = 2;
Specified by: getSingleUtterance in interface StreamingRecognitionConfigOrBuilder
public StreamingRecognitionConfig.Builder setSingleUtterance(boolean value)
*Optional* If `false` or omitted, the recognizer will perform continuous recognition (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or until the maximum time limit has been reached. May return multiple `StreamingRecognitionResult`s with the `is_final` flag set to `true`. If `true`, the recognizer will detect a single spoken utterance. When it detects that the user has paused or stopped speaking, it will return an `END_OF_UTTERANCE` event and cease recognition. It will return no more than one `StreamingRecognitionResult` with the `is_final` flag set to `true`.
bool single_utterance = 2;
public StreamingRecognitionConfig.Builder clearSingleUtterance()
*Optional* If `false` or omitted, the recognizer will perform continuous recognition (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or until the maximum time limit has been reached. May return multiple `StreamingRecognitionResult`s with the `is_final` flag set to `true`. If `true`, the recognizer will detect a single spoken utterance. When it detects that the user has paused or stopped speaking, it will return an `END_OF_UTTERANCE` event and cease recognition. It will return no more than one `StreamingRecognitionResult` with the `is_final` flag set to `true`.
bool single_utterance = 2;
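For a one-shot voice-command request, the field might be set as in this fragment (the `RecognitionConfig` is a placeholder, and the surrounding class and imports are omitted):

```java
StreamingRecognitionConfig commandConfig =
    StreamingRecognitionConfig.newBuilder()
        .setConfig(RecognitionConfig.getDefaultInstance()) // placeholder audio description
        .setSingleUtterance(true) // stop after END_OF_UTTERANCE rather than recognizing continuously
        .build();
```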
public boolean getInterimResults()
*Optional* If `true`, interim results (tentative hypotheses) may be returned as they become available (these interim results are indicated with the `is_final=false` flag). If `false` or omitted, only `is_final=true` result(s) are returned.
bool interim_results = 3;
Specified by: getInterimResults in interface StreamingRecognitionConfigOrBuilder
public StreamingRecognitionConfig.Builder setInterimResults(boolean value)
*Optional* If `true`, interim results (tentative hypotheses) may be returned as they become available (these interim results are indicated with the `is_final=false` flag). If `false` or omitted, only `is_final=true` result(s) are returned.
bool interim_results = 3;
public StreamingRecognitionConfig.Builder clearInterimResults()
*Optional* If `true`, interim results (tentative hypotheses) may be returned as they become available (these interim results are indicated with the `is_final=false` flag). If `false` or omitted, only `is_final=true` result(s) are returned.
bool interim_results = 3;
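For live-caption style use, interim hypotheses can be requested; again a fragment with a placeholder `RecognitionConfig`:

```java
StreamingRecognitionConfig captionConfig =
    StreamingRecognitionConfig.newBuilder()
        .setConfig(RecognitionConfig.getDefaultInstance()) // placeholder audio description
        .setInterimResults(true) // stream is_final=false hypotheses as they become available
        .build();
```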
public final StreamingRecognitionConfig.Builder setUnknownFields(UnknownFieldSet unknownFields)
Specified by: setUnknownFields in interface Message.Builder
Overrides: setUnknownFields in class GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>
public final StreamingRecognitionConfig.Builder mergeUnknownFields(UnknownFieldSet unknownFields)
Specified by: mergeUnknownFields in interface Message.Builder
Overrides: mergeUnknownFields in class GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>