public static enum RecognitionConfig.AudioEncoding extends java.lang.Enum<RecognitionConfig.AudioEncoding> implements ProtocolMessageEnum
The encoding of the audio data sent in the request. All encodings support only 1 channel (mono) audio. For best results, the audio source should be captured and transmitted using a lossless encoding (`FLAC` or `LINEAR16`). The accuracy of the speech recognition can be reduced if lossy codecs are used to capture or transmit audio, particularly if background noise is present. Lossy codecs include `MULAW`, `AMR`, `AMR_WB`, `OGG_OPUS`, and `SPEEX_WITH_HEADER_BYTE`. The `FLAC` and `WAV` audio file formats include a header that describes the included audio content. You can request recognition for `WAV` files that contain either `LINEAR16` or `MULAW` encoded audio. If you send audio in the `FLAC` or `WAV` file format in your request, you do not need to specify an `AudioEncoding`; the audio encoding format is determined from the file header. If you specify an `AudioEncoding` when you send `FLAC` or `WAV` audio, the encoding configuration must match the encoding described in the audio header; otherwise the request returns a [google.rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT] error code.

Protobuf enum `google.cloud.speech.v1p1beta1.RecognitionConfig.AudioEncoding`
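For orientation, here is a minimal sketch of choosing an `AudioEncoding` while building a `RecognitionConfig`. It assumes the standard generated protobuf builder setters on that message (`setEncoding`, `setSampleRateHertz`, `setLanguageCode`); the class name `BuildConfigExample` is illustrative only.

```java
import com.google.cloud.speech.v1p1beta1.RecognitionConfig;
import com.google.cloud.speech.v1p1beta1.RecognitionConfig.AudioEncoding;

public class BuildConfigExample {
  public static void main(String[] args) {
    // Lossless 16 kHz LINEAR16 audio. For FLAC or WAV files the encoding can
    // be omitted entirely, because it is read from the file header.
    RecognitionConfig config =
        RecognitionConfig.newBuilder()
            .setEncoding(AudioEncoding.LINEAR16)
            .setSampleRateHertz(16000)
            .setLanguageCode("en-US")
            .build();
    System.out.println(config.getEncoding()); // LINEAR16
  }
}
```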
Enum Constant | Description
---|---
`AMR` | Adaptive Multi-Rate Narrowband codec.
`AMR_WB` | Adaptive Multi-Rate Wideband codec.
`ENCODING_UNSPECIFIED` | Not specified.
`FLAC` | `FLAC` (Free Lossless Audio Codec) is the recommended encoding because it is lossless--therefore recognition is not compromised--and requires only about half the bandwidth of `LINEAR16`.
`LINEAR16` | Uncompressed 16-bit signed little-endian samples (Linear PCM).
`MULAW` | 8-bit samples that compand 14-bit audio samples using G.711 PCMU/mu-law.
`OGG_OPUS` | Opus encoded audio frames in Ogg container ([OggOpus](https://wiki.xiph.org/OggOpus)).
`SPEEX_WITH_HEADER_BYTE` | Although the use of lossy encodings is not recommended, if a very low bitrate encoding is required, `OGG_OPUS` is highly preferred over Speex encoding.
`UNRECOGNIZED` |
Modifier and Type | Field | Description
---|---|---
`static int` | `AMR_VALUE` | Adaptive Multi-Rate Narrowband codec.
`static int` | `AMR_WB_VALUE` | Adaptive Multi-Rate Wideband codec.
`static int` | `ENCODING_UNSPECIFIED_VALUE` | Not specified.
`static int` | `FLAC_VALUE` | `FLAC` (Free Lossless Audio Codec) is the recommended encoding because it is lossless--therefore recognition is not compromised--and requires only about half the bandwidth of `LINEAR16`.
`static int` | `LINEAR16_VALUE` | Uncompressed 16-bit signed little-endian samples (Linear PCM).
`static int` | `MULAW_VALUE` | 8-bit samples that compand 14-bit audio samples using G.711 PCMU/mu-law.
`static int` | `OGG_OPUS_VALUE` | Opus encoded audio frames in Ogg container ([OggOpus](https://wiki.xiph.org/OggOpus)).
`static int` | `SPEEX_WITH_HEADER_BYTE_VALUE` | Although the use of lossy encodings is not recommended, if a very low bitrate encoding is required, `OGG_OPUS` is highly preferred over Speex encoding.
Modifier and Type | Method | Description
---|---|---
`static RecognitionConfig.AudioEncoding` | `forNumber(int value)` |
`static Descriptors.EnumDescriptor` | `getDescriptor()` |
`Descriptors.EnumDescriptor` | `getDescriptorForType()` |
`int` | `getNumber()` |
`Descriptors.EnumValueDescriptor` | `getValueDescriptor()` |
`static Internal.EnumLiteMap<RecognitionConfig.AudioEncoding>` | `internalGetValueMap()` |
`static RecognitionConfig.AudioEncoding` | `valueOf(Descriptors.EnumValueDescriptor desc)` |
`static RecognitionConfig.AudioEncoding` | `valueOf(int value)` | Deprecated. Use `forNumber(int)` instead.
`static RecognitionConfig.AudioEncoding` | `valueOf(java.lang.String name)` | Returns the enum constant of this type with the specified name.
`static RecognitionConfig.AudioEncoding[]` | `values()` | Returns an array containing the constants of this enum type, in the order they are declared.
public static final RecognitionConfig.AudioEncoding ENCODING_UNSPECIFIED
Not specified.
ENCODING_UNSPECIFIED = 0;
public static final RecognitionConfig.AudioEncoding LINEAR16
Uncompressed 16-bit signed little-endian samples (Linear PCM).
LINEAR16 = 1;
public static final RecognitionConfig.AudioEncoding FLAC
`FLAC` (Free Lossless Audio Codec) is the recommended encoding because it is lossless--therefore recognition is not compromised--and requires only about half the bandwidth of `LINEAR16`. `FLAC` stream encoding supports 16-bit and 24-bit samples, however, not all fields in `STREAMINFO` are supported.
FLAC = 2;
public static final RecognitionConfig.AudioEncoding MULAW
8-bit samples that compand 14-bit audio samples using G.711 PCMU/mu-law.
MULAW = 3;
public static final RecognitionConfig.AudioEncoding AMR
Adaptive Multi-Rate Narrowband codec. `sample_rate_hertz` must be 8000.
AMR = 4;
public static final RecognitionConfig.AudioEncoding AMR_WB
Adaptive Multi-Rate Wideband codec. `sample_rate_hertz` must be 16000.
AMR_WB = 5;
public static final RecognitionConfig.AudioEncoding OGG_OPUS
Opus encoded audio frames in Ogg container ([OggOpus](https://wiki.xiph.org/OggOpus)). `sample_rate_hertz` must be one of 8000, 12000, 16000, 24000, or 48000.
OGG_OPUS = 6;
public static final RecognitionConfig.AudioEncoding SPEEX_WITH_HEADER_BYTE
Although the use of lossy encodings is not recommended, if a very low bitrate encoding is required, `OGG_OPUS` is highly preferred over Speex encoding. The [Speex](https://speex.org/) encoding supported by Cloud Speech API has a header byte in each block, as in MIME type `audio/x-speex-with-header-byte`. It is a variant of the RTP Speex encoding defined in [RFC 5574](https://tools.ietf.org/html/rfc5574). The stream is a sequence of blocks, one block per RTP packet. Each block starts with a byte containing the length of the block, in bytes, followed by one or more frames of Speex data, padded to an integral number of bytes (octets) as specified in RFC 5574. In other words, each RTP header is replaced with a single byte containing the block length. Only Speex wideband is supported. `sample_rate_hertz` must be 16000.
SPEEX_WITH_HEADER_BYTE = 7;
public static final RecognitionConfig.AudioEncoding UNRECOGNIZED
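The `sample_rate_hertz` requirements quoted for the constants above can be collected into a small validation helper. The `isSupportedRate` method below is a hypothetical illustration, not part of the library, and only encodes the constraints stated in this documentation:

```java
import com.google.cloud.speech.v1p1beta1.RecognitionConfig.AudioEncoding;

public final class SampleRateCheck {
  /** Returns whether the rate satisfies the documented per-encoding constraint. */
  static boolean isSupportedRate(AudioEncoding encoding, int sampleRateHertz) {
    switch (encoding) {
      case AMR:
        return sampleRateHertz == 8000;            // AMR: must be 8000
      case AMR_WB:
      case SPEEX_WITH_HEADER_BYTE:
        return sampleRateHertz == 16000;           // AMR_WB, Speex: must be 16000
      case OGG_OPUS:
        return sampleRateHertz == 8000 || sampleRateHertz == 12000
            || sampleRateHertz == 16000 || sampleRateHertz == 24000
            || sampleRateHertz == 48000;           // OGG_OPUS: one of the listed rates
      default:
        // Remaining values (LINEAR16, FLAC, MULAW, etc.): no single rate is
        // mandated by this enum's documentation.
        return true;
    }
  }
}
```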
public static final int ENCODING_UNSPECIFIED_VALUE
Not specified.
ENCODING_UNSPECIFIED = 0;
public static final int LINEAR16_VALUE
Uncompressed 16-bit signed little-endian samples (Linear PCM).
LINEAR16 = 1;
public static final int FLAC_VALUE
`FLAC` (Free Lossless Audio Codec) is the recommended encoding because it is lossless--therefore recognition is not compromised--and requires only about half the bandwidth of `LINEAR16`. `FLAC` stream encoding supports 16-bit and 24-bit samples, however, not all fields in `STREAMINFO` are supported.
FLAC = 2;
public static final int MULAW_VALUE
8-bit samples that compand 14-bit audio samples using G.711 PCMU/mu-law.
MULAW = 3;
public static final int AMR_VALUE
Adaptive Multi-Rate Narrowband codec. `sample_rate_hertz` must be 8000.
AMR = 4;
public static final int AMR_WB_VALUE
Adaptive Multi-Rate Wideband codec. `sample_rate_hertz` must be 16000.
AMR_WB = 5;
public static final int OGG_OPUS_VALUE
Opus encoded audio frames in Ogg container ([OggOpus](https://wiki.xiph.org/OggOpus)). `sample_rate_hertz` must be one of 8000, 12000, 16000, 24000, or 48000.
OGG_OPUS = 6;
public static final int SPEEX_WITH_HEADER_BYTE_VALUE
Although the use of lossy encodings is not recommended, if a very low bitrate encoding is required, `OGG_OPUS` is highly preferred over Speex encoding. The [Speex](https://speex.org/) encoding supported by Cloud Speech API has a header byte in each block, as in MIME type `audio/x-speex-with-header-byte`. It is a variant of the RTP Speex encoding defined in [RFC 5574](https://tools.ietf.org/html/rfc5574). The stream is a sequence of blocks, one block per RTP packet. Each block starts with a byte containing the length of the block, in bytes, followed by one or more frames of Speex data, padded to an integral number of bytes (octets) as specified in RFC 5574. In other words, each RTP header is replaced with a single byte containing the block length. Only Speex wideband is supported. `sample_rate_hertz` must be 16000.
SPEEX_WITH_HEADER_BYTE = 7;
public static RecognitionConfig.AudioEncoding[] values()
Returns an array containing the constants of this enum type, in the order they are declared. This method may be used to iterate over the constants as follows:
for (RecognitionConfig.AudioEncoding c : RecognitionConfig.AudioEncoding.values()) System.out.println(c);
public static RecognitionConfig.AudioEncoding valueOf(java.lang.String name)
Returns the enum constant of this type with the specified name.
Parameters:
name - the name of the enum constant to be returned.
Throws:
java.lang.IllegalArgumentException - if this enum type has no constant with the specified name
java.lang.NullPointerException - if the argument is null

public final int getNumber()
Specified by:
getNumber in interface Internal.EnumLite
Specified by:
getNumber in interface ProtocolMessageEnum
@Deprecated
public static RecognitionConfig.AudioEncoding valueOf(int value)
Deprecated. Use forNumber(int) instead.

public static RecognitionConfig.AudioEncoding forNumber(int value)
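A short sketch of the three lookup paths described in this section: `valueOf(String)` by constant name, `getNumber()` for the protobuf field number, and `forNumber(int)` as the replacement for the deprecated `valueOf(int)`. The class name `EncodingLookupExample` is illustrative only.

```java
import com.google.cloud.speech.v1p1beta1.RecognitionConfig.AudioEncoding;

public class EncodingLookupExample {
  public static void main(String[] args) {
    // Look up a constant by name (throws IllegalArgumentException if unknown).
    AudioEncoding byName = AudioEncoding.valueOf("FLAC");
    System.out.println(byName.getNumber());           // 2 (FLAC = 2)

    // Look up a constant by its protobuf field number; forNumber(int) returns
    // null for numbers that no constant defines.
    AudioEncoding byNumber = AudioEncoding.forNumber(AudioEncoding.OGG_OPUS_VALUE);
    System.out.println(byNumber);                      // OGG_OPUS
    System.out.println(AudioEncoding.forNumber(999));  // null
  }
}
```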
public static Internal.EnumLiteMap<RecognitionConfig.AudioEncoding> internalGetValueMap()
public final Descriptors.EnumValueDescriptor getValueDescriptor()
Specified by:
getValueDescriptor in interface ProtocolMessageEnum
public final Descriptors.EnumDescriptor getDescriptorForType()
Specified by:
getDescriptorForType in interface ProtocolMessageEnum
public static final Descriptors.EnumDescriptor getDescriptor()
public static RecognitionConfig.AudioEncoding valueOf(Descriptors.EnumValueDescriptor desc)