Reference documentation and code samples for the Cloud Speech-to-Text V1p1beta1 API module Google::Cloud::Speech::V1p1beta1::RecognitionConfig::AudioEncoding.
The encoding of the audio data sent in the request.
All encodings support only 1 channel (mono) audio, unless the
audio_channel_count and enable_separate_recognition_per_channel fields
are set.
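For example, a minimal sketch of a two-channel configuration with the Ruby client (the file path and channel count are illustrative assumptions, not part of this reference):

```ruby
require "google/cloud/speech/v1p1beta1"

client = Google::Cloud::Speech::V1p1beta1::Speech::Client.new

# Transcribe each channel of a hypothetical two-channel raw PCM capture
# separately by setting both fields described above.
config = {
  encoding:                                :LINEAR16,
  sample_rate_hertz:                       16_000,
  language_code:                           "en-US",
  audio_channel_count:                     2,
  enable_separate_recognition_per_channel: true
}
audio = { content: File.binread("stereo.raw") }

response = client.recognize config: config, audio: audio
response.results.each do |result|
  puts "channel #{result.channel_tag}: #{result.alternatives.first.transcript}"
end
```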
For best results, the audio source should be captured and transmitted using
a lossless encoding (FLAC or LINEAR16). The accuracy of the speech
recognition can be reduced if lossy codecs are used to capture or transmit
audio, particularly if background noise is present. Lossy codecs include
MULAW, AMR, AMR_WB, OGG_OPUS, SPEEX_WITH_HEADER_BYTE, MP3,
and WEBM_OPUS.
The FLAC and WAV audio file formats include a header that describes the
included audio content. You can request recognition for WAV files that
contain either LINEAR16 or MULAW encoded audio.
If you send a FLAC or WAV audio file in
your request, you do not need to specify an AudioEncoding; the audio
encoding format is determined from the file header. If you specify
an AudioEncoding when you send FLAC or WAV audio, the
encoding configuration must match the encoding described in the audio
header; otherwise the request returns a
google.rpc.Code.INVALID_ARGUMENT error code.
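As a sketch of both cases (file names are hypothetical): a FLAC or WAV file can be sent without an encoding, while headerless raw PCM needs an explicit one.

```ruby
require "google/cloud/speech/v1p1beta1"

client = Google::Cloud::Speech::V1p1beta1::Speech::Client.new

# FLAC carries a header, so encoding and sample rate can be omitted.
response = client.recognize(
  config: { language_code: "en-US" },
  audio:  { content: File.binread("commercial_mono.flac") }
)

# Raw PCM has no header, so the encoding must be stated explicitly.
response = client.recognize(
  config: {
    encoding:          :LINEAR16,
    sample_rate_hertz: 16_000,
    language_code:     "en-US"
  },
  audio: { content: File.binread("audio.raw") }
)
```

Had the first request specified an encoding that disagreed with the FLAC header, it would have failed with INVALID_ARGUMENT as described above.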
Constants
ENCODING_UNSPECIFIED
value: 0 Not specified.
LINEAR16
value: 1 Uncompressed 16-bit signed little-endian samples (Linear PCM).
FLAC
value: 2 FLAC (Free Lossless Audio
Codec) is the recommended encoding because it is
lossless, so recognition is not compromised, and it
requires only about half the bandwidth of LINEAR16. FLAC stream
encoding supports 16-bit and 24-bit samples; however, not all fields in
STREAMINFO are supported.
MULAW
value: 3 8-bit samples that compand 14-bit audio samples using G.711 PCMU/mu-law.
AMR
value: 4 Adaptive Multi-Rate Narrowband codec. sample_rate_hertz must be 8000.
AMR_WB
value: 5 Adaptive Multi-Rate Wideband codec. sample_rate_hertz must be 16000.
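For instance, each AMR variant must be paired with its fixed sample rate; a minimal config sketch (the language code is an illustrative choice):

```ruby
# AMR narrowband is always 8 kHz; AMR wideband is always 16 kHz.
amr_config    = { encoding: :AMR,    sample_rate_hertz: 8_000,  language_code: "en-US" }
amr_wb_config = { encoding: :AMR_WB, sample_rate_hertz: 16_000, language_code: "en-US" }
```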
OGG_OPUS
value: 6 Opus-encoded audio frames in an Ogg container
(OggOpus: https://wiki.xiph.org/OggOpus).
sample_rate_hertz must be one of 8000, 12000, 16000, 24000, or 48000.
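A minimal config sketch using one of the permitted rates (48000 here is an arbitrary valid choice):

```ruby
# Any of 8000, 12000, 16000, 24000, or 48000 is accepted for OGG_OPUS.
config = { encoding: :OGG_OPUS, sample_rate_hertz: 48_000, language_code: "en-US" }
```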
SPEEX_WITH_HEADER_BYTE
value: 7 Although the use of lossy encodings is not recommended, if a very low
bitrate encoding is required, OGG_OPUS is highly preferred over
Speex encoding. The Speex (https://speex.org/) encoding supported by
Cloud Speech API has a header byte in each block, as in MIME type
audio/x-speex-with-header-byte.
It is a variant of the RTP Speex encoding defined in
RFC 5574 (https://tools.ietf.org/html/rfc5574).
The stream is a sequence of blocks, one block per RTP packet. Each block
starts with a byte containing the length of the block, in bytes, followed
by one or more frames of Speex data, padded to an integral number of
bytes (octets) as specified in RFC 5574. In other words, each RTP header
is replaced with a single byte containing the block length. Only Speex
wideband is supported. sample_rate_hertz must be 16000.
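The block framing described above can be sketched in a few lines of Ruby (frame_speex_blocks is a hypothetical helper; it assumes each payload is the Speex data of one former RTP packet, already padded to whole octets):

```ruby
# Replace each RTP header with a single byte holding the block length,
# producing the audio/x-speex-with-header-byte layout.
def frame_speex_blocks(payloads)
  payloads.map do |payload|
    raise ArgumentError, "block exceeds 255 bytes" if payload.bytesize > 255
    [payload.bytesize].pack("C") + payload
  end.join
end
```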
MP3
value: 8 MP3 audio. MP3 encoding is a Beta feature available only in
v1p1beta1. All standard MP3 bitrates (which range from 32 to 320
kbps) are supported. When using this encoding, sample_rate_hertz must
match the sample rate of the file being used.
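A sketch, assuming a hypothetical 44.1 kHz MP3 file and reusing the client from the earlier examples:

```ruby
# sample_rate_hertz must equal the MP3 file's own sample rate.
config = { encoding: :MP3, sample_rate_hertz: 44_100, language_code: "en-US" }
audio  = { content: File.binread("speech.mp3") }
response = client.recognize config: config, audio: audio
```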
WEBM_OPUS
value: 9 Opus-encoded audio frames in a WebM container
(WebM: https://www.webmproject.org/docs/container/).
sample_rate_hertz must be one of 8000, 12000, 16000, 24000, or 48000.
ALAW
value: 10 8-bit samples that compand 13-bit audio samples using G.711 PCMA/A-law.
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-28 UTC."],[],[],null,["# Cloud Speech-to-Text V1p1beta1 API - Module Google::Cloud::Speech::V1p1beta1::RecognitionConfig::AudioEncoding (v0.25.0)\n\nVersion latestkeyboard_arrow_down\n\n- [0.25.0 (latest)](/ruby/docs/reference/google-cloud-speech-v1p1beta1/latest/Google-Cloud-Speech-V1p1beta1-RecognitionConfig-AudioEncoding)\n- [0.24.1](/ruby/docs/reference/google-cloud-speech-v1p1beta1/0.24.1/Google-Cloud-Speech-V1p1beta1-RecognitionConfig-AudioEncoding)\n- [0.23.0](/ruby/docs/reference/google-cloud-speech-v1p1beta1/0.23.0/Google-Cloud-Speech-V1p1beta1-RecognitionConfig-AudioEncoding)\n- [0.22.0](/ruby/docs/reference/google-cloud-speech-v1p1beta1/0.22.0/Google-Cloud-Speech-V1p1beta1-RecognitionConfig-AudioEncoding)\n- [0.21.1](/ruby/docs/reference/google-cloud-speech-v1p1beta1/0.21.1/Google-Cloud-Speech-V1p1beta1-RecognitionConfig-AudioEncoding)\n- [0.20.2](/ruby/docs/reference/google-cloud-speech-v1p1beta1/0.20.2/Google-Cloud-Speech-V1p1beta1-RecognitionConfig-AudioEncoding)\n- [0.19.0](/ruby/docs/reference/google-cloud-speech-v1p1beta1/0.19.0/Google-Cloud-Speech-V1p1beta1-RecognitionConfig-AudioEncoding)\n- [0.18.1](/ruby/docs/reference/google-cloud-speech-v1p1beta1/0.18.1/Google-Cloud-Speech-V1p1beta1-RecognitionConfig-AudioEncoding)\n- [0.17.1](/ruby/docs/reference/google-cloud-speech-v1p1beta1/0.17.1/Google-Cloud-Speech-V1p1beta1-RecognitionConfig-AudioEncoding)\n- [0.16.0](/ruby/docs/reference/google-cloud-speech-v1p1beta1/0.16.0/Google-Cloud-Speech-V1p1beta1-RecognitionConfig-AudioEncoding)\n- [0.15.3](/ruby/docs/reference/google-cloud-speech-v1p1beta1/0.15.3/Google-Cloud-Speech-V1p1beta1-RecognitionConfig-AudioEncoding)\n- [0.14.0](/ruby/docs/reference/google-cloud-speech-v1p1beta1/0.14.0/Google-Cloud-Speech-V1p1beta1-RecognitionConfig-AudioEncoding)\n- [0.13.0](/ruby/docs/reference/google-cloud-speech-v1p1beta1/0.13.0/Google-Cloud-Speech-V1p1beta1-RecognitionConfig-AudioEncoding)\n- [0.12.4](/ruby/docs/reference/google-cloud-speech-v1p1beta1/0.12.4/Google-Cloud-Speech-V1p1beta1-RecognitionConfig-AudioEncoding) \nReference documentation and code samples for the Cloud Speech-to-Text V1p1beta1 API module Google::Cloud::Speech::V1p1beta1::RecognitionConfig::AudioEncoding.\n\nThe encoding of the audio data sent in the request.\n\n\nAll encodings support only 1 channel (mono) audio, unless the\n`audio_channel_count` and `enable_separate_recognition_per_channel` fields\nare set.\n\nFor best results, the audio source should be captured and transmitted using\na lossless encoding (`FLAC` or `LINEAR16`). The accuracy of the speech\nrecognition can be reduced if lossy codecs are used to capture or transmit\naudio, particularly if background noise is present. Lossy codecs include\n`MULAW`, `AMR`, `AMR_WB`, `OGG_OPUS`, `SPEEX_WITH_HEADER_BYTE`, `MP3`,\nand `WEBM_OPUS`.\n\n\u003cbr /\u003e\n\nThe `FLAC` and `WAV` audio file formats include a header that describes the\nincluded audio content. 
You can request recognition for `WAV` files that\ncontain either `LINEAR16` or `MULAW` encoded audio.\nIf you send `FLAC` or `WAV` audio file format in\nyour request, you do not need to specify an `AudioEncoding`; the audio\nencoding format is determined from the file header. If you specify\nan `AudioEncoding` when you send send `FLAC` or `WAV` audio, the\nencoding configuration must match the encoding described in the audio\nheader; otherwise the request returns an\n\\[google.rpc.Code.INVALID_ARGUMENT\\]\\[google.rpc.Code.INVALID_ARGUMENT\\] error\ncode.\n\nConstants\n---------\n\n### ENCODING_UNSPECIFIED\n\n**value:** 0 \nNot specified.\n\n### LINEAR16\n\n**value:** 1 \nUncompressed 16-bit signed little-endian samples (Linear PCM).\n\n### FLAC\n\n**value:** 2 \n`FLAC` (Free Lossless Audio\nCodec) is the recommended encoding because it is\nlossless--therefore recognition is not compromised--and\nrequires only about half the bandwidth of `LINEAR16`. `FLAC` stream\nencoding supports 16-bit and 24-bit samples, however, not all fields in\n`STREAMINFO` are supported.\n\n### MULAW\n\n**value:** 3 \n8-bit samples that compand 14-bit audio samples using G.711 PCMU/mu-law.\n\n### AMR\n\n**value:** 4 \nAdaptive Multi-Rate Narrowband codec. `sample_rate_hertz` must be 8000.\n\n### AMR_WB\n\n**value:** 5 \nAdaptive Multi-Rate Wideband codec. `sample_rate_hertz` must be 16000.\n\n### OGG_OPUS\n\n**value:** 6 \nOpus encoded audio frames in Ogg container\n([OggOpus](https://wiki.xiph.org/OggOpus)).\n`sample_rate_hertz` must be one of 8000, 12000, 16000, 24000, or 48000.\n\n### SPEEX_WITH_HEADER_BYTE\n\n**value:** 7 \nAlthough the use of lossy encodings is not recommended, if a very low\nbitrate encoding is required, `OGG_OPUS` is highly preferred over\nSpeex encoding. The [Speex](https://speex.org/) encoding supported by\nCloud Speech API has a header byte in each block, as in MIME type\n`audio/x-speex-with-header-byte`.\nIt is a variant of the RTP Speex encoding defined in\n[RFC 5574](https://tools.ietf.org/html/rfc5574).\nThe stream is a sequence of blocks, one block per RTP packet. Each block\nstarts with a byte containing the length of the block, in bytes, followed\nby one or more frames of Speex data, padded to an integral number of\nbytes (octets) as specified in RFC 5574. In other words, each RTP header\nis replaced with a single byte containing the block length. Only Speex\nwideband is supported. `sample_rate_hertz` must be 16000.\n\n### MP3\n\n**value:** 8 \nMP3 audio. MP3 encoding is a Beta feature and only available in\nv1p1beta1. Support all standard MP3 bitrates (which range from 32-320\nkbps). When using this encoding, `sample_rate_hertz` has to match the\nsample rate of the file being used.\n\n### WEBM_OPUS\n\n**value:** 9 \nOpus encoded audio frames in WebM container\n([WebM](https://www.webmproject.org/docs/container/)).\n`sample_rate_hertz` must be one of 8000, 12000, 16000, 24000, or 48000.\n\n### ALAW\n\n**value:** 10 \n8-bit samples that compand 13-bit audio samples using G.711 PCMU/a-law."]]