public interface StreamingRecognizeRequestOrBuilder extends MessageOrBuilder
Implements
MessageOrBuilder

Methods
getAudioContent()
public abstract ByteString getAudioContent()
The audio data to be recognized. Sequential chunks of audio data are sent in sequential StreamingRecognizeRequest messages. The first StreamingRecognizeRequest message must not contain audio_content data and all subsequent StreamingRecognizeRequest messages must contain audio_content data. The audio bytes must be encoded as specified in RecognitionConfig. Note: as with all bytes fields, proto buffers use a pure binary representation (not base64). See content limits.
bytes audio_content = 2;
Returns

| Type | Description |
| --- | --- |
| ByteString | The audioContent. |
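The sequencing rule above is easiest to see in code. The sketch below is not part of the generated reference; the encoding, sample rate, and language code are assumed example values. It builds the initial configuration-only message and a subsequent audio-only message with the generated builders.

```java
import com.google.cloud.speech.v1p1beta1.RecognitionConfig;
import com.google.cloud.speech.v1p1beta1.StreamingRecognitionConfig;
import com.google.cloud.speech.v1p1beta1.StreamingRecognizeRequest;
import com.google.protobuf.ByteString;

public class StreamingRequestSketch {
  public static void main(String[] args) {
    // First message: streaming_config only, no audio_content.
    // Encoding, sample rate, and language code are assumed example values.
    RecognitionConfig recognitionConfig =
        RecognitionConfig.newBuilder()
            .setEncoding(RecognitionConfig.AudioEncoding.LINEAR16)
            .setSampleRateHertz(16000)
            .setLanguageCode("en-US")
            .build();
    StreamingRecognizeRequest configRequest =
        StreamingRecognizeRequest.newBuilder()
            .setStreamingConfig(
                StreamingRecognitionConfig.newBuilder().setConfig(recognitionConfig).build())
            .build();

    // Every later message: raw binary audio bytes only (not base64).
    byte[] audioChunk = new byte[3200]; // placeholder; real code reads from a file or microphone
    StreamingRecognizeRequest audioRequest =
        StreamingRecognizeRequest.newBuilder()
            .setAudioContent(ByteString.copyFrom(audioChunk))
            .build();

    System.out.println(configRequest.getStreamingRequestCase()); // STREAMING_CONFIG
    System.out.println(audioRequest.getAudioContent().size());   // 3200
  }
}
```

On a live gRPC stream, the configuration request is sent first and each captured chunk is wrapped in its own audio-only request.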
getStreamingConfig()
public abstract StreamingRecognitionConfig getStreamingConfig()
Provides information to the recognizer that specifies how to process the request. The first StreamingRecognizeRequest message must contain a streaming_config message.
.google.cloud.speech.v1p1beta1.StreamingRecognitionConfig streaming_config = 1;
Returns

| Type | Description |
| --- | --- |
| StreamingRecognitionConfig | The streamingConfig. |
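Because the generated StreamingRecognizeRequest message and its Builder both implement this interface, read-only code can accept the OrBuilder type. The helper below is a hypothetical sketch (the method name and the exception policy are assumptions):

```java
import com.google.cloud.speech.v1p1beta1.StreamingRecognitionConfig;
import com.google.cloud.speech.v1p1beta1.StreamingRecognizeRequestOrBuilder;

public class ConfigInspector {
  /** Hypothetical helper: reads the sample rate from a built request or an in-progress builder. */
  static int sampleRateOf(StreamingRecognizeRequestOrBuilder request) {
    if (!request.hasStreamingConfig()) {
      throw new IllegalArgumentException("first request must carry streaming_config");
    }
    StreamingRecognitionConfig streamingConfig = request.getStreamingConfig();
    return streamingConfig.getConfig().getSampleRateHertz();
  }
}
```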
getStreamingConfigOrBuilder()
public abstract StreamingRecognitionConfigOrBuilder getStreamingConfigOrBuilder()
Provides information to the recognizer that specifies how to process the request. The first StreamingRecognizeRequest message must contain a streaming_config message.
.google.cloud.speech.v1p1beta1.StreamingRecognitionConfig streaming_config = 1;
Returns

| Type | Description |
| --- | --- |
| StreamingRecognitionConfigOrBuilder | |
getStreamingRequestCase()
public abstract StreamingRecognizeRequest.StreamingRequestCase getStreamingRequestCase()
Returns

| Type | Description |
| --- | --- |
| StreamingRecognizeRequest.StreamingRequestCase | |
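The case enum makes it straightforward for a receiver to branch on which oneof field is populated. The dispatcher below is illustrative only; the STREAMING_CONFIG and AUDIO_CONTENT constants are the ones generated for the streaming_request oneof:

```java
import com.google.cloud.speech.v1p1beta1.StreamingRecognizeRequest;

public class RequestDispatcher {
  /** Illustrative sketch: describe a request according to which oneof field is set. */
  static String describe(StreamingRecognizeRequest request) {
    switch (request.getStreamingRequestCase()) {
      case STREAMING_CONFIG:
        return "configuration message";
      case AUDIO_CONTENT:
        return "audio chunk of " + request.getAudioContent().size() + " bytes";
      default:
        return "empty request (no oneof field set)";
    }
  }
}
```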
hasAudioContent()
public abstract boolean hasAudioContent()
The audio data to be recognized. Sequential chunks of audio data are sent in sequential StreamingRecognizeRequest messages. The first StreamingRecognizeRequest message must not contain audio_content data and all subsequent StreamingRecognizeRequest messages must contain audio_content data. The audio bytes must be encoded as specified in RecognitionConfig. Note: as with all bytes fields, proto buffers use a pure binary representation (not base64). See content limits.
bytes audio_content = 2;
Returns

| Type | Description |
| --- | --- |
| boolean | Whether the audioContent field is set. |
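Since hasAudioContent() reports whether the oneof currently holds audio bytes, it can back a simple check of the ordering rule documented above. This is an illustrative sketch, not a library utility:

```java
import com.google.cloud.speech.v1p1beta1.StreamingRecognizeRequest;
import java.util.List;

public class ChunkOrderCheck {
  /** Illustrative check: config-only first message, audio_content in every later message. */
  static boolean followsDocumentedOrder(List<StreamingRecognizeRequest> requests) {
    for (int i = 0; i < requests.size(); i++) {
      StreamingRecognizeRequest request = requests.get(i);
      if (i == 0 && request.hasAudioContent()) {
        return false; // the first message must not contain audio_content
      }
      if (i > 0 && !request.hasAudioContent()) {
        return false; // all subsequent messages must contain audio_content
      }
    }
    return true;
  }
}
```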
hasStreamingConfig()
public abstract boolean hasStreamingConfig()
Provides information to the recognizer that specifies how to process the request. The first StreamingRecognizeRequest message must contain a streaming_config message.
.google.cloud.speech.v1p1beta1.StreamingRecognitionConfig streaming_config = 1;
Returns

| Type | Description |
| --- | --- |
| boolean | Whether the streamingConfig field is set. |