Reference documentation and code samples for the Cloud Speech-to-Text V2 API class Google::Cloud::Speech::V2::Recognizer.
A Recognizer message. Stores recognition configuration and metadata.
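As a minimal sketch (field values below are illustrative placeholders, not defaults), a Recognizer message is built like any other protobuf message in this gem:

```ruby
require "google/cloud/speech/v2"

# Build a Recognizer message in memory. Nothing is sent to the API here;
# the message is typically passed to Speech::Client#create_recognizer.
recognizer = Google::Cloud::Speech::V2::Recognizer.new(
  display_name:   "Support-call transcriber",
  model:          "latest_long",
  language_codes: ["en-US"]
)
```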
Inherits
- Object
Extended By
- Google::Protobuf::MessageExts::ClassMethods
Includes
- Google::Protobuf::MessageExts
Methods
#annotations
def annotations() -> ::Google::Protobuf::Map{::String => ::String}
- (::Google::Protobuf::Map{::String => ::String}) — Allows users to store small amounts of arbitrary data. Keys and values must each be 63 characters or less. At most 100 annotations are allowed.
#annotations=
def annotations=(value) -> ::Google::Protobuf::Map{::String => ::String}
- value (::Google::Protobuf::Map{::String => ::String}) — Allows users to store small amounts of arbitrary data. Keys and values must each be 63 characters or less. At most 100 annotations are allowed.
- (::Google::Protobuf::Map{::String => ::String}) — Allows users to store small amounts of arbitrary data. Keys and values must each be 63 characters or less. At most 100 annotations are allowed.
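A short illustration of both ways to populate the annotations map; the keys and values are arbitrary examples:

```ruby
# Annotations can be supplied at construction time as a plain Hash...
recognizer = Google::Cloud::Speech::V2::Recognizer.new(
  annotations: { "team" => "media", "cost-center" => "cc-4711" }
)

# ...or added to the underlying Google::Protobuf::Map afterwards.
recognizer.annotations["env"] = "prod"
```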
#create_time
def create_time() -> ::Google::Protobuf::Timestamp
- (::Google::Protobuf::Timestamp) — Output only. Creation time.
#default_recognition_config
def default_recognition_config() -> ::Google::Cloud::Speech::V2::RecognitionConfig
- (::Google::Cloud::Speech::V2::RecognitionConfig) — Default configuration to use for requests with this Recognizer. This can be overwritten by inline configuration in the RecognizeRequest.config field.
#default_recognition_config=
def default_recognition_config=(value) -> ::Google::Cloud::Speech::V2::RecognitionConfig
- value (::Google::Cloud::Speech::V2::RecognitionConfig) — Default configuration to use for requests with this Recognizer. This can be overwritten by inline configuration in the RecognizeRequest.config field.
- (::Google::Cloud::Speech::V2::RecognitionConfig) — Default configuration to use for requests with this Recognizer. This can be overwritten by inline configuration in the RecognizeRequest.config field.
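A sketch of attaching a default configuration, assuming the auto_decoding_config and features fields of the V2 RecognitionConfig message; the values are illustrative:

```ruby
# Requests that reference this Recognizer fall back to this configuration
# unless they supply their own RecognizeRequest.config.
recognizer = Google::Cloud::Speech::V2::Recognizer.new(
  default_recognition_config: Google::Cloud::Speech::V2::RecognitionConfig.new(
    auto_decoding_config: Google::Cloud::Speech::V2::AutoDetectDecodingConfig.new,
    features: Google::Cloud::Speech::V2::RecognitionFeatures.new(
      enable_automatic_punctuation: true
    )
  )
)
```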
#delete_time
def delete_time() -> ::Google::Protobuf::Timestamp
- (::Google::Protobuf::Timestamp) — Output only. The time at which this Recognizer was requested for deletion.
#display_name
def display_name() -> ::String
- (::String) — User-settable, human-readable name for the Recognizer. Must be 63 characters or less.
#display_name=
def display_name=(value) -> ::String
- value (::String) — User-settable, human-readable name for the Recognizer. Must be 63 characters or less.
- (::String) — User-settable, human-readable name for the Recognizer. Must be 63 characters or less.
#etag
def etag() -> ::String
- (::String) — Output only. This checksum is computed by the server based on the value of other fields. This may be sent on update, undelete, and delete requests to ensure the client has an up-to-date value before proceeding.
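One way the etag is typically used for optimistic concurrency control, sketched with a placeholder resource name and assuming delete_recognizer accepts the request's etag field as a keyword:

```ruby
client = Google::Cloud::Speech::V2::Speech::Client.new

# Read the current resource to obtain a fresh etag.
recognizer = client.get_recognizer(
  name: "projects/my-project/locations/global/recognizers/my-recognizer"
)

# Pass the etag back so the delete only proceeds if the resource
# has not been modified since it was read.
client.delete_recognizer name: recognizer.name, etag: recognizer.etag
```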
#expire_time
def expire_time() -> ::Google::Protobuf::Timestamp
- (::Google::Protobuf::Timestamp) — Output only. The time at which this Recognizer will be purged.
#kms_key_name
def kms_key_name() -> ::String
- (::String) — Output only. The KMS key name with which the Recognizer is encrypted. The expected format is projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}.
#kms_key_version_name
def kms_key_version_name() -> ::String
- (::String) — Output only. The KMS key version name with which the Recognizer is encrypted. The expected format is projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}/cryptoKeyVersions/{crypto_key_version}.
#language_codes
def language_codes() -> ::Array<::String>
- (::Array<::String>) — Required. The language of the supplied audio as a BCP-47 language tag. Supported languages:
  - en-US
  - en-GB
  - fr-FR
  If additional languages are provided, the recognition result will contain recognition in the most likely language detected, and will include the language tag of the language detected in the audio. When you create or update a Recognizer, these values are stored in normalized BCP-47 form. For example, "en-us" is stored as "en-US".
#language_codes=
def language_codes=(value) -> ::Array<::String>
- value (::Array<::String>) — Required. The language of the supplied audio as a BCP-47 language tag. Supported languages:
  - en-US
  - en-GB
  - fr-FR
  If additional languages are provided, the recognition result will contain recognition in the most likely language detected, and will include the language tag of the language detected in the audio. When you create or update a Recognizer, these values are stored in normalized BCP-47 form. For example, "en-us" is stored as "en-US".
- (::Array<::String>) — Required. The language of the supplied audio as a BCP-47 language tag. Supported languages:
  - en-US
  - en-GB
  - fr-FR
  If additional languages are provided, the recognition result will contain recognition in the most likely language detected, and will include the language tag of the language detected in the audio. When you create or update a Recognizer, these values are stored in normalized BCP-47 form. For example, "en-us" is stored as "en-US".
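A brief sketch of supplying several candidate languages; the codes are examples from the supported list above, and the lowercase form shows the server-side normalization:

```ruby
# No listed language is privileged; the service transcribes each result in
# the most likely language among the candidates.
recognizer = Google::Cloud::Speech::V2::Recognizer.new(
  language_codes: ["en-us", "fr-FR"]  # "en-us" is stored normalized as "en-US"
)
```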
#model
def model() -> ::String
- (::String) — Required. Which model to use for recognition requests. Select the model best suited to your domain to get the best results. Supported models:
  - latest_long: Best for long-form content like media or conversation.
  - latest_short: Best for short-form content like commands or single-shot directed speech. When using this model, the service will stop transcribing audio after the first utterance is detected and completed. SEPARATE_RECOGNITION_PER_CHANNEL is not supported with this model; multi-channel audio is accepted, but only the first channel will be processed and transcribed.
#model=
def model=(value) -> ::String
- value (::String) — Required. Which model to use for recognition requests. Select the model best suited to your domain to get the best results. Supported models:
  - latest_long: Best for long-form content like media or conversation.
  - latest_short: Best for short-form content like commands or single-shot directed speech. When using this model, the service will stop transcribing audio after the first utterance is detected and completed. SEPARATE_RECOGNITION_PER_CHANNEL is not supported with this model; multi-channel audio is accepted, but only the first channel will be processed and transcribed.
- (::String) — Required. Which model to use for recognition requests. Select the model best suited to your domain to get the best results. Supported models:
  - latest_long: Best for long-form content like media or conversation.
  - latest_short: Best for short-form content like commands or single-shot directed speech. When using this model, the service will stop transcribing audio after the first utterance is detected and completed. SEPARATE_RECOGNITION_PER_CHANNEL is not supported with this model; multi-channel audio is accepted, but only the first channel will be processed and transcribed.
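As an illustration of choosing a model when creating a Recognizer (the project, location, and recognizer ID are placeholders), using latest_short for command-style audio:

```ruby
client = Google::Cloud::Speech::V2::Speech::Client.new

# create_recognizer returns a long-running operation.
operation = client.create_recognizer(
  parent:        "projects/my-project/locations/global",
  recognizer_id: "voice-commands",
  recognizer:    Google::Cloud::Speech::V2::Recognizer.new(
    model:          "latest_short",  # stops after the first completed utterance
    language_codes: ["en-US"]
  )
)
operation.wait_until_done!
puts operation.response.name
```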
#name
def name() -> ::String
- (::String) — Output only. The resource name of the Recognizer. Format: projects/{project}/locations/{location}/recognizers/{recognizer}.
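The generated client exposes a path helper that assembles names in this format; a small sketch with placeholder IDs:

```ruby
client = Google::Cloud::Speech::V2::Speech::Client.new

# Builds the resource name string locally; no API call is made.
name = client.recognizer_path project:    "my-project",
                              location:   "global",
                              recognizer: "my-recognizer"
# => "projects/my-project/locations/global/recognizers/my-recognizer"
```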
#reconciling
def reconciling() -> ::Boolean
- (::Boolean) — Output only. Whether or not this Recognizer is in the process of being updated.
#state
def state() -> ::Google::Cloud::Speech::V2::Recognizer::State
- (::Google::Cloud::Speech::V2::Recognizer::State) — Output only. The Recognizer lifecycle state.
#uid
def uid() -> ::String
- (::String) — Output only. System-assigned unique identifier for the Recognizer.
#update_time
def update_time() -> ::Google::Protobuf::Timestamp
- (::Google::Protobuf::Timestamp) — Output only. The most recent time this Recognizer was modified.
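Output-only fields such as these are populated by the service and can be inspected after a read; a sketch with a placeholder resource name:

```ruby
client = Google::Cloud::Speech::V2::Speech::Client.new

recognizer = client.get_recognizer(
  name: "projects/my-project/locations/global/recognizers/my-recognizer"
)

puts recognizer.state                        # enum symbol, e.g. :ACTIVE
puts recognizer.uid                          # system-assigned identifier
puts Time.at(recognizer.update_time.seconds) # Timestamp's epoch seconds as a Ruby Time
```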