Use models

**Preview**: This feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms. Pre-GA features are available "as is" and might have limited support. For more information, see the launch stage descriptions.

Use a trained Custom Speech-to-Text model in your production application or benchmarking workflows. As soon as you deploy your model through a dedicated endpoint, you automatically get programmatic access through a recognizer object, which can be used directly through the Speech-to-Text V2 API or in the Google Cloud console.
Before you begin
Ensure that you have signed up for a Google Cloud account, created a project, trained a custom speech model, and deployed it using an endpoint.
Perform inference in V2
For a Custom Speech-to-Text model to be ready for use, the state of the model in the **Models** tab must be **Active**, and the dedicated endpoint in the **Endpoints** tab must be **Deployed**.
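These two console states can also be checked in code before sending traffic. The following is a minimal sketch, not part of the client library: the helper name and the use of the console labels as plain status strings are our assumptions.

```python
def ready_for_inference(model_state: str, endpoint_state: str) -> bool:
    # Mirrors the console checks: the model must be "Active"
    # and its dedicated endpoint must be "Deployed".
    return model_state == "Active" and endpoint_state == "Deployed"


print(ready_for_inference("Active", "Deployed"))   # expected: True
print(ready_for_inference("Active", "Deploying"))  # expected: False
```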
In our example, where the Google Cloud project ID is `custom-models-walkthrough`, the endpoint that corresponds to the Custom Speech-to-Text model `quantum-computing-lectures-custom-model` is `quantum-computing-lectures-custom-model-prod-endpoint`. The region in which it is available is `us-east1`, and the batch transcription request is the following:
```python
from google.api_core import client_options
from google.cloud.speech_v2 import SpeechClient
from google.cloud.speech_v2.types import cloud_speech


def quickstart_v2(
    project_id: str,
    audio_file: str,
) -> cloud_speech.RecognizeResponse:
    """Transcribe an audio file."""
    # Instantiates a client
    client = SpeechClient(
        client_options=client_options.ClientOptions(
            api_endpoint="us-east1-speech.googleapis.com"
        )
    )

    # Reads a file as bytes
    with open(audio_file, "rb") as f:
        content = f.read()

    config = cloud_speech.RecognitionConfig(
        auto_decoding_config=cloud_speech.AutoDetectDecodingConfig(),
        language_codes=["en-US"],
        model="projects/custom-models-walkthrough/locations/us-east1/endpoints/quantum-computing-lectures-custom-model-prod-endpoint",
    )
    request = cloud_speech.RecognizeRequest(
        recognizer="projects/custom-models-walkthrough/locations/us-east1/recognizers/_",
        config=config,
        content=content,
    )

    # Transcribes the audio into text
    response = client.recognize(request=request)

    for result in response.results:
        print(f"Transcript: {result.alternatives[0].transcript}")

    return response
```

**Note:** If you try to create a recognizer object in a different region than the one that the endpoint is created in, the request will fail.
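The fully qualified resource names passed to `model` and `recognizer` above follow a fixed pattern. As a minimal sketch (the helper functions are our own, not part of the client library), they can be assembled like this:

```python
def model_endpoint_path(project_id: str, region: str, endpoint_id: str) -> str:
    # Resource name of a deployed custom-model endpoint,
    # as passed to RecognitionConfig.model.
    return f"projects/{project_id}/locations/{region}/endpoints/{endpoint_id}"


def recognizer_path(project_id: str, region: str, recognizer_id: str = "_") -> str:
    # "_" selects the default recognizer in the given region.
    return f"projects/{project_id}/locations/{region}/recognizers/{recognizer_id}"


print(model_endpoint_path(
    "custom-models-walkthrough",
    "us-east1",
    "quantum-computing-lectures-custom-model-prod-endpoint",
))
```

Keeping these strings in one place makes it harder for the region in the resource names to drift out of sync with the regional `api_endpoint` of the client.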
What's next
Follow the resources to take advantage of custom speech models in your application. For more information, see Evaluate your custom models.
Last updated: 2025-08-18 (UTC).