You can use the extended streaming feature to stream audio content to Dialogflow and stream human agent suggestions back.
Normally, you half-close the stream or tell the Dialogflow API when to end it so that it generates the final transcript and Agent Assist suggestions. This happens at conversation turns, when the API receives the parameter is_final=true in the recognition result, StreamingAnalyzeContentResponse.recognition_result.
Extended streaming reduces the need to half-close at conversation turns. It extends the connection timeout to three minutes, during which you can send audio streams without half-closing. The Dialogflow API automatically sends the final transcripts and Agent Assist suggestions back to the stream. You only restart the stream if it times out.
Note: Streaming is supported by the RPC API and client libraries only.
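To make the turn boundary concrete, the following minimal sketch (Python, assuming the v2beta1 client library and that responses is the iterator returned by an existing streamingAnalyzeContent call) shows where is_final surfaces. Without extended streaming, this is the point at which you would half-close; with extended streaming enabled, you keep sending audio instead.

```python
# Minimal sketch: detecting the conversation-turn boundary in the response
# stream. Assumes `responses` comes from a streamingAnalyzeContent call made
# elsewhere with the v2beta1 client library.
for response in responses:
    result = response.recognition_result
    if result.is_final:
        # Without extended streaming, this is where you would half-close the
        # request stream to receive the final transcript and suggestions.
        # With enable_extended_streaming: true, you keep sending audio and the
        # API returns them automatically.
        print("Final transcript for this turn:", result.transcript)
```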
Streaming basics
The Agent Assist extended streaming feature is similar to audio streaming for CCAI Transcription. Your system streams audio data to the API, and Dialogflow streams back StreamingAnalyzeContentResponse data. The returned data includes suggestions for your human agents.
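As a rough illustration of the returned data, the sketch below iterates over StreamingAnalyzeContentResponse messages and prints the suggestion payloads. It assumes the v2beta1 client library and that the streaming response carries suggestions in a human_agent_suggestion_results field, mirroring AnalyzeContentResponse; verify the field name against the reference for your library version.

```python
# Sketch: reading human agent suggestions from the response stream.
# `responses` is assumed to be the iterator returned by streamingAnalyzeContent.
for response in responses:
    # Field name assumed to mirror AnalyzeContentResponse; check the reference.
    for suggestion_result in response.human_agent_suggestion_results:
        # Each SuggestionResult contains either an error or one suggestion
        # payload (articles, FAQ answers, smart replies, and so on).
        print("Agent Assist suggestion:", suggestion_result)
```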
Streaming yang Diperluas hanya mendukung tahap Agent Assist. Lihat tahap percakapan. Untuk menggunakan fitur ini:
Call the streamingAnalyzeContent method and set the following fields:
StreamingAnalyzeContentRequest.audio_config.audio_encoding:
AUDIO_ENCODING_LINEAR_16 atau AUDIO_ENCODING_MULAW
enable_extended_streaming: true.
The first streamingAnalyzeContent request prepares the stream and sets your audio configuration.
In subsequent requests, you send audio bytes to the stream.
As long as you continue to send audio, you keep receiving suggestions.
You don't need to close the stream manually. It closes automatically once Agent Assist detects that utterances have stopped.
Restart the stream (which includes resending the initial audio configuration) in the following cases:
The stream is broken (the stream stopped when it wasn't supposed to).
Your audio data is approaching the automatic timeout at three minutes.
You received a retryable error. You can retry up to three times.
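Putting these steps together, here is a hedged end-to-end sketch in Python using the v2beta1 client library. The participant path, sample rate, audio chunk source, and the choice of which errors to treat as retryable are illustrative assumptions, and the exact placement of the enable_extended_streaming field on the request may differ by library version, so check the reference before relying on it.

```python
from google.api_core import exceptions as gax_exceptions
from google.cloud import dialogflow_v2beta1 as dialogflow

# Hypothetical participant path, for illustration only.
PARTICIPANT = (
    "projects/my-project/locations/global/conversations/my-conversation"
    "/participants/my-participant"
)

client = dialogflow.ParticipantsClient()


def request_generator(audio_chunks):
    """Yields the configuration request first, then raw audio requests."""
    audio_config = dialogflow.InputAudioConfig(
        audio_encoding=dialogflow.AudioEncoding.AUDIO_ENCODING_LINEAR_16,
        sample_rate_hertz=16000,  # must match your audio source
        language_code="en-US",
    )
    # First request: prepares the stream and sets the audio configuration.
    # enable_extended_streaming is the field named in this guide; verify its
    # exact placement for your library version.
    yield dialogflow.StreamingAnalyzeContentRequest(
        participant=PARTICIPANT,
        audio_config=audio_config,
        enable_extended_streaming=True,
    )
    # Subsequent requests: audio bytes only.
    for chunk in audio_chunks:
        yield dialogflow.StreamingAnalyzeContentRequest(input_audio=chunk)


def stream_with_restart(get_audio_chunks, max_retries=3):
    """Streams audio; restarts (resending the config) on retryable errors.

    get_audio_chunks is assumed to return a fresh iterator of audio bytes
    that resumes from the current position after a restart.
    """
    retries = 0
    while True:
        try:
            responses = client.streaming_analyze_content(
                requests=request_generator(get_audio_chunks())
            )
            for response in responses:
                if response.recognition_result.is_final:
                    print("Final transcript:", response.recognition_result.transcript)
                # Suggestion field name assumed, as noted earlier.
                for suggestion in response.human_agent_suggestion_results:
                    print("Agent Assist suggestion:", suggestion)
            return  # Stream closed normally (utterances stopped or timeout).
        except gax_exceptions.GoogleAPICallError as err:
            # Which errors are retryable depends on your setup; retry at most
            # three times, restarting the stream each time.
            retries += 1
            if retries > max_retries:
                raise
            print(f"Restarting stream after retryable error ({retries}/{max_retries}): {err}")
```

On each restart the generator sends the configuration request again, which satisfies the requirement above to resend the initial audio configuration.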
[[["Mudah dipahami","easyToUnderstand","thumb-up"],["Memecahkan masalah saya","solvedMyProblem","thumb-up"],["Lainnya","otherUp","thumb-up"]],[["Sulit dipahami","hardToUnderstand","thumb-down"],["Informasi atau kode contoh salah","incorrectInformationOrSampleCode","thumb-down"],["Informasi/contoh yang saya butuhkan tidak ada","missingTheInformationSamplesINeed","thumb-down"],["Masalah terjemahan","translationIssue","thumb-down"],["Lainnya","otherDown","thumb-down"]],["Terakhir diperbarui pada 2025-09-04 UTC."],[[["\u003cp\u003eExtended Streaming allows streaming audio content to Dialogflow and receiving human agent suggestions in return, similar to audio streaming for CCAI Transcription.\u003c/p\u003e\n"],["\u003cp\u003eThis feature, available "as is" and with potentially limited support under the Pre-GA Offerings Terms, is accessible via the \u003ccode\u003estreamingAnalyzeContent\u003c/code\u003e method in the RPC API and client libraries.\u003c/p\u003e\n"],["\u003cp\u003eTo initiate Extended Streaming, users must set \u003ccode\u003eenable_extended_streaming\u003c/code\u003e to \u003ccode\u003etrue\u003c/code\u003e and provide appropriate audio configurations (\u003ccode\u003eAUDIO_ENCODING_LINEAR_16\u003c/code\u003e or \u003ccode\u003eAUDIO_ENCODING_MULAW\u003c/code\u003e).\u003c/p\u003e\n"],["\u003cp\u003eThe stream remains active as long as audio data is sent, automatically closing when utterances stop, and it will automatically timeout after 3 minutes of activity.\u003c/p\u003e\n"],["\u003cp\u003eUsers should restart the stream if it breaks unexpectedly, if approaching the three-minute timeout, or after receiving a re-tryable error (up to three retries are allowed).\u003c/p\u003e\n"]]],[],null,["# Extended streaming\n\n| **Preview**\n|\n|\n| This feature is subject to the \"Pre-GA Offerings Terms\" in the General Service Terms section\n| of the [Service Specific Terms](/terms/service-terms#1).\n|\n| Pre-GA features are available \"as is\" and might have limited support.\n|\n| For more information, see the\n| [launch stage descriptions](/products#product-launch-stages).\n\nYou can use the extended streaming feature to stream audio content to\nDialogflow and stream human agent suggestions back.\n\nNormally, you half-close or tell the Dialogflow API when to end the stream to generate the final transcript and Agent Assist suggestions. This happens at conversation turns, where the API receives the parameter `is_final=true` from the recognition result, `StreamingAnalyzeContentResponse.recognition_result`.\n\nExtended streaming reduces the need for half-closing at conversation turns. It extends the connection timeout to three minutes, during which you can send audio streams without half-closing. The Dialogflow API automatically sends the final transcripts and Agent Assist suggestions back to the stream. You only restart the stream if it times out.\n| **Note:** Streaming is supported by the RPC API and client libraries only.\n\nStreaming basics\n----------------\n\nThe Agent Assist extended streaming feature is similar to [audio\nstreaming](/agent-assist/docs/transcription) for CCAI Transcription. Your system\nstreams audio data to the API, and Dialogflow streams back\n`StreamingAnalyzeContentResponse` data. The returned data includes suggestions\nfor your human agents.\n| **Note:** Streaming automatically times out after three minutes. 
If your conversation lasts longer than three minutes, you can handle the timeout by closing and re-opening the stream.\n\nTo use Extended Streaming, call the\n[`streamingAnalyzeContent`](/dialogflow/es/docs/reference/rpc/google.cloud.dialogflow.v2beta1#google.cloud.dialogflow.v2beta1.Participants.StreamingAnalyzeContent)\nmethod.\n\nExtended Streaming only supports Agent Assist stage. See [conversation\nstage](/agent-assist/docs/basics). To use this feature:\n\n1. Call the `streamingAnalyzeContent` method and set the following fields:\n - `StreamingAnalyzeContentRequest.audio_config.audio_encoding`: `AUDIO_ENCODING_LINEAR_16` or `AUDIO_ENCODING_MULAW`\n - `enable_extended_streaming`: `true`.\n2. The first `streamingAnalyzeContent` request prepares the stream and sets your audio configuration.\n3. In subsequent requests, you send audio bytes to the stream.\n4. As long as you continue to send audio, you will keep receiving suggestions. You don't need to manually close the stream. It will close automatically once Agent Assist detects that utterances have stopped.\n5. Restart the stream (which includes resending the initial audio configuration) in the following cases:\n - The stream is broken (the stream stopped when it wasn't supposed to).\n - Your audio data is approaching the automatic timeout at 3 minutes.\n - You received a re-tryable error. You can retry up to three times."]]