Last updated (UTC): 2025-08-18.

# Advanced speech settings

| **Note:**
Advanced speech settings are only supported by
[telephony integrations](/dialogflow/cx/docs/concept/integration).

This guide describes additional, advanced settings for speech features. You can
toggle these advanced settings on and off by navigating to **Agent settings >
Speech and IVR > Speech-to-Text > Advanced speech settings**.

These settings are available in agent settings (applies to the entire agent),
[flow settings](/dialogflow/cx/docs/concept/flow#settings) (applies to the
entire flow and overrides agent settings),
[page settings](/dialogflow/cx/docs/concept/page#settings) (applies to the page
and overrides flow and agent settings), and
[fulfillment settings](/dialogflow/cx/docs/concept/fulfillment#settings)
(applies to the fulfillment and overrides page, flow, and agent settings). A
subset of these settings is available at each level, depending on the setting's
relevance to that level. *Before you can configure these settings at any lower
level, you must first check the **Enable advanced speech settings** box at the
agent level (Agent settings > Speech and IVR > Speech-to-Text > Advanced speech
settings).*

Updated agent-level settings don't propagate to the flow, page, and fulfillment
levels when the **Customize** option is selected at those lower levels. If the
**Customize** option encompasses multiple settings and you update only some of
them, you might also need to update the others if you want them to match the
agent-level settings.

Settings availability by level
------------------------------

The following table indicates which advanced speech settings are available at
each level:

| Setting | Agent | Flow | Page | Fulfillment |
|---|:-:|:-:|:-:|:-:|
| Model selection (Speech-to-Text) | ✓ | ✓ | ✓ | |
| End of speech sensitivity | ✓ | ✓ | ✓ | |
| Advanced timeout-based end of speech sensitivity | ✓ | ✓\* | ✓\* | |
| Enable smart endpointing | ✓ | | | |
| No speech timeout | ✓ | ✓ | ✓ | |
| Barge-in | ✓ | ✓ | | ✓ |
| Allow cancellation of partial response playback | | | | ✓ |
| Audio export bucket | ✓ | ✓ | | |

\* Can be enabled at the agent level and disabled at the flow and page levels.

Model selection (Speech-to-Text)
--------------------------------

Can be set at the agent, flow, and page levels.

Sets the speech model used for speech recognition.
This setting is
language-specific, so you can select different models for different languages.
You can also check **Override request-level speech model**, which causes the
selected model to be used even if a runtime API call specifies a different
model.

For Conversational Agents (Dialogflow CX) Phone Gateway, see
[limitations](/dialogflow/cx/docs/concept/integration/phone-gateway#limits).
For more information, see
[Speech models](/dialogflow/cx/docs/concept/speech-models).

End of speech sensitivity
-------------------------

Can be set at the agent, flow, and page levels.

Controls the sensitivity for recognizing the end of speech in end-user audio
input. The value ranges from 0 (low sensitivity, less likely to end speech) to
100 (high sensitivity, more likely to end speech).

| **Note:** This setting only supports the `phone_call` speech model for the `en-US` [language](/dialogflow/cx/docs/reference/language) tag unless you check the **Enable advanced timeout-based end of speech sensitivity** setting.

Advanced timeout-based end of speech sensitivity
------------------------------------------------

Can be set at the agent level and disabled at the flow and page levels.

If this setting is enabled, the
[**End of speech sensitivity**](/dialogflow/cx/docs/concept/advanced-speech#end-of-speech-sensitivity)
setting value is used as a gauge to establish a relative audio silence timeout
that determines the end of speech.
If this setting is disabled (the default), the
**End of speech sensitivity** setting value is used by the ML model provided by
Google Cloud Speech-to-Text to determine the end of speech.

While the **End of speech sensitivity** setting supports only the `phone_call`
speech model for the `en-US` [language](/dialogflow/cx/docs/reference/language)
tag by default, the **Enable advanced timeout-based end of speech sensitivity**
setting allows you to configure end of speech sensitivity for all languages and
speech models supported by Dialogflow.

Enable smart endpointing
------------------------

Can be set at the agent level only.

If this setting is enabled, Conversational Agents (Dialogflow CX) analyzes
partial user input to determine the end of speech. For example, if the user
says "I would like to" and pauses, Conversational Agents (Dialogflow CX) waits
for the user to continue the sentence.

This is particularly useful for numeric parameter collection, where the user
might say "1234" and pause before saying "5678". To apply this setting to a
specific parameter, you have to set up
[Smart endpointing](/dialogflow/cx/docs/concept/parameter#smart-endpointing) in
the parameter's form.

This setting is available for the `en-US` language tag only and is disabled by
default.

No speech timeout
-----------------

Can be set at the agent, flow, and page levels.

The duration in seconds after which Conversational Agents (Dialogflow CX) stops
waiting for end-user audio input. The default is 5 seconds, and the maximum
value is 60 seconds. When this timeout is reached, Conversational Agents
(Dialogflow CX) invokes a
[no-input event](/dialogflow/cx/docs/concept/handler#event-built-in).

Barge-in
--------

Can be set at the agent, flow, and fulfillment levels.

When enabled, an end-user can interrupt Conversational Agents (Dialogflow CX)
response audio.
When interrupted, Conversational Agents (Dialogflow CX) stops sending audio and
processes the next end-user input.

If there are multiple messages in the
[message queue](/dialogflow/cx/docs/concept/fulfillment#queue), and a message
was queued by a fulfillment associated with a page, flow, or agent that has
barge-in enabled, then all following messages in the queue also have barge-in
enabled. In this case, the integration stops playing audio for all of the
queued messages with barge-in enabled.

Allow cancellation of partial response playback
-----------------------------------------------

Can be set at the fulfillment level only.

You can enable this setting when the **Enable advanced speech settings** box is
checked in **Agent settings > Speech and IVR** and
[partial response](/dialogflow/cx/docs/concept/fulfillment#partial-response)
is enabled at the fulfillment level. This setting allows cancellation of
partial response playback.

If a message in the
[message queue](/dialogflow/cx/docs/concept/fulfillment#queue) is created by a
fulfillment that allows cancellation, playback of the message is cancelled if
another message is added to the queue.
This is useful when you want
an initial message to start playback, but want that playback to be cancelled if
a working webhook produces another message before playback of the initial
message completes.

Audio export bucket
-------------------

Can be set at the agent and flow levels.

If supplied, end-user audio input and synthesized Text-to-Speech audio
associated with a request are saved to the specified Cloud Storage bucket.

Grant the **Storage Object Creator** role to the following service accounts in
your project:

- The service account of the format
  `one-click@df-cx-ALPHANUMERIC_VALUE-ALPHANUMERIC_VALUE.iam.gserviceaccount.com`
  if you use a [partner built-in telephony integration](/dialogflow/cx/docs/concept/integration#telephony-partner-built-in).

- The service account of the format
  `service-PROJECT_NUMBER@gcp-sa-dialogflow.iam.gserviceaccount.com`
  if you use the [Dialogflow CX Phone Gateway](/dialogflow/cx/docs/concept/integration/phone-gateway)
  integration. To find this service account in IAM, check the **Include
  Google-provided role grants** option.

DTMF
----

See the
[DTMF (Dual-tone multi-frequency signaling) documentation](/dialogflow/cx/docs/concept/dtmf)
for more information about this feature.
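Example: settings resolution across levels
------------------------------------------

The level-override behavior described at the top of this guide (flow overrides
agent, page overrides flow, fulfillment overrides page) can be sketched as
follows. This is a minimal, hypothetical model: the `SpeechSettings` class,
its field names, and the `resolve` helper are illustrative only and do not
mirror the Dialogflow CX API.

```python
from dataclasses import dataclass, fields
from typing import Optional

# Hypothetical model of the override hierarchy described in this guide.
# Field names are illustrative; they do not mirror the Dialogflow CX API.
@dataclass
class SpeechSettings:
    model: Optional[str] = None                      # Speech-to-Text model
    end_of_speech_sensitivity: Optional[int] = None  # 0 to 100
    no_speech_timeout: Optional[int] = None          # seconds, 60 maximum
    barge_in: Optional[bool] = None

def resolve(*levels: Optional[SpeechSettings]) -> SpeechSettings:
    """Merge settings in agent, flow, page, fulfillment order; each lower
    level overrides any field it explicitly customizes."""
    effective = SpeechSettings()
    for level in levels:
        if level is None:  # this level doesn't customize anything
            continue
        for f in fields(SpeechSettings):
            value = getattr(level, f.name)
            if value is not None:
                setattr(effective, f.name, value)
    if effective.no_speech_timeout is None:
        effective.no_speech_timeout = 5  # documented default
    return effective

agent = SpeechSettings(model="phone_call",
                       end_of_speech_sensitivity=50,
                       barge_in=False)
page = SpeechSettings(end_of_speech_sensitivity=80)

# Order: agent, flow, page, fulfillment. The page's sensitivity wins;
# everything it leaves unset falls back to the agent level or the default.
merged = resolve(agent, None, page, None)
```

With these inputs, `merged` keeps the agent's `phone_call` model and
`barge_in=False`, takes the page's sensitivity of 80, and falls back to the
default 5-second no speech timeout.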
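Example: barge-in and cancellation in the message queue
-------------------------------------------------------

The queue semantics described under **Barge-in** and **Allow cancellation of
partial response playback** can be sketched as follows. This is a hypothetical
illustration, not the integration's actual implementation: the `QueuedMessage`
and `MessageQueue` names are invented, and the sketch ignores playback timing
(a real integration only cancels a message whose playback hasn't completed).

```python
from dataclasses import dataclass
from typing import List

@dataclass
class QueuedMessage:
    text: str
    barge_in: bool = False          # inherited once an earlier message enables it
    allow_cancellation: bool = False
    cancelled: bool = False

class MessageQueue:
    """Hypothetical sketch of the queue behavior described in this guide."""

    def __init__(self) -> None:
        self.messages: List[QueuedMessage] = []

    def enqueue(self, message: QueuedMessage) -> None:
        # Adding a message cancels any pending message whose fulfillment
        # allows cancellation of partial response playback.
        for pending in self.messages:
            if pending.allow_cancellation and not pending.cancelled:
                pending.cancelled = True
        # Once any queued message has barge-in enabled, all following
        # messages in the queue also have barge-in enabled.
        if any(m.barge_in for m in self.messages):
            message.barge_in = True
        self.messages.append(message)

queue = MessageQueue()
# An initial partial response with barge-in enabled and cancellation allowed.
queue.enqueue(QueuedMessage("Please wait while I look that up.",
                            barge_in=True, allow_cancellation=True))
# A webhook later produces the real answer: the initial message is cancelled,
# and the new message inherits barge-in.
queue.enqueue(QueuedMessage("Here is your result."))
```

After the second `enqueue`, the initial message is marked cancelled and the
second message has barge-in enabled, matching the behavior described above.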