# Service use best practices

Last updated (UTC): 2025-08-18.

This guide provides best practices for using the Dialogflow service. These guidelines are designed for greater efficiency and accuracy as well as optimal response times from the service.

You should also see the [general agent design](/dialogflow/cx/docs/concept/agent-design) guide for all agent types, and the [voice agent design](/dialogflow/cx/docs/concept/voice-agent-design) guide specifically for designing voice agents.

Productionization
-----------------

Before running your agent in production, be sure to implement the following best practices:

- [Use agent versions](#agent-version)
- [Reuse session clients](#session-client-reuse)
- [Implement error handling with retries](#retries)
Agent versions
--------------

You should always use agent versions for your production traffic. See [Versions and environments](/dialogflow/cx/docs/concept/version) for details.

Create agent backup
-------------------

Keep an up-to-date [exported](/dialogflow/cx/docs/concept/agent#export) agent backup. This will allow you to quickly recover if you or your team members accidentally delete the agent or the project.

Client reuse
------------

You can improve the performance of your application by reusing `*Client` client library instances for the duration of your application's execution lifetime.

Most importantly, you can improve the performance of detect intent API calls by reusing a `SessionsClient` client library instance. For more information on this, see the [Best Practices with Client Libraries guide](/apis/docs/client-libraries-best-practices#reuse_client_objects_and_sessions).

API error retries
-----------------

When calling API methods, you may receive error responses. Some errors should be retried, because they are often due to transient issues. There are two types of errors:

- [Cloud API errors](/apis/design/errors).
- Errors sent from your [webhook service](/dialogflow/cx/docs/concept/webhook#errors).

In addition, you should implement [exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff) for retries. This allows your system to find an acceptable rate while the API service is under heavy load.

### Cloud API errors

If you are using a Google-supplied [client library](/dialogflow/cx/docs/reference/library/overview), Cloud API error retries with exponential backoff are implemented for you.

If you have implemented your own client library using REST or gRPC, you must implement retries for your client. For information on the errors that you should or should not retry, see [API Improvement Proposals: Automatic retry configuration](https://google.aip.dev/194).
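If you do implement retries yourself, the retry-with-exponential-backoff approach described above can be sketched with standard-library Python alone. This is an illustrative sketch, not part of any Dialogflow client library: `call_with_backoff` and `TransientError` are hypothetical names, and the callable stands in for whatever API request your client makes.

```python
import random
import time


class TransientError(Exception):
    """Raised by the wrapped call for errors worth retrying (e.g. 429, 503)."""


def call_with_backoff(call, max_attempts=5, base_delay=0.5, max_delay=32.0):
    """Retry a zero-argument `call` with exponential backoff and full jitter.

    Only TransientError triggers a retry; any other exception, or the
    final failed attempt, propagates to the caller.
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except TransientError:
            if attempt == max_attempts - 1:
                raise
            # The delay doubles on each attempt (capped at max_delay), and a
            # random jitter keeps many clients from retrying in lockstep.
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))
```

Full jitter (sleeping a random fraction of the computed delay) is one common variant; the important properties are the doubling delay and the cap, which let a fleet of clients settle on a rate the service can absorb.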
### Webhook errors

If your API call triggers a webhook call, your webhook may return an error. Even if you are using a Google-supplied client library, webhook errors will not be retried automatically. Your code should retry `503 Service Unavailable` errors received from your webhook. See the [webhook service](/dialogflow/cx/docs/concept/webhook#errors) documentation for information on the types of webhook errors and how to check for them.

Load testing
------------

It is a best practice to execute load testing for your system before you release code to production. Consider these points before implementing your load tests:

Calling Dialogflow securely from an end-user device
---------------------------------------------------

You should never store your private keys used to access the Dialogflow API on an end-user device. This applies to storing keys on the device directly and to hard coding keys in applications. When your client application needs to call the Dialogflow API, it should send requests to a developer-owned proxy service on a secure platform. The proxy service can make the actual, authenticated Dialogflow calls.

For example, you should not create a mobile application that calls Dialogflow directly. Doing so would require you to store private keys on an end-user device. Your mobile application should instead pass requests through a secure proxy service.

| **Note:** Some Dialogflow integrations, like Dialogflow Messenger, provide both client code and a proxy service, similar to the description above. The proxy service only responds to requests when the integration is enabled. To improve utility of these integrations, the proxy service may not require authentication. The proxy service API is limited to a small subset of Dialogflow API methods that are required for the integration. In addition, the proxy service never provides Google Cloud or Dialogflow administrative API access without requiring authentication. This limited proxy API reduces the vulnerability for abuse.
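The proxy pattern described above can be sketched as a single request handler. Everything here is hypothetical for illustration: `handle_chat_request` is a made-up handler name, and `detect_intent` is an injected server-side callable standing in for the authenticated Dialogflow call, whose credentials never leave the server.

```python
import json


def handle_chat_request(body, detect_intent):
    """Proxy endpoint body for an end-user device.

    The device sends only a session ID and the user's text; the server-side
    `detect_intent` callable (hypothetical signature) holds the credentials
    and performs the authenticated Dialogflow call.
    """
    try:
        payload = json.loads(body)
        session_id = payload["session_id"]
        text = payload["text"]
    except (ValueError, KeyError):
        return 400, {"error": "expected JSON with session_id and text"}
    # Only this one narrow operation is exposed to devices; the proxy never
    # forwards administrative or project-level API calls.
    reply = detect_intent(session_id=session_id, text=text)
    return 200, {"reply": reply}
```

The design point is the narrow surface: the device can only ask for a conversational reply, so even a stolen client build exposes no keys and no administrative access.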
Performance
-----------

This section outlines performance information for various operations within Dialogflow. Understanding latency is important for designing responsive agents and setting realistic performance expectations, although these values are not part of the Dialogflow SLA.

When building monitoring and alerting tools, note that Large Language Models (LLMs) and speech processing are typically handled using streaming methods. Responses are sent to the client as soon as possible, often much earlier than the total duration of the method call. For more information, see [Best practices with large language models (LLMs)](/vertex-ai/generative-ai/docs/learn/prompt-best-practices).

### Performance per operation

The following table provides information about the typical performance of Dialogflow operations:

**Key Notes:**

- **Streaming:** For streaming calls (speech recognition and synthesis), data is processed as it arrives, and responses are returned as soon as possible. This means the initial response is typically much faster than the total time of the call.
- **Playbooks:** An LLM prompt is constructed based on the playbook instructions, the conversation context, and the tool input. Multiple LLM prompts can be executed in a single playbook call. This is why playbook execution time is variable, depending on the number of prompts issued and the complexity of the calls.

### Important Latency Considerations

- **No Latency Guarantees:** Dialogflow SLAs do not consider latency, even under Provisioned Throughput.
- **LLM Latency:** Be aware that LLM processing can introduce significant latency. Factor this into your agent design and user expectations.
- **Monitoring and Alerting:** When setting up monitoring and alerting, account for the streamed nature of responses from LLMs and speech services. Don't assume full response time is equal to time to first response.
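The distinction between time to first response and total duration can be made concrete with a small measurement helper. This is a sketch under assumptions: `measure_stream` is a hypothetical name, and any iterable of response chunks (such as a streaming API response) will do.

```python
import time


def measure_stream(chunks):
    """Consume a streaming response and report the two latencies that
    monitoring should track separately: time to first chunk, and total
    duration of the call.
    """
    start = time.monotonic()
    first = None
    for _ in chunks:
        if first is None:
            # Record latency to the first chunk only once.
            first = time.monotonic() - start
    total = time.monotonic() - start
    return first, total
```

For a streamed LLM or speech response, `first` is typically what the user perceives as responsiveness, while `total` is what a naive request timer would report; alerting on the latter alone overstates user-visible latency.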