[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["很难理解","hardToUnderstand","thumb-down"],["信息或示例代码不正确","incorrectInformationOrSampleCode","thumb-down"],["没有我需要的信息/示例","missingTheInformationSamplesINeed","thumb-down"],["翻译问题","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2025-08-28。"],[],[],null,["# Resolving resource limit issues in Cloud Service Mesh\n=====================================================\n\nThis section explains common Cloud Service Mesh problems and how to resolve\nthem. If you need additional assistance, see\n[Getting support](/service-mesh/v1.22/docs/getting-support).\n\nCloud Service Mesh resource limit problems can be caused by any of the\nfollowing:\n\n- `LimitRange` objects created in the `istio-system` namespace or any namespace with automatic sidecar injection enabled.\n- User-defined limits that are set too low.\n- Nodes run out of memory or other resources.\n\nPotential symptoms of resource problems:\n\n- Cloud Service Mesh repeatedly not receiving configuration from the control plane indicated by the error, `Envoy proxy NOT ready`. Seeing this error a few times at startup is normal, but otherwise it is a concern.\n- Networking problems with some pods or nodes that become unreachable.\n- `istioctl proxy-status` showing `STALE` statuses in the output.\n- `OOMKilled` messages in the logs of a node.\n- Memory usage by containers: `kubectl top pod POD_NAME --containers`.\n- Memory usage by pods inside a node: `kubectl top node my-node`.\n- Envoy out of memory: `kubectl get pods` shows status `OOMKilled` in the output.\n\nSidecars take a long time to receive configuration\n--------------------------------------------------\n\nSlow configuration propagation can occur due to insufficient resources allocated\nto `istiod` or an excessively large cluster size.\n\nThere are several possible solutions to this problem:\n\n1. For in-cluster Cloud Service Mesh, if your monitoring tools (prometheus,\n stackdriver, etc.) show high utilization of a resource by `istiod`, increase\n the allocation of that resource, for example increase the CPU or memory limit\n of the `istiod` deployment. This is a temporary solution and we recommended\n that you investigate methods for reducing resource consumption.\n\n2. If you encounter this issue in a large cluster or deployment, reduce the\n amount of configuration state pushed to each proxy by configuring\n [Sidecar resources](https://istio.io/v1.22/docs/reference/config/networking/sidecar/).\n\n3. For in-cluster Cloud Service Mesh, if the problem persists, try\n horizontally scaling `istiod`.\n\n4. If all other troubleshooting steps fail to resolve the problem, report a bug\n detailing your deployment and the observed problems. Follow\n [these steps](https://github.com/istio/istio/wiki/Analyzing-Istio-Performance)\n to include a CPU/Memory profile in the bug report if possible, along with a\n detailed description of cluster size, number of pods, and number of services."]]