Troubleshoot VM out-of-memory errors
This page describes Dataproc on Compute Engine VM out-of-memory (OOM) errors
and explains steps you can take to troubleshoot and resolve them.
OOM error effects
When Dataproc on Compute Engine VMs encounter out-of-memory
(OOM) errors, the effects include the following conditions:
Master and worker VMs freeze for a period of time.
Master VM OOM errors cause jobs to fail with "task not acquired" errors.
Worker VM OOM errors cause the loss of the node in YARN and HDFS, which delays
Dataproc job execution.
Identify and confirm memory protection terminations
You can use the following information to identify and confirm
job terminations due to memory pressure.
Process terminations
Processes that Dataproc memory protection terminates exit
with code 137 or 143.
When Dataproc terminates a process due to memory pressure,
the following actions or conditions can occur:
Dataproc increments the
dataproc.googleapis.com/node/problem_count cumulative metric, and sets
the reason to ProcessKilledDueToMemoryPressure.
See Dataproc resource metric collection.
Dataproc writes a google.dataproc.oom-killer log with the message:
"A process is killed due to memory pressure: PROCESS_NAME".
To view these messages, enable Logging, then use the
following log filter (a sample command for running the filter follows it):
resource.type="cloud_dataproc_cluster"
resource.labels.cluster_name="CLUSTER_NAME"
resource.labels.cluster_uuid="CLUSTER_UUID"
jsonPayload.message:"A process is killed due to memory pressure:"
Master node or driver node pool job terminations
When a Dataproc master node or driver node pool job
terminates due to memory pressure, the job fails with the error
Driver received SIGTERM/SIGKILL signal and exited with INT
code, where INT is the integer exit code. To view these messages, enable
Logging, then use the following log filter:
resource.type="cloud_dataproc_cluster"
resource.labels.cluster_name="CLUSTER_NAME"
resource.labels.cluster_uuid="CLUSTER_UUID"
jsonPayload.message:"Driver received SIGTERM/SIGKILL signal and exited with"
Check the
google.dataproc.oom-killer log or the dataproc.googleapis.com/node/problem_count
metric to confirm that Dataproc memory protection terminated the
job (see Process terminations). A sample metric query follows.
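As an illustration, one way to inspect this metric is to query the Cloud
Monitoring timeSeries.list REST endpoint. This sketch assumes curl and the
Google Cloud CLI are available, assumes the reason is exposed as the metric
label reason, and uses PROJECT_ID, START_TIME, and END_TIME (RFC 3339
timestamps) as placeholders:
curl -G \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://monitoring.googleapis.com/v3/projects/PROJECT_ID/timeSeries" \
  --data-urlencode 'filter=metric.type="dataproc.googleapis.com/node/problem_count" AND metric.labels.reason="ProcessKilledDueToMemoryPressure"' \
  --data-urlencode 'interval.startTime=START_TIME' \
  --data-urlencode 'interval.endTime=END_TIME'
A nonzero count with this reason indicates that memory protection terminated a
process on the cluster.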
Solutions:
If the cluster has a
driver pool,
increase driver-required-memory-mb to match actual job memory usage (see the
example commands after this list).
If the cluster does not have a driver pool, recreate the cluster, lowering the
maximum number of concurrent jobs
running on the cluster.
Use a master node machine type with increased memory.
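For illustration only, the following sketches show how these adjustments might
look with the Google Cloud CLI; the flag, property, machine type, and memory
values shown are examples to adapt to your workload, not prescriptions:
# Submit a job to a cluster that has a driver pool, requesting more
# driver memory (in MB).
gcloud dataproc jobs submit spark \
  --cluster=CLUSTER_NAME \
  --region=REGION \
  --driver-required-memory-mb=4096 \
  --class=MAIN_CLASS \
  --jars=JAR_URI
# Recreate a cluster that has no driver pool with fewer concurrent jobs
# and a higher-memory master machine type.
gcloud dataproc clusters create CLUSTER_NAME \
  --region=REGION \
  --master-machine-type=n2-highmem-8 \
  --properties=dataproc:dataproc.scheduler.max-concurrent-jobs=5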
Worker node YARN container terminations
Dataproc writes the following message in the YARN
resource manager: CONTAINER_ID exited with code
EXIT_CODE. To view these messages, enable
Logging, then use the following log filter:
resource.type="cloud_dataproc_cluster"
resource.labels.cluster_name="CLUSTER_NAME"
resource.labels.cluster_uuid="CLUSTER_UUID"
jsonPayload.message:"container" AND "exited with code" AND "which potentially signifies memory pressure on NODE
If a container exited with code INT, check the
google.dataproc.oom-killer log or the dataproc.googleapis.com/node/problem_count
metric to confirm that Dataproc memory protection terminated the job
(see Process terminations).
Solutions:
Check that container sizes are configured correctly.
Consider lowering yarn.nodemanager.resource.memory-mb. This property
controls the amount of memory used for scheduling YARN containers (see the
example after this list).
If job containers consistently fail, check if data skew is causing
increased usage of specific containers. If so, repartition the job or
increase worker size to accommodate additional memory requirements.
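As a hedged sketch, one way to lower the memory that YARN advertises on each
worker is to set the property when recreating the cluster; the value shown is
an example in MB:
# Recreate the cluster with less memory offered to YARN for container
# scheduling on each worker node.
gcloud dataproc clusters create CLUSTER_NAME \
  --region=REGION \
  --properties='yarn:yarn.nodemanager.resource.memory-mb=24576'
To add memory instead, for example when repartitioning alone doesn't resolve
data skew, a larger worker machine type (such as
--worker-machine-type=n2-highmem-8) is one option.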
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-25 UTC."],[[["\u003cp\u003eThis document details how to troubleshoot and resolve out-of-memory (OOM) errors that can occur in Dataproc on Compute Engine VMs.\u003c/p\u003e\n"],["\u003cp\u003eOOM errors in Dataproc VMs can result in frozen VMs, job failures, and delays in job execution due to node loss on YARN HDFS.\u003c/p\u003e\n"],["\u003cp\u003eDataproc offers memory protection that terminates processes or containers when VMs experience memory pressure, using exit codes 137 or 143 to indicate terminations.\u003c/p\u003e\n"],["\u003cp\u003eJob terminations due to memory pressure can be confirmed by reviewing the \u003ccode\u003egoogle.dataproc.oom-killer\u003c/code\u003e log or checking the \u003ccode\u003edataproc.googleapis.com/node/problem_count\u003c/code\u003e metric.\u003c/p\u003e\n"],["\u003cp\u003eSolutions to memory pressure issues include increasing driver memory, recreating clusters with lower concurrent job limits, or adjusting YARN container memory configurations.\u003c/p\u003e\n"]]],[],null,["# Troubleshoot VM out-of-memory errors\n\nThis page provides information on Dataproc on\nCompute Engine VM out-of-memory (OOM) errors, and explains steps you can take\nto troubleshoot and resolve OOM errors.\n\nOOM error effects\n-----------------\n\nWhen Dataproc on Compute Engine VMs encounter out-of-memory\n(OOM) errors, the effects include the following conditions:\n\n- Master and worker VMs freeze for a period of time.\n\n- Master VMs OOM errors cause jobs to fail with \"task not acquired\" errors.\n\n- Worker VM OOM errors cause a loss of the node on YARN HDFS, which delays\n Dataproc job execution.\n\nYARN memory controls\n--------------------\n\n[Apache YARN](https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/YARN.html)\nprovides the following types of\n[memory controls](https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/NodeManagerCGroupsMemory.html):\n\n- Polling based (legacy)\n- Strict\n- Elastic\n\nBy default, Dataproc doesn't set\n`yarn.nodemanager.resource.memory.enabled` to enable YARN memory controls, for\nthe following reasons:\n\n- Strict memory control can cause the termination of containers when there is sufficient memory if container sizes aren't configured correctly.\n- Elastic memory control requirements can adversely affect job execution.\n- YARN memory controls can fail to prevent OOM errors when processes aggressively consume memory.\n\nDataproc memory protection\n--------------------------\n\nWhen a Dataproc cluster VM is under memory pressure,\nDataproc memory protection terminates processes or containers\nuntil the OOM condition is removed.\n\nDataproc provides memory protection for the following cluster\nnodes in the following\n[Dataproc on Compute Engine image versions](/dataproc/docs/concepts/versioning/dataproc-version-clusters):\n\n| Use Dataproc image versions with memory protection to help avoid VM OOM errors.\n\n### Identify and confirm memory protection terminations\n\nYou can use the following information to identify and confirm\njob terminations due to memory pressure.\n\n#### Process 
terminations\n\n- Processes that Dataproc memory protection terminates exit\n with code `137` or `143`.\n\n- When Dataproc terminates a process due to memory pressure,\n the following actions or conditions can occur:\n\n - Dataproc increments the `dataproc.googleapis.com/node/problem_count` cumulative metric, and sets the `reason` to `ProcessKilledDueToMemoryPressure`. See [Dataproc resource metric collection](/dataproc/docs/guides/dataproc-metrics#dataproc_resource_metric_collection).\n - Dataproc writes a `google.dataproc.oom-killer` log with the message: `\"A process is killed due to memory pressure: `\u003cvar translate=\"no\"\u003eprocess name\u003c/var\u003e. To view these messages, enable Logging, then use the following log filter: \n\n ```\n resource.type=\"cloud_dataproc_cluster\"\n resource.labels.cluster_name=\"CLUSTER_NAME\"\n resource.labels.cluster_uuid=\"CLUSTER_UUID\"\n jsonPayload.message:\"A process is killed due to memory pressure:\"\n ```\n\n#### Master node or driver node pool job terminations\n\n- When a Dataproc master node or driver node pool job\n terminates due to memory pressure, the job fails with error\n `Driver received SIGTERM/SIGKILL signal and exited with `\u003cvar translate=\"no\"\u003eINT\u003c/var\u003e\n code. To view these messages, enable Logging, then use the\n following log filter:\n\n ```\n resource.type=\"cloud_dataproc_cluster\"\n resource.labels.cluster_name=\"CLUSTER_NAME\"\n resource.labels.cluster_uuid=\"CLUSTER_UUID\"\n jsonPayload.message:\"Driver received SIGTERM/SIGKILL signal and exited with\"\n \n ```\n\n \u003cbr /\u003e\n\n - Check the `google.dataproc.oom-killer` log or the `dataproc.googleapis.com/node/problem_count` to confirm that Dataproc Memory Protection terminated the job (see [Process terminations](#process_terminations)).\n\n **Solutions:**\n - If the cluster has a [driver pool](/dataproc/docs/guides/node-groups/dataproc-driver-node-groups), increase `driver-required-memory-mb` to actual job memory usage.\n - If the cluster does not have a driver pool, recreate the cluster, lowering the [maximum number of concurrent jobs](/dataproc/docs/concepts/jobs/life-of-a-job#job_concurrency) running on the cluster.\n - Use a master node machine type with increased memory.\n\n#### Worker node YARN container terminations\n\n- Dataproc writes the following message in the YARN\n resource manager: \u003cvar translate=\"no\"\u003econtainer id\u003c/var\u003e` exited with code\n `\u003cvar translate=\"no\"\u003eEXIT_CODE\u003c/var\u003e. To view these messages, enable\n Logging, then use the following log filter:\n\n ```\n resource.type=\"cloud_dataproc_cluster\"\n resource.labels.cluster_name=\"CLUSTER_NAME\"\n resource.labels.cluster_uuid=\"CLUSTER_UUID\"\n jsonPayload.message:\"container\" AND \"exited with code\" AND \"which potentially signifies memory pressure on NODE\n ```\n- If a container exited with `code `\u003cvar translate=\"no\"\u003eINT\u003c/var\u003e, check the\n `google.dataproc.oom-killer` log or the `dataproc.googleapis.com/node/problem_count`\n to confirm that Dataproc Memory Protection terminated the job\n (see [Process terminations](#process_terminations)).\n\n **Solutions:**\n - Check that container sizes are configured correctly.\n - Consider lowering `yarn.nodemanager.resource.memory-mb`. This property controls the amount of memory used for scheduling YARN containers.\n - If job containers consistently fail, check if data skew is causing increased usage of specific containers. 
If so, repartition the job or increase worker size to accommodate additional memory requirements."]]