This document provides guidance on troubleshooting common issues that prevent Google Cloud Serverless for Apache Spark batch workloads and interactive sessions from starting.
Overview
Typically, when a batch or session fails to start, it reports the following error message:
Driver compute node failed to initialize for batch in 600 seconds
This error message indicates that the Spark driver couldn't start within the default timeout period of 600 seconds (10 minutes). Common causes are related to service account permissions, resource availability, network configuration, or Spark properties.
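As a quick first check from the command line, you can describe the failed batch and review its state and status message. This is a minimal sketch using the standard gcloud command; BATCH_ID, REGION, and PROJECT_ID are placeholders for your own values:

    # Show the batch state and the status message reported when the driver fails to initialize.
    gcloud dataproc batches describe BATCH_ID --region=REGION --project=PROJECT_ID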
Batch and session start failure causes and troubleshooting steps
The following sections list common causes of batch and session start failures with troubleshooting tips to help you resolve the issues.
Insufficient service account permissions
The service account used by your Serverless for Apache Spark batch or session requires specific IAM roles that include permissions for Serverless for Apache Spark operation and access to Google Cloud resources. If the service account lacks the necessary roles, the Spark driver for the batch or session can fail to initialize.
- Required Worker role: The batch or session service account must have the Dataproc Worker role (roles/dataproc.worker). This role contains the minimum permissions needed for Serverless for Apache Spark to provision and manage compute resources. Example grant commands are shown after this list.
- Data Access Permissions: If your Spark application reads from or writes to Cloud Storage or BigQuery, the service account needs roles related to those services:
  - Cloud Storage: The Storage Object Viewer role (roles/storage.objectViewer) is needed for reading, and the Storage Object Creator role (roles/storage.objectCreator) or Storage Object Admin role (roles/storage.objectAdmin) is needed for writing.
  - BigQuery: The BigQuery Data Viewer role (roles/bigquery.dataViewer) is needed for reading, and the BigQuery Data Editor role (roles/bigquery.dataEditor) is needed for writing.
- Logging Permissions: The service account needs a role with permission to write logs to Cloud Logging. Typically, the Logging Writer role (roles/logging.logWriter) is sufficient.
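The following is a minimal sketch of how you might grant the worker and logging roles from the command line, using the standard gcloud projects add-iam-policy-binding command. PROJECT_ID and SA_EMAIL (the service account email) are placeholders for your own values:

    # Grant the Dataproc Worker role to the batch or session service account.
    gcloud projects add-iam-policy-binding PROJECT_ID \
        --member="serviceAccount:SA_EMAIL" \
        --role="roles/dataproc.worker"

    # Grant the Logging Writer role so the driver can write logs to Cloud Logging.
    gcloud projects add-iam-policy-binding PROJECT_ID \
        --member="serviceAccount:SA_EMAIL" \
        --role="roles/logging.logWriter"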
Troubleshooting tips:
- Identify the batch or session service account. If not specified, it defaults to the Compute Engine default service account.
- Go to the IAM & Admin > IAM page in the Google Cloud console, find the batch or session service account, and then verify that it has the necessary roles. Grant any missing roles. You can also check the granted roles from the command line, as shown after this list.
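As a command-line alternative, the following standard gcloud query lists the project-level roles bound to the service account. SA_EMAIL and PROJECT_ID are placeholders for your own values:

    # List all project-level roles bound to the batch or session service account.
    gcloud projects get-iam-policy PROJECT_ID \
        --flatten="bindings[].members" \
        --filter="bindings.members:SA_EMAIL" \
        --format="table(bindings.role)"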
Insufficient quota
Exceeding project or region-specific quotas for Google Cloud Serverless for Apache Spark or other Google Cloud resources can prevent new batches or sessions from starting.
Troubleshooting tips:
- Review the Google Cloud Serverless for Apache Spark quotas page to understand limits on concurrent batches, DCUs, and shuffle storage.
- You can also use the gcloud compute quotas list command to view current usage and limits for your project and region:

    gcloud compute quotas list --project=PROJECT_ID --filter="service:dataproc.googleapis.com"

- If you repeatedly hit quota limits, consider requesting a quota increase through the Google Cloud console. A supplementary regional quota check is sketched after this list.
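If you want an additional command-line view of regional resource usage, the following standard command prints Compute Engine quota usage and limits for a region. REGION and PROJECT_ID are placeholders; note that this check does not report Serverless for Apache Spark-specific quotas such as DCUs, which are shown on the quotas page and in the console:

    # Print regional Compute Engine quota usage and limits as a supplementary check.
    gcloud compute regions describe REGION --project=PROJECT_ID --format="yaml(quotas)"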
Network configuration issues
Incorrect network settings, such as VPC configuration, Private Google Access, or firewall rules, can block the Spark driver from initializing or connecting to necessary services.
Troubleshooting tips:
- Verify that the VPC network and subnet specified for your batch or session are correctly configured and have sufficient IP addresses available.
- If your batch or session needs to access Google APIs and services without traversing the public internet, verify that Private Google Access is enabled for the subnet, as shown in the example commands after this list.
- Review your VPC firewall rules to verify they don't inadvertently block internal communication or egress to Google APIs or external services that are required by your Spark application.
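The following is a minimal sketch of these checks using standard gcloud commands. SUBNET_NAME, NETWORK_NAME, REGION, and PROJECT_ID are placeholders for your own values:

    # Check whether Private Google Access is enabled on the subnet (prints True or False).
    gcloud compute networks subnets describe SUBNET_NAME \
        --region=REGION --project=PROJECT_ID \
        --format="get(privateIpGoogleAccess)"

    # Enable Private Google Access on the subnet if it is disabled.
    gcloud compute networks subnets update SUBNET_NAME \
        --region=REGION --project=PROJECT_ID \
        --enable-private-ip-google-access

    # List firewall rules on the VPC network to review for rules that block required traffic.
    gcloud compute firewall-rules list \
        --project=PROJECT_ID \
        --filter="network:NETWORK_NAME"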
Invalid Spark properties or application code issues
Misconfigured Spark properties, particularly those related to driver resources, or issues within your Spark application code can lead to startup failures.
Troubleshooting tips:
- Check spark.driver.memory and spark.driver.cores values. Verify they are within reasonable limits and align with available DCUs. Excessively large values for these properties can lead to resource exhaustion and initialization failures. Remove any unnecessary or experimental Spark properties to simplify debugging.
- Try running a "Hello World" Spark application to determine if the issue is with your environment setup or due to code complexity or errors. A minimal example is sketched after this list.
- Verify that all application JARs, Python files, or dependencies specified for your batch or session are correctly located in Cloud Storage and are accessible by the batch or session service account.
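The following is a minimal sketch of such a check, assuming you stage a simple PySpark script in a Cloud Storage bucket. The hello.py contents, BUCKET path, REGION, and PROJECT_ID are illustrative placeholders; the gcloud storage and gcloud dataproc batches submit pyspark commands are standard gcloud:

    # hello.py: a minimal PySpark application, for example:
    #     from pyspark.sql import SparkSession
    #     spark = SparkSession.builder.appName("hello-world").getOrCreate()
    #     print("Hello from Serverless for Apache Spark")
    #     spark.stop()

    # Stage the script in Cloud Storage and submit it as a batch workload with default settings.
    gcloud storage cp hello.py gs://BUCKET/hello.py
    gcloud dataproc batches submit pyspark gs://BUCKET/hello.py \
        --region=REGION --project=PROJECT_ID

    # Confirm that application files and dependencies are readable at their Cloud Storage paths.
    gcloud storage ls gs://BUCKET/path/to/dependencies/

If this minimal batch starts successfully, the environment, network, and service account are likely configured correctly, and the problem is more likely in your application code or its Spark properties.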
Check logs
A critical step in diagnosing batch creation failures is to examine the detailed logs in Cloud Logging.
- Go to the Cloud Logging page in the Google Cloud console.
- Filter for Serverless for Apache Spark Batches or Sessions:
  - In the Resource drop-down, select Cloud Dataproc Batch or Cloud Dataproc Session.
  - Filter by batch_id or session_id for the failed batch or session. You can also filter by project_id and location (region).
- Look for log entries with jsonPayload.component="driver". These logs often contain specific error messages or stack traces that can pinpoint the reason for the driver initialization failure before the 600-second timeout occurs. An equivalent query that you can run from the command line is sketched after these steps.
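The following is a command-line sketch of the same driver-log query using gcloud logging read, assuming the batch resource type is cloud_dataproc_batch; BATCH_ID, REGION, and PROJECT_ID are placeholders (for a session, adjust the resource type and label accordingly):

    # Read driver logs for a failed batch from Cloud Logging.
    gcloud logging read \
        'resource.type="cloud_dataproc_batch" AND resource.labels.batch_id="BATCH_ID" AND resource.labels.location="REGION" AND jsonPayload.component="driver"' \
        --project=PROJECT_ID \
        --limit=100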