Troubleshoot batch and session creation failures

This document provides guidance on troubleshooting common issues that prevent Google Cloud Serverless for Apache Spark batch workloads and interactive sessions from starting.

Overview

Typically, when a batch or session fails to start, it reports the following error message:

Driver compute node failed to initialize for batch in 600 seconds

This error message indicates that the Spark driver couldn't start within the default timeout period of 600 seconds (10 minutes). Common causes include insufficient service account permissions, exhausted resource quotas, network misconfiguration, and invalid Spark properties.
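
To see the state and failure message that the service records for a specific batch, you can describe the batch resource with gcloud. A minimal sketch; BATCH_ID and REGION are placeholders, and the field names follow the Batches API:

  # Show the batch state and, for failed batches, the reported reason.
  gcloud dataproc batches describe BATCH_ID \
      --region=REGION \
      --format="yaml(state, stateMessage)"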

Batch and session start failure causes and troubleshooting steps

The following sections list common causes of batch and session start failures with troubleshooting tips to help you resolve the issues.

Insufficient service account permissions

The service account used by your Serverless for Apache Spark batch or session requires specific IAM roles that include permissions for Serverless for Apache Spark operation and access to Google Cloud resources. If the service account lacks the necessary roles, the Spark driver for the batch or session can fail to initialize.

  • Required Worker role: The batch or session service account must have the Dataproc Worker role (roles/dataproc.worker). This role contains the minimum permissions needed for Serverless for Apache Spark to provision and manage compute resources.
  • Data Access Permissions: If your Spark application reads from or writes to Cloud Storage or BigQuery, the service account needs roles related to those services:
    • Cloud Storage: The Storage Object Viewer role (roles/storage.objectViewer) is needed for reading, and the Storage Object Creator role (roles/storage.objectCreator) or the Storage Object Admin role (roles/storage.objectAdmin) is needed for writing.
    • BigQuery: The BigQuery Data Viewer role (roles/bigquery.dataViewer) is needed for reading, and the BigQuery Data Editor role (roles/bigquery.dataEditor) is needed for writing.
  • Logging Permissions: The service account needs a role with permission to write logs to Cloud Logging. Typically, the Logging Writer role (roles/logging.logWriter) is sufficient.

Troubleshooting tips:

  • Identify the service account that your batch or session uses. If you don't specify one, Serverless for Apache Spark uses the Compute Engine default service account.
  • Verify that the service account has the Dataproc Worker role plus the data access and logging roles that your application needs, and grant any missing roles (see the example after this list).
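
For example, you can list the roles granted to the service account and grant the Dataproc Worker role with gcloud. This is a minimal sketch; PROJECT_ID and SA_NAME are placeholders for your project ID and service account name:

  # List the roles currently granted to the service account.
  gcloud projects get-iam-policy PROJECT_ID \
      --flatten="bindings[].members" \
      --filter="bindings.members:SA_NAME@PROJECT_ID.iam.gserviceaccount.com" \
      --format="table(bindings.role)"

  # Grant the Dataproc Worker role if it is missing.
  gcloud projects add-iam-policy-binding PROJECT_ID \
      --member="serviceAccount:SA_NAME@PROJECT_ID.iam.gserviceaccount.com" \
      --role="roles/dataproc.worker"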

Insufficient quota

Exceeding project or region-specific quotas for Google Cloud Serverless for Apache Spark or other Google Cloud resources can prevent new batches or sessions from starting.

Troubleshooting tips:

  • Review the Google Cloud Serverless for Apache Spark quotas page to understand limits on concurrent batches, Data Compute Units (DCUs), and shuffle storage.

    • You can also view current usage and limits on the IAM & Admin > Quotas page in the Google Cloud console by filtering for the Dataproc API, or inspect regional Compute Engine quotas with gcloud:
      gcloud compute regions describe REGION --project=PROJECT_ID --format="yaml(quotas)"

  • If you repeatedly hit quota limits, consider requesting a quota increase through the Google Cloud console.

Network configuration issues

Incorrect network settings, such as VPC configuration, Private Google Access, or firewall rules, can block the Spark driver from initializing or connecting to necessary services.

Troubleshooting tips:

  • Verify that the VPC network and subnet specified for your batch or session are correctly configured and have sufficient IP addresses available.

  • If your batch or session needs to access Google APIs and services without traversing the public internet, verify that Private Google Access is enabled for the subnet (see the example after this list).

  • Review your VPC firewall rules to verify they don't inadvertently block internal communication or egress to Google APIs or external services that are required by your Spark application.
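
For example, the following commands check these network settings. A sketch; SUBNET_NAME, NETWORK_NAME, and REGION are placeholders:

  # Check whether Private Google Access is enabled on the subnet
  # (prints "True" when enabled).
  gcloud compute networks subnets describe SUBNET_NAME \
      --region=REGION \
      --format="get(privateIpGoogleAccess)"

  # List the firewall rules attached to the network to spot rules that
  # might block internal communication or required egress.
  gcloud compute firewall-rules list \
      --filter="network:NETWORK_NAME"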

Invalid Spark properties or application code issues

Misconfigured Spark properties, particularly those related to driver resources, or issues within your Spark application code can lead to startup failures.

Troubleshooting tips:

  • Check spark.driver.memory and spark.driver.cores values. Verify they are within reasonable limits and align with available DCUs. Excessively large values for these properties can lead to resource exhaustion and initialization failures. Remove any unnecessary or experimental Spark properties to simplify debugging.

  • Try running a minimal "Hello World" Spark application (see the sketch after this list) to determine whether the issue lies in your environment setup or in your application code.

  • Verify that all application JARs, Python files, or dependencies specified for your batch or session are correctly located in Cloud Storage and are accessible by the batch or session service account.
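
For example, you can submit a minimal PySpark application to check whether batches start at all in your environment. This is a sketch under assumptions: hello_spark.py is a hypothetical file name, REGION is a placeholder, and the driver property values shown are illustrative, not required:

  # hello_spark.py (created beforehand) contains a minimal PySpark app:
  #
  #   from pyspark.sql import SparkSession
  #   spark = SparkSession.builder.appName("hello-world").getOrCreate()
  #   print("row count:", spark.range(100).count())
  #   spark.stop()
  #
  # Submit it as a Serverless for Apache Spark batch with modest driver
  # resources. gcloud stages the local file to Cloud Storage; you can
  # also upload the file yourself and pass a gs:// URI instead.
  gcloud dataproc batches submit pyspark hello_spark.py \
      --region=REGION \
      --properties="spark.driver.cores=4,spark.driver.memory=4g"

If this minimal batch starts successfully, the environment is likely healthy, and the failure probably stems from your application code, dependencies, or property settings.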

Check logs

A critical step in diagnosing batch creation failures is to examine the detailed logs in Cloud Logging.

  1. Go to the Cloud Logging page in the Google Cloud console.
  2. Filter for Serverless for Apache Spark Batches or Sessions:
    1. In the Resource drop-down, select Cloud Dataproc Batch or Cloud Dataproc Session.
    2. Filter by batch_id or session_id for the failed batch or session. You can also filter by project_id and location (region).
  3. Look for log entries with jsonPayload.component="driver". These logs often contain specific error messages or stack traces that can pinpoint the reason for the driver initialization failure before the 600-second timeout occurs.
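
You can run an equivalent query from the command line with the gcloud logging read command. A minimal sketch; BATCH_ID, REGION, and PROJECT_ID are placeholders, and the query assumes the cloud_dataproc_batch monitored resource type, which corresponds to the Cloud Dataproc Batch resource in the console:

  # Read driver logs for a failed batch. Filter terms on separate
  # lines are combined with AND by the Logging query language.
  gcloud logging read '
    resource.type="cloud_dataproc_batch"
    resource.labels.batch_id="BATCH_ID"
    resource.labels.location="REGION"
    jsonPayload.component="driver"' \
      --project=PROJECT_ID \
      --limit=50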