Dataproc on GKE overview

Dataproc on GKE lets you run Big Data applications through the Dataproc jobs API on GKE clusters. Use the Google Cloud console, the Google Cloud CLI, or the Dataproc API (HTTP request or Cloud Client Libraries) to create a Dataproc on GKE virtual cluster, then submit a Spark, PySpark, SparkR, or Spark-SQL job to the Dataproc service.
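As a minimal sketch of this workflow with the Google Cloud CLI: the first command creates a virtual cluster on an existing GKE cluster, and the second submits a Spark job to it. The project, region, cluster names, bucket, and node-pool values shown are placeholders you would replace with your own; consult the gcloud reference for the full set of required flags in your environment.

```shell
# Create a Dataproc on GKE virtual cluster on an existing GKE cluster.
# All uppercase values below are placeholders for your own resources.
gcloud dataproc clusters gke create VIRTUAL_CLUSTER_NAME \
    --project=PROJECT_ID \
    --region=REGION \
    --gke-cluster=GKE_CLUSTER_NAME \
    --spark-engine-version=latest \
    --staging-bucket=BUCKET_NAME \
    --pools="name=dp-default,roles=default"

# Submit a Spark job to the virtual cluster through the Dataproc jobs API.
gcloud dataproc jobs submit spark \
    --cluster=VIRTUAL_CLUSTER_NAME \
    --region=REGION \
    --class=org.apache.spark.examples.SparkPi \
    --jars=file:///usr/lib/spark/examples/jars/spark-examples.jar \
    -- 1000
```

The job submission step is the same Dataproc jobs API used with Dataproc on Compute Engine clusters; only the cluster creation step differs.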

Dataproc on GKE supports Spark 3.1 and Spark 3.5 versions.

How Dataproc on GKE works

Dataproc on GKE deploys Dataproc virtual clusters on a GKE cluster. Unlike Dataproc on Compute Engine clusters, Dataproc on GKE virtual clusters do not include separate master and worker VMs. Instead, when you create a Dataproc on GKE virtual cluster, Dataproc on GKE creates node pools within a GKE cluster, and Dataproc on GKE jobs run as pods on those node pools. GKE manages the node pools and the scheduling of pods on them.
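Because jobs run as ordinary pods, you can observe them with standard GKE tooling. The sketch below, which assumes you know the virtual cluster's namespace (the namespace name shown is a placeholder), fetches GKE credentials and lists the Spark driver and executor pods:

```shell
# Point kubectl at the underlying GKE cluster.
gcloud container clusters get-credentials GKE_CLUSTER_NAME \
    --region=REGION \
    --project=PROJECT_ID

# List the pods running Dataproc on GKE workloads; while a job is running,
# you should see Spark driver and executor pods in the cluster's namespace.
kubectl get pods --namespace=VIRTUAL_CLUSTER_NAMESPACE
```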