# Profile Google Cloud Serverless for Apache Spark resource usage
This document describes how to profile Google Cloud Serverless for Apache Spark resource
usage. [Cloud Profiler](/profiler/docs) continuously gathers and reports
application CPU usage and memory allocation information. You can enable
profiling when you submit a batch or create a session workload
by using the profiling properties listed in the following table.
Google Cloud Serverless for Apache Spark appends related JVM options to
the `spark.driver.extraJavaOptions` and `spark.executor.extraJavaOptions`
configurations used for the workload.

| Property | Description |
|----------|-------------|
| `dataproc.profiling.enabled` | Set to `true` to enable profiling of the workload. Default: `false`. |
| `dataproc.profiling.name` | The name of the profile created in the Profiler service. If not set, defaults to `spark-WORKLOAD_TYPE-WORKLOAD_ID`. |
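If you submit batches programmatically, you can set the same properties in the
batch's runtime configuration. The following is a minimal sketch, not an example
from this document, using the `google-cloud-dataproc` Python client; the project,
region, file URI, and profile name are placeholder assumptions.

```
from google.cloud import dataproc_v1


def submit_profiled_batch(project_id, region, main_python_file_uri):
    """Submits a PySpark batch with Cloud Profiler properties enabled."""
    client = dataproc_v1.BatchControllerClient(
        client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
    )
    batch = dataproc_v1.Batch(
        pyspark_batch=dataproc_v1.PySparkBatch(
            main_python_file_uri=main_python_file_uri
        ),
        runtime_config=dataproc_v1.RuntimeConfig(
            properties={
                "dataproc.profiling.enabled": "true",
                "dataproc.profiling.name": "my-profile",  # hypothetical profile name
            }
        ),
    )
    operation = client.create_batch(
        parent=f"projects/{project_id}/locations/{region}", batch=batch
    )
    return operation.result()  # blocks until the batch workload finishes
```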
Serverless for Apache Spark sets the profiler version to either
the batch UUID
or the session UUID.
Profiler supports the following Spark workload types:
Spark, PySpark, SparkSql, and SparkR.
A workload must run for more than three minutes to allow Profiler
to collect and upload data to a project.
You can override profiling options submitted with a workload by constructing a
SparkConf, and then setting extraJavaOptions in your code.
Note that setting extraJavaOptions properties when the workload is submitted
doesn't override profiling options submitted with the workload.
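The following is a minimal sketch of that in-code override, assuming a PySpark
workload; the JVM option shown is an illustrative placeholder, not a documented
profiler flag.

```
from pyspark import SparkConf
from pyspark.sql import SparkSession

# Build a SparkConf inside the workload; values set here replace the
# executor profiling options that were appended at submission time.
conf = SparkConf()
conf.set(
    "spark.executor.extraJavaOptions",
    "-Dexample.option=value",  # illustrative placeholder JVM option
)

spark = SparkSession.builder.config(conf=conf).getOrCreate()
```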
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-25 UTC."],[[["\u003cp\u003eDataproc Serverless for Spark allows profiling of resource usage, including CPU and memory, through Cloud Profiler.\u003c/p\u003e\n"],["\u003cp\u003eProfiling is enabled by setting the \u003ccode\u003edataproc.profiling.enabled\u003c/code\u003e property to \u003ccode\u003etrue\u003c/code\u003e when submitting a batch or creating a session workload.\u003c/p\u003e\n"],["\u003cp\u003eThe \u003ccode\u003edataproc.profiling.name\u003c/code\u003e property sets the profile name in the Profiler service, with a default format of \u003ccode\u003espark-WORKLOAD_TYPE-WORKLOAD_ID\u003c/code\u003e.\u003c/p\u003e\n"],["\u003cp\u003eWorkloads must run for at least three minutes for Profiler to successfully gather and upload the resource usage data.\u003c/p\u003e\n"],["\u003cp\u003eTo enable profiling, the Profiler must be enabled, the correct roles must be assigned to the custom service accounts if needed, and the profiling properties must be set during workload submission or session creation.\u003c/p\u003e\n"]]],[],null,["# Profile Google Cloud Serverless for Apache Spark resource usage\n\nThis document describes how to profile Google Cloud Serverless for Apache Spark resource\nusage. [Cloud Profiler](/profiler/docs) continuously gathers and reports\napplication CPU usage and memory allocation information. You can enable\nprofiling when you submit a batch or create a session workload\nby using the profiling properties listed in the following table.\nGoogle Cloud Serverless for Apache Spark appends related JVM options to\nthe `spark.driver.extraJavaOptions` and `spark.executor.extraJavaOptions`\nconfigurations used for the workload.\n\nNotes:\n\n- Serverless for Apache Spark sets the profiler version to either the [batch UUID](/dataproc-serverless/docs/reference/rest/v1/projects.locations.batches#Batch.FIELDS.uuid) or the [session UUID](/dataproc-serverless/docs/reference/rest/v1/projects.locations.sessions#Session.FIELDS.uuid).\n- Profiler supports the following Spark workload types: `Spark`, `PySpark`, `SparkSql`, and `SparkR`.\n- A workload must run for more than three minutes to allow Profiler to collect and upload data to a project.\n- You can override profiling options submitted with a workload by constructing a `SparkConf`, and then setting `extraJavaOptions` in your code. Note that setting `extraJavaOptions` properties when the workload is submitted doesn't override profiling options submitted with the workload.\n\nFor an example of profiler options used with a batch submission, see the\n[PySpark batch workload example](#pyspark_batch_workload_example).\n\nEnable profiling\n----------------\n\nComplete the following steps to enable profiling on a workload:\n\n1. [Enable the Profiler](/profiler/docs/profiling-java#enabling-profiler).\n2. If you are using a [custom VM service account](/dataproc-serverless/docs/concepts/service-account), grant the [Cloud Profiler Agent](/profiler/docs/iam#cloudprofiler.agent) role to the custom VM service account. This role contains required Profiler permissions.\n3. 
### PySpark batch workload example

The following example uses the gcloud CLI to submit a PySpark batch
workload with profiling enabled.

```
gcloud dataproc batches submit pyspark PYTHON_WORKLOAD_FILE \
    --region=REGION \
    --properties=dataproc.profiling.enabled=true,dataproc.profiling.name=PROFILE_NAME \
    -- other args
```

Two profiles are created:

- `PROFILE_NAME-driver` to profile Spark driver tasks
- `PROFILE_NAME-executor` to profile Spark executor tasks

View profiles
-------------

You can view profiles from [Profiler](https://console.cloud.google.com/profiler)
in the Google Cloud console.

What's next
-----------

- See [Monitor and troubleshoot Serverless for Apache Spark workloads](/dataproc-serverless/docs/guides/monitor-troubleshoot-batches).