# Provisioners in Cloud Data Fusion

A provisioner is responsible for creating and tearing down the cloud cluster
where the pipeline runs. Different provisioners can create different types of
clusters on various clouds.
Each provisioner exposes a set of configuration settings that control the type
of cluster that's created for a run. For example, the Dataproc
and Amazon EMR provisioners have cluster size settings. Provisioners also have
settings for the credentials required to talk to their respective clouds and
provision the required compute nodes.
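The idea of per-provisioner settings can be sketched as a profile that bundles
cluster-sizing properties with cloud-credential properties under a single
provisioner name. The sketch below is illustrative only, not the
Cloud Data Fusion API; the provisioner name and property keys are assumptions
modeled loosely on typical Dataproc settings.

```python
import json


def make_profile(provisioner_name, cluster_settings, credential_settings):
    """Bundle cluster-size and credential settings into one profile payload.

    Property names are hypothetical; a real provisioner defines its own set.
    """
    properties = {**cluster_settings, **credential_settings}
    return {
        "name": f"{provisioner_name}-profile",
        "provisioner": {
            "name": provisioner_name,
            "properties": [
                {"name": key, "value": str(value)}
                for key, value in sorted(properties.items())
            ],
        },
    }


# Example: a Dataproc-style profile combining the two kinds of settings.
profile = make_profile(
    "gcp-dataproc",                          # illustrative provisioner name
    {"workerNumNodes": 2, "workerCPUs": 4},  # cluster-size settings
    {"projectId": "my-project", "region": "us-central1"},  # cloud settings
)
print(json.dumps(profile, indent=2))
```

Keeping both kinds of settings in one profile means a pipeline run only needs
to reference the profile, and the provisioner has everything required to create
the cluster and authenticate to its cloud.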
Supported provisioners in Cloud Data Fusion
-------------------------------------------

Cloud Data Fusion supports the following provisioners:
[Dataproc](/data-fusion/docs/concepts/dataproc)
: A fast, easy-to-use, and fully managed cloud service for running Apache Spark
  and Apache Hadoop clusters.

Amazon Elastic MapReduce (EMR)
: Provides a managed Hadoop framework that processes vast amounts of data across
  dynamically scalable Amazon EC2 instances.

Remote Hadoop
: Runs jobs on a pre-existing Hadoop cluster, either on-premises or in the
  cloud.

Last updated 2025-08-29 UTC.