Dataproc Metastore core concepts
Use the following concepts to help you understand how
Dataproc Metastore works and the different features you can use
with your service.
Dataproc Metastore versions
When you create a Dataproc Metastore service, you can choose to use
a Dataproc Metastore 2 service or a Dataproc Metastore 1
service.
Dataproc Metastore 2
Dataproc Metastore 2 uses a scaling factor to determine how
many resources your service uses at a given time. After you create a
Dataproc Metastore 2 service, you can scale it up or down by modifying
the scaling factor.
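If you manage services programmatically, changing the scaling factor is a
service update. The following is a minimal sketch, assuming the
google-cloud-dataproc-metastore Python client library
(google.cloud.metastore_v1); the project, region, and service names are
hypothetical, and the ScalingConfig field layout is an assumption you should
verify against the client library reference.

```python
# Minimal sketch: update the scaling factor of a Dataproc Metastore 2 service.
# Assumes the google-cloud-dataproc-metastore client library; names are
# hypothetical and field names should be verified against the v1 reference.
from google.cloud import metastore_v1
from google.protobuf import field_mask_pb2

client = metastore_v1.DataprocMetastoreClient()

# Fully qualified service name (hypothetical project, region, and service).
service = metastore_v1.Service(
    name="projects/my-project/locations/us-central1/services/my-metastore",
    scaling_config=metastore_v1.ScalingConfig(scaling_factor=2.0),
)

# Only update the scaling factor; leave the rest of the service untouched.
update_mask = field_mask_pb2.FieldMask(paths=["scaling_config.scaling_factor"])

# update_service returns a long-running operation; result() blocks until done.
operation = client.update_service(service=service, update_mask=update_mask)
updated = operation.result()
print(f"Scaling factor is now {updated.scaling_config.scaling_factor}")
```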
Dataproc Metastore 2 is the new generation of the service. It offers
horizontal scalability in addition to the existing Dataproc Metastore features.
For more information, see features and benefits.
Dataproc Metastore 2 has a different pricing plan than Dataproc Metastore 1.
For more information, see pricing plans and scaling configurations.
Dataproc Metastore 1
Dataproc Metastore 1 uses service tiers to determine how many
resources your service uses at a given time. Service tiers provide a predictable,
predetermined amount of resources.
Check your Dataproc Metastore version
You can check what version of Dataproc Metastore you're using in the
Google Cloud console.
Dataproc Metastore 2: The configuration table contains the
following value: Edition Enterprise - Single Region.
Dataproc Metastore 1: The configuration table contains one of the
following values: Tier: DEVELOPER or Tier: ENTERPRISE.
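As an alternative to the console, you can infer the version from the service
resource itself. The following is a minimal sketch, assuming the
google.cloud.metastore_v1 Python client; the service name is hypothetical, and
the exact tier and scaling_config fields should be confirmed against the
client library reference.

```python
# Minimal sketch: inspect a service to tell Dataproc Metastore 1 from 2.
# Assumes the google-cloud-dataproc-metastore client; the name is hypothetical.
from google.cloud import metastore_v1

client = metastore_v1.DataprocMetastoreClient()
service = client.get_service(
    name="projects/my-project/locations/us-central1/services/my-metastore"
)

# A nonzero scaling factor suggests a Dataproc Metastore 2 service;
# otherwise the tier field (DEVELOPER or ENTERPRISE) points to version 1.
if service.scaling_config.scaling_factor:
    print(f"Metastore 2, scaling factor: {service.scaling_config.scaling_factor}")
else:
    print(f"Metastore 1, tier: {service.tier.name}")
```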
Common Dataproc Metastore terms
The following terms are used commonly throughout the Dataproc Metastore
ecosystem and documentation.
Services
Apache Hive. Hive is a popular open source data warehouse system built
on Apache Hadoop. Hive offers a SQL-like query language called HiveQL, which
is used to analyze large, structured datasets.
Apache Hive metastore. The Hive metastore holds metadata about Hive
tables, such as their schema and location.
Dataproc. Dataproc is a fast, easy-to-use, fully
managed service on Google Cloud for running Apache Spark and Apache
Hadoop workloads in a simple, cost-efficient way. After you create a
Dataproc Metastore service, you can connect to it from a
Dataproc cluster.
Dataproc cluster. After you create a
Dataproc Metastore service, you can connect to it from a
Dataproc cluster. You can also use Dataproc Metastore
with various other clusters, such as self-managed Apache Hive, Apache Spark,
or Presto clusters.
Dataproc Metastore service. The name of the metastore
instance you create in Google Cloud. You can have one or many different
metastore services in your implementation.
Private Service Connect. Private Service Connect lets you set
up a private connection to Dataproc Metastore metadata across
VPC networks. You can use it for networking as an alternative to VPC
peering.
VPC Service Controls. VPC Service Controls improves your ability to mitigate
the risk of data exfiltration from Google Cloud services by allowing you to
create perimeters that protect the resources and data of services that you
explicitly specify.
Concepts
Tables. Hive applications store your data in tables, which are either
managed (internal) or unmanaged (external).
Hive warehouse directory. The default location where managed table data
is stored.
Artifacts bucket. A Cloud Storage bucket that is created in your
project automatically with every metastore service that you create. This
bucket can be used to store your service artifacts, such as exported
metadata and managed table data. By default, the artifacts bucket stores the
default warehouse directory of your Dataproc Metastore
service.
Endpoints. A Dataproc Metastore service provides clients
access to the stored Hive Metastore metadata through one or more network
endpoints. Dataproc Metastore provides URIs for
these endpoints.
Endpoint protocols. The over-the-wire network protocol used for
communication between Dataproc Metastore and Hive metastore
clients. Dataproc Metastore supports Apache Thrift and
gRPC endpoints. For an example of connecting over a Thrift endpoint, see the
sketch that follows these term definitions.
Metadata Federation. A feature that lets you access metadata that is
stored in multiple Dataproc Metastore instances.
Auxiliary versions. A feature that lets you connect multiple Hive client
versions to the same Dataproc Metastore service.
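To make the Endpoints and Endpoint protocols entries concrete, the following
sketch shows a Spark session attaching to a metastore over a Thrift endpoint.
It assumes PySpark with Hive support available and uses a hypothetical
endpoint URI; substitute the endpoint URI that Dataproc Metastore reports for
your service. On a Dataproc cluster created with the metastore attached, this
configuration is typically handled for you.

```python
from pyspark.sql import SparkSession

# Build a Spark session whose Hive support points at the metastore's
# Thrift endpoint URI (hypothetical host shown here).
spark = (
    SparkSession.builder
    .appName("metastore-connection-check")
    .config("hive.metastore.uris", "thrift://example-metastore-host:9083")
    .enableHiveSupport()
    .getOrCreate()
)

# List databases through the metastore to confirm the connection works.
spark.sql("SHOW DATABASES").show()
```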
Hive metastore concepts
Using a Dataproc Metastore service requires that you understand
basic Hive metastore concepts. For more information, see Hive Metastore.
Network requirements
The Dataproc Metastore service requires networking access to work
correctly. For more information, see Configure network requirements.
Project configurations
You can choose from several project configurations when deploying a
Dataproc cluster and a Dataproc Metastore service.
For more information, see cross-project deployment.
What's next
Create a service
Update and delete a service
Import metadata into a service