Dataproc Metastore overview
Dataproc Metastore is a fully managed Apache Hive metastore (HMS) that runs on Google Cloud.
An HMS is the established standard in the open source big data
ecosystem for managing technical metadata, such as schemas, partitions, and column
statistics in a relational database.
Dataproc Metastore is highly available, autohealing, and serverless.
Use it to manage data lake
metadata and provide interoperability between the various data processing engines
and tools that you're using.
How Dataproc Metastore works
You can use a Dataproc Metastore service by connecting it to
a Dataproc cluster. A Dataproc cluster includes
components that rely on an HMS to drive query planning and execution.
This integration lets you keep your table information between jobs or make
metadata available to other clusters and other processing engines.
For example, implementing a metastore might help you designate that a subset
of your files contains revenue data, as opposed to manually tracking the filenames.
In this case, you can define a table for those files and store the metadata in
Dataproc Metastore. Afterward, you can connect it to a
Dataproc cluster and query the table for information using Hive,
Spark SQL, or other query services.
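As a minimal illustration of the idea, in plain Python rather than the actual HMS data model or API (the table name, bucket path, and columns here are hypothetical), a metastore maps a logical table name to the schema and location of the underlying files, so engines can query by table instead of tracking filenames:

```python
# Toy model of what a Hive metastore stores: a mapping from a logical
# table name to the metadata that query engines need to plan a query.
# Illustration only; not the real HMS schema or client API.

metastore = {}

def register_table(name, location, schema, partition_keys=()):
    """Record where a table's files live and which columns they contain."""
    metastore[name] = {
        "location": location,          # e.g. a Cloud Storage prefix
        "schema": dict(schema),        # column name -> type
        "partition_keys": tuple(partition_keys),
    }

# Instead of remembering that gs://example-bucket/finance/* holds revenue
# data, define a "revenue" table once and share it across clusters.
register_table(
    "revenue",
    location="gs://example-bucket/finance/",
    schema={"region": "string", "amount": "double"},
    partition_keys=["ingest_date"],
)

# Any engine reading the same metastore resolves the table identically.
table = metastore["revenue"]
print(table["location"])        # gs://example-bucket/finance/
print(sorted(table["schema"]))  # ['amount', 'region']
```

The point of the sketch is the indirection: clusters come and go, but the table definition persists in the shared service.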
Dataproc Metastore versions
When you create a Dataproc Metastore service, you can choose to use
a Dataproc Metastore 2 service or a Dataproc Metastore 1
service.
Dataproc Metastore 2 is the new generation of the service that offers
horizontal scalability in addition to Dataproc Metastore 1 features.
For more information, see features and benefits.
Dataproc Metastore 2 has a different pricing plan than
Dataproc Metastore 1. For more information, see pricing plans and scaling
configurations.
Common use cases
All use cases listed in this section are supported by Dataproc Metastore
2 and Dataproc Metastore 1, unless otherwise noted.
Assign meaning to your data. Create a centralized metadata repository
that's shared among many ephemeral Dataproc clusters. Use
different open source software (OSS) engines, such as Apache Hive,
Apache Spark, and Presto.
Build a unified view of your data. Provide interoperability between
Google Cloud services, such as Dataproc, Dataplex Universal Catalog,
and BigQuery, or use other open source-based partner offerings on
Google Cloud.
Features and benefits
All features listed in this section are supported by Dataproc Metastore
2 and Dataproc Metastore 1, unless otherwise noted.
OSS compatibility. Connect to your existing data processing engines,
such as Apache Hive, Apache Spark, and Presto.
Management. Create or update a metastore within minutes, complete with
fully configured monitoring and operation tasks.
Integration. Integrate with other Google Cloud products, such as
using BigQuery as the source of metadata for a Dataproc
cluster.
Built-in security. Use established Google Cloud security protocols,
such as Identity and Access Management (IAM) and Kerberos authentication.
Simple import. Import existing metadata stored in an external Hive
metastore into a Dataproc Metastore service.
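As a sketch of the import idea (plain Python with hypothetical table names; the actual service imports from database dumps rather than in-memory dicts), an import copies the external metastore's table entries into the managed service:

```python
# Toy sketch of a metadata import: copy table entries from an external
# Hive metastore into a managed one. The dict-based model and all names
# are illustrative assumptions, not the real import mechanism.

external_hms = {
    "sales": {"location": "hdfs://old-cluster/sales", "columns": ["id", "total"]},
    "users": {"location": "hdfs://old-cluster/users", "columns": ["id", "name"]},
}

managed_metastore = {
    "events": {"location": "gs://example-bucket/events", "columns": ["ts", "kind"]},
}

def import_metadata(source, target, overwrite=False):
    """Copy tables from source into target; skip name collisions unless overwrite."""
    imported = []
    for name, meta in source.items():
        if name in target and not overwrite:
            continue
        target[name] = dict(meta)
        imported.append(name)
    return imported

print(sorted(import_metadata(external_hms, managed_metastore)))
print(sorted(managed_metastore))  # existing tables are preserved
```

After the import, workloads that previously pointed at the external metastore can resolve the same table names from the managed service.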
Automatic backups. Configure automatic metastore backups to help avoid
data loss.
Performance monitoring. Set performance tiers to dynamically respond to
highly intensive workloads and spikes, without pre-warming or caching.
High availability (HA).
Dataproc Metastore 2. Provides zonal high availability (HA)
without requiring any specific configuration or ongoing management. This is
accomplished by automatically replicating backend databases and HMS servers
across multiple zones in the region you choose. In addition to zonal HA,
Dataproc Metastore 2 supports regional HA and
disaster recovery (DR).
Dataproc Metastore 1. By default, provides zonal high
availability (HA) without requiring any specific configuration or ongoing
management. This is accomplished by automatically replicating backend databases
and HMS servers across multiple zones in the region you choose.
For more information about region-specific considerations, see
Geography and regions.
Scalability.
Dataproc Metastore 2. Use a horizontal scaling factor to
determine how many resources your service needs to use at a given time.
The scaling factor can be manually controlled or set to autoscale
when needed.
Dataproc Metastore 1. Choose between a developer tier or
enterprise tier when you set up your service. This tier determines how
many resources your service needs to use at a given time.
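A rough sketch of how a horizontal scaling factor could translate into capacity (the base capacity, the set of allowed factors, and the autoscaling rule below are illustrative assumptions, not Dataproc Metastore's actual algorithm):

```python
# Illustrative sketch of horizontal scaling: a scaling factor multiplies
# a base capacity, and an autoscaler picks the smallest allowed factor
# that covers the current load. All values and the selection policy are
# assumptions for illustration only.

BASE_QPS_PER_FACTOR = 100            # hypothetical capacity unit
ALLOWED_FACTORS = [0.1, 0.5, 1, 2, 3, 4, 5, 6]

def capacity(scaling_factor):
    return scaling_factor * BASE_QPS_PER_FACTOR

def autoscale(current_qps):
    """Pick the smallest allowed factor whose capacity covers the load."""
    for factor in ALLOWED_FACTORS:
        if capacity(factor) >= current_qps:
            return factor
    return ALLOWED_FACTORS[-1]       # saturate at the maximum factor

print(autoscale(40))    # 0.5 (50 QPS of capacity covers 40 QPS of load)
print(autoscale(250))   # 3
```

Setting the factor manually corresponds to calling `capacity` with a fixed value; autoscaling corresponds to letting the service choose the factor for you as load changes.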
Support. Benefit from standard Google Cloud SLAs and support channels.
Integrations with Google Cloud
All integrations listed in this section are supported by Dataproc Metastore
1 and Dataproc Metastore 2, unless otherwise noted.
Dataproc. Connect to a Dataproc cluster, so you can serve
metadata for OSS big data workloads.
BigQuery. Query BigQuery datasets in your Dataproc
workloads.
Dataplex Universal Catalog. Query structured and semi-structured data discovered in a
Dataplex Universal Catalog lake.
Data Catalog. Sync Dataproc Metastore with Data Catalog
to enable search and discovery of metadata.
Logging and Monitoring. Integrate Dataproc Metastore with
Cloud Monitoring and Logging products.
Authentication and IAM. Rely on standard OAuth authentication used by other
Google Cloud products, which supports using granular Identity and Access Management roles to
enable access control for individual resources.
Next steps
Get started with the quickstart guide, Deploying a Dataproc Metastore service.
Understand Dataproc Metastore pricing.
Understand quotas and limits for Dataproc Metastore.
Read the Dataproc Metastore release notes.
Access Dataproc Metastore using the Google Cloud console, the Google Cloud CLI, or the Dataproc Metastore API.
Last updated 2025-08-25 UTC.