[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["很难理解","hardToUnderstand","thumb-down"],["信息或示例代码不正确","incorrectInformationOrSampleCode","thumb-down"],["没有我需要的信息/示例","missingTheInformationSamplesINeed","thumb-down"],["翻译问题","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2025-08-27。"],[[["\u003cp\u003eThis page details how to configure Kerberos for Dataproc Metastore services using the gRPC endpoint protocol, as opposed to the Thrift protocol which has its own configuration guide.\u003c/p\u003e\n"],["\u003cp\u003eBefore starting, users should understand Kerberos basics, have a hosted Kerberos KDC or a Dataproc cluster's local KDC, and access to a Cloud Storage bucket for storing the \u003ccode\u003ekrb5.conf\u003c/code\u003e file.\u003c/p\u003e\n"],["\u003cp\u003eCreating a Kerberos-configured Dataproc Metastore service with the gRPC endpoint involves using the \u003ccode\u003egcloud metastore services create\u003c/code\u003e command with the \u003ccode\u003e--endpoint-protocol=grpc\u003c/code\u003e option.\u003c/p\u003e\n"],["\u003cp\u003eCreating a Dataproc cluster configured with Kerberos will generate a Keytab file, a \u003ccode\u003ekrb5.conf\u003c/code\u003e file, and a principal, all of which can be done with the \u003ccode\u003egcloud dataproc clusters create\u003c/code\u003e command.\u003c/p\u003e\n"],["\u003cp\u003eUsers need specific IAM roles, such as \u003ccode\u003eroles/metastore.editor\u003c/code\u003e, \u003ccode\u003eroles/metastore.admin\u003c/code\u003e, and \u003ccode\u003eroles/metastore.metadataEditor\u003c/code\u003e, to create a Kerberos-configured Dataproc Metastore service, and the \u003ccode\u003ehive\u003c/code\u003e user must be added to the \u003ccode\u003eallowed.system.users\u003c/code\u003e property in Hadoop's \u003ccode\u003econtainer-executor.cfg\u003c/code\u003e.\u003c/p\u003e\n"]]],[],null,["# Configure Kerberos for Dataproc Metastore gRPC endpoints\n\nThis page explains how to configure Kerberos for your\nDataproc Metastore service that uses the gRPC endpoint protocol.\nIf your Dataproc Metastore service uses the Thrift endpoint protocol, see\n[Configure Kerberos for Thrift endpoints](/dataproc-metastore/docs/configure-kerberos).\n\nBefore you begin\n----------------\n\n- Understand the basics of\n [Kerberos](https://en.wikipedia.org/wiki/Kerberos_(protocol)).\n\n In these instructions, you use a Dataproc cluster to create\n the following Kerberos assets:\n - A Keytab file.\n - A `krb5.conf` file.\n - A Kerberos principal.\n\n For more information about how these Kerberos assets work with a\n Dataproc Metastore service, see [About Kerberos](/dataproc-metastore/docs/about-kerberos).\n- Create and host your own Kerberos KDC or learn how to use the local KDC of a\n [Dataproc cluster](/dataproc/docs/concepts/configuring-clusters/security#create_a_kerberos_cluster).\n\n- Create a Cloud Storage bucket or get access to an existing one. 
### Required roles

To get the permissions that you need to create a Dataproc Metastore
service configured with Kerberos, ask your administrator to grant you the
following IAM roles on your project, based on the principle of least
privilege:

- [Grant full control of Dataproc Metastore resources](/iam/docs/roles-permissions/metastore#metastore.editor) (`roles/metastore.editor`)
- [Grant full access to all Dataproc Metastore resources, including IAM policy administration](/iam/docs/roles-permissions/metastore#metastore.admin) (`roles/metastore.admin`)
- [Grant gRPC read-write access to Dataproc Metastore metadata](/iam/docs/roles-permissions/metastore#metastore.metadataEditor) (`roles/metastore.metadataEditor`)

For more information about granting roles, see [Manage access to projects, folders, and organizations](/iam/docs/granting-changing-revoking-access).

These predefined roles contain the `metastore.services.create` permission,
which is required to create a Dataproc Metastore service configured
with Kerberos.

You might also be able to get this permission with
[custom roles](/iam/docs/creating-custom-roles) or other
[predefined roles](/iam/docs/roles-overview#predefined). For more information
about specific Dataproc Metastore roles and permissions, see
[Manage access with IAM](/dataproc-metastore/docs/iam-and-access-control).

Configure Kerberos for Dataproc Metastore
-----------------------------------------

The following instructions show you how to configure Kerberos for a
Dataproc Metastore service that uses the gRPC endpoint.

First, you create a Dataproc Metastore service that uses the gRPC
endpoint. Then, you create a Dataproc cluster configured with Kerberos
and connect it to the service.

### Create a Dataproc Metastore service with the gRPC endpoint

To create a Dataproc Metastore service that uses the gRPC endpoint, run
the following [`gcloud metastore services create`](/sdk/gcloud/reference/metastore/services/create) command:

### gcloud

    gcloud metastore services create SERVICE \
        --instance-size=medium \
        --endpoint-protocol=grpc

Replace:

- SERVICE: the name of your new Dataproc Metastore service.
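Optionally, to confirm that the new service uses the gRPC protocol, you can
describe it. This is a minimal sketch; LOCATION stands in for the
Google Cloud region that you created the service in:

    # Describe the service and check that the endpoint protocol
    # shown in the output is GRPC.
    gcloud metastore services describe SERVICE \
        --location=LOCATION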
### Create a Dataproc cluster and connect to your service

To create a Dataproc cluster configured with Kerberos, run the following
`gcloud dataproc clusters create` command.

In this command, the `--enable-kerberos` option creates the Kerberos
keytab file, `krb5.conf` file, and principal. These values are all created
using default names and settings set by the Dataproc cluster.

### gcloud

    gcloud dataproc clusters create CLUSTER_NAME \
        --project PROJECT_ID \
        --region REGION \
        --image-version 2.0-debian10 \
        --dataproc-metastore DATAPROC_METASTORE_NAME \
        --enable-kerberos \
        --scopes 'https://www.googleapis.com/auth/cloud-platform'

Replace:

- CLUSTER_NAME: the name of your Dataproc cluster.
- PROJECT_ID: your Google Cloud project ID.
- REGION: the Google Cloud region that you want to create your Dataproc cluster in.
- DATAPROC_METASTORE_NAME: the name of the Dataproc Metastore service that you're attaching to the cluster, in the following format: `projects/<my_project>/locations/<location>/services/<service_id>`.

### Configure Dataproc before submitting jobs

To run your [Dataproc jobs](https://hadoop.apache.org/docs/r2.7.2/hadoop-yarn/hadoop-yarn-site/SecureContainer.html),
you must add the `hive` user to the `allowed.system.users` property in the
Hadoop `container-executor.cfg` file. This lets users run queries that
access table data, such as `select * from`.

The following instructions show you how to SSH into the primary node of the
Dataproc cluster that's associated with your Dataproc Metastore
service and update the `container-executor.cfg` file.

1. In the Google Cloud console, go to the [VM
   Instances](https://console.cloud.google.com/compute/instances) page.

2. In the list of virtual machine instances, click **SSH** in the row of the
   Dataproc primary node (`your-cluster-name-m`).

   A browser window opens in your home directory on the node.

3. In the SSH session, open the Hadoop `container-executor.cfg` file.

       sudo vim /etc/hadoop/conf/container-executor.cfg

   Add the following line on every Dataproc node.

       allowed.system.users=hive

### Get a Kerberos ticket

The following instructions show you how to generate a Kerberos ticket.

1. In the Dataproc cluster SSH session, generate a Kerberos
   ticket and connect to your Dataproc Metastore service.

   These commands use the default keytab name generated by your
   Dataproc cluster.

       # List the principals stored in the keytab file.
       sudo klist -kte /etc/security/keytab/hive.service.keytab
       # Authenticate as the hive service principal.
       sudo kinit -kt /etc/security/keytab/hive.service.keytab hive/_HOST@${realm}
       # Display the ticket information.
       sudo klist

   Replace `_HOST` with the primary node's hostname, which appears in the
   principal names listed by the `klist -kte` command, and `${realm}` with
   your Kerberos realm.

### (Optional) Add a new principal

1. To add a new principal, run the following commands.

       sudo kadmin.local -q "addprinc -randkey PRINCIPAL"
       sudo kadmin.local -q "ktadd -k /etc/security/keytab/hive.service.keytab PRINCIPAL"

2. Get the Kerberos ticket.

       sudo klist -kte /etc/security/keytab/hive.service.keytab
       sudo kinit -kt /etc/security/keytab/hive.service.keytab PRINCIPAL
       sudo klist
       sudo hive

What's next
-----------

- [Access a service](/dataproc-metastore/docs/access-service)
- [Update and delete a service](/dataproc-metastore/docs/manage-service)
- [Import metadata into a service](/dataproc-metastore/docs/import-metadata)