Delete a Google Cloud Managed Service for Apache Kafka cluster

To delete a cluster, you can use the Google Cloud console, the Google Cloud CLI, the client libraries, or the Managed Kafka API. You can't use the open source Apache Kafka API to delete a cluster.
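
The Managed Kafka API exposes clusters as REST resources, so deleting a cluster is an HTTP DELETE on the cluster resource. The following is a minimal sketch, assuming the v1 endpoint and placeholder values, and using the gcloud CLI only to obtain an access token:

    # PROJECT_ID, LOCATION, and CLUSTER_ID are placeholders; the v1 REST path is an assumption.
    curl -X DELETE \
        -H "Authorization: Bearer $(gcloud auth print-access-token)" \
        "https://managedkafka.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/clusters/CLUSTER_ID"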

Required roles and permissions to delete a cluster

To get the permissions that you need to delete a cluster, ask your administrator to grant you the Managed Kafka Cluster Editor (roles/managedkafka.clusterEditor) IAM role on your project. For more information about granting roles, see Manage access to projects, folders, and organizations.

This predefined role contains the permissions required to delete a cluster. The exact permissions that are required are listed in the following section:

Required permissions

The following permissions are required to delete a cluster:

  • Delete a cluster: managedkafka.clusters.delete

You might also be able to get these permissions with custom roles or other predefined roles.

The Managed Kafka Cluster Editor role doesn't let you create, delete, or modify topics and consumer groups on Managed Service for Apache Kafka clusters. It also doesn't grant data plane access to publish or consume messages within clusters. For more information about this role, see Managed Service for Apache Kafka predefined roles.
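
If you administer IAM for the project yourself, one way to grant the role is with the gcloud CLI. This is a minimal sketch that assumes a user principal and placeholder values:

    # PROJECT_ID and USER_EMAIL are placeholders.
    gcloud projects add-iam-policy-binding PROJECT_ID \
        --member="user:USER_EMAIL" \
        --role="roles/managedkafka.clusterEditor"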

Delete a cluster

Before you delete a cluster, review the following considerations:

  • Data loss: Deleting a cluster erases all data stored within it, including topics, messages, configurations, and any other associated resources. This action is irreversible.

  • Service disruption: Any applications or services that rely on the cluster lose access to it and experience disruptions. Make sure you have a plan for handling these dependencies before you delete the cluster.

  • Billing: You stop incurring charges for the cluster after it's deleted. However, you might still be billed for resources used up to the point of deletion.

  • Asynchronous operation: By default, the deletion command operates asynchronously. It returns immediately, and you can track the progress of the deletion separately (see the sketch after this list).
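
Because the service uses standard Google Cloud long-running operations, you can poll the operation that a delete request returns. The following is a sketch that assumes you captured the operation name, projects/PROJECT_ID/locations/LOCATION/operations/OPERATION_ID, from the delete response:

    # The operation name below is a placeholder taken from the delete response.
    curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
        "https://managedkafka.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/operations/OPERATION_ID"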

To delete a cluster, follow these steps:

Console

  1. In the Google Cloud console, go to the Clusters page.

    Go to Clusters

  2. From the list of clusters, select the cluster or clusters that you want to delete.
  3. Click Delete.

gcloud

  1. In the Google Cloud console, activate Cloud Shell.

    Activate Cloud Shell

    At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.

  2. Run the gcloud managed-kafka clusters delete command:

    gcloud managed-kafka clusters delete CLUSTER_ID \
        --location=LOCATION

    Replace the following:

    • CLUSTER_ID: The ID or name of the cluster.

    • LOCATION: The location of the cluster.
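
    For example, with the hypothetical values my-cluster and us-central1, the command is:

    gcloud managed-kafka clusters delete my-cluster \
        --location=us-central1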

Go

import (
	"context"
	"fmt"
	"io"

	"cloud.google.com/go/managedkafka/apiv1/managedkafkapb"
	"google.golang.org/api/option"

	managedkafka "cloud.google.com/go/managedkafka/apiv1"
)

func deleteCluster(w io.Writer, projectID, region, clusterID string, opts ...option.ClientOption) error {
	// projectID := "my-project-id"
	// region := "us-central1"
	// clusterID := "my-cluster"
	ctx := context.Background()
	client, err := managedkafka.NewClient(ctx, opts...)
	if err != nil {
		return fmt.Errorf("managedkafka.NewClient got err: %w", err)
	}
	defer client.Close()

	clusterPath := fmt.Sprintf("projects/%s/locations/%s/clusters/%s", projectID, region, clusterID)
	req := &managedkafkapb.DeleteClusterRequest{
		Name: clusterPath,
	}
	// DeleteCluster starts a long-running operation on the server.
	op, err := client.DeleteCluster(ctx, req)
	if err != nil {
		return fmt.Errorf("client.DeleteCluster got err: %w", err)
	}
	// Wait blocks until the deletion completes.
	err = op.Wait(ctx)
	if err != nil {
		return fmt.Errorf("op.Wait got err: %w", err)
	}
	fmt.Fprint(w, "Deleted cluster\n")
	return nil
}

Java

import com.google.api.gax.longrunning.OperationFuture;
import com.google.api.gax.rpc.ApiException;
import com.google.cloud.managedkafka.v1.ClusterName;
import com.google.cloud.managedkafka.v1.DeleteClusterRequest;
import com.google.cloud.managedkafka.v1.ManagedKafkaClient;
import com.google.cloud.managedkafka.v1.OperationMetadata;
import com.google.protobuf.Empty;
import java.io.IOException;

public class DeleteCluster {

  public static void main(String[] args) throws Exception {
    // TODO(developer): Replace these variables before running the example.
    String projectId = "my-project-id";
    String region = "my-region"; // e.g. us-east1
    String clusterId = "my-cluster";
    deleteCluster(projectId, region, clusterId);
  }

  public static void deleteCluster(String projectId, String region, String clusterId)
      throws Exception {
    try (ManagedKafkaClient managedKafkaClient = ManagedKafkaClient.create()) {
      DeleteClusterRequest request =
          DeleteClusterRequest.newBuilder()
              .setName(ClusterName.of(projectId, region, clusterId).toString())
              .build();
      // The delete call returns a long-running operation; get() blocks until it completes.
      OperationFuture<Empty, OperationMetadata> future =
          managedKafkaClient.deleteClusterOperationCallable().futureCall(request);
      future.get();
      System.out.println("Deleted cluster");
    } catch (IOException | ApiException e) {
      System.err.printf("managedKafkaClient.deleteCluster got err: %s", e.getMessage());
    }
  }
}

Python

from google.api_core.exceptions import GoogleAPICallError
from google.cloud import managedkafka_v1

# TODO(developer)
# project_id = "my-project-id"
# region = "us-central1"
# cluster_id = "my-cluster"

client = managedkafka_v1.ManagedKafkaClient()

request = managedkafka_v1.DeleteClusterRequest(
    name=client.cluster_path(project_id, region, cluster_id),
)

try:
    # delete_cluster returns a long-running operation.
    operation = client.delete_cluster(request=request)
    print(f"Waiting for operation {operation.operation.name} to complete...")
    # result() blocks until the deletion finishes and raises on failure.
    operation.result()
    print("Deleted cluster")
except GoogleAPICallError as e:
    print(f"The operation failed with error: {e.message}")

Apache Kafka® is a registered trademark of The Apache Software Foundation or its affiliates in the United States and/or other countries.