Overview
This page shows you how to migrate user clusters at version 1.30 or higher to the following recommended features:
- Migrate to Dataplane V2 as the container network interface (CNI).
- Migrate user clusters using kubeception to Controlplane V2.
- Migrate the load balancer configuration:
  - From the integrated F5 BIG-IP load balancer configuration to `ManualLB`, or
  - From the bundled Seesaw load balancer to MetalLB.
This page is for IT administrators and Operators who manage the lifecycle of the underlying tech infrastructure. For information about the common roles and example tasks that we reference in Google Cloud content, see Common GKE Enterprise user roles and tasks.
Best practices
We recommend that you first migrate the least critical environment, such as test. After you verify that the migration was successful, repeat the process for each environment, migrating your production environment last. This lets you validate each migration's success and ensure that your workloads are running properly before moving on to the next, more critical environment.
We recommend that you create a new user cluster with Controlplane V2 enabled to learn about the architectural differences with kubeception clusters. The new cluster doesn't affect your workloads. However, in a worst-case scenario, if the migration fails, you have a cluster ready for your workloads.
For more information about migration planning, see Plan cluster migration to recommended features.
Requirements
For this migration:
- The user cluster must be at version 1.30 or higher.
- All node pools must be the same version as the user cluster.
- If the cluster is using the Seesaw load balancer, make sure you aren't relying on Seesaw for client IP preservation as described in the next section.
Seesaw client IP preservation
To check the `externalTrafficPolicy`, run the following command:
kubectl --kubeconfig USER_CLUSTER_KUBECONFIG get svc -A -o yaml | grep "externalTrafficPolicy: Local"
If the command returns any output, your cluster relies on Seesaw for client IP preservation. In that case, contact Google Support before starting the migration.
Estimate the time commitment and plan a maintenance window
When you update the cluster, by default, all node pools are updated in parallel. However, within each node pool, the nodes are updated sequentially. Therefore, the total update time depends on the number of nodes in the largest node pool. To calculate a rough estimate for each update:
- If migrating from Seesaw to MetalLB, then estimate 15 minutes for the update to choose a node pool for the MetalLB load balancer. For this update, only the selected node pool is updated.
- For any other update in the migration process, multiply 15 minutes by the number of nodes in the node pool.
To estimate the time commitment, count the number of times you need to update the cluster. The following high-level steps show when you need to run `gkectl update cluster`:
- If the user cluster uses always-on secret encryption, disable the feature and run `gkectl update cluster`.
- If the user cluster has `enableDataplaneV2` unset or set to `false`, make the configuration changes, and then run `gkectl update cluster` to migrate to Dataplane V2.
- Prepare for the load balancer and control plane migration:
  - If the admin cluster has auto-repair enabled, disable it. Then run `gkectl update admin`. This update finishes quickly because it doesn't recreate the admin cluster nodes.
  - If the user cluster uses Seesaw, choose a node pool for the MetalLB load balancer to use, then run `gkectl update cluster`. This update only updates the nodes in the selected node pool.
- Make all the needed configuration changes to update your load balancer and to migrate to Controlplane V2. Then run `gkectl update cluster`.
- After the migration, if you disabled always-on secret encryption, re-enable the feature and run `gkectl update cluster`.
The total time for the migration depends on how many times you must run `gkectl update cluster`, which depends on your configuration. For example, suppose that you are migrating to Dataplane V2, Controlplane V2, and MetalLB. Also assume that there are 10 nodes in the largest node pool and 3 nodes in the node pool that MetalLB will use. To calculate an estimate for the migration time, add the following:
- 150 minutes for the migration to Dataplane V2 because 15 minutes * 10 nodes in the biggest pool = 150 minutes.
- 45 minutes for the node pool used by MetalLB because 15 minutes * 3 nodes in that node pool = 45 minutes.
- 150 minutes for the Controlplane V2 and MetalLB update because 15 minutes * 10 nodes in the biggest pool = 150 minutes.
The total time for the migration is approximately 345 minutes, which is equal to 5 hours and 45 minutes.
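The arithmetic in this example can be sketched as a small shell calculation. The node counts are the example values from above; substitute the values for your own cluster:

```shell
#!/bin/sh
# Rough migration-time estimate using the example values above.
MINUTES_PER_NODE=15
LARGEST_POOL_NODES=10   # nodes in the largest node pool
METALLB_POOL_NODES=3    # nodes in the pool that MetalLB will use

DATAPLANE_V2=$((MINUTES_PER_NODE * LARGEST_POOL_NODES))   # Dataplane V2 update
METALLB_POOL=$((MINUTES_PER_NODE * METALLB_POOL_NODES))   # MetalLB node pool update
CPV2_AND_LB=$((MINUTES_PER_NODE * LARGEST_POOL_NODES))    # Controlplane V2 + load balancer update

TOTAL=$((DATAPLANE_V2 + METALLB_POOL + CPV2_AND_LB))
echo "Estimated migration time: ${TOTAL} minutes ($((TOTAL / 60))h $((TOTAL % 60))m)"
# → Estimated migration time: 345 minutes (5h 45m)
```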
If needed, you can do the migration in stages. Using the previous example, you can do the migration to Dataplane V2 in one maintenance window, and do the rest of the migration in one or two maintenance windows.
Plan for downtime during migration
When planning your migration, plan for these types of downtime:
- Control-plane downtime: Access to the Kubernetes API server is affected during the migration. If you are migrating to Controlplane V2, there is control-plane downtime for kubeception user clusters while the `loadBalancer.vips.controlPlaneVIP` is migrated. The downtime is typically less than 10 minutes, but its length depends on your infrastructure.
- Workload downtime: The virtual IPs (VIPs) used by Services of type LoadBalancer are unavailable. This occurs only during a migration from Seesaw to MetalLB. The MetalLB migration process stops network connections to all VIPs in the user cluster for Kubernetes Services of type LoadBalancer for about two to ten minutes. After the migration is complete, the connections work again.
The following table describes the migration's impact:
| From | To | Kubernetes API access | User workloads |
|---|---|---|---|
| Kubeception cluster using Calico (`enableDataplaneV2` unset or set to `false`) | Kubeception cluster with Dataplane V2 | Not affected | Not affected |
| Kubeception cluster (`enableControlplaneV2` unset or set to `false`) with MetalLB or ManualLB | Controlplane V2 cluster with the same kind of load balancer | Affected | Not affected |
| Kubeception cluster with `loadBalancer.kind: "F5BigIP"` | Controlplane V2 cluster with manual load balancer configuration | Affected | Not affected |
| Kubeception cluster with `loadBalancer.kind: "Seesaw"` | Controlplane V2 cluster with MetalLB | Affected | Affected |
- Affected: There is a noticeable service disruption during the migration.
- Not affected: There is either no service disruption or it is almost unnoticeable.
Prepare for the migration
To ensure a successful migration, perform the steps in the following sections.
Allocate new IP addresses
If you are migrating to Controlplane V2, allocate new static IP addresses in the same VLAN as the worker nodes (the nodes in node pools).
- You need one IP address for the `loadBalancer.vips.controlPlaneVIP`.
- Allocate one IP address for each control-plane node. The number of IP addresses that you need depends on whether the user cluster will be highly available (HA) or non-HA:
  - Non-HA: one IP address
  - HA: three IP addresses
Update firewall rules
If you are migrating to Controlplane V2, update the firewall rules on your user clusters. Ensure that the newly allocated IP addresses for the control-plane nodes on the user cluster can reach all required APIs and other destinations, as described in Firewall rules user cluster nodes.
Check the cluster and node pool versions
Verify that all node pools use the same version as the user cluster, which must be at version 1.30 or higher. If not, upgrade the node pools to the `gkeOnPremVersion` specified in the user cluster configuration file before continuing with the migration. To check the versions, run the following command:
gkectl version --kubeconfig ADMIN_CLUSTER_KUBECONFIG --details
Replace `ADMIN_CLUSTER_KUBECONFIG` with the path to your admin cluster kubeconfig file.
Check the cluster health
Check the cluster health and fix any issues that the `gkectl diagnose cluster` command reports:
gkectl diagnose cluster --kubeconfig ADMIN_CLUSTER_KUBECONFIG \
--cluster-name USER_CLUSTER_NAME
Replace the following:
- `ADMIN_CLUSTER_KUBECONFIG`: the path of the admin cluster kubeconfig file.
- `USER_CLUSTER_NAME`: the name of the user cluster.
Disable auto-repair in the admin cluster
If you are migrating the user cluster to use Controlplane V2 and auto-repair is enabled in the admin cluster, disable auto-repair. Check the admin cluster configuration file's `autoRepair.enabled` field. If that field is unset or set to `true`, perform the following steps:

1. In the admin cluster configuration file, set `autoRepair.enabled` to `false`. For example:

   autoRepair:
     enabled: false

2. Update the admin cluster:

   gkectl update admin --kubeconfig ADMIN_CLUSTER_KUBECONFIG \
       --config ADMIN_CLUSTER_CONFIG
Replace the following:
- `ADMIN_CLUSTER_KUBECONFIG`: the path to the admin cluster's kubeconfig file.
- `ADMIN_CLUSTER_CONFIG`: the path to the admin cluster configuration file.
After the migration completes, be sure to re-enable auto-repair in the admin cluster.
Check for an issue with always-on secret encryption
If you are migrating the user cluster to use Controlplane V2, check for an issue with always-on secret encryption.
If always-on secrets encryption has ever been enabled on the user cluster, you must do the steps in Disable always-on secrets encryption and decrypt secrets before starting the migration. Otherwise, the new Controlplane V2 cluster is unable to decrypt secrets.
Before starting the migration, run the following command to see whether always-on secrets encryption has ever been enabled:
kubectl --kubeconfig ADMIN_CLUSTER_KUBECONFIG \
    get onpremusercluster USER_CLUSTER_NAME \
    -n USER_CLUSTER_NAME-gke-onprem-mgmt \
    -o jsonpath={.spec.secretsEncryption}
If the output of the preceding command is empty, then always-on secrets encryption has never been enabled. You can start the migration.
If the output of the preceding command isn't empty, then always-on secrets encryption previously had been enabled. Before migrating, you must do the steps in the next section to ensure that the new Controlplane V2 cluster can decrypt secrets.
The following example shows non-empty output:
{"generatedKeyVersions":{"keyVersions":[1]}}
Disable always-on secrets encryption and decrypt secrets if needed
To disable always-on secrets encryption and decrypt secrets, perform the following steps:
In the user cluster configuration file, to disable always-on secrets encryption, add a `disabled: true` field to the `secretsEncryption` section:

secretsEncryption:
  mode: GeneratedKey
  generatedKey:
    keyVersion: KEY_VERSION
    disabled: true
Update the cluster:
gkectl update cluster --kubeconfig ADMIN_CLUSTER_KUBECONFIG \
    --config USER_CLUSTER_CONFIG
Replace the following:
- `ADMIN_CLUSTER_KUBECONFIG`: the path of the admin cluster kubeconfig file.
- `USER_CLUSTER_CONFIG`: the path of the user cluster configuration file.
Do a rolling restart of the kube-apiserver StatefulSet:

kubectl --kubeconfig ADMIN_CLUSTER_KUBECONFIG \
    rollout restart statefulsets kube-apiserver \
    -n USER_CLUSTER_NAME
Get the manifests of all the secrets in the user cluster, in YAML format:
kubectl --kubeconfig USER_CLUSTER_KUBECONFIG \
    get secrets -A -o yaml > SECRETS_MANIFEST.yaml
So that all secrets are stored in etcd as plaintext, reapply all the secrets in the user cluster:
kubectl --kubeconfig USER_CLUSTER_KUBECONFIG \
    apply -f SECRETS_MANIFEST.yaml
You can now start the migration to Controlplane V2. After the migration completes, you can re-enable always-on secrets encryption on the cluster.
Enable a node pool for use by MetalLB
If you are migrating from the bundled Seesaw load balancer to MetalLB, perform the steps in this section. The cluster is using Seesaw if `loadBalancer.kind: "Seesaw"` is in the user cluster configuration file. If you are migrating from the integrated F5 BIG-IP configuration, skip to the next section, Migrate to Dataplane V2.
Choose a node pool and enable it for use with MetalLB. The migration deploys MetalLB on the nodes in that node pool.
In the `nodePools` section of your user cluster configuration file, choose an existing node pool or add a new one, and set `enableLoadBalancer` to `true`. For example:

nodePools:
- name: pool-1
  replicas: 3
  enableLoadBalancer: true
Update the cluster:
gkectl update cluster --kubeconfig ADMIN_CLUSTER_KUBECONFIG \
    --config USER_CLUSTER_CONFIG
For more information about MetalLB, see Bundled load balancing with MetalLB.
Migrate to Dataplane V2
Before migrating, check whether Dataplane V2 is already enabled on the cluster by running the following command:

kubectl --kubeconfig ADMIN_CLUSTER_KUBECONFIG \
    get onpremusercluster USER_CLUSTER_NAME \
    -n USER_CLUSTER_NAME-gke-onprem-mgmt \
    -o yaml | grep enableDataplaneV2
If Dataplane V2 is already enabled, skip to the next section, Prepare for the load balancer migration.
To migrate to Dataplane V2, you have the following options:

- Upgrade the cluster to 1.31. For detailed steps, see Enable Dataplane V2.
- Update the 1.30 cluster.

In both cases, you need to temporarily remove the `NetworkPolicy` specification as described in the following steps.
To migrate to Dataplane V2, perform the following steps. If you have concerns about temporarily removing the `NetworkPolicy` specification, contact Google Support.

If your cluster is using a `NetworkPolicy`, temporarily remove its specification from the cluster, as follows:
Check whether there's any non-system `NetworkPolicy` applied to your cluster:

kubectl --kubeconfig USER_CLUSTER_KUBECONFIG get networkpolicy -A -o wide | grep -v kube-system
If the output of the prior step was not empty, save each `NetworkPolicy` specification to a file so that you can reapply it after updating the cluster:

kubectl --kubeconfig USER_CLUSTER_KUBECONFIG get networkpolicy NETWORK_POLICY_NAME \
    -n NETWORK_POLICY_NAMESPACE -o yaml > NETWORK_POLICY_NAME.yaml
Replace the following:
- `NETWORK_POLICY_NAME`: the name of the `NetworkPolicy` that you are saving.
- `NETWORK_POLICY_NAMESPACE`: the namespace of the `NetworkPolicy`.
Delete the `NetworkPolicy` by using the following command:

kubectl --kubeconfig USER_CLUSTER_KUBECONFIG delete networkpolicy NETWORK_POLICY_NAME \
    -n NETWORK_POLICY_NAMESPACE
Migrate to Dataplane V2 using these steps:

1. Set `enableDataplaneV2` to `true` in your user cluster configuration file.
2. To enable Dataplane V2, update your cluster:

   gkectl update cluster --kubeconfig ADMIN_CLUSTER_KUBECONFIG \
       --config USER_CLUSTER_CONFIG
If you removed any non-system `NetworkPolicy` specifications in a prior step, then after the update completes, reapply them with this command:

kubectl --kubeconfig USER_CLUSTER_KUBECONFIG apply -f NETWORK_POLICY_NAME.yaml
After completing those steps, Dataplane V2 is enabled. Next, prepare to migrate your cluster to the recommended load balancer and Controlplane V2.
Prepare for the load balancer migration
If your user clusters are using the Seesaw load balancer or integrated F5 BIG-IP, follow the steps in this section to make the needed user cluster configuration file changes. Otherwise, skip to the next section, Prepare for the migration to Controlplane V2.
F5 BIG-IP
If your clusters use the integrated F5 BIG-IP configuration, prepare for the migration to `ManualLB` by making the following changes to the user cluster configuration file:
- Change `loadBalancer.kind` to `"ManualLB"`.
- Keep the same value for the `loadBalancer.vips.ingressVIP` field.
- If you are migrating to Controlplane V2, change the value of the `loadBalancer.vips.controlPlaneVIP` field to the IP address that you allocated. Otherwise, you can keep the same value.
- Delete the entire `loadBalancer.f5BigIP` section.
The following example shows the updated user cluster configuration file, with `kind` changed from `"f5BigIP"` to `"ManualLB"` and the `f5BigIP` section (the `address`, `credentials`, and `partition` fields) deleted:

loadBalancer:
  vips:
    controlPlaneVIP: 192.0.2.5
    ingressVIP: 198.51.100.20
  kind: "ManualLB"
Seesaw
If your user clusters use the Seesaw load balancer, prepare for the migration to MetalLB by performing the steps in the following sections.
Specify address pools
The MetalLB controller does IP address management for Services. So when an application developer creates a Service of type LoadBalancer in a user cluster, they don't have to manually specify an IP address for the Service. Instead, the MetalLB controller chooses an IP address from an address pool that you specify.
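For example, a developer can create a Service like the following without specifying any external IP address; the MetalLB controller assigns one from the configured address pool. The Service name, selector, and ports here are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app        # hypothetical application
spec:
  type: LoadBalancer  # MetalLB assigns the external IP from an address pool
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```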
Consider the maximum number of LoadBalancer Services likely to be active in your user cluster. Then, in the `loadBalancer.metalLB.addressPools` section of your user cluster configuration file, specify enough IP addresses to accommodate those Services.
When specifying address pools, include the ingress VIP for your user cluster in one of the pools. This is because the ingress proxy is exposed by a Service of type LoadBalancer.
Addresses must be in CIDR format or range format. To specify an individual address, use a /32 CIDR, such as `198.51.100.10/32`.
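For example, an address pool section might mix the formats like this. The pool name and addresses are placeholders:

```yaml
loadBalancer:
  metalLB:
    addressPools:
    - name: "address-pool-1"
      addresses:
      - "198.51.100.0/28"                # CIDR format
      - "198.51.100.10/32"               # a single address as a /32 CIDR
      - "198.51.100.80 - 198.51.100.89"  # range format
```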
Update the cluster configuration file
Update the cluster configuration file to remove the Seesaw section and add a MetalLB section, as follows:
- Set `loadBalancer.kind` to `"MetalLB"`.
- You can keep the same value for the `loadBalancer.vips.ingressVIP` field.
- Add the ingress VIP to a MetalLB address pool.
- If you are migrating to Controlplane V2, change the value of the `loadBalancer.vips.controlPlaneVIP` field to the IP address that you allocated. Otherwise, you can keep the same value.
- Remove the `loadBalancer.seesaw` section.
- Add a `loadBalancer.metalLB` section.
The following portion of a user cluster configuration file shows these changes and the MetalLB configuration, which includes:

- An address pool for the MetalLB controller to choose from and assign to Services of type LoadBalancer. The ingress VIP, which in this example is `198.51.100.10`, is in this pool in CIDR format, `198.51.100.10/32`.
- The VIP designated for the Kubernetes API server of the user cluster.
- The ingress VIP that you configured for the ingress proxy.
- A node pool enabled to use MetalLB. The migration deploys MetalLB on the nodes in this node pool.
loadBalancer:
  vips:
    controlPlaneVIP: "198.51.100.50"
    ingressVIP: "198.51.100.10"
  kind: "MetalLB"
  metalLB:
    addressPools:
    - name: "address-pool-1"
      addresses:
      - "198.51.100.10/32"
      - "198.51.100.80 - 198.51.100.89"

The `seesaw` section (the `ipBlockFilePath`, `vrid`, `masterIP`, `cpus`, and `memoryMB` fields) is deleted.
Prepare for the migration to Controlplane V2
If the cluster doesn't have Controlplane V2 enabled:
- Update the user cluster configuration file.
- If the cluster is using manual load balancing (`loadBalancer.kind: "ManualLB"`), also update the configuration on your load balancer.
These steps are described in the following sections.
If the cluster already has Controlplane V2 enabled, skip to the Migrate the user cluster section.
Update the user cluster configuration file
Make the following changes to the existing user cluster configuration file:
- Set `enableControlplaneV2` to `true`.
- Optionally, make the control plane for the Controlplane V2 user cluster highly available (HA). To change from a non-HA to an HA cluster, change `masterNode.replicas` from 1 to 3.
- Add the static IP address (or addresses) for the user cluster control-plane node(s) to the `network.controlPlaneIPBlock.ips` section. The IP addresses for the control-plane nodes must be in the same VLAN as the worker nodes.
- Fill in the `netmask` and `gateway` fields in the `network.controlPlaneIPBlock` section.
- If the `network.hostConfig` section is empty, fill it in.
- Make sure that the `loadBalancer.vips.controlPlaneVIP` field has the new IP address for the control plane VIP. The IP address must be in the same VLAN as the control-plane node IPs.
- If the user cluster uses manual load balancing, set `loadBalancer.manualLB.controlPlaneNodePort` and `loadBalancer.manualLB.konnectivityServerNodePort` to 0. They aren't required when Controlplane V2 is enabled, but they must have a value of 0.
- If the user cluster uses manual load balancing, configure your load balancer as described in the next section.
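Put together, the Controlplane V2 changes might look like the following sketch of a user cluster configuration file for an HA cluster. The IP addresses, netmask, and gateway are placeholders; substitute the addresses that you allocated:

```yaml
enableControlplaneV2: true

network:
  controlPlaneIPBlock:
    netmask: "255.255.255.0"
    gateway: "198.51.100.1"
    ips:
    - ip: "198.51.100.61"
    - ip: "198.51.100.62"
    - ip: "198.51.100.63"

masterNode:
  replicas: 3

loadBalancer:
  vips:
    controlPlaneVIP: "198.51.100.50"
```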
Adjust mappings in your load balancer if needed
If your user cluster is already using manual load balancing, you need to configure some mappings on your load balancer. If you are migrating from the integrated F5 BIG-IP configuration to manual load balancing, you don't need to make any configuration changes on your load balancer and can skip to the next section, Migrate the user cluster.
For each IP address that you specified in the `network.controlPlaneIPBlock` section, configure the following mappings in your load balancer for the control-plane nodes:
(ingressVIP:80) -> (NEW_NODE_IP_ADDRESS:ingressHTTPNodePort)
(ingressVIP:443) -> (NEW_NODE_IP_ADDRESS:ingressHTTPSNodePort)
These mappings are needed for all nodes in the user cluster, both control plane nodes and worker nodes. Because NodePorts are configured on the cluster, Kubernetes opens the NodePorts on all cluster nodes so any node in the cluster can handle data plane traffic.
After you configure the mappings, the load balancer listens for traffic on the IP address that you configured for the user cluster's ingress VIP on standard HTTP and HTTPS ports. The load balancer routes requests to any node in the cluster. After a request is routed to one of the cluster nodes, internal Kubernetes networking routes the request to the destination Pod.
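As an illustration only, the ingress mappings above might look like the following HAProxy fragment. This is a hypothetical sketch: HAProxy is just one example of a manual load balancer, and the node IP addresses and the `30243`/`30879` port values are placeholders; use the `ingressHTTPNodePort` and `ingressHTTPSNodePort` values from your cluster configuration.

```
# Hypothetical HAProxy sketch of the ingress VIP mappings.
# Node IPs and NodePort values are placeholders.
frontend ingress-http
    bind 198.51.100.10:80        # ingressVIP:80
    mode tcp
    default_backend ingress-http-nodes

backend ingress-http-nodes
    mode tcp
    balance roundrobin
    server cp-node-1 198.51.100.61:30243 check       # NEW_NODE_IP_ADDRESS:ingressHTTPNodePort
    server worker-node-1 203.0.113.21:30243 check

frontend ingress-https
    bind 198.51.100.10:443       # ingressVIP:443
    mode tcp
    default_backend ingress-https-nodes

backend ingress-https-nodes
    mode tcp
    balance roundrobin
    server cp-node-1 198.51.100.61:30879 check       # NEW_NODE_IP_ADDRESS:ingressHTTPSNodePort
    server worker-node-1 203.0.113.21:30879 check
```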
Migrate the user cluster
First, carefully review all changes that you made to the user cluster configuration file. All the load balancer and Controlplane V2 settings are immutable except when you are updating the cluster for the migration.
To update the cluster, run this command:
gkectl update cluster --kubeconfig ADMIN_CLUSTER_KUBECONFIG \
--config USER_CLUSTER_CONFIG
Controlplane V2 migration
During the Controlplane V2 migration, the update performs the following actions:
- Creates the control plane of a new cluster with Controlplane V2 enabled.
- Stops the kubeception cluster's Kubernetes control plane.
- Takes an etcd snapshot of the kubeception cluster.
- Powers off the kubeception cluster's user cluster control-plane nodes. Until the migration completes, the nodes are not deleted, which permits failure recovery by falling back to the kubeception cluster.
- Restores the cluster data in the new control plane, using the etcd snapshot created in an earlier step.
- Connects the node pool nodes of the kubeception cluster to the new control plane, which is accessible with the new `controlPlaneVIP`.
- Reconciles the restored user cluster to meet the end state of the cluster with Controlplane V2 enabled.
Note the following:
- There's no downtime for user cluster workloads during migration.
- There is some downtime for the user cluster control plane during migration. Specifically, the control plane is unavailable between stopping the kubeception cluster's Kubernetes control plane and the completion of connecting the nodepool nodes of the kubeception cluster to the new control-plane. (In tests, this downtime was less than 7 minutes, but the actual length depends on your infrastructure).
- At the end of the migration, the user cluster control-plane nodes of the kubeception cluster are deleted. If the admin cluster has `network.ipMode.type` set to `"static"`, you can recycle some of the unused static IP addresses. You can list the admin cluster node objects with `kubectl get nodes -o wide` to see which IP addresses are in use. To recycle unused IP addresses, remove them from the admin cluster configuration file and run `gkectl update admin`.
After the migration
After the update completes, perform the following steps:
Verify that your user cluster is running:
kubectl get nodes --kubeconfig USER_CLUSTER_KUBECONFIG
The output is similar to the following:
cp-vm-1       Ready    control-plane,master   18m
cp-vm-2       Ready    control-plane,master   18m
cp-vm-3       Ready    control-plane,master   18m
worker-vm-1   Ready                           6m7s
worker-vm-2   Ready                           6m6s
worker-vm-3   Ready                           6m14s
If you migrated to Controlplane V2, update the firewall rules on your admin cluster to remove the kubeception user cluster's control-plane nodes.
If you disabled always-on secret encryption, re-enable the feature:

1. In the user cluster configuration file, remove the `disabled: true` field.
2. Update the user cluster:

   gkectl update cluster --kubeconfig ADMIN_CLUSTER_KUBECONFIG \
       --config USER_CLUSTER_CONFIG
If you disabled auto-repair in the admin cluster, re-enable the feature.
1. In your admin cluster configuration file, set `autoRepair.enabled` to `true`.
2. Update the admin cluster:

   gkectl update admin --kubeconfig ADMIN_CLUSTER_KUBECONFIG \
       --config ADMIN_CLUSTER_CONFIG
Load balancer migration
If you migrated the load balancer, verify that the load balancer components are running successfully.
MetalLB migration
If you migrated to MetalLB, verify that the MetalLB components are running successfully:
kubectl --kubeconfig USER_CLUSTER_KUBECONFIG get pods \
--namespace kube-system --selector app=metallb
The output shows Pods for the MetalLB controller and speaker. For example:
metallb-controller-744884bf7b-rznr9 1/1 Running
metallb-speaker-6n8ws 1/1 Running
metallb-speaker-nb52z 1/1 Running
metallb-speaker-rq4pp 1/1 Running
After a successful migration, delete the powered-off Seesaw VMs for the user cluster. You can find the Seesaw VM names in the `vmnames` section of the `seesaw-for-[USERCLUSTERNAME].yaml` file in your configuration directory.
F5 BIG-IP migration
After the migration to manual load balancing, traffic to your clusters isn't interrupted. This is because the existing F5 resources still exist, as you can see by running the following command:
kubectl --kubeconfig CLUSTER_KUBECONFIG api-resources --verbs=list -o name \
    | xargs -n 1 kubectl --kubeconfig CLUSTER_KUBECONFIG get --show-kind \
    --ignore-not-found --selector=onprem.cluster.gke.io/legacy-f5-resource=true -A
The expected output is similar to the following:
Warning: v1 ComponentStatus is deprecated in v1.19+
NAMESPACE NAME TYPE DATA AGE
kube-system secret/bigip-login-sspwrd Opaque 4 14h
NAMESPACE NAME SECRETS AGE
kube-system serviceaccount/bigip-ctlr 0 14h
kube-system serviceaccount/load-balancer-f5 0 14h
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/k8s-bigip-ctlr-deployment 1/1 1 1 14h
kube-system deployment.apps/load-balancer-f5 1/1 1 1 14h
NAME ROLE AGE
clusterrolebinding.rbac.authorization.k8s.io/bigip-ctlr-clusterrole-binding ClusterRole/bigip-ctlr-clusterrole 14h
clusterrolebinding.rbac.authorization.k8s.io/load-balancer-f5-clusterrole-binding ClusterRole/load-balancer-f5-clusterrole 14h
NAME CREATED AT
clusterrole.rbac.authorization.k8s.io/bigip-ctlr-clusterrole 2024-03-25T05:16:40Z
clusterrole.rbac.authorization.k8s.io/load-balancer-f5-clusterrole 2024-03-25T05:16:41Z