This page explains how to deploy workloads that use the Stream Control Transmission Protocol (SCTP) on Google Kubernetes Engine (GKE) Standard clusters.
GKE supports SCTP through Cilium. Because GKE Dataplane V2 is implemented using Cilium, you can use SCTP only on clusters that have GKE Dataplane V2 enabled. With SCTP support, you can enable direct SCTP communication for Pod-to-Pod and Pod-to-Service traffic. To learn more, see SCTP support on Cilium.
This page is for Operators and Developers who provision and configure cloud resources and deploy apps and services. To learn more about common roles and example tasks referenced in Google Cloud content, see Common GKE Enterprise user roles and tasks.
Before you begin
Before you start, make sure you have performed the following tasks:
- Enable the Google Kubernetes Engine API.
- If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running `gcloud components update`.
Requirements and limitations
SCTP support on GKE Standard clusters has the following requirements and limitations:
- Your cluster must run GKE version 1.32.2-gke.1297000 or later.
- Cluster nodes must use Ubuntu node images. SCTP is not supported for Container-Optimized OS images.
- To enable SCTP support, ensure that your Ubuntu-based container images and the underlying GKE node OS have the `sctp` kernel module loaded (see the example check after this list).
- You can't use SCTP on clusters that have multi-network support for Pods enabled.
- The setup time for an SCTP association can take longer than the setup time for a TCP connection. Design your applications to handle potential delays while associations are established.
- To learn more about what Cilium supports and doesn't support with SCTP, see the Cilium documentation.
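One way to confirm that the `sctp` kernel module is available, assuming you can connect to a node (for example, over SSH), is to check the loaded modules:

```
# List loaded kernel modules and filter for SCTP.
lsmod | grep sctp

# If the module is not loaded, load it manually (requires root).
sudo modprobe sctp
```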
Deploy workloads with SCTP
Test your deployment thoroughly in a non-production environment before you deploy workloads to production.
From GKE version 1.32.2-gke.1297000, SCTP is enabled by default in clusters that use GKE Dataplane V2 and Ubuntu node images. To deploy workloads with SCTP, complete the following steps:
To create a cluster with GKE Dataplane V2 and Ubuntu images, run the following command:
```
gcloud container clusters create CLUSTER_NAME \
    --region=REGION \
    --cluster-version=CLUSTER_VERSION \
    --enable-dataplane-v2 \
    --image-type=ubuntu_containerd
```
Replace the following values:
- `CLUSTER_NAME`: the name of your cluster.
- `REGION`: the Google Cloud region in which the cluster is created.
- `CLUSTER_VERSION`: the GKE version, which must be 1.32.2-gke.1297000 or later.
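Optionally, confirm that the new cluster uses GKE Dataplane V2 by checking the cluster's datapath provider, which should report ADVANCED_DATAPATH when Dataplane V2 is enabled:

```
gcloud container clusters describe CLUSTER_NAME \
    --region=REGION \
    --format="value(networkConfig.datapathProvider)"
```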
To containerize the application, ensure that your container image includes an application that is configured to use SCTP. You can use any application that supports SCTP, such as a custom application.
Use a Dockerfile to containerize the application, assuming you use Docker, and then build and push the image to a container registry like Artifact Registry. For more information about how a Dockerfile works, see Dockerfile reference in the Docker documentation.
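The exact contents depend on your application. The following Dockerfile is only a minimal sketch that runs a socat-based SCTP echo server; the listening port 5000 is a placeholder that must match the PORT value you use in the manifest later:

```
# Minimal sketch: an SCTP echo server image (port 5000 is a placeholder).
FROM ubuntu:24.04

# Install socat, which supports SCTP listeners.
RUN apt-get update && apt-get install -y socat && rm -rf /var/lib/apt/lists/*

EXPOSE 5000/sctp

# Accept SCTP associations on port 5000 and echo received data back.
CMD ["socat", "SCTP-LISTEN:5000,fork,reuseaddr", "PIPE"]
```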
To create a Deployment and a Service, save the following manifest as `sctp-deployment.yaml`:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sctp-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sctp-app
  template:
    metadata:
      labels:
        app: sctp-app
    spec:
      containers:
      - name: sctp-container
        image: CONTAINER_IMAGE
        ports:
        - containerPort: PORT
          protocol: SCTP
---
apiVersion: v1
kind: Service
metadata:
  name: sctp-service
spec:
  selector:
    app: sctp-app
  ports:
  - protocol: SCTP
    port: PORT
    targetPort: PORT
  type: ClusterIP
```
Replace the following:
- `CONTAINER_IMAGE`: the container image you built in the preceding step.
- `PORT`: the SCTP port and target port numbers of the application. The values for `port` and `targetPort` must be the same.
To apply the Deployment and Service, run the following command:
kubectl apply -f sctp-deployment.yaml
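Optionally, confirm that the Deployment's Pod is running before you test connectivity (the `app=sctp-app` label comes from the manifest above):

```
kubectl get pods -l app=sctp-app
```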
To verify SCTP connectivity for the Service, create a client Pod in the same cluster and run the following command. Replace PORT with your application's SCTP port:

```
kubectl run sctp-client \
    --image=ubuntu:latest \
    --namespace=default \
    -it --rm \
    --command -- bash -c 'apt-get update && apt-get install -y socat && (echo "Hello, SCTP!"; sleep 1) | socat - SCTP:sctp-service:PORT'
```
The output is similar to the following:
```
Preparing to unpack .../socat_1.8.0.0-4build3_amd64.deb ...
Setting up socat (1.8.0.0-4build3) ...
Hello, SCTP!
```
Troubleshooting
If you experience issues with SCTP connectivity, follow this guidance to help determine the source of the issue:
Check Pod logs. To check the logs of your application for any errors, run the following command:
kubectl logs POD_NAME
These logs can help you identify what caused the Pod to crash.
Check the status of the SCTP Service object:
kubectl describe service SCTP_SERVICE_NAME
Check your network policies. Network policies can restrict SCTP traffic. Ensure that your network policies allow the necessary SCTP traffic for your applications.
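For example, a policy that admits SCTP ingress to the sample application might look like the following sketch; the policy name is illustrative, and PORT must match your application's SCTP port:

```
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-sctp-ingress
spec:
  podSelector:
    matchLabels:
      app: sctp-app
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - protocol: SCTP
      port: PORT
```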
Check the status of GKE Dataplane V2. To verify that GKE Dataplane V2 is enabled on your cluster, run the following command:
kubectl -n kube-system get pods -l k8s-app=cilium -o wide
Verify that the output includes Pods with the prefix `anetd-`. anetd is the networking controller for GKE Dataplane V2.

To improve throughput, increase the `sysctl` parameters `net.core.wmem_default` and `net.core.rmem_default` to a larger value, for example, 4194304 (4 MB). For more information, see Sysctl configuration options.

You might face issues if you use Network Address Translation (NAT) with SCTP in GKE. For more information about what Cilium supports with SCTP, see the Cilium documentation.
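To set `net.core.wmem_default` and `net.core.rmem_default` on your nodes, one option is node system configuration. The following is only a sketch; verify on the Sysctl configuration options page that these keys are configurable for your GKE version, then pass the file with the `--system-config-from-file` flag when you create a node pool:

```
# node-system-config.yaml (sketch; confirm these sysctls are supported before use)
linuxConfig:
  sysctl:
    net.core.rmem_default: "4194304"
    net.core.wmem_default: "4194304"
```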
SCTP packets are subject to the Maximum Transmission Unit (MTU) of the network. Ensure that your network's MTU is sufficient for your SCTP traffic.
The performance of SCTP can be affected by factors such as network latency, packet loss, and kernel tuning. Monitor the performance of your application's SCTP and adjust the settings as needed.
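One way to observe SCTP behavior is to read the kernel's per-network-namespace SCTP counters from inside the application Pod; this assumes the `sctp` module is loaded, and the exact counters depend on the kernel version:

```
# Show SCTP protocol counters (associations, retransmissions, and so on).
cat /proc/net/sctp/snmp

# List current SCTP associations in the Pod's network namespace.
cat /proc/net/sctp/assocs
```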
What's next
- Learn about SCTP support in the Cilium documentation.
- Learn how to enable GKE Dataplane V2 on your cluster.