Last reviewed 2023-12-20 UTC

The following diagram shows the high-level architecture that is deployed by the blueprint for a single environment. You deploy this architecture across three separate environments: production, non-production, and development.

The blueprint architecture.

This diagram includes the following:

  • Cloud Load Balancing distributes application traffic across regions to Kubernetes service objects. Behind each service is a logical grouping of related pods.
  • Anthos Service Mesh provides secure service-to-service communication between Kubernetes services.
  • Kubernetes services are grouped into tenants, which are represented as Kubernetes namespaces. Tenants are an abstraction that represents multiple users and workloads that operate in a cluster, with separate RBAC for access control. Each tenant also has its own project for tenant-specific cloud resources such as databases, storage buckets, and Pub/Sub subscriptions.
  • Each namespace has its own identity for accessing peer services and cloud resources. The identity is consistent across the same namespace in different clusters because of fleet Workload Identity. Each environment has a separate workload identity pool to mitigate privilege escalation between environments.
  • Each service has a dedicated pipeline that builds and deploys that service. The same pipeline deploys the service into the development environment, then into the non-production environment, and finally into the production environment.
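The per-namespace identity model described above is often expressed as a Kubernetes service account that is linked to an IAM service account through Workload Identity. The following is a minimal sketch of one such pattern; the namespace, service account, and project names (`tenant-a`, `frontend`, `PROJECT_ID`) are illustrative, not part of the blueprint:

```yaml
# A tenant namespace and a per-service Kubernetes service account.
# All names here are hypothetical.
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: frontend
  namespace: tenant-a
  annotations:
    # One common Workload Identity pattern: map the Kubernetes service
    # account to an IAM service account in the tenant's project.
    iam.gke.io/gcp-service-account: frontend@PROJECT_ID.iam.gserviceaccount.com
```

Because fleet Workload Identity uses a single workload identity pool per fleet, the same namespace and service account pair resolves to the same identity in every cluster of an environment's fleet.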

Key architectural decisions for developer platform

The following table describes the architecture decisions that the blueprint implements.

Deployment archetype

| Decision | Reason |
| --- | --- |
| Deploy across multiple regions. | Permit availability of applications during region outages. |

Organizational architecture

| Decision | Reason |
| --- | --- |
| Deploy on top of the enterprise foundation blueprint. | Use the organizational structure and security controls that are provided by the foundation. |
| Use the three environment folders that are set up in the foundation: development, nonproduction, and production. | Provide isolation for environments that have different access controls. |

Developer platform cluster architecture

| Decision | Reason |
| --- | --- |
| Package and deploy applications as containers. | Support separation of responsibilities, efficient operations, and application portability. |
| Run applications on GKE clusters. | Use a managed container service that is built by the company that pioneered containers. |
| Replicate and run application containers in an active-active configuration. | Achieve higher availability and rapid progressive rollouts, improving development velocity. |
| Provision the production environment with two GKE clusters in two different regions. | Achieve higher availability than a single cloud region. |
| Provision the non-production environment with two GKE clusters in two different regions. | Stage changes to cross-regional settings, such as load balancers, before deployment to production. |
| Provision the development environment with a single GKE cluster instance. | Help reduce cost. |
| Configure highly available control planes for each GKE cluster. | Ensure that the cluster control plane is available during upgrade and resizing. |
| Use the concept of sameness across namespaces, services, and identity in each GKE cluster. | Ensure that Kubernetes objects with the same name in different clusters are treated as the same thing. This normalization makes administering fleet resources easier. |
| Enable private IP address spaces for GKE clusters through Private Service Connect access to the control plane and private node pools. | Help protect the Kubernetes cluster API from scanning attacks. |
| Enable administrative access to the GKE clusters through the Connect gateway. | Use one command to fetch credentials for access to multiple clusters. Use groups and third-party identity providers to manage cluster access. |
| Use Cloud NAT to provide GKE pods with access to resources with public IP addresses. | Improve the overall security posture of the cluster, because pods are not directly exposed to the internet but can still access internet-facing resources. |
| Configure nodes to use Container-Optimized OS and Shielded GKE Nodes. | Limit the attack surface of the nodes. |
| Associate each environment with a GKE fleet. | Permit management of sets of GKE clusters as a unit. |
| Use the foundation infrastructure pipeline to deploy the application factory, fleet-scope pipeline, and multi-tenant infrastructure pipeline. | Provide a controllable, auditable, and repeatable mechanism to deploy application infrastructure. |
| Configure GKE clusters using GKE Enterprise configuration and policy management features. | Provide a service that allows configuration-as-code for GKE clusters. |
| Use an application factory to deploy the application CI/CD pipelines used in the blueprint. | Provide a repeatable pattern to deploy application pipelines more easily. |
| Use an application CI/CD pipeline to build and deploy the blueprint application components. | Provide a controllable, auditable, and repeatable mechanism to deploy applications. |
| Configure the application CI/CD pipeline to use Cloud Build, Cloud Deploy, and Artifact Registry. | Use managed build and deployment services to optimize for security, scale, and simplicity. |
| Use immutable containers across environments, and sign the containers with Binary Authorization. | Provide clear code provenance and ensure that code has been tested across environments. |
| Use Google Cloud Observability, which includes Cloud Logging and Cloud Monitoring. | Simplify operations by using an integrated managed service of Google Cloud. |
| Enable Container Threat Detection (a service in Security Command Center) to monitor the integrity of containers. | Use a managed service that enhances security by continually monitoring containers. |
| Control access to the GKE clusters with Kubernetes role-based access control (RBAC), based on Google Groups for GKE. | Enhance security by linking access control to Google Cloud identities. |
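As an illustration of the Google Groups for GKE decision, cluster access can be granted to a group with a standard RBAC binding. A minimal sketch, assuming Google Groups for GKE is configured on the cluster; the group address and namespace are hypothetical:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tenant-a-developers
  namespace: tenant-a    # hypothetical tenant namespace
subjects:
  # Grants access to members of a Google Group rather than to
  # individual users. The group address is illustrative.
  - kind: Group
    name: tenant-a-developers@example.com
    apiGroup: rbac.authorization.k8s.io
roleRef:
  # Built-in ClusterRole that grants read/write access to most
  # namespaced objects, scoped here to the tenant namespace.
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
```

Managing membership in the group then controls cluster access without editing RBAC objects in each cluster.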

Service architecture

| Decision | Reason |
| --- | --- |
| Use a unique Kubernetes service account for each Kubernetes service. This account acts as an IAM service account through the use of Workload Identity. | Enhance security by minimizing the permissions that each service needs to be granted. |
| Expose services through the GKE Gateway API. | Simplify configuration management by providing a declarative, resource-based approach to managing ingress rules and load-balancing configurations. |
| Run services as distributed services through the use of Anthos Service Mesh with Certificate Authority Service. | Provide enhanced security by enforcing authentication between services, and provide automatic fault tolerance by redirecting traffic away from unhealthy services. |
| Use cross-region replication for AlloyDB for PostgreSQL. | Provide high availability in the database layer. |
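To make the Gateway API decision concrete, the following is a minimal sketch of exposing a service through a GKE Gateway. The gateway class shown is GKE's global external multi-cluster class; the names, namespace, TLS secret, and port are hypothetical:

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: external-gateway   # hypothetical name
  namespace: tenant-a      # hypothetical tenant namespace
spec:
  gatewayClassName: gke-l7-global-external-managed-mc  # multi-cluster, global external
  listeners:
    - name: https
      protocol: HTTPS
      port: 443
      tls:
        mode: Terminate
        certificateRefs:
          - name: frontend-tls-cert   # hypothetical TLS Secret
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: frontend-route
  namespace: tenant-a
spec:
  parentRefs:
    - name: external-gateway
  rules:
    - backendRefs:
        - name: frontend   # hypothetical Kubernetes Service
          port: 8080
```

The route and listener are plain Kubernetes resources, so ingress rules and load-balancing configuration can be reviewed and deployed through the same pipelines as the rest of the application configuration.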

Network architecture

| Decision | Reason |
| --- | --- |
| Configure Shared VPC instances in each environment and create GKE clusters in service projects. | Provide centralized network configuration management while maintaining separation of environments. |
| Use Cloud Load Balancing in a multi-cluster, multi-region configuration. | Provide a single anycast IP address to access regionalized GKE clusters for high availability and low-latency services. |
| Use HTTPS connections for client access to services. Redirect any client HTTP requests to HTTPS. | Help protect sensitive data in transit and help prevent person-in-the-middle attacks. |
| Use Certificate Manager to manage public certificates. | Manage certificates in a unified way. |
| Protect the web interface with Google Cloud Armor. | Enhance security by protecting against common web application vulnerabilities and volumetric attacks. |
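The HTTP-to-HTTPS redirect decision can be expressed declaratively with a Gateway API filter. A minimal sketch, assuming a Gateway with a listener named `http` on port 80; the gateway, listener, and namespace names are hypothetical:

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: http-to-https-redirect
  namespace: tenant-a        # hypothetical namespace
spec:
  parentRefs:
    - name: external-gateway # hypothetical Gateway
      sectionName: http      # attach only to the port-80 listener
  rules:
    - filters:
        # Redirect every plain-HTTP request to the HTTPS scheme.
        - type: RequestRedirect
          requestRedirect:
            scheme: https
            statusCode: 301
```

All client traffic then reaches services only over HTTPS, which supports the data-in-transit protections described above.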

Your decisions might vary from the blueprint. For information about alternatives, see Alternatives to default recommendations.

What's next