Add a backup repository
This page describes how to create a backup repository for cluster workloads in Google Distributed Cloud (GDC) air-gapped.
A backup repository represents a compatible storage location for your
backups. A backup repository is also used to store records of
backups, backup plans, restore plans, and restores.
Before you begin
To create a backup repository, you must have the following:
A compatible endpoint available.
A previously created bucket
to use as the backup repository.
You have granted access for the object storage bucket.
The necessary identity and access role:
Organization Backup Admin: manages backup resources such as backup and
restore plans in user clusters. Ask your Organization IAM Admin to grant you
the Organization Backup Admin (organization-backup-admin) role. For more
information, see Role definitions.
Create a repository
Create a repository by using the GDC console or the API.
Console
Sign in to the GDC console.
In the navigation menu, click Backup for Clusters. Ensure no project
is selected in the project selector.
Click Create repository.
Enter a repository name. The description is optional.
In the Main cluster (read/write) list, choose a cluster.
In the Linked clusters (read only) list, choose the linked clusters.
In the S3 URI endpoint field, enter an endpoint containing the fully-qualified
domain name of your object storage site.
In the Bucket name field, enter the fully qualified name of the bucket, which you can find in the status of the GDC bucket custom resource.
In the Bucket region field, enter the region where the bucket was
created.
In the Access Key ID list, enter the access key ID.
In the Access key field, enter the access key.
Click Create.
API
To use the backup and restore APIs, you must configure a valid
ClusterBackupRepository custom resource to be the location of your
backups, and supply the required credentials.
Fetch the secret generated in Grant and obtain storage bucket access.
Add a ClusterBackupRepository custom resource to use these credentials
and apply the new resource to the Management API server.
Backup repositories are cluster-scoped:
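For example:

```yaml
apiVersion: backup.gdc.goog/v1
kind: ClusterBackupRepository
metadata:
  name: user-1-user
  namespace: user-1-user-cluster
spec:
  secretReference:
    namespace: "object-storage-secret-ns"
    name: "object-storage-secret"
  endpoint: "https://objectstorage.google.gdch.test"
  type: "S3"
  s3Options:
    bucket: "fully-qualified-bucket-name"
    region: "us-east-1"
    forcePathStyle: true
  importPolicy: "ReadWrite"
```

This example includes the following values: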
secretReference
A NamespacedName referencing the secret that contains
access credentials for the endpoint.
endpoint
The fully-qualified domain name for the storage system.
type
The type of backup repository. Only the S3 type is
supported.
s3Options
The configuration for the S3 endpoint. This is required if the type is
S3.
bucket: the fully qualified name of the bucket, which you can find in the status of the GDC bucket custom resource.
region: the region of the given endpoint. The region is
storage system specific.
forcePathStyle: whether to force path-style URLs for
objects.
importPolicy
Set to one of the following:
ReadWrite: This repository can be used to schedule
or create backups, backup plans, and restores.
ReadOnly: This repository can only be used to import
and view backups. No new backups or resources can be created in this
repository, but restores can use and reference read-only backups for
restoration. There is no restriction on how often a
backup repository can be used as ReadOnly.
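The secretReference field points at a Kubernetes Secret holding the endpoint credentials. A minimal sketch of what such a Secret could look like follows; the key names and values here are illustrative assumptions, not a documented schema. Use the actual secret generated when you granted access to the storage bucket.

```yaml
# Hypothetical credentials Secret referenced by spec.secretReference.
# The data keys shown here are assumptions for illustration only.
apiVersion: v1
kind: Secret
metadata:
  name: object-storage-secret
  namespace: object-storage-secret-ns
type: Opaque
stringData:
  access-key-id: "EXAMPLE_ACCESS_KEY_ID"
  access-key: "EXAMPLE_SECRET_ACCESS_KEY"
```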
Backup repository import policies
All clusters must have at least one ReadWrite repository to successfully use backup and restore features. ReadOnly repositories are optional, have no
limit, and are used to gain visibility into other cluster backups for
cross-cluster restores.
ReadOnly repositories cannot be used as storage locations for additional
backups or for backup plans within the cluster into which they were imported.
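As a sketch, a second cluster could gain visibility into the same storage location by importing it read-only with a resource like the following. The names reuse the ReadWrite example and the second cluster's namespace is an assumption:

```yaml
# Hypothetical: importing the same bucket as ReadOnly on another cluster.
apiVersion: backup.gdc.goog/v1
kind: ClusterBackupRepository
metadata:
  name: user-1-user-imported
  namespace: user-2-user-cluster  # assumed namespace for the second cluster
spec:
  secretReference:
    namespace: "object-storage-secret-ns"
    name: "object-storage-secret"
  endpoint: "https://objectstorage.google.gdch.test"
  type: "S3"
  s3Options:
    bucket: "fully-qualified-bucket-name"
    region: "us-east-1"
    forcePathStyle: true
  importPolicy: "ReadOnly"  # imports records without claiming the repository
```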
Importing a repository as ReadWrite claims the repository for that cluster,
preventing other clusters from importing the same repository as
ReadWrite. After importing a ReadWrite repository, all records of previous
backups, backup plans, and restores in that repository are imported into the
target cluster as local custom resources.
Importing a repository as ReadOnly does not claim the repository; it only
imports the backups, backup plans, restores, and restore plans. Backup plans in read-only repositories don't schedule backups;
they exist to provide visibility into the backup plans that exist in the cluster you are importing from. Removing a ReadOnly repository cleans up any imported resources from
the cluster and has no effect on the resources in the storage location, because no write operations occur to object storage for read-only repositories.
When a ReadWrite repository is removed from the cluster:
All local custom resources associated with that repository, such as backups
and restores, are removed from the current cluster. However, these resources
are not removed from the storage endpoint.
That cluster's claim on the repository is removed, allowing the repository
to be used as ReadWrite by another cluster.
What's next
Customize backup and restore for an application
Plan a set of backups