Install and configure the storage CLI for projects

You can use the gdcloud CLI to manage object storage. This page shows you how to download, install, and configure the package before you work with storage buckets and objects.

Prepare the gdcloud CLI

To use the gdcloud CLI, follow these steps to download, install, and configure the package.

Download the gdcloud CLI

To download the gdcloud CLI, see the gdcloud CLI overview.

Install the gdcloud CLI

To use the storage command tree, you must install the storage dependencies component.

  1. Follow the instructions in Install gdcloud CLI.

  2. To install the storage dependencies component, run the following command:

    gdcloud components install storage-cli-dependencies
    

    For more information on the components install command, see Install gdcloud CLI.
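
    To verify the installation, you can list the installed components. This is a minimal sketch and assumes the gdcloud CLI offers a components list subcommand alongside components install; check Install gdcloud CLI for the components commands available in your release:

    gdcloud components list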

Configure the gdcloud CLI for object storage

You must set the following configuration values to use the gdcloud CLI for object storage.

  1. Replace ACCESS_KEY_ID with the access key ID obtained from the secret when getting access credentials:

    gdcloud config set storage/s3_access_key_id ACCESS_KEY_ID
    
  2. Replace SECRET_ACCESS_KEY with the secret access key obtained from the secret when getting access credentials:

    gdcloud config set storage/s3_secret_access_key SECRET_ACCESS_KEY
    
  3. Replace CA_BUNDLE_FILE with the path to the CA certificate bundle:

    gdcloud config set storage/s3_custom_ca_certs_file CA_BUNDLE_FILE
    
  4. Replace ENDPOINT with the endpoint your Infrastructure Operator (IO) provides:

    gdcloud config set storage/s3_endpoint ENDPOINT
    

    This step differs slightly for dual-zone buckets. Each dual-zone bucket provides three endpoints that you can use to access the bucket. In most situations, the global endpoint is appropriate because it provides automatic failover:

    • Zone1 endpoint: Requests to this endpoint are always served by zone1. If you use this endpoint, you have read-after-write consistency for any objects written to zone1. However, if zone1 goes down, a client must switch to the zone2 or global endpoint to continue reading from or writing to this bucket. If the client runs in a user cluster, this endpoint is only accessible from within zone1.
    • Zone2 endpoint: Requests to this endpoint are always served by zone2. If you use this endpoint, you have read-after-write consistency for any objects written to zone2. However, if zone2 goes down, a client must switch to the zone1 or global endpoint to continue reading from or writing to this bucket. If the client runs in a user cluster, this endpoint is only accessible from within zone2.
    • Global endpoint: Requests to this endpoint are routed to either zone1 or zone2. This option doesn't provide session affinity, so requests made in the same session might arrive in either zone, and there is no read-after-write guarantee for requests made to the global endpoint. The global endpoint provides automatic failover if a zone goes down, so you don't have to change endpoints in your workloads as you would with a zonal endpoint. This endpoint is also accessible from user clusters in all zones.

    Run the following commands to view the global and zonal endpoints for your bucket and choose the one you want to use. A combined configuration sketch follows these commands:

    kgo get buckets $BUCKET_NAME -n $PROJECT_NAME -o jsonpath="{.status.globalEndpoint}"
    
    kgo get buckets $BUCKET_NAME -n $PROJECT_NAME -o jsonpath="{.status.zonalEndpoints}"
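
    For reference, the following sketch ties the configuration steps together by setting all four values in one pass. The placeholder credentials and CA bundle path are hypothetical examples, and reading the global endpoint from the bucket status is an assumption that applies to a dual-zone bucket; substitute the values that apply to your project:

    # Hypothetical placeholder credentials from the secret obtained when getting access credentials.
    gdcloud config set storage/s3_access_key_id EXAMPLEACCESSKEYID
    gdcloud config set storage/s3_secret_access_key example-secret-access-key

    # Hypothetical path to the CA certificate bundle provided by your Infrastructure Operator.
    gdcloud config set storage/s3_custom_ca_certs_file /path/to/ca-bundle.crt

    # For a dual-zone bucket, read the global endpoint from the bucket status and use it
    # as the storage endpoint to take advantage of automatic failover.
    ENDPOINT=$(kgo get buckets $BUCKET_NAME -n $PROJECT_NAME -o jsonpath="{.status.globalEndpoint}")
    gdcloud config set storage/s3_endpoint "$ENDPOINT"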