# Import from a dump file

Before importing data, you must:

1. [Create a database cluster](/distributed-cloud/hosted/docs/latest/gdch/application/ao-user/db-service#create)
   to import the data to.

2. Upload the dump file to a storage bucket. See
   [Upload objects to storage buckets](/distributed-cloud/hosted/docs/latest/gdch/platform/pa-user/upload-download-storage-objects#upload_objects_to_storage_buckets)
   for instructions.

   The Database Service import service account must have access to the dump file.
   The service account is named
   `postgresql-import-DATABASE_CLUSTER_NAME` or
   `oracle-import-DATABASE_CLUSTER_NAME`, depending on
   the type of database you are importing.

   Replace `DATABASE_CLUSTER_NAME` with the name of the
   database cluster that you are importing data into.
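The service-account name is the engine prefix joined to the cluster name, which you can derive in a script. A minimal sketch with hypothetical values:

```shell
# Illustrative values only; substitute your real engine type and cluster name.
ENGINE="postgresql"                    # or "oracle"
DATABASE_CLUSTER_NAME="sales-cluster"  # hypothetical cluster name
SA_NAME="${ENGINE}-import-${DATABASE_CLUSTER_NAME}"
echo "${SA_NAME}"
```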
You can import a dump file into a database cluster using the
GDC console, the Distributed Cloud CLI, or the API:
### Console

1. Open the **Database cluster overview** page in the GDC console to see
   the cluster that contains the database you are importing.

2. Click **Import**. The **Import data to accounts** panel opens.

3. In the **Source** section of the **Import data to accounts** panel, specify
   the location of the SQL dump file you uploaded previously.

4. In the **Destination** field, specify an existing destination database for the import.

   **Note:** You can leave this field empty if the dump file already specifies a
   destination. If the dump file specifies a destination and you also fill the
   **Destination** field, the **Destination** field overrides the destination
   specified in the dump file.

5. Click **Import**. A banner in the GDC console shows the status of the import.
### gdcloud CLI

1. Before using the Distributed Cloud CLI,
   [install and initialize](/distributed-cloud/hosted/docs/latest/gdch/resources/gdcloud-install) it.
   Then, [authenticate](/distributed-cloud/hosted/docs/latest/gdch/resources/gdcloud-auth) with your
   organization.

2. Run the following command to import a dump file into a database:

   ```
   gdcloud database import sql DATABASE_CLUSTER s3://BUCKET_NAME/sample.dmp \
       --project=PROJECT_NAME
   ```

   Replace the following:

   - `DATABASE_CLUSTER`: the name of the database cluster to import data into.
   - `BUCKET_NAME/sample.dmp`: the location of the dump file.
   - `PROJECT_NAME`: the name of the project that the database cluster is in.

### API

Apply the following `Import` resource:

```yaml
apiVersion: DBENGINE_NAME.dbadmin.gdc.goog/v1
kind: Import
metadata:
  name: IMPORT_NAME
  namespace: USER_PROJECT
spec:
  dbclusterRef: DBCLUSTER_NAME
  dumpStorage:
    s3Options:
      bucket: BUCKET_NAME
      key: DUMP_FILE_PATH
    type: S3
```

Replace the following variables:

- `DBENGINE_NAME`: the name of the database engine. This is one of `alloydbomni`, `postgresql`, or `oracle`.
- `IMPORT_NAME`: the name of the import operation.
- `USER_PROJECT`: the name of the user project where the database cluster to import into is created.
- `DBCLUSTER_NAME`: the name of the database cluster.
- `BUCKET_NAME`: the name of the object storage bucket that stores the import files.
- `DUMP_FILE_PATH`: the object storage path to the stored files.

Last updated 2025-08-25 UTC.
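If you create imports programmatically, you can generate the manifest rather than hand-editing YAML. The helper below is an illustrative sketch, not part of any GDC tooling, and all example names are hypothetical; because YAML is a superset of JSON, the JSON output can be applied like any other manifest (for example, with `kubectl apply -f`).

```python
import json

# Engines accepted by the Import resource's apiVersion, per the table above.
SUPPORTED_ENGINES = ("alloydbomni", "postgresql", "oracle")

def build_import_manifest(engine, import_name, project, cluster, bucket, key):
    """Build an Import resource dict matching the manifest shown above.

    Illustrative sketch only; this helper is not part of any GDC tooling.
    """
    if engine not in SUPPORTED_ENGINES:
        raise ValueError(f"unsupported engine: {engine}")
    return {
        "apiVersion": f"{engine}.dbadmin.gdc.goog/v1",
        "kind": "Import",
        "metadata": {"name": import_name, "namespace": project},
        "spec": {
            "dbclusterRef": cluster,
            "dumpStorage": {
                "s3Options": {"bucket": bucket, "key": key},
                "type": "S3",
            },
        },
    }

# Hypothetical values for illustration.
manifest = build_import_manifest(
    engine="postgresql",
    import_name="import-sales-dump",
    project="my-user-project",
    cluster="sales-cluster",
    bucket="dump-bucket",
    key="dumps/sample.dmp",
)
print(json.dumps(manifest, indent=2))
```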