Sync from Cloud SQL
To ingest data from Cloud SQL, use the following steps to set up
Cloud SQL access, create a data store, and ingest data.
Set up staging bucket access for Cloud SQL instances
When you ingest data from Cloud SQL, the data is first staged in a
Cloud Storage bucket. Follow these steps to give a Cloud SQL
instance access to Cloud Storage buckets.
1. In the Google Cloud console, go to the SQL page.
2. Click the Cloud SQL instance that you plan to import from.
3. Copy the identifier for the instance's service account, which looks like an
   email address, for example,
   p9876-abcd33f@gcp-sa-cloud-sql.iam.gserviceaccount.com.
4. Go to the IAM & Admin page.
5. Click Grant access.
6. For New principals, enter the instance's service account identifier and
   select the Cloud Storage > Storage Admin role.
7. Click Save.

Next:
- If your Cloud SQL data is in the same project as Agentspace,
  go to Import data from Cloud SQL.
- If your Cloud SQL data is in a different project than your
  Agentspace project, go to Set up Cloud SQL access from a different project.
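If you prefer scripting to the console, the same Storage Admin grant can be made with the gcloud CLI. This is a minimal sketch that only prints the command for review before you run it; the project ID and service-account identifier are hypothetical placeholders.

```shell
# Hypothetical values; substitute your own project ID and the
# service-account identifier copied from the Cloud SQL instance page.
PROJECT_ID="my-project"
SQL_SERVICE_ACCOUNT="p9876-abcd33f@gcp-sa-cloud-sql.iam.gserviceaccount.com"

# Build the gcloud command that grants the Storage Admin role to the
# instance's service account, and print it for review before running.
CMD="gcloud projects add-iam-policy-binding ${PROJECT_ID} --member=serviceAccount:${SQL_SERVICE_ACCOUNT} --role=roles/storage.admin"
echo "${CMD}"
```

Run the printed command only after confirming the project and principal are correct, because Storage Admin is a broad role.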
Set up Cloud SQL access from a different project

To give Agentspace access to Cloud SQL data that's in a
different project, follow these steps:
1. Replace the PROJECT_NUMBER variable in the following identifier with your
   Agentspace project number, and then copy it. This is your
   Agentspace service account identifier:

   service-PROJECT_NUMBER@gcp-sa-discoveryengine.iam.gserviceaccount.com

2. Go to the IAM & Admin page.
3. On the IAM & Admin page, switch to your Cloud SQL project and
   click Grant access.
4. For New principals, enter the identifier for the service account and
   select the Cloud SQL > Cloud SQL Viewer role.
5. Click Save.

Next, go to Import data from Cloud SQL.
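As a quick sketch, the service account identifier can also be assembled in a shell session; the project number below is a hypothetical placeholder.

```shell
# Hypothetical Agentspace project number; substitute your own.
PROJECT_NUMBER="123456789"

# The Agentspace service account identifier follows this fixed pattern.
SERVICE_ACCOUNT="service-${PROJECT_NUMBER}@gcp-sa-discoveryengine.iam.gserviceaccount.com"
echo "${SERVICE_ACCOUNT}"
```

Paste the printed identifier into the New principals field when granting access.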
Import data from Cloud SQL

Console

To use the console to ingest data from Cloud SQL, follow these
steps:

1. In the Google Cloud console, go to the Agentspace page.
2. Go to the Data Stores page.
3. Click New data store.
4. On the Source page, select Cloud SQL.
5. Specify the project ID, instance ID, database ID, and table ID of the data
   that you plan to import.
6. Click Browse, choose an intermediate Cloud Storage location to
   export data to, and then click Select. Alternatively, enter the location
   directly in the gs:// field.
7. Select whether to turn on serverless export. Serverless export incurs
   additional cost. For information about serverless export, see Minimize the
   performance impact of exports in the Cloud SQL documentation.
8. Click Continue.
9. Choose a region for your data store.
10. Enter a name for your data store.
11. Click Create.
12. To check the status of your ingestion, go to the Data Stores page
    and click your data store name to see details about it on its Data page.
    When the status column on the Activity tab changes from In progress
    to Import completed, the ingestion is complete.

Depending on the size of your data, ingestion can take several
minutes or several hours.
REST
To use the command line to create a data store and ingest data from
Cloud SQL, follow these steps:

1. Create a data store.

   curl -X POST \
   -H "Authorization: Bearer $(gcloud auth print-access-token)" \
   -H "Content-Type: application/json" \
   -H "X-Goog-User-Project: PROJECT_ID" \
   "https://discoveryengine.googleapis.com/v1alpha/projects/PROJECT_ID/locations/global/collections/default_collection/dataStores?dataStoreId=DATA_STORE_ID" \
   -d '{
     "displayName": "DISPLAY_NAME",
     "industryVertical": "GENERIC",
     "solutionTypes": ["SOLUTION_TYPE_SEARCH"]
   }'

   Replace the following:

   PROJECT_ID: the ID of your project.
   DATA_STORE_ID: the ID of the data store. The ID can
   contain only lowercase letters, digits, underscores, and hyphens.
   DISPLAY_NAME: the display name of the data store. This might be
   displayed in the Google Cloud console.

2. Import data from Cloud SQL.

   curl -X POST \
   -H "Authorization: Bearer $(gcloud auth print-access-token)" \
   -H "Content-Type: application/json" \
   "https://discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/global/collections/default_collection/dataStores/DATA_STORE_ID/branches/0/documents:import" \
   -d '{
     "cloudSqlSource": {
       "projectId": "SQL_PROJECT_ID",
       "instanceId": "INSTANCE_ID",
       "databaseId": "DATABASE_ID",
       "tableId": "TABLE_ID",
       "gcsStagingDir": "STAGING_DIRECTORY"
     },
     "reconciliationMode": "RECONCILIATION_MODE",
     "autoGenerateIds": "AUTO_GENERATE_IDS",
     "idField": "ID_FIELD"
   }'

   Replace the following:

   PROJECT_ID: the ID of your Agentspace project.
   DATA_STORE_ID: the ID of the data store. The ID can
   contain only lowercase letters, digits, underscores, and hyphens.
   SQL_PROJECT_ID: the ID of your Cloud SQL
   project.
   INSTANCE_ID: the ID of your Cloud SQL instance.
   DATABASE_ID: the ID of your Cloud SQL database.
   TABLE_ID: the ID of your Cloud SQL table.
STAGING_DIRECTORY: optional. A Cloud Storage
directory—for example,
gs://<your-gcs-bucket>/directory/import_errors.
RECONCILIATION_MODE: optional. Values are FULL and
INCREMENTAL. Default is INCREMENTAL. Specifying INCREMENTAL
causes an incremental refresh of data from Cloud SQL to your
data store. This performs an upsert operation, which adds new documents and
replaces existing documents with updated documents that have the same ID.
Specifying FULL causes a full rebase of the documents in your data store:
new and updated documents are added, and documents that are not in
Cloud SQL are removed. FULL mode is helpful if you want to
automatically delete documents that you no longer need.
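The cloudSqlSource fields above map one-to-one onto the JSON body of the documents:import request. As a sketch, you can assemble the body in the shell and validate it before passing it to curl with -d; all values here are hypothetical placeholders.

```shell
# Hypothetical placeholder values; substitute your own.
SQL_PROJECT_ID="my-sql-project"
INSTANCE_ID="my-instance"
DATABASE_ID="my-database"
TABLE_ID="my-table"
RECONCILIATION_MODE="INCREMENTAL"

# Assemble the request body for the documents:import call.
BODY=$(cat <<EOF
{
  "cloudSqlSource": {
    "projectId": "${SQL_PROJECT_ID}",
    "instanceId": "${INSTANCE_ID}",
    "databaseId": "${DATABASE_ID}",
    "tableId": "${TABLE_ID}"
  },
  "reconciliationMode": "${RECONCILIATION_MODE}"
}
EOF
)

# Confirm the body is valid JSON before sending it.
echo "${BODY}" | python3 -m json.tool > /dev/null && echo "valid JSON"
```

Checking the body locally catches quoting mistakes (for example, an unexpanded variable or a stray comma) before the API rejects the request.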
Next steps
To attach your data store to an app, create an app and select your data store
following the steps in
Create a search app.
To preview how your search results appear after your app and data store are
set up, see
Preview search results.
Last updated 2025-09-03 UTC.