Sourcedb to Spanner template

The SourceDB to Spanner template is a batch pipeline that copies data from a relational database into an existing Spanner database. This pipeline uses JDBC to connect to the relational database. You can use this template to copy data from any relational database with available JDBC drivers into Spanner. The template currently supports only a limited set of MySQL data types.

For an extra layer of protection, you can also pass in a Cloud KMS key along with a Base64-encoded username, password, and connection string parameters encrypted with the Cloud KMS key. See the Cloud KMS API encryption endpoint for additional details on encrypting your username, password, and connection string parameters.
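
For example, here is a minimal sketch of preparing an encrypted, Base64-encoded username with the gcloud CLI. The key ring and key names (my-keyring, my-key) and the sample username are placeholders, not values defined by this template:

# Encrypt the plaintext username with Cloud KMS, then Base64-encode the ciphertext.
echo -n "my-db-user" | gcloud kms encrypt \
    --location=global \
    --keyring=my-keyring \
    --key=my-key \
    --plaintext-file=- \
    --ciphertext-file=- | base64

You would pass the resulting Base64 string as the username parameter, and repeat the same steps for the password and the connection string.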

Pipeline requirements

  • The JDBC drivers for the relational database must be available.
  • The Spanner tables must exist before pipeline execution.
  • The Spanner tables must have a compatible schema.
  • The relational database must be accessible from the subnet where Dataflow runs (see the example after this list).
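
For example, if the database is reachable only from a specific VPC subnet, you can place the Dataflow workers on that subnet when you later launch the job (see Run the template). This is a sketch only; the network and subnetwork names (my-vpc, my-subnet) are placeholders:

# Launch the template with the workers attached to a specific VPC subnetwork.
gcloud dataflow flex-template run sourcedb-to-spanner-job \
    --template-file-gcs-location=gs://dataflow-templates-REGION_NAME/VERSION/flex/Sourcedb_to_Spanner_Flex \
    --region=REGION_NAME \
    --network=my-vpc \
    --subnetwork=regions/REGION_NAME/subnetworks/my-subnet \
    --parameters sourceConfigURL=SOURCE_CONFIG_URL,instanceId=INSTANCE_ID,databaseId=DATABASE_ID,projectId=PROJECT_ID,outputDirectory=OUTPUT_DIRECTORY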

Template parameters

Parameter Description
sourceConfigURL The JDBC connection URL string. For example, jdbc:mysql://127.4.5.30:3306/my-db?autoReconnect=true&maxReconnects=10&unicode=true&characterEncoding=UTF-8 or the shard config.
instanceId The destination Cloud Spanner instance.
databaseId The destination Cloud Spanner database.
projectId The ID of the Cloud Spanner project.
outputDirectory The directory where failed, skipped, and filtered records from the migration are written.
jdbcDriverJars Optional: The comma-separated list of driver JAR files. For example: gs://your-bucket/driver_jar1.jar,gs://your-bucket/driver_jar2.jar. Defaults to empty.
jdbcDriverClassName Optional: The JDBC driver class name. For example: com.mysql.jdbc.Driver. Defaults to: com.mysql.jdbc.Driver.
username Optional: The username to be used for the JDBC connection. Defaults to empty.
password Optional: The password to be used for the JDBC connection. Defaults to empty.
tables Optional: Tables to migrate from source. Defaults to empty.
numPartitions Optional: The number of partitions. This value, together with the lower and upper bounds, forms the partition stride for the generated WHERE clause expressions that split the partition column evenly. When the input is less than 1, the number is set to 1. Defaults to: 0.
spannerHost Optional: The Cloud Spanner endpoint to call in the template. For example: https://batch-spanner.googleapis.com. Defaults to: https://batch-spanner.googleapis.com.
maxConnections Optional: Configures the JDBC connection pool on each worker with the maximum number of connections. Use a negative number for no limit. For example: -1. Defaults to: 0.
sessionFilePath Optional: Session path in Cloud Storage that contains mapping information from Spanner Migration Tool. Defaults to empty.
transformationJarPath Optional: Custom jar location in Cloud Storage that contains the custom transformation logic for processing records. Defaults to empty.
transformationClassName Optional: The fully qualified class name that contains the custom transformation logic. This field is required if transformationJarPath is specified. Defaults to empty.
transformationCustomParameters Optional: String containing any custom parameters to be passed to the custom transformation class. Defaults to empty.
disabledAlgorithms Optional: Comma-separated list of algorithms to disable. If this value is set to none, no algorithm is disabled. Use this parameter with caution, because the algorithms disabled by default might have vulnerabilities or performance issues. For example: SSLv3, RC4.
extraFilesToStage Optional: Comma-separated Cloud Storage paths or Secret Manager secrets for files to stage in the worker. These files are saved in the /extra_files directory in each worker. For example: gs://<BUCKET>/file.txt,projects/<PROJECT_ID>/secrets/<SECRET_ID>/versions/<VERSION_ID>.
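
Optional parameters are passed in the same comma-separated --parameters string as the required ones when you launch the template with the gcloud CLI (see Run the template below). As a sketch, with placeholder bucket, driver, and credential values that are not part of this template's documentation:

# Example of combining required and optional parameters in one launch command.
gcloud dataflow flex-template run sourcedb-to-spanner-job \
    --template-file-gcs-location=gs://dataflow-templates-REGION_NAME/VERSION/flex/Sourcedb_to_Spanner_Flex \
    --region=REGION_NAME \
    --parameters sourceConfigURL=SOURCE_CONFIG_URL,instanceId=INSTANCE_ID,databaseId=DATABASE_ID,projectId=PROJECT_ID,outputDirectory=OUTPUT_DIRECTORY,jdbcDriverJars=gs://my-bucket/mysql-connector-java-8.0.33.jar,jdbcDriverClassName=com.mysql.jdbc.Driver,username=my-user,password=my-password,numPartitions=100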

Run the template

Console

  1. Go to the Dataflow Create job from template page.
  2. In the Job name field, enter a unique job name.
  3. Optional: For Regional endpoint, select a value from the drop-down menu. The default region is us-central1.

    For a list of regions where you can run a Dataflow job, see Dataflow locations.

  4. From the Dataflow template drop-down menu, select the Sourcedb to Spanner template.
  5. In the provided parameter fields, enter your parameter values.
  6. Click Run job.

gcloud CLI

In your shell or terminal, run the template:

gcloud dataflow flex-template run JOB_NAME \
    --template-file-gcs-location=gs://dataflow-templates-REGION_NAME/VERSION/flex/Sourcedb_to_Spanner_Flex \
    --project=PROJECT_ID \
    --region=REGION_NAME \
    --parameters \
       sourceConfigURL=SOURCE_CONFIG_URL,\
       instanceId=INSTANCE_ID,\
       databaseId=DATABASE_ID,\
       projectId=PROJECT_ID,\
       outputDirectory=OUTPUT_DIRECTORY

Replace the following:

  • JOB_NAME: a unique job name of your choice
  • VERSION: the version of the template that you want to use

    You can use the following values: latest to use the latest version of the template, or a specific version name to use a dated release of the template.

  • REGION_NAME: the region where you want to deploy your Dataflow job—for example, us-central1
  • SOURCE_CONFIG_URL: the URL used to connect to the source database host. It can be either of the following: (1) the JDBC connection URL, which must contain the host, port, and source database name, and can optionally contain properties such as autoReconnect and maxReconnects, in the format `jdbc:mysql://{host}:{port}/{dbName}?{parameters}`; or (2) the shard config path
  • INSTANCE_ID: the Cloud Spanner instance ID
  • DATABASE_ID: the Cloud Spanner database ID
  • PROJECT_ID: the Cloud Spanner project ID
  • OUTPUT_DIRECTORY: the output directory for failed, skipped, and filtered events
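
For example, a filled-in invocation might look like the following. The job name, host address, instance, database, project, and bucket are hypothetical placeholders, not values from your environment:

# Minimal example launch with the required parameters filled in.
gcloud dataflow flex-template run sourcedb-to-spanner-job \
    --template-file-gcs-location=gs://dataflow-templates-us-central1/latest/flex/Sourcedb_to_Spanner_Flex \
    --project=my-project \
    --region=us-central1 \
    --parameters sourceConfigURL='jdbc:mysql://10.1.2.3:3306/my-db',instanceId=my-instance,databaseId=my-database,projectId=my-project,outputDirectory=gs://my-bucket/output/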

API

To run the template using the REST API, send an HTTP POST request. For more information on the API and its authorization scopes, see projects.templates.launch.

POST https://dataflow.googleapis.com/v1b3/projects/PROJECT_ID/locations/LOCATION/flexTemplates:launch
{
   "launchParameter": {
     "jobName": "JOB_NAME",
     "parameters": {
       "sourceConfigURL": "SOURCE_CONFIG_URL",
       "instanceId": "INSTANCE_ID",
       "databaseId": "DATABASE_ID",
       "projectId": "PROJECT_ID",
       "outputDirectory": "OUTPUT_DIRECTORY",
     },
     "containerSpecGcsPath": "gs://dataflow-templates-LOCATION/VERSION/flex/Sourcedb_to_Spanner_Flex",
     "environment": { "maxWorkers": "10" }
  }
}

Replace the following:

  • PROJECT_ID: the Google Cloud project ID where you want to run the Dataflow job
  • JOB_NAME: a unique job name of your choice
  • VERSION: the version of the template that you want to use

    You can use the following values: latest to use the latest version of the template, or a specific version name to use a dated release of the template.

  • LOCATION: the region where you want to deploy your Dataflow job—for example, us-central1
  • SOURCE_CONFIG_URL: the URL used to connect to the source database host. It can be either of the following: (1) the JDBC connection URL, which must contain the host, port, and source database name, and can optionally contain properties such as autoReconnect and maxReconnects, in the format `jdbc:mysql://{host}:{port}/{dbName}?{parameters}`; or (2) the shard config path
  • INSTANCE_ID: the Cloud Spanner instance ID
  • DATABASE_ID: the Cloud Spanner database ID
  • PROJECT_ID: the Cloud Spanner project ID
  • OUTPUT_DIRECTORY: the output directory for failed, skipped, and filtered events
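
As a sketch, you can send this request with curl, using your gcloud credentials for authorization. The file name request.json is an assumption: it stands for a local file containing the JSON body above with the placeholders already replaced:

# Launch the flex template through the REST API.
curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d @request.json \
    "https://dataflow.googleapis.com/v1b3/projects/PROJECT_ID/locations/LOCATION/flexTemplates:launch"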