The SQL Server to BigQuery template is a batch pipeline that copies data from a SQL Server table into an existing BigQuery table. This pipeline uses JDBC to connect to SQL Server. For an extra layer of protection, you can also pass in a Cloud KMS key along with Base64-encoded username, password, and connection string parameters encrypted with the Cloud KMS key. For more information about encrypting your username, password, and connection string parameters, see the Cloud KMS API encryption endpoint.
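For instance, the following sketch Base64-encodes a connection string and encrypts it with the Cloud KMS encrypt endpoint; the project, key ring, key, and connection string values are placeholders, and the ciphertext field of the response is what you would pass as the connectionURL parameter.

# Sketch only: encrypt a JDBC connection string for use with this template.
# The project, key ring, key, and connection string below are placeholders; replace them with your own values.
PLAINTEXT=$(echo -n "jdbc:sqlserver://some-host:1433;databaseName=sampledb" | base64)
curl -s -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d "{\"plaintext\": \"${PLAINTEXT}\"}" \
  "https://cloudkms.googleapis.com/v1/projects/your-project/locations/global/keyRings/your-keyring/cryptoKeys/your-key:encrypt"
# The "ciphertext" value in the JSON response is the encrypted, Base64-encoded string to pass to the pipeline.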
Pipeline requirements
- The BigQuery table must exist before pipeline execution.
- The BigQuery table must have a compatible schema.
- The relational database must be accessible from the subnet where Dataflow runs.
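To satisfy the first two requirements, you can create the destination table with a compatible schema ahead of time, for example with the BigQuery bq CLI. In this sketch, the project, dataset, table, and column names are placeholders that you would replace with values matching your source table.

# Sketch only: create the destination BigQuery table before running the pipeline.
bq mk --table my-project:my_dataset.sample_table id:INTEGER,name:STRING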
Template parameters
Required parameters
- driverJars : The comma-separated list of driver JAR files. (Example: gs://your-bucket/driver_jar1.jar,gs://your-bucket/driver_jar2.jar).
- driverClassName : The JDBC driver class name. (Example: com.mysql.jdbc.Driver).
- connectionURL : The JDBC connection URL string. For example, jdbc:mysql://some-host:3306/sampledb. Can be passed in as a string that's Base64-encoded and then encrypted with a Cloud KMS key. Note the difference between an Oracle non-RAC database connection string (jdbc:oracle:thin:@some-host:<port>:<sid>) and an Oracle RAC database connection string (jdbc:oracle:thin:@//some-host[:<port>]/<service_name>). (Example: jdbc:mysql://some-host:3306/sampledb). For a SQL Server example, see the sketch after this list.
- outputTable : The BigQuery table location to write the output to. The name should be in the format <project>:<dataset>.<table_name>. The table's schema must match the input objects.
- bigQueryLoadingTemporaryDirectory : The temporary directory for the BigQuery loading process. (Example: gs://your-bucket/your-files/temp_dir).
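The connectionURL and driverClassName examples above use generic MySQL-style values. Because this template reads from SQL Server, you would typically stage the Microsoft JDBC driver JAR and use its class name and URL format instead; the bucket path, host, port, and database name below are placeholders.

driverJars=gs://your-bucket/mssql-jdbc.jar
driverClassName=com.microsoft.sqlserver.jdbc.SQLServerDriver
connectionURL=jdbc:sqlserver://some-host:1433;databaseName=sampledb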
Optional parameters
- connectionProperties : Properties string to use for the JDBC connection. Format of the string must be [propertyName=property;]*. (Example: unicode=true;characterEncoding=UTF-8).
- username : The username to be used for the JDBC connection. Can be passed in as a Base64-encoded string encrypted with a Cloud KMS key.
- password : The password to be used for the JDBC connection. Can be passed in as a Base64-encoded string encrypted with a Cloud KMS key.
- query : The query to be run on the source to extract the data. Either query OR both table AND PartitionColumn must be specified. (Example: select * from sampledb.sample_table).
- KMSEncryptionKey : Cloud KMS Encryption Key to decrypt the username, password, and connection string. If Cloud KMS key is passed in, the username, password, and connection string must all be passed in encrypted. (Example: projects/your-project/locations/global/keyRings/your-keyring/cryptoKeys/your-key).
- useColumnAlias : If enabled (set to true), the pipeline uses the column alias ("AS") instead of the column name to map the rows to BigQuery. Defaults to false.
- isTruncate : If enabled (set to true), the pipeline truncates the BigQuery table before loading data into it. Defaults to false, which causes the pipeline to only append data.
- partitionColumn : If this parameter is provided (along with table), JdbcIO reads the table in parallel by executing multiple instances of the query on the same table (subquery) using ranges. Currently, only Long partition columns are supported. Either query OR both table AND partitionColumn must be specified. For an example combination of the partitioning parameters, see the sketch after this list.
- table : Table to read from using partitions. Either query OR both table AND partitionColumn must be specified. This parameter also accepts a subquery in parentheses. (Example: (select id, name from Person) as subq).
- numPartitions : The number of partitions. This value, along with the lower and upper bounds, forms partition strides for generated WHERE clause expressions that are used to split the partition column evenly. When the input is less than 1, the number is set to 1.
- lowerBound : Lower bound used in the partition scheme. If not provided, it is automatically inferred by Beam (for the supported types).
- upperBound : Upper bound used in partition scheme. If not provided, it is automatically inferred by Beam (for the supported types).
- fetchSize : The number of rows to be fetched from the database at a time. Not used for partitioned reads. Defaults to: 50000.
- createDisposition : BigQuery CreateDisposition. For example, CREATE_IF_NEEDED, CREATE_NEVER. Defaults to: CREATE_NEVER.
- bigQuerySchemaPath : The Cloud Storage path for the BigQuery JSON schema. If createDisposition is set to CREATE_IF_NEEDED, this parameter must be specified. (Example: gs://your-bucket/your-schema.json).
- disabledAlgorithms : Comma-separated algorithms to disable. If this value is set to none, then no algorithm is disabled. Use with care, because the algorithms that are disabled by default are known to have either vulnerabilities or performance issues. (Example: SSLv3, RC4).
- extraFilesToStage : Comma-separated Cloud Storage paths or Secret Manager secrets for files to stage in the worker. These files will be saved under the /extra_files directory in each worker. (Example: gs://your-bucket/file.txt,projects/project-id/secrets/secret-id/versions/version-id).
- useStorageWriteApi : If enabled (set to true), the pipeline will use the Storage Write API when writing the data to BigQuery (see https://cloud.google.com/blog/products/data-analytics/streaming-data-into-bigquery-using-storage-write-api). Defaults to: false.
- useStorageWriteApiAtLeastOnce : This parameter takes effect only if "Use BigQuery Storage Write API" is enabled. If enabled, at-least-once semantics are used for the Storage Write API; otherwise, exactly-once semantics are used. Defaults to: false.
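As a sketch of how the partitioned-read parameters fit together (all values here are illustrative), reading a table in parallel instead of supplying a query might use a combination like the following, passed through --parameters as in the gcloud example later on this page.

# Sketch only: read a hypothetical Person table in parallel over its numeric id column.
table=Person
partitionColumn=id
numPartitions=10
lowerBound=1
upperBound=1000000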
Run the template
Console
- Go to the Dataflow Create job from template page.
- In the Job name field, enter a unique job name.
- Optional: For Regional endpoint, select a value from the drop-down menu. The default region is us-central1. For a list of regions where you can run a Dataflow job, see Dataflow locations.
- From the Dataflow template drop-down menu, select the SQL Server to BigQuery template.
- In the provided parameter fields, enter your parameter values.
- Click Run job.
gcloud
In your shell or terminal, run the template:
gcloud dataflow flex-template run JOB_NAME \
    --project=PROJECT_ID \
    --region=REGION_NAME \
    --template-file-gcs-location=gs://dataflow-templates-REGION_NAME/VERSION/flex/SQLServer_to_BigQuery \
    --parameters \
connectionURL=JDBC_CONNECTION_URL,\
query=SOURCE_SQL_QUERY,\
outputTable=PROJECT_ID:DATASET.TABLE_NAME,\
bigQueryLoadingTemporaryDirectory=PATH_TO_TEMP_DIR_ON_GCS,\
connectionProperties=CONNECTION_PROPERTIES,\
username=CONNECTION_USERNAME,\
password=CONNECTION_PASSWORD,\
KMSEncryptionKey=KMS_ENCRYPTION_KEY
Replace the following:
- JOB_NAME: a unique job name of your choice
- VERSION: the version of the template that you want to use. You can use the following values:
  - latest to use the latest version of the template, which is available in the non-dated parent folder in the bucket: gs://dataflow-templates-REGION_NAME/latest/
  - the version name, like 2023-09-12-00_RC00, to use a specific version of the template, which can be found nested in the respective dated parent folder in the bucket: gs://dataflow-templates-REGION_NAME/
- REGION_NAME: the region where you want to deploy your Dataflow job, for example, us-central1
- JDBC_CONNECTION_URL: the JDBC connection URL
- SOURCE_SQL_QUERY: the SQL query to run on the source database
- DATASET: your BigQuery dataset
- TABLE_NAME: your BigQuery table name
- PATH_TO_TEMP_DIR_ON_GCS: your Cloud Storage path to the temp directory
- CONNECTION_PROPERTIES: the JDBC connection properties, if needed
- CONNECTION_USERNAME: the JDBC connection username
- CONNECTION_PASSWORD: the JDBC connection password
- KMS_ENCRYPTION_KEY: the Cloud KMS encryption key
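As a usage sketch, a filled-in invocation might look like the following; the project, bucket, host, credential, and table values are illustrative only, and this variant passes a plaintext username and password rather than Cloud KMS-encrypted values.

gcloud dataflow flex-template run sqlserver-to-bigquery-job \
    --project=my-project \
    --region=us-central1 \
    --template-file-gcs-location=gs://dataflow-templates-us-central1/latest/flex/SQLServer_to_BigQuery \
    --parameters \
connectionURL="jdbc:sqlserver://10.0.0.5:1433;databaseName=sampledb",\
query="select * from dbo.sample_table",\
outputTable=my-project:my_dataset.sample_table,\
bigQueryLoadingTemporaryDirectory=gs://my-bucket/temp_dir,\
username=my_user,\
password=my_password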
API
To run the template using the REST API, send an HTTP POST request. For more information on the API and its authorization scopes, see projects.templates.launch.
POST https://dataflow.googleapis.com/v1b3/projects/PROJECT_ID/locations/LOCATION/flexTemplates:launch
{
  "launchParameter": {
    "jobName": "JOB_NAME",
    "containerSpecGcsPath": "gs://dataflow-templates-LOCATION/VERSION/flex/SQLServer_to_BigQuery",
    "parameters": {
      "connectionURL": "JDBC_CONNECTION_URL",
      "query": "SOURCE_SQL_QUERY",
      "outputTable": "PROJECT_ID:DATASET.TABLE_NAME",
      "bigQueryLoadingTemporaryDirectory": "PATH_TO_TEMP_DIR_ON_GCS",
      "connectionProperties": "CONNECTION_PROPERTIES",
      "username": "CONNECTION_USERNAME",
      "password": "CONNECTION_PASSWORD",
      "KMSEncryptionKey": "KMS_ENCRYPTION_KEY"
    },
    "environment": { "zone": "us-central1-f" }
  }
}
Replace the following:
- PROJECT_ID: the Google Cloud project ID where you want to run the Dataflow job
- JOB_NAME: a unique job name of your choice
- VERSION: the version of the template that you want to use. You can use the following values:
  - latest to use the latest version of the template, which is available in the non-dated parent folder in the bucket: gs://dataflow-templates-REGION_NAME/latest/
  - the version name, like 2023-09-12-00_RC00, to use a specific version of the template, which can be found nested in the respective dated parent folder in the bucket: gs://dataflow-templates-REGION_NAME/
- LOCATION: the region where you want to deploy your Dataflow job, for example, us-central1
- JDBC_CONNECTION_URL: the JDBC connection URL
- SOURCE_SQL_QUERY: the SQL query to run on the source database
- DATASET: your BigQuery dataset
- TABLE_NAME: your BigQuery table name
- PATH_TO_TEMP_DIR_ON_GCS: your Cloud Storage path to the temp directory
- CONNECTION_PROPERTIES: the JDBC connection properties, if needed
- CONNECTION_USERNAME: the JDBC connection username
- CONNECTION_PASSWORD: the JDBC connection password
- KMS_ENCRYPTION_KEY: the Cloud KMS encryption key
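As a usage sketch, one way to send this request is with curl and a token from the gcloud CLI, assuming the JSON body above has been saved to a local file named request.json (a hypothetical file name).

# Sketch only: launch the flex template through the REST API with curl.
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d @request.json \
  "https://dataflow.googleapis.com/v1b3/projects/PROJECT_ID/locations/LOCATION/flexTemplates:launch"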
What's next
- Learn about Dataflow templates.
- See the list of Google-provided templates.