- Resource: MigrationWorkflow
- MigrationTask
- AssessmentTaskDetails
- AssessmentFeatureHandle
- TranslationTaskDetails
- TeradataOptions
- BteqOptions
- DatasetReference
- TranslationFileMapping
- FileEncoding
- IdentifierSettings
- IdentifierCase
- IdentifierRewriteMode
- TokenType
- Filter
- TranslationConfigDetails
- ObjectNameMappingList
- ObjectNameMapping
- NameMappingKey
- Type
- NameMappingValue
- Dialect
- BigQueryDialect
- HiveQLDialect
- RedshiftDialect
- TeradataDialect
- Mode
- OracleDialect
- SparkSQLDialect
- SnowflakeDialect
- NetezzaDialect
- AzureSynapseDialect
- VerticaDialect
- SQLServerDialect
- PostgresqlDialect
- PrestoDialect
- MySQLDialect
- DB2Dialect
- SQLiteDialect
- GreenplumDialect
- SourceEnv
- SourceTargetLocationMapping
- SourceLocation
- TargetLocation
- TranslationDetails
- SourceTargetMapping
- SourceSpec
- Literal
- TargetSpec
- SourceEnvironment
- MetadataCaching
- State
- MigrationTaskOrchestrationResult
- AssessmentOrchestrationResultDetails
- TranslationTaskResult
- GcsReportLogMessage
- MigrationTaskResult
- State
- Methods
Resource: MigrationWorkflow
A migration workflow which specifies what needs to be done for an EDW migration.
JSON representation
{
  "name": string,
  "displayName": string,
  "tasks": {
    string: {
      object (MigrationTask)
    },
    ...
  },
  "state": enum (State),
  "createTime": string,
  "lastUpdateTime": string
}
Fields
name
Output only. Immutable. Identifier. The unique identifier for the migration workflow. The ID is server-generated. Example: projects/123/locations/us/workflows/345
displayName
The display name of the workflow. This can be set to give a workflow a descriptive name. There is no guarantee or enforcement of uniqueness.
tasks
The tasks in a workflow in a named map. The name (i.e. key) has no meaning and is merely a convenient way to address a specific task in a workflow.
state
Output only. The status of the workflow.
createTime
Time when the workflow was created.
lastUpdateTime
Time when the workflow was last updated.
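For orientation, a minimal sketch of what a populated MigrationWorkflow resource could look like; every identifier, timestamp, and task entry below is illustrative rather than copied from the API.
{
  "name": "projects/my-project/locations/us/workflows/1234",
  "displayName": "teradata-translation",
  "tasks": {
    "translate-batch-1": {
      "id": "5678",
      "type": "Translation_Teradata2BQ",
      "state": "RUNNING"
    }
  },
  "state": "RUNNING",
  "createTime": "2024-05-01T10:00:00Z",
  "lastUpdateTime": "2024-05-01T10:05:00Z"
}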
MigrationTask
A single task for a migration which has details about the configuration of the task.
JSON representation
{ "id": string, "type": string, "details": { "@type": string, field1: ..., ... }, "state": enum (State), ... }
Fields
id
Output only. Immutable. The unique identifier for the migration task. The ID is server-generated.
type
The type of the task. This must be one of the supported task types: Translation_Teradata2BQ, Translation_Redshift2BQ, Translation_Bteq2BQ, Translation_Oracle2BQ, Translation_HiveQL2BQ, Translation_SparkSQL2BQ, Translation_Snowflake2BQ, Translation_Netezza2BQ, Translation_AzureSynapse2BQ, Translation_Vertica2BQ, Translation_SQLServer2BQ, Translation_Presto2BQ, Translation_MySQL2BQ, Translation_Postgresql2BQ, Translation_SQLite2BQ, Translation_Greenplum2BQ.
details
DEPRECATED! Use one of the task_details below. The details of the task. The type URL must be one of the supported task details messages and correspond to the Task's type.
state
Output only. The current state of the task.
processingError
Output only. An explanation that may be populated when the task is in FAILED state.
createTime
Time when the task was created.
lastUpdateTime
Time when the task was last updated.
orchestrationResult
Output only. Deprecated: Use the taskResult field below instead. Additional information about the orchestration.
resourceErrorDetails[]
Output only. Provides details to errors and issues encountered while processing the task. Presence of error details does not mean that the task failed.
resourceErrorCount
The number of resources with errors. Note: This is not the total number of errors, as each resource can have more than one error. This is used to indicate truncation: a resourceErrorCount larger than the number of entries in resourceErrorDetails means that not all errors were returned.
metrics[]
The metrics for the task.
taskResult
Output only. The result of the task.
totalProcessingErrorCount
Count of all the processing errors in this task and its subtasks.
totalResourceErrorCount
Count of all the resource errors in this task and its subtasks.
Union field task_details. The details of the task. task_details can be only one of the following:
assessmentTaskDetails
Task configuration for Assessment.
translationConfigDetails
Task configuration for Batch SQL Translation.
translationTaskDetails
Task configuration for CW Batch/Offline SQL Translation.
translationDetails
Task details for unified SQL Translation.
AssessmentTaskDetails
Assessment task config.
JSON representation
{
  "inputPath": string,
  "outputDataset": string,
  "querylogsPath": string,
  "dataSource": string,
  "featureHandle": {
    object (AssessmentFeatureHandle)
  }
}
Fields
inputPath
Required. The Cloud Storage path for assessment input files.
outputDataset
Required. The BigQuery dataset for output.
querylogsPath
Optional. An optional Cloud Storage path to write the query logs (which is then used as an input path on the translation task).
dataSource
Required. The data source or data warehouse type (e.g. TERADATA/REDSHIFT) from which the input data is extracted.
featureHandle
Optional. A collection of additional feature flags for this assessment.
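A sketch of an assessment task configuration, assuming a Teradata source; the bucket paths are placeholders and the outputDataset value is only indicative of a BigQuery dataset reference.
{
  "inputPath": "gs://my-bucket/assessment/input/",
  "outputDataset": "my_project.assessment_output",
  "querylogsPath": "gs://my-bucket/assessment/querylogs/",
  "dataSource": "TERADATA",
  "featureHandle": {
    "addShareableDataset": true
  }
}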
AssessmentFeatureHandle
User-definable feature flags for assessment tasks.
JSON representation
{ "addShareableDataset": boolean }
Fields
addShareableDataset
Optional. Whether to create a dataset containing non-PII data in addition to the output dataset.
TranslationTaskDetails
The translation task config to capture necessary settings for a translation task and subtask.
JSON representation
{ "inputPath": string, "outputPath": string, "filePaths": [ { object (TranslationFileMapping) } ], ... }
Fields
inputPath
The Cloud Storage path for translation input files.
outputPath
The Cloud Storage path for translation output files.
filePaths[]
Cloud Storage files to be processed for translation.
schemaPath
The Cloud Storage path to DDL files as table schema to assist semantic translation.
fileEncoding
The file encoding type.
identifierSettings
The settings for SQL identifiers.
specialTokenMap
The map capturing special tokens to be replaced during translation. The key is the special token (as a string) and the value is the token's data type. This is used to translate SQL query templates that contain special tokens as placeholders; such a token makes the query invalid to parse, so this map annotates the token with a type that lets the parser turn it into a proper structure with type information.
filter
The filter applied to translation details.
translationExceptionTable
Specifies the exact name of the BigQuery table ("dataset.table") to be used for surfacing raw translation errors. If the table does not exist, it will be created. If it already exists and the schema is the same, it will be reused. If the table exists and the schema is different, an error will be thrown.
Union field language_options. The language specific settings for the translation task. language_options can be only one of the following:
teradataOptions
The Teradata SQL specific settings for the translation task.
bteqOptions
The BTEQ specific settings for the translation task.
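A hedged sketch of a BTEQ translation task's settings, combining several of the fields above; all bucket paths, dataset names, and token names are placeholders.
{
  "inputPath": "gs://my-bucket/bteq/input",
  "outputPath": "gs://my-bucket/bteq/output",
  "fileEncoding": "UTF_8",
  "identifierSettings": {
    "outputIdentifierCase": "LOWER",
    "identifierRewriteMode": "REWRITE_ALL"
  },
  "specialTokenMap": {
    "${run_date}": "DATE"
  },
  "bteqOptions": {
    "projectDataset": {
      "projectId": "my-project",
      "datasetId": "bteq_temp"
    },
    "defaultPathUri": "gs://my-bucket/bteq/files",
    "fileReplacementMap": {
      "/local/data/input.csv": "gs://my-bucket/bteq/files/input.csv"
    }
  }
}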
TeradataOptions
This type has no fields.
Teradata SQL specific translation task related settings.
BteqOptions
BTEQ translation task related settings.
JSON representation
{
  "projectDataset": {
    object (DatasetReference)
  },
  "defaultPathUri": string,
  "fileReplacementMap": {
    string: string,
    ...
  }
}
Fields
projectDataset
Specifies the project and dataset in BigQuery that will be used for external table creation during the translation.
defaultPathUri
The Cloud Storage location to be used as the default path for files that are not otherwise specified in the file replacement map.
fileReplacementMap
Maps the local paths that are used in BTEQ scripts (the keys) to the paths in Cloud Storage that should be used in their stead in the translation (the value).
DatasetReference
Reference to a BigQuery dataset.
JSON representation
{ "datasetId": string, "projectId": string, "datasetIdAlternative": [ string ], "projectIdAlternative": [ string ] }
Fields
datasetId
A unique ID for this dataset, without the project name. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 1,024 characters.
projectId
The ID of the project containing this dataset.
datasetIdAlternative[]
The alternative field that will be used when the service is not able to translate the received data to the datasetId field.
projectIdAlternative[]
The alternative field that will be used when the service is not able to translate the received data to the projectId field.
TranslationFileMapping
Mapping between an input and output file to be translated in a subtask.
JSON representation
{ "inputPath": string, "outputPath": string }
Fields
inputPath
The Cloud Storage path for a file to be translated in a subtask.
outputPath
The Cloud Storage path to write back the corresponding input file to.
FileEncoding
The file encoding types.
Enums
FILE_ENCODING_UNSPECIFIED
File encoding setting is not specified.
UTF_8
File encoding is UTF_8.
ISO_8859_1
File encoding is ISO_8859_1.
US_ASCII
File encoding is US_ASCII.
UTF_16
File encoding is UTF_16.
UTF_16LE
File encoding is UTF_16LE.
UTF_16BE
File encoding is UTF_16BE.
IdentifierSettings
Settings related to SQL identifiers.
JSON representation
{ "outputIdentifierCase": enum (IdentifierCase), "identifierRewriteMode": enum (IdentifierRewriteMode) }
Fields
outputIdentifierCase
The setting to control output queries' identifier case.
identifierRewriteMode
Specifies the rewrite mode for SQL identifiers.
IdentifierCase
The identifier case type.
Enums
IDENTIFIER_CASE_UNSPECIFIED
The identifier case is not specified.
ORIGINAL
Identifiers' cases will be kept as in the original.
UPPER
Identifiers will be in upper case.
LOWER
Identifiers will be in lower case.
IdentifierRewriteMode
The SQL identifier rewrite mode.
Enums
IDENTIFIER_REWRITE_MODE_UNSPECIFIED
SQL identifier rewrite mode is unspecified.
NONE
SQL identifiers won't be rewritten.
REWRITE_ALL
All SQL identifiers will be rewritten.
TokenType
The special token data type.
Enums
TOKEN_TYPE_UNSPECIFIED
Token type is not specified.
STRING
Token type as string.
INT64
Token type as integer.
NUMERIC
Token type as numeric.
BOOL
Token type as boolean.
FLOAT64
Token type as float.
DATE
Token type as date.
TIMESTAMP
Token type as timestamp.
Filter
The filter applied to fields of translation details.
JSON representation
{ "inputFileExclusionPrefixes": [ string ] }
Fields
inputFileExclusionPrefixes[]
The list of prefixes used to exclude processing for input files.
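For example, a filter that skips every input file under two prefixes could look like this (the prefixes themselves are illustrative):
{ "inputFileExclusionPrefixes": [ "scratch/", "archive/2021/" ] }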
TranslationConfigDetails
The translation config to capture necessary settings for a translation task and subtask.
JSON representation
{ "sourceDialect": { object (Dialect) }, "targetDialect": { object (Dialect) }, ... }
Fields
sourceDialect
The dialect of the input files.
targetDialect
The target dialect for the engine to translate the input to.
sourceEnv
The default source environment values for the translation.
sourceTargetLocationMapping[]
The mapping from source location paths to target location paths.
requestSource
The indicator to show translation request initiator.
targetTypes[]
The types of output to generate, e.g. sql, metadata, etc. If not specified, a default set of targets will be generated. Some additional target types may be slower to generate. See the documentation for the set of available target types.
Union field source_location. The chosen path where the source for input files will be found. source_location can be only one of the following:
gcsSourcePath
The Cloud Storage path for a directory of files to translate in a task.
Union field target_location. The chosen path where the destination for output files will be found. target_location can be only one of the following:
gcsTargetPath
The Cloud Storage path to write back the corresponding input files to.
Union field output_name_mapping. The mapping of full SQL object names from their current state to the desired output. output_name_mapping can be only one of the following:
nameMappingList
The mapping of objects to their desired output names in list form.
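Putting these fields together, a hedged sketch of a translation config that translates Teradata SQL to BigQuery SQL between two Cloud Storage paths; the bucket paths are placeholders.
{
  "sourceDialect": {
    "teradataDialect": { "mode": "SQL" }
  },
  "targetDialect": {
    "bigqueryDialect": {}
  },
  "gcsSourcePath": "gs://my-bucket/translation/input",
  "gcsTargetPath": "gs://my-bucket/translation/output"
}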
ObjectNameMappingList
Represents a map of name mappings using a list of key:value proto messages mapping an existing name to a desired output name.
JSON representation
{
  "nameMap": [
    {
      object (ObjectNameMapping)
    }
  ]
}
Fields
nameMap[]
The elements of the object name map.
ObjectNameMapping
Represents a key-value pair of NameMappingKey to NameMappingValue to represent the mapping of SQL names from the input value to desired output.
JSON representation
{ "source": { object (NameMappingKey) }, "target": { object (NameMappingValue) } }
Fields
source
The name of the object in source that is being mapped.
target
The desired target name of the object that is being mapped.
NameMappingKey
The potential components of a full name mapping that will be mapped during translation in the source data warehouse.
JSON representation
{
  "type": enum (Type),
  "database": string,
  "schema": string,
  "relation": string,
  "attribute": string
}
Fields
type
The type of object that is being mapped.
database
The database name (BigQuery project ID equivalent in the source data warehouse).
schema
The schema name (BigQuery dataset equivalent in the source data warehouse).
relation
The relation name (BigQuery table or view equivalent in the source data warehouse).
attribute
The attribute name (BigQuery column equivalent in the source data warehouse).
Type
The type of the object that is being mapped.
Enums
TYPE_UNSPECIFIED
Unspecified name mapping type.
DATABASE
The object being mapped is a database.
SCHEMA
The object being mapped is a schema.
RELATION
The object being mapped is a relation.
ATTRIBUTE
The object being mapped is an attribute.
RELATION_ALIAS
The object being mapped is a relation alias.
ATTRIBUTE_ALIAS
The object being mapped is an attribute alias.
FUNCTION
The object being mapped is a function.
NameMappingValue
The potential components of a full name mapping that will be mapped during translation in the target data warehouse.
JSON representation
{ "database": string, "schema": string, "relation": string, "attribute": string }
Fields
database
The database name (BigQuery project ID equivalent in the target data warehouse).
schema
The schema name (BigQuery dataset equivalent in the target data warehouse).
relation
The relation name (BigQuery table or view equivalent in the target data warehouse).
attribute
The attribute name (BigQuery column equivalent in the target data warehouse).
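For example, renaming a source relation to a specific BigQuery table could be expressed as a single entry in the nameMappingList of the translation config; every name below is a placeholder.
{
  "nameMappingList": {
    "nameMap": [
      {
        "source": {
          "type": "RELATION",
          "database": "warehouse_db",
          "schema": "sales",
          "relation": "orders"
        },
        "target": {
          "database": "my-project",
          "schema": "sales_dataset",
          "relation": "orders"
        }
      }
    ]
  }
}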
Dialect
The possible dialect options for translation.
JSON representation
{
  // Union field dialect_value can be only one of the following:
  "bigqueryDialect": {
    object (BigQueryDialect)
  },
  "hiveqlDialect": {
    object (HiveQLDialect)
  },
  ...
  // End of list of possible types for union field dialect_value.
}
Fields
Union field dialect_value. The possible dialect options that this message represents. dialect_value can be only one of the following:
bigqueryDialect
The BigQuery dialect
hiveqlDialect
The HiveQL dialect
redshiftDialect
The Redshift dialect
teradataDialect
The Teradata dialect
oracleDialect
The Oracle dialect
sparksqlDialect
The SparkSQL dialect
snowflakeDialect
The Snowflake dialect
netezzaDialect
The Netezza dialect
azureSynapseDialect
The Azure Synapse dialect
verticaDialect
The Vertica dialect
sqlServerDialect
The SQL Server dialect
postgresqlDialect
The Postgresql dialect
prestoDialect
The Presto dialect
mysqlDialect
The MySQL dialect
db2Dialect
DB2 dialect
sqliteDialect
SQLite dialect
greenplumDialect
Greenplum dialect
BigQueryDialect
This type has no fields.
The dialect definition for BigQuery.
HiveQLDialect
This type has no fields.
The dialect definition for HiveQL.
RedshiftDialect
This type has no fields.
The dialect definition for Redshift.
TeradataDialect
The dialect definition for Teradata.
JSON representation
{
  "mode": enum (Mode)
}
Fields
mode
Which Teradata sub-dialect mode the user specifies.
Mode
The sub-dialect options for Teradata.
Enums
MODE_UNSPECIFIED
Unspecified mode.
SQL
Teradata SQL mode.
BTEQ
BTEQ mode (which includes SQL).
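For instance, choosing the BTEQ sub-dialect for the source side of a translation config would look roughly like this minimal sketch:
{
  "sourceDialect": {
    "teradataDialect": { "mode": "BTEQ" }
  }
}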
OracleDialect
This type has no fields.
The dialect definition for Oracle.
SparkSQLDialect
This type has no fields.
The dialect definition for SparkSQL.
SnowflakeDialect
This type has no fields.
The dialect definition for Snowflake.
NetezzaDialect
This type has no fields.
The dialect definition for Netezza.
AzureSynapseDialect
This type has no fields.
The dialect definition for Azure Synapse.
VerticaDialect
This type has no fields.
The dialect definition for Vertica.
SQLServerDialect
This type has no fields.
The dialect definition for SQL Server.
PostgresqlDialect
This type has no fields.
The dialect definition for Postgresql.
PrestoDialect
This type has no fields.
The dialect definition for Presto.
MySQLDialect
This type has no fields.
The dialect definition for MySQL.
DB2Dialect
This type has no fields.
The dialect definition for DB2.
SQLiteDialect
This type has no fields.
The dialect definition for SQLite.
GreenplumDialect
This type has no fields.
The dialect definition for Greenplum.
SourceEnv
Represents the default source environment values for the translation.
JSON representation
{ "defaultDatabase": string, "schemaSearchPath": [ string ], "metadataStoreDataset": string }
Fields
defaultDatabase
The default database name to fully qualify SQL objects when their database name is missing.
schemaSearchPath[]
The schema search path. When SQL objects are missing a schema name, the translation engine will search through this list to find the value.
metadataStoreDataset
Optional. Expects a valid BigQuery dataset ID that exists, e.g., project-123.metadata_store_123. If specified, translation will search and read the required schema information from a metadata store in this dataset. If the metadata store doesn't exist, translation will parse the metadata file and upload the schema info to a temp table in the dataset to speed up future translation jobs.
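A sketch of a source environment that supplies a default database, a schema search path, and a metadata store dataset; the database, schema, and dataset names are placeholders.
{
  "defaultDatabase": "prod_db",
  "schemaSearchPath": [ "sales", "marketing" ],
  "metadataStoreDataset": "project-123.metadata_store_123"
}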
SourceTargetLocationMapping
Represents one mapping from a source location path to an optional target location path.
JSON representation
{ "sourceLocation": { object (SourceLocation) }, "targetLocation": { object (TargetLocation) } }
Fields
sourceLocation
The path to the location of the source data.
targetLocation
The path to the location of the target data.
SourceLocation
Represents one path to the location that holds source data.
JSON representation
{
  // Union field location can be only one of the following:
  "gcsPath": string
  // End of list of possible types for union field location.
}
Fields
Union field location. The location of the source data. location can be only one of the following:
gcsPath
The Cloud Storage path for a directory of files.
TargetLocation
Represents one path to the location that holds target data.
JSON representation
{
  // Union field location can be only one of the following:
  "gcsPath": string
  // End of list of possible types for union field location.
}
Fields
Union field location. The location of the target data. location can be only one of the following:
gcsPath
The Cloud Storage path for a directory of files.
TranslationDetails
The translation details to capture the necessary settings for a translation job.
JSON representation
{ "sourceTargetMapping": [ { object (SourceTargetMapping) } ], "targetBaseUri": string, ... }
Fields
sourceTargetMapping[]
The mapping from source to target SQL.
targetBaseUri
The base URI for all writes to persistent storage.
sourceEnvironment
The default source environment values for the translation.
targetReturnLiterals[]
The list of literal targets that will be directly returned to the response. Each entry consists of the constructed path, EXCLUDING the base path. Not providing a targetBaseUri will prevent writing to persistent storage.
targetTypes[]
The types of output to generate, e.g. sql, metadata, lineage_from_sql_scripts, etc. If not specified, a default set of targets will be generated. Some additional target types may be slower to generate. See the documentation for the set of available target types.
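A hedged sketch of translation details that reads source files from a base URI and writes the results below a target base URI; both URIs and the relative path are placeholders.
{
  "sourceTargetMapping": [
    {
      "sourceSpec": {
        "baseUri": "gs://my-bucket/translation/input"
      },
      "targetSpec": {
        "relativePath": "translated"
      }
    }
  ],
  "targetBaseUri": "gs://my-bucket/translation/output",
  "sourceEnvironment": {
    "defaultDatabase": "prod_db"
  }
}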
SourceTargetMapping
Represents one mapping from a source SQL to a target SQL.
JSON representation
{ "sourceSpec": { object (SourceSpec) }, "targetSpec": { object (TargetSpec) } }
Fields
sourceSpec
The source SQL or the path to it.
targetSpec
The target SQL or the path for it.
SourceSpec
Represents one path to the location that holds source data.
JSON representation
{
  "encoding": string,
  // Union field source can be only one of the following:
  "baseUri": string,
  "literal": {
    object (Literal)
  }
  // End of list of possible types for union field source.
}
Fields
encoding
Optional. The optional field to specify the encoding of the SQL bytes.
Union field source. The specific source SQL. source can be only one of the following:
baseUri
The base URI for all files to be read in as sources for translation.
literal
Source literal.
Literal
Literal data.
JSON representation
{
  "relativePath": string,
  // Union field literal_data can be only one of the following:
  "literalString": string,
  "literalBytes": string
  // End of list of possible types for union field literal_data.
}
Fields
relativePath
Required. The identifier of the literal entry.
Union field literal_data. The literal SQL contents. literal_data can be only one of the following:
literalString
Literal string data.
literalBytes
Literal byte data.
TargetSpec
Represents one path to the location that holds target data.
JSON representation
{ "relativePath": string }
Fields
relativePath
The relative path for the target data. Given source file …
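As an illustration, a source-to-target mapping whose source is supplied inline as a literal rather than read from a base URI; the relative paths and the SQL text are placeholders.
{
  "sourceSpec": {
    "literal": {
      "relativePath": "queries/report.sql",
      "literalString": "SELECT * FROM orders;"
    }
  },
  "targetSpec": {
    "relativePath": "translated"
  }
}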
SourceEnvironment
Represents the default source environment values for the translation.
JSON representation
{
  "defaultDatabase": string,
  "schemaSearchPath": [
    string
  ],
  "metadataStoreDataset": string,
  "metadataCaching": {
    object (MetadataCaching)
  }
}
Fields
defaultDatabase
The default database name to fully qualify SQL objects when their database name is missing.
schemaSearchPath[]
The schema search path. When SQL objects are missing a schema name, the translation engine will search through this list to find the value.
metadataStoreDataset
Optional. Expects a valid BigQuery dataset ID that exists, e.g., project-123.metadata_store_123. If specified, translation will search and read the required schema information from a metadata store in this dataset. If the metadata store doesn't exist, translation will parse the metadata file and upload the schema info to a temp table in the dataset to speed up future translation jobs.
metadataCaching
Optional. Metadata caching settings. If specified, translation will cache the metadata. Otherwise, metadata will be parsed from the metadata file. The cache is stored on the service side. Hence, enabling this feature will store data from the provided metadata file on the service side for up to 7 days.
MetadataCaching
Metadata caching settings.
JSON representation
{ "maxCacheAge": string }
Fields
maxCacheAge
Optional. The maximum age of the metadata cache. If the cache is older than this value, the cache will be refreshed. A cache will not be kept for longer than 7 days. Providing no value or a value larger than 7 days will result in using the cache if available (i.e. the same as setting the value to 7 days). Setting the duration to 0 or a negative value will refresh the cache.
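Since maxCacheAge is a duration, its JSON value is written as a string of seconds with an "s" suffix (assuming the standard protobuf Duration encoding). A sketch of a source environment that caches metadata for one day:
{
  "metadataCaching": {
    "maxCacheAge": "86400s"
  }
}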
State
Possible states of a migration task.
Enums
STATE_UNSPECIFIED
The state is unspecified.
PENDING
The task is waiting for orchestration.
ORCHESTRATING
The task is assigned to an orchestrator.
RUNNING
The task is running, i.e. its subtasks are ready for execution.
PAUSED
The task is paused. Assigned subtasks can continue, but no new subtasks will be scheduled.
SUCCEEDED
The task finished successfully.
FAILED
The task finished unsuccessfully.
MigrationTaskOrchestrationResult
Additional information from the orchestrator when it is done with the task orchestration.
JSON representation
{
  // Union field details can be only one of the following:
  ...
}
Fields
Union field details. Details specific to the task type. details can be only one of the following:
assessment
Details specific to assessment task types.
translation
Details specific to translation task types.
AssessmentOrchestrationResultDetails
Details for an assessment task orchestration result.
JSON representation
{ "outputTablesSchemaVersion": string, "reportUri": string }
Fields
outputTablesSchemaVersion
Optional. The version used for the output table schemas.
reportUri
Optional. The URI of the Data Studio report.
TranslationTaskResult
Translation specific result details from the migration task.
JSON representation
{ "translatedLiterals": [ { object (Literal) } ], ... }
Fields
translatedLiterals[]
The list of the translated literals.
report
The records from the aggregate CSV report for a migration workflow.
console
The Cloud Console URI for the migration workflow.
GcsReportLogMessage
A record in the aggregate CSV report for a migration workflow.
JSON representation
{ "severity": string, "category": string, "filePath": string, "filename": string, "sourceScriptLine": integer, "sourceScriptColumn": integer, "message": string, "scriptContext": string, "action": string, "effect": string, "objectName": string }
Fields
severity
Severity of the translation record.
category
Category of the error/warning. Example: SyntaxError
filePath
The file path in which the error occurred.
filename
The file name in which the error occurred.
sourceScriptLine
Specifies the row from the source text where the error occurred (0 based, -1 for messages without line location). Example: 2
sourceScriptColumn
Specifies the column from the source text where the error occurred (0 based, -1 for messages without column location). Example: 6
message
Detailed message of the record.
scriptContext
The script context (obfuscated) in which the error occurred.
action
Category of the error/warning. Example: SyntaxError
effect
Effect of the error/warning. Example: COMPATIBILITY
objectName
Name of the affected object in the log message.
MigrationTaskResult
The migration task result.
JSON representation
{
  // Union field details can be only one of the following:
  ...
}
Fields
Union field details. Details specific to the task type. details can be only one of the following:
assessment
Details specific to assessment task types.
translation
Details specific to translation task types.
State
Possible migration workflow states.
Enums
STATE_UNSPECIFIED
Workflow state is unspecified.
DRAFT
Workflow is in draft status, i.e. tasks are not yet eligible for execution.
RUNNING
Workflow is running (i.e. tasks are eligible for execution).
PAUSED
Workflow is paused. Tasks currently in progress may continue, but no further tasks will be scheduled.
COMPLETED
Workflow is complete. There should not be any tasks in a non-terminal state, but if there are (e.g. due to forced termination), they will not be scheduled.
Methods
create
Creates a migration workflow.
delete
Deletes a migration workflow by name.
get
Gets a previously created migration workflow.
list
Lists previously created migration workflows.
start
Starts a previously created migration workflow.