- Resource: MigrationWorkflow
- MigrationTask
- TranslationConfigDetails
- ObjectNameMappingList
- ObjectNameMapping
- NameMappingKey
- Type
- NameMappingValue
- Dialect
- BigQueryDialect
- HiveQLDialect
- RedshiftDialect
- TeradataDialect
- Mode
- OracleDialect
- SparkSQLDialect
- SnowflakeDialect
- NetezzaDialect
- AzureSynapseDialect
- VerticaDialect
- SQLServerDialect
- PostgresqlDialect
- PrestoDialect
- MySQLDialect
- DB2Dialect
- SQLiteDialect
- GreenplumDialect
- SourceEnv
- TranslationDetails
- SourceTargetMapping
- SourceSpec
- Literal
- TargetSpec
- SourceEnvironment
- State
- MigrationTaskResult
- TranslationTaskResult
- GcsReportLogMessage
- State
- Methods
Resource: MigrationWorkflow
A migration workflow which specifies what needs to be done for an EDW migration.

| JSON representation |
|---|
| { "name": string, "displayName": string, "tasks": { string: { object (MigrationTask) }, ... }, "state": enum (State), "createTime": string, "lastUpdateTime": string } |

| Fields | |
|---|---|
| `name` | Output only. Immutable. Identifier. The unique identifier for the migration workflow. The ID is server-generated. Example: |
| `displayName` | The display name of the workflow. This can be set to give a workflow a descriptive name. There is no guarantee or enforcement of uniqueness. |
| `tasks` | The tasks in a workflow in a named map. The name (i.e. key) has no meaning and is merely a convenient way to address a specific task in a workflow. |
| `state` | Output only. The status of the workflow. |
| `createTime` | Time when the workflow was created. |
| `lastUpdateTime` | Time when the workflow was last updated. |

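For orientation, here is a minimal sketch of a populated MigrationWorkflow resource. The project, location, workflow ID, task key, and timestamps are illustrative placeholders, not values taken from this reference.

```json
{
  "name": "projects/my-project/locations/us/workflows/workflow-123",
  "displayName": "teradata-sql-translation",
  "tasks": {
    "translation-task": {
      "id": "task-456",
      "type": "Translation_Teradata2BQ",
      "state": "PENDING"
    }
  },
  "state": "DRAFT",
  "createTime": "2024-01-01T00:00:00Z",
  "lastUpdateTime": "2024-01-01T00:05:00Z"
}
```
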
MigrationTask
A single task for a migration which has details about the configuration of the task.

| JSON representation |
|---|
| { "id": string, "type": string, "state": enum (State), "processingError": { object (ErrorInfo) }, "createTime": string, "lastUpdateTime": string, "resourceErrorDetails": [ { object (ResourceErrorDetail) } ], "resourceErrorCount": integer, "metrics": [ { object (TimeSeries) } ], "taskResult": { object (MigrationTaskResult) }, "totalProcessingErrorCount": integer, "totalResourceErrorCount": integer, "translationConfigDetails": { object (TranslationConfigDetails) }, "translationDetails": { object (TranslationDetails) } } |

| Fields | |
|---|---|
| `id` | Output only. Immutable. The unique identifier for the migration task. The ID is server-generated. |
| `type` | The type of the task. This must be one of the supported task types: Translation_Teradata2BQ, Translation_Redshift2BQ, Translation_Bteq2BQ, Translation_Oracle2BQ, Translation_HiveQL2BQ, Translation_SparkSQL2BQ, Translation_Snowflake2BQ, Translation_Netezza2BQ, Translation_AzureSynapse2BQ, Translation_Vertica2BQ, Translation_SQLServer2BQ, Translation_Presto2BQ, Translation_MySQL2BQ, Translation_Postgresql2BQ, Translation_SQLite2BQ, Translation_Greenplum2BQ. |
| `state` | Output only. The current state of the task. |
| `processingError` | Output only. An explanation that may be populated when the task is in the FAILED state. |
| `createTime` | Time when the task was created. |
| `lastUpdateTime` | Time when the task was last updated. |
| `resourceErrorDetails[]` | Output only. Provides details of errors and issues encountered while processing the task. The presence of error details does not mean that the task failed. |
| `resourceErrorCount` | The number of resources with errors. Note: this is not the total number of errors, as each resource can have more than one error. It is used to indicate truncation: a `resourceErrorCount` that is higher than the number of entries in `resourceErrorDetails` means that not all errors are listed. |
| `metrics[]` | The metrics for the task. |
| `taskResult` | Output only. The result of the task. |
| `totalProcessingErrorCount` | Count of all the processing errors in this task and its subtasks. |
| `totalResourceErrorCount` | Count of all the resource errors in this task and its subtasks. |
| Union field `task_details`. The details of the task. `task_details` can be only one of the following: | |
| `translationConfigDetails` | Task configuration for CW Batch/Offline SQL Translation. |
| `translationDetails` | Task details for unified SQL Translation. |

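As a sketch, a translation task returned by the service after a run might look like the following, with its output-only fields populated; the IDs, timestamps, Cloud Storage paths, and error counts are illustrative placeholders.

```json
{
  "id": "task-456",
  "type": "Translation_Teradata2BQ",
  "state": "SUCCEEDED",
  "createTime": "2024-01-01T00:00:00Z",
  "lastUpdateTime": "2024-01-01T01:00:00Z",
  "resourceErrorCount": 1,
  "totalProcessingErrorCount": 0,
  "totalResourceErrorCount": 2,
  "translationConfigDetails": {
    "sourceDialect": { "teradataDialect": { "mode": "SQL" } },
    "targetDialect": { "bigqueryDialect": {} },
    "gcsSourcePath": "gs://my-bucket/input",
    "gcsTargetPath": "gs://my-bucket/output"
  }
}
```
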
TranslationConfigDetails
The translation config to capture necessary settings for a translation task and subtask.

| JSON representation |
|---|
| { "sourceDialect": { object (Dialect) }, "targetDialect": { object (Dialect) }, "sourceEnv": { object (SourceEnv) }, "requestSource": string, "targetTypes": [ string ], "gcsSourcePath": string, "gcsTargetPath": string, "nameMappingList": { object (ObjectNameMappingList) } } |

| Fields | |
|---|---|
| `sourceDialect` | The dialect of the input files. |
| `targetDialect` | The target dialect for the engine to translate the input to. |
| `sourceEnv` | The default source environment values for the translation. |
| `requestSource` | Indicates the initiator of the translation request. |
| `targetTypes[]` | The types of output to generate, e.g. sql, metadata, etc. If not specified, a default set of targets will be generated. Some additional target types may be slower to generate. See the documentation for the set of available target types. |
| Union field `source_location`. The chosen path where the source input files will be found. `source_location` can be only one of the following: | |
| `gcsSourcePath` | The Cloud Storage path for a directory of files to translate in a task. |
| Union field `target_location`. The chosen path where the destination output files will be written. `target_location` can be only one of the following: | |
| `gcsTargetPath` | The Cloud Storage path to write back the corresponding input files to. |
| Union field `output_name_mapping`. The mapping of full SQL object names from their current state to the desired output. `output_name_mapping` can be only one of the following: | |
| `nameMappingList` | The mapping of objects to their desired output names in list form. |

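A sketch of a TranslationConfigDetails value for a Teradata-to-BigQuery batch translation; the Cloud Storage paths and database/schema names are illustrative placeholders.

```json
{
  "sourceDialect": { "teradataDialect": { "mode": "SQL" } },
  "targetDialect": { "bigqueryDialect": {} },
  "sourceEnv": {
    "defaultDatabase": "sales_db",
    "schemaSearchPath": ["marts", "staging"]
  },
  "gcsSourcePath": "gs://my-bucket/teradata/input",
  "gcsTargetPath": "gs://my-bucket/teradata/output"
}
```
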
ObjectNameMappingList
Represents a map of name mappings using a list of key:value proto messages that map an existing name to its desired output name.

| JSON representation |
|---|
| { "nameMap": [ { object (ObjectNameMapping) } ] } |

| Fields | |
|---|---|
| `nameMap[]` | The elements of the object name map. |

ObjectNameMapping
Represents a key-value pair of NameMappingKey to NameMappingValue that maps a SQL object name from its input value to the desired output.

| JSON representation |
|---|
| { "source": { object (NameMappingKey) }, "target": { object (NameMappingValue) } } |

| Fields | |
|---|---|
| `source` | The name of the object in the source that is being mapped. |
| `target` | The desired target name of the object that is being mapped. |

NameMappingKey
The potential components of a full name mapping that will be mapped during translation in the source data warehouse.

| JSON representation |
|---|
| { "type": enum (Type), "database": string, "schema": string, "relation": string, "attribute": string } |

| Fields | |
|---|---|
| `type` | The type of object that is being mapped. |
| `database` | The database name (BigQuery project ID equivalent in the source data warehouse). |
| `schema` | The schema name (BigQuery dataset equivalent in the source data warehouse). |
| `relation` | The relation name (BigQuery table or view equivalent in the source data warehouse). |
| `attribute` | The attribute name (BigQuery column equivalent in the source data warehouse). |

Type
The type of the object that is being mapped.

| Enums | |
|---|---|
| `TYPE_UNSPECIFIED` | Unspecified name mapping type. |
| `DATABASE` | The object being mapped is a database. |
| `SCHEMA` | The object being mapped is a schema. |
| `RELATION` | The object being mapped is a relation. |
| `ATTRIBUTE` | The object being mapped is an attribute. |
| `RELATION_ALIAS` | The object being mapped is a relation alias. |
| `ATTRIBUTE_ALIAS` | The object being mapped is an attribute alias. |
| `FUNCTION` | The object being mapped is a function. |

NameMappingValue
The potential components of a full name mapping that will be mapped during translation in the target data warehouse.

| JSON representation |
|---|
| { "database": string, "schema": string, "relation": string, "attribute": string } |

| Fields | |
|---|---|
| `database` | The database name (BigQuery project ID equivalent in the target data warehouse). |
| `schema` | The schema name (BigQuery dataset equivalent in the target data warehouse). |
| `relation` | The relation name (BigQuery table or view equivalent in the target data warehouse). |
| `attribute` | The attribute name (BigQuery column equivalent in the target data warehouse). |

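Putting the four messages above together, a single name-mapping entry might look like the sketch below; the database, schema, and relation names are placeholders.

```json
{
  "nameMap": [
    {
      "source": {
        "type": "RELATION",
        "database": "sales_db",
        "schema": "marts",
        "relation": "orders"
      },
      "target": {
        "database": "my-project",
        "schema": "sales_dataset",
        "relation": "orders_v2"
      }
    }
  ]
}
```
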
Dialect
The possible dialect options for translation.

| JSON representation |
|---|
| { // Union field dialect_value can be only one of the following: "bigqueryDialect": { object (BigQueryDialect) }, "hiveqlDialect": { object (HiveQLDialect) }, "redshiftDialect": { object (RedshiftDialect) }, "teradataDialect": { object (TeradataDialect) }, "oracleDialect": { object (OracleDialect) }, "sparksqlDialect": { object (SparkSQLDialect) }, "snowflakeDialect": { object (SnowflakeDialect) }, "netezzaDialect": { object (NetezzaDialect) }, "azureSynapseDialect": { object (AzureSynapseDialect) }, "verticaDialect": { object (VerticaDialect) }, "sqlServerDialect": { object (SQLServerDialect) }, "postgresqlDialect": { object (PostgresqlDialect) }, "prestoDialect": { object (PrestoDialect) }, "mysqlDialect": { object (MySQLDialect) }, "db2Dialect": { object (DB2Dialect) }, "sqliteDialect": { object (SQLiteDialect) }, "greenplumDialect": { object (GreenplumDialect) } } |

| Fields | |
|---|---|
| Union field `dialect_value`. The possible dialect options that this message represents. `dialect_value` can be only one of the following: | |
| `bigqueryDialect` | The BigQuery dialect. |
| `hiveqlDialect` | The HiveQL dialect. |
| `redshiftDialect` | The Redshift dialect. |
| `teradataDialect` | The Teradata dialect. |
| `oracleDialect` | The Oracle dialect. |
| `sparksqlDialect` | The SparkSQL dialect. |
| `snowflakeDialect` | The Snowflake dialect. |
| `netezzaDialect` | The Netezza dialect. |
| `azureSynapseDialect` | The Azure Synapse dialect. |
| `verticaDialect` | The Vertica dialect. |
| `sqlServerDialect` | The SQL Server dialect. |
| `postgresqlDialect` | The Postgresql dialect. |
| `prestoDialect` | The Presto dialect. |
| `mysqlDialect` | The MySQL dialect. |
| `db2Dialect` | The DB2 dialect. |
| `sqliteDialect` | The SQLite dialect. |
| `greenplumDialect` | The Greenplum dialect. |

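A dialect is selected by setting exactly one branch of the `dialect_value` union. As a sketch, shown as the two dialect fields might appear inside a TranslationConfigDetails: only TeradataDialect carries an option (its mode); the other dialect messages are empty.

```json
{
  "sourceDialect": { "teradataDialect": { "mode": "BTEQ" } },
  "targetDialect": { "bigqueryDialect": {} }
}
```
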
BigQueryDialect
This type has no fields.
The dialect definition for BigQuery.
HiveQLDialect
This type has no fields.
The dialect definition for HiveQL.
RedshiftDialect
This type has no fields.
The dialect definition for Redshift.
TeradataDialect
The dialect definition for Teradata.

| JSON representation |
|---|
| { "mode": enum (Mode) } |

| Fields | |
|---|---|
| `mode` | Which Teradata sub-dialect mode the user specifies. |

Mode
The sub-dialect options for Teradata.

| Enums | |
|---|---|
| `MODE_UNSPECIFIED` | Unspecified mode. |
| `SQL` | Teradata SQL mode. |
| `BTEQ` | BTEQ mode (which includes SQL). |

OracleDialect
This type has no fields.
The dialect definition for Oracle.
SparkSQLDialect
This type has no fields.
The dialect definition for SparkSQL.
SnowflakeDialect
This type has no fields.
The dialect definition for Snowflake.
NetezzaDialect
This type has no fields.
The dialect definition for Netezza.
AzureSynapseDialect
This type has no fields.
The dialect definition for Azure Synapse.
VerticaDialect
This type has no fields.
The dialect definition for Vertica.
SQLServerDialect
This type has no fields.
The dialect definition for SQL Server.
PostgresqlDialect
This type has no fields.
The dialect definition for Postgresql.
PrestoDialect
This type has no fields.
The dialect definition for Presto.
MySQLDialect
This type has no fields.
The dialect definition for MySQL.
DB2Dialect
This type has no fields.
The dialect definition for DB2.
SQLiteDialect
This type has no fields.
The dialect definition for SQLite.
GreenplumDialect
This type has no fields.
The dialect definition for Greenplum.
SourceEnv
Represents the default source environment values for the translation.

| JSON representation |
|---|
| { "defaultDatabase": string, "schemaSearchPath": [ string ], "metadataStoreDataset": string } |

| Fields | |
|---|---|
| `defaultDatabase` | The default database name used to fully qualify SQL objects when their database name is missing. |
| `schemaSearchPath[]` | The schema search path. When SQL objects are missing a schema name, the translation engine searches through this list to find the value. |
| `metadataStoreDataset` | Optional. Expects a valid BigQuery dataset ID that exists, e.g., project-123.metadata_store_123. If specified, translation will search and read the required schema information from a metadata store in this dataset. If the metadata store doesn't exist, translation will parse the metadata file and upload the schema info to a temp table in the dataset to speed up future translation jobs. |

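A sketch of a SourceEnv value; the database and schema names are placeholders, and the dataset ID reuses the example format from the field description above.

```json
{
  "defaultDatabase": "sales_db",
  "schemaSearchPath": ["marts", "staging"],
  "metadataStoreDataset": "project-123.metadata_store_123"
}
```
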
TranslationDetails
The translation details to capture the necessary settings for a translation job.

| JSON representation |
|---|
| { "sourceTargetMapping": [ { object (SourceTargetMapping) } ], "targetBaseUri": string, "sourceEnvironment": { object (SourceEnvironment) }, "targetReturnLiterals": [ string ], "targetTypes": [ string ] } |

| Fields | |
|---|---|
| `sourceTargetMapping[]` | The mapping from source to target SQL. |
| `targetBaseUri` | The base URI for all writes to persistent storage. |
| `sourceEnvironment` | The default source environment values for the translation. |
| `targetReturnLiterals[]` | The list of literal targets that will be directly returned in the response. Each entry consists of the constructed path, EXCLUDING the base path. Not providing a `targetBaseUri` will prevent writing to persistent storage. |
| `targetTypes[]` | The types of output to generate, e.g. sql, metadata, lineage_from_sql_scripts, etc. If not specified, a default set of targets will be generated. Some additional target types may be slower to generate. See the documentation for the set of available target types. |

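A sketch of a TranslationDetails value that reads sources from a Cloud Storage prefix and writes results under a target base URI; the bucket names, database name, and chosen target types are illustrative placeholders.

```json
{
  "sourceTargetMapping": [
    {
      "sourceSpec": { "baseUri": "gs://my-bucket/input" },
      "targetSpec": { "relativePath": "translated" }
    }
  ],
  "targetBaseUri": "gs://my-bucket/output",
  "sourceEnvironment": {
    "defaultDatabase": "sales_db",
    "schemaSearchPath": ["marts"]
  },
  "targetTypes": ["sql", "metadata"]
}
```
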
SourceTargetMapping
Represents one mapping from a source SQL to a target SQL.

| JSON representation |
|---|
| { "sourceSpec": { object (SourceSpec) }, "targetSpec": { object (TargetSpec) } } |

| Fields | |
|---|---|
| `sourceSpec` | The source SQL or the path to it. |
| `targetSpec` | The target SQL or the path for it. |

SourceSpec
Represents one path to the location that holds source data.

| JSON representation |
|---|
| { "encoding": string, // Union field source can be only one of the following: "baseUri": string, "literal": { object (Literal) } } |

| Fields | |
|---|---|
| `encoding` | Optional. The encoding of the SQL bytes. |
| Union field `source`. The specific source SQL. `source` can be only one of the following: | |
| `baseUri` | The base URI for all files to be read in as sources for translation. |
| `literal` | Source literal. |

Literal
Literal data.

| JSON representation |
|---|
| { "relativePath": string, // Union field literal_data can be only one of the following: "literalString": string, "literalBytes": string } |

| Fields | |
|---|---|
| `relativePath` | Required. The identifier of the literal entry. |
| Union field `literal_data`. The literal SQL contents. `literal_data` can be only one of the following: | |
| `literalString` | Literal string data. |
| `literalBytes` | Literal byte data. |

TargetSpec
Represents one path to the location that holds target data.

| JSON representation |
|---|
| { "relativePath": string } |

| Fields | |
|---|---|
| `relativePath` | The relative path for the target data. Given source file |

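A sketch of a single SourceTargetMapping entry that supplies the SQL inline as a Literal rather than pointing at Cloud Storage; the relative paths and SQL text are placeholders.

```json
{
  "sourceSpec": {
    "literal": {
      "relativePath": "queries/report.sql",
      "literalString": "SELECT * FROM sales_db.orders;"
    }
  },
  "targetSpec": { "relativePath": "translated" }
}
```
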
SourceEnvironment
Represents the default source environment values for the translation.

| JSON representation |
|---|
| { "defaultDatabase": string, "schemaSearchPath": [ string ], "metadataStoreDataset": string } |

| Fields | |
|---|---|
| `defaultDatabase` | The default database name used to fully qualify SQL objects when their database name is missing. |
| `schemaSearchPath[]` | The schema search path. When SQL objects are missing a schema name, the translation engine searches through this list to find the value. |
| `metadataStoreDataset` | Optional. Expects a valid BigQuery dataset ID that exists, e.g., project-123.metadata_store_123. If specified, translation will search and read the required schema information from a metadata store in this dataset. If the metadata store doesn't exist, translation will parse the metadata file and upload the schema info to a temp table in the dataset to speed up future translation jobs. |

State
Possible states of a migration task.

| Enums | |
|---|---|
| `STATE_UNSPECIFIED` | The state is unspecified. |
| `PENDING` | The task is waiting for orchestration. |
| `ORCHESTRATING` | The task is assigned to an orchestrator. |
| `RUNNING` | The task is running, i.e. its subtasks are ready for execution. |
| `PAUSED` | The task is paused. Assigned subtasks can continue, but no new subtasks will be scheduled. |
| `SUCCEEDED` | The task finished successfully. |
| `FAILED` | The task finished unsuccessfully. |

MigrationTaskResult
The migration task result.

| JSON representation |
|---|
| { // Union field details can be only one of the following: "translationTaskResult": { object (TranslationTaskResult) } } |

| Fields | |
|---|---|
| Union field `details`. Details specific to the task type. `details` can be only one of the following: | |
| `translationTaskResult` | Details specific to translation task types. |

TranslationTaskResult
Translation specific result details from the migration task.

| JSON representation |
|---|
| { "translatedLiterals": [ { object (Literal) } ], "reportLogMessages": [ { object (GcsReportLogMessage) } ], ... } |

| Fields | |
|---|---|
| `translatedLiterals[]` | The list of the translated literals. |
| `reportLogMessages[]` | The records from the aggregate CSV report for a migration workflow. |
| console | The Cloud Console URI for the migration workflow. |

GcsReportLogMessage
A record in the aggregate CSV report for a migration workflow.

| JSON representation |
|---|
| { "severity": string, "category": string, "filePath": string, "filename": string, "sourceScriptLine": integer, "sourceScriptColumn": integer, "message": string, "scriptContext": string, "action": string, "effect": string, "objectName": string } |

| Fields | |
|---|---|
| `severity` | Severity of the translation record. |
| `category` | Category of the error/warning. Example: SyntaxError |
| `filePath` | The file path in which the error occurred. |
| `filename` | The file name in which the error occurred. |
| `sourceScriptLine` | Specifies the row from the source text where the error occurred (0-based, -1 for messages without a line location). Example: 2 |
| `sourceScriptColumn` | Specifies the column from the source text where the error occurred (0-based, -1 for messages without a column location). Example: 6 |
| `message` | Detailed message of the record. |
| `scriptContext` | The script context (obfuscated) in which the error occurred. |
| `action` | Category of the error/warning. Example: SyntaxError |
| `effect` | Effect of the error/warning. Example: COMPATIBILITY |
| `objectName` | Name of the affected object in the log message. |

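A sketch of a single report record; the severity value, file names, and message text are hypothetical, while the category, action, and effect values reuse the examples from the field descriptions above.

```json
{
  "severity": "WARNING",
  "category": "SyntaxError",
  "filePath": "gs://my-bucket/input/report.sql",
  "filename": "report.sql",
  "sourceScriptLine": 2,
  "sourceScriptColumn": 6,
  "message": "Statement could not be fully translated.",
  "action": "SyntaxError",
  "effect": "COMPATIBILITY",
  "objectName": "orders"
}
```
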
State
Possible migration workflow states.

| Enums | |
|---|---|
| `STATE_UNSPECIFIED` | Workflow state is unspecified. |
| `DRAFT` | Workflow is in draft status, i.e. tasks are not yet eligible for execution. |
| `RUNNING` | Workflow is running (i.e. tasks are eligible for execution). |
| `PAUSED` | Workflow is paused. Tasks currently in progress may continue, but no further tasks will be scheduled. |
| `COMPLETED` | Workflow is complete. No task should remain in a non-terminal state, but any that are (e.g. after forced termination) will not be scheduled. |


| Methods | |
|---|---|
| `create` | Creates a migration workflow. |
| `delete` | Deletes a migration workflow by name. |
| `get` | Gets a previously created migration workflow. |
| `list` | Lists previously created migration workflows. |
| `start` | Starts a previously created migration workflow. |

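For example, the create method takes a MigrationWorkflow as its request body. A minimal sketch of such a body, assuming a Teradata-to-BigQuery batch translation with placeholder bucket and database names, might look like:

```json
{
  "displayName": "teradata-sql-translation",
  "tasks": {
    "translation-task": {
      "type": "Translation_Teradata2BQ",
      "translationConfigDetails": {
        "sourceDialect": { "teradataDialect": { "mode": "SQL" } },
        "targetDialect": { "bigqueryDialect": {} },
        "sourceEnv": { "defaultDatabase": "sales_db" },
        "gcsSourcePath": "gs://my-bucket/input",
        "gcsTargetPath": "gs://my-bucket/output"
      }
    }
  }
}
```
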