In Dataform, a table is one of the types of objects that make up a workflow. You can create tables that reference data from the data sources declared for your workflow or from other tables in your workflow. Dataform compiles your table definitions into SQL in real time. When you trigger execution, Dataform executes the SQL code and creates your defined tables in BigQuery.
You can create the following table types in a type: "table" SQLX file:
- table: a regular table.
- incremental: an incremental table.
- view: a table view. For more information, see Introduction to views.
- materialized: a materialized table view. For more information, see Introduction to materialized views.
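For example, a minimal table definition SQLX file contains only the type declaration and a SELECT statement. The following sketch defines a view over the store_clean table that is used in later samples in this document:
config { type: "view" }
SELECT order_id, order_status FROM ${ref("store_clean")}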
You can also define table partitions and clusters.
To keep a record of the purpose of a table or its relation to other tables in your workflow, you can add documentation to the table or its selected columns.
To test the data in a table against specific conditions, you can create data quality test queries called assertions. Dataform runs assertions every time it updates your workflow and alerts you if any assertions fail.
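For example, built-in assertions such as nonNull and uniqueKey can be declared in the config block of a table definition; the column names in this sketch are illustrative:
config {
  type: "table",
  assertions: {
    uniqueKey: ["order_id"],
    nonNull: ["order_id", "order_status"]
  }
}
SELECT order_id, order_status FROM ${ref("store_clean")}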
You can configure additional table settings to do the following:
- Override the default table settings, such as database or schema.
- Disable table creation.
- Execute a SQL statement before or after table creation.
To organize your tables in BigQuery after you execute them, you can add BigQuery labels. To learn more, see Introduction to labels.
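For example, labels are set in the bigquery block of a table definition; the label key and value in this sketch are illustrative:
config {
  type: "table",
  bigquery: {
    labels: {
      department: "shipping"
    }
  }
}
SELECT ...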
To restrict data access at the table column level, you can add BigQuery policy tags. To learn more, see Introduction to column-level access control.
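For example, policy tags are attached to individual columns in the config block. The following sketch assumes the bigqueryPolicyTags column property and uses placeholder values for the column and the policy tag resource name:
config {
  type: "table",
  columns: {
    email: {
      description: "User email address",
      bigqueryPolicyTags: ["projects/PROJECT_ID/locations/LOCATION/taxonomies/TAXONOMY_ID/policyTags/POLICY_TAG_ID"]
    }
  }
}
SELECT ...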
In addition to defining tables in a type: "table" SQLX file, you can create empty tables by defining a custom SQL query in a type: "operations" SQLX file. You might want to create an empty table so that a different service can populate it with data.
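For example, a type: "operations" SQLX file that creates an empty table might look like the following sketch; the hasOutput property lets other actions reference the table with the ref function, and the column list is illustrative:
config {
  type: "operations",
  hasOutput: true
}
CREATE TABLE IF NOT EXISTS ${self()} (
  user_id STRING,
  created_at TIMESTAMP
)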
Before you begin
In the Google Cloud console, go to the Dataform page.
Create and initialize a development workspace in your repository.
Optional: Declare a data source.
Required roles
To get the permissions that you need to complete the tasks in this document, ask your administrator to grant you the Dataform Editor (roles/dataform.editor) IAM role on workspaces.
For more information about granting roles, see Manage access to projects, folders, and organizations.
You might also be able to get the required permissions through custom roles or other predefined roles.
Create a table
This section shows you how to create tables with Dataform core in Dataform.
About table definitions
To define a table, you define the table type and write a SELECT statement in a type: "table" SQLX file. Dataform then compiles your Dataform core code into SQL, executes the SQL code, and creates your defined tables in BigQuery.
In a Dataform core SELECT statement, you define the table structure and reference other objects of your workflow.
In addition to defining tables in a type: "table" SQLX file, you can create empty tables by defining a custom SQL query in a type: "operations" SQLX file. For more information, see Create an empty table.
Reference dependencies with ref
To reference a workflow object in a SELECT statement and automatically add it as a dependency, use the ref function. Dataform executes dependencies before the tables that depend on them to ensure correct pipeline ordering.
The ref function is a built-in Dataform core function that is critical to dependency management in Dataform. The ref function lets you reference and automatically depend on the following objects defined in your Dataform workflow, instead of hard-coding the schema and table names:
- Tables of all supported table types.
- Data source declarations.
- Custom SQL operations with the hasOutput property set to true.
Dataform uses the ref function to build a dependency tree of all the tables to be created or updated.
After compiling, Dataform adds boilerplate statements to the SQL statement, such as CREATE, REPLACE, INSERT, or MERGE.
The following code sample shows a table definition with the use of the ref function:
config { type: "table" }
SELECT
order_date AS date,
order_id AS order_id,
order_status AS order_status,
SUM(item_count) AS item_count,
SUM(amount) AS revenue
FROM ${ref("store_clean")}
GROUP BY 1, 2
In the ref function, you provide the name of the table or data source declaration that you want to depend on. This is typically the filename of the SQLX file in which that table or data source declaration is defined.
If a table name is overridden, use the overridden name in the ref function. For example, reference a table with config { name: "overridden_name" } as ref("overridden_name"). For more information about overriding table names, see Configure additional table settings.
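The following sketch shows one file that overrides its table name and a second, illustrative downstream file that references it by the overridden name:
-- definitions/store_clean_v2.sqlx
config { type: "table", name: "overridden_name" }
SELECT * FROM ${ref("store_clean")}

-- definitions/downstream.sqlx
config { type: "table" }
SELECT * FROM ${ref("overridden_name")}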
When you have multiple tables of the same name in different schemas, you can reference a specific table by providing two arguments to the ref function: the schema name and the table name.
The following code sample shows the ref function with two arguments to specify a table within a specific schema:
config { type: "table" }
SELECT * FROM ${ref("schema", "store_clean")}
You can also add table dependencies manually to the config block for tables, assertions, data source declarations, or custom SQL operations that are not referenced in a ref function in the SELECT statement. Dataform executes these dependencies before dependent tables.
The following code sample shows a table dependency in the config block:
config { dependencies: [ "unreferenced_table" ] }
SELECT * FROM ...
For more information on dependency management in your workflow, see Declare dependencies.
Reference other tables with resolve
The resolve function lets you reference a table or data source declaration in a SELECT statement like the ref function, but it doesn't add the reference as a dependency. This means that the object referenced using the resolve function doesn't affect the execution of the table that uses the resolve function.
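For example, the following sketch references the store_clean table from the earlier sample with resolve instead of ref, so store_clean is not added as a dependency:
config { type: "table" }
SELECT * FROM ${resolve("store_clean")}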
For more information on built-in Dataform core functions, see Dataform core reference.
Create a SQLX file for a table definition
Store table definition SQLX files in the definitions/ directory. To create a new SQLX file in the definitions/ directory, follow these steps:
In the Google Cloud console, go to the Dataform page.
To open a repository, click the repository name.
To open a development workspace, click the workspace name.
In the Files pane, next to definitions/, click More, and then click Create file.
In the Add a file path field, enter the name of the file followed by .sqlx after definitions/. For example, definitions/my-table.sqlx. Filenames can only include numbers, letters, hyphens, and underscores.
Click Create file.
Define the table type
To create a new table type definition, follow these steps:
- In your development workspace, in the Files pane, expand the definitions/ directory.
- Select the table definition SQLX file that you want to edit.
In the file, enter the following code snippet:
config { type: "TABLE_TYPE" }
Replace TABLE_TYPE with one of the following table types:
- table
- incremental
- view
Optional: To define a materialized view, enter the materialized property under type: "view" in the following format:
config { type: "view", materialized: true }
For more information, see ITableConfig.
Optional: Click Format.
Define table structure and dependencies
To write a table definition SELECT statement and define the table structure and dependencies, follow these steps:
- In your development workspace, in the Files pane, expand the definitions/ directory.
- Select the table definition SQLX file that you want to edit.
- Below the config block, write a SELECT statement.
statement. - Optional: Click Format.
The following code sample shows a table definition with a SELECT statement and the ref function:
config { type: "table" }
SELECT
customers.id AS id,
customers.first_name AS first_name,
customers.last_name AS last_name,
customers.email AS email,
customers.country AS country,
COUNT(orders.id) AS order_count,
SUM(orders.amount) AS total_spent
FROM
dataform-samples.dataform_sample.crm_customers AS customers
LEFT JOIN ${ref('order_stats')} orders
ON customers.id = orders.customer_id
WHERE
customers.id IS NOT NULL
AND customers.first_name <> 'Internal account'
AND country IN ('UK', 'US', 'FR', 'ES', 'NG', 'JP')
GROUP BY 1, 2, 3, 4, 5
Add manual table dependencies
To add table dependencies that are not referenced in the SELECT statement but that need to be executed before the current table, follow these steps:
- In your development workspace, in the Files pane, expand the definitions/ directory.
- Select the table definition SQLX file that you want to edit.
In the config block of the table, enter the following code snippet:
dependencies: [ "DEPENDENCY_TABLE", ]
Replace DEPENDENCY_TABLE with the filename of the table you want to add as a dependency. You can enter multiple filenames.
Optional: Click Format.
The following code sample shows two tables added as manual table dependencies to the config block of a table definition file:
config { dependencies: [ "some_table", "some_other_table" ] }
Create table partitions and clusters
This section shows you how to use Dataform core to create table partitions and clusters. BigQuery supports partitioned tables and table clustering. For more information, see Introduction to partitioned tables and Creating and using clustered tables.
Create a table partition
To create a table partition, follow these steps:
- Go to your development workspace.
- In the Files pane, expand definitions/.
- Open a table definition SQLX file.
In the config block, add the bigquery block below the table type declaration in the following format:
config { type: "table", bigquery: { } }
In the bigquery block, enter the following code snippet:
partitionBy: "PARTITION_EXPRESSION"
Replace PARTITION_EXPRESSION with an expression for partitioning the table.
Optional: Click Format.
The following code sample shows partitioning a table by hour in a table definition SQLX file:
config {
type: "table",
bigquery: {
partitionBy: "DATETIME_TRUNC(<timestamp_column>, HOUR)"
}
}
The following code sample shows partitioning a table by an integer value in a table definition SQLX file:
config {
type: "table",
bigquery: {
partitionBy: "RANGE_BUCKET(<integer_column>, GENERATE_ARRAY(0, 1000000, 1000))"
}
}
Set a partition filter
To set a partition filter, follow these steps:
- Go to your development workspace.
- In the Files pane, expand definitions/.
- Open a partitioned table definition SQLX file.
In the bigquery block, enter the following code snippet:
requirePartitionFilter : true
Optional: Click Format.
The following code sample shows a partition filter set in the bigquery block of a partitioned table SQLX file:
config {
type: "table",
bigquery: {
partitionBy: "DATE(ts)",
requirePartitionFilter : true
}
}
SELECT CURRENT_TIMESTAMP() AS ts
For more information on the partition filter in BigQuery, see Setting the require partition filter attribute on a partitioned table.
Set a retention period for partitions
To control the retention of all partitions in a partitioned table, follow these steps:
- Go to your development workspace.
- In the Files pane, expand definitions/.
- Open a partitioned table definition SQLX file.
In the bigquery block, enter the following code snippet:
partitionExpirationDays: NUMBER_OF_DAYS
Replace NUMBER_OF_DAYS with the number of days that you want to retain the partitions for.
Optional: Click Format.
The following code sample shows a retention period for partitions set to 14 days in the bigquery block of a partitioned table SQLX file:
config {
type: "table",
bigquery: {
partitionBy: "DATE(ts)",
partitionExpirationDays: 14,
}
}
SELECT CURRENT_TIMESTAMP() AS ts
Create a table cluster
To create a table cluster, follow these steps:
- Go to your development workspace.
- In the Files pane, expand definitions/.
- Open a table definition SQLX file.
In the bigquery block, enter the following code snippet:
clusterBy: ["CLUSTER_COLUMN"]
Replace CLUSTER_COLUMN with the name of the column by which you want to cluster the table. For more information, see clustering_column_list.
Optional: Click Format.
The following code sample shows a partitioned table clustered by the name and revenue columns:
config {
type: "table",
bigquery: {
partitionBy: "DATE(ts)",
clusterBy: ["name", "revenue"]
}
}
SELECT CURRENT_TIMESTAMP() as ts, name, revenue
Configure an incremental table
This section shows you how to use Dataform core to configure an incremental table.
About incremental tables
Dataform updates tables differently based on the table type. During each execution of a table or a view, Dataform rebuilds the whole table or view from scratch.
When you define an incremental table, Dataform builds the incremental table from scratch only for the first time. During subsequent executions, Dataform only inserts or merges new rows into the incremental table according to the conditions that you configure.
Dataform inserts new rows only into columns that already exist in the incremental table. If you make changes to the incremental table definition query—for example, by adding a new column—you must rebuild the table from scratch. To do so, the next time you trigger an execution of the table, select the Run with full refresh option.
Here are some common use cases for incremental tables:
- Performance optimization
- For some kinds of data, such as web logs or analytics data, you might want to only process new records instead of reprocessing the entire table.
- Latency reduction
- You can use incremental tables to execute workflows quickly but frequently, reducing the downstream latency of the output tables.
- Daily snapshots
- You can configure an incremental table to create daily snapshots of the table data, for example, for longitudinal analysis of user settings stored in a production database.
Process a subset of rows in an incremental table
To determine a subset of rows for Dataform to process during each execution, add a conditional WHERE clause to the incremental table SQLX definition file. In the WHERE clause, you can specify an incremental condition and a non-incremental condition. Dataform applies the incremental condition during table execution without a full refresh, and the non-incremental condition during execution with a full refresh.
To configure an incremental table, follow these steps:
- Go to your development workspace.
- In the Files pane, expand definitions/.
- Open an incremental table definition SQLX file.
Enter a WHERE clause in the following format:
config { type: "incremental" }
SELECT_STATEMENT
${when(incremental(),
  `WHERE INCREMENTAL_CONDITION`,
  `WHERE NON_INCREMENTAL_CONDITION`) }
Replace the following:
- SELECT_STATEMENT: the SELECT statement that defines your table.
- INCREMENTAL_CONDITION: the condition that you specify in the WHERE clause to select rows for Dataform to process during table execution without a full refresh.
- NON_INCREMENTAL_CONDITION: the condition that you specify in the WHERE clause to select rows for Dataform to process during table execution with a full refresh.
Optional: Click Format.
The following code sample shows an incremental table that incrementally processes rows of the productiondb.logs table:
config { type: "incremental" }
SELECT timestamp, message FROM ${ref("productiondb", "logs")}
${when(incremental(),
`WHERE date > (SELECT MAX(date) FROM ${self()}) AND country = "UK"`,
`WHERE country = "UK"`)}
The following code sample shows an incremental table that creates a snapshot of the productiondb.customers table:
config { type: "incremental" }
SELECT CURRENT_DATE() AS snapshot_date, customer_id, name, account_settings FROM ${ref("productiondb", "customers")}
${when(incremental(), `WHERE snapshot_date > (SELECT MAX(snapshot_date) FROM ${self()})`) }
Merge rows in an incremental table
To ensure that an incremental table contains only one row corresponding to a selected combination of columns, set the selected columns as uniqueKey to merge rows that have the same uniqueKey value. When updating the table, Dataform merges rows with the same uniqueKey value instead of appending them.
To configure merging in an incremental table, follow these steps:
- Go to your development workspace.
- In the Files pane, expand definitions/.
- Select an incremental table definition SQLX file.
In the config block, set the selected columns as uniqueKey in the following format:
uniqueKey: ["COLUMN_NAME"]
Replace COLUMN_NAME with the name of a selected column.
Optional: Click Format.
The following code sample shows an incremental table with the transaction_id column set as uniqueKey, which ensures that the table contains only one row for each transaction_id value:
config {
type: "incremental",
uniqueKey: ["transaction_id"]
}
SELECT timestamp, action FROM weblogs.user_actions
${ when(incremental(), `WHERE timestamp > (SELECT MAX(timestamp) FROM ${self()})`) }
Filter rows in an incremental table
In an incremental partitioned table, to avoid Dataform scanning the whole table to find matching rows, set updatePartitionFilter to only consider a subset of records.
The following code sample shows an incremental partitioned table with merging configured by setting the uniqueKey and updatePartitionFilter properties:
config {
type: "incremental",
uniqueKey: ["transaction_id"],
bigquery: {
partitionBy: "DATE(timestamp)",
updatePartitionFilter:
"timestamp >= timestamp_sub(current_timestamp(), interval 24 hour)"
}
}
SELECT timestamp, action FROM weblogs.user_actions
${ when(incremental(), `WHERE timestamp > (SELECT MAX(timestamp) FROM ${self()})`) }
Avoid full table scans when ingesting from a partitioned table
When you create an incremental table that references a partitioned table, we recommend that you build your table query to avoid full table scans of the partitioned table during each incremental update.
You can limit the number of partitions that BigQuery scans to update the incremental table by using a constant expression in your table query. To turn a value from the partitioned table into a constant expression, use BigQuery scripting to declare the value as a variable in the pre_operations block. Then, use the variable as a constant expression in a WHERE clause in the SELECT query.
With this configuration, Dataform updates the incremental table based on the most recent partitions of the referenced partitioned table, without scanning the entire table.
To configure an incremental table that references a partitioned table and avoids full table scans, follow these steps:
- Go to your development workspace.
- In the Files pane, expand definitions/.
- Select an incremental table definition SQLX file.
- In the pre_operations block, declare a variable with BigQuery scripting.
- Filter the SELECT statement that defines the table with a WHERE clause that references the declared variable.
clause that references the declared variable. - Optional: Click Format.
The following code sample shows an incremental table in which the referenced raw_events table is partitioned by event_timestamp:
config {
type: "incremental",
}
pre_operations {
DECLARE event_timestamp_checkpoint DEFAULT (
${when(incremental(),
`SELECT max(event_timestamp) FROM ${self()}`,
`SELECT timestamp("2000-01-01")`)}
)
}
SELECT
*
FROM
${ref("raw_events")}
WHERE event_timestamp > event_timestamp_checkpoint
In the preceding code sample, the event_timestamp_checkpoint variable is defined in the pre_operations block. The event_timestamp_checkpoint variable is then used as a constant expression in the WHERE clause.
Rebuild an incremental table from scratch with full refresh
You can force an incremental table to be rebuilt from scratch by using either the command-line interface with the --full-refresh option or the Run with full refresh option when triggering a workflow execution.
When you select the full refresh option, in your development workspace or by using the Dataform CLI, Dataform ignores the ${when(incremental(), ... } parameter during execution and recreates the table with a CREATE OR REPLACE statement.
Protect an incremental table from full refresh
To protect an incremental table from being rebuilt from scratch and potential data loss, you can set the incremental table as protected. You might want to prevent an incremental table from being rebuilt if your data source is temporary.
To mark an incremental table as protected, follow these steps:
- Go to your development workspace.
- In the Files pane, expand definitions/.
- Select an incremental table definition SQLX file.
- In the config block, enter protected: true.
. - Optional: Click Format.
The following code sample shows an incremental table marked as protected:
config {
type: "incremental",
protected: true
}
SELECT ...
Add table documentation
This section shows you how to add descriptions of a table and its columns and records to a Dataform core SQLX file.
You can add table, column, and record descriptions to all table types in Dataform: tables, incremental tables, and views.
You might want to document the following:
- The purpose of the table.
- The content or role of columns or records in the table.
- The relationship of the table and other actions of your workflow, for example, the tables or views that depend on the current table.
- The assertions applied to the table.
- The pre-operations or post-operations applied to the table.
- The owner of the table—that is, the user who created it. This information might be useful if multiple team members work on a workflow.
Add a table description
To add a description to a table in a SQLX file, follow these steps:
In the Google Cloud console, go to the Dataform page.
Select a repository.
Select a development workspace.
In the Files pane, click the table definition SQLX file that you want to edit.
In the config block of the file, enter the table description in the following format:
description: "Description of the table",
Optional: Click Format.
The following code sample shows a table description added to the config block of a SQLX table definition file:
config {
type: "table",
description: "Description of the table",
}
Add column and record descriptions
To add descriptions of individual columns and records to a SQLX file, follow these steps:
- In the config block of your table definition file, enter columns: {}.
Inside columns: {}, enter column descriptions in the following format:
column_name: "Description of the column",
Inside columns: {}, enter record descriptions in the following format:
record_name: {
  description: "Description of the record",
  columns: {
    record_column_name: "Description of the record column"
  }
}
Optional: Click Format.
The following code sample shows table, column, and record descriptions in the config block of a SQLX table definition file:
config {
type: "table",
description: "Description of the table.",
columns: {
column1_name: "Description of the first column",
column2_name: "Description of the second column",
column3_name: "Description of the third column",
record_name: {
description: "Description of the record.",
columns: {
record_column1_name: "Description of the first record column",
record_column2_name: "Description of the second record column",
}
}
}
}
SELECT
"first_column_value" AS column_1_name,
"second_column_value" AS column_2_name,
"third_column_value" AS column_3_name,
STRUCT("first" AS record_column1_name,
"second" AS record_column2_name) AS record_name
Reuse column documentation with includes
You can reuse descriptions of columns in Dataform across your SQL workflow with JavaScript includes. You might want to reuse column documentation if you have multiple columns with the same name and description in your SQL workflow.
- To create a reusable column description, define a JavaScript include constant with the name of the column and its description.
You can define a constant with a description of a single column, or a constant with a set of column descriptions to reuse the descriptions of all columns in a table. For more information on creating and using includes in Dataform, see Reuse code across a single repository with includes.
The following code sample shows multiple constants with descriptions of individual columns defined in the includes/docs.js JavaScript file:
// filename is includes/docs.js
const user_id = `A unique identifier for a user`;
const age = `The age of a user`;
const creation_date = `The date this user signed up`;
const user_tenure = `The number of years since the user's creation date`;
const badge_count = `The all-time number of badges the user has received`;
const questions_and_answer_count = `The all-time number of questions and answers the user has created`;
const question_count = `The all-time number of questions the user has created`;
const answer_count = `The all-time number of answers the user has created`;
const last_badge_received_at = `The time the user received their most recent badge`;
const last_posted_at = `The time the user last posted a question or answer`;
const last_question_posted_at = `The time the user last posted a question`;
const last_answer_posted_at = `The time the user last posted an answer`;
module.exports = {
user_id,
age,
creation_date,
user_tenure,
badge_count,
questions_and_answer_count,
question_count,
answer_count,
last_badge_received_at,
last_posted_at,
last_question_posted_at,
last_answer_posted_at,
};
The following code sample shows the user_id and age constants, defined in includes/docs.js, used in the definitions/my_table.sqlx SQLX table definition file to generate documentation for selected columns in the table:
config {
type: "table",
description: "Table description.",
columns: {
user_id: docs.user_id,
column2_name: "Description of the second column",
column3_name: "Description of the third column",
age: docs.age,
}
}
SELECT ...
The following code sample shows a constant with a set of column descriptions defined in the includes/docs.js JavaScript file:
// filename is includes/docs.js
const columns = {
  user_id: `A unique identifier for a user`,
  age: `The age of a user`,
  creation_date: `The date this user signed up`,
  user_tenure: `The number of years since the user's creation date`,
  badge_count: `The all-time number of badges the user has received`,
  questions_and_answer_count: `The all-time number of questions and answers the user has created`,
  question_count: `The all-time number of questions the user has created`,
  answer_count: `The all-time number of answers the user has created`,
  last_badge_received_at: `The time the user received their most recent badge`,
  last_posted_at: `The time the user last posted a question or answer`,
  last_question_posted_at: `The time the user last posted a question`,
  last_answer_posted_at: `The time the user last posted an answer`,
};
module.exports = {
columns
};
The following code sample shows the columns constant, which is defined in includes/docs.js and used in the definitions/my_table.sqlx SQLX table definition file to generate documentation for all columns in the table:
config { type: "table",
description: "My table description",
columns: docs.columns
}
SELECT 1 AS one
What's next
- To learn how to configure additional table settings, see Configure additional table settings.
- To learn how to test table data with assertions, see Test tables with assertions.
- To learn how to define tables with JavaScript, see Create Dataform workflows with JavaScript.
- To learn how to reuse code with includes, see Reuse code across a single repository with includes.
- To learn how to use the Dataform command-line interface, see Use the Dataform CLI.