Specify data warehousing requirements
This page describes how to specify your data warehousing requirements for
evaluating the cost of running your data analytics and warehousing setup on
BigQuery on Google Cloud.
Before you begin, complete the steps to start a cost estimate.
On the Start your estimate page, in the Data warehousing card,
click Start.
If you have completed estimation for another environment and want to
start specifying details in the data warehousing cost calculator, on the
results page, click Start in the Data warehousing card.
To add a warehouse, in the Warehouses section, click Add Warehouse
and perform the following steps:
Enter a warehouse name.
From the Select edition list, choose a BigQuery
edition for your estimate. To learn about BigQuery
editions and the features associated with them, see
Introduction to BigQuery editions.
From the Region list, select a region for migrating your data
warehouse.
Choose a pricing track. The default pricing
track is 3-year commitment.
From the Environment list, choose the environment from which you are
migrating. In the additional fields that appear, provide the required
details, as described in the following sections.
Click Done to add the warehouse to your estimate.
Repeat the previous step to add more warehouses to your estimate.
A cost estimate is displayed for each warehouse you add. If you want to view
the cost breakdown for the warehouses, click View details.
From the currency list, select the currency in which you want to see the
estimate. The estimate is generated in US dollars by default.
To review the estimate for your data warehouse migration, click Submit.
You can see the data warehouse cost estimate on the results page.
Specify the migration environment details
The following sections describe how to specify your migration environment
details.
Teradata
If you are migrating from Teradata, provide the following details:
Select a Teradata generation.
In the TCore count field, specify the number of TCores used in your
current data warehouse environment. The default value is 0.
In the % average usage of TCores field, specify the average
percentage of the total TCores used in your current warehouse. The default
value is 75%.
In the % sustained usage of TCores field, specify the percentage of total
TCores run in a sustained way for your workloads. The default value is 50%.
In the % CPU used for index/stats field, specify the percentage of total
CPUs used for indexing and statistics. The default value is 10%.
In the % CPU used for disaster recovery field, specify the percentage of
total CPUs used for disaster recovery.
In the % CPU for data ingestion and maintenance field, specify the
percentage of total CPUs used for data ingestion and maintenance tasks. The
default value is 5%.
In the Total compressed storage in Teradata (GB) field, specify the total
compressed storage, in GB, that you're using.
In the BigQuery storage section, specify the percentages to reserve for
active and long-term storage. The default values are 20% and 80%
respectively.
For more information, see Storage pricing.
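The BigQuery storage split works the same way for every source environment: the active and long-term percentages are applied to the total compressed storage you entered. The following sketch illustrates that arithmetic with the default 20%/80% split; the storage total is a made-up example value, not a recommendation.

```python
# Illustrative only: how the active/long-term percentages divide the
# total compressed storage figure. The 10,000 GB input is hypothetical.
total_compressed_gb = 10_000        # value entered for total compressed storage
active_pct, long_term_pct = 20, 80  # calculator defaults

active_gb = total_compressed_gb * active_pct / 100
long_term_gb = total_compressed_gb * long_term_pct / 100
print(active_gb, long_term_gb)  # 2000.0 8000.0
```

Note that the two percentages should sum to 100, since together they describe all of your migrated storage.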
Snowflake
If you are migrating from Snowflake, provide the following details:
In the Snowflakes credits per month field, specify the Snowflake credits
you consume per month.
In the % average usage of Snowflakes credits field, specify the average
core utilization percentage of your existing workloads. The default value is
75%.
In the % sustained usage of Snowflakes credits field, specify the percentage
of your capacity that runs in a sustained way for your workloads. The default
value is 50%.
Specify the total compressed storage for your cluster in Snowflake, in GB.
In the BigQuery storage section, specify the percentages to reserve for
active and long-term storage. The default values are 20% and 80%
respectively.
For more information, see Storage pricing.
Exadata
If you are migrating from Exadata, provide the following details:
From the Exadata model list, select your Exadata platform.
In the Number of database servers field, enter the total number of
database servers that you have.
In the Number of storage servers field, enter the total number of
storage servers that you have.
In the % average usage of Exadata servers field, enter the average core
utilization for your existing workloads. The default value is 75%.
In the % sustained usage of average workload field, enter the minimum system utilization as a percentage of your total system size. You can calculate your sustained usage with the following formula:
$$
U = \mu{(u)} - \sigma{(u)}
$$
where $U$ is the sustained usage, $\mu{(u)}$ is the average system utilization, and $\sigma{(u)}$ is the standard deviation of the system utilization.
The default value is 50%.
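If you have historical utilization samples (for example, hourly CPU utilization percentages), the formula above can be evaluated directly, as in this sketch. The sample values are hypothetical; it uses the population standard deviation, matching the formula's $\sigma(u)$.

```python
# Illustrative calculation of sustained usage U = mean(u) - stdev(u)
# from utilization samples. The sample data is hypothetical.
from statistics import mean, pstdev

hourly_utilization = [62, 70, 81, 75, 68, 90, 55, 73]  # % of total system size

sustained_usage = mean(hourly_utilization) - pstdev(hourly_utilization)
print(round(sustained_usage, 1))
```

Subtracting one standard deviation from the average gives a floor that your system utilization stays above most of the time, which is what "sustained" usage is meant to capture.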
In the Total compressed storage in Exadata field, enter the sum of your
active and long-term compressed storage sizes in GB.
In the BigQuery storage section, specify the percentages to reserve for
active and long-term storage. The default values are 20% and
80%, respectively.
For more information, see Storage pricing.
What's next
Learn how to review and export the estimate results.
Last updated 2025-08-28 UTC.