Design a schema
The ideal schema for a Bigtable table is highly dependent on a
number of factors, including use case, data access patterns, and the data you
plan to store. This page provides an overview of the Bigtable
schema design process.
Before you read this page, you should understand schema design concepts and
best practices. If applicable, also read Schema design for time series data.
Before you begin
Create or identify a Bigtable instance that
you can use to test your schema.
Gather information
Identify the data that you plan to store in Bigtable.
Questions to ask include:
What format does the data use? Possible formats include raw
bytes, strings, protobufs, and JSON.
What constitutes an entity in your data? For example, are you
storing page views, stock prices, ad placements, device measurements,
or some other type of entity? What are the entities composed of?
Is the data time-based?
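Whatever the answers, Bigtable stores every cell value as raw bytes, so your application serializes on write and deserializes on read. A minimal sketch of what that serialization might look like (the function names and field choices are illustrative, not part of any Bigtable API):

```python
import json
import struct

def encode_string(value: str) -> bytes:
    # Strings are written as UTF-8 bytes.
    return value.encode("utf-8")

def encode_json(record: dict) -> bytes:
    # JSON documents are serialized to bytes before being written to a cell.
    return json.dumps(record, sort_keys=True).encode("utf-8")

def encode_reading(value: float) -> bytes:
    # Non-negative numeric values packed big-endian keep their numeric
    # order when compared as raw bytes.
    return struct.pack(">d", value)
```

Protobuf messages follow the same pattern: serialize to bytes on write, parse on read.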
Identify and rank the queries that you use to get the data you
need. Considering the entities you will be storing, think about how you
will want the data to be sorted and grouped when you use it. Your schema
design might not satisfy all your queries, but ideally it satisfies the most
important or most frequently used queries. Examples of queries might include
the following:
A month's worth of temperature readings for IoT objects.
Daily ad views for an IP address.
The most recent location of a mobile device.
All application events per day per user.
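Each of those queries maps naturally onto a row key prefix, so the answer arrives as a single row or one contiguous range of rows. A hedged sketch of candidate key patterns (the separators and field names are assumptions for illustration, not prescribed by Bigtable):

```python
def iot_reading_key(device_id: str, day: str) -> str:
    # A month of readings for one device becomes a single prefix scan,
    # e.g. every key starting with "sensor42#2025-08".
    return f"{device_id}#{day}"

def ad_view_key(ip_address: str, day: str) -> str:
    # Daily ad views for an IP address: one row per address per day.
    return f"{ip_address}#{day}"

def app_event_key(user_id: str, day: str, event_id: str) -> str:
    # All events per day per user: scan the prefix "user123#2025-08-25".
    return f"{user_id}#{day}#{event_id}"
```

Ranking your queries first tells you which of these fields deserves the leftmost position in the key.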
Design
Decide on an initial schema design. This means planning the pattern that
your row keys will follow, the column families your table will have, and
the column qualifiers for the columns you want within those column families.
Follow the general schema design guidelines. If your data
is time-based, also follow the guidelines for time series data.
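For example, one common time-series pattern makes the newest data sort first when your top query asks for "the most recent" value: store a reversed timestamp in the row key. A sketch of that idea (the timestamp bound and padding width here are assumptions):

```python
MAX_TS_MS = 10**13  # assumed upper bound for millisecond timestamps

def latest_first_key(device_id: str, ts_ms: int) -> str:
    # Subtracting from a fixed bound and zero-padding makes newer rows
    # sort lexicographically before older ones, so the most recent
    # location is the first row in the device's key range.
    return f"{device_id}#{MAX_TS_MS - ts_ms:013d}"
```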
If you plan to query your table using SQL instead of the Bigtable
Data API ReadRows method, consult the following documents:
GoogleSQL for Bigtable overview
Manage row key schemas
If you want to use SQL to query views of your table as well as the table
itself, review Tables and views.
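With either interface, the cheapest reads address a single row key or a contiguous key range. A prefix scan, for instance, is just the range from the prefix to its byte-wise successor; a sketch of that computation in pure Python (not a Bigtable client call):

```python
def prefix_to_range(prefix: bytes) -> tuple[bytes, bytes]:
    # The exclusive end of a prefix scan is the prefix with its last
    # non-0xFF byte incremented; an empty end key means "scan to the
    # end of the table".
    end = bytearray(prefix)
    while end and end[-1] == 0xFF:
        end.pop()
    if end:
        end[-1] += 1
    return prefix, bytes(end)
```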
Test
Create a table using the column families and column qualifiers that you
came up with for your schema.
Load the table with at least 30 GB of test data, using row keys that you
identified in your draft plan. Stay below the storage utilization per node
limits.
Run a heavy load test for several minutes. This step gives
Bigtable a chance to balance data across nodes based on the access
patterns that it observes.
Run a one-hour simulation of the reads and writes you would normally
send to the table.
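The simulation does not need to be a production replay; even a synthetic key stream with a deliberately skewed share of traffic will surface in Key Visualizer. A sketch of such a generator (the key format and skew knob are made up for illustration):

```python
import random

def simulated_key_stream(n_rows: int, n_ops: int,
                         hot_fraction: float = 0.0, seed: int = 1):
    # Yield one row key per operation. With hot_fraction > 0, that share
    # of traffic hits a single row, which should appear as a bright band
    # (a hotspot) in Key Visualizer.
    rng = random.Random(seed)
    for _ in range(n_ops):
        if rng.random() < hot_fraction:
            yield "device-0000"
        else:
            yield f"device-{rng.randrange(n_rows):04d}"
```

Feed each yielded key to your normal read/write code path for the duration of the test.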
Review the results of your simulation using Key Visualizer and
Cloud Monitoring.
The Key Visualizer tool for Bigtable
provides scans that show the usage patterns for each table in a cluster.
Key Visualizer helps you check whether your schema design and usage
patterns are causing undesirable results, such as hotspots on specific rows.
Cloud Monitoring lets you check metrics, such as CPU utilization of
the hottest node in a cluster, to determine whether your schema design
is causing problems.
Refine
Revise your schema design as necessary, based on what you learned
with Key Visualizer. For instance:
If you see evidence of hotspotting, use different row keys.
If you notice latency, find out whether your rows exceed the
100 MB per row limit.
If you find that you have to use filters to get the data you
need, consider normalizing the data in a way that allows simpler
(and faster) reads: reading a single row or ranges of rows by row key.
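For the hotspotting case, one widely used remedy is key salting: prefix each key with a small deterministic bucket number so that sequential writes spread across several tablet ranges. A sketch of the idea (the bucket count is an assumption, and readers must then fan out one scan per bucket):

```python
import hashlib

def salted_key(row_key: str, n_buckets: int = 8) -> str:
    # A hash-derived salt is deterministic, so the same logical key
    # always lands in the same bucket and can still be read directly.
    salt = int(hashlib.md5(row_key.encode("utf-8")).hexdigest(), 16) % n_buckets
    return f"{salt}#{row_key}"
```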
After you've revised your schema, test and review the results again.
Continue modifying your schema design and testing until an inspection in
Key Visualizer tells you that the schema design is optimal.
What's next
Watch a presentation on the iterative design process that Twitter used
for Bigtable.
Learn more about Bigtable performance.
Last updated 2025-08-25 UTC.