# Deployment sizes
Before deploying Manufacturing Data Engine (MDE), you need to select
a deployment size. This page outlines the available sizes and their
characteristics. These are general recommendations and may not fit every use
case. For production deployments, consider using a custom size tailored to your
specific needs.
## Sizes
MDE provides four different sizes:
- **Pilot**: For small tests and proofs of concept; the cheapest option. This
  deployment size omits the following components (which are included in all
  other sizes):
  - Bigtable
  - Bigtable Dataflow writer
  - Event Change Dataflow transformation
  - Window Dataflow transformation
- **Small**: For small projects that expect up to 1k messages per second.
- **Medium**: For medium projects that expect up to 20k messages per second.
- **Large**: For large projects that expect up to 100k messages per second.
MDE supports higher throughput through custom sizes. If you need to handle more
throughput than the included sizes provide, contact the
[MDE team](mailto:mde-eng@google.com) for recommendations.
## Sizes detail
The following table describes the details of each size. This configuration is
stored in the `mde_size_details` variable in the files
`terraform/modules/infrastructure/variables.tf` and
`terraform/modules/deployment/variables.tf`, and it can be overwritten if
needed (see the sketch after the table).
| | Pilot | Small | Medium | Large |
|---|---|---|---|---|
| Max messages/second | 300 | 1000 | 20000 | 100000 |
| **GKE** | | | | |
| CIDR pods | /19 | /17 | /17 | /17 |
| CIDR services | /22 | /22 | /22 | /22 |
| compute-class | normal | normal | Scale-Out | Scale-Out |
| message-mapper max replicas | 1 | 5 | 50 | 200 |
| configuration manager max replicas | 1 | 5 | 50 | 200 |
| metadata-manager max replicas | 1 | 5 | 50 | 200 |
| bigquery-sink max replicas | 1 | 5 | 50 | 200 |
| federation-api max replicas | 1 | 2 | 5 | 10 |
| **SQL** | | | | |
| machineType | db-custom-1-3840 | db-custom-2-7680 | db-custom-16-30720 | db-custom-32-61440 |
| max_connections flag | 500 | 500 | 1000 | 4000 |
| **Redis** | | | | |
| tier | Basic | Standard | Standard | Standard |
| memory | 1 GB | 5 GB | 20 GB | 40 GB |
| read replicas | 0 | 1 | 2 | 5 |
| **Dataflow** | | | | |
| GCSWriter machine type | n1-standard-1 | n1-standard-2 | n1-standard-4 | n1-highmem-4 |
| GCSWriter max workers | 1 | 1 | 5 | 10 |
| Bigtable Writer machine type | N/A | n1-standard-2 | n1-standard-4 | n1-standard-4 |
| Bigtable Writer max workers | N/A | 1 | 3 | 5 |
| GCSReader machine type | N/A | n1-standard-2 | n1-standard-4 | n1-standard-4 |
| GCSReader max workers | N/A | 1 | 1 | 2 |
| EventChange machine type | N/A | n1-standard-2 | n1-standard-4 | n1-highmem-4 |
| EventChange max workers | N/A | 1 | 2 | 2 |
| Window machine type | N/A | n1-standard-2 | n1-standard-4 | n1-highmem-4 |
| Window max workers | N/A | 1 | 2 | 2 |
| **Bigtable** | | | | |
| max nodes | N/A | 1 | 3 | 5 |
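If none of the preset columns above matches your workload, you can overwrite
`mde_size_details` with your own values, for example through a
`terraform.tfvars` file. The snippet below is only a sketch: the attribute
names are assumptions, so copy the real object structure from the default value
in the two `variables.tf` files listed above and change only the values that
differ.

```hcl
# terraform.tfvars (sketch) -- override the built-in size presets.
# NOTE: the attribute names below are illustrative placeholders. Copy the
# exact object structure from the default value of mde_size_details in
# terraform/modules/infrastructure/variables.tf and
# terraform/modules/deployment/variables.tf, then adjust only the values
# you need: overriding a Terraform variable replaces its default entirely,
# so keep every attribute the modules expect.
mde_size_details = {
  max_messages_per_second = 40000                # assumed attribute name
  sql_machine_type        = "db-custom-32-61440" # assumed attribute name
  redis_memory_gb         = 40                   # assumed attribute name
  bigtable_max_nodes      = 5                    # assumed attribute name
}
```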
**Important:** Medium and Large sizes use the
[scale-out compute class for GKE clusters](/kubernetes-engine/docs/concepts/autopilot-compute-classes#when-to-use),
which uses **T2D CPUs**. Additionally, these deployments need at least **305**
and **1210** vCPUs respectively when running at full capacity. Make sure to
request this quota extension before trying to deploy MDE.

For more information about how to increase CPU quotas, see the
[CPU quotas](/compute/resource-usage#cpu_quota) documentation.
## Limitations and recommendations
We recommend that you create no more than **100** types in the `Pilot` and
`Small` size deployments and no more than **500** types in the `Medium` and
`Large` sizes.

This is not a hard limit, and MDE can support a much higher number of types. If
you need support for a higher number of types, contact the
[MDE team](mailto:mde-eng@google.com) for assistance in making the necessary
Terraform modifications.