About autoscaling in Application Integration
Application Integration is built on the same underlying infrastructure as some of Google Cloud's largest-scale services and offers autoscaling. Autoscaling lets integration workloads adapt automatically to changing demand, which eliminates the need for manual intervention or complex tuning in most cases while maintaining reliable performance.
How Application Integration scales
Application Integration uses both horizontal and vertical scaling to manage varying workloads:
Horizontal scaling
Scales out dynamically by provisioning or deprovisioning Application Integration instances based on workload. When demand increases, horizontal autoscaling provisions additional instances that run concurrently and handle the increased load. Conversely, during periods of low activity, unused instances are deprovisioned to optimize resource usage.
Example: A sudden surge in orders can trigger autoscaling, which provisions additional integration instances to handle the increased volume and process requests in parallel.
Vertical scaling
Scales up by allocating additional resources (such as CPU and memory) to an individual task or Application Integration instance on demand. Instead of provisioning more instances, vertical scaling increases the capacity of the existing instances.
Example: A complex data transformation task may require additional memory. Vertical scaling allocates additional memory to help complete the task more efficiently.
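The two scaling modes can be sketched as a toy model. This is a conceptual illustration only: the function names, capacity figures, and units below are hypothetical and don't reflect Application Integration's internal algorithm or limits.

```python
import math

# Illustrative model only. Application Integration performs scaling
# decisions internally; the per-instance capacity and memory figures
# here are hypothetical.

def instances_needed(requests_per_sec: float,
                     capacity_per_instance: float = 100) -> int:
    """Horizontal scaling: scale out to enough instances for the load."""
    return max(1, math.ceil(requests_per_sec / capacity_per_instance))

def memory_to_allocate(current_mb: int, task_requires_mb: int) -> int:
    """Vertical scaling: give an existing instance enough memory for a task."""
    return max(current_mb, task_requires_mb)

# A surge from 150 to 450 requests/sec triggers scale-out:
print(instances_needed(150))  # 2
print(instances_needed(450))  # 5

# A complex transformation needs more memory than currently allocated,
# so the allocation grows rather than the instance count:
print(memory_to_allocate(512, 2048))  # 2048
```

The key distinction the sketch captures is that horizontal scaling changes the *number* of instances, while vertical scaling changes the *resources* of an existing instance or task.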
Application Integration adheres to defined quotas and limits, which can be increased upon request. For more information, see Quotas and limits.
To understand the autoscaling behavior of Integration Connectors, see About autoscaling in Integration Connectors.
Benefits
The autoscaling features of Application Integration provide the following advantages:
- Performance at scale: Integrations automatically adjust to changes in traffic and usage. Planned scaling is still recommended for predictable, high-impact events such as seasonal peaks.
- Reduced operational overhead: Scaling is automated and doesn't require manual intervention in most cases. For extreme load spikes, advance planning may still be needed.
- Enhanced reliability: Responds to traffic spikes while maintaining service availability.
- Simplified management: Scaling is handled by Google Cloud's infrastructure, letting teams focus on building integrations instead of managing scaling infrastructure.
Observability and monitoring
Application Integration automatically exports a rich set of metrics to Cloud Monitoring, providing deep insights into the usage, performance, and health of your integrations.
Application Integration doesn't provide a single pre-built dashboard that covers all metrics across projects, but you can use Cloud Monitoring to create custom dashboards tailored to your specific needs.
Key Application Integration metrics
| Key metrics | Description |
|---|---|
| Integration execution latency | The time taken for individual integration executions to complete. |
| Data processed by integration executions | The size of data processed by integrations, including input/output parameters and payloads. |
| Integration execution count | The number of executions of an integration, or of individual steps (tasks or triggers) within it. |
| Status | The execution status of the integration or integration step (task or trigger). For example, succeeded, failed, or cancelled. |
For a comprehensive list of Application Integration metrics available for monitoring, see Monitor Application Integration resources.
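As a starting point for a custom dashboard or alert, you can select one of these metrics with a Cloud Monitoring filter. The metric type and label shown below follow Application Integration's metric naming, but treat them as an assumption and verify them against the metrics reference before use:

```
metric.type = "integrations.googleapis.com/integration/execution_count"
```

Grouping the resulting time series by the metric's status label (for example, `metric.label.status`, if exposed as documented in the metrics reference) breaks the execution count down into succeeded, failed, and cancelled executions.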