You can collect and store metrics your components produce over time. This section describes how to scrape metrics from workloads in Google Distributed Cloud (GDC) air-gapped for system monitoring and data observability.
The Observability platform provides a custom API to scrape running workloads in your GDC project namespace through monitoring targets. You must deploy a MonitoringTarget custom resource to your project namespace in the org admin cluster. Based on this custom resource, the Observability platform starts collecting data for system monitoring.

The MonitoringTarget custom resource instructs the system monitoring pipeline to scrape the pods of your project. The collected metrics then become visible in the system monitoring instance of your project, letting you evaluate Observability data from your applications.
To configure the MonitoringTarget custom resource, specify the pods of your project namespace from which to collect metrics. These pods must expose an HTTP endpoint that delivers metrics in a Prometheus exposition format, for example, the OpenMetrics format. You can customize settings in the resource, such as the scraping frequency, the metrics endpoint of the pods, labels, and annotations.
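For reference, a scrape of such an endpoint returns plain-text metrics in the exposition format. The following sample output is only an illustration; the myapp_requests_total metric is a hypothetical name, not one that GDC defines:

```
# HELP myapp_requests_total Total number of requests handled by the workload.
# TYPE myapp_requests_total counter
myapp_requests_total 42
```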
Before you begin
To get the permissions you need to collect metrics or view collected metrics, ask your Project IAM Admin to grant you one of the following roles in your project namespace (a sample role-binding command follows this list):

- Monitoring Target Editor: edits MonitoringTarget custom resources. Request the Monitoring Target Editor (monitoringtarget-editor) role.
- Monitoring Target Viewer: views MonitoringTarget custom resources. Request the Monitoring Target Viewer (monitoringtarget-viewer) role.
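Role grants are typically expressed as Kubernetes role bindings in the project namespace. The following is only an illustrative sketch of how a Project IAM Admin might grant the editor role; the binding name and USER_EMAIL are hypothetical placeholders, and the kubeconfig must point at the appropriate cluster:

```shell
kubectl create rolebinding monitoringtarget-editor-binding \
    --role=monitoringtarget-editor \
    --user=USER_EMAIL \
    -n PROJECT_NAMESPACE
```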
Configure metric collection
Work through the following steps to configure the Observability platform for metric collection:
1. Identify the GDC project from which you want to collect metrics for system monitoring.

2. Optional: Include the following annotations in the pod template of the deployment file for the container from which you want to collect metrics:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: CONTAINER_NAME
  namespace: PROJECT_NAMESPACE
  labels:
    app: CONTAINER_NAME
spec:
  template:
    metadata:
      annotations:
        # These annotations are not required. They demonstrate selecting
        # pod metric endpoints through annotations. Set them in the pod
        # template so that they appear on the pods themselves.
        prometheus.io/path: /metrics
        prometheus.io/port: "2112"
        prometheus.io/scheme: http
```
Replace the following:
- PROJECT_NAMESPACE: the namespace of your project
- CONTAINER_NAME: the container name prefix from which you want to collect metrics
3. Configure the MonitoringTarget custom resource, specifying the selected pods for collecting your metrics, the metrics endpoint of those pods, the scraping frequency, and any additional settings. For information about how to configure this resource, see Configure the MonitoringTarget custom resource.

4. Deploy the MonitoringTarget custom resource into the org admin cluster in the same namespace as the pods you want to scrape, for example with the kubectl command after these steps. The Observability service starts collecting metrics from the specified pods.
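Deployment in step 4 is an ordinary kubectl apply of the manifest. The following sketch assumes you saved the resource to a hypothetical file named monitoring-target.yaml, whose metadata already carries the project namespace, and that your kubeconfig targets the org admin cluster:

```shell
kubectl apply -f monitoring-target.yaml
```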
Configure the MonitoringTarget custom resource
The MonitoringTarget custom resource configures the Observability service to scrape workloads and collect metrics. It contains the following information:
- The pods and endpoints from which you want to collect metrics in a GDC project.
- The frequency at which to scrape the selected pods for system monitoring.
- The modifications to apply to labels on the metric streams, if needed.
Choose between the statically defined and the pod annotations methods to determine the metrics endpoint for your selected pods, and configure the MonitoringTarget custom resource using one of those methods.
Statically defined
Work through the following steps to expose metrics from your selected pods on a static endpoint:
1. Declare to the system that the port you choose is open on the pod from which you want to scrape metrics. The following example shows how to declare port 2112 in the pod specification:

```yaml
...
spec:
  template:
    spec:
      containers:
      - name: CONTAINER_NAME
        ports:
        - containerPort: 2112
...
```
2. Configure the MonitoringTarget custom resource with the statically defined metrics endpoint. You must configure the MonitoringTarget resource to match the port you chose in the previous step. The following YAML file shows a MonitoringTarget configuration example where every selected pod has to expose metrics on the same endpoint, http://CONTAINER_NAME:2112/metrics:

```yaml
apiVersion: monitoring.gdc.goog/v1
kind: MonitoringTarget
metadata:
  # Choose the same namespace as the workload pods.
  namespace: PROJECT_NAMESPACE
  name: CONTAINER_NAME
spec:
  selector:
    # Choose the pod labels to consider for this job.
    # Optional: Map of key-value pairs.
    # Default: No filtering by label.
    # To consider every pod in the project namespace, remove the selector fields.
    # If you followed the optional step in the "Configure metric collection" section,
    # you can include app: CONTAINER_NAME in matchLabels.
    matchLabels:
      key: value
  podMetricsEndpoints:
    port:
      value: 2112
    path:
      # Choose any value for your endpoint.
      # The /metrics value is an example.
      value: /metrics
    scheme:
      value: http
```
Replace the following:
- PROJECT_NAMESPACE: the namespace of your project
- CONTAINER_NAME: the container name prefix from which you want to collect metrics
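Before the pipeline scrapes the pods, you can manually confirm that a selected pod serves metrics on the static endpoint. This sketch assumes a hypothetical pod name my-app-pod and the 2112/metrics endpoint from the example; run the curl command in a second terminal while the port-forward is active:

```shell
kubectl port-forward pod/my-app-pod -n PROJECT_NAMESPACE 2112:2112
curl http://localhost:2112/metrics
```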
Pod annotations
The following YAML file shows a MonitoringTarget configuration example using annotations. This specification tells the custom resource to gather the metrics endpoint information from annotations on the selected pods. With this method, each pod can have different metrics endpoints for system monitoring and data observability.
```yaml
apiVersion: monitoring.gdc.goog/v1
kind: MonitoringTarget
metadata:
  # Choose the same namespace as the workload pods.
  namespace: PROJECT_NAMESPACE
  name: CONTAINER_NAME
spec:
  selector:
    matchLabels:
      app: CONTAINER_NAME
  podMetricsEndpoints:
    port:
      annotation: prometheus.io/port
    path:
      annotation: prometheus.io/path
    scheme:
      annotation: prometheus.io/scheme
```
Replace the following:
- PROJECT_NAMESPACE: the namespace of your project
- CONTAINER_NAME: the container name prefix from which you want to collect metrics
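Because this method reads the endpoint information from the pods themselves, the annotations must be present on the running pods, not only on the owning Deployment object. You can verify this with a quick check such as the following, where my-app-pod is a hypothetical pod name:

```shell
kubectl get pod my-app-pod -n PROJECT_NAMESPACE \
    -o jsonpath='{.metadata.annotations}'
```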
The following YAML file shows an example specification of the MonitoringTarget custom resource:
```yaml
apiVersion: monitoring.gdc.goog/v1
kind: MonitoringTarget
metadata:
  # Choose the same namespace as the workload pods.
  namespace: PROJECT_NAMESPACE
  name: string
spec:
  # Choose the matching pattern that identifies the pods for this job.
  # Optional
  # Relationship between different selectors: AND
  selector:
    # Choose the clusters to consider for this job.
    # Optional: List
    # Default: All clusters applicable to this project.
    # Relationship between different list elements: OR
    matchClusters:
    - string
    # Choose the pod labels to consider for this job.
    # Optional: Map of key-value pairs.
    # Default: No filtering by label.
    # Relationship between different pairs: AND
    matchLabels:
      key1: value1
    # Choose the annotations to consider for this job.
    # Optional: Map of key-value pairs.
    # Default: No filtering by annotation.
    # Relationship between different pairs: AND
    matchAnnotations:
      key1: value1
  # Configure the endpoint exposed for this job.
  podMetricsEndpoints:
    # Choose the port either through a static value or an annotation.
    # Optional
    # The annotation takes priority.
    # Default: static port 80
    port:
      value: integer
      annotation: string
    # Choose the path either through a static value or an annotation.
    # Optional
    # The annotation takes priority.
    # Default: static path /metrics
    path:
      value: string
      annotation: string
    # Choose the scheme either through a static value (http or https) or an annotation.
    # Optional
    # The annotation takes priority.
    # Default: static scheme http
    scheme:
      value: string
      annotation: string
  # Choose the frequency to scrape the metrics endpoint defined in podMetricsEndpoints.
  # Optional
  # Default: 60s
  scrapeInterval: string
  # Dynamically rewrite the label set of a target before it gets scraped:
  # https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config
  # Optional
  # Default: No relabeling
  metricsRelabelings:
  - sourceLabels:
    - string
    separator: string
    regex: string
    action: string
    targetLabel: string
    replacement: string
```
Replace PROJECT_NAMESPACE with the namespace of your project.
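The metricsRelabelings field follows the Prometheus relabel_config semantics linked in the specification. As an illustrative sketch, the following fragment keeps only the metric streams whose names match a hypothetical myapp_ prefix and drops everything else:

```yaml
metricsRelabelings:
- sourceLabels:
  - __name__
  regex: myapp_.*
  action: keep
```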