Ingestion metrics field reference for dashboards

The Ingestion metrics Explore interface provides a variety of measure fields that you can use to create new dashboards. Dimensions and measures are the fundamental components of a dashboard. A dimension is a field that can be used to filter query results by grouping data. A measure is a field that calculates a value using a SQL aggregate function, such as COUNT, SUM, AVG, MIN, or MAX. Any field derived from other measure values is also considered a measure.
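In SQL terms, a dimension typically appears in the GROUP BY clause, while a measure is an aggregate computed over each group. The following is a minimal sketch using the sample ingestion_metrics table from the queries later in this document; the column names are illustrative:

-- log_type acts as a dimension; COUNT and SUM produce measures.
SELECT
  log_type,
  COUNT(*) AS log_count,
  SUM(log_volume) AS total_volume
FROM
  `chronicle-catfood.datalake.ingestion_metrics`
GROUP BY
  log_type;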

For information about the dimension fields and ingestion metrics schemas, see Ingestion metrics schema.

Ingestion metrics fields

The following table describes the additional fields that you can use as dimensions, filters, and measures:

Field Description
timestamp The Unix epoch time that represents the start time of the aggregated time interval associated with the metric.
total_entry_number The number of logs ingested through the Ingestion API component (that is, component = 'Ingestion API').
total_entry_number_in_million The number of logs ingested through the Ingestion API component, in millions.
total_entry_number_in_million_for_drill The number of logs ingested through the Ingestion API component, in millions rounded to 0 decimal places.
total_size_bytes The log volume ingested through the Ingestion API component, in bytes.
total_size_bytes_GB The log volume ingested through the Ingestion API component, in GB (gigabytes), rounded to 2 decimal places. A GB is 10^9 bytes.
total_size_bytes_GB_for_drill Same as total_size_bytes_GB.
total_size_bytes_GiB The log volume ingested through the Ingestion API component, in GiB (gibibytes), rounded to 2 decimal places. A GiB is 2^30 bytes.
total_events The count of validated events during normalization.
total_error_events The count of events that failed validation or failed parsing during normalization.
total_error_count_in_million The count of validation and parsing errors, in millions, rounded to 0 decimal places.
total_normalized_events The count of events that passed validation during normalization.
total_validation_error_events The count of events that failed validation during normalization.
total_parsing_error_events The count of events that failed to parse during normalization.
period The reporting period as selected by the Period Filter. Values include This Period and Previous Period.
period_filter The reporting period, either before or after the specified date.
log_type_for_drill Only populated for non-null log types.
valid_log_type Same as log_type_for_drill.
offered_gcp_log_type The count of Google Cloud log types offered by Google Security Operations (43).
gcp_log_types_used Percentage of the available Google Cloud log types that the customer ingests.
gcp_log_type Only populated for non-null Google Cloud log types.
total_log_volume_mb_per_hour The total volume of logs across all components, in MB per hour, rounded to 2 decimal places.
max_quota_limit_mb_per_second The maximum quota limit, in MB per second.
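As a rough illustration of how the byte-based and rounded measures relate to each other, the following sketch computes GB and GiB values from the raw byte volume. It assumes that the log_volume column of the sample ingestion_metrics table holds bytes; the exact SQL behind each Explore field is not shown here.

SELECT
  SUM(log_volume) AS total_size_bytes,
  -- GB = 10^9 bytes, rounded to 2 decimal places
  ROUND(SUM(log_volume) / POW(10, 9), 2) AS total_size_bytes_GB,
  -- GiB = 2^30 bytes, rounded to 2 decimal places
  ROUND(SUM(log_volume) / POW(2, 30), 2) AS total_size_bytes_GiB
FROM
  `chronicle-catfood.datalake.ingestion_metrics`
WHERE
  component = 'Ingestion API';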

Use case: Sample query

The following table lists the columns that are populated for each measure in a sample query:

Measure Columns populated
Ingested log count collector_id, log_type, log_count
Ingested volume collector_id, log_type, log_volume
Normalized events collector_id, log_type, event_count
Forwarder CPU usage collector_id, log_type, cpu_used
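For example, to inspect the columns behind the Normalized events measure, a sketch that follows the same pattern as the queries later in this section:

SELECT
  collector_id,
  log_type,
  event_count
FROM
  `chronicle-catfood.datalake.ingestion_metrics`
WHERE
  event_count IS NOT NULL
LIMIT
  2;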

The ingestion_metrics table includes data from four components:

  1. Forwarder
  2. Ingestion API
  3. Normalizer
  4. Out Of Band (OOB)

Logs can be ingested into Google SecOps through OOB, the Forwarder, direct customer calls to the Ingestion API, or internal service calls to the Ingestion API (for example, ETD, HTTPS push webhooks, or the Azure Event Hub integration).

All logs ingested into Google Security Operations flow through the Ingestion API. After that, the logs are normalized by the Normalizer component.
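To see how much each component contributes, you can group the sample table by its component column. This is a sketch only, counting rows per component:

SELECT
  component,
  COUNT(*) AS row_count
FROM
  `chronicle-catfood.datalake.ingestion_metrics`
GROUP BY
  component;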

Log count

  • Number of ingested logs:
SELECT
  *
FROM
  `chronicle-catfood.datalake.ingestion_metrics`
WHERE
  log_count IS NOT NULL
  AND component = 'Ingestion API'
LIMIT
  2;
  • Volume of ingested logs:
SELECT
  *
FROM
  `chronicle-catfood.datalake.ingestion_metrics`
WHERE
  log_volume IS NOT NULL
  AND component = 'Ingestion API'
LIMIT
  2;
  1. To filter by log type or collector ID, add log_type = <LOGTYPE> or collector_id = <COLLECTOR_ID> to the WHERE clause.
  2. To group the results, add a GROUP BY clause with the appropriate field to the query, as in the sketch after this list.
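A sketch that combines both steps, filtering by log type and grouping by collector; the log type value is a placeholder:

SELECT
  collector_id,
  SUM(log_count) AS total_log_count
FROM
  `chronicle-catfood.datalake.ingestion_metrics`
WHERE
  log_count IS NOT NULL
  AND component = 'Ingestion API'
  AND log_type = '<LOGTYPE>'
GROUP BY
  collector_id;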

Parsing errors occur in the Normalizer component when events are generated from ingested logs. These errors are recorded in the drop_reason_code and state columns.
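For example, a sketch that counts error rows by drop reason; the Normalizer component filter is an assumption based on the component list above:

SELECT
  drop_reason_code,
  COUNT(*) AS error_rows
FROM
  `chronicle-catfood.datalake.ingestion_metrics`
WHERE
  component = 'Normalizer'
  AND drop_reason_code IS NOT NULL
GROUP BY
  drop_reason_code;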