Generate traces and metrics with Node.js

This document describes how to modify a Node.js JavaScript app to collect trace and metric data using the open source OpenTelemetry framework, and how to write structured JSON logs to standard out. This document also provides information about a sample Node.js app that you can install and run. The app uses the Fastify web framework and is configured to generate metrics, traces, and logs.

To learn more about instrumentation, see the following documents:

About manual and automatic instrumentation

For this language, OpenTelemetry defines automatic instrumentation as the practice of collecting telemetry from libraries and frameworks without making code changes. However, you do have to install modules and set environment variables.

This document doesn't describe automatic instrumentation. For information about that topic, see Automatic Instrumentation for Node.

For general information, see OpenTelemetry Instrumentation for Node.

Before you begin

Enable the Cloud Logging, Cloud Monitoring, and Cloud Trace APIs.

Enable the APIs

Instrument your app to collect traces, metrics, and logs

To instrument your app to collect trace and metric data, and to write structured JSON to standard out, perform the following steps as described in subsequent sections of this document:

  1. Configure OpenTelemetry
  2. Configure your app to preload the OpenTelemetry configuration
  3. Configure structured logging
  4. Write structured logs

Configure OpenTelemetry

The default configuration for the OpenTelemetry Node.js SDK exports traces by using the OTLP protocol. It also configures OpenTelemetry to use the W3C Trace Context format for propagating trace context. This configuration ensures that spans have the correct parent-child relationship within a trace.
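The W3C Trace Context format carries trace identity between services in a single traceparent header with the shape version-traceid-spanid-flags. The following standalone sketch illustrates that header format; it is not part of the sample app, and in practice the SDK's propagator builds and parses this header for you:

```javascript
// Build a W3C traceparent header: version-traceid-spanid-flags.
function buildTraceparent(traceId, spanId, sampled) {
  return `00-${traceId}-${spanId}-${sampled ? '01' : '00'}`;
}

// Parse a traceparent header back into its parts.
function parseTraceparent(header) {
  const [version, traceId, spanId, flags] = header.split('-');
  return {version, traceId, spanId, sampled: flags === '01'};
}

const header = buildTraceparent(
  '0af7651916cd43dd8448eb211c80319c',
  'b7ad6b7169203331',
  true
);
console.log(header); // 00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01
```

Because a downstream service parses the same trace ID out of the header that the upstream service wrote into it, spans from both services join into one trace with correct parent-child relationships.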

The following code sample illustrates a JavaScript module that sets up OpenTelemetry.

To view the full sample, click More, and then select View on GitHub.

import {diag, DiagConsoleLogger} from '@opentelemetry/api';
import {getNodeAutoInstrumentations, getResourceDetectorsFromEnv} from '@opentelemetry/auto-instrumentations-node';
import * as opentelemetry from '@opentelemetry/sdk-node';

// Log OpenTelemetry diagnostic messages to the console
diag.setLogger(
  new DiagConsoleLogger(),
  opentelemetry.core.getEnv().OTEL_LOG_LEVEL
);

const sdk = new opentelemetry.NodeSDK({
  instrumentations: getNodeAutoInstrumentations({
    // Disable noisy instrumentations
    '@opentelemetry/instrumentation-fs': {enabled: false},
  }),
  resourceDetectors: getResourceDetectorsFromEnv(),
  metricReader: getMetricReader(),
});

try {
  sdk.start();
  diag.info('OpenTelemetry automatic instrumentation started successfully');
} catch (error) {
  diag.error(
    'Error initializing OpenTelemetry SDK. Your application is not instrumented and will not produce telemetry',
    error
  );
}

// Gracefully shut down the SDK to flush telemetry when the program exits
process.on('SIGTERM', () => {
  sdk
    .shutdown()
    .then(() => diag.debug('OpenTelemetry SDK terminated'))
    .catch(error => diag.error('Error terminating OpenTelemetry SDK', error));
});

The previous code sample configures OpenTelemetry to export metrics using the OTLP protocol, and it uses the @opentelemetry/auto-instrumentations-node package to configure all available Node.js instrumentations.

To ensure that all pending telemetry is flushed and that connections are closed gracefully before the application shuts down, the SIGTERM handler calls shutdown.

For more information and configuration options, see OpenTelemetry Node.js automatic instrumentation.

Configure your app to preload the OpenTelemetry configuration

To configure the app to write structured logs and to collect metrics and trace data by using OpenTelemetry, update the invocation of your app to preload the instrumentation module with the Node.js --require flag. Using the --require flag ensures that OpenTelemetry is initialized before your app starts. For more information, see OpenTelemetry Node.js Getting Started.

The following code sample illustrates a Dockerfile passing the --require flag:

CMD node --require ./build/src/instrumentation.js build/src/index.js 2>&1 | tee /var/log/app.log

Configure structured logging

To include the trace information as part of the JSON-formatted logs written to standard output, configure your app to output structured logs in JSON format. Fastify uses the Pino log framework and provides a logger in each request handler. The following code sample illustrates a Pino LoggerOptions object that configures the app to output JSON structured logs:

import type {LoggerOptions} from 'pino';

// Expected attributes that OpenTelemetry adds to correlate logs with spans
interface LogRecord {
  trace_id?: string;
  span_id?: string;
  trace_flags?: string;
  [key: string]: unknown;
}

const PinoLevelToSeverityLookup: Record<string, string | undefined> = {
  trace: 'DEBUG',
  debug: 'DEBUG',
  info: 'INFO',
  warn: 'WARNING',
  error: 'ERROR',
  fatal: 'CRITICAL',
};

export const loggerConfig = {
  messageKey: 'message',
  // Same as pino.stdTimeFunctions.isoTime but uses "timestamp" key instead of "time"
  timestamp(): string {
    return `,"timestamp":"${new Date().toISOString()}"`;
  },
  formatters: {
    log(object: LogRecord): Record<string, unknown> {
      // Add trace context attributes following the Cloud Logging structured
      // log format for special payload fields
      const {trace_id, span_id, trace_flags, ...rest} = object;

      return {
        'logging.googleapis.com/trace': trace_id,
        'logging.googleapis.com/spanId': span_id,
        'logging.googleapis.com/trace_sampled': trace_flags
          ? trace_flags === '01'
          : undefined,
        ...rest,
      };
    },
    // Map Pino log levels to Cloud Logging severity levels
    level(label: string) {
      return {
        severity:
          PinoLevelToSeverityLookup[label] ?? PinoLevelToSeverityLookup['info'],
      };
    },
  },
} satisfies LoggerOptions;

The previous configuration extracts information about the active span from the log message, and then adds that information as attributes to the JSON structured log. These attributes can then be used to correlate a log with a trace:

  • logging.googleapis.com/trace: Resource name of the trace associated with the log entry.
  • logging.googleapis.com/spanId: The span ID within the trace that is associated with the log entry.
  • logging.googleapis.com/trace_sampled: Whether the trace was sampled. The value of this field must be true or false.

For more information about these fields, see the LogEntry structure.
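To make the effect of the formatters concrete, the following standalone sketch applies the same severity lookup and field mapping to a hypothetical log record (the record values are made-up examples, not output from the sample app):

```javascript
// Mirrors the PinoLevelToSeverityLookup table in the logger configuration
const PinoLevelToSeverityLookup = {
  trace: 'DEBUG',
  debug: 'DEBUG',
  info: 'INFO',
  warn: 'WARNING',
  error: 'ERROR',
  fatal: 'CRITICAL',
};

// Combine the effect of the level and log formatters on one record
function formatRecord(label, object) {
  const {trace_id, span_id, trace_flags, ...rest} = object;
  return {
    severity: PinoLevelToSeverityLookup[label] ?? 'INFO',
    'logging.googleapis.com/trace': trace_id,
    'logging.googleapis.com/spanId': span_id,
    'logging.googleapis.com/trace_sampled': trace_flags
      ? trace_flags === '01'
      : undefined,
    ...rest,
  };
}

const entry = formatRecord('warn', {
  trace_id: '0af7651916cd43dd8448eb211c80319c',
  span_id: 'b7ad6b7169203331',
  trace_flags: '01',
  message: 'handle /multi request',
});
console.log(JSON.stringify(entry));
```

The resulting JSON object carries the severity and the three logging.googleapis.com/* fields alongside the original message, which is what lets Cloud Logging correlate the entry with its trace.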

To use the Pino configuration with Fastify, pass the logger config object when creating the Fastify app:

// Create the Fastify app providing the Pino logger config
const fastify = Fastify({
  logger: loggerConfig,
});

Write structured logs

To write structured logs that link to a trace, use the Fastify-provided Pino logger. For example, the following statement shows how to call the request.log.info method:

request.log.info({subRequests}, 'handle /multi request');

OpenTelemetry automatically populates the Pino log entries with the span context of the current active span in the OpenTelemetry Context. This span context is then included in the JSON logs as described in Configure structured logging.

Run a sample app configured to collect telemetry

The example app uses vendor-neutral formats, including JSON for logs and OTLP for metrics and traces, and the Fastify framework. To route the telemetry to Google Cloud, this sample uses the OpenTelemetry Collector configured with Google exporters. The app has two endpoints:

  • The /multi endpoint is handled by the handleMulti function. The load generator in the app issues requests to the /multi endpoint. When this endpoint receives a request, it sends between three and seven requests to the /single endpoint on the local server.

    /**
     * handleMulti handles an http request by making 3-7 http requests to the /single endpoint.
     *
     * OpenTelemetry instrumentation requires no changes here. It will automatically generate a
     * span for the handler body.
     */
    fastify.get('/multi', async request => {
      const subRequests = randInt(3, 8);
      request.log.info({subRequests}, 'handle /multi request');
      for (let i = 0; i < subRequests; i++) {
        await axios.get(`http://localhost:${port}/single`);
      }
      return 'ok';
    });
  • The /single endpoint is handled by the handleSingle function. When this endpoint receives a request, it sleeps for a short delay and then responds with a string.

    /**
     * handleSingle handles an http request by sleeping for 100-200 ms. It writes the number of
     * milliseconds slept as its response.
     */
    fastify.get('/single', async request => {
      // Sleep between 100-200 milliseconds
      const sleepMillis = randInt(100, 200);
      request.log.info({sleepMillis}, 'Going to sleep');
      await sleep(sleepMillis);
      return `slept ${sleepMillis}\n`;
    });
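The handlers above rely on randInt and sleep helpers. A minimal sketch of what they might look like follows; the sample repository's actual helpers may differ:

```javascript
// randInt(min, max) returns an integer in [min, max),
// so randInt(3, 8) yields 3 through 7.
function randInt(min, max) {
  return min + Math.floor(Math.random() * (max - min));
}

// sleep resolves after the given number of milliseconds.
function sleep(millis) {
  return new Promise(resolve => setTimeout(resolve, millis));
}
```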

Download and deploy the app

To run the sample, do the following:

  1. In the Google Cloud console, activate Cloud Shell.

    Activate Cloud Shell

    At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.

  2. Clone the repository:

    git clone
  3. Go to the sample directory:

    cd opentelemetry-operations-js/samples/instrumentation-quickstart
  4. Build and run the sample:

    docker compose up --abort-on-container-exit

    If you aren't running in Cloud Shell, then run the application with the GOOGLE_APPLICATION_CREDENTIALS environment variable pointing to a credentials file. Application Default Credentials provides a credentials file at $HOME/.config/gcloud/application_default_credentials.json.

    # Set environment variables
    export GOOGLE_APPLICATION_CREDENTIALS="$HOME/.config/gcloud/application_default_credentials.json"
    export USERID="$(id -u)"
    # Run
    docker compose -f docker-compose.yaml -f docker-compose.creds.yaml up --abort-on-container-exit

View your metrics

The OpenTelemetry instrumentation in the sample app generates Prometheus metrics that you can view by using the Metrics Explorer:

  • Prometheus/http_server_duration_milliseconds/histogram records the duration of server requests and stores the results in a histogram.

  • Prometheus/http_client_duration_milliseconds/histogram records the duration of client requests and stores the results in a histogram.

To view the metrics generated by the sample app, do the following:
  1. In the Google Cloud console, go to the  Metrics explorer page:

    Go to Metrics explorer

    If you use the search bar to find this page, then select the result whose subheading is Monitoring.

  2. In the Metric element, expand the Select a metric menu, enter http_server in the filter bar, and then use the submenus to select a specific resource type and metric:
    1. In the Active resources menu, select Prometheus Target.
    2. In the Active metric categories menu, select Http.
    3. In the Active metrics menu, select a metric.
    4. Click Apply.
  3. Configure how the data is viewed.

    When the measurements for a metric are cumulative, Metrics Explorer automatically normalizes the measured data by the alignment period, which results in the chart displaying a rate. For more information, see Kinds, types, and conversions.

    When integer or double values are measured, such as with the two counter metrics, Metrics Explorer automatically sums all time series. To view the data for the /multi and /single HTTP routes, set the first menu of the Aggregation entry to None.

    For more information about configuring a chart, see Select metrics when using Metrics Explorer.
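    As a sketch of that normalization, a cumulative counter sampled at the start and end of a 60-second alignment period yields a rate like this (illustrative arithmetic only, not the Monitoring API):

```javascript
// Normalize a cumulative counter over an alignment period into a rate.
function alignRate(prevValue, currValue, periodSeconds) {
  return (currValue - prevValue) / periodSeconds;
}

// A counter that grew from 1200 to 1500 over a 60 s period
console.log(alignRate(1200, 1500, 60)); // 5 (requests per second)
```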

View your traces

To view your trace data, do the following:

  1. In the Google Cloud console, go to the Trace explorer page:

    Go to Trace explorer

    You can also find this page by using the search bar.

  2. In the scatter plot, select a trace with the URI of /multi.
  3. In the Gantt chart on the Trace details panel, select the span labeled /multi.

    A panel opens that displays information about the HTTP request. These details include the method, status code, number of bytes, and the user agent of the caller.

  4. To view the logs associated with this trace, select the Logs & Events tab.

    The tab shows individual logs. To view the details of the log entry, expand the log entry. You can also click View Logs and view the log by using the Logs Explorer.

For more information about using the Cloud Trace explorer, see Find and explore traces.

View your logs

From the Logs Explorer, you can inspect your logs, and you can also view associated traces, when they exist.

  1. In the Google Cloud console, go to the Logs Explorer page:

    Go to Logs Explorer

    If you use the search bar to find this page, then select the result whose subheading is Logging.

  2. Locate a log with the description of handle /multi request.

    To view the details of the log, expand the log entry.

  3. Click Traces on a log entry with the "handle /multi request" message, and then select View trace details.

    A Trace details panel opens and displays the selected trace.

For more information about using the Logs Explorer, see View logs by using the Logs Explorer.

What's next