Collect Harness IO audit logs

Supported in: Google SecOps

This document explains how to ingest Harness IO audit logs to Google Security Operations using Amazon S3.

Before you begin

  • Google SecOps instance
  • Privileged access to Harness (API key and account ID)
  • Privileged access to AWS (S3, IAM, Lambda, EventBridge)

Get Harness API key and account ID for a Personal Account

  1. Sign in to the Harness web UI.
  2. Go to your User Profile > My API Keys.
  3. Select API Key.
  4. Enter a Name for the API key.
  5. Click Save.
  6. Select Token under your new API key.
  7. Enter a Name for the token.
  8. Click Generate Token.
  9. Copy and save the token in a secure location.
  10. Copy and save your Account ID (appears in the Harness URL and in Account Settings).
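
Optional: before wiring anything into AWS, you can sanity-check the token and account ID from your workstation. The following is a minimal sketch that calls the same audit endpoint the Lambda function below uses; the account ID and token values are placeholders:

    #!/usr/bin/env python3
    """Quick check that a Harness API key and account ID can read audit events."""
    import json, time, urllib.parse
    from urllib.request import Request, urlopen

    ACCOUNT_ID = "your-account-id"  # placeholder: your Harness account ID
    API_KEY = "your-api-key-token"  # placeholder: the token you just generated

    now_ms = int(time.time() * 1000)
    query = urllib.parse.urlencode({"accountIdentifier": ACCOUNT_ID, "pageIndex": 0, "pageSize": 1})
    body = json.dumps({"startTime": now_ms - 60 * 60 * 1000, "endTime": now_ms}).encode("utf-8")

    req = Request(f"https://app.harness.io/audit/api/audits/listV2?{query}", data=body, method="POST")
    req.add_header("x-api-key", API_KEY)
    req.add_header("Content-Type", "application/json")

    with urlopen(req, timeout=30) as r:
        print(r.status)                     # 200 means the key and account ID work
        print(r.read(500).decode("utf-8"))  # first part of the response body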

Optional: Get Harness API key and account ID for a service account

  1. Sign in to the Harness web UI.
  2. Create a service account.
  3. Go to Account Settings > Access Control.
  4. Select Service Accounts > select the service account for which you want to create an API key.
  5. Under API Keys, select API Key.
  6. Enter a Name for the API key.
  7. Click Save.
  8. Select Token under the new API key.
  9. Enter a Name for the token.
  10. Click Generate Token.
  11. Copy and save the token in a secure location.
  12. Copy and save your Account ID (appears in the Harness URL and in Account Settings).

Configure AWS S3 bucket and IAM for Google SecOps

  1. Create an Amazon S3 bucket following this user guide: Creating a bucket.
  2. Save the bucket Name and Region for future reference (for example, harness-io).
  3. Create a user following this user guide: Creating an IAM user.
  4. Select the created User.
  5. Select the Security credentials tab.
  6. Click Create Access Key in the Access Keys section.
  7. Select Third-party service as the Use case.
  8. Click Next.
  9. Optional: add a description tag.
  10. Click Create access key.
  11. Click Download CSV file to save the Access Key and Secret Access Key for later use.
  12. Click Done.
  13. Select the Permissions tab.
  14. Click Add permissions in the Permissions policies section.
  15. Select Attach policies directly.
  16. Search for and select the AmazonS3FullAccess policy.
  17. Click Next.
  18. Click Add permissions.
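
Before moving on, you can verify that the downloaded key pair can actually write to the bucket. A short boto3 sketch, assuming the bucket name harness-io and the new keys exported as AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY:

    import boto3

    # boto3 picks up AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the environment.
    s3 = boto3.client("s3")

    # Write and read back a small marker object to prove the key pair works.
    s3.put_object(Bucket="harness-io", Key="harness/audit/.write-test", Body=b"ok")
    obj = s3.get_object(Bucket="harness-io", Key="harness/audit/.write-test")
    print(obj["Body"].read())  # b'ok'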

Configure the IAM policy and role for S3 uploads

  1. In the AWS console, go to IAM > Policies > Create policy > JSON tab.
  2. Enter the following policy:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "AllowPutHarnessObjects",
          "Effect": "Allow",
          "Action": "s3:PutObject",
          "Resource": "arn:aws:s3:::harness-io/*"
        },
        {
          "Sid": "AllowGetStateObject",
          "Effect": "Allow",
          "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::harness-io/harness/audit/state.json"
        }
      ]
    }
    
    
    • Replace harness-io if you entered a different bucket name.
  3. Click Next > Create policy.

  4. Go to IAM > Roles > Create role > AWS service > Lambda.

  5. Attach the newly created policy.

  6. Name the role WriteHarnessToS3Role and click Create role.
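
If you prefer to script this step, the same policy and role can be created with boto3. A sketch assuming the bucket name harness-io; the policy name WriteHarnessToS3Policy is illustrative:

    import json
    import boto3

    iam = boto3.client("iam")

    policy_doc = {
        "Version": "2012-10-17",
        "Statement": [
            {"Sid": "AllowPutHarnessObjects", "Effect": "Allow",
             "Action": "s3:PutObject", "Resource": "arn:aws:s3:::harness-io/*"},
            {"Sid": "AllowGetStateObject", "Effect": "Allow",
             "Action": "s3:GetObject",
             "Resource": "arn:aws:s3:::harness-io/harness/audit/state.json"},
        ],
    }

    # Trust policy so the Lambda service can assume the role.
    trust = {
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow",
                       "Principal": {"Service": "lambda.amazonaws.com"},
                       "Action": "sts:AssumeRole"}],
    }

    policy = iam.create_policy(PolicyName="WriteHarnessToS3Policy",
                               PolicyDocument=json.dumps(policy_doc))
    iam.create_role(RoleName="WriteHarnessToS3Role",
                    AssumeRolePolicyDocument=json.dumps(trust))
    iam.attach_role_policy(RoleName="WriteHarnessToS3Role",
                           PolicyArn=policy["Policy"]["Arn"])
    # For CloudWatch logging, also attach the AWS managed
    # AWSLambdaBasicExecutionRole policy to this role.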

Create the Lambda function

  1. In the AWS Console, go to Lambda > Functions > Create function.
  2. Click Author from scratch.
  3. Provide the following configuration details:

    Setting          Value
    Name             harness_io_to_s3
    Runtime          Python 3.13
    Architecture     x86_64
    Execution role   WriteHarnessToS3Role
  4. After the function is created, open the Code tab, delete the stub, and enter the following code (harness_io_to_s3.py):

    #!/usr/bin/env python3
    
    import os, json, time, urllib.parse
    from urllib.request import Request, urlopen
    from urllib.error import HTTPError, URLError
    import boto3
    
    API_BASE = os.environ.get("HARNESS_API_BASE", "https://app.harness.io").rstrip("/")
    ACCOUNT_ID = os.environ["HARNESS_ACCOUNT_ID"]
    API_KEY = os.environ["HARNESS_API_KEY"]  # x-api-key token
    BUCKET = os.environ["S3_BUCKET"]
    PREFIX = os.environ.get("S3_PREFIX", "harness/audit/").strip("/")
    STATE_KEY = os.environ.get("STATE_KEY", "harness/audit/state.json")
    PAGE_SIZE = min(int(os.environ.get("PAGE_SIZE", "100")), 100)  # <=100
    START_MINUTES_BACK = int(os.environ.get("START_MINUTES_BACK", "60"))
    
    s3 = boto3.client("s3")
    HDRS = {"x-api-key": API_KEY, "Content-Type": "application/json", "Accept": "application/json"}
    
    def _read_state():
        """Return (since_ms, pageToken) from the state object, or (None, None) on first run."""
        try:
            obj = s3.get_object(Bucket=BUCKET, Key=STATE_KEY)
            j = json.loads(obj["Body"].read())
            return j.get("since"), j.get("pageToken")
        except Exception:
            return None, None
    
    def _write_state(since_ms: int, page_token: str | None):
        body = json.dumps({"since": since_ms, "pageToken": page_token}).encode("utf-8")
        s3.put_object(Bucket=BUCKET, Key=STATE_KEY, Body=body, ContentType="application/json")
    
    def _http_post(path: str, body: dict, query: dict, timeout: int = 60, max_retries: int = 5) -> dict:
        qs = urllib.parse.urlencode(query)
        url = f"{API_BASE}{path}?{qs}"
        data = json.dumps(body).encode("utf-8")
    
        attempt, backoff = 0, 1.0
        while True:
            req = Request(url, data=data, method="POST")
            for k, v in HDRS.items():
                req.add_header(k, v)
            try:
                with urlopen(req, timeout=timeout) as r:
                    return json.loads(r.read().decode("utf-8"))
            except HTTPError as e:
                if (e.code == 429 or 500 <= e.code <= 599) and attempt < max_retries:
                    time.sleep(backoff)
                    attempt += 1
                    backoff *= 2
                    continue
                raise
            except URLError:
                if attempt < max_retries:
                    time.sleep(backoff)
                    attempt += 1
                    backoff *= 2
                    continue
                raise
    
    def _write_page(obj: dict, now: float, page_index: int) -> str:
        ts = time.strftime("%Y/%m/%d/%H%M%S", time.gmtime(now))
        key = f"{PREFIX}/{ts}-page{page_index:05d}.json"
        s3.put_object(
            Bucket=BUCKET,
            Key=key,
            Body=json.dumps(obj, separators=(",", ":")).encode("utf-8"),
            ContentType="application/json",
        )
        return key
    
    def fetch_and_store():
        now_s = time.time()
        since_ms, page_token = _read_state()
        if since_ms is None:
            since_ms = int((now_s - START_MINUTES_BACK * 60) * 1000)
        until_ms = int(now_s * 1000)
    
        page_index = 0
        total = 0
    
        while True:
            body = {"startTime": since_ms, "endTime": until_ms}
            query = {"accountIdentifier": ACCOUNT_ID, "pageSize": PAGE_SIZE}
            if page_token:
                query["pageToken"] = page_token
            else:
                query["pageIndex"] = page_index
    
            data = _http_post("/audit/api/audits/listV2", body, query)
            _write_page(data, now_s, page_index)
    
            entries = []
            for key in ("data", "content", "response", "resource", "resources", "items"):
                if isinstance(data.get(key), list):
                    entries = data[key]
                    break
            total += len(entries) if isinstance(entries, list) else 0
    
            next_token = (
                data.get("pageToken")
                or (isinstance(data.get("meta"), dict) and data["meta"].get("pageToken"))
                or (isinstance(data.get("metadata"), dict) and data["metadata"].get("pageToken"))
            )

            if next_token:
                page_token = next_token
                page_index += 1
                continue

            # No continuation token: clear any stale token so we fall back to
            # index-based paging, and stop once a short page signals the end.
            page_token = None
            if len(entries) < PAGE_SIZE:
                break
            page_index += 1
    
        _write_state(until_ms, None)
        return {"pages": page_index + 1, "objects_estimate": total}
    
    def lambda_handler(event=None, context=None):
        return fetch_and_store()
    
    if __name__ == "__main__":
        print(lambda_handler())
    
    
  5. Go to Configuration > Environment variables > Edit > Add new environment variable.

  6. Enter the following environment variables, replacing the example values with your own:

    Key                  Example
    S3_BUCKET            harness-io
    S3_PREFIX            harness/audit/
    STATE_KEY            harness/audit/state.json
    HARNESS_ACCOUNT_ID   123456789
    HARNESS_API_KEY      harness_xxx_token
    HARNESS_API_BASE     https://app.harness.io
    PAGE_SIZE            100
    START_MINUTES_BACK   60
  7. After the function is created, stay on its page (or open Lambda > Functions > your function).

  8. Select the Configuration tab.

  9. In the General configuration panel click Edit.

  10. Change Timeout to 5 minutes (300 seconds) and click Save.
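
The environment variables, timeout, and a one-off test run can also be set programmatically. A sketch with boto3, assuming the function name harness_io_to_s3 and the example values above:

    import json
    import boto3

    lam = boto3.client("lambda")

    # Set the environment variables and the 300-second timeout in one call.
    lam.update_function_configuration(
        FunctionName="harness_io_to_s3",
        Timeout=300,
        Environment={"Variables": {
            "S3_BUCKET": "harness-io",
            "S3_PREFIX": "harness/audit/",
            "STATE_KEY": "harness/audit/state.json",
            "HARNESS_ACCOUNT_ID": "123456789",
            "HARNESS_API_KEY": "harness_xxx_token",
            "HARNESS_API_BASE": "https://app.harness.io",
            "PAGE_SIZE": "100",
            "START_MINUTES_BACK": "60",
        }},
    )

    # Invoke once manually and inspect the result before scheduling it.
    resp = lam.invoke(FunctionName="harness_io_to_s3", Payload=b"{}")
    print(json.loads(resp["Payload"].read()))  # e.g. {"pages": 1, "objects_estimate": 42}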

Create an EventBridge schedule

  1. Go to Amazon EventBridge > Scheduler > Create schedule.
  2. Provide the following configuration details:

    • Recurring schedule: Rate (1 hour).
    • Target: your Lambda function.
    • Name: harness-io-1h.
  3. Click Create schedule.
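
The same schedule can be created with boto3. Note that EventBridge Scheduler needs an execution role allowed to call lambda:InvokeFunction on the target (the console creates one for you); both ARNs below are placeholders:

    import boto3

    scheduler = boto3.client("scheduler")

    scheduler.create_schedule(
        Name="harness-io-1h",
        ScheduleExpression="rate(1 hour)",
        FlexibleTimeWindow={"Mode": "OFF"},
        Target={
            # Placeholders: your function ARN and a scheduler execution role
            # that may invoke it.
            "Arn": "arn:aws:lambda:us-east-1:123456789012:function:harness_io_to_s3",
            "RoleArn": "arn:aws:iam::123456789012:role/harness-io-scheduler-role",
        },
    )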

Create read-only IAM user & keys for Google SecOps

  1. In the AWS Console, go to IAM > Users, then click Add users.
  2. Provide the following configuration details:
    • User: Enter a unique name (for example, secops-reader).
    • Access type: Select Access key - Programmatic access.
    • Click Create user.
  3. Attach a minimal read policy (custom): go to Users > select secops-reader > Permissions > Add permissions > Attach policies directly > Create policy.
  4. In the JSON editor, enter the following policy:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": ["s3:GetObject"],
          "Resource": "arn:aws:s3:::<your-bucket>/*"
        },
        {
          "Effect": "Allow",
          "Action": ["s3:ListBucket"],
          "Resource": "arn:aws:s3:::<your-bucket>"
        }
      ]
    }
    
  5. Set the name to secops-reader-policy.

  6. Click Create policy, then return to the Attach policies directly screen, search for and select secops-reader-policy, and click Next > Add permissions.

  7. Go to Security credentials > Access keys > Create access key.

  8. Download the CSV (these values are entered into the feed).
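
To confirm the reader credentials work before configuring the feed, a quick boto3 sketch that lists a few objects under the prefix the feed will read (the key values are placeholders from the CSV):

    import boto3

    s3 = boto3.client(
        "s3",
        aws_access_key_id="AKIA...",   # placeholder: Access key ID from the CSV
        aws_secret_access_key="...",   # placeholder: Secret access key from the CSV
    )

    # List a few objects under the prefix the feed will read.
    resp = s3.list_objects_v2(Bucket="harness-io", Prefix="harness/audit/", MaxKeys=5)
    for item in resp.get("Contents", []):
        print(item["Key"])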

Configure a feed in Google SecOps to ingest Harness IO logs

  1. Go to SIEM Settings > Feeds.
  2. Click Add New Feed.
  3. In the Feed name field, enter a name for the feed (for example, Harness IO).
  4. Select Amazon S3 V2 as the Source type.
  5. Select Harness IO as the Log type.
  6. Click Next.
  7. Specify values for the following input parameters:
    • S3 URI: s3://harness-io/harness/audit/
    • Source deletion options: Select the deletion option according to your preference.
    • Maximum File Age: Default 180 Days.
    • Access Key ID: User access key with access to the S3 bucket.
    • Secret Access Key: User secret key with access to the S3 bucket.
    • Asset namespace: The asset namespace.
    • Ingestion labels: The label to be applied to the events from this feed.
  8. Click Next.
  9. Review your new feed configuration in the Finalize screen, and then click Submit.

Need more help? Get answers from Community members and Google SecOps professionals.