Collect Code42 Incydr core datasets
This document explains how to ingest Code42 Incydr core datasets (Users, Sessions, Audit, Cases, and optionally File Events) into Google Security Operations using an Amazon S3 bucket.
Before you begin
- Google SecOps instance
- Privileged access to Code42 Incydr
- Privileged access to AWS (S3, IAM, Lambda, EventBridge)
Collect source prerequisites (IDs, API keys, org IDs, tokens)
- Sign in to the Code42 Incydr web UI.
- Go to Administration > Integrations > API Clients.
- Create a new Client.
- Copy and save the following details in a secure location:
- Client ID.
- Client Secret.
- Base URL (for example, `https://api.us.code42.com`, `https://api.us2.code42.com`, `https://api.ie.code42.com`, or `https://api.gov.code42.com`).
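Before wiring these credentials into AWS, you can sanity-check them locally. The sketch below assumes the token endpoint is `POST {Base URL}/v1/oauth` with HTTP Basic authentication, which is how the Lambda code later in this guide authenticates; the function names here are illustrative, not part of any SDK.

```python
import base64
import json
from urllib.request import Request, urlopen


def basic_auth_header(client_id: str, client_secret: str) -> dict:
    """Build the HTTP Basic Authorization header from an API client pair."""
    token = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    return {"Authorization": f"Basic {token}", "Accept": "application/json"}


def fetch_bearer_token(base_url: str, client_id: str, client_secret: str) -> str:
    """Exchange the client credentials for a short-lived bearer token."""
    req = Request(f"{base_url.rstrip('/')}/v1/oauth", data=b"", method="POST")
    for k, v in basic_auth_header(client_id, client_secret).items():
        req.add_header(k, v)
    with urlopen(req, timeout=30) as r:
        return json.loads(r.read().decode())["access_token"]
```

If `fetch_bearer_token` returns a token for your tenant's Base URL, the Client ID and Client Secret are valid.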
Configure AWS S3 bucket and IAM for Google SecOps
- Create an Amazon S3 bucket following this user guide: Creating a bucket.
- Save the bucket Name and Region for later use.
- Create a user following this user guide: Creating an IAM user.
- Select the created User.
- Select the Security credentials tab.
- Click Create Access Key in the Access Keys section.
- Select Third-party service as the Use case.
- Click Next.
- Optional: add a description tag.
- Click Create access key.
- Click Download CSV file to save the Access Key and Secret Access Key for later use.
- Click Done.
- Select the Permissions tab.
- In the Permissions policies section, click Add permissions.
- Select Attach policies directly.
- Search for and select the AmazonS3FullAccess policy.
- Click Next.
- Click Add permissions.
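AmazonS3FullAccess is broader than the Lambda actually needs; it only writes objects to one bucket. If you prefer least privilege, a scoped policy like the one generated below (a sketch; substitute your bucket name) is sufficient for the upload path:

```python
import json


def lambda_s3_policy(bucket: str) -> str:
    """Return a least-privilege IAM policy JSON allowing object writes
    to a single bucket -- a narrower alternative to AmazonS3FullAccess."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:PutObject"],
                "Resource": f"arn:aws:s3:::{bucket}/*",
            }
        ],
    }
    return json.dumps(policy, indent=2)


print(lambda_s3_policy("code42-incydr"))
```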
Set up AWS Lambda for polling Code42 Incydr (no transform)
- In the AWS Console, go to Lambda > Functions > Create function.
- Click Author from scratch.
- Provide the following configuration details:
  - Name: Enter a unique and meaningful name (for example, `code42-incydr-pull`).
  - Runtime: Select Python 3.13.
  - Permissions: Select an execution role with `s3:PutObject` and CloudWatch Logs write permissions.
- Click Create function.
- Select Configuration > General configuration > Edit.
- Set Timeout to 5 minutes and Memory to 1024 MB.
- Click Save.
- Select Configuration > Environment variables > Edit > Add, and add the following keys:
  - `INCYDR_BASE_URL` = `https://api.us.code42.com`
  - `INCYDR_CLIENT_ID` = `<Client ID>`
  - `INCYDR_CLIENT_SECRET` = `<Client Secret>`
  - `S3_BUCKET` = `code42-incydr`
  - `S3_PREFIX` = `code42/`
  - `PAGE_SIZE` = `500`
  - `LOOKBACK_MINUTES` = `60`
  - `STREAMS` = `users,sessions,audit,cases`
  - Optional: `FE_ADV_QUERY_JSON` = `` (leave empty to skip File Events collection)
  - Optional: `FE_PAGE_SIZE` = `1000`
- Click Save.
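If you do want File Events, `FE_ADV_QUERY_JSON` must hold a complete Forensic Search advanced query as a single JSON string. The shape below is a hypothetical example only; verify the clause and field names (`groupClause`, `groups`, `filters`, `srtKey`, `srtDir`) against the Incydr File Events API documentation for your tenant before using it:

```python
import json

# Hypothetical advanced query body for POST /v2/file-events/search.
# Field names are illustrative; confirm them against Code42's documentation.
fe_adv_query = {
    "groupClause": "AND",
    "groups": [
        {
            "filterClause": "AND",
            "filters": [
                {
                    "term": "@timestamp",
                    "operator": "ON_OR_AFTER",
                    "value": "2024-01-01T00:00:00.000Z",
                }
            ],
        }
    ],
    "srtDir": "asc",
    "srtKey": "event.id",
}

# The environment variable must be this object serialized as one JSON string:
FE_ADV_QUERY_JSON = json.dumps(fe_adv_query)
```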
Select Code and enter the following Python code:
```python
import base64, json, os, time
from datetime import datetime, timedelta, timezone
from urllib.parse import urlencode
from urllib.request import Request, urlopen

import boto3

BASE = os.environ["INCYDR_BASE_URL"].rstrip("/")
CID = os.environ["INCYDR_CLIENT_ID"]
CSECRET = os.environ["INCYDR_CLIENT_SECRET"]
BUCKET = os.environ["S3_BUCKET"]
PREFIX_BASE = os.environ.get("S3_PREFIX", "code42/")
PAGE_SIZE = int(os.environ.get("PAGE_SIZE", "500"))
LOOKBACK_MINUTES = int(os.environ.get("LOOKBACK_MINUTES", "60"))
STREAMS = [s.strip() for s in os.environ.get("STREAMS", "users").split(",") if s.strip()]
FE_ADV_QUERY_JSON = os.environ.get("FE_ADV_QUERY_JSON", "").strip()
FE_PAGE_SIZE = int(os.environ.get("FE_PAGE_SIZE", "1000"))

s3 = boto3.client("s3")


def now_utc():
    return datetime.now(timezone.utc)


def iso_minus(minutes: int):
    return (now_utc() - timedelta(minutes=minutes)).strftime("%Y-%m-%dT%H:%M:%SZ")


def put_bytes(key: str, body: bytes):
    s3.put_object(Bucket=BUCKET, Key=key, Body=body)


def put_json(prefix: str, page_label: str, data):
    ts = now_utc().strftime("%Y/%m/%d/%H%M%S")
    key = f"{PREFIX_BASE}{prefix}{ts}-{page_label}.json"
    put_bytes(key, json.dumps(data).encode("utf-8"))
    return key


def auth_header():
    auth = base64.b64encode(f"{CID}:{CSECRET}".encode()).decode()
    req = Request(f"{BASE}/v1/oauth", data=b"", method="POST")
    req.add_header("Authorization", f"Basic {auth}")
    req.add_header("Accept", "application/json")
    with urlopen(req, timeout=30) as r:
        data = json.loads(r.read().decode())
    return {"Authorization": f"Bearer {data['access_token']}", "Accept": "application/json"}


def http_get(path: str, params: dict | None = None, headers: dict | None = None):
    url = f"{BASE}{path}"
    if params:
        url += "?" + urlencode(params)
    req = Request(url, method="GET")
    for k, v in (headers or {}).items():
        req.add_header(k, v)
    with urlopen(req, timeout=60) as r:
        return r.read()


def http_post_json(path: str, body: dict, headers: dict | None = None):
    url = f"{BASE}{path}"
    req = Request(url, data=json.dumps(body).encode("utf-8"), method="POST")
    req.add_header("Content-Type", "application/json")
    for k, v in (headers or {}).items():
        req.add_header(k, v)
    with urlopen(req, timeout=120) as r:
        return r.read()


# USERS (/v1/users)
def pull_users(hdrs):
    next_token = None
    pages = 0
    while True:
        params = {"active": "true", "blocked": "false", "pageSize": PAGE_SIZE}
        if next_token:
            params["pgToken"] = next_token
        raw = http_get("/v1/users", params, hdrs)
        data = json.loads(raw.decode())
        put_json("users/", f"users-page-{pages}", data)
        pages += 1
        next_token = data.get("nextPgToken") or data.get("next_pg_token")
        if not next_token:
            break
    return pages


# SESSIONS (/v1/sessions) — alerts live inside sessions
def pull_sessions(hdrs):
    start_iso = iso_minus(LOOKBACK_MINUTES)
    next_token = None
    pages = 0
    while True:
        params = {
            "hasAlerts": "true",
            "startTime": start_iso,
            "pgSize": PAGE_SIZE,
        }
        if next_token:
            params["pgToken"] = next_token
        raw = http_get("/v1/sessions", params, hdrs)
        data = json.loads(raw.decode())
        put_json("sessions/", f"sessions-page-{pages}", data)
        pages += 1
        next_token = data.get("nextPgToken") or data.get("next_page_token")
        if not next_token:
            break
    return pages


# AUDIT LOG (/v1/audit) — CSV export or paged JSON; write as received
def pull_audit(hdrs):
    start_iso = iso_minus(LOOKBACK_MINUTES)
    next_token = None
    pages = 0
    while True:
        params = {"startTime": start_iso, "pgSize": PAGE_SIZE}
        if next_token:
            params["pgToken"] = next_token
        raw = http_get("/v1/audit", params, hdrs)
        try:
            data = json.loads(raw.decode())
            put_json("audit/", f"audit-page-{pages}", data)
            next_token = data.get("nextPgToken") or data.get("next_page_token")
            pages += 1
            if not next_token:
                break
        except Exception:
            # Not JSON (for example, a CSV export): store the raw bytes as-is.
            ts = now_utc().strftime("%Y/%m/%d/%H%M%S")
            key = f"{PREFIX_BASE}audit/{ts}-audit-export.bin"
            put_bytes(key, raw)
            pages += 1
            break
    return pages


# CASES (/v1/cases)
def pull_cases(hdrs):
    next_token = None
    pages = 0
    while True:
        params = {"pgSize": PAGE_SIZE}
        if next_token:
            params["pgToken"] = next_token
        raw = http_get("/v1/cases", params, hdrs)
        data = json.loads(raw.decode())
        put_json("cases/", f"cases-page-{pages}", data)
        pages += 1
        next_token = data.get("nextPgToken") or data.get("next_page_token")
        if not next_token:
            break
    return pages


# FILE EVENTS (/v2/file-events/search) — enabled only if you provide FE_ADV_QUERY_JSON
def pull_file_events(hdrs):
    if not FE_ADV_QUERY_JSON:
        return 0
    try:
        base_query = json.loads(FE_ADV_QUERY_JSON)
    except Exception:
        raise RuntimeError("FE_ADV_QUERY_JSON is not valid JSON")
    pages = 0
    next_token = None
    while True:
        body = dict(base_query)
        body["pgSize"] = FE_PAGE_SIZE
        if next_token:
            body["pgToken"] = next_token
        raw = http_post_json("/v2/file-events/search", body, hdrs)
        data = json.loads(raw.decode())
        put_json("file_events/", f"fileevents-page-{pages}", data)
        pages += 1
        next_token = (
            data.get("nextPgToken")
            or data.get("next_page_token")
            or (data.get("file_events") or {}).get("nextPgToken")
        )
        if not next_token:
            break
    return pages


def handler(event, context):
    hdrs = auth_header()
    report = {}
    if "users" in STREAMS:
        report["users_pages"] = pull_users(hdrs)
    if "sessions" in STREAMS:
        report["sessions_pages"] = pull_sessions(hdrs)
    if "audit" in STREAMS:
        report["audit_pages"] = pull_audit(hdrs)
    if "cases" in STREAMS:
        report["cases_pages"] = pull_cases(hdrs)
    if "file_events" in STREAMS:
        report["file_events_pages"] = pull_file_events(hdrs)
    return report


def lambda_handler(event, context):
    return handler(event, context)
```
Click Deploy.
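The function writes each page as a separate object under a date-stamped key, so it helps to know the layout when you configure the feed's S3 URI later. The sketch below mirrors the key scheme of `put_json()` in the Lambda above (the `object_key` helper itself is illustrative):

```python
from datetime import datetime, timezone


def object_key(prefix_base: str, stream: str, page_label: str,
               when: datetime) -> str:
    """Mirror the S3 key scheme used by put_json() in the Lambda above."""
    ts = when.strftime("%Y/%m/%d/%H%M%S")
    return f"{prefix_base}{stream}{ts}-{page_label}.json"


# Example: first page of users pulled at 2025-01-15 09:00:00 UTC
key = object_key("code42/", "users/", "users-page-0",
                 datetime(2025, 1, 15, 9, 0, 0, tzinfo=timezone.utc))
# → code42/users/2025/01/15/090000-users-page-0.json
```

Because every stream lands under the same `S3_PREFIX`, pointing the feed at `s3://code42-incydr/code42/` picks up all of them.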
Create an EventBridge schedule
- In the AWS Console, go to Amazon EventBridge > Rules.
- Click Create rule.
- Provide the following configuration details:
  - Name: Enter a unique and meaningful name (for example, `code42-incydr-hourly`).
  - Schedule pattern: Select Fixed rate of 1 hour.
  - Target: Select Lambda function and choose `code42-incydr-pull`.
- Click Create rule.
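A fixed rate of 1 hour runs relative to the rule's creation time. If you would rather pin the run to a specific minute (for example, the top of every hour, in UTC), EventBridge also accepts a cron schedule expression; the two forms below are equivalent in frequency:

```
rate(1 hour)
cron(0 * * * ? *)
```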
Optional: Create read-only IAM user & keys for Google SecOps
- In the AWS Console, go to IAM > Users, then click Add users.
- Provide the following configuration details:
  - User name: Enter a unique name (for example, `secops-reader`).
  - Access type: Select Access key - Programmatic access.
- Click Create user.
- Attach a minimal read policy (custom): go to Users > `secops-reader` > Permissions > Add permissions > Attach policies directly > Create policy.
- In the JSON editor, enter the following policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::<your-bucket>/*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::<your-bucket>"
    }
  ]
}
```

- Set the name to `secops-reader-policy`, then click Create policy.
- Back in the Add permissions flow, search for and select `secops-reader-policy`, then click Next > Add permissions.
- Go to Security credentials > Access keys > Create access key.
- Download the CSV file (these values are entered into the feed).
Configure a feed in Google SecOps to ingest the Code42 Incydr log
- Go to SIEM Settings > Feeds.
- Click Add New Feed.
- In the Feed name field, enter a name for the feed (for example, `Code42 Incydr Datasets`).
- Select Amazon S3 V2 as the Source type.
- Select Code42 Incydr as the Log type.
- Click Next.
- Specify values for the following input parameters:
  - S3 URI: `s3://code42-incydr/code42/`
  - Source deletion options: Select the deletion option according to your preference.
  - Maximum File Age: Default 180 Days.
  - Access Key ID: User access key with access to the S3 bucket.
  - Secret Access Key: User secret key with access to the S3 bucket.
  - Asset namespace: The asset namespace.
  - Ingestion labels: The label to be applied to the events from this feed.
- Click Next.
- Review your new feed configuration in the Finalize screen, and then click Submit.
Need more help? Get answers from Community members and Google SecOps professionals.