Collect Jamf Pro context logs
This document explains how to ingest Jamf Pro context logs (device and user context) into Google Security Operations (Google SecOps) using an AWS S3 bucket, a Lambda function, and an EventBridge schedule.
Before you begin
- Google SecOps instance
- Privileged access to Jamf Pro tenant
- Privileged access to AWS (S3, IAM, Lambda, EventBridge)
Configure Jamf API Role
- Sign in to Jamf web UI.
- Go to Settings > System section > API Roles and Clients.
- Select the API Roles tab.
- Click New.
- Enter a display name for the API role (for example, context_role).
- In the Jamf Pro API role privileges field, type the name of a privilege, and then select it from the menu:
  - Computer Inventory
  - Mobile Device Inventory
- Click Save.
Configure Jamf API Client
- In Jamf Pro, go to Settings > System section > API Roles and Clients.
- Select the API Clients tab.
- Click New.
- Enter a display name for the API client (for example, context_client).
- In the API Roles field, add the context_role role you previously created.
- Under Access Token Lifetime, enter the time in seconds for which access tokens are valid.
- Click Save.
- Click Edit.
- Click Enable API Client.
- Click Save.
Configure Jamf Client Secret
- In Jamf Pro, go to the newly created API client.
- Click Generate Client Secret.
- On the confirmation screen, click Create Secret.
- Save the following parameters in a secure location:
  - Base URL: https://<your>.jamfcloud.com
  - Client ID: UUID.
  - Client Secret: The value is shown only once.
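Optionally, you can confirm the client ID and secret work before wiring up AWS. The following is a minimal sketch (not part of the official setup) that requests a token from the same OAuth endpoint the Lambda function below uses; the placeholder values are the ones you just saved:

```python
# Sketch: verify the Jamf Pro API client credentials by requesting a token.
# Replace the placeholders with the values saved above.
import requests

BASE_URL = "https://<your>.jamfcloud.com"  # your Jamf Pro base URL
CLIENT_ID = "..."       # Client ID (UUID) saved above
CLIENT_SECRET = "..."   # Client Secret saved above

resp = requests.post(
    f"{BASE_URL}/api/oauth/token",
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    },
    headers={"Content-Type": "application/x-www-form-urlencoded"},
    timeout=30,
)
resp.raise_for_status()
print("token expires in", resp.json().get("expires_in"), "seconds")
```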
Configure AWS S3 bucket and IAM for Google SecOps
- Create an Amazon S3 bucket following this user guide: Creating a bucket.
- Save the bucket Name and Region for future reference (for example, jamfpro).
- Create a user following this user guide: Creating an IAM user.
- Select the created User.
- Select Security credentials tab.
- In the Access keys section, click Create access key.
- Select Third-party service as Use case.
- Click Next.
- Optional: Add description tag.
- Click Create access key.
- Click Download CSV file to save the Access Key and Secret Access Key for future reference.
- Click Done.
- Select Permissions tab.
- In the Permissions policies section, click Add permissions.
- Select Add permissions.
- Select Attach policies directly.
- Search for and select the AmazonS3FullAccess policy.
- Click Next.
- Click Add permissions.
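Optionally, you can confirm the new access key can write to the bucket before configuring the feed. A minimal boto3 sketch, assuming the example bucket name jamfpro and hypothetical placeholder credentials:

```python
# Sketch: confirm the IAM user's access key can write to the S3 bucket.
# Assumes the example bucket name "jamfpro"; adjust to your values.
import boto3

s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIA...",   # Access Key ID from the downloaded CSV
    aws_secret_access_key="...",   # Secret Access Key from the downloaded CSV
    region_name="us-east-1",       # the bucket's Region
)

s3.put_object(Bucket="jamfpro", Key="connectivity-check.txt", Body=b"ok")
print("write succeeded")
```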
Configure the IAM policy and role for S3 uploads
Policy JSON (replace jamfpro if you entered a different bucket name):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPutJamfObjects",
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::jamfpro/*"
    }
  ]
}
```
Go to AWS console > IAM > Policies > Create policy > JSON tab.
Copy and paste the policy.
Click Next > Create policy.
Go to IAM > Roles > Create role > AWS service > Lambda.
Attach the newly created policy.
Name the role WriteJamfToS3Role and click Create role.
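If you prefer to script this step, the equivalent policy and role can be created with boto3. A sketch under the assumption that your local AWS credentials have IAM permissions; the policy name WriteJamfToS3Policy is hypothetical, and the trust policy lets the Lambda service assume the role:

```python
# Sketch: create the S3 write policy and Lambda execution role programmatically,
# mirroring the console steps above.
import json
import boto3

iam = boto3.client("iam")

policy_doc = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowPutJamfObjects",
        "Effect": "Allow",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::jamfpro/*",
    }],
}
policy = iam.create_policy(
    PolicyName="WriteJamfToS3Policy",  # hypothetical name
    PolicyDocument=json.dumps(policy_doc),
)

# Trust policy so the Lambda service can assume this role.
trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(RoleName="WriteJamfToS3Role", AssumeRolePolicyDocument=json.dumps(trust))
iam.attach_role_policy(
    RoleName="WriteJamfToS3Role",
    PolicyArn=policy["Policy"]["Arn"],
)
```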
Create the Lambda function
- In the AWS Console, go to Lambda > Functions > Create function.
- Click Author from scratch.
- Provide the following configuration details:
| Setting | Value |
|---|---|
| Name | jamf_pro_to_s3 |
| Runtime | Python 3.13 |
| Architecture | x86_64 |
| Permissions | WriteJamfToS3Role |
After the function is created, open the Code tab, delete the stub, and enter the following code (jamf_pro_to_s3.py):

```python
import os
import io
import json
import gzip
import time
import logging
from datetime import datetime, timezone

import boto3
import requests

log = logging.getLogger()
log.setLevel(logging.INFO)

BASE_URL = os.environ.get("JAMF_BASE_URL", "").rstrip("/")
CLIENT_ID = os.environ.get("JAMF_CLIENT_ID")
CLIENT_SECRET = os.environ.get("JAMF_CLIENT_SECRET")
S3_BUCKET = os.environ.get("S3_BUCKET")
S3_PREFIX = os.environ.get("S3_PREFIX", "jamf-pro/context/")
PAGE_SIZE = int(os.environ.get("PAGE_SIZE", "200"))

SECTIONS = [
    "GENERAL", "HARDWARE", "OPERATING_SYSTEM", "USER_AND_LOCATION",
    "DISK_ENCRYPTION", "SECURITY", "EXTENSION_ATTRIBUTES", "APPLICATIONS",
    "CONFIGURATION_PROFILES", "LOCAL_USER_ACCOUNTS", "CERTIFICATES",
    "SERVICES", "PRINTERS", "SOFTWARE_UPDATES", "GROUP_MEMBERSHIPS",
    "CONTENT_CACHING", "STORAGE", "FONTS", "PACKAGE_RECEIPTS", "PLUGINS",
    "ATTACHMENTS", "LICENSED_SOFTWARE", "IBEACONS", "PURCHASING",
]

s3 = boto3.client("s3")


def _now_iso():
    return datetime.now(timezone.utc).isoformat()


def get_token():
    """OAuth2 client credentials -> access_token."""
    url = f"{BASE_URL}/api/oauth/token"
    data = {
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    }
    headers = {"Content-Type": "application/x-www-form-urlencoded"}
    r = requests.post(url, data=data, headers=headers, timeout=30)
    r.raise_for_status()
    j = r.json()
    return j["access_token"], int(j.get("expires_in", 1200))


def fetch_page(token: str, page: int):
    """GET /api/v1/computers-inventory with sections and pagination."""
    url = f"{BASE_URL}/api/v1/computers-inventory"
    params = [("page", page), ("page-size", PAGE_SIZE)] + [("section", s) for s in SECTIONS]
    hdrs = {"Authorization": f"Bearer {token}", "Accept": "application/json"}
    r = requests.get(url, params=params, headers=hdrs, timeout=60)
    r.raise_for_status()
    return r.json()


def to_context_event(item: dict) -> dict:
    """Map one inventory record to a context event."""
    inv = item.get("inventory", {}) or {}
    general = inv.get("general", {}) or {}
    hardware = inv.get("hardware", {}) or {}
    osinfo = inv.get("operatingSystem", {}) or {}
    loc = inv.get("location", {}) or inv.get("userAndLocation", {}) or {}

    computer = {
        "udid": general.get("udid") or hardware.get("udid"),
        "deviceName": general.get("name") or general.get("deviceName"),
        "serialNumber": hardware.get("serialNumber") or general.get("serialNumber"),
        "model": hardware.get("model") or general.get("model"),
        "osVersion": osinfo.get("version") or general.get("osVersion"),
        "osBuild": osinfo.get("build") or general.get("osBuild"),
        "macAddress": hardware.get("macAddress"),
        "alternateMacAddress": hardware.get("wifiMacAddress"),
        "ipAddress": general.get("ipAddress"),
        "reportedIpV4Address": general.get("reportedIpV4Address"),
        "reportedIpV6Address": general.get("reportedIpV6Address"),
        "modelIdentifier": hardware.get("modelIdentifier"),
        "assetTag": general.get("assetTag"),
    }

    user_block = {
        "userDirectoryID": loc.get("username") or loc.get("userDirectoryId"),
        "emailAddress": loc.get("emailAddress"),
        "realName": loc.get("realName"),
        "phone": loc.get("phone") or loc.get("phoneNumber"),
        "position": loc.get("position"),
        "department": loc.get("department"),
        "building": loc.get("building"),
        "room": loc.get("room"),
    }

    return {
        "webhook": {"name": "api.inventory"},
        "event_type": "ComputerInventory",
        "event_action": "snapshot",
        "event_timestamp": _now_iso(),
        "event_data": {
            "computer": {k: v for k, v in computer.items() if v not in (None, "")},
            **{k: v for k, v in user_block.items() if v not in (None, "")},
        },
        "_jamf": {
            "id": item.get("id"),
            "inventory": inv,
        },
    }


def write_ndjson_gz(objs, when: datetime):
    """Write a batch of events to S3 as gzipped NDJSON."""
    buf = io.BytesIO()
    with gzip.GzipFile(filename="-", mode="wb", fileobj=buf, mtime=int(time.time())) as gz:
        for obj in objs:
            line = json.dumps(obj, separators=(",", ":")) + "\n"
            gz.write(line.encode("utf-8"))
    buf.seek(0)
    prefix = S3_PREFIX.strip("/") + "/" if S3_PREFIX else ""
    key = f"{prefix}{when:%Y/%m/%d}/jamf_pro_context_{int(when.timestamp())}.ndjson.gz"
    s3.put_object(Bucket=S3_BUCKET, Key=key, Body=buf.getvalue())
    return key


def lambda_handler(event, context):
    assert BASE_URL and CLIENT_ID and CLIENT_SECRET and S3_BUCKET, "Missing required env vars"
    token, _ttl = get_token()
    page = 0
    total = 0
    batch = []
    now = datetime.now(timezone.utc)

    while True:
        payload = fetch_page(token, page)
        results = payload.get("results") or payload.get("computerInventoryList") or []
        if not results:
            break
        for item in results:
            batch.append(to_context_event(item))
            total += 1
            if len(batch) >= 5000:
                key = write_ndjson_gz(batch, now)
                log.info("wrote %s records to s3://%s/%s", len(batch), S3_BUCKET, key)
                batch = []
        if len(results) < PAGE_SIZE:
            break
        page += 1

    if batch:
        key = write_ndjson_gz(batch, now)
        log.info("wrote %s records to s3://%s/%s", len(batch), S3_BUCKET, key)

    return {"ok": True, "count": total}
```
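Note that the requests library is not bundled with the Lambda Python runtime, so package it with the function (or attach a layer) when you deploy. Before deploying, you can also smoke-test the handler locally. A minimal sketch, assuming requests and boto3 are installed locally and your AWS credentials can write to the bucket; the module reads its environment variables at import time, so they must be set first:

```python
# Sketch: run the handler locally before deploying.
# Environment variables must be set before the module is imported.
import os

os.environ.update({
    "JAMF_BASE_URL": "https://<your>.jamfcloud.com",
    "JAMF_CLIENT_ID": "...",
    "JAMF_CLIENT_SECRET": "...",
    "S3_BUCKET": "jamfpro",
    "S3_PREFIX": "jamf-pro/context/",
})

import jamf_pro_to_s3  # the file shown above

# Expect something like {"ok": True, "count": <number of devices>}.
print(jamf_pro_to_s3.lambda_handler({}, None))
```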
Go to Configuration > Environment variables > Edit > Add new environment variable.

Enter the following environment variables, replacing the example values with your own:

| Key | Example |
|---|---|
| S3_BUCKET | jamfpro |
| S3_PREFIX | jamf-pro/context/ |
| AWS_REGION | Select your Region |
| JAMF_CLIENT_ID | Enter Jamf Client ID |
| JAMF_CLIENT_SECRET | Enter Jamf Client Secret |
| JAMF_BASE_URL | Enter Jamf URL, replace <your> in https://<your>.jamfcloud.com |
| PAGE_SIZE | 200 |
After the function is created, stay on its page (or open Lambda > Functions > your-function).
Select the Configuration tab.
In the General configuration panel, click Edit.
Change Timeout to 5 minutes (300 seconds) and click Save.
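The environment variables and timeout can also be set programmatically. A sketch using boto3 with the function name from the table above; note that AWS_REGION is set automatically by Lambda and is reserved, so it is omitted here:

```python
# Sketch: apply the environment variables and 300-second timeout via boto3,
# equivalent to the console edits above.
import boto3

lam = boto3.client("lambda")
lam.update_function_configuration(
    FunctionName="jamf_pro_to_s3",
    Timeout=300,  # 5 minutes
    Environment={"Variables": {
        "JAMF_BASE_URL": "https://<your>.jamfcloud.com",
        "JAMF_CLIENT_ID": "...",
        "JAMF_CLIENT_SECRET": "...",
        "S3_BUCKET": "jamfpro",
        "S3_PREFIX": "jamf-pro/context/",
        "PAGE_SIZE": "200",
    }},
)
```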
Create an EventBridge schedule
- Go to Amazon EventBridge > Scheduler > Create schedule.
- Provide the following configuration details:
  - Recurring schedule: Rate (1 hour).
  - Target: your Lambda function.
  - Name: jamfpro-context-schedule-1h.
- Click Create schedule.
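The same schedule can be created with the EventBridge Scheduler API. A hedged sketch with placeholder ARNs; the RoleArn must reference a role that scheduler.amazonaws.com can assume and that has lambda:InvokeFunction on the target:

```python
# Sketch: create the hourly schedule programmatically.
# Both ARNs below are placeholders for your account's values.
import boto3

scheduler = boto3.client("scheduler")
scheduler.create_schedule(
    Name="jamfpro-context-schedule-1h",
    ScheduleExpression="rate(1 hour)",
    FlexibleTimeWindow={"Mode": "OFF"},
    Target={
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:jamf_pro_to_s3",
        "RoleArn": "arn:aws:iam::123456789012:role/scheduler-invoke-role",  # placeholder
    },
)
```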
Configure a feed in Google SecOps to ingest Jamf Pro context logs
- Go to SIEM Settings > Feeds.
- Click + Add New Feed.
- In the Feed name field, enter a name for the feed (for example, Jamf Pro Context logs).
- Select Amazon S3 V2 as the Source type.
- Select Jamf Pro context as the Log type.
- Click Next.
- Specify values for the following input parameters:
  - S3 URI: The bucket URI (for example, s3://jamfpro/jamf-pro/context/). Replace jamfpro with the actual name of the bucket.
  - Source deletion options: Select the deletion option according to your preference.
  - Maximum File Age: Include files modified in the last number of days. Default is 180 days.
  - Access Key ID: The user access key with access to the S3 bucket.
  - Secret Access Key: The user secret key with access to the S3 bucket.
  - Asset namespace: The asset namespace.
  - Ingestion labels: The label to be applied to the events from this feed.
- Click Next.
- Review your new feed configuration in the Finalize screen, and click Submit.
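To confirm data is landing in S3 before the feed's first pull, you can list recent objects under the prefix. A minimal boto3 sketch using the example bucket and prefix:

```python
# Sketch: confirm the Lambda wrote gzipped NDJSON files under the feed prefix.
import boto3

s3 = boto3.client("s3")
resp = s3.list_objects_v2(Bucket="jamfpro", Prefix="jamf-pro/context/")
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```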