Collect Duo entity context logs
This document explains how to ingest Duo entity context data to Google Security Operations using Amazon S3. The parser transforms the JSON logs into a unified data model (UDM) by first extracting fields from the raw JSON, then mapping those fields to UDM attributes. It handles various data scenarios, including user and asset information, software details, and security labels, ensuring comprehensive representation within the UDM schema.
Before you begin
- Google SecOps instance
- Privileged access to Duo tenant (Admin API application)
- Privileged access to AWS (S3, IAM, Lambda, EventBridge)
Configure Duo Admin API application
- Sign in to Duo Admin Panel.
- Go to Applications > Application Catalog.
- Add Admin API application.
- Record the following values:
- Integration key (ikey)
- Secret key (skey)
- API hostname (for example, api-XXXXXXXX.duosecurity.com)
- In Permissions, enable Grant resource – Read (to read users, groups, devices/endpoints).
- Save the application.
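Every call to the Duo Admin API must carry a Date header and an HMAC-SHA1 signature built from the integration key and secret key recorded above. As a minimal sketch of that signing scheme (the credentials and hostname below are placeholders, not real values), assuming the canonicalization described in Duo's Admin API documentation:

```python
import base64, email.utils, hashlib, hmac, urllib.parse

def duo_sign(ikey: str, skey: str, method: str, host: str, path: str, params: dict) -> dict:
    """Build the Date and Authorization headers for a Duo Admin API call.

    Duo signs the RFC 2822 date, uppercased method, lowercased host, path,
    and the RFC 3986-encoded, lexicographically sorted query string with
    HMAC-SHA1, then Basic-encodes "ikey:signature".
    """
    date = email.utils.formatdate()
    canon_params = "&".join(
        f"{urllib.parse.quote(str(k), safe='~')}={urllib.parse.quote(str(v), safe='~')}"
        for k, v in sorted(params.items())
    )
    canon = "\n".join([date, method.upper(), host.lower(), path, canon_params])
    sig = hmac.new(skey.encode(), canon.encode(), hashlib.sha1).hexdigest()
    auth = base64.b64encode(f"{ikey}:{sig}".encode()).decode()
    return {"Date": date, "Authorization": f"Basic {auth}"}

# Placeholder credentials, for illustration only
headers = duo_sign("DIXXXXXXXXXXXXXXXXXX", "hypothetical-skey", "GET",
                   "api-XXXXXXXX.duosecurity.com", "/admin/v1/users",
                   {"limit": 100, "offset": 0})
```

The Lambda function later in this guide uses the same canonicalization, so this sketch can be run locally to sanity-check credentials formatting before deploying.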
Configure AWS S3 bucket and IAM for Google SecOps
- Create an Amazon S3 bucket following this user guide: Creating a bucket
- Save the bucket Name and Region for future reference (for example, duo-context).
- Create a user following this user guide: Creating an IAM user.
- Select the created User.
- Select the Security credentials tab.
- Click Create Access Key in the Access Keys section.
- Select Third-party service as the Use case.
- Click Next.
- Optional: add a description tag.
- Click Create access key.
- Click Download CSV file to save the Access Key and Secret Access Key for later use.
- Click Done.
- Select the Permissions tab.
- Click Add permissions in the Permissions policies section.
- Select Add permissions.
- Select Attach policies directly.
- Search for and select the AmazonS3FullAccess policy.
- Click Next.
- Click Add permissions.
Configure the IAM policy and role for S3 uploads
- Go to AWS console > IAM > Policies > Create policy > JSON tab.
Enter the following policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPutDuoObjects",
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::duo-context/*"
    }
  ]
}
```
- Replace duo-context if you entered a different bucket name.
- Click Next > Create policy.
- Go to IAM > Roles > Create role > AWS service > Lambda.
- Attach the newly created policy.
- Name the role WriteDuoToS3Role and click Create role.
Create the Lambda function
- In the AWS Console, go to Lambda > Functions > Create function.
- Click Author from scratch.
Provide the following configuration details:

| Setting | Value |
|---|---|
| Name | duo_entity_context_to_s3 |
| Runtime | Python 3.13 |
| Architecture | x86_64 |
| Execution role | WriteDuoToS3Role |
After the function is created, open the Code tab, delete the stub, and enter the following code (duo_entity_context_to_s3.py):

```python
#!/usr/bin/env python3
import os, json, time, hmac, hashlib, base64, email.utils, urllib.parse
from urllib.request import Request, urlopen

import boto3

# Env
DUO_IKEY = os.environ["DUO_IKEY"]
DUO_SKEY = os.environ["DUO_SKEY"]
DUO_API_HOSTNAME = os.environ["DUO_API_HOSTNAME"].strip()
S3_BUCKET = os.environ["S3_BUCKET"]
S3_PREFIX = os.environ.get("S3_PREFIX", "duo/context/")

# Default set can be adjusted via ENV
RESOURCES = [r.strip() for r in os.environ.get(
    "RESOURCES",
    "users,groups,phones,endpoints,tokens,webauthncredentials,desktop_authenticators"
).split(",") if r.strip()]

# Duo paging: default 100; max 500 for these endpoints
LIMIT = int(os.environ.get("LIMIT", "500"))

s3 = boto3.client("s3")


def _canon_params(params: dict) -> str:
    """RFC3986 encoding with '~' unescaped, keys sorted lexicographically."""
    if not params:
        return ""
    parts = []
    for k in sorted(params.keys()):
        v = params[k]
        if v is None:
            continue
        ks = urllib.parse.quote(str(k), safe="~")
        vs = urllib.parse.quote(str(v), safe="~")
        parts.append(f"{ks}={vs}")
    return "&".join(parts)


def _sign(method: str, host: str, path: str, params: dict) -> dict:
    """Construct Duo Admin API Authorization + Date headers (HMAC-SHA1)."""
    now = email.utils.formatdate()
    canon = "\n".join([now, method.upper(), host.lower(), path, _canon_params(params)])
    sig = hmac.new(DUO_SKEY.encode("utf-8"), canon.encode("utf-8"), hashlib.sha1).hexdigest()
    auth = base64.b64encode(f"{DUO_IKEY}:{sig}".encode("utf-8")).decode("utf-8")
    return {"Date": now, "Authorization": f"Basic {auth}"}


def _call(method: str, path: str, params: dict) -> dict:
    host = DUO_API_HOSTNAME
    assert host.startswith("api-") and host.endswith(".duosecurity.com"), \
        "DUO_API_HOSTNAME must be e.g. api-XXXXXXXX.duosecurity.com"
    qs = _canon_params(params)
    url = f"https://{host}{path}" + (f"?{qs}" if method.upper() == "GET" and qs else "")
    req = Request(url, method=method.upper())
    for k, v in _sign(method, host, path, params).items():
        req.add_header(k, v)
    with urlopen(req, timeout=60) as r:
        return json.loads(r.read().decode("utf-8"))


def _write_json(obj: dict, when: float, resource: str, page: int) -> str:
    prefix = S3_PREFIX.strip("/") + "/" if S3_PREFIX else ""
    key = f"{prefix}{time.strftime('%Y/%m/%d', time.gmtime(when))}/duo-{resource}-{page:05d}.json"
    s3.put_object(Bucket=S3_BUCKET, Key=key,
                  Body=json.dumps(obj, separators=(",", ":")).encode("utf-8"))
    return key


def _fetch_resource(resource: str) -> dict:
    """Fetch all pages for a list endpoint using limit/offset + metadata.next_offset."""
    path = f"/admin/v1/{resource}"
    offset = 0
    page = 0
    now = time.time()
    total_items = 0
    while True:
        params = {"limit": LIMIT, "offset": offset}
        data = _call("GET", path, params)
        _write_json(data, now, resource, page)
        page += 1
        resp = data.get("response")
        # most endpoints return a list; if not a list, count as 1 object page
        if isinstance(resp, list):
            total_items += len(resp)
        elif resp is not None:
            total_items += 1
        meta = data.get("metadata") or {}
        next_offset = meta.get("next_offset")
        if next_offset is None:
            break
        # Duo returns next_offset as int
        try:
            offset = int(next_offset)
        except Exception:
            break
    return {"resource": resource, "pages": page, "objects": total_items}


def lambda_handler(event=None, context=None):
    results = []
    for res in RESOURCES:
        results.append(_fetch_resource(res))
    return {"ok": True, "results": results}


if __name__ == "__main__":
    print(lambda_handler())
```
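The function writes one JSON object per API page under a date-partitioned key. A minimal sketch of the key layout produced by _write_json, using the example bucket prefix from this guide (the object_key helper is illustrative, not part of the Lambda):

```python
import time

def object_key(prefix: str, when: float, resource: str, page: int) -> str:
    # Mirrors _write_json: <prefix>/YYYY/MM/DD/duo-<resource>-<page>.json (UTC date)
    prefix = prefix.strip("/") + "/" if prefix else ""
    return f"{prefix}{time.strftime('%Y/%m/%d', time.gmtime(when))}/duo-{resource}-{page:05d}.json"

# 2025-01-15 00:00:00 UTC
print(object_key("duo/context/", 1736899200, "users", 0))
# → duo/context/2025/01/15/duo-users-00000.json
```

Because the date component is derived once per resource fetch, all pages of a given run for a resource land under the same day's prefix, which keeps the S3 URI configured in the feed stable.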
Go to Configuration > Environment variables > Edit > Add new environment variable.
Enter the following environment variables, replacing the example values with your own.

| Key | Example |
|---|---|
| S3_BUCKET | duo-context |
| S3_PREFIX | duo/context/ |
| DUO_IKEY | DIXYZ... |
| DUO_SKEY | **************** |
| DUO_API_HOSTNAME | api-XXXXXXXX.duosecurity.com |
| LIMIT | 200 |
| RESOURCES | users,groups,phones,endpoints,tokens,webauthncredentials |
After the function is created, stay on its page (or open Lambda > Functions > your-function).
Select the Configuration tab.
In the General configuration panel, click Edit.
Change Timeout to 5 minutes (300 seconds) and click Save.
Create an EventBridge schedule
- Go to Amazon EventBridge > Scheduler > Create schedule.
- Provide the following configuration details:
  - Recurring schedule: Rate (1 hour).
  - Target: your Lambda function.
  - Name: duo-entity-context-1h.
- Click Create schedule.
Optional: Create read-only IAM user & keys for Google SecOps
- In the AWS Console, go to IAM > Users, then click Add users.
- Provide the following configuration details:
  - User: Enter a unique name (for example, secops-reader).
  - Access type: Select Access key - Programmatic access.
- Click Create user.
- Attach a minimal read policy (custom): go to Users > select secops-reader > Permissions > Add permissions > Attach policies directly > Create policy. In the JSON editor, enter the following policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::<your-bucket>/*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::<your-bucket>"
    }
  ]
}
```
- Set the name to secops-reader-policy and click Create policy.
- Back on the Add permissions screen, search for and select secops-reader-policy, then click Next > Add permissions.
- Go to Security credentials > Access keys > Create access key.
- Download the CSV file (these values are entered into the feed).
Configure a feed in Google SecOps to ingest Duo Entity Context data
- Go to SIEM Settings > Feeds.
- Click + Add New Feed.
- In the Feed name field, enter a name for the feed (for example, Duo Entity Context).
- Select Amazon S3 V2 as the Source type.
- Select Duo Entity context data as the Log type.
- Click Next.
- Specify values for the following input parameters:
  - S3 URI: s3://duo-context/duo/context/
  - Source deletion options: Select the deletion option according to your preference.
  - Maximum File Age: Default 180 Days.
  - Access Key ID: User access key with access to the S3 bucket.
  - Secret Access Key: User secret key with access to the S3 bucket.
  - Asset namespace: The asset namespace.
  - Ingestion labels: The label applied to the events from this feed.
- Click Next.
- Review your new feed configuration in the Finalize screen, and then click Submit.
UDM Mapping Table
| Log Field | UDM Mapping | Logic |
|---|---|---|
| activated | entity.asset.deployment_status | If 'activated' is false, set to "DECOMMISSIONED"; otherwise "ACTIVE". |
| browsers.browser_family | entity.asset.software.name | Extracted from the 'browsers' array in the raw log. |
| browsers.browser_version | entity.asset.software.version | Extracted from the 'browsers' array in the raw log. |
| device_name | entity.asset.hostname | Directly mapped from the raw log. |
| disk_encryption_status | entity.asset.attribute.labels (key: "disk_encryption_status") | Directly mapped from the raw log, converted to lowercase. |
| email | entity.user.email_addresses | Directly mapped from the raw log if it contains "@"; otherwise uses 'username' or 'username1' if they contain "@". |
| encrypted | entity.asset.attribute.labels (key: "Encrypted") | Directly mapped from the raw log, converted to lowercase. |
| epkey | entity.asset.product_object_id | Used as 'product_object_id' if present; otherwise uses 'phone_id' or 'token_id'. |
| fingerprint | entity.asset.attribute.labels (key: "Finger Print") | Directly mapped from the raw log, converted to lowercase. |
| firewall_status | entity.asset.attribute.labels (key: "firewall_status") | Directly mapped from the raw log, converted to lowercase. |
| hardware_uuid | entity.asset.asset_id | Used as 'asset_id' if present; otherwise uses 'user_id'. |
| last_seen | entity.asset.last_discover_time | Parsed as an ISO 8601 timestamp and mapped. |
| model | entity.asset.hardware.model | Directly mapped from the raw log. |
| number | entity.user.phone_numbers | Directly mapped from the raw log. |
| os_family | entity.asset.platform_software.platform | Mapped to "WINDOWS", "LINUX", or "MAC" based on the value, case-insensitive. |
| os_version | entity.asset.platform_software.platform_version | Directly mapped from the raw log. |
| password_status | entity.asset.attribute.labels (key: "password_status") | Directly mapped from the raw log, converted to lowercase. |
| phone_id | entity.asset.product_object_id | Used as 'product_object_id' if 'epkey' is not present; otherwise uses 'token_id'. |
| security_agents.security_agent | entity.asset.software.name | Extracted from the 'security_agents' array in the raw log. |
| security_agents.version | entity.asset.software.version | Extracted from the 'security_agents' array in the raw log. |
| timestamp | entity.metadata.collected_timestamp | Populates the 'collected_timestamp' field within the 'metadata' object. |
| token_id | entity.asset.product_object_id | Used as 'product_object_id' if 'epkey' and 'phone_id' are not present. |
| trusted_endpoint | entity.asset.attribute.labels (key: "trusted_endpoint") | Directly mapped from the raw log, converted to lowercase. |
| type | entity.asset.type | If the raw log 'type' contains "mobile" (case-insensitive), set to "MOBILE"; otherwise "LAPTOP". |
| user_id | entity.asset.asset_id | Used as 'asset_id' if 'hardware_uuid' is not present. |
| users.email | entity.user.email_addresses | Used as 'email_addresses' if it is the first user in the 'users' array and contains "@". |
| users.username | entity.user.userid | Username extracted before "@" and used as 'userid' if it is the first user in the 'users' array. |
| | entity.metadata.vendor_name | Set to "Duo". |
| | entity.metadata.product_name | Set to "Duo Entity Context Data". |
| | entity.metadata.entity_type | Set to ASSET. |
| | entity.relations.entity_type | Set to USER. |
| | entity.relations.relationship | Set to OWNS. |
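The conditional mappings in the table can be illustrated with a short sketch. This is not the parser itself, only a plain-Python restatement of three of the rules above (function names are illustrative):

```python
def deployment_status(activated: bool) -> str:
    # Table rule: 'activated' false -> DECOMMISSIONED, otherwise ACTIVE
    return "ACTIVE" if activated else "DECOMMISSIONED"

def asset_type(raw_type: str) -> str:
    # Table rule: 'type' containing "mobile" (case-insensitive) -> MOBILE, else LAPTOP
    return "MOBILE" if "mobile" in raw_type.lower() else "LAPTOP"

def platform(os_family: str):
    # Table rule: os_family mapped case-insensitively to the UDM platform enum
    return {"windows": "WINDOWS", "linux": "LINUX", "mac": "MAC"}.get(os_family.lower())

print(deployment_status(False))   # DECOMMISSIONED
print(asset_type("Mobile Phone")) # MOBILE
print(platform("Windows"))        # WINDOWS
```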
Need more help? Get answers from Community members and Google SecOps professionals.