Collect Duo Telephony logs
This document explains how to ingest Duo Telephony logs to Google Security Operations (Google SecOps) using Amazon S3. The parser extracts fields from the logs and maps them to the Unified Data Model (UDM). It handles various Duo log formats, converts timestamps, extracts user information, network details, and security results, and structures the output into the standardized UDM format.
Before you begin
Make sure you have the following prerequisites:
- A Google SecOps instance.
- Privileged access to the Duo Admin Panel with the Owner role.
- Privileged access to AWS (S3, Identity and Access Management (IAM), Lambda, EventBridge).
Collect Duo prerequisites (API credentials)
- Sign in to the Duo Admin Panel as an administrator with the Owner role.
- Go to Applications > Application Catalog.
- Locate the entry for Admin API in the catalog.
- Click + Add to create the application.
- Copy and save the following details in a secure location:
  - Integration Key
  - Secret Key
  - API Hostname (for example, api-yyyyyyyy.duosecurity.com)
- In the Permissions section, deselect all the permission options except Grant read log.
- Click Save Changes.
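To confirm the credentials work before you build the rest of the pipeline, you can issue a small signed request to the telephony logs endpoint. The following is a minimal sketch, not part of the official procedure; it uses the same HMAC-SHA1 request-signing scheme as the collector function later in this guide and assumes your Integration Key, Secret Key, and API Hostname are exported as the environment variables DUO_IKEY, DUO_SKEY, and DUO_API_HOST (names chosen here for illustration).

```python
# Minimal sketch: verify the Duo Admin API credentials by requesting one telephony log entry.
# DUO_IKEY, DUO_SKEY, and DUO_API_HOST are assumed to be set in the environment (illustrative names).
import base64, email.utils, hashlib, hmac, json, os, time, urllib.parse, urllib.request

ikey, skey, host = os.environ["DUO_IKEY"], os.environ["DUO_SKEY"], os.environ["DUO_API_HOST"]
path = "/admin/v2/logs/telephony"
maxtime = int(time.time() * 1000) - 120_000     # 2-minute delay, in milliseconds
mintime = maxtime - 24 * 60 * 60 * 1000         # look back over the last 24 hours
params = {"limit": "1", "maxtime": str(maxtime), "mintime": str(mintime)}

# Canonical string: RFC 2822 date, method, host, path, sorted query string, signed with HMAC-SHA1.
now = email.utils.formatdate()
qs = "&".join(f"{urllib.parse.quote(k, '~')}={urllib.parse.quote(v, '~')}" for k, v in sorted(params.items()))
canon = "\n".join([now, "GET", host.lower(), path, qs])
sig = hmac.new(skey.encode(), canon.encode(), hashlib.sha1).hexdigest()
auth = base64.b64encode(f"{ikey}:{sig}".encode()).decode()

req = urllib.request.Request(f"https://{host}{path}?{qs}")
req.add_header("Authorization", f"Basic {auth}")
req.add_header("Date", now)
with urllib.request.urlopen(req, timeout=30) as resp:
    print(json.loads(resp.read()).get("stat"))  # "OK" means the key, secret, and log-read permission are valid
```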
Configure AWS S3 bucket and IAM for Google SecOps
- Create an Amazon S3 bucket following this user guide: Creating a bucket
- Save the bucket Name and Region for future reference (for example, duo-telephony-logs).
- Create a User following this user guide: Creating an IAM user.
- Select the created User.
- Select Security credentials tab.
- In the Access keys section, click Create access key.
- Select Third-party service as Use case.
- Click Next.
- Optional: Add a description tag.
- Click Create access key.
- Click Download .CSV file to save the Access Key and Secret Access Key for future reference.
- Click Done.
- Select the Permissions tab.
- In the Permissions policies section, click Add permissions.
- Select Add permissions.
- Select Attach policies directly.
- Search for AmazonS3FullAccess policy.
- Select the policy.
- Click Next.
- Click Add permissions.
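If you prefer to script the bucket-creation step instead of using the console, the following boto3 sketch performs the equivalent action. The bucket name is the example used in this guide and the region is a placeholder; adjust both to your values.

```python
# Sketch: create the S3 bucket for Duo telephony logs with boto3 (equivalent to the console step above).
import boto3

bucket_name = "duo-telephony-logs"  # example name from this guide
region = "us-east-1"                # replace with your region

s3 = boto3.client("s3", region_name=region)
if region == "us-east-1":
    # us-east-1 does not accept a LocationConstraint
    s3.create_bucket(Bucket=bucket_name)
else:
    s3.create_bucket(
        Bucket=bucket_name,
        CreateBucketConfiguration={"LocationConstraint": region},
    )
print(f"Created s3://{bucket_name} in {region}")
```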
Configure the IAM policy and role for S3 uploads
- In the AWS console, go to IAM > Policies.
- Click Create policy > JSON tab.
- Copy and paste the following policy.
Policy JSON (replace duo-telephony-logs if you entered a different bucket name):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPutObjects",
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::duo-telephony-logs/*"
    },
    {
      "Sid": "AllowGetStateObject",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::duo-telephony-logs/duo-telephony/state.json"
    }
  ]
}
```
- Click Next > Create policy.
- Go to IAM > Roles > Create role > AWS service > Lambda.
- Attach the newly created policy.
- Name the role duo-telephony-lambda-role and click Create role.
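The same policy and role can also be created programmatically. The boto3 sketch below is illustrative: it assumes the example bucket and role names used in this guide, uses a hypothetical policy name (duo-telephony-s3-policy), and adds a trust policy that lets Lambda assume the role.

```python
# Sketch: create the Lambda execution policy and role with boto3 (equivalent to the console steps above).
import json
import boto3

iam = boto3.client("iam")
bucket = "duo-telephony-logs"  # example bucket name from this guide

policy_doc = {
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "AllowPutObjects", "Effect": "Allow", "Action": "s3:PutObject",
         "Resource": f"arn:aws:s3:::{bucket}/*"},
        {"Sid": "AllowGetStateObject", "Effect": "Allow", "Action": "s3:GetObject",
         "Resource": f"arn:aws:s3:::{bucket}/duo-telephony/state.json"},
    ],
}
policy = iam.create_policy(
    PolicyName="duo-telephony-s3-policy",  # illustrative policy name
    PolicyDocument=json.dumps(policy_doc),
)

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow",
                   "Principal": {"Service": "lambda.amazonaws.com"},
                   "Action": "sts:AssumeRole"}],
}
iam.create_role(RoleName="duo-telephony-lambda-role",
                AssumeRolePolicyDocument=json.dumps(trust_policy))
iam.attach_role_policy(RoleName="duo-telephony-lambda-role",
                       PolicyArn=policy["Policy"]["Arn"])

# Optional, but commonly added so the function can write its own CloudWatch logs.
iam.attach_role_policy(RoleName="duo-telephony-lambda-role",
                       PolicyArn="arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole")
```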
Create the Lambda function
- In the AWS Console, go to Lambda > Functions > Create function.
- Click Author from scratch.
- Provide the following configuration details:

Setting | Value
---|---
Name | duo-telephony-logs-collector
Runtime | Python 3.13
Architecture | x86_64
Execution role | duo-telephony-lambda-role
- After the function is created, open the Code tab, delete the stub, and paste the following code (duo-telephony-logs-collector.py):

```python
import json
import boto3
import os
import hmac
import hashlib
import base64
import urllib.parse
import urllib.request
import email.utils
from datetime import datetime, timedelta, timezone
from typing import Dict, Any, List, Optional
from botocore.exceptions import ClientError

s3 = boto3.client('s3')


def lambda_handler(event, context):
    """
    Lambda function to fetch Duo telephony logs and store them in S3.
    """
    try:
        # Get configuration from environment variables
        bucket_name = os.environ['S3_BUCKET']
        s3_prefix = os.environ['S3_PREFIX'].rstrip('/')
        state_key = os.environ['STATE_KEY']
        integration_key = os.environ['DUO_IKEY']
        secret_key = os.environ['DUO_SKEY']
        api_hostname = os.environ['DUO_API_HOST']

        # Load state
        state = load_state(bucket_name, state_key)

        # Calculate time range
        now = datetime.now(timezone.utc)

        if state.get('last_offset'):
            # Continue from last offset
            next_offset = state['last_offset']
            logs = []
            has_more = True
        else:
            # Start from last timestamp or 24 hours ago
            mintime = state.get('last_timestamp_ms', int((now - timedelta(hours=24)).timestamp() * 1000))
            # Apply 2-minute delay as recommended by Duo
            maxtime = int((now - timedelta(minutes=2)).timestamp() * 1000)
            next_offset = None
            logs = []
            has_more = True

        # Fetch logs with pagination
        total_fetched = 0
        max_iterations = int(os.environ.get('MAX_ITERATIONS', '10'))

        while has_more and total_fetched < max_iterations:
            if next_offset:
                # Use offset for pagination
                params = {
                    'limit': '1000',
                    'next_offset': next_offset
                }
            else:
                # Initial request with time range
                params = {
                    'mintime': str(mintime),
                    'maxtime': str(maxtime),
                    'limit': '1000',
                    'sort': 'ts:asc'
                }

            # Make API request with retry logic
            response = duo_api_call_with_retry(
                'GET', api_hostname, '/admin/v2/logs/telephony',
                params, integration_key, secret_key
            )

            if 'items' in response:
                logs.extend(response['items'])
                total_fetched += 1

                # Check for more data
                if 'metadata' in response and 'next_offset' in response['metadata']:
                    next_offset = response['metadata']['next_offset']
                    state['last_offset'] = next_offset
                else:
                    has_more = False
                    state['last_offset'] = None

                    # Update timestamp for next run
                    if logs:
                        # Get the latest timestamp from logs
                        latest_ts = max([log.get('ts', '') for log in logs])
                        if latest_ts:
                            # Convert ISO timestamp to milliseconds
                            dt = datetime.fromisoformat(latest_ts.replace('Z', '+00:00'))
                            state['last_timestamp_ms'] = int(dt.timestamp() * 1000) + 1
            else:
                has_more = False

        # Save logs to S3 if any were fetched
        if logs:
            timestamp = datetime.now(timezone.utc).strftime('%Y%m%d_%H%M%S')
            key = f"{s3_prefix}/telephony_{timestamp}.json"

            # Format logs as newline-delimited JSON
            log_data = '\n'.join(json.dumps(log) for log in logs)

            s3.put_object(
                Bucket=bucket_name,
                Key=key,
                Body=log_data.encode('utf-8'),
                ContentType='application/x-ndjson'
            )

            print(f"Saved {len(logs)} telephony logs to s3://{bucket_name}/{key}")
        else:
            print("No new telephony logs found")

        # Save state
        save_state(bucket_name, state_key, state)

        return {
            'statusCode': 200,
            'body': json.dumps({
                'message': f'Successfully processed {len(logs)} telephony logs',
                'logs_count': len(logs)
            })
        }

    except Exception as e:
        print(f"Error: {str(e)}")
        return {
            'statusCode': 500,
            'body': json.dumps({'error': str(e)})
        }


def duo_api_call_with_retry(method: str, host: str, path: str,
                            params: Dict[str, str], ikey: str, skey: str,
                            max_retries: int = 3) -> Dict[str, Any]:
    """
    Make an authenticated API call to Duo Admin API with retry logic.
    """
    for attempt in range(max_retries):
        try:
            return duo_api_call(method, host, path, params, ikey, skey)
        except Exception as e:
            if '429' in str(e) or '5' in str(e)[:1]:
                # Rate limit or server error
                if attempt < max_retries - 1:
                    wait_time = (2 ** attempt) * 2  # Exponential backoff
                    print(f"Retrying after {wait_time} seconds...")
                    import time
                    time.sleep(wait_time)
                    continue
            raise


def duo_api_call(method: str, host: str, path: str,
                 params: Dict[str, str], ikey: str, skey: str) -> Dict[str, Any]:
    """
    Make an authenticated API call to Duo Admin API.
    """
    # Create canonical string for signing using RFC 2822 date format
    now = email.utils.formatdate()
    canon = [now, method.upper(), host.lower(), path]

    # Add parameters
    args = []
    for key in sorted(params.keys()):
        val = params[key]
        args.append(f"{urllib.parse.quote(key, '~')}={urllib.parse.quote(val, '~')}")
    canon.append('&'.join(args))
    canon_str = '\n'.join(canon)

    # Sign the request
    sig = hmac.new(
        skey.encode('utf-8'),
        canon_str.encode('utf-8'),
        hashlib.sha1
    ).hexdigest()

    # Create authorization header
    auth = base64.b64encode(f"{ikey}:{sig}".encode('utf-8')).decode('utf-8')

    # Build URL
    url = f"https://{host}{path}"
    if params:
        url += '?' + '&'.join(args)

    # Make request
    req = urllib.request.Request(url)
    req.add_header('Authorization', f'Basic {auth}')
    req.add_header('Date', now)
    req.add_header('Host', host)
    req.add_header('User-Agent', 'duo-telephony-s3-ingestor/1.0')

    try:
        with urllib.request.urlopen(req, timeout=30) as response:
            data = json.loads(response.read().decode('utf-8'))
            if data.get('stat') == 'OK':
                return data.get('response', {})
            else:
                raise Exception(f"API error: {data.get('message', 'Unknown error')}")
    except urllib.error.HTTPError as e:
        error_body = e.read().decode('utf-8')
        raise Exception(f"HTTP error {e.code}: {error_body}")


def load_state(bucket: str, key: str) -> Dict[str, Any]:
    """Load state from S3."""
    try:
        response = s3.get_object(Bucket=bucket, Key=key)
        return json.loads(response['Body'].read().decode('utf-8'))
    except ClientError as e:
        if e.response.get('Error', {}).get('Code') in ('NoSuchKey', '404'):
            return {}
        print(f"Error loading state: {e}")
        return {}
    except Exception as e:
        print(f"Error loading state: {e}")
        return {}


def save_state(bucket: str, key: str, state: Dict[str, Any]):
    """Save state to S3."""
    try:
        s3.put_object(
            Bucket=bucket,
            Key=key,
            Body=json.dumps(state).encode('utf-8'),
            ContentType='application/json'
        )
    except Exception as e:
        print(f"Error saving state: {e}")
```
- Go to Configuration > Environment variables.
- Click Edit > Add new environment variable.
- Enter the environment variables provided in the following table, replacing the example values with your values.
Environment variables

Key | Example value
---|---
S3_BUCKET | duo-telephony-logs
S3_PREFIX | duo-telephony/
STATE_KEY | duo-telephony/state.json
DUO_IKEY | <your-integration-key>
DUO_SKEY | <your-secret-key>
DUO_API_HOST | api-yyyyyyyy.duosecurity.com
MAX_ITERATIONS | 10
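If you would rather set these variables from a script than from the console, a boto3 sketch such as the following applies the same values shown in the table (replace the placeholders with your own):

```python
# Sketch: set the collector's environment variables with boto3 instead of the console.
import boto3

lambda_client = boto3.client("lambda")
lambda_client.update_function_configuration(
    FunctionName="duo-telephony-logs-collector",
    Environment={"Variables": {
        "S3_BUCKET": "duo-telephony-logs",
        "S3_PREFIX": "duo-telephony/",
        "STATE_KEY": "duo-telephony/state.json",
        "DUO_IKEY": "<your-integration-key>",
        "DUO_SKEY": "<your-secret-key>",
        "DUO_API_HOST": "api-yyyyyyyy.duosecurity.com",
        "MAX_ITERATIONS": "10",
    }},
)
```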
- After the function is created, stay on its page (or open Lambda > Functions > duo-telephony-logs-collector).
- Select the Configuration tab.
- In the General configuration panel, click Edit.
- Change Timeout to 5 minutes (300 seconds) and click Save.
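The timeout change and a quick smoke test can also be done with boto3. This is an illustrative sketch: it raises the timeout to 300 seconds, waits for the update to complete, and then invokes the function once to confirm it can reach Duo and write to S3.

```python
# Sketch: raise the Lambda timeout and run a one-off test invocation.
import json
import boto3

lambda_client = boto3.client("lambda")

# Equivalent to setting Timeout to 5 minutes in General configuration.
lambda_client.update_function_configuration(
    FunctionName="duo-telephony-logs-collector",
    Timeout=300,
)
lambda_client.get_waiter("function_updated_v2").wait(
    FunctionName="duo-telephony-logs-collector",
)

# Synchronous test invocation; on success the function returns statusCode 200 and a log count.
response = lambda_client.invoke(
    FunctionName="duo-telephony-logs-collector",
    InvocationType="RequestResponse",
    Payload=b"{}",
)
print(json.loads(response["Payload"].read()))
```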
Create an EventBridge schedule
- Go to Amazon EventBridge > Scheduler > Create schedule.
- Provide the following configuration details:
  - Recurring schedule: Rate (1 hour).
  - Target: your Lambda function duo-telephony-logs-collector.
  - Name: duo-telephony-logs-1h.
- Click Create schedule.
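For reference, the same schedule can be created with the EventBridge Scheduler API. The sketch below assumes you already have a scheduler execution role that is allowed to invoke the Lambda function; both ARNs shown are placeholders.

```python
# Sketch: create the hourly schedule with the EventBridge Scheduler API.
import boto3

scheduler = boto3.client("scheduler")
scheduler.create_schedule(
    Name="duo-telephony-logs-1h",
    ScheduleExpression="rate(1 hour)",
    FlexibleTimeWindow={"Mode": "OFF"},
    Target={
        # Placeholder ARNs: replace with your Lambda ARN and a scheduler execution role
        # that has lambda:InvokeFunction on the collector function.
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:duo-telephony-logs-collector",
        "RoleArn": "arn:aws:iam::123456789012:role/duo-telephony-scheduler-role",
    },
)
```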
(Optional) Create read-only IAM user & keys for Google SecOps
- Go to AWS Console > IAM > Users.
- Click Add users.
- Provide the following configuration details:
  - User: Enter secops-reader.
  - Access type: Select Access key – Programmatic access.
- Click Create user.
- Attach minimal read policy (custom): Users > secops-reader > Permissions > Add permissions > Attach policies directly > Create policy.
JSON:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::duo-telephony-logs/*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::duo-telephony-logs"
    }
  ]
}
```
- Name the policy secops-reader-policy.
- Click Create policy > search/select > Next > Add permissions.
- Create an access key for secops-reader: go to Security credentials > Access keys, and click Create access key.
- Download the .CSV file. (You'll paste these values into the feed.)
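Before configuring the feed, you can confirm that the secops-reader credentials can see the collector's output. The following is a minimal sketch, assuming at least one log file has already been written under the prefix; the key placeholders are the values from the downloaded CSV.

```python
# Sketch: verify the read-only credentials can list and read objects under the feed prefix.
import boto3

s3 = boto3.client(
    "s3",
    aws_access_key_id="<secops-reader-access-key-id>",
    aws_secret_access_key="<secops-reader-secret-access-key>",
)

listing = s3.list_objects_v2(Bucket="duo-telephony-logs", Prefix="duo-telephony/")
for obj in listing.get("Contents", []):
    print(obj["Key"], obj["Size"])

# Read one object to confirm s3:GetObject also works (assumes the listing was not empty).
if listing.get("Contents"):
    key = listing["Contents"][0]["Key"]
    body = s3.get_object(Bucket="duo-telephony-logs", Key=key)["Body"].read()
    print(body.decode("utf-8").splitlines()[0])  # first NDJSON record
```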
Configure a feed in Google SecOps to ingest Duo Telephony logs
- Go to SIEM Settings > Feeds.
- Click + Add New Feed.
- In the Feed name field, enter a name for the feed (for example, Duo Telephony logs).
- Select Amazon S3 V2 as the Source type.
- Select Duo Telephony Logs as the Log type.
- Click Next.
- Specify values for the following input parameters:
  - S3 URI: s3://duo-telephony-logs/duo-telephony/
  - Source deletion options: Select deletion option according to your preference.
  - Maximum File Age: Include files modified in the last number of days. Default is 180 days.
  - Access Key ID: User access key with access to the S3 bucket.
  - Secret Access Key: User secret key with access to the S3 bucket.
  - Asset namespace: The asset namespace.
  - Ingestion labels: The label applied to the events from this feed.
- Click Next.
- Review your new feed configuration in the Finalize screen, and then click Submit.
UDM mapping table
Log field | UDM mapping | Logic |
---|---|---|
context | metadata.product_event_type | Directly mapped from the context field in the raw log. |
credits | security_result.detection_fields.value | Directly mapped from the credits field in the raw log, nested under a detection_fields object with the corresponding key credits. |
eventtype | security_result.detection_fields.value | Directly mapped from the eventtype field in the raw log, nested under a detection_fields object with the corresponding key eventtype. |
host | principal.hostname | Directly mapped from the host field in the raw log if it's not an IP address. Set to a static value of "ALLOW" in the parser. Set to a static value of "MECHANISM_UNSPECIFIED" in the parser. Parsed from the timestamp field in the raw log, which represents seconds since epoch. Set to "USER_UNCATEGORIZED" if both context and host fields are present in the raw log. Set to "STATUS_UPDATE" if only host is present. Otherwise, set to "GENERIC_EVENT". Directly taken from the raw log's log_type field. Set to a static value of "Telephony" in the parser. Set to a static value of "Duo" in the parser. |
phone | principal.user.phone_numbers | Directly mapped from the phone field in the raw log. |
phone | principal.user.userid | Directly mapped from the phone field in the raw log. Set to a static value of "INFORMATIONAL" in the parser. Set to a static value of "Duo Telephony" in the parser. |
timestamp | metadata.event_timestamp | Parsed from the timestamp field in the raw log, which represents seconds since epoch. |
type | security_result.summary | Directly mapped from the type field in the raw log. |
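As a simplified illustration of the per-field mappings listed above, the sketch below pairs a hypothetical raw telephony record (all values invented for this example) with the UDM fields it would populate according to the table; it covers only the rows whose log field is shown explicitly and is not an exhaustive rendering of the parser's output.

```python
# Hypothetical raw Duo telephony record (illustrative values only).
raw_log = {
    "timestamp": 1705314600,        # seconds since epoch
    "context": "authentication",
    "credits": 1,
    "eventtype": "telephony",
    "host": "duo.example.com",      # mapped to principal.hostname only when it is not an IP address
    "phone": "+15555550100",
    "type": "sms",
}

# Selected UDM fields the parser would populate, per the mapping table above (simplified view).
udm_event = {
    "metadata": {
        "product_event_type": raw_log["context"],   # context
        # metadata.event_timestamp is parsed from the raw timestamp field
    },
    "principal": {
        "hostname": raw_log["host"],                 # host
        "user": {
            "phone_numbers": [raw_log["phone"]],     # phone
            "userid": raw_log["phone"],              # phone
        },
    },
    "security_result": [{
        "summary": raw_log["type"],                  # type
        "detection_fields": [
            {"key": "credits", "value": str(raw_log["credits"])},
            {"key": "eventtype", "value": raw_log["eventtype"]},
        ],
    }],
}
```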
Need more help? Get answers from Community members and Google SecOps professionals.