Collect Delinea SSO logs
This document explains how to ingest Delinea (formerly Centrify) Single Sign-On (SSO) logs into Google Security Operations using Amazon S3. The parser handles logs in both JSON and syslog formats, extracting key-value pairs, timestamps, and other relevant fields and mapping them to the UDM model, with specific logic for login failures, user agents, severity levels, authentication mechanisms, and various event types. For failure events, it prioritizes FailUserName over NormalizedUser when populating the target user's email address.
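For example, given a hypothetical failure event such as the following (field values are illustrative only), the parser maps FailUserName, not NormalizedUser, to target.user.email_addresses, sets security_result.action to "BLOCK" because FailReason is present, and uses FailReason for the summary:

```json
{
  "EventType": "Cloud.Core.LoginFail",
  "NormalizedUser": "jdoe",
  "FailUserName": "jdoe@example.com",
  "FailReason": "Invalid password",
  "Level": "Error",
  "WhenOccurred": "2024-01-01T12:00:00Z"
}
```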
Before you begin
Make sure you have the following prerequisites:
- A Google SecOps instance.
- Privileged access to the Delinea (Centrify) SSO tenant.
- Privileged access to AWS (S3, Identity and Access Management (IAM), Lambda, EventBridge).
Collect Delinea (Centrify) SSO prerequisites (IDs, API keys, org IDs, tokens)
- Sign in to the Delinea Admin Portal.
- Go to Apps > Add Apps.
- Search for OAuth2 Client and click Add.
- Click Yes in the Add Web App dialog.
- Click Close in the Add Web Apps dialog.
- On the Application Configuration page, configure the following:
  - General tab:
    - Application ID: Enter a unique identifier (for example, secops-oauth-client).
    - Application Name: Enter a descriptive name (for example, SecOps Data Export).
    - Application Description: Enter a description (for example, OAuth client for exporting audit events to SecOps).
  - Trust tab:
    - Application is Confidential: Check this option.
    - Client ID Type: Select Confidential.
    - Issued Client ID: Copy and save this value.
    - Issued Client Secret: Copy and save this value.
  - Tokens tab:
    - Auth methods: Select Client Creds.
    - Token Type: Select Jwt RS256.
  - Scope tab:
    - Add scope siem with the description SIEM Integration Access.
    - Add scope redrock/query with the description Query API Access.
- Click Save to create the OAuth client.
- Go to Core Services > Users > Add User.
- Configure the service user:
  - Login Name: Enter the Issued Client ID from the Trust tab.
  - Email Address: Enter a valid email address (required field).
  - Display Name: Enter a descriptive name (for example, SecOps Service User).
  - Password and Confirm Password: Enter the Issued Client Secret from the Trust tab.
  - Status: Select Is OAuth confidential client.
- Click Create User.
- Go to Access > Roles and assign the service user to a role with appropriate permissions to query audit events.
- Copy and save the following details in a secure location:
  - Tenant URL: Your Centrify tenant URL (for example, https://yourtenant.my.centrify.com).
  - Client ID: The Issued Client ID from the Trust tab.
  - Client Secret: The Issued Client Secret from the Trust tab.
  - OAuth Application ID: The Application ID from the Application Configuration General tab.
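Before moving on, you can optionally verify the OAuth client and service user with a direct token request. The following is a minimal sketch mirroring the token call used by the Lambda function later in this document; the tenant URL, client ID, secret, and application ID are placeholders for the values you just saved, and the requests package is assumed to be installed:

```python
import base64

import requests

TENANT_URL = "https://yourtenant.my.centrify.com"  # placeholder tenant URL
CLIENT_ID = "your-client-id"                       # Issued Client ID
CLIENT_SECRET = "your-client-secret"               # Issued Client Secret
OAUTH_APP_ID = "secops-oauth-client"               # Application ID

# Client-credentials grant against the OAuth2 client configured above.
basic_auth = base64.b64encode(f"{CLIENT_ID}:{CLIENT_SECRET}".encode()).decode()
response = requests.post(
    f"{TENANT_URL}/oauth2/token/{OAUTH_APP_ID}",
    headers={
        "Authorization": f"Basic {basic_auth}",
        "X-CENTRIFY-NATIVE-CLIENT": "True",
        "Content-Type": "application/x-www-form-urlencoded",
    },
    data={"grant_type": "client_credentials", "scope": "siem redrock/query"},
)
response.raise_for_status()
print("Received access token of length", len(response.json()["access_token"]))
```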
Configure AWS S3 bucket and IAM for Google SecOps
- Create an Amazon S3 bucket following this user guide: Creating a bucket.
- Save the bucket Name and Region for future reference (for example, delinea-centrify-logs-bucket).
- Create a user following this user guide: Creating an IAM user.
- Select the created user.
- Select the Security credentials tab.
- Click Create Access Key in the Access Keys section.
- Select Third-party service as the Use case.
- Click Next.
- Optional: Add a description tag.
- Click Create access key.
- Click Download .CSV file to save the Access Key and Secret Access Key for future reference.
- Click Done.
- Select the Permissions tab.
- Click Add permissions in the Permissions policies section.
- Select Add permissions.
- Select Attach policies directly.
- Search for the AmazonS3FullAccess policy.
- Select the policy.
- Click Next.
- Click Add permissions.
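Optionally, confirm the new access key can write to the bucket before building the pipeline. A minimal boto3 sketch (the key pair, bucket name, and region are placeholders for your values):

```python
import boto3

# Credentials from the downloaded CSV; bucket and region are placeholders.
s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIA...",
    aws_secret_access_key="...",
    region_name="us-east-1",
)

# Write and remove a throwaway object to prove PutObject works.
s3.put_object(
    Bucket="delinea-centrify-logs-bucket",
    Key="centrify-sso-logs/.write-test",
    Body=b"ok",
)
s3.delete_object(
    Bucket="delinea-centrify-logs-bucket",
    Key="centrify-sso-logs/.write-test",
)
print("Write access confirmed")
```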
Configure the IAM policy and role for S3 uploads
- In the AWS console, go to IAM > Policies.
- Click Create policy > JSON tab.
- Copy and paste the following policy.
Policy JSON (replace delinea-centrify-logs-bucket if you entered a different bucket name):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPutObjects",
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::delinea-centrify-logs-bucket/*"
    },
    {
      "Sid": "AllowGetStateObject",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::delinea-centrify-logs-bucket/centrify-sso-logs/state.json"
    }
  ]
}
```
- Click Next > Create policy.
- Go to IAM > Roles.
- Click Create role > AWS service > Lambda.
- Attach the newly created policy and the managed policy AWSLambdaBasicExecutionRole (for CloudWatch logging).
- Name the role CentrifySSOLogExportRole and click Create role.
Create the Lambda function
- In the AWS Console, go to Lambda > Functions > Create function.
- Click Author from scratch.
- Provide the following configuration details:

| Setting | Value |
| --- | --- |
| Name | CentrifySSOLogExport |
| Runtime | Python 3.13 |
| Architecture | x86_64 |
| Execution role | CentrifySSOLogExportRole |
- After the function is created, open the Code tab, delete the stub, and paste the following code (CentrifySSOLogExport.py):

```python
import json
import boto3
import requests
import base64
from datetime import datetime
import os
from typing import Dict, List, Optional


def lambda_handler(event, context):
    """
    Lambda function to fetch Delinea Centrify SSO audit events and store them in S3
    """
    # Environment variables
    S3_BUCKET = os.environ['S3_BUCKET']
    S3_PREFIX = os.environ['S3_PREFIX']
    STATE_KEY = os.environ['STATE_KEY']

    # Centrify API credentials
    TENANT_URL = os.environ['TENANT_URL']
    CLIENT_ID = os.environ['CLIENT_ID']
    CLIENT_SECRET = os.environ['CLIENT_SECRET']
    OAUTH_APP_ID = os.environ['OAUTH_APP_ID']

    # Optional parameters
    PAGE_SIZE = int(os.environ.get('PAGE_SIZE', '1000'))
    MAX_PAGES = int(os.environ.get('MAX_PAGES', '10'))

    s3_client = boto3.client('s3')

    try:
        # Get last execution state
        last_timestamp = get_last_state(s3_client, S3_BUCKET, STATE_KEY)

        # Get OAuth access token
        access_token = get_oauth_token(TENANT_URL, CLIENT_ID, CLIENT_SECRET, OAUTH_APP_ID)

        # Fetch audit events
        events = fetch_audit_events(TENANT_URL, access_token, last_timestamp, PAGE_SIZE, MAX_PAGES)

        if events:
            # Store events in S3
            current_timestamp = datetime.utcnow()
            filename = f"{S3_PREFIX}centrify-sso-events-{current_timestamp.strftime('%Y%m%d_%H%M%S')}.json"
            store_events_to_s3(s3_client, S3_BUCKET, filename, events)

            # Update state with latest timestamp
            latest_timestamp = get_latest_event_timestamp(events)
            update_state(s3_client, S3_BUCKET, STATE_KEY, latest_timestamp)

            print(f"Successfully processed {len(events)} events and stored to {filename}")
        else:
            print("No new events found")

        return {
            'statusCode': 200,
            'body': json.dumps(f'Successfully processed {len(events) if events else 0} events')
        }

    except Exception as e:
        print(f"Error processing Centrify SSO logs: {str(e)}")
        return {
            'statusCode': 500,
            'body': json.dumps(f'Error: {str(e)}')
        }


def get_oauth_token(tenant_url: str, client_id: str, client_secret: str, oauth_app_id: str) -> str:
    """
    Get OAuth access token using client credentials flow
    """
    # Create basic auth token
    credentials = f"{client_id}:{client_secret}"
    basic_auth = base64.b64encode(credentials.encode()).decode()

    token_url = f"{tenant_url}/oauth2/token/{oauth_app_id}"

    headers = {
        'Authorization': f'Basic {basic_auth}',
        'X-CENTRIFY-NATIVE-CLIENT': 'True',
        'Content-Type': 'application/x-www-form-urlencoded'
    }

    data = {
        'grant_type': 'client_credentials',
        'scope': 'siem redrock/query'
    }

    response = requests.post(token_url, headers=headers, data=data)
    response.raise_for_status()

    token_data = response.json()
    return token_data['access_token']


def fetch_audit_events(tenant_url: str, access_token: str, last_timestamp: str,
                       page_size: int, max_pages: int) -> List[Dict]:
    """
    Fetch audit events from Centrify using the Redrock/query API
    """
    query_url = f"{tenant_url}/Redrock/query"

    headers = {
        'Authorization': f'Bearer {access_token}',
        'X-CENTRIFY-NATIVE-CLIENT': 'True',
        'Content-Type': 'application/json'
    }

    # Build SQL query with timestamp filter
    if last_timestamp:
        sql_query = f"Select * from Event where WhenOccurred > '{last_timestamp}' ORDER BY WhenOccurred ASC"
    else:
        # First run - get events from last 24 hours
        sql_query = "Select * from Event where WhenOccurred > datefunc('now', '-1') ORDER BY WhenOccurred ASC"

    payload = {
        "Script": sql_query,
        "args": {
            "PageSize": page_size,
            "Limit": page_size * max_pages,
            "Caching": -1
        }
    }

    response = requests.post(query_url, headers=headers, json=payload)
    response.raise_for_status()

    response_data = response.json()

    if not response_data.get('success', False):
        raise Exception(f"API query failed: {response_data.get('Message', 'Unknown error')}")

    # Parse the response
    result = response_data.get('Result', {})
    columns = {col['Name']: i for i, col in enumerate(result.get('Columns', []))}
    raw_results = result.get('Results', [])

    events = []
    for raw_event in raw_results:
        event = {}
        row_data = raw_event.get('Row', {})

        # Map column names to values
        for col_name, col_index in columns.items():
            if col_name in row_data and row_data[col_name] is not None:
                event[col_name] = row_data[col_name]

        # Add metadata
        event['_source'] = 'centrify_sso'
        event['_collected_at'] = datetime.utcnow().isoformat() + 'Z'

        events.append(event)

    return events


def get_last_state(s3_client, bucket: str, state_key: str) -> Optional[str]:
    """
    Get the last processed timestamp from S3 state file
    """
    try:
        response = s3_client.get_object(Bucket=bucket, Key=state_key)
        state_data = json.loads(response['Body'].read().decode('utf-8'))
        return state_data.get('last_timestamp')
    except s3_client.exceptions.NoSuchKey:
        print("No previous state found, starting from 24 hours ago")
        return None
    except Exception as e:
        print(f"Error reading state: {e}")
        return None


def update_state(s3_client, bucket: str, state_key: str, timestamp: str):
    """
    Update the state file with the latest processed timestamp
    """
    state_data = {
        'last_timestamp': timestamp,
        'updated_at': datetime.utcnow().isoformat() + 'Z'
    }

    s3_client.put_object(
        Bucket=bucket,
        Key=state_key,
        Body=json.dumps(state_data),
        ContentType='application/json'
    )


def store_events_to_s3(s3_client, bucket: str, key: str, events: List[Dict]):
    """
    Store events as JSONL (one JSON object per line) in S3
    """
    # Convert to JSONL format (one JSON object per line)
    jsonl_content = '\n'.join(json.dumps(event, default=str) for event in events)

    s3_client.put_object(
        Bucket=bucket,
        Key=key,
        Body=jsonl_content,
        ContentType='application/x-ndjson'
    )


def get_latest_event_timestamp(events: List[Dict]) -> str:
    """
    Get the latest timestamp from the events for state tracking
    """
    if not events:
        return datetime.utcnow().isoformat() + 'Z'

    latest = None
    for event in events:
        when_occurred = event.get('WhenOccurred')
        if when_occurred:
            if latest is None or when_occurred > latest:
                latest = when_occurred

    return latest or datetime.utcnow().isoformat() + 'Z'
```
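Note that the requests library is not bundled with the Lambda Python runtime; package it with your deployment archive or attach it as a layer. To smoke-test the handler locally before deploying, you can export the same environment variables and invoke it directly. A sketch, assuming the code above is saved as CentrifySSOLogExport.py and your local AWS credentials can reach the bucket:

```python
import os

# Same variables the Lambda configuration will provide; values are placeholders.
os.environ.update({
    "S3_BUCKET": "delinea-centrify-logs-bucket",
    "S3_PREFIX": "centrify-sso-logs/",
    "STATE_KEY": "centrify-sso-logs/state.json",
    "TENANT_URL": "https://yourtenant.my.centrify.com",
    "CLIENT_ID": "your-client-id",
    "CLIENT_SECRET": "your-client-secret",
    "OAUTH_APP_ID": "your-oauth-application-id",
})

from CentrifySSOLogExport import lambda_handler

# The handler ignores the event payload and context, so stubs are fine here.
print(lambda_handler({}, None))
```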
- Go to Configuration > Environment variables.
- Click Edit > Add new environment variable.
- Enter the environment variables provided in the following table, replacing the example values with your values.
Environment variables:

| Key | Example value |
| --- | --- |
| S3_BUCKET | delinea-centrify-logs-bucket |
| S3_PREFIX | centrify-sso-logs/ |
| STATE_KEY | centrify-sso-logs/state.json |
| TENANT_URL | https://yourtenant.my.centrify.com |
| CLIENT_ID | your-client-id |
| CLIENT_SECRET | your-client-secret |
| OAUTH_APP_ID | your-oauth-application-id |
| OAUTH_SCOPE | siem |
| PAGE_SIZE | 1000 |
| MAX_PAGES | 10 |
- After the function is created, stay on its page (or open Lambda > Functions > your-function).
- Select the Configuration tab.
- In the General configuration panel, click Edit.
- Change Timeout to 5 minutes (300 seconds) and click Save.
Create an EventBridge schedule
- Go to Amazon EventBridge > Scheduler > Create schedule.
- Provide the following configuration details:
  - Recurring schedule: Rate (1 hour).
  - Target: your Lambda function CentrifySSOLogExport.
  - Name: CentrifySSOLogExport-1h.
- Click Create schedule.
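If you prefer not to use the console, the same schedule can be created with boto3's EventBridge Scheduler client. This is a sketch: the function and role ARNs are placeholders, and the role must allow scheduler.amazonaws.com to invoke the Lambda function:

```python
import boto3

scheduler = boto3.client("scheduler")

# Hourly trigger for the export function; ARNs below are placeholders.
scheduler.create_schedule(
    Name="CentrifySSOLogExport-1h",
    ScheduleExpression="rate(1 hour)",
    FlexibleTimeWindow={"Mode": "OFF"},
    Target={
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:CentrifySSOLogExport",
        "RoleArn": "arn:aws:iam::123456789012:role/CentrifySSOSchedulerRole",
    },
)
```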
(Optional) Create read-only IAM user & keys for Google SecOps
- In the AWS Console, go to IAM > Users.
- Click Add users.
- Provide the following configuration details:
  - User: Enter secops-reader.
  - Access type: Select Access key – Programmatic access.
- Click Create user.
- Attach a minimal read policy (custom): go to Users > secops-reader > Permissions.
- Click Add permissions > Attach policies directly.
- Select Create policy.
- In the JSON editor, enter the following policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::delinea-centrify-logs-bucket/*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::delinea-centrify-logs-bucket"
    }
  ]
}
```

- Name the policy secops-reader-policy.
- Click Create policy > search/select > Next.
- Click Add permissions.
- Create an access key for secops-reader: go to Security credentials > Access keys.
- Click Create access key.
- Download the .CSV file. (You'll paste these values into the feed.)
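Optionally, verify the secops-reader key can list and fetch objects under the prefix. A short boto3 sketch with placeholder credentials:

```python
import boto3

# Key pair from the secops-reader CSV; values are placeholders.
s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIA...",
    aws_secret_access_key="...",
)

resp = s3.list_objects_v2(
    Bucket="delinea-centrify-logs-bucket",
    Prefix="centrify-sso-logs/",
    MaxKeys=5,
)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```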
Configure a feed in Google SecOps to ingest Delinea (Centrify) SSO logs
- Go to SIEM Settings > Feeds.
- Click + Add New Feed.
- In the Feed name field, enter a name for the feed (for example, Delinea Centrify SSO logs).
- Select Amazon S3 V2 as the Source type.
- Select Centrify as the Log type.
- Click Next.
- Specify values for the following input parameters:
  - S3 URI: s3://delinea-centrify-logs-bucket/centrify-sso-logs/
  - Source deletion options: Select the deletion option according to your preference.
  - Maximum File Age: Include files modified within the last number of days. The default is 180 days.
  - Access Key ID: The user access key with access to the S3 bucket.
  - Secret Access Key: The user secret key with access to the S3 bucket.
  - Asset namespace: The asset namespace.
  - Ingestion labels: The label applied to the events from this feed.
- Click Next.
- Review your new feed configuration in the Finalize screen, and then click Submit.
UDM mapping table
| Log field | UDM mapping | Logic |
| --- | --- | --- |
| AccountID | security_result.detection_fields.value | The value of AccountID from the raw log is assigned to a security_result.detection_fields object with key Account ID. |
| ApplicationName | target.application | The value of ApplicationName from the raw log is assigned to the target.application field. |
| AuthorityFQDN | target.asset.network_domain | The value of AuthorityFQDN from the raw log is assigned to the target.asset.network_domain field. |
| AuthorityID | target.asset.asset_id | The value of AuthorityID from the raw log is assigned to the target.asset.asset_id field, prefixed with "AuthorityID:". |
| AzDeploymentId | security_result.detection_fields.value | The value of AzDeploymentId from the raw log is assigned to a security_result.detection_fields object with key AzDeploymentId. |
| AzRoleId | additional.fields.value.string_value | The value of AzRoleId from the raw log is assigned to an additional.fields object with key AzRole Id. |
| AzRoleName | target.user.attribute.roles.name | The value of AzRoleName from the raw log is assigned to the target.user.attribute.roles.name field. |
| ComputerFQDN | principal.asset.network_domain | The value of ComputerFQDN from the raw log is assigned to the principal.asset.network_domain field. |
| ComputerID | principal.asset.asset_id | The value of ComputerID from the raw log is assigned to the principal.asset.asset_id field, prefixed with "ComputerId:". |
| ComputerName | about.hostname | The value of ComputerName from the raw log is assigned to the about.hostname field. |
| CredentialId | security_result.detection_fields.value | The value of CredentialId from the raw log is assigned to a security_result.detection_fields object with key Credential Id. |
| DirectoryServiceName | security_result.detection_fields.value | The value of DirectoryServiceName from the raw log is assigned to a security_result.detection_fields object with key Directory Service Name. |
| DirectoryServiceNameLocalized | security_result.detection_fields.value | The value of DirectoryServiceNameLocalized from the raw log is assigned to a security_result.detection_fields object with key Directory Service Name Localized. |
| DirectoryServiceUuid | security_result.detection_fields.value | The value of DirectoryServiceUuid from the raw log is assigned to a security_result.detection_fields object with key Directory Service Uuid. |
| EventMessage | security_result.summary | The value of EventMessage from the raw log is assigned to the security_result.summary field. |
| EventType | metadata.product_event_type | The value of EventType from the raw log is assigned to the metadata.product_event_type field. It is also used to determine the metadata.event_type. |
| FailReason | security_result.summary | The value of FailReason from the raw log is assigned to the security_result.summary field when present. |
| FailUserName | target.user.email_addresses | The value of FailUserName from the raw log is assigned to the target.user.email_addresses field when present. |
| FromIPAddress | principal.ip | The value of FromIPAddress from the raw log is assigned to the principal.ip field. |
| ID | security_result.detection_fields.value | The value of ID from the raw log is assigned to a security_result.detection_fields object with key ID. |
| InternalTrackingID | metadata.product_log_id | The value of InternalTrackingID from the raw log is assigned to the metadata.product_log_id field. |
| JumpType | additional.fields.value.string_value | The value of JumpType from the raw log is assigned to an additional.fields object with key Jump Type. |
| NormalizedUser | target.user.email_addresses | The value of NormalizedUser from the raw log is assigned to the target.user.email_addresses field. |
| OperationMode | additional.fields.value.string_value | The value of OperationMode from the raw log is assigned to an additional.fields object with key Operation Mode. |
| ProxyId | security_result.detection_fields.value | The value of ProxyId from the raw log is assigned to a security_result.detection_fields object with key Proxy Id. |
| RequestUserAgent | network.http.user_agent | The value of RequestUserAgent from the raw log is assigned to the network.http.user_agent field. |
| SessionGuid | network.session_id | The value of SessionGuid from the raw log is assigned to the network.session_id field. |
| Tenant | additional.fields.value.string_value | The value of Tenant from the raw log is assigned to an additional.fields object with key Tenant. |
| ThreadType | additional.fields.value.string_value | The value of ThreadType from the raw log is assigned to an additional.fields object with key Thread Type. |
| UserType | principal.user.attribute.roles.name | The value of UserType from the raw log is assigned to the principal.user.attribute.roles.name field. |
| WhenOccurred | metadata.event_timestamp | The value of WhenOccurred from the raw log is parsed and assigned to the metadata.event_timestamp field. This field also populates the top-level timestamp field. |
| N/A | extensions.auth.type | Hardcoded value "SSO". |
| N/A | metadata.event_type | Determined by the EventType field. Defaults to STATUS_UPDATE if EventType is not present or doesn't match any specific criteria. Can be USER_LOGIN, USER_CREATION, USER_RESOURCE_ACCESS, USER_LOGOUT, or USER_CHANGE_PASSWORD. |
| N/A | metadata.log_type | Hardcoded value "CENTRIFY_SSO". |
| N/A | metadata.product_name | Hardcoded value "SSO". |
| N/A | metadata.vendor_name | Hardcoded value "Centrify". |
| N/A | network.session_id | If the message field contains a session ID, it is extracted and used. Otherwise defaults to "1". |
| N/A | principal.hostname | Extracted from the host field if available, which comes from the syslog header. |
| N/A | principal.process.pid | Extracted from the pid field if available, which comes from the syslog header. |
| N/A | principal.user.userid | If UserGuid is present, its value is used. Otherwise, if the message field contains a user ID, it is extracted and used. |
| N/A | security_result.action | Set to "ALLOW" if Level is "Info", and "BLOCK" if FailReason is present. |
| N/A | security_result.category | Set to "AUTH_VIOLATION" if FailReason is present. |
| N/A | security_result.severity | Determined by the Level field. Set to "INFORMATIONAL" if Level is "Info", "MEDIUM" if Level is "Warning", and "ERROR" if Level is "Error". |
Need more help? Get answers from Community members and Google SecOps professionals.