Go to AWS console > IAM > Policies > Create policy > JSON tab.
Copy and paste the policy.
Click Next > Create policy.
Go to IAM > Roles > Create role > AWS service > Lambda.
Attach the newly created policy.
Name the role WriteMulesoftToS3Role and click Create role.
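If you prefer to script these IAM steps, the following is a minimal boto3 sketch of the same policy and role. The policy name WriteMulesoftAuditToS3 is an example (the console steps above don't require a specific name), and the bucket name must match yours.

import json
import boto3

iam = boto3.client("iam")

# Same policy JSON as above; replace the bucket name with yours.
policy_doc = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowPutAuditObjects",
        "Effect": "Allow",
        "Action": ["s3:PutObject"],
        "Resource": "arn:aws:s3:::mulesoft-audit-logs/*",
    }],
}

# Trust policy that lets the Lambda service assume the role.
trust_doc = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

policy_arn = iam.create_policy(
    PolicyName="WriteMulesoftAuditToS3",  # example name
    PolicyDocument=json.dumps(policy_doc),
)["Policy"]["Arn"]

iam.create_role(
    RoleName="WriteMulesoftToS3Role",
    AssumeRolePolicyDocument=json.dumps(trust_doc),
)

iam.attach_role_policy(RoleName="WriteMulesoftToS3Role", PolicyArn=policy_arn)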
Create the Lambda function
In the AWS console, go to Lambda > Create function and use the following settings:

Setting        | Value
Name           | mulesoft_audit_to_s3
Runtime        | Python 3.13
Architecture   | x86_64
Execution role | Use existing > WriteMulesoftToS3Role
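Alternatively, the function can be created programmatically. This is a minimal boto3 sketch; it assumes the handler code shown in the next step is saved locally as mulesoft_audit_to_s3.py, and the role ARN is a placeholder you must replace with your account's WriteMulesoftToS3Role ARN.

import io
import zipfile
import boto3

# Package the handler file into an in-memory deployment zip.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    # Lambda resolves the handler as <file>.<function>, so this file name
    # pairs with the handler mulesoft_audit_to_s3.lambda_handler.
    zf.write("mulesoft_audit_to_s3.py")

boto3.client("lambda").create_function(
    FunctionName="mulesoft_audit_to_s3",
    Runtime="python3.13",
    Architectures=["x86_64"],
    Role="arn:aws:iam::123456789012:role/WriteMulesoftToS3Role",  # placeholder account ID
    Handler="mulesoft_audit_to_s3.lambda_handler",
    Code={"ZipFile": buf.getvalue()},
    Timeout=120,  # allow time for paging through the audit API
)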
After the function is created, open the Code tab, delete the stub and enter the following code (mulesoft_audit_to_s3.py).
#!/usr/bin/env python3
import os, json, gzip, io, uuid, datetime as dt, urllib.request, urllib.error, urllib.parse

import boto3

ORG_ID = os.environ["MULE_ORG_ID"]
CLIENT_ID = os.environ["CLIENT_ID"]
CLIENT_SECRET = os.environ["CLIENT_SECRET"]
S3_BUCKET = os.environ["S3_BUCKET_NAME"]

TOKEN_URL = "https://anypoint.mulesoft.com/accounts/api/v2/oauth2/token"
QUERY_URL = f"https://anypoint.mulesoft.com/audit/v2/organizations/{ORG_ID}/query"


def http_post(url, data, headers=None):
    # JSON body when custom headers are supplied (audit query),
    # form-encoded body otherwise (token request).
    raw = json.dumps(data).encode() if headers else urllib.parse.urlencode(data).encode()
    req = urllib.request.Request(url, raw, headers or {})
    try:
        with urllib.request.urlopen(req, timeout=30) as r:
            return json.loads(r.read())
    except urllib.error.HTTPError as e:
        print("MuleSoft error body →", e.read().decode())
        raise


def get_token():
    # Exchange the Connected App client credentials for a bearer token.
    return http_post(TOKEN_URL, {
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET
    })["access_token"]


def fetch_audit(token, start, end):
    # Page through the audit query API, 200 events at a time.
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json"
    }
    body = {
        "startDate": f"{start.isoformat(timespec='milliseconds')}Z",
        "endDate": f"{end.isoformat(timespec='milliseconds')}Z",
        "limit": 200,
        "offset": 0,
        "ascending": False
    }
    while True:
        data = http_post(QUERY_URL, body, headers)
        if not data.get("data"):
            break
        yield from data["data"]
        body["offset"] += body["limit"]


def upload(events, ts):
    # Write events as gzipped JSON Lines under a date-based key.
    key = f"{ts:%Y/%m/%d}/mulesoft-audit-{uuid.uuid4()}.json.gz"
    buf = io.BytesIO()
    with gzip.GzipFile(fileobj=buf, mode="w") as gz:
        for ev in events:
            gz.write((json.dumps(ev) + "\n").encode())
    buf.seek(0)
    boto3.client("s3").upload_fileobj(buf, S3_BUCKET, key)


def lambda_handler(event=None, context=None):
    now = dt.datetime.utcnow().replace(microsecond=0)
    start = now - dt.timedelta(days=1)

    token = get_token()
    events = list(fetch_audit(token, start, now))

    if events:
        upload(events, start)
        print(f"Uploaded {len(events)} events")
    else:
        print("No events in the last 24 h")


# For local testing
if __name__ == "__main__":
    lambda_handler()
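To smoke-test the script locally before deploying, you can set the same environment variables and call the handler directly. This is a minimal sketch that assumes the code above is saved locally as mulesoft_audit_to_s3.py, that your local AWS credentials can write to the bucket, and that real MuleSoft credentials are substituted for the placeholders.

import os

# The module reads its configuration at import time, so set the
# environment variables first.
os.environ.update({
    "MULE_ORG_ID": "your_org_id",
    "CLIENT_ID": "your_client_id",
    "CLIENT_SECRET": "your_client_secret",
    "S3_BUCKET_NAME": "mulesoft-audit-logs",
})

import mulesoft_audit_to_s3  # the file shown above

mulesoft_audit_to_s3.lambda_handler()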
Go to Configuration > Environment variables > Edit > Add new environment variable.
Add the following environment variables, replacing the example values with your own.

Key            | Example value
MULE_ORG_ID    | your_org_id
CLIENT_ID      | your_client_id
CLIENT_SECRET  | your_client_secret
S3_BUCKET_NAME | mulesoft-audit-logs
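Alternatively, a minimal boto3 sketch that sets the same variables outside the console (the values are the placeholders from the table above):

import boto3

boto3.client("lambda").update_function_configuration(
    FunctionName="mulesoft_audit_to_s3",
    Environment={"Variables": {
        "MULE_ORG_ID": "your_org_id",
        "CLIENT_ID": "your_client_id",
        "CLIENT_SECRET": "your_client_secret",
        "S3_BUCKET_NAME": "mulesoft-audit-logs",
    }},
)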
Schedule the Lambda function (EventBridge Scheduler)
Go to Configuration > Triggers > Add trigger > EventBridge Scheduler > Create rule.
Provide the following configuration details:
Name: daily-mulesoft-audit-export (schedule names can't contain spaces).
Schedule pattern: Cron expression.
Expression: 0 2 * * ? * (runs daily at 02:00 UTC; EventBridge cron expressions use six fields and require ? in either the day-of-month or day-of-week field).
Leave the rest as default and click Create.
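The equivalent schedule can also be created with boto3. This sketch assumes a scheduler execution role that is allowed to invoke the function; both ARNs below are placeholders.

import boto3

boto3.client("scheduler").create_schedule(
    Name="daily-mulesoft-audit-export",
    ScheduleExpression="cron(0 2 * * ? *)",  # daily at 02:00 UTC
    FlexibleTimeWindow={"Mode": "OFF"},
    Target={
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:mulesoft_audit_to_s3",
        "RoleArn": "arn:aws:iam::123456789012:role/scheduler-invoke-mulesoft-audit",
    },
)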
Configure a feed in Google SecOps to ingest the MuleSoft logs
Go to SIEM Settings > Feeds.
Click Add new.
In the Feed name field, enter a name for the feed (for example, MuleSoft Logs).
Select Amazon S3 V2 as the Source type.
Select Mulesoft as the Log type.
Click Next.
Specify values for the following input parameters:
S3 URI: The bucket URI, for example s3://mulesoft-audit-logs/. Replace mulesoft-audit-logs with the actual name of the bucket.
Source deletion options: Select the deletion option according to your preference. If you select the Delete transferred files or Delete transferred files and empty directories option, make sure that you granted appropriate permissions to the service account.
Maximum File Age: Include files modified within the last number of days. The default is 180 days.
Access Key ID: The user access key with access to the S3 bucket (a quick check of these credentials is sketched after the feed steps).
Secret Access Key: The user secret key with access to the S3 bucket.
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-29 UTC."],[],[],null,["Collect MuleSoft Anypoint logs \nSupported in: \nGoogle secops [SIEM](/chronicle/docs/secops/google-secops-siem-toc)\n| **Note:** This feature is covered by [Pre-GA Offerings Terms](https://chronicle.security/legal/service-terms/) of the Google Security Operations Service Specific Terms. Pre-GA features might have limited support, and changes to pre-GA features might not be compatible with other pre-GA versions. For more information, see the [Google SecOps Technical Support Service guidelines](https://chronicle.security/legal/technical-support-services-guidelines/) and the [Google SecOps Service Specific Terms](https://chronicle.security/legal/service-terms/).\n\nThis document explains how to ingest audit-trail events from MuleSoft Anypoint platform logs to Google Security Operations using AWS S3.\n\nBefore you begin\n\nMake sure you have the following prerequisites:\n\n- Google SecOps instance\n- Privileged access to MuleSoft\n- Privileged access to AWS\n\nGet the MuleSoft Organization ID\n\n1. Sign in to the Anypoint Platform.\n2. Go to **Menu \\\u003e Access Management**.\n3. In the **Business Groups** table, click your organization's name.\n4. Copy the **Organization ID** (for example, `0a12b3c4-d5e6-789f-1021-1a2b34cd5e6f`).\n\n- Alternatively, go to [MuleSoft Business Groups](https://anypoint.mulesoft.com/accounts/businessGroups) and copy the ID from the URL.\n\nConfigure AWS S3 bucket and IAM for Google SecOps\n\n1. Create **Amazon S3 bucket** following this user guide: [Creating a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/creating-bucket.html).\n2. Save bucket **Name** and **Region** for future reference (for example, `mulesoft-audit-logs`).\n3. Create a **User** following this user guide: [Creating an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html#id_users_create_console).\n4. Select the created **User**.\n5. Select **Security credentials** tab.\n6. Click **Create Access Key** in section **Access Keys**.\n7. Select **Third-party service** as **Use case**.\n8. Click **Next**.\n9. Optional: Add description tag.\n10. Click **Create access key**.\n11. Click **Download CSV file** for save the **Access Key** and **Secret Access Key** for future reference.\n12. Click **Done**.\n13. Select **Permissions** tab.\n14. Click **Add permissions** in section **Permissions policies**.\n15. Select **Add permissions**.\n16. Select **Attach policies directly**.\n17. Search for and select the **AmazonS3FullAccess** policy.\n18. Click **Next**.\n19. Click **Add permissions**.\n\nCreate the MuleSoft Connected App\n\n1. Sign in to the Anypoint Platform.\n2. Go to **Access Management \\\u003e Connected Apps \\\u003e Create App**.\n3. Provide the following configuration details:\n - **App name** : Enter a unique name (for example, `Google SecOps export`).\n - Select **App acts on its own behalf (client credentials)**.\n - Click **Add scopes → Audit Log Viewer → Next**.\n - Select every Business Group whose logs you need.\n - Click **Next \\\u003e Add scopes**.\n4. 
Click **Save** and copy the **Client ID** and **Client Secret**.\n\nConfigure IAM policy \\& role for S3 uploads\n\n1. **Policy JSON** (replace `mulesoft-audit-logs` with your bucket name):\n\n {\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Sid\": \"AllowPutAuditObjects\",\n \"Effect\": \"Allow\",\n \"Action\": [\"s3:PutObject\"],\n \"Resource\": \"arn:aws:s3:::mulesoft-audit-logs/*\"\n }\n ]\n }\n\n2. Go to **AWS console \\\u003e IAM \\\u003e Policies \\\u003e Create policy \\\u003e JSON tab**.\n\n3. Copy and paste the policy.\n\n4. Click **Next \\\u003e Create policy**.\n\n5. Go to **IAM \\\u003e Roles \\\u003e Create role \\\u003e AWS service \\\u003e Lambda**.\n\n6. Attach the newly created policy.\n\n7. Name the role `WriteMulesoftToS3Role` and click **Create role**.\n\nCreate the Lambda function\n\n| Setting | Value |\n|--------------------|-----------------------------------------|\n| **Name** | `mulesoft_audit_to_s3` |\n| **Runtime** | Python 3.13 |\n| **Architecture** | x86_64 |\n| **Execution role** | Use existing \\\u003e `WriteMulesoftToS3Role` |\n\n1. After the function is created, open the **Code** tab, delete the stub and enter the following code (`mulesoft_audit_to_s3.py`).\n\n #!/usr/bin/env python3\n\n import os, json, gzip, io, uuid, datetime as dt, urllib.request, urllib.error, urllib.parse\n import boto3\n\n ORG_ID = os.environ[\"MULE_ORG_ID\"]\n CLIENT_ID = os.environ[\"CLIENT_ID\"]\n CLIENT_SECRET = os.environ[\"CLIENT_SECRET\"]\n S3_BUCKET = os.environ[\"S3_BUCKET_NAME\"]\n\n TOKEN_URL = \"https://anypoint.mulesoft.com/accounts/api/v2/oauth2/token\"\n QUERY_URL = f\"https://anypoint.mulesoft.com/audit/v2/organizations/{ORG_ID}/query\"\n\n def http_post(url, data, headers=None):\n raw = json.dumps(data).encode() if headers else urllib.parse.urlencode(data).encode()\n req = urllib.request.Request(url, raw, headers or {})\n try:\n with urllib.request.urlopen(req, timeout=30) as r:\n return json.loads(r.read())\n except urllib.error.HTTPError as e:\n print(\"MuleSoft error body →\", e.read().decode())\n raise\n\n def get_token():\n return http_post(TOKEN_URL, {\n \"grant_type\": \"client_credentials\",\n \"client_id\": CLIENT_ID,\n \"client_secret\": CLIENT_SECRET\n })[\"access_token\"]\n\n def fetch_audit(token, start, end):\n headers = {\n \"Authorization\": f\"Bearer {token}\",\n \"Content-Type\": \"application/json\"\n }\n body = {\n \"startDate\": f\"{start.isoformat(timespec='milliseconds')}Z\",\n \"endDate\": f\"{end.isoformat(timespec='milliseconds')}Z\",\n \"limit\": 200,\n \"offset\": 0,\n \"ascending\": False\n }\n while True:\n data = http_post(QUERY_URL, body, headers)\n if not data.get(\"data\"):\n break\n yield from data[\"data\"]\n body[\"offset\"] += body[\"limit\"]\n\n def upload(events, ts):\n key = f\"{ts:%Y/%m/%d}/mulesoft-audit-{uuid.uuid4()}.json.gz\"\n buf = io.BytesIO()\n with gzip.GzipFile(fileobj=buf, mode=\"w\") as gz:\n for ev in events:\n gz.write((json.dumps(ev) + \"\\n\").encode())\n buf.seek(0)\n boto3.client(\"s3\").upload_fileobj(buf, S3_BUCKET, key)\n\n def lambda_handler(event=None, context=None):\n now = dt.datetime.utcnow().replace(microsecond=0)\n start = now - dt.timedelta(days=1)\n\n token = get_token()\n events = list(fetch_audit(token, start, now))\n\n if events:\n upload(events, start)\n print(f\"Uploaded {len(events)} events\")\n else:\n print(\"No events in the last 24 h\")\n\n # For local testing\n if __name__ == \"__main__\":\n lambda_handler()\n\n2. 
Go to **Configuration \\\u003e Environment variables \\\u003e Edit \\\u003e Add new environment variable**.\n\n3. Enter the following environment variables provided, replacing with your value.\n\n | Key | Example value |\n |------------------|-----------------------|\n | `MULE_ORG_ID` | `your_org_id` |\n | `CLIENT_ID` | `your_client_id` |\n | `CLIENT_SECRET` | `your_client_secret` |\n | `S3_BUCKET_NAME` | `mulesoft-audit-logs` |\n\nSchedule the Lambda function (EventBridge Scheduler)\n\n1. Go to **Configuration \\\u003e Triggers \\\u003e Add trigger \\\u003e EventBridge Scheduler \\\u003e Create rule**.\n2. Provide the following configuration details:\n - **Name** : `daily-mulesoft-audit export`.\n - **Schedule pattern** : **Cron expression**.\n - **Expression** : `0 2 * * *` (runs daily at 02:00 UTC).\n3. Leave the rest as default and click **Create**.\n\nConfigure a feed in Google SecOps to ingest the MuleSoft logs\n\n1. Go to **SIEM Settings \\\u003e Feeds**.\n2. Click **Add new**.\n3. In the **Feed name** field, enter a name for the feed (for example, `MuleSoft Logs`).\n4. Select **Amazon S3 V2** as the **Source type**.\n5. Select **Mulesoft** as the **Log type**.\n6. Click **Next**.\n7. Specify values for the following input parameters:\n\n - **S3 URI** : The bucket URI\n - `s3://mulesoft-audit-logs/`\n - Replace `mulesoft-audit-logs` with the actual name of the bucket.\n - **Source deletion options**: select the deletion option according to your preference.\n\n | **Note:** If you select the `Delete transferred files` or `Delete transferred files and empty directories` option, make sure that you granted appropriate permissions to the service account.\n - **Maximum File Age**: Include files modified in the last number of days. Default is 180 days.\n\n - **Access Key ID**: User access key with access to the s3 bucket.\n\n - **Secret Access Key**: User secret key with access to the s3 bucket.\n\n - **Asset namespace** : The [asset namespace](/chronicle/docs/investigation/asset-namespaces).\n\n - **Ingestion labels**: The label to be applied to the events from this feed.\n\n8. Click **Next**.\n\n9. Review your new feed configuration in the **Finalize** screen, and then click **Submit**.\n\n**Need more help?** [Get answers from Community members and Google SecOps professionals.](https://security.googlecloudcommunity.com/google-security-operations-2)"]]
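Optionally, confirm that the access key pair you entered for the feed can list and read the exported objects. A minimal sketch, assuming the example bucket name:

import boto3

s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIA...",          # the feed's Access Key ID
    aws_secret_access_key="your_secret",  # the feed's Secret Access Key
)

# List a few recent objects; the Lambda writes keys under YYYY/MM/DD/ prefixes.
resp = s3.list_objects_v2(Bucket="mulesoft-audit-logs", MaxKeys=10)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["LastModified"])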