Collect MuleSoft Anypoint logs

**Note:** This feature is covered by [Pre-GA Offerings Terms](https://chronicle.security/legal/service-terms/) of the Google Security Operations Service Specific Terms. Pre-GA features might have limited support, and changes to pre-GA features might not be compatible with other pre-GA versions.

This document explains how to ingest audit-trail events from MuleSoft Anypoint Platform logs into Google Security Operations using AWS S3.
Before you begin

Make sure you have the following prerequisites:

- Google SecOps instance
- Privileged access to MuleSoft
- Privileged access to AWS
Get the MuleSoft Organization ID

1. Sign in to the Anypoint Platform.
2. Go to **Menu > Access Management**.
3. In the **Business Groups** table, click your organization's name.
4. Copy the **Organization ID** (for example, `0a12b3c4-d5e6-789f-1021-1a2b34cd5e6f`).

Alternatively, go to [MuleSoft Business Groups](https://anypoint.mulesoft.com/accounts/businessGroups) and copy the ID from the URL.
Configure AWS S3 bucket and IAM for Google SecOps

1. Create an **Amazon S3 bucket** following this user guide: [Creating a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/creating-bucket.html).
2. Save the bucket **Name** and **Region** for future reference (for example, `mulesoft-audit-logs`).
3. Create a **User** following this user guide: [Creating an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html#id_users_create_console).
4. Select the created **User**.
5. Select the **Security credentials** tab.
6. Click **Create Access Key** in the **Access Keys** section.
7. Select **Third-party service** as the **Use case**.
8. Click **Next**.
9. Optional: add a description tag.
10. Click **Create access key**.
11. Click **Download CSV file** to save the **Access Key** and **Secret Access Key** for future reference.
12. Click **Done**.
13. Select the **Permissions** tab.
14. Click **Add permissions** in the **Permissions policies** section.
15. Select **Attach policies directly**.
16. Search for and select the **AmazonS3FullAccess** policy.
17. Click **Next**.
18. Click **Add permissions**.

Create the MuleSoft Connected App

1. Sign in to the Anypoint Platform.
2. Go to **Access Management > Connected Apps > Create App**.
3. Provide the following configuration details:
   - **App name**: enter a unique name (for example, `Google SecOps export`).
   - Select **App acts on its own behalf (client credentials)**.
   - Click **Add scopes > Audit Log Viewer > Next**.
   - Select every Business Group whose logs you need.
   - Click **Next > Add scopes**.
4. Click **Save** and copy the **Client ID** and **Client Secret**.

Configure IAM policy and role for S3 uploads

1. Go to **AWS console > IAM > Policies > Create policy > JSON tab**.
2. Copy and paste the following policy, replacing `mulesoft-audit-logs` with your bucket name:

       {
         "Version": "2012-10-17",
         "Statement": [
           {
             "Sid": "AllowPutAuditObjects",
             "Effect": "Allow",
             "Action": ["s3:PutObject"],
             "Resource": "arn:aws:s3:::mulesoft-audit-logs/*"
           }
         ]
       }

3. Click **Next > Create policy**.
4. Go to **IAM > Roles > Create role > AWS service > Lambda**.
5. Attach the newly created policy.
6. Name the role `WriteMulesoftToS3Role` and click **Create role**.
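Before deploying, you can sanity-check the client-credentials exchange the function performs against the Anypoint token endpoint. This is an offline sketch: `build_token_request` is a hypothetical helper that only mirrors the form encoding the Lambda sends; it does not call the API.

```python
import urllib.parse

# Same token endpoint the Lambda uses.
TOKEN_URL = "https://anypoint.mulesoft.com/accounts/api/v2/oauth2/token"

def build_token_request(client_id: str, client_secret: str):
    """Build the form-encoded client-credentials body the Lambda posts to TOKEN_URL."""
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    }).encode()
    return TOKEN_URL, body

url, body = build_token_request("my_id", "my_secret")
print(body.decode())  # grant_type=client_credentials&client_id=my_id&client_secret=my_secret
```

Posting this body to `TOKEN_URL` (for example with `urllib.request.urlopen`) should return a JSON object containing `access_token`, which is what `get_token` extracts below.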
Create the Lambda function

| Setting | Value |
|---|---|
| **Name** | `mulesoft_audit_to_s3` |
| **Runtime** | Python 3.13 |
| **Architecture** | x86_64 |
| **Execution role** | Use existing > `WriteMulesoftToS3Role` |
After the function is created, open the **Code** tab, delete the stub, and enter the following code (`mulesoft_audit_to_s3.py`).
    #!/usr/bin/env python3

    import os, json, gzip, io, uuid, datetime as dt, urllib.request, urllib.error, urllib.parse
    import boto3

    ORG_ID = os.environ["MULE_ORG_ID"]
    CLIENT_ID = os.environ["CLIENT_ID"]
    CLIENT_SECRET = os.environ["CLIENT_SECRET"]
    S3_BUCKET = os.environ["S3_BUCKET_NAME"]

    TOKEN_URL = "https://anypoint.mulesoft.com/accounts/api/v2/oauth2/token"
    QUERY_URL = f"https://anypoint.mulesoft.com/audit/v2/organizations/{ORG_ID}/query"

    def http_post(url, data, headers=None):
        raw = json.dumps(data).encode() if headers else urllib.parse.urlencode(data).encode()
        req = urllib.request.Request(url, raw, headers or {})
        try:
            with urllib.request.urlopen(req, timeout=30) as r:
                return json.loads(r.read())
        except urllib.error.HTTPError as e:
            print("MuleSoft error body →", e.read().decode())
            raise

    def get_token():
        return http_post(TOKEN_URL, {
            "grant_type": "client_credentials",
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET
        })["access_token"]

    def fetch_audit(token, start, end):
        headers = {
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json"
        }
        body = {
            "startDate": f"{start.isoformat(timespec='milliseconds')}Z",
            "endDate": f"{end.isoformat(timespec='milliseconds')}Z",
            "limit": 200,
            "offset": 0,
            "ascending": False
        }
        while True:
            data = http_post(QUERY_URL, body, headers)
            if not data.get("data"):
                break
            yield from data["data"]
            body["offset"] += body["limit"]

    def upload(events, ts):
        key = f"{ts:%Y/%m/%d}/mulesoft-audit-{uuid.uuid4()}.json.gz"
        buf = io.BytesIO()
        with gzip.GzipFile(fileobj=buf, mode="w") as gz:
            for ev in events:
                gz.write((json.dumps(ev) + "\n").encode())
        buf.seek(0)
        boto3.client("s3").upload_fileobj(buf, S3_BUCKET, key)

    def lambda_handler(event=None, context=None):
        now = dt.datetime.utcnow().replace(microsecond=0)
        start = now - dt.timedelta(days=1)

        token = get_token()
        events = list(fetch_audit(token, start, now))

        if events:
            upload(events, start)
            print(f"Uploaded {len(events)} events")
        else:
            print("No events in the last 24 h")

    # For local testing
    if __name__ == "__main__":
        lambda_handler()
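The `upload` helper above writes each batch as gzip-compressed NDJSON (one JSON object per line). A short sketch of that round-trip, using made-up sample events, which is handy when spot-checking objects pulled back from the bucket:

```python
import gzip
import io
import json

# Hypothetical sample events standing in for MuleSoft audit records.
events = [{"id": 1, "action": "Login"}, {"id": 2, "action": "Logout"}]

# Compress the events the same way the Lambda's upload() does.
buf = io.BytesIO()
with gzip.GzipFile(fileobj=buf, mode="w") as gz:
    for ev in events:
        gz.write((json.dumps(ev) + "\n").encode())
buf.seek(0)

# Decompress and parse line by line, as a downstream reader would.
lines = gzip.decompress(buf.read()).decode().splitlines()
decoded = [json.loads(line) for line in lines]
print(decoded == events)  # True
```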
1. Go to **Configuration > Environment variables > Edit > Add new environment variable**.
2. Enter the following environment variables, replacing the example values with your own:

   | Key | Example value |
   |---|---|
   | `MULE_ORG_ID` | `your_org_id` |
   | `CLIENT_ID` | `your_client_id` |
   | `CLIENT_SECRET` | `your_client_secret` |
   | `S3_BUCKET_NAME` | `mulesoft-audit-logs` |
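Each run queries a rolling 24-hour window, and the audit API receives `startDate` and `endDate` as millisecond-precision ISO 8601 strings with a literal `Z` suffix, exactly as built in `fetch_audit`. A small sketch of that formatting, using a fixed example trigger time:

```python
import datetime as dt

now = dt.datetime(2025, 8, 21, 2, 0, 0)  # example trigger time (UTC)
start = now - dt.timedelta(days=1)       # rolling 24-hour window

# Same formatting fetch_audit() applies to the query body.
start_date = f"{start.isoformat(timespec='milliseconds')}Z"
end_date = f"{now.isoformat(timespec='milliseconds')}Z"
print(start_date)  # 2025-08-20T02:00:00.000Z
print(end_date)    # 2025-08-21T02:00:00.000Z
```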
Schedule the Lambda function (EventBridge Scheduler)
1. Go to **Configuration > Triggers > Add trigger > EventBridge Scheduler > Create rule**.
2. Provide the following configuration details:
   - **Name**: `daily-mulesoft-audit-export`
   - **Schedule pattern**: **Cron expression**
   - **Expression**: `0 2 * * *` (runs daily at 02:00 UTC)
3. Leave the rest as default and click **Create**.

Configure a feed in Google SecOps to ingest the MuleSoft logs

1. Go to **SIEM Settings > Feeds**.
2. Click **Add new**.
3. In the **Feed name** field, enter a name for the feed (for example, `MuleSoft Logs`).
4. Select **Amazon S3 V2** as the **Source type**.
5. Select **Mulesoft** as the **Log type**.
6. Click **Next**.
7. Specify values for the following input parameters:
   - **S3 URI**: the bucket URI, for example `s3://mulesoft-audit-logs/` (replace `mulesoft-audit-logs` with the actual name of the bucket).
   - **Source deletion options**: select the deletion option according to your preference. If you select **Delete transferred files** or **Delete transferred files and empty directories**, make sure that you granted appropriate permissions to the service account.
   - **Maximum File Age**: include files modified in the last number of days. Default is 180 days.
   - **Access Key ID**: user access key with access to the S3 bucket.
   - **Secret Access Key**: user secret key with access to the S3 bucket.
   - **Asset namespace**: the [asset namespace](/chronicle/docs/investigation/asset-namespaces).
   - **Ingestion labels**: the label to be applied to the events from this feed.
8. Click **Next**.
9. Review your new feed configuration in the **Finalize** screen, and then click **Submit**.

**Need more help?** [Get answers from Community members and Google SecOps professionals.](https://security.googlecloudcommunity.com/google-security-operations-2)

Last updated 2025-08-21 UTC.
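Before submitting the feed, it can be useful to confirm that objects are landing under the date-partitioned prefix the Lambda writes (`YYYY/MM/DD/mulesoft-audit-<uuid>.json.gz`). A hedged sketch: `day_prefix` is a hypothetical helper mirroring the key pattern from `upload()`, and the commented-out listing call assumes boto3 with default AWS credentials and the example bucket name from above.

```python
import datetime as dt

def day_prefix(ts: dt.datetime) -> str:
    # Mirrors the key prefix used by the Lambda's upload() helper.
    return f"{ts:%Y/%m/%d}/"

prefix = day_prefix(dt.datetime(2025, 8, 20))
print(prefix)  # 2025/08/20/

# Uncomment to list that day's objects in the example bucket:
# import boto3
# s3 = boto3.client("s3")
# resp = s3.list_objects_v2(Bucket="mulesoft-audit-logs", Prefix=prefix)
# for obj in resp.get("Contents", []):
#     print(obj["Key"])
```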