An ingestion label identifies the parser that normalizes raw log data to structured UDM format. The information in this document applies to the parser with the AWS_RDS ingestion label.
Before you begin
Ensure you have the following prerequisites:
An AWS account that you can sign in to
A global administrator or RDS administrator
How to configure AWS RDS
Use an existing database or create a new database:
To use an existing database, select the database, click Modify, and then select Log exports.
To use a new database, when you create the database, select Additional configuration.
To publish to Amazon CloudWatch, select the following log types:
Audit log
Error log
General log
Slow query log
To specify log export for AWS Aurora PostgreSQL and PostgreSQL, select PostgreSQL log.
To specify log export for AWS Microsoft SQL Server, select the following log types:
Agent log
Error log
Save the log configuration.
Select CloudWatch > Logs to view the collected logs. The log groups are created automatically after the logs become available through the instance.
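If you prefer to script this configuration, the same log exports can be enabled through the AWS SDK. The following is a minimal sketch, assuming a MySQL-family instance with the hypothetical identifier my-rds-instance and credentials already configured for boto3; adjust the log types to match your engine.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")   # assumed region
logs = boto3.client("logs", region_name="us-east-1")

# Enable export of the audit, error, general, and slow query logs to CloudWatch.
# For PostgreSQL engines use ["postgresql"]; for SQL Server use ["agent", "error"].
rds.modify_db_instance(
    DBInstanceIdentifier="my-rds-instance",           # hypothetical instance name
    CloudwatchLogsExportConfiguration={
        "EnableLogTypes": ["audit", "error", "general", "slowquery"]
    },
    ApplyImmediately=True,
)

# The log groups appear under /aws/rds/instance/<name>/ once logs start flowing.
for group in logs.describe_log_groups(
    logGroupNamePrefix="/aws/rds/instance/my-rds-instance/"
)["logGroups"]:
    print(group["logGroupName"])
```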
To publish the logs to CloudWatch, configure the IAM user and KMS key policies. For more information, see IAM user and KMS key policies.
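The exact statements depend on your engine and encryption setup, so the AWS documentation referenced above is the authority here. As an illustration only, a user policy granting the CloudWatch Logs actions commonly involved with the RDS log groups could be attached like this; the user name and policy name are hypothetical.

```python
import json
import boto3

iam = boto3.client("iam")

# Illustrative policy only: it grants the CloudWatch Logs actions commonly
# associated with creating and writing to the RDS log groups. Use the AWS
# documentation referenced above for the exact IAM and KMS key statements
# your engine and encryption setup require.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents",
                "logs:DescribeLogGroups",
                "logs:DescribeLogStreams",
            ],
            "Resource": "arn:aws:logs:*:*:log-group:/aws/rds/*",
        }
    ],
}

iam.put_user_policy(
    UserName="rds-log-publisher",        # hypothetical IAM user
    PolicyName="rds-cloudwatch-logs",    # hypothetical policy name
    PolicyDocument=json.dumps(policy_document),
)
```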
Based on the service and region, identify the endpoints for connectivity by referring to the following AWS documentation:
For information about any logging sources, see AWS Identity and Access Management endpoints and quotas.
For information about CloudWatch logging sources, see CloudWatch Logs endpoints and quotas.
Set up feeds
There are two different entry points to set up feeds in the Google SecOps platform:
SIEM Settings > Feeds > Add New
Content Hub > Content Packs > Get Started
How to set up the AWS RDS feed
Click the Amazon Cloud Platform pack.
Locate the AWS RDS log type.
Google SecOps supports log collection using an access key ID and secret method.
To create the access key ID and secret, see Configure tool authentication with AWS.
Specify the values in the following fields.
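If you manage credentials programmatically, the access key pair can also be created with the AWS SDK. This is a minimal sketch, assuming an existing IAM user with the hypothetical name secops-feed-user; the linked AWS documentation remains the authoritative procedure.

```python
import boto3

iam = boto3.client("iam")

# Create an access key pair for the IAM user that the Google SecOps feed will use.
# The secret is returned only once, so store it securely right away.
response = iam.create_access_key(UserName="secops-feed-user")  # hypothetical user

access_key_id = response["AccessKey"]["AccessKeyId"]           # 20-character ID
secret_access_key = response["AccessKey"]["SecretAccessKey"]   # 40-character secret
print(access_key_id)
```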
Source Type: Amazon SQS V2
Queue Name: The SQS queue name to read from
S3 URI: The bucket URI.
s3://your-log-bucket-name/
Replace your-log-bucket-name with the actual name of your S3 bucket.
Source deletion options: Select the deletion option according to your ingestion preferences.
Maximum File Age: Include files modified within the last number of days. The default is 180 days.
SQS Queue Access Key ID: An account access key that is a 20-character alphanumeric string.
SQS Queue Secret Access Key: An account access key that is a 40-character alphanumeric string.
Advanced options
Feed Name: A prepopulated value that identifies the feed.
Asset Namespace: The namespace associated with the feed.
Ingestion Labels: Labels applied to all events from this feed.
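Before creating the feed, it can be useful to confirm that the access key pair can actually reach the queue and bucket entered above. The following verification sketch is not part of the Google SecOps setup itself and uses the hypothetical names your-queue-name and your-log-bucket-name.

```python
import boto3

session = boto3.Session(
    aws_access_key_id="AKIA...",   # the SQS Queue Access Key ID from above
    aws_secret_access_key="...",   # the SQS Queue Secret Access Key from above
    region_name="us-east-1",       # assumed region
)

# Confirm the queue exists and is visible to these credentials.
sqs = session.client("sqs")
queue_url = sqs.get_queue_url(QueueName="your-queue-name")["QueueUrl"]
print("Queue URL:", queue_url)

# Confirm the credentials can list objects in the log bucket.
s3 = session.client("s3")
listing = s3.list_objects_v2(Bucket="your-log-bucket-name", MaxKeys=5)
for obj in listing.get("Contents", []):
    print(obj["Key"])
```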
Click Create feed.
For more information about configuring multiple feeds for different log types within this product family, see Configure feeds by product.
Field mapping reference
This parser extracts fields from AWS RDS syslog messages, primarily focusing on the timestamp, description, and client IP. It uses grok patterns to identify these fields, populates the corresponding UDM fields, and classifies events as either GENERIC_EVENT or STATUS_UPDATE based on the presence of a client IP.
UDM mapping table
Log field | UDM mapping | Logic
client_ip | principal.ip | Extracted from the raw log message using the regular expression \[CLIENT: %{IP:client_ip}\].
create_time.nanos | N/A | Not mapped to the IDM object.
create_time.seconds | N/A | Not mapped to the IDM object.
descrip | metadata.description | The descriptive message from the log, extracted using grok patterns.
 | metadata.event_timestamp | Copied from create_time.seconds and create_time.nanos.
 | metadata.event_type | Defaults to GENERIC_EVENT. Changed to STATUS_UPDATE if client_ip is present.
 | metadata.product_name | Static value "AWS_RDS", set by the parser.
 | metadata.vendor_name | Static value "AWS_RDS", set by the parser.
pid | principal.process.pid | Extracted from the descrip field using the regular expression process ID of %{INT:pid}.
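The grok patterns above translate directly into ordinary regular expressions. The following sketch only approximates the behavior described by the mapping table for illustration; it is not the parser code itself, and the sample log line is invented.

```python
import re

# Rough equivalents of the grok patterns described in the mapping table.
CLIENT_IP_RE = re.compile(r"\[CLIENT: (?P<client_ip>\d{1,3}(?:\.\d{1,3}){3})\]")
PID_RE = re.compile(r"process ID of (?P<pid>\d+)")

def classify(raw_message: str) -> dict:
    """Build a minimal UDM-like dict the way the mapping table describes."""
    event = {
        "metadata": {
            "description": raw_message,
            "product_name": "AWS_RDS",
            "vendor_name": "AWS_RDS",
            "event_type": "GENERIC_EVENT",  # default when no client IP is found
        },
        "principal": {},
    }
    if match := CLIENT_IP_RE.search(raw_message):
        event["principal"]["ip"] = [match.group("client_ip")]
        event["metadata"]["event_type"] = "STATUS_UPDATE"
    if match := PID_RE.search(raw_message):
        event["principal"]["process"] = {"pid": int(match.group("pid"))}
    return event

# Invented sample line for illustration only.
print(classify("2024-05-01 12:00:00 UTC: connection received [CLIENT: 10.0.0.5] process ID of 4242"))
```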
[[["Mudah dipahami","easyToUnderstand","thumb-up"],["Memecahkan masalah saya","solvedMyProblem","thumb-up"],["Lainnya","otherUp","thumb-up"]],[["Sulit dipahami","hardToUnderstand","thumb-down"],["Informasi atau kode contoh salah","incorrectInformationOrSampleCode","thumb-down"],["Informasi/contoh yang saya butuhkan tidak ada","missingTheInformationSamplesINeed","thumb-down"],["Masalah terjemahan","translationIssue","thumb-down"],["Lainnya","otherDown","thumb-down"]],["Terakhir diperbarui pada 2025-09-04 UTC."],[[["\u003cp\u003eThis document provides instructions on how to collect AWS RDS logs and ingest them into Google SecOps using a Google SecOps feed.\u003c/p\u003e\n"],["\u003cp\u003eTo collect logs from AWS RDS, you must first configure log exports in your RDS database, selecting the appropriate log types for your database engine and publishing them to Amazon CloudWatch.\u003c/p\u003e\n"],["\u003cp\u003eThe \u003ccode\u003eAWS_RDS\u003c/code\u003e ingestion label is used for the parser that structures raw log data into the Unified Data Model (UDM) format in Google SecOps.\u003c/p\u003e\n"],["\u003cp\u003eConfiguring a feed in Google SecOps involves selecting either Amazon S3 or Amazon SQS as the source type and providing the necessary AWS credentials and parameters, such as region, S3 URI, or queue details.\u003c/p\u003e\n"],["\u003cp\u003eThe parser for AWS RDS logs extracts key information like timestamp, description, and client IP, mapping these to corresponding UDM fields, and categorizes events as either \u003ccode\u003eGENERIC_EVENT\u003c/code\u003e or \u003ccode\u003eSTATUS_UPDATE\u003c/code\u003e based on the log content.\u003c/p\u003e\n"]]],[],null,["# Collect AWS RDS logs\n====================\n\nSupported in: \nGoogle secops [SIEM](/chronicle/docs/secops/google-secops-siem-toc)\n| **Note:** This feature is covered by [Pre-GA Offerings Terms](https://chronicle.security/legal/service-terms/) of the Google Security Operations Service Specific Terms. Pre-GA features might have limited support, and changes to pre-GA features might not be compatible with other pre-GA versions. For more information, see the [Google SecOps Technical Support Service guidelines](https://chronicle.security/legal/technical-support-services-guidelines/) and the [Google SecOps Service Specific Terms](https://chronicle.security/legal/service-terms/).\n\nThis document describes how you can collect AWS RDS logs by\nsetting up a Google SecOps feed.\n\nFor more information, see [Data ingestion to Google SecOps](/chronicle/docs/data-ingestion-flow).\n\nAn ingestion label identifies the parser that normalizes raw log data\nto structured UDM format. The information in this document applies to the parser with the `AWS_RDS` ingestion label.\n\nBefore you begin\n----------------\n\nEnsure you have the following prerequisites:\n\n- An AWS account that you can sign in to\n\n- A global administrator or RDS administrator\n\nHow to configure AWS RDS\n------------------------\n\n1. Use an existing database or create a new database:\n - To use an existing database, select the database, click **Modify** , and then select **Log exports**.\n - To use a new database, when you create the database, select **Additional configuration**.\n2. To publish to Amazon CloudWatch, select the following log types:\n - **Audit log**\n - **Error log**\n - **General log**\n - **Slow query log**\n3. To specify log export for AWS Aurora PostgreSQL and PostgreSQL, select **PostgreSQL log**.\n4. 
To specify log export for AWS Microsoft SQL server, select the following log types:\n - **Agent log**\n - **Error log**\n5. Save the log configuration.\n6. Select **CloudWatch \\\u003e Logs** to view the collected logs. The log groups are automatically created after the logs are available through the instance.\n\nTo publish the logs to CloudWatch, configure IAM user and KMS key policies. For more information, see [IAM user and KMS key policies](https://docs.aws.amazon.com/kms/latest/developerguide/iam-policies.html).\n\nBased on the service and region, identify the endpoints for connectivity by referring to the following AWS documentation:\n\n- For information about any logging sources, see [AWS Identity and Access Management endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/iam-service.html).\n\n- For information about CloudWatch logging sources, see [CloudWatch logs endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/cwl_region.html).\n\nFor engine-specific information, see the following documentation:\n\n- [Publishing MariaDB logs to Amazon CloudWatch Logs](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.Concepts.MariaDB.html#USER_LogAccess.MariaDB.PublishtoCloudWatchLogs).\n\n- [Publishing MySQL logs to Amazon CloudWatch Logs](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.MySQLDB.PublishtoCloudWatchLogs.html).\n\n- [Publishing Oracle logs to Amazon CloudWatch Logs](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.Concepts.Oracle.html#USER_LogAccess.Oracle.PublishtoCloudWatchLogs).\n\n- [Publishing PostgreSQL logs to Amazon CloudWatch Logs](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.Concepts.PostgreSQL.html#USER_LogAccess.Concepts.PostgreSQL.PublishtoCloudWatchLogs).\n\n- [Publishing SQL Server logs to Amazon CloudWatch Logs](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.Concepts.SQLServer.html#USER_LogAccess.SQLServer.PublishtoCloudWatchLogs).\n\nSet up feeds\n------------\n\nThere are two different entry points to set up feeds in the\nGoogle SecOps platform:\n\n- **SIEM Settings \\\u003e Feeds \\\u003e Add New**\n- **Content Hub \\\u003e Content Packs \\\u003e Get Started**\n\nHow to set up the AWS RDS feed\n------------------------------\n\n1. Click the **Amazon Cloud Platform** pack.\n2. Locate the **AWS RDS** log type.\n3. Google SecOps supports log collection using an access key ID and secret method. To create the access key ID and secret, see [Configure tool authentication with AWS](https://docs.aws.amazon.com/powershell/latest/userguide/creds-idc.html).\n4. Specify the values in the following fields.\n\n - **Source Type**: Amazon SQS V2\n - **Queue Name**: The SQS queue name to read from\n - **S3 URI** : The bucket URI.\n - `s3://your-log-bucket-name/`\n - Replace `your-log-bucket-name` with the actual name of your S3 bucket.\n - **Source deletion options**: Select the deletion option according to your ingestion preferences.\n\n | **Note:** If you select the `Delete transferred files` or `Delete transferred files and empty directories` option, make sure that you granted appropriate permissions to the service account.\n - **Maximum File Age**: Include files modified in the last number of days. 
Default is 180 days.\n\n - **SQS Queue Access Key ID**: An account access key that is a 20-character alphanumeric string.\n\n - **SQS Queue Secret Access Key**: An account access key that is a 40-character alphanumeric string.\n\n **Advanced options**\n - **Feed Name**: A prepopulated value that identifies the feed.\n - **Asset Namespace**: Namespace associated with the feed.\n - **Ingestion Labels**: Labels applied to all events from this feed.\n5. Click **Create feed**.\n\n| **Note:** The Content Hub is not available on the SIEM standalone platform. To upgrade, contact your Google SecOps representative.\n\nFor more information about configuring multiple feeds for different log types within this product family, see [Configure feeds by product](/chronicle/docs/ingestion/ingestion-entities/configure-multiple-feeds).\n\nField mapping reference\n-----------------------\n\nThis parser extracts fields from AWS RDS syslog messages, primarily focusing on timestamp, description, and client IP. It uses grok patterns to identify these fields and populates corresponding UDM fields, classifying events as either `GENERIC_EVENT` or `STATUS_UPDATE` based on the presence of a client IP.\n\nUDM mapping table\n-----------------\n\n**Need more help?** [Get answers from Community members and Google SecOps professionals.](https://security.googlecloudcommunity.com/google-security-operations-2)"]]