Collect AWS RDS logs
====================

Supported in:
Google SecOps [SIEM](/chronicle/docs/secops/google-secops-siem-toc)

Last updated (UTC): 2025-09-04.

| **Note:** This feature is covered by the [Pre-GA Offerings Terms](https://chronicle.security/legal/service-terms/) of the Google Security Operations Service Specific Terms. Pre-GA features might have limited support, and changes to pre-GA features might not be compatible with other pre-GA versions.
For more information, see the [Google SecOps Technical Support Service guidelines](https://chronicle.security/legal/technical-support-services-guidelines/) and the [Google SecOps Service Specific Terms](https://chronicle.security/legal/service-terms/).

This document describes how to collect AWS RDS logs by setting up a Google SecOps feed.

For more information, see [Data ingestion to Google SecOps](/chronicle/docs/data-ingestion-flow).

An ingestion label identifies the parser that normalizes raw log data to structured UDM format. The information in this document applies to the parser with the `AWS_RDS` ingestion label.

Before you begin
----------------

Ensure that you have the following prerequisites:

- An AWS account that you can sign in to
- Global administrator or RDS administrator privileges

How to configure AWS RDS
------------------------

1. Use an existing database or create a new database:
   - To use an existing database, select the database, click **Modify**, and then select **Log exports**.
   - To create a new database, select **Additional configuration** during creation.
2. To publish to Amazon CloudWatch (for example, for MySQL or MariaDB), select the following log types:
   - **Audit log**
   - **Error log**
   - **General log**
   - **Slow query log**
3. To specify log export for Aurora PostgreSQL and RDS for PostgreSQL, select **PostgreSQL log**.
4. To specify log export for Microsoft SQL Server, select the following log types:
   - **Agent log**
   - **Error log**
5. Save the log configuration.
6. Select **CloudWatch > Logs** to view the collected logs. The log groups are created automatically after the instance starts producing logs.

To publish the logs to CloudWatch, configure IAM user and KMS key policies.
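The log-export configuration described in the steps above can also be scripted. The following is a minimal sketch using boto3; the instance identifier `my-rds-instance` is a placeholder, the per-engine log-type names follow the AWS RDS API, and the API call itself requires valid AWS credentials:

```python
# Sketch: enable CloudWatch log exports for an RDS instance with boto3.
# "my-rds-instance" is a placeholder identifier, not a real instance.

# Log types vary by engine, mirroring the selection steps above.
ENGINE_LOG_TYPES = {
    "mysql": ["audit", "error", "general", "slowquery"],
    "mariadb": ["audit", "error", "general", "slowquery"],
    "postgres": ["postgresql"],
    "sqlserver": ["agent", "error"],
}

def log_export_config(engine: str) -> dict:
    """Build the CloudwatchLogsExportConfiguration value for an engine."""
    return {"EnableLogTypes": ENGINE_LOG_TYPES[engine]}

if __name__ == "__main__":
    # Guarded behind __main__ because this call needs AWS credentials.
    import boto3

    rds = boto3.client("rds")
    rds.modify_db_instance(
        DBInstanceIdentifier="my-rds-instance",  # placeholder
        CloudwatchLogsExportConfiguration=log_export_config("mysql"),
        ApplyImmediately=True,
    )
```

After the call, the matching log groups appear under **CloudWatch > Logs** once the instance starts producing logs.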
For more information, see [IAM user and KMS key policies](https://docs.aws.amazon.com/kms/latest/developerguide/iam-policies.html).

Based on the service and region, identify the endpoints for connectivity by referring to the following AWS documentation:

- For information about IAM logging sources, see [AWS Identity and Access Management endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/iam-service.html).
- For information about CloudWatch logging sources, see [CloudWatch Logs endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/cwl_region.html).

For engine-specific information, see the following documentation:

- [Publishing MariaDB logs to Amazon CloudWatch Logs](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.Concepts.MariaDB.html#USER_LogAccess.MariaDB.PublishtoCloudWatchLogs)
- [Publishing MySQL logs to Amazon CloudWatch Logs](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.MySQLDB.PublishtoCloudWatchLogs.html)
- [Publishing Oracle logs to Amazon CloudWatch Logs](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.Concepts.Oracle.html#USER_LogAccess.Oracle.PublishtoCloudWatchLogs)
- [Publishing PostgreSQL logs to Amazon CloudWatch Logs](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.Concepts.PostgreSQL.html#USER_LogAccess.Concepts.PostgreSQL.PublishtoCloudWatchLogs)
- [Publishing SQL Server logs to Amazon CloudWatch Logs](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.Concepts.SQLServer.html#USER_LogAccess.SQLServer.PublishtoCloudWatchLogs)

Set up feeds
------------

There are two different entry points to set up feeds in the Google SecOps platform:

- **SIEM Settings > Feeds > Add New**
- **Content Hub > Content Packs > Get Started**

How to set up the AWS RDS feed
------------------------------

1. Click the **Amazon Cloud Platform** pack.
2.
Locate the **AWS RDS** log type.
3. Google SecOps supports log collection using an access key ID and secret. To create the access key ID and secret, see [Configure tool authentication with AWS](https://docs.aws.amazon.com/powershell/latest/userguide/creds-idc.html).
4. Specify values in the following fields:

   - **Source Type**: Amazon SQS V2
   - **Queue Name**: The SQS queue name to read from.
   - **S3 URI**: The bucket URI, for example `s3://your-log-bucket-name/`. Replace `your-log-bucket-name` with the actual name of your S3 bucket.
   - **Source deletion options**: Select the deletion option according to your ingestion preferences.

     | **Note:** If you select the **Delete transferred files** or **Delete transferred files and empty directories** option, make sure that you have granted the appropriate permissions to the service account.
   - **Maximum File Age**: Include files modified within the last number of days. The default is 180 days.
   - **SQS Queue Access Key ID**: The account access key ID, a 20-character alphanumeric string.
   - **SQS Queue Secret Access Key**: The account secret access key, a 40-character string.

   **Advanced options**

   - **Feed Name**: A prepopulated value that identifies the feed.
   - **Asset Namespace**: The namespace associated with the feed.
   - **Ingestion Labels**: Labels applied to all events from this feed.
5. Click **Create feed**.

| **Note:** The Content Hub is not available on the SIEM standalone platform. To upgrade, contact your Google SecOps representative.

For more information about configuring multiple feeds for different log types within this product family, see [Configure feeds by product](/chronicle/docs/ingestion/ingestion-entities/configure-multiple-feeds).

Field mapping reference
-----------------------

This parser extracts fields from AWS RDS syslog messages, focusing primarily on the timestamp, description, and client IP.
It uses grok patterns to identify these fields and populates the corresponding UDM fields, classifying events as either `GENERIC_EVENT` or `STATUS_UPDATE` based on the presence of a client IP.

UDM mapping table
-----------------

**Need more help?** [Get answers from Community members and Google SecOps professionals.](https://security.googlecloudcommunity.com/google-security-operations-2)
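To illustrate the kind of extraction the field mapping reference describes, here is a hedged sketch: a Python regex standing in for the parser's grok pattern, applied to a made-up log line. The sample log format, the UDM field names, and the direction of the `GENERIC_EVENT`/`STATUS_UPDATE` classification are illustrative assumptions, not the actual `AWS_RDS` parser definitions:

```python
# Illustrative sketch only: a regex in the spirit of the grok pattern the
# parser description mentions. The log-line shape and UDM field names here
# are assumptions for demonstration, not the real AWS_RDS parser logic.
import re

# Matches hypothetical lines like:
#   "2025-09-04 12:00:00 UTC [Warning] Aborted connection to db, host 10.0.0.5"
LOG_PATTERN = re.compile(
    r"(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) UTC "
    r"(?P<description>.*?)(?: host (?P<client_ip>\d{1,3}(?:\.\d{1,3}){3}))?$"
)

def to_udm(raw_log: str) -> dict:
    """Map a raw log line to a UDM-like dict. Mirrors the rule described
    above: here, events carrying a client IP are labeled STATUS_UPDATE and
    the rest GENERIC_EVENT (the direction is an assumption)."""
    match = LOG_PATTERN.match(raw_log)
    if not match:
        return {"metadata": {"event_type": "GENERIC_EVENT"}, "raw": raw_log}
    event = {
        "metadata": {
            "event_timestamp": match.group("timestamp"),
            "description": match.group("description"),
            "event_type": "GENERIC_EVENT",
        }
    }
    if match.group("client_ip"):
        event["principal"] = {"ip": match.group("client_ip")}
        event["metadata"]["event_type"] = "STATUS_UPDATE"
    return event
```

In a real deployment the ingested logs are normalized by the `AWS_RDS` parser itself; this snippet only demonstrates how a grok-style pattern turns a timestamp, description, and optional client IP into structured fields.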