Work with Oracle database redo log files
During change data capture, Datastream reads Oracle redo log files to
monitor your source databases for changes and replicate them to the destination
instance. Each Oracle database has a set of online redo log files. All
transaction records on the database are recorded in the files. When the current
redo log file is rotated (or switched), the archive process
copies this file into archive storage. Meanwhile, the database promotes
another file to serve as the current file.
The Datastream Oracle connector extracts change data capture (CDC) events
from archived Oracle redo log files.
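Because Datastream reads archived redo log files, the source database needs to run in ARCHIVELOG mode so that archived log files are produced. As a quick check (a minimal sketch that assumes your database user can query V$DATABASE), you can confirm the current log mode:
SELECT LOG_MODE FROM V$DATABASE;
-- Returns ARCHIVELOG when archiving is enabled, or NOARCHIVELOG otherwise.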
Access redo log files
Datastream can use the Oracle LogMiner API or the binary
reader method to access the redo log files:
Oracle LogMiner: an out-of-the-box utility included in Oracle databases.
If you configure Datastream to use the Oracle LogMiner API, Datastream
can only work with archived redo log files; online redo log files aren't supported.
The LogMiner API method is single-threaded and is subject to higher latency and
lower throughput when working with source databases that generate a large volume
of transactions. LogMiner supports most data types and Oracle database features.
Binary reader (Preview):
a specialized, high-performance utility that works with both online and archived
redo log files. Binary reader can access the log files using Automatic Storage
Management (ASM) or by reading the files directly using database directory objects.
Binary reader is multithreaded and supports low-latency CDC. It also has a low
impact on the source database because redo logs are parsed outside of database
operations. The binary reader CDC method has limited support for certain data
types and features. For more information, see
Known limitations.
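If you use the LogMiner method, it can help to confirm that archived redo log files are actually being produced for Datastream to mine. The following query is a minimal sketch; it assumes read access to V$ARCHIVED_LOG and an Oracle 12c or later database for the FETCH FIRST clause:
SELECT SEQUENCE#, NAME, FIRST_TIME, NEXT_TIME
FROM V$ARCHIVED_LOG
ORDER BY SEQUENCE# DESC
FETCH FIRST 10 ROWS ONLY;
-- Lists the ten most recently archived redo log files and the time range each covers.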
Set configuration parameters for Oracle redo log files
This design directly affects Datastream's replication latency. If Oracle's redo log files are switched frequently or kept to a smaller size (for example, less than 256 MB), Datastream can replicate changes faster.
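To gauge how often log switches currently happen, you can count them per hour from the database's log history. This query is a minimal sketch; it assumes read access to V$LOG_HISTORY and looks only at the last 24 hours:
SELECT TO_CHAR(FIRST_TIME, 'YYYY-MM-DD HH24') AS log_hour,
       COUNT(*) AS log_switches
FROM V$LOG_HISTORY
WHERE FIRST_TIME > SYSDATE - 1
GROUP BY TO_CHAR(FIRST_TIME, 'YYYY-MM-DD HH24')
ORDER BY log_hour;
-- Returns one row per hour with the number of log switches in that hour.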
There are configuration parameters that you can set to control the log file rotation frequency:
Size: Online redo log files have a minimum size of 4 MB, and the default size depends on your operating system. You can modify the size of the log files by creating new online log files and dropping the older log files.
To find the size of the online redo log files, run the following query:
SELECT GROUP#, STATUS, BYTES/1024/1024 MB FROM V$LOG
Generating very large log files might cause Datastream to time out, which can lead to stream failure. The recommended redo log file size is below 1 GB.
For information about resizing redo logs for a self-hosted Oracle instance, see How to resize redo logs in Oracle (https://logic.edchen.org/how-to-resize-redo-logs-in-oracle). For information about resizing redo logs for an Amazon RDS Oracle instance, see Resizing online redo logs (https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.Oracle.CommonDBATasks.Log.html#Appendix.Oracle.CommonDBATasks.ResizingRedoLogs).
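The following statements are a minimal sketch of one way to rotate in smaller online redo log groups. The group numbers and the 512 MB size are placeholder assumptions, the statements assume Oracle Managed Files (otherwise, specify the file paths explicitly), and managed services such as Amazon RDS provide their own procedures for this task:
-- Add new online redo log groups at the desired size.
ALTER DATABASE ADD LOGFILE GROUP 4 SIZE 512M;
ALTER DATABASE ADD LOGFILE GROUP 5 SIZE 512M;
ALTER DATABASE ADD LOGFILE GROUP 6 SIZE 512M;
-- After log switches have made an old group INACTIVE (check STATUS in V$LOG),
-- drop it; repeat for each remaining old group.
ALTER DATABASE DROP LOGFILE GROUP 1;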
Time: The ARCHIVE_LAG_TARGET parameter sets an upper limit on how long (in seconds) the current log of the primary database can span.
This isn't the exact log switch time, because it takes into account how long it will take to archive the log. The default value is 0 (no upper bound), and a reasonable value of 1800 (or 30 minutes) or less is suggested.
You can use the following commands to set the ARCHIVE_LAG_TARGET parameter, either during initialization or while the database is up:
SHOW PARAMETER ARCHIVE_LAG_TARGET; This command displays the current value of the parameter, that is, the upper limit in seconds on how long the current log can span.
ALTER SYSTEM SET ARCHIVE_LAG_TARGET = number-of-seconds; Use this command to change the upper limit.
For example, to set the upper limit to 10 minutes (or 600 seconds), enter ALTER SYSTEM SET ARCHIVE_LAG_TARGET = 600;
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-25 UTC."],[[["\u003cp\u003eDatastream utilizes Oracle redo log files to capture changes in source databases and replicate them to the destination instance.\u003c/p\u003e\n"],["\u003cp\u003eDatastream can access redo logs using either the Oracle LogMiner API, which only supports archived files and is single-threaded, or the binary reader method, which supports both online and archived files and is multi-threaded.\u003c/p\u003e\n"],["\u003cp\u003eThe frequency of Oracle redo log file switching, controlled by size and time parameters, significantly impacts Datastream's latency, with smaller, more frequently switched files enabling faster replication.\u003c/p\u003e\n"],["\u003cp\u003eConfiguration parameters, including file size (ideally under 1GB) and the \u003ccode\u003eARCHIVE_LAG_TARGET\u003c/code\u003e parameter (recommended to be 1800 seconds or less), can be adjusted to optimize redo log file rotation for Datastream's performance.\u003c/p\u003e\n"],["\u003cp\u003eWhile manual switching of redo log files via \u003ccode\u003eALTER SYSTEM SWITCH LOGFILE;\u003c/code\u003e is possible for testing, it is not recommended for production due to required privileges and performance impact on the database.\u003c/p\u003e\n"]]],[],null,["# Work with Oracle database redo log files\n\nDuring change data capture, Datastream reads Oracle redo log files to\nmonitor your source databases for changes and replicate them to the destination\ninstance. Each Oracle database has a set of online redo log files. All\ntransaction records on the database are recorded in the files. When the current\nredo log file is rotated (or switched), the archive process\n[copies this file into an archive storage](https://docs.oracle.com/cd/B19306_01/server.102/b14231/archredo.htm#i1006148). Meanwhile, the database promotes\nanother file to serve as the current file.\n\nDatastream Oracle connector extracts change data capture (CDC) events\nfrom **archived** Oracle redo log files.\n\nAccess redo log files\n---------------------\n\nDatastream can use the Oracle LogMiner API or the binary\nreader method to access the redo log files:\n\n- **Oracle LogMiner** : an out-of-the-box utility included in Oracle databases.\n If you configure Datastream to use Oracle LogMiner API, Datastream\n can only work with *archived* redo log files, online redo log files aren't supported.\n The LogMiner API method is single-threaded and is subject to higher latency and\n lower throughput when working with large transaction number source databases.\n LogMiner supports most data types and Oracle database features.\n\n- **Binary reader** ([Preview](/products#product-launch-stages)):\n a specialized, high-performance utility that works with both online and archived\n redo log files. Binary reader can access the log files using Automatic Storage\n Management (ASM) or by reading the files directly using database directory objects.\n Binary reader is multithreaded and supports low-latency CDC. It also creates low\n impact on the source database as redo logs are parsed outside of the database\n operations. 
The binary reader CDC method has limited support for certain data\n types or features. For more information, see\n [Known limitations](/datastream/docs/sources-oracle#binary-reader-limitations).\n\nSet configuration parameters for Oracle redo log files\n------------------------------------------------------\n\nThis design has profound implications on Datastream's potential latency. If Oracle's redo log files are switched frequently or kept to a smaller size (for example, \\\u003c 256MB), Datastream can replicate changes faster.\n\nThere are configuration parameters that you can set to control the log file rotation frequency:\n\n- **Size:** Online redo log files have a minimum size of 4 MB, and the default size is dependent on your operating system. You can modify the size of the log files by creating new online log files and dropping the older log files.\n\n To find the size of the online redo log files, run the following query: \n\n ```sql\n SELECT GROUP#, STATUS, BYTES/1024/1024 MB FROM V$LOG\n ```\n | Generating very large log files might cause Datastream to time out, which can lead to stream failure. The recommended redo log file size is below 1GB.\n |\n | For information about resizing redo logs for a self-hosted Oracle instance, see [How to resize redo logs in Oracle](https://logic.edchen.org/how-to-resize-redo-logs-in-oracle).\n |\n | For information about resizing redo logs for an Amazon RDS Oracle instance, see [Resizing online redo logs](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.Oracle.CommonDBATasks.Log.html#Appendix.Oracle.CommonDBATasks.ResizingRedoLogs).\n- **Time:** The `ARCHIVE_LAG_TARGET` parameter provides an upper limit of how long (in seconds) the current log of the primary database can span.\n\n This isn't the exact log switch time, because it takes into account how long it will take to archive the log. The default value is `0` (no upper bound), and a reasonable value of `1800` (or 30 minutes) or less is suggested.\n\n You can use the following commands to set the `ARCHIVE_LAG_TARGET` parameter, either during initialization or while the database is up:\n - `SHOW PARAMETER ARCHIVE_LAG_TARGET;` This command displays how many seconds it will take for the current log to span.\n - `ALTER SYSTEM SET ARCHIVE_LAG_TARGET = `\u003cvar translate=\"no\"\u003enumber-of-seconds\u003c/var\u003e`;` Use this command to change the upper limit.\n\n For example, to set the upper limit to 10 minutes (or 600 seconds), enter `ALTER SYSTEM SET ARCHIVE_LAG_TARGET = `\u003cvar translate=\"no\"\u003e600\u003c/var\u003e`;`\n| You can switch the redo log files manually by running the following command:\n|\n| `ALTER SYSTEM SWITCH LOGFILE;`\n|\n| Although using this command is effective for testing purposes, we don't recommend it for production use-cases because of the privileges it requires and the significant performance impact on the database.\n\nWhat's next\n-----------\n\n- Learn more about [Oracle as a source](/datastream/docs/sources-oracle).\n- Learn more about [configuring a source Oracle database](/datastream/docs/configure-your-source-oracle-database)."]]