This section contains information about:
The behavior of how Datastream handles data that's being pulled from a source Oracle database
The versions of Oracle database that Datastream supports
An overview of how to set up a source Oracle database so that data can be streamed from it to a destination
Known limitations for using Oracle database as a source
Behavior
Datastream supports two methods of extracting changes to the data from online redo log files: the Oracle binary log reader (Preview) and Oracle LogMiner.
With the binary log reader method (Preview), the following behavior is observed:
If there's a read lag when extracting changes from the online log files, Datastream extracts the changes from archived log files.
Datastream replicates only committed changes into the destination. Uncommitted or rolled-back transactions aren't replicated.
The binary reader supports replicating Oracle VARCHAR2 columns longer than 4,000 characters.
Datastream also supports the Oracle LogMiner feature for exposing changes to the data. This method has the following behavior:
All schemas or specific schemas from a given database, as well as all tables from those schemas or specific tables, can be selected.
All historical data is replicated.
All data manipulation language (DML) changes, such as inserts, updates, and deletes from the specified databases and tables, are replicated.
Datastream replicates both committed and, in some cases, uncommitted changes into the destination. Datastream reads uncommitted changes. In case of a rollback, the Datastream output also includes the opposite operation. For example, if an INSERT operation is rolled back, then the output records also contain a corresponding DELETE operation. In this case, the event appears as a DELETE event with only the ROWID.
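The rollback compensation described above can be sketched with a small simulation. This is illustrative only: the event dictionaries below (fields "op", "rowid", "data") are assumed shapes, not the actual Datastream event schema.

```python
# Illustrative only: simulated change events, not the actual Datastream
# event schema. Field names ("op", "rowid", "data") are assumptions.

def apply_events(events):
    """Apply a stream of row-level events to an in-memory table keyed by ROWID."""
    table = {}
    for event in events:
        if event["op"] in ("INSERT", "UPDATE"):
            table[event["rowid"]] = event["data"]
        elif event["op"] == "DELETE":
            # A compensating DELETE for a rolled-back INSERT carries only the
            # ROWID, so removing by ROWID undoes the uncommitted insert.
            table.pop(event["rowid"], None)
    return table

# An INSERT that was rolled back at the source arrives followed by a
# compensating DELETE with only the ROWID; the net effect is no row.
events = [
    {"op": "INSERT", "rowid": "AAAX1", "data": {"id": 1, "name": "alice"}},
    {"op": "INSERT", "rowid": "AAAX2", "data": {"id": 2, "name": "bob"}},
    {"op": "DELETE", "rowid": "AAAX2", "data": None},  # rollback compensation
]
print(apply_events(events))  # only the committed row for AAAX1 remains
```

Applying the compensating DELETE by ROWID is how a consumer can end up with only the committed state, even though uncommitted changes were read.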
ROWID-based backfill
In Oracle, ROWID is a pseudocolumn that stores a unique identifier for each row in a table. Datastream uses ROWID values for its backfill operations. Because of this, we recommend that you don't perform any actions that could change the ROWID values in your source Oracle database until the backfill operation completes.
The actions that can change ROWID values include:
Physical movement of rows:
Export and import operations: when you export a table and then import it back, the physical location of rows might change, resulting in new ROWID values.
ALTER TABLE (...) MOVE command: moving a table to a different tablespace can change the physical storage and lead to ROWID changes.
ALTER TABLE (...) SHRINK SPACE command: this command compresses the table, potentially moving rows and affecting their ROWID values.
Partitioning operations: splitting, merging, or moving partitions can change the physical placement of rows and their ROWID values.
Flashback operations:
FLASHBACK TABLE command: restoring a table to a previous state involves deleting and re-inserting rows, thus creating new ROWID values.
FLASHBACK_TRANSACTION_QUERY: similar to FLASHBACK TABLE. Rolling back a transaction can cause ROWID changes if rows were deleted or updated within the transaction.
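A toy model makes the hazard concrete: a ROWID encodes a row's physical location, so any of the operations above can leave a backfill holding bookmarks that no longer match any row. The simulation below stands in for a physical reorganization; it is not Datastream code.

```python
# A toy model of why ROWID-keyed backfill breaks when rows move physically.
# An ALTER TABLE ... MOVE (simulated here by reassigning dictionary keys)
# gives every row a new ROWID while the logical data stays the same.

def snapshot_rowids(table):
    """Record the ROWIDs a backfill would use as its progress bookmarks."""
    return set(table.keys())

def move_table(table, new_rowid):
    """Simulate a physical reorganization: same rows, brand-new ROWIDs."""
    return {new_rowid(i): row for i, row in enumerate(table.values())}

table = {"AAAX1": {"id": 1}, "AAAX2": {"id": 2}}
bookmarks = snapshot_rowids(table)

# Rows are unchanged logically, but every ROWID bookmark is now stale.
moved = move_table(table, lambda i: f"BBBY{i}")
stale = bookmarks - set(moved.keys())
print(stale)  # every old ROWID no longer exists
```

This is why the recommendation above is to defer such operations until the backfill completes: stale bookmarks can make the backfill miss or double-read rows.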
Versions
Datastream supports the following versions of Oracle database:
Oracle 11g, Version 11.2.0.4 (supported only with the LogMiner CDC method)
Oracle 12c, Version 12.1.0.2
Oracle 12c, Version 12.2.0.1
Oracle 18c
Oracle 19c
Oracle 21c
Datastream supports the following types of Oracle database:
Self-hosted on-premises or on any cloud provider
Amazon RDS for Oracle
Oracle Cloud
Oracle Exadata
Oracle RAC
Oracle Active Data Guard standby database
Setup
To set up a source Oracle database so that data from it can be streamed into a destination, you must configure the database to grant access, set up logging, and define a retention policy.
See Configure a source Oracle database to learn how to configure this database so that Datastream can pull data from it into a destination.
Known limitations
Known limitations for using Oracle database as a source include:
Streams are limited to 10,000 tables. If a stream includes more than 10,000 tables, it might run into errors.
Datastream supports Oracle multi-tenant architecture (CDB/PDB); however, you can only replicate a single pluggable database in a stream.
Oracle Autonomous Database isn't supported.
For tables that don't have a primary key, Datastream uses the row's ROWID to perform a merge operation on the consumer side. Note that the ROWID might not be unique. If you delete and reinsert a row with Oracle's Export/Import utility, for example, then the row's ROWID might change. If you delete a row, Oracle can reassign its ROWID to a new row inserted later.
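The ROWID-reuse caveat for tables without a primary key can be shown in a few lines. The merge below is a hedged sketch of consumer-side logic, with an assumed event shape, not the actual Datastream payload.

```python
# Hedged sketch of a consumer-side merge keyed by ROWID, for a table with
# no primary key. Event shape is an assumption, not Datastream's payload.

def merge(target, events):
    """Upsert or delete rows in `target`, keyed by ROWID."""
    for event in events:
        if event["op"] == "DELETE":
            target.pop(event["rowid"], None)
        else:  # INSERT and UPDATE both upsert by ROWID
            target[event["rowid"]] = event["data"]
    return target

state = merge({}, [
    {"op": "INSERT", "rowid": "AAAZ1", "data": {"sku": "A", "qty": 5}},
    {"op": "DELETE", "rowid": "AAAZ1", "data": None},
    # Oracle may later assign the freed ROWID to an unrelated new row; the
    # consumer has no way to tell the two logical rows apart by this key.
    {"op": "INSERT", "rowid": "AAAZ1", "data": {"sku": "B", "qty": 9}},
])
print(state)  # {'AAAZ1': {'sku': 'B', 'qty': 9}}
```

In strict event order this resolves correctly, but because the key identifies a physical slot rather than a logical row, any reordering or overlap with backfill can attribute changes to the wrong row.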
Index-organized tables (IOTs) aren't supported.
Temporary tables aren't supported.
Columns of data types ANYDATA, BFILE, INTERVAL DAY TO SECOND, INTERVAL YEAR TO MONTH, LONG/LONG RAW, SDO_GEOMETRY, UDT, UROWID, and XMLTYPE aren't supported, and are replaced with NULL values.
To stream columns of large object data types, such as binary large objects (BLOB), character large objects (CLOB), and national character large objects (NCLOB), you need to include the streamLargeObjects flag in your stream configuration. If you don't include the flag, Datastream doesn't stream such columns and they're replaced with NULL values in the destination. For more information, see Enable streaming of large objects for Oracle sources.
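For orientation, the fragment below sketches where the streamLargeObjects flag sits inside a stream's source configuration. The surrounding field nesting follows the Datastream REST API as best understood here; treat it as an assumption and verify against the current API reference.

```json
{
  "sourceConfig": {
    "oracleSourceConfig": {
      "streamLargeObjects": {}
    }
  }
}
```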
For Oracle 11g, tables that have columns of data types ANYDATA or UDT aren't supported, and the entire table won't be replicated.
Oracle Label Security (OLS) isn't replicated.
Datastream periodically fetches the latest schema from the source as events are processed. If a schema changes, some events from the new schema might be read while the old schema is still applied. In this case, Datastream detects the schema change, triggers a schema fetch, and reprocesses the failed events.
Not all changes to the source schema can be detected automatically, in which case data corruption may occur. The following schema changes may cause data corruption or failure to process events downstream:
Dropping columns
Adding columns to the middle of a table
Changing the data type of a column
Reordering columns
Dropping tables (relevant if the same table is then recreated with new data added)
Truncating tables
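The corruption risk from the changes above can be illustrated by positional decoding: if column values are mapped to names by position, any reader still applying the old column order misattributes values until the schema change is detected. This is purely illustrative, not Datastream's internal mechanism.

```python
# Why adding a column to the middle of a table (or reordering columns) can
# corrupt downstream data: a consumer mapping values to columns by position
# keeps decoding with the stale column order until the change is detected.

OLD_SCHEMA = ["id", "name", "city"]
NEW_SCHEMA = ["id", "email", "name", "city"]  # "email" added in the middle

def decode(schema, values):
    """Map a positional row to named columns (zip truncates extras)."""
    return dict(zip(schema, values))

row = [7, "a@example.com", "alice", "Berlin"]  # produced under NEW_SCHEMA

correct = decode(NEW_SCHEMA, row)
misread = decode(OLD_SCHEMA, row)  # stale schema still applied

print(correct)  # values land under the right column names
print(misread)  # every value after "id" is shifted into the wrong column
```

Appending columns at the end of a table avoids this shift, which is why mid-table additions and reorderings are called out specifically.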
Datastream doesn't support replicating views.
Datastream supports materialized views. However, new views created while the stream is running aren't backfilled automatically.
When using the Oracle LogMiner method, SAVEPOINT statements aren't supported and can cause data discrepancy in case of a rollback.
When using the Oracle LogMiner method, Datastream doesn't support replicating tables and columns whose names exceed 30 characters.
Datastream supports the following character set encodings for Oracle databases:
AL16UTF16
AL32UTF8
IN8ISCII
IW8ISO8859P8
JA16SJIS
JA16SJISTILDE
KO16MSWIN949
US7ASCII
UTF8
WE8ISO8859P1
WE8ISO8859P9
WE8ISO8859P15
WE8MSWIN1252
ZHT16BIG5
Datastream doesn't support replicating zero date values. Such dates are replaced with NULL values.
Datastream doesn't support direct connectivity to databases using the Single Client Access Name (SCAN) feature in Oracle Real Application Clusters (RAC) environments. For information about potential solutions, see Oracle source behavior and limitations.
If the source is an Oracle Active Data Guard standby database, Datastream doesn't support replicating encrypted data.
Additional limitations when using the binary reader
The binary reader doesn't support the following features:
Transparent Database Encryption (TDE)
Hybrid Columnar Compression
Secure files
ASM isn't supported for Amazon RDS sources.
The binary reader CDC method doesn't support Oracle 11g and earlier versions.
What's next
Learn how to configure an Oracle source for use with Datastream.
Last updated 2025-08-12 UTC.