DBZ-752 Table of contents for connector docs
jpechane committed Jun 25, 2018
1 parent d3bea0b commit 70b1207
Showing 4 changed files with 16 additions and 0 deletions.
4 changes: 4 additions & 0 deletions docs/connectors/mongodb.asciidoc
@@ -1,9 +1,13 @@
 = Debezium Connector for MongoDB
 :awestruct-layout: doc
+:toc:
+:toc-placement: macro
 :linkattrs:
 :icons: font
 :source-highlighter: highlight.js
 
+toc::[]
+
 Debezium's MongoDB Connector can monitor a https://docs.mongodb.com/manual/tutorial/deploy-replica-set/[MongoDB replica set] or a https://docs.mongodb.com/manual/core/sharded-cluster-components/[MongoDB sharded cluster] for document changes in databases and collections, recording those changes as events in Kafka topics. The connector automatically handles the https://docs.mongodb.com/manual/tutorial/add-shards-to-shard-cluster/[addition] or https://docs.mongodb.com/manual/tutorial/remove-shards-from-cluster/[removal] of shards in a sharded cluster, changes in membership of each replica set, https://docs.mongodb.com/manual/core/replica-set-elections/[elections] within each replica set, and awaiting the resolution of communications problems.
 
 [[overview]]
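The MongoDB connector documented above is deployed through Kafka Connect like the other Debezium connectors. As a rough illustration only, and not part of this commit, a registration payload might look like the Python sketch below; the property names mongodb.hosts and mongodb.name and every value shown are assumptions about the connector's configuration rather than content from these docs.

# Illustrative only -- not part of this commit. A minimal sketch of the kind of
# Kafka Connect configuration the MongoDB connector docs describe; property names
# and values are assumptions based on the Debezium MongoDB connector of this era.
mongodb_connector = {
    "name": "inventory-mongodb-connector",  # hypothetical connector name
    "config": {
        "connector.class": "io.debezium.connector.mongodb.MongoDbConnector",
        "mongodb.hosts": "rs0/mongodb:27017",  # replica set name and seed host:port (assumed)
        "mongodb.name": "fulfillment",         # logical name used as the Kafka topic prefix (assumed)
    },
}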
4 changes: 4 additions & 0 deletions docs/connectors/mysql.asciidoc
@@ -1,9 +1,13 @@
 = Debezium Connector for MySQL
 :awestruct-layout: doc
+:toc:
+:toc-placement: macro
 :linkattrs:
 :icons: font
 :source-highlighter: highlight.js
 
+toc::[]
+
 Debezium's MySQL Connector can monitor and record all of the row-level changes in the databases on a MySQL server or HA MySQL cluster. The first time it connects to a MySQL server/cluster, it reads a consistent snapshot of all of the databases. When that snapshot is complete, the connector continuously reads the changes that were committed to MySQL 5.6 or later and generates corresponding insert, update and delete events. All of the events for each table are recorded in a separate Kafka topic, where they can be easily consumed by applications and services.
 
 As of Debezium 0.4.0, this connector adds preliminary support for https://aws.amazon.com/rds/mysql/[Amazon RDS] and https://aws.amazon.com/rds/aurora/[Amazon Aurora (MySQL compatibility)]. However, due to limitations of these hosted forms of MySQL, the connector retains locks during an initial consistent snapshot link:#snapshots-without-global-read-locks[for the duration of the snapshot].
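To make the snapshot-then-stream behavior described above concrete, here is a hedged sketch of registering the MySQL connector with a Kafka Connect worker's REST API. The endpoint, property names, hostnames, credentials and whitelist values are all illustrative assumptions, not taken from this commit.

# Illustrative only -- not part of this commit. A rough sketch assuming a Kafka Connect
# worker on localhost:8083 and typical Debezium MySQL connector properties of this era;
# all values are placeholders.
import requests

mysql_connector = {
    "name": "inventory-mysql-connector",
    "config": {
        "connector.class": "io.debezium.connector.mysql.MySqlConnector",
        "database.hostname": "mysql",
        "database.port": "3306",
        "database.user": "debezium",
        "database.password": "dbz",
        "database.server.id": "184054",       # unique id for the binlog client (assumed value)
        "database.server.name": "dbserver1",  # logical name used as the Kafka topic prefix
        "database.whitelist": "inventory",    # databases to capture (assumed)
        "database.history.kafka.bootstrap.servers": "kafka:9092",
        "database.history.kafka.topic": "dbhistory.inventory",
    },
}

# Register the connector with the Kafka Connect REST API.
response = requests.post("http://localhost:8083/connectors", json=mysql_connector)
response.raise_for_status()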
4 changes: 4 additions & 0 deletions docs/connectors/oracle.asciidoc
@@ -1,8 +1,12 @@
 = Debezium Connector for Oracle
 :awestruct-layout: doc
+:toc:
+:toc-placement: macro
 :linkattrs:
 :icons: font
 
+toc::[]
+
 Debezium's Oracle Connector can monitor and record all of the row-level changes in the databases on an Oracle server.
 This connector is at a very early stage of development and considered an incubating feature as of Debezium 0.8.
 It is not feature-complete (most notably, there's no support for initial snapshots yet) and the structure of emitted CDC messages may change in future revisions.
4 changes: 4 additions & 0 deletions docs/connectors/postgresql.asciidoc
@@ -1,9 +1,13 @@
 = Debezium Connector for PostgreSQL
 :awestruct-layout: doc
+:toc:
+:toc-placement: macro
 :linkattrs:
 :icons: font
 :source-highlighter: highlight.js
 
+toc::[]
+
 Debezium's PostgreSQL Connector can monitor and record the row-level changes in the schemas of a PostgreSQL database. This connector was added in Debezium 0.4.0.
 
 The first time it connects to a PostgreSQL server/cluster, it reads a consistent snapshot of all of the schemas. When that snapshot is complete, the connector continuously streams the changes that were committed to PostgreSQL 9.6 or later and generates corresponding insert, update and delete events. All of the events for each table are recorded in a separate Kafka topic, where they can be easily consumed by applications and services.
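Since the PostgreSQL paragraph above notes that each table's events land in a separate Kafka topic, a small consumer sketch may help. The topic name, the serverName.schemaName.tableName naming assumption, the event envelope shape, and the choice of the kafka-python client are all illustrative assumptions, not part of this commit.

# Illustrative only -- not part of this commit. A sketch of consuming one per-table
# change topic with the kafka-python client; the topic name is assumed.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "dbserver1.public.customers",  # assumed topic: <server>.<schema>.<table>
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda m: json.loads(m.decode("utf-8")) if m else None,
)

for message in consumer:
    event = message.value
    # Assuming the default JSON converter, each event wraps the change in a
    # payload envelope whose "op" field marks inserts (c), updates (u), deletes (d).
    payload = (event or {}).get("payload", {})
    print(payload.get("op"), payload.get("after"))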
