The {prodname} SQL Server connector captures row-level changes that occur in the schemas of a SQL Server database.
The first time that the {prodname} SQL Server connector connects to a SQL Server database or cluster, it takes a consistent snapshot of the schemas in the database.
After the initial snapshot is complete, the connector continuously captures row-level changes for `INSERT`, `UPDATE`, or `DELETE` operations that are committed to the SQL Server databases that are enabled for CDC.
The connector produces events for each data change operation, and streams them to Kafka topics.
The connector streams all of the events for a table to a dedicated Kafka topic.
Applications and services can then consume data change event records from that topic.
The {prodname} SQL Server connector is based on the change data capture feature that is available in SQL Server 2016 Service Pack 1 (SP1) and later Standard edition or Enterprise edition. The SQL Server capture process monitors designated databases and tables, and stores the changes into specifically created change tables that have stored procedure facades.
To enable the {prodname} SQL Server connector to capture change event records for database operations,
you must first enable change data capture on the SQL Server database.
CDC must be enabled on both the database and on each table that you want to capture.
After you set up CDC on the source database, the connector can capture row-level `INSERT`, `UPDATE`, and `DELETE` operations that occur in the database.
The connector writes event records for each source table to a Kafka topic especially dedicated to that table.
One topic exists for each captured table.
Client applications read the Kafka topics for the database tables that they follow, and can respond to the row-level events they consume from those topics.
The first time that the connector connects to a SQL Server database or cluster, it takes a consistent snapshot of the schemas for all tables for which it is configured to capture changes, and streams this state to Kafka. After the snapshot is complete, the connector continuously captures subsequent row-level changes that occur. By first establishing a consistent view of all of the data, the connector can continue reading without having lost any of the changes that were made while the snapshot was taking place.
The {prodname} SQL Server connector is tolerant of failures. As the connector reads changes and produces events, it periodically records the position of events in the database log (LSN / Log Sequence Number). If the connector stops for any reason (including communication failures, network problems, or crashes), after a restart the connector resumes reading the SQL Server CDC tables from the last point that it read.
Note
|
Offsets are committed periodically. They are not committed at the time that a change event occurs. As a result, following an outage, duplicate events might be generated. |
Fault tolerance also applies to snapshots. That is, if the connector stops during a snapshot, the connector begins a new snapshot when it restarts.
To optimally configure and run a {prodname} SQL Server connector, it is helpful to understand how the connector performs snapshots, streams change events, determines Kafka topic names, and uses metadata.
SQL Server CDC is not designed to store a complete history of database changes. For the {prodname} SQL Server connector to establish a baseline for the current state of the database, it uses a process called snapshotting. The initial snapshot captures the structure and data of the tables in the database.
The following workflow lists the steps that {prodname} takes to create a snapshot.
These steps describe the process for a snapshot when the `snapshot.mode` configuration property is set to its default value, which is `initial`.
You can customize the way that the connector creates snapshots by changing the value of the `snapshot.mode` property.
If you configure a different snapshot mode, the connector completes the snapshot by using a modified version of this workflow.
-
Establish a connection to the database.
-
Determine the tables to be captured. By default, the connector captures all non-system tables. To have the connector capture a subset of tables or table elements, you can set a number of `include` and `exclude` properties to filter the data, for example, `table.include.list` or `table.exclude.list`.
-
Obtain a lock on the SQL Server tables for which CDC is enabled to prevent structural changes from occurring during creation of the snapshot. The level of the lock is determined by the `snapshot.isolation.mode` configuration property.
-
Read the maximum log sequence number (LSN) position in the server’s transaction log.
-
Capture the structure of all non-system tables, or of all tables that are designated for capture. The connector persists this information in its internal database schema history topic. The schema history provides information about the structure that is in effect when a change event occurs.
Note: By default, the connector captures the schema of every table in the database, including tables that are not configured for capture. If tables are not configured for capture, the initial snapshot captures only their structure; it does not capture any table data. For more information about why snapshots persist schema information for tables that you did not include in the initial snapshot, see Understanding why initial snapshots capture the schema for all tables.
-
Release the locks obtained in Step 3, if necessary. Other database clients can now write to any previously locked tables.
-
At the LSN position read in Step 4, the connector scans the tables to be captured. During the scan, the connector completes the following tasks:
-
Confirms that the table was created before the snapshot began. If the table was created after the snapshot began, the connector skips the table. After the snapshot is complete, and the connector transitions to streaming, it emits change events for any tables that were created after the snapshot began.
-
Produces a `read` event for each row that is captured from a table. All `read` events contain the same LSN position, which is the LSN position that was obtained in Step 4.
-
Emits each `read` event to the Kafka topic for the table.
-
-
Records the successful completion of the snapshot in the connector offsets.
The resulting initial snapshot captures the current state of each row in the tables that are enabled for CDC. From this baseline state, the connector captures subsequent changes as they occur.
After the snapshot process begins, if the process is interrupted due to connector failure, rebalancing, or other reasons, the process restarts after the connector restarts.
After the connector completes the initial snapshot, it continues streaming from the position that it read in Step 4 so that it does not miss any updates.
If the connector stops again for any reason, after it restarts, it resumes streaming changes from where it previously left off.
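The snapshot behavior described above is governed chiefly by the `snapshot.mode` and `snapshot.isolation.mode` connector properties. The following fragment is a minimal sketch, not a complete connector configuration; the values shown are illustrative, and the full set of supported values is listed in the connector property reference.

{
  "snapshot.mode": "initial",
  "snapshot.isolation.mode": "repeatable_read"
}

Merging a fragment like this into the connector configuration before the first start determines how the initial snapshot is taken.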
The initial snapshot that a connector runs captures two types of information:
- Table data
-
Information about `INSERT`, `UPDATE`, and `DELETE` operations in tables that are named in the connector’s `table.include.list` property.
- Schema data
-
DDL statements that describe the structural changes that are applied to tables. Schema data is persisted to both the internal schema history topic, and to the connector’s schema change topic, if one is configured.
After you run an initial snapshot, you might notice that the snapshot captures schema information for tables that are not designated for capture. By default, initial snapshots are designed to capture schema information for every table that is present in the database, not only from tables that are designated for capture. Connectors require that the table’s schema is present in the schema history topic before they can capture a table. By enabling the initial snapshot to capture schema data for tables that are not part of the original capture set, {prodname} prepares the connector to readily capture event data from these tables should that later become necessary. If the initial snapshot does not capture a table’s schema, you must add the schema to the history topic before the connector can capture data from the table.
In some cases, you might want to limit schema capture in the initial snapshot. This can be useful when you want to reduce the time required to complete a snapshot, or when {prodname} connects to the database instance through a user account that has access to multiple logical databases, but you want the connector to capture changes only from tables in a specific logical database.
-
Capturing data from tables not captured by the initial snapshot (no schema change)
-
Capturing data from tables not captured by the initial snapshot (schema change)
-
Setting the `schema.history.internal.store.only.captured.tables.ddl` property to specify the tables from which to capture schema information.
-
Setting the `schema.history.internal.store.only.captured.databases.ddl` property to specify the logical databases from which to capture schema changes. (A configuration sketch that uses these properties appears after this list.)
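As a sketch only, the following fragment shows how these two properties might appear in a connector configuration that limits schema capture to the captured tables and to the captured logical databases; the Boolean values are given as strings, as is conventional in Kafka Connect JSON configurations.

{
  "schema.history.internal.store.only.captured.tables.ddl": "true",
  "schema.history.internal.store.only.captured.databases.ddl": "true"
}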
In some cases, you might want the connector to capture data from a table whose schema was not captured by the initial snapshot. Depending on the connector configuration, the initial snapshot might capture the table schema only for specific tables in the database. If the table schema is not present in the history topic, the connector fails to capture the table, and reports a missing schema error.
You might still be able to capture data from the table, but you must perform additional steps to add the table schema.
-
You want to capture data from a table with a schema that the connector did not capture during the initial snapshot.
-
No schema changes were applied to the table between the LSNs of the earliest and latest change table entry that the connector reads. For information about capturing data from a new table that has undergone structural changes, see Capturing data from tables not captured by the initial snapshot (schema change).
-
Stop the connector.
-
Remove the internal database schema history topic that is specified by the `schema.history.internal.kafka.topic` property.
-
Clear the offsets in the configured Kafka Connect `offset.storage.topic`. For more information about how to remove offsets, see the {prodname} community FAQ.
Warning: Removing offsets should be performed only by advanced users who have experience in manipulating internal Kafka Connect data. This operation is potentially destructive, and should be performed only as a last resort.
-
Apply the following changes to the connector configuration:
-
(Optional) Set the value of `schema.history.internal.store.only.captured.tables.ddl` to `false`. This setting causes the snapshot to capture the schema for all tables, and guarantees that, in the future, the connector can reconstruct the schema history for all tables.
Note: Snapshots that capture the schema for all tables require more time to complete.
-
Add the tables that you want the connector to capture to `table.include.list`.
-
Set the `snapshot.mode` to one of the following values:
- initial
-
When you restart the connector, it takes a full snapshot of the database that captures the table data and table structures.
If you select this option, consider setting the value of the `schema.history.internal.store.only.captured.tables.ddl` property to `false` to enable the connector to capture the schema of all tables.
- schema_only
-
When you restart the connector, it takes a snapshot that captures only the table schema. Unlike a full data snapshot, this option does not capture any table data. Use this option if you want to restart the connector more quickly than with a full snapshot.
-
-
Restart the connector. The connector completes the type of snapshot specified by `snapshot.mode`.
-
(Optional) If the connector performed a `schema_only` snapshot, after the snapshot completes, initiate an incremental snapshot to capture data from the tables that you added. The connector runs the snapshot while it continues to stream real-time changes from the tables. A sketch of a signal message that triggers an incremental snapshot appears after this procedure. Running an incremental snapshot captures the following data changes:
-
For tables that the connector previously captured, the incremental snapshot captures changes that occurred while the connector was down, that is, in the interval between the time that the connector was stopped and the current restart.
-
For newly added tables, the incremental snapshot captures all existing table rows.
-
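For reference, an incremental snapshot is typically initiated by sending a signal to the connector, for example through the signaling table or a Kafka signal topic. The following JSON is a sketch of an `execute-snapshot` signal payload; the table name is a placeholder, and the exact signal format and channel configuration depend on your {prodname} version, so consult the signaling documentation before using it.

{
  "type": "execute-snapshot",
  "data": {
    "data-collections": ["testDB.dbo.MyNewTable"],
    "type": "incremental"
  }
}

When the signal is written to the signaling table instead of a Kafka topic, usually only the inner data object is supplied in the table's data column.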
If a schema change is applied to a table, records that are committed before the schema change have different structures than those that were committed after the change. When {prodname} captures data from a table, it reads the schema history to ensure that it applies the correct schema to each event. If the schema is not present in the schema history topic, the connector is unable to capture the table, and an error results.
If you want to capture data from a table that was not captured by the initial snapshot, and the schema of the table was modified, you must add the schema to the history topic, if it is not already available. You can add the schema by running a new schema snapshot, or by running an initial snapshot for the table.
-
You want to capture data from a table with a schema that the connector did not capture during the initial snapshot.
-
A schema change was applied to the table so that the records to be captured do not have a uniform structure.
- Initial snapshot captured the schema for all tables (`store.only.captured.tables.ddl` was set to `false`)
-
-
Edit the `table.include.list` property to specify the tables that you want to capture.
-
Restart the connector.
-
Initiate an incremental snapshot if you want to capture existing data from the newly added tables.
-
- Initial snapshot did not capture the schema for all tables (`store.only.captured.tables.ddl` was set to `true`)
-
If the initial snapshot did not save the schema of the table that you want to capture, complete one of the following procedures:
- Procedure 1: Schema snapshot, followed by incremental snapshot
-
In this procedure, the connector first performs a schema snapshot. You can then initiate an incremental snapshot to enable the connector to synchronize data.
-
Stop the connector.
-
Remove the internal database schema history topic that is specified by the `schema.history.internal.kafka.topic` property.
-
Clear the offsets in the configured Kafka Connect `offset.storage.topic`. For more information about how to remove offsets, see the {prodname} community FAQ.
Warning: Removing offsets should be performed only by advanced users who have experience in manipulating internal Kafka Connect data. This operation is potentially destructive, and should be performed only as a last resort.
-
Set values for properties in the connector configuration as described in the following steps:
-
Set the value of the `snapshot.mode` property to `schema_only`.
-
Edit the `table.include.list` to add the tables that you want to capture.
-
-
Restart the connector.
-
Wait for {prodname} to capture the schema of the new and existing tables. Data changes that occurred in any tables after the connector stopped are not captured.
-
To ensure that no data is lost, initiate an incremental snapshot.
-
- Procedure 2: Initial snapshot, followed by optional incremental snapshot
-
In this procedure the connector performs a full initial snapshot of the database. As with any initial snapshot, in a database with many large tables, running an initial snapshot can be a time-consuming operation. After the snapshot completes, you can optionally trigger an incremental snapshot to capture any changes that occur while the connector is off-line.
-
Stop the connector.
-
Remove the internal database schema history topic that is specified by the `schema.history.internal.kafka.topic` property.
-
Clear the offsets in the configured Kafka Connect `offset.storage.topic`. For more information about how to remove offsets, see the {prodname} community FAQ.
Warning: Removing offsets should be performed only by advanced users who have experience in manipulating internal Kafka Connect data. This operation is potentially destructive, and should be performed only as a last resort.
-
Edit the `table.include.list` to add the tables that you want to capture.
-
Set values for properties in the connector configuration as described in the following steps:
-
Set the value of the `snapshot.mode` property to `initial`.
-
(Optional) Set `schema.history.internal.store.only.captured.tables.ddl` to `false`.
-
-
Restart the connector. The connector takes a full database snapshot. After the snapshot completes, the connector transitions to streaming.
-
(Optional) To capture any data that changed while the connector was off-line, initiate an incremental snapshot.
-
Warning
|
SQL Server collations
Each SQL Server server or database is configured to use a specific collation, which determines how character data is stored, sorted, compared, and displayed.
The sorting rules for some collation sets, such as the SQL Server collations (SQL_*) are not compatible with the Unicode sorting algorithm.
In some cases, the incompatible sorting rules can lead to lost data when the connector runs an ad hoc snapshot.
For example, data loss can occur if SQL Server is configured to send strings as Unicode (that is, the connection property `sendStringParametersAsUnicode` is set to `true`) and a captured column uses one of the incompatible collations. For more information about using the `sendStringParametersAsUnicode` property, see the SQL Server connection properties documentation. |
Warning
|
The {prodname} connector for SQL Server does not support schema changes while an incremental snapshot is running. |
When the connector first starts, it takes a snapshot of the structure of the captured tables and persists this information to its internal database schema history topic. The connector then identifies a change table for each source table, and completes the following steps.
-
For each change table, the connector reads all of the changes that were created between the last stored maximum LSN and the current maximum LSN.
-
The connector sorts the changes that it reads in ascending order, based on the values of their commit LSN and change LSN. This sorting order ensures that the changes are replayed by {prodname} in the same order in which they occurred in the database.
-
The connector passes the commit and change LSNs as offsets to Kafka Connect.
-
The connector stores the maximum LSN and restarts the process from Step 1.
After a restart, the connector resumes processing from the last offset (commit and change LSNs) that it read.
The connector can detect whether CDC is enabled or disabled for the included source tables, and adjusts its behavior accordingly.
There may be situations when no maximum LSN is recorded in the database because:
-
SQL Server Agent is not running
-
No changes are recorded in the change table yet
-
The database has low activity, and the CDC cleanup job periodically clears entries from the CDC tables
Of these possibilities, only the first case is a real problem, because a running SQL Server Agent is a prerequisite; the second and third cases are normal.
To mitigate this issue and distinguish the first case from the others, the connector checks the status of the SQL Server Agent by running the following query:
SELECT CASE WHEN dss.[status]=4 THEN 1 ELSE 0 END AS isRunning FROM [#db].sys.dm_server_services dss WHERE dss.[servicename] LIKE N'SQL Server Agent (%';
If the SQL Server Agent is not running, the connector writes an ERROR message to the log: "No maximum LSN recorded in the database; SQL Server Agent is not running".
Important
|
The SQL Server Agent running status query requires `VIEW SERVER STATE` server permission. |
SQL Server requires the base object to be a table in order to create a change capture instance. As a consequence, SQL Server does not support capturing changes from indexed views (also known as materialized views), and neither does the {prodname} SQL Server connector.
By default, the SQL Server connector writes events for all `INSERT`, `UPDATE`, and `DELETE` operations that occur in a table to a single Apache Kafka topic that is specific to that table.
The connector uses the following convention to name change event topics:
<topicPrefix>.<databaseName>.<schemaName>.<tableName>
The following list provides definitions for the components of the default name:
- topicPrefix
-
The logical name of the server, as specified by the `topic.prefix` configuration property.
- databaseName
-
The name of the database in which the change event occurred.
- schemaName
-
The name of the database schema in which the change event occurred.
- tableName
-
The name of the database table in which the change event occurred.
For example, if `fulfillment` is the logical server name, `testDB` is the database name, and `dbo` is the schema name, and the database contains tables with the names `products`, `products_on_hand`, `customers`, and `orders`, the connector would stream change event records to the following Kafka topics:
-
fulfillment.testDB.dbo.products
-
fulfillment.testDB.dbo.products_on_hand
-
fulfillment.testDB.dbo.customers
-
fulfillment.testDB.dbo.orders
The connector applies similar naming conventions to label its internal database schema history topics, schema change topics, and transaction metadata topics.
If the default topic names do not meet your requirements, you can configure custom topic names. To configure custom topic names, you specify regular expressions in the logical topic routing SMT. For more information about using the logical topic routing SMT to customize topic naming, see {link-prefix}:{link-topic-routing}#topic-routing[Topic routing].
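As a hedged illustration of the routing approach that the topic routing SMT uses, the following fragment reroutes matching per-table topics to a single topic; the regular expression and the replacement topic name are placeholders that you would adapt to your own logical server, database, and schema names.

{
  "transforms": "Reroute",
  "transforms.Reroute.type": "io.debezium.transforms.ByLogicalTableRouter",
  "transforms.Reroute.topic.regex": "fulfillment\\.testDB\\.dbo\\.(.*)",
  "transforms.Reroute.topic.replacement": "fulfillment.testDB.dbo.all_tables"
}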
When a database client queries a database, the client uses the database’s current schema. However, the database schema can be changed at any time, which means that the connector must be able to identify what the schema was at the time each insert, update, or delete operation was recorded. Also, a connector cannot necessarily apply the current schema to every event. If an event is relatively old, it’s possible that it was recorded before the current schema was applied.
To ensure correct processing of change events that occur after a schema change, the {prodname} SQL Server connector stores a snapshot of the new schema based on the structure in the SQL Server change tables, which mirror the structure of their associated data tables. The connector stores the table schema information, together with the LSNs of the operations that result in schema changes, in the database schema history Kafka topic. The connector uses the stored schema representation to produce change events that correctly mirror the structure of tables at the time of each insert, update, or delete operation.
When the connector restarts after either a crash or a graceful stop, it resumes reading entries in the SQL Server CDC tables from the last position that it read. Based on the schema information that the connector reads from the database schema history topic, the connector applies the table structures that existed at the position where the connector restarts.
If you update the schema of a SQL Server table that is in capture mode, it’s important that you also update the schema of the corresponding change table. You must be a SQL Server database administrator with elevated privileges to update database schema. For more information about updating SQL Server database schema in {prodname} environments, see Database schema evolution.
The database schema history topic is for internal connector use only. Optionally, the connector can also emit schema change events to a different topic that is intended for consumer applications.
-
Default names for topics that receive {prodname} event records.
For each table for which CDC is enabled, the {prodname} SQL Server connector stores a history of the schema change events that are applied to tables in the database.
The connector writes schema change events to a Kafka topic named `<topicPrefix>`, where `topicPrefix` is the logical server name that is specified in the `topic.prefix` configuration property.
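Emission of events to the schema change topic is controlled by the `include.schema.changes` connector property, which is enabled by default; the fragment below is a sketch that makes the setting explicit.

{
  "include.schema.changes": "true"
}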
Messages that the connector sends to the schema change topic contain a payload, and, optionally, also contain the schema of the change event message.
The schema for the schema change event has the following elements:
name
-
The name of the schema change event message.
type
-
The type of the change event message.
version
-
The version of the schema. The version is an integer that is incremented each time the schema is changed.
fields
-
The fields that are included in the change event message.
The following example shows a typical schema in JSON format.
{
"schema": {
"type": "struct",
"fields": [
{
"type": "string",
"optional": false,
"field": "databaseName"
}
],
"optional": false,
"name": "io.debezium.connector.sqlserver.SchemaChangeKey",
"version": 1
},
"payload": {
"databaseName": "inventory"
}
}
The payload of a schema change event message includes the following elements:
databaseName
-
The name of the database to which the statements are applied. The value of `databaseName` serves as the message key.
tableChanges
-
A structured representation of the entire table schema after the schema change. The `tableChanges` field contains an array that includes entries for each column of the table. Because the structured representation presents data in JSON or Avro format, consumers can easily read messages without first processing them through a DDL parser.
Important
|
When the connector is configured to capture a table, it stores the history of the table’s schema changes not only in the schema change topic, but also in an internal database schema history topic. The internal database schema history topic is for connector use only and it is not intended for direct use by consuming applications. Ensure that applications that require notifications about schema changes consume that information only from the schema change topic. |
Warning
|
The format of the messages that a connector emits to its schema change topic is in an incubating state and can change without notice. |
{prodname} emits a message to the schema change topic when the following events occur:
-
You enable CDC for a table.
-
You disable CDC for a table.
-
You alter the structure of a table for which CDC is enabled by following the schema evolution procedure.
The following example shows a message in the schema change topic. The message contains a logical representation of the table schema.
{
"schema": {
...
},
"payload": {
"source": {
"version": "{debezium-version}",
"connector": "sqlserver",
"name": "server1",
"ts_ms": 0,
"snapshot": "true",
"db": "testDB",
"schema": "dbo",
"table": "customers",
"change_lsn": null,
"commit_lsn": "00000025:00000d98:00a2",
"event_serial_no": null
},
"ts_ms": 1588252618953, // (1)
"databaseName": "testDB", // (2)
"schemaName": "dbo",
"ddl": null, // (3)
"tableChanges": [ // (4)
{
"type": "CREATE", // (5)
"id": "\"testDB\".\"dbo\".\"customers\"", // (6)
"table": { // (7)
"defaultCharsetName": null,
"primaryKeyColumnNames": [ // (8)
"id"
],
"columns": [ // (9)
{
"name": "id",
"jdbcType": 4,
"nativeType": null,
"typeName": "int identity",
"typeExpression": "int identity",
"charsetName": null,
"length": 10,
"scale": 0,
"position": 1,
"optional": false,
"autoIncremented": false,
"generated": false
},
{
"name": "first_name",
"jdbcType": 12,
"nativeType": null,
"typeName": "varchar",
"typeExpression": "varchar",
"charsetName": null,
"length": 255,
"scale": null,
"position": 2,
"optional": false,
"autoIncremented": false,
"generated": false
},
{
"name": "last_name",
"jdbcType": 12,
"nativeType": null,
"typeName": "varchar",
"typeExpression": "varchar",
"charsetName": null,
"length": 255,
"scale": null,
"position": 3,
"optional": false,
"autoIncremented": false,
"generated": false
},
{
"name": "email",
"jdbcType": 12,
"nativeType": null,
"typeName": "varchar",
"typeExpression": "varchar",
"charsetName": null,
"length": 255,
"scale": null,
"position": 4,
"optional": false,
"autoIncremented": false,
"generated": false
}
],
"attributes": [ // (10)
{
"customAttribute": "attributeValue"
}
]
}
}
]
}
}
Item | Field name | Description |
---|---|---|
1 |
|
Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task. In the source object, ts_ms indicates the time that the change was made in the database. By comparing the value for payload.source.ts_ms with the value for payload.ts_ms, you can determine the lag between the source database update and Debezium. |
2 |
|
Identifies the database and the schema that contain the change. |
3 |
|
Always `null` for the SQL Server connector. |
4 |
|
An array of one or more items that contain the schema changes generated by a DDL command. |
5 |
|
Describes the kind of change. The value is one of the following: `CREATE`, `ALTER`, or `DROP`.
|
6 |
|
Full identifier of the table that was created, altered, or dropped. |
7 |
|
Represents table metadata after the applied change. |
8 |
|
List of columns that compose the table’s primary key. |
9 |
|
Metadata for each column in the changed table. |
10 |
|
Custom attribute metadata for each table change. |
In messages that the connector sends to the schema change topic, the key is the name of the database that contains the schema change.
In the following example, the `payload` field contains the key:
{
"schema": {
"type": "struct",
"fields": [
{
"type": "string",
"optional": false,
"field": "databaseName"
}
],
"optional": false,
"name": "io.debezium.connector.sqlserver.SchemaChangeKey",
"version": 1
},
"payload": {
"databaseName": "testDB"
}
}
The {prodname} SQL Server connector generates a data change event for each row-level `INSERT`, `UPDATE`, and `DELETE` operation. Each event contains a key and a value. The structure of the key and the value depends on the table that was changed.
{prodname} and Kafka Connect are designed around continuous streams of event messages. However, the structure of these events may change over time, which can be difficult for consumers to handle. To address this, each event contains the schema for its content or, if you are using a schema registry, a schema ID that a consumer can use to obtain the schema from the registry. This makes each event self-contained.
The following skeleton JSON shows the basic four parts of a change event. However, how you configure the Kafka Connect converter that you choose to use in your application determines the representation of these four parts in change events. A `schema` field is in a change event only when you configure the converter to produce it. Likewise, the event key and event payload are in a change event only if you configure a converter to produce it. If you use the JSON converter and you configure it to produce all four basic change event parts, change events have this structure:
{
"schema": { // (1)
...
},
"payload": { // (2)
...
},
"schema": { // (3)
...
},
"payload": { // (4)
...
},
}
Item | Field name | Description |
---|---|---|
1 |
|
The first |
2 |
|
The first |
3 |
|
The second |
4 |
|
The second |
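Which of the four parts appear in an event is determined by the Kafka Connect converter settings rather than by the connector itself. As a sketch, the following worker-level or connector-level properties configure the JSON converter to emit both the schema and the payload parts; setting the `schemas.enable` options to `false` suppresses the schema parts.

{
  "key.converter": "org.apache.kafka.connect.json.JsonConverter",
  "key.converter.schemas.enable": "true",
  "value.converter": "org.apache.kafka.connect.json.JsonConverter",
  "value.converter.schemas.enable": "true"
}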
By default, the connector streams change event records to topics with names that are the same as the event’s originating table. For more information, see topic names.
Warning
|
The SQL Server connector ensures that all Kafka Connect schema names adhere to the Avro schema name format. This means that the logical server name must start with a Latin letter or an underscore, that is, a-z, A-Z, or _. Each remaining character in the logical server name and each character in the database and table names must be a Latin letter, a digit, or an underscore, that is, a-z, A-Z, 0-9, or _. If there is an invalid character, it is replaced with an underscore character. This can lead to unexpected conflicts if the logical server name, a database name, or a table name contains invalid characters, and the only characters that distinguish names from one another are invalid and thus replaced with underscores. |
A change event’s key contains the schema for the changed table’s key and the changed row’s actual key. Both the schema and its corresponding payload contain a field for each column in the changed table’s primary key (or unique key constraint) at the time the connector created the event.
Consider the following `customers` table, which is followed by an example of a change event key for this table.
CREATE TABLE customers (
id INTEGER IDENTITY(1001,1) NOT NULL PRIMARY KEY,
first_name VARCHAR(255) NOT NULL,
last_name VARCHAR(255) NOT NULL,
email VARCHAR(255) NOT NULL UNIQUE
);
Every change event that captures a change to the `customers` table has the same event key schema. For as long as the `customers` table has the previous definition, every change event that captures a change to the `customers` table has the following key structure, which, in JSON, looks like this:
{
"schema": { // (1)
"type": "struct",
"fields": [ // (2)
{
"type": "int32",
"optional": false,
"field": "id"
}
],
"optional": false, // (3)
"name": "server1.testDB.dbo.customers.Key" // (4)
},
"payload": { // (5)
"id": 1004
}
}
Item | Field name | Description |
---|---|---|
1 |
|
The schema portion of the key specifies a Kafka Connect schema that describes what is in the key’s |
2 |
|
Specifies each field that is expected in the |
3 |
|
Indicates whether the event key must contain a value in its |
4 |
|
Name of the schema that defines the structure of the key’s payload. This schema describes the structure of the primary key for the table that was changed. Key schema names have the format connector-name.database-name.schema-name.table-name.Key.
|
5 |
|
Contains the key for the row for which this change event was generated. In this example, the key, contains a single |
The value in a change event is a bit more complicated than the key. Like the key, the value has a `schema` section and a `payload` section. The `schema` section contains the schema that describes the `Envelope` structure of the `payload` section, including its nested fields. Change events for operations that create, update, or delete data all have a value payload with an envelope structure.
Consider the same sample table that was used to show an example of a change event key:
CREATE TABLE customers (
id INTEGER IDENTITY(1001,1) NOT NULL PRIMARY KEY,
first_name VARCHAR(255) NOT NULL,
last_name VARCHAR(255) NOT NULL,
email VARCHAR(255) NOT NULL UNIQUE
);
The value portion of a change event for a change to this table is described for each event type.
The following example shows the value portion of a change event that the connector generates for an operation that creates data in the `customers` table:
{
"schema": { // (1)
"type": "struct",
"fields": [
{
"type": "struct",
"fields": [
{
"type": "int32",
"optional": false,
"field": "id"
},
{
"type": "string",
"optional": false,
"field": "first_name"
},
{
"type": "string",
"optional": false,
"field": "last_name"
},
{
"type": "string",
"optional": false,
"field": "email"
}
],
"optional": true,
"name": "server1.dbo.testDB.customers.Value", // (2)
"field": "before"
},
{
"type": "struct",
"fields": [
{
"type": "int32",
"optional": false,
"field": "id"
},
{
"type": "string",
"optional": false,
"field": "first_name"
},
{
"type": "string",
"optional": false,
"field": "last_name"
},
{
"type": "string",
"optional": false,
"field": "email"
}
],
"optional": true,
"name": "server1.dbo.testDB.customers.Value",
"field": "after"
},
{
"type": "struct",
"fields": [
{
"type": "string",
"optional": false,
"field": "version"
},
{
"type": "string",
"optional": false,
"field": "connector"
},
{
"type": "string",
"optional": false,
"field": "name"
},
{
"type": "int64",
"optional": false,
"field": "ts_ms"
},
{
"type": "boolean",
"optional": true,
"default": false,
"field": "snapshot"
},
{
"type": "string",
"optional": false,
"field": "db"
},
{
"type": "string",
"optional": false,
"field": "schema"
},
{
"type": "string",
"optional": false,
"field": "table"
},
{
"type": "string",
"optional": true,
"field": "change_lsn"
},
{
"type": "string",
"optional": true,
"field": "commit_lsn"
},
{
"type": "int64",
"optional": true,
"field": "event_serial_no"
}
],
"optional": false,
"name": "io.debezium.connector.sqlserver.Source", // (3)
"field": "source"
},
{
"type": "string",
"optional": false,
"field": "op"
},
{
"type": "int64",
"optional": true,
"field": "ts_ms"
}
],
"optional": false,
"name": "server1.dbo.testDB.customers.Envelope" // (4)
},
"payload": { // (5)
"before": null, // (6)
"after": { // (7)
"id": 1005,
"first_name": "john",
"last_name": "doe",
"email": "john.doe@example.org"
},
"source": { // (8)
"version": "{debezium-version}",
"connector": "sqlserver",
"name": "server1",
"ts_ms": 1559729468470,
"snapshot": false,
"db": "testDB",
"schema": "dbo",
"table": "customers",
"change_lsn": "00000027:00000758:0003",
"commit_lsn": "00000027:00000758:0005",
"event_serial_no": "1"
},
"op": "c", // (9)
"ts_ms": 1559729471739 // (10)
}
}
Item | Field name | Description |
---|---|---|
1 |
|
The value’s schema, which describes the structure of the value’s payload. A change event’s value schema is the same in every change event that the connector generates for a particular table. |
2 |
|
In the |
3 |
|
|
4 |
|
|
5 |
|
The value’s actual data. This is the information that the change event is providing. |
6 |
|
An optional field that specifies the state of the row before the event occurred. When the |
7 |
|
An optional field that specifies the state of the row after the event occurred. In this example, the |
8 |
|
Mandatory field that describes the source metadata for the event. This field contains information that you can use to compare this event with other events, with regard to the origin of the events, the order in which the events occurred, and whether events were part of the same transaction. The source metadata includes:
|
9 |
|
Mandatory string that describes the type of operation that caused the connector to generate the event. In this example, `c` indicates that the operation created a row. Valid values are: `c` = create, `u` = update, `d` = delete, `r` = read (applies only to snapshots).
|
10 |
|
Optional field that displays the time at which the connector processed the event.
In the event message envelope, the time is based on the system clock in the JVM running the Kafka Connect task. |
The value of a change event for an update in the sample `customers` table has the same schema as a create event for that table. Likewise, the event value’s payload has the same structure. However, the event value payload contains different values in an update event. Here is an example of a change event value in an event that the connector generates for an update in the `customers` table:
{
"schema": { ... },
"payload": {
"before": { // (1)
"id": 1005,
"first_name": "john",
"last_name": "doe",
"email": "john.doe@example.org"
},
"after": { // (2)
"id": 1005,
"first_name": "john",
"last_name": "doe",
"email": "noreply@example.org"
},
"source": { // (3)
"version": "{debezium-version}",
"connector": "sqlserver",
"name": "server1",
"ts_ms": 1559729995937,
"snapshot": false,
"db": "testDB",
"schema": "dbo",
"table": "customers",
"change_lsn": "00000027:00000ac0:0002",
"commit_lsn": "00000027:00000ac0:0007",
"event_serial_no": "2"
},
"op": "u", // (4)
"ts_ms": 1559729998706 // (5)
}
}
Item | Field name | Description |
---|---|---|
1 |
|
An optional field that specifies the state of the row before the event occurred. In an update event value, the |
2 |
|
An optional field that specifies the state of the row after the event occurred. You can compare the |
3 |
|
Mandatory field that describes the source metadata for the event. The
The
|
4 |
|
Mandatory string that describes the type of operation. In an update event value, the |
5 |
|
Optional field that displays the time at which the connector processed the event.
In the event message envelope, the time is based on the system clock in the JVM running the Kafka Connect task. |
Note
|
Updating the columns for a row’s primary/unique key changes the value of the row’s key. When a key changes, {prodname} outputs three events: a delete event and a tombstone event with the old key for the row, followed by a create event with the new key for the row. |
The value in a delete change event has the same `schema` portion as create and update events for the same table. The `payload` portion in a delete event for the sample `customers` table looks like this:
{
"schema": { ... },
"payload": {
"before": { <>
"id": 1005,
"first_name": "john",
"last_name": "doe",
"email": "noreply@example.org"
},
"after": null, (2)
"source": { (3)
"version": "{debezium-version}",
"connector": "sqlserver",
"name": "server1",
"ts_ms": 1559730445243,
"snapshot": false,
"db": "testDB",
"schema": "dbo",
"table": "customers",
"change_lsn": "00000027:00000db0:0005",
"commit_lsn": "00000027:00000db0:0007",
"event_serial_no": "1"
},
"op": "d", (4)
"ts_ms": 1559730450205 (5)
}
}
Item | Field name | Description |
---|---|---|
1 |
|
Optional field that specifies the state of the row before the event occurred. In a delete event value, the |
2 |
|
Optional field that specifies the state of the row after the event occurred. In a delete event value, the |
3 |
|
Mandatory field that describes the source metadata for the event. In a delete event value, the
|
4 |
|
Mandatory string that describes the type of operation. The |
5 |
|
Optional field that displays the time at which the connector processed the event.
In the event message envelope, the time is based on the system clock in the JVM running the Kafka Connect task. |
SQL Server connector events are designed to work with Kafka log compaction. Log compaction enables removal of some older messages as long as at least the most recent message for every key is kept. This lets Kafka reclaim storage space while ensuring that the topic contains a complete data set and can be used for reloading key-based state.
When a row is deleted, the delete event value still works with log compaction, because Kafka can remove all earlier messages that have that same key. However, for Kafka to remove all messages that have that same key, the message value must be `null`. To make this possible, after {prodname}’s SQL Server connector emits a delete event, the connector emits a special tombstone event that has the same key but a `null` value.
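Whether the connector emits these tombstone events can be controlled with the `tombstones.on.delete` connector property, which is enabled by default; the following fragment is a sketch that states the default explicitly.

{
  "tombstones.on.delete": "true"
}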
{prodname} can generate events that represent transaction boundaries and that enrich data change event messages.
Note
|
Limits on when {prodname} receives transaction metadata
{prodname} registers and receives metadata only for transactions that occur after you deploy the connector. Metadata for transactions that occur before you deploy the connector is not available. |
Database transactions are represented by a statement block that is enclosed between the `BEGIN` and `END` keywords.
{prodname} generates transaction boundary events for the `BEGIN` and `END` delimiters in every transaction.
Transaction boundary events contain the following fields:
status
-
`BEGIN` or `END`.
id
-
String representation of the unique transaction identifier.
ts_ms
-
The time of a transaction boundary event (`BEGIN` or `END` event) at the data source. If the data source does not provide {prodname} with the event time, then the field instead represents the time at which {prodname} processes the event.
event_count (for END events)
-
Total number of events emitted by the transaction.
data_collections (for END events)
-
An array of pairs of `data_collection` and `event_count` elements that indicates the number of events that the connector emits for changes that originate from a data collection.
Warning
|
There is no way for {prodname} to reliably identify when a transaction has ended.
The transaction `END` marker is therefore emitted only after the first event of the next transaction arrives. On systems with a low frequency of transactions, delivery of the `END` marker can be delayed. |
The following example shows a typical transaction boundary message:
{
"status": "BEGIN",
"id": "00000025:00000d08:0025",
"ts_ms": 1486500577125,
"event_count": null,
"data_collections": null
}
{
"status": "END",
"id": "00000025:00000d08:0025",
"ts_ms": 1486500577691,
"event_count": 2,
"data_collections": [
{
"data_collection": "testDB.dbo.testDB.tablea",
"event_count": 1
},
{
"data_collection": "testDB.dbo.testDB.tableb",
"event_count": 1
}
]
}
Unless overridden via the `topic.transaction` option, transaction events are written to the topic named `<topic.prefix>.transaction`.
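Transaction boundary events and the enriched `transaction` field described below are not produced unless transaction metadata is enabled. The property that controls this is `provide.transaction.metadata`, which defaults to `false`; a minimal configuration fragment that turns it on:

{
  "provide.transaction.metadata": "true"
}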
When transaction metadata is enabled, the data message `Envelope` is enriched with a new `transaction` field.
This field provides information about every event in the form of a composite of fields:
id
-
String representation of unique transaction identifier
total_order
-
The absolute position of the event among all events generated by the transaction
data_collection_order
-
The per-data collection position of the event among all events that were emitted by the transaction
The following example shows what a typical message looks like:
{
"before": null,
"after": {
"pk": "2",
"aa": "1"
},
"source": {
...
},
"op": "c",
"ts_ms": "1580390884335",
"transaction": {
"id": "00000025:00000d08:0025",
"total_order": "1",
"data_collection_order": "1"
}
}
The {prodname} SQL Server connector represents changes to table row data by producing events that are structured like the table in which the row exists. Each event contains fields to represent the column values for the row. The way in which an event represents the column values for an operation depends on the SQL data type of the column. In the event, the connector maps the fields for each SQL Server data type to both a literal type and a semantic type.
The connector can map SQL Server data types to both literal and semantic types.
- Literal type
-
Describes how the value is literally represented by using Kafka Connect schema types, namely `INT8`, `INT16`, `INT32`, `INT64`, `FLOAT32`, `FLOAT64`, `BOOLEAN`, `STRING`, `BYTES`, `ARRAY`, `MAP`, and `STRUCT`.
- Semantic type
-
Describes how the Kafka Connect schema captures the meaning of the field using the name of the Kafka Connect schema for the field.
If the default data type conversions do not meet your needs, you can {link-prefix}:{link-custom-converters}#custom-converters[create a custom converter] for the connector.
The following table shows how the connector maps basic SQL Server data types.
SQL Server data type | Literal type (schema type) | Semantic type (schema name) and Notes |
---|---|---|
|
|
n/a |
|
|
n/a |
|
|
n/a |
|
|
n/a |
|
|
n/a |
|
|
n/a |
|
|
n/a |
|
|
n/a |
|
|
n/a |
|
|
n/a |
|
|
n/a |
|
|
n/a |
|
|
n/a |
|
|
|
|
|
|
Other data type mappings are described in the following sections.
If present, a column’s default value is propagated to the corresponding field’s Kafka Connect schema. Change messages will contain the field’s default value (unless an explicit column value had been given), so there should rarely be the need to obtain the default value from the schema.
Other than SQL Server’s `DATETIMEOFFSET` data type (which contains time zone information), the other temporal types depend on the value of the `time.precision.mode` configuration property. When the `time.precision.mode` configuration property is set to `adaptive` (the default), the connector determines the literal type and semantic type for the temporal types based on the column’s data type definition, so that events exactly represent the values in the database:
SQL Server data type | Literal type (schema type) | Semantic type (schema name) and Notes |
---|---|---|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
When the `time.precision.mode` configuration property is set to `connect`, the connector uses the predefined Kafka Connect logical types. This may be useful when consumers only know about the built-in Kafka Connect logical types and are unable to handle variable-precision time values. On the other hand, because SQL Server supports tenth-of-a-microsecond precision, the events generated by a connector in the `connect` time precision mode result in a loss of precision when the database column has a fractional second precision value greater than 3:
SQL Server data type | Literal type (schema type) | Semantic type (schema name) and Notes |
---|---|---|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
The `DATETIME`, `SMALLDATETIME` and `DATETIME2` types represent a timestamp without time zone information.
Such columns are converted into an equivalent Kafka Connect value based on UTC.
For instance, the `DATETIME2` value "2018-06-20 15:13:16.945104" is represented by an `io.debezium.time.MicroTimestamp` with the value "1529507596945104".
Note that the timezone of the JVM running Kafka Connect and {prodname} does not affect this conversion.
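The temporal behavior described above is selected with the `time.precision.mode` connector property (the default is `adaptive`). The following fragment is a sketch that opts into the `connect` mode:

{
  "time.precision.mode": "connect"
}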
{prodname} connectors handle decimals according to the setting of the `decimal.handling.mode` connector configuration property.
- decimal.handling.mode=precise
-
Table 8. Mappings when decimal.handling.mode=precise

SQL Server type | Literal type (schema type) | Semantic type (schema name) |
---|---|---|
NUMERIC[(P[,S])] | BYTES | org.apache.kafka.connect.data.Decimal. The scale schema parameter contains an integer that represents how many digits the decimal point shifted. |
DECIMAL[(P[,S])] | BYTES | org.apache.kafka.connect.data.Decimal. The scale schema parameter contains an integer that represents how many digits the decimal point shifted. |
SMALLMONEY | BYTES | org.apache.kafka.connect.data.Decimal. The scale schema parameter contains an integer that represents how many digits the decimal point shifted. |
MONEY | BYTES | org.apache.kafka.connect.data.Decimal. The scale schema parameter contains an integer that represents how many digits the decimal point shifted. |
- decimal.handling.mode=double
-
Table 9. Mappings when decimal.handling.mode=double

SQL Server type | Literal type | Semantic type |
---|---|---|
NUMERIC[(M[,D])] | FLOAT64 | n/a |
DECIMAL[(M[,D])] | FLOAT64 | n/a |
SMALLMONEY[(M[,D])] | FLOAT64 | n/a |
MONEY[(M[,D])] | FLOAT64 | n/a |
- decimal.handling.mode=string
-
Table 10. Mappings when decimal.handling.mode=string

SQL Server type | Literal type | Semantic type |
---|---|---|
NUMERIC[(M[,D])] | STRING | n/a |
DECIMAL[(M[,D])] | STRING | n/a |
SMALLMONEY[(M[,D])] | STRING | n/a |
MONEY[(M[,D])] | STRING | n/a |
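The mode is selected with the `decimal.handling.mode` connector property (the default is `precise`). The following fragment is a sketch that opts into the `string` representation, which encodes decimal values as strings:

{
  "decimal.handling.mode": "string"
}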
For {prodname} to capture change events from SQL Server tables, a SQL Server administrator with the necessary privileges must first run a query to enable CDC on the database. The administrator must then enable CDC for each table that you want Debezium to capture.
Note
|
By default, JDBC connections to Microsoft SQL Server are protected by SSL encryption.
If SSL is not enabled for a SQL Server database, or if you want to connect to the database without using SSL, you can disable SSL by setting the value of the |
After CDC is applied, it captures all of the `INSERT`, `UPDATE`, and `DELETE` operations that are committed to the tables for which CDC is enabled.
The {prodname} connector can then capture these events and emit them to Kafka topics.
Before you can enable CDC for a table, you must enable it for the SQL Server database. A SQL Server administrator enables CDC by running a system stored procedure. System stored procedures can be run by using SQL Server Management Studio, or by using Transact-SQL.
-
You are a member of the sysadmin fixed server role for the SQL Server.
-
You are a db_owner of the database.
-
The SQL Server Agent is running.
Note
|
The SQL Server CDC feature processes changes that occur in user-created tables only. You cannot enable CDC on the SQL Server master database.
|
-
From the View menu in SQL Server Management Studio, click Template Explorer.
-
In the Template Browser, expand SQL Server Templates.
-
Expand Change Data Capture > Configuration and then click Enable Database for CDC.
-
In the template, replace the database name in the `USE` statement with the name of the database that you want to enable for CDC.
-
Run the stored procedure `sys.sp_cdc_enable_db` to enable the database for CDC.
After the database is enabled for CDC, a schema with the name `cdc` is created, along with a CDC user, metadata tables, and other system objects.
The following example shows how to enable CDC for the database `MyDB`:
Example: Enabling a SQL Server database for the CDC template
USE MyDB
GO
EXEC sys.sp_cdc_enable_db
GO
A SQL Server administrator must enable change data capture on the source tables that you want {prodname} to capture.
The database must already be enabled for CDC.
To enable CDC on a table, a SQL Server administrator runs the stored procedure `sys.sp_cdc_enable_table` for the table.
The stored procedures can be run by using SQL Server Management Studio, or by using Transact-SQL.
SQL Server CDC must be enabled for every table that you want to capture.
-
CDC is enabled on the SQL Server database.
-
The SQL Server Agent is running.
-
You are a member of the `db_owner` fixed database role for the database.
-
From the View menu in SQL Server Management Studio, click Template Explorer.
-
In the Template Browser, expand SQL Server Templates.
-
Expand Change Data Capture > Configuration, and then click Enable Table Specifying Filegroup Option.
-
In the template, replace the database name in the `USE` statement with the name of the database that contains the table that you want to capture.
-
Run the stored procedure `sys.sp_cdc_enable_table`.
The following example shows how to enable CDC for the table `MyTable`:
Example: Enabling CDC for a SQL Server table
USE MyDB
GO
EXEC sys.sp_cdc_enable_table
@source_schema = N'dbo',
@source_name = N'MyTable', //(1)
@role_name = N'MyRole', //(2)
@filegroup_name = N'MyDB_CT', //(3)
@supports_net_changes = 0
GO
-
Specifies the name of the table that you want to capture.
-
Specifies a role `MyRole` to which you can add users to whom you want to grant `SELECT` permission on the captured columns of the source table. Users in the `sysadmin` or `db_owner` role also have access to the specified change tables. Set the value of `@role_name` to `NULL` to allow only members of the `sysadmin` or `db_owner` roles to have full access to the captured information.
-
Specifies the `filegroup` where SQL Server places the change table for the captured table. The named `filegroup` must already exist. It is best not to locate change tables in the same `filegroup` that you use for source tables.
A SQL Server administrator can run a system stored procedure to query a database or table to retrieve its CDC configuration information. The stored procedures can be run by using SQL Server Management Studio, or by using Transact-SQL.
-
You have `SELECT` permission on all of the captured columns of the capture instance. Members of the `db_owner` database role can view information for all of the defined capture instances.
-
-
From the View menu in SQL Server Management Studio, click Object Explorer.
-
From the Object Explorer, expand Databases, and then expand your database object, for example, MyDB.
-
Expand Programmability > Stored Procedures > System Stored Procedures.
-
Run the `sys.sp_cdc_help_change_data_capture` stored procedure to query the table. Queries should not return empty results.
The following example runs the stored procedure `sys.sp_cdc_help_change_data_capture` on the database `MyDB`:
Example: Querying a table for CDC configuration information
USE MyDB;
GO
EXEC sys.sp_cdc_help_change_data_capture
GO
The query returns configuration information for each table in the database that is enabled for CDC and that contains change data that the caller is authorized to access. If the result is empty, verify that the user has privileges to access both the capture instance and the CDC tables.
The {prodname} SQL Server connector can be used with SQL Server on Azure. Refer to this example for configuring CDC for SQL Server on Azure and using it with {prodname}.
When a database administrator enables change data capture for a source table, the capture job agent begins to run. The agent reads new change event records from the transaction log and replicates the event records to a change data table. Between the time that a change is committed in the source table, and the time that the change appears in the corresponding change table, there is always a small latency interval. This latency interval represents a gap between when changes occur in the source table and when they become available for {prodname} to stream to Apache Kafka.
Ideally, for applications that must respond quickly to changes in data, you want to maintain close synchronization between the source and change tables. You might imagine that running the capture agent to continuously process change events as rapidly as possible might result in increased throughput and reduced latency — populating change tables with new event records as soon as possible after the events occur, in near real time. However, this is not necessarily the case. There is a performance penalty to pay in the pursuit of more immediate synchronization. Each time that the capture job agent queries the database for new event records, it increases the CPU load on the database host. The additional load on the server can have a negative effect on overall database performance, and potentially reduce transaction efficiency, especially during times of peak database use.
It’s important to monitor database metrics so that you know if the database reaches the point where the server can no longer support the capture agent’s level of activity. If you notice performance problems, there are SQL Server capture agent settings that you can modify to help balance the overall CPU load on the database host with a tolerable degree of latency.
On SQL Server, parameters that control the behavior of the capture job agent are defined in the SQL Server table msdb.dbo.cdc_jobs
.
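To review the current capture job settings before you change them, you can run the sys.sp_cdc_help_jobs stored procedure, or query the msdb.dbo.cdc_jobs table directly, as in the following sketch. Verify the column names against your SQL Server version.
USE MyDB;
GO
EXEC sys.sp_cdc_help_jobs;
GO
-- Alternatively, read the job definitions directly:
SELECT job_type, pollinginterval, maxtrans, maxscans, continuous
FROM msdb.dbo.cdc_jobs;
GO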
If you experience performance issues while running the capture job agent, adjust capture job settings to reduce CPU load by running the sys.sp_cdc_change_job
stored procedure and supplying new values, as shown in the example that follows the parameter descriptions below.
Note
|
Specific guidance about how to configure SQL Server capture job agent parameters is beyond the scope of this documentation. |
The following parameters are the most significant for modifying capture agent behavior for use with the {prodname} SQL Server connector:
pollinginterval
-
-
Specifies the number of seconds that the capture agent waits between log scan cycles.
-
A higher value reduces the load on the database host and increases latency.
-
A value of
0
specifies no wait between scans.
-
The default value is
5
.
-
maxtrans
-
-
Specifies the maximum number of transactions to process during each log scan cycle. After the capture job processes the specified number of transactions, it pauses for the length of time that the
pollinginterval
specifies before the next scan begins.
-
A lower value reduces the load on the database host and increases latency.
-
The default value is
500
.
-
maxscans
-
-
Specifies a limit on the number of scan cycles that the capture job can attempt in capturing the full contents of the database transaction log. If the
continuous
parameter is set to 1
, the job pauses for the length of time that the pollinginterval
specifies before it resumes scanning.
-
A lower value reduces the load on the database host and increases latency.
-
The default value is
10
.
-
-
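For example, to reduce the load on the database host, you might lengthen the polling interval and raise the per-scan transaction limit. The following sketch uses illustrative values only; tune them for your own workload. The new settings take effect after the capture job restarts.
USE MyDB;
GO
EXEC sys.sp_cdc_change_job
    @job_type = N'capture',
    @pollinginterval = 10,
    @maxtrans = 1000,
    @maxscans = 10;
GO
-- Restart the capture job so that the new values take effect.
EXEC sys.sp_cdc_stop_job N'capture';
GO
EXEC sys.sp_cdc_start_job N'capture';
GO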
For more information about capture agent parameters, see the SQL Server documentation.
For the complete list of the configuration properties that you can set for the {prodname} SQL Server connector, see SQL Server connector properties.
When the connector starts, it performs a consistent snapshot of the SQL Server databases that the connector is configured for. The connector then starts generating data change events for row-level operations and streaming the change event records to Kafka topics.
The {prodname} SQL Server connector has numerous configuration properties that you can use to achieve the right connector behavior for your application. Many properties have default values.
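For orientation, the following sketch shows a minimal connector registration payload. All of the values (hostname, port, credentials, database and table names, topic prefix, and the schema history settings) are placeholders for illustration; the exact set of required properties depends on your connector version, so replace the values with ones for your environment and add further properties as needed.
{
  "name": "inventory-connector",
  "config": {
    "connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
    "database.hostname": "sqlserver.example.com",
    "database.port": "1433",
    "database.user": "sa",
    "database.password": "Password!",
    "database.names": "testDB",
    "topic.prefix": "server1",
    "table.include.list": "dbo.customers",
    "schema.history.internal.kafka.bootstrap.servers": "kafka:9092",
    "schema.history.internal.kafka.topic": "schemahistory.server1"
  }
}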
Information about the properties is organized as follows:
-
Database schema history connector configuration properties that control how {prodname} processes events that it reads from the database schema history topic.
-
Pass-through database driver properties that control the behavior of the database driver.
The following configuration properties are required unless a default value is available.
Property | Default | Description | ||
---|---|---|---|---|
No default |
Unique name for the connector. Attempting to register again with the same name will fail. (This property is required by all Kafka Connect connectors.) |
|||
No default |
The name of the Java class for the connector. Always use a value of |
|||
|
Specifies the maximum number of tasks that the connector can use to capture data from the database instance. |
|||
No default |
IP address or hostname of the SQL Server database server. |
|||
|
Integer port number of the SQL Server database server.
If both |
|||
No default |
Username to use when connecting to the SQL Server database server. Can be omitted when using Kerberos authentication, which can be configured using pass-through properties. |
|||
No default |
Password to use when connecting to the SQL Server database server. |
|||
No default |
Specifies the instance name of the SQL Server named instance.
If both |
|||
No default |
Topic prefix that provides a namespace for the SQL Server database server that you want {prodname} to capture.
The prefix should be unique across all other connectors, since it is used as the prefix for all Kafka topic names that receive records from this connector.
Use only alphanumeric characters, hyphens, dots, and underscores in the database server logical name.
|
|||
No default |
An optional, comma-separated list of regular expressions that match names of schemas for which you want to capture changes.
Any schema name not included in To match the name of a schema, {prodname} applies the regular expression that you specify as an anchored regular expression.
That is, the specified expression is matched against the entire name string of the schema; it does not match substrings that might be present in a schema name. |
|||
No default |
An optional, comma-separated list of regular expressions that match names of schemas for which you do not want to capture changes.
Any schema whose name is not included in To match the name of a schema, {prodname} applies the regular expression that you specify as an anchored regular expression.
That is, the specified expression is matched against the entire name string of the schema; it does not match substrings that might be present in a schema name. |
|||
No default |
An optional comma-separated list of regular expressions that match fully-qualified table identifiers for tables that you want {prodname} to capture.
By default, the connector captures all non-system tables for the designated schemas.
When this property is set, the connector captures changes only from the specified tables.
Each identifier is of the form schemaName.tableName. To match the name of a table, {prodname} applies the regular expression that you specify as an anchored regular expression.
That is, the specified expression is matched against the entire name string of the table; it does not match substrings that might be present in a table name. |
|||
No default |
An optional comma-separated list of regular expressions that match fully-qualified table identifiers for the tables that you want to exclude from being captured.
{prodname} captures all tables that are not included in To match the name of a table, {prodname} applies the regular expression that you specify as an anchored regular expression.
That is, the specified expression is matched against the entire name string of the table; it does not match substrings that might be present in a table name. |
|||
empty string |
An optional comma-separated list of regular expressions that match the fully-qualified names of columns that should be included in the change event message values.
Fully-qualified names for columns are of the form schemaName.tableName.columnName.
Note that primary key columns are always included in the event’s key, even if not included in the value. To match the name of a column, {prodname} applies the regular expression that you specify as an anchored regular expression.
That is, the specified expression is matched against the entire name string of the column; it does not match substrings that might be present in a column name. |
|||
empty string |
An optional comma-separated list of regular expressions that match the fully-qualified names of columns that should be excluded from change event message values.
Fully-qualified names for columns are of the form schemaName.tableName.columnName.
Note that primary key columns are always included in the event’s key, even if they are excluded from the value. To match the name of a column, {prodname} applies the regular expression that you specify as an anchored regular expression.
That is, the specified expression is matched against the entire name string of the column; it does not match substrings that might be present in a column name. |
|||
|
Specifies whether to skip publishing messages when there is no change in included columns. This would essentially filter messages if there is no change in columns included as per |
|||
n/a |
An optional, comma-separated list of regular expressions that match the fully-qualified names of character-based columns.
Fully-qualified names for columns are of the form `<schemaName>.<tableName>.<columnName>`. A pseudonym consists of the hashed value that results from applying the specified hashAlgorithm and salt.
Based on the hash function that is used, referential integrity is maintained, while column values are replaced with pseudonyms.
Supported hash functions are described in the {link-java7-standard-names}[MessageDigest section] of the Java Cryptography Architecture Standard Algorithm Name Documentation. column.mask.hash.SHA-256.with.salt.CzQMA0cB5K = inventory.orders.customerName, inventory.shipment.customerName If necessary, the pseudonym is automatically shortened to the length of the column.
The connector configuration can include multiple properties that specify different hash algorithms and salts. |
|||
|
Time, date, and timestamps can be represented with different kinds of precision, including: |
|||
|
Specifies how the connector should handle values for |
|||
|
Boolean value that specifies whether the connector should publish changes in the database schema to a Kafka topic with the same name as the database server ID. Each schema change is recorded with a key that contains the database name and a value that is a JSON structure that describes the schema update. This is independent of how the connector internally records database schema history. The default is |
|||
|
Controls whether a delete event is followed by a tombstone event. |
|||
n/a |
An optional, comma-separated list of regular expressions that match the fully-qualified names of character-based columns.
Set this property if you want to truncate the data in a set of columns when it exceeds the number of characters specified by the length in the property name.
Set The fully-qualified name of a column observes the following format: You can specify multiple properties with different lengths in a single configuration. |
|||
n/a |
An optional, comma-separated list of regular expressions that match the fully-qualified names of character-based columns.
Set this property if you want the connector to mask the values for a set of columns, for example, if they contain sensitive data.
Set The fully-qualified name of a column observes the following format: schemaName.tableName.columnName. To match the name of a column, {prodname} applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the column; the expression does not match substrings that might be present in a column name. You can specify multiple properties with different lengths in a single configuration. |
|||
n/a |
An optional, comma-separated list of regular expressions that match the fully-qualified names of columns for which you want the connector to emit extra parameters that represent column metadata. When this property is set, the connector adds the following fields to the schema of event records:
These parameters propagate a column’s original type name and length (for variable-width types), respectively. The fully-qualified name of a column observes the following format: schemaName.tableName.columnName. |
|||
n/a |
An optional, comma-separated list of regular expressions that specify the fully-qualified names of data types that are defined for columns in a database. When this property is set, for columns with matching data types, the connector emits event records that include the following extra fields in their schema:
These parameters propagate a column’s original type name and length (for variable-width types), respectively. The fully-qualified name of a column observes the following format: schemaName.tableName.typeName. For the list of SQL Server-specific data type names, see the SQL Server data type mappings. |
|||
n/a |
A list of expressions that specify the columns that the connector uses to form custom message keys for change event records that it publishes to the Kafka topics for specified tables. By default, {prodname} uses the primary key column of a table as the message key for records that it emits.
In place of the default, or to specify a key for tables that lack a primary key, you can configure custom message keys based on one or more columns. Each fully-qualified table name is a regular expression in the following format: There is no limit to the number of columns that you use to create custom message keys. However, it’s best to use the minimum number that are required to specify a unique key. |
|||
bytes |
Specifies how binary ( |
|||
none |
Specifies how schema names should be adjusted for compatibility with the message converter used by the connector. Possible settings:
|
|||
none |
Specifies how field names should be adjusted for compatibility with the message converter used by the connector. Possible settings:
For more information, see {link-prefix}:{link-avro-serialization}#avro-naming[Avro naming]. |
The following advanced configuration properties have good defaults that will work in most situations and therefore rarely need to be specified in the connector’s configuration.
Property | Default | Description |
---|---|---|
No default |
Enumerates a comma-separated list of the symbolic names of the {link-prefix}:{link-custom-converters}#custom-converters[custom converter] instances that the connector can use.
For example,
You must set the For each converter that you configure for a connector, you must also add a
For example, isbn.type: io.debezium.test.IsbnConverter If you want to further control the behavior of a configured converter, you can add one or more configuration parameters to pass values to the converter.
To associate any additional configuration parameter with a converter, prefix the parameter names with the symbolic name of the converter.
For example, isbn.schema.name: io.debezium.sqlserver.type.Isbn |
|
initial |
A mode for taking an initial snapshot of the structure and optionally data of captured tables. Once the snapshot is complete, the connector continues to read change events from the database’s CDC change tables. The following values are supported:
|
|
All tables specified in |
An optional, comma-separated list of regular expressions that match the fully-qualified names ( To match the name of a table, {prodname} applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the table; it does not match substrings that might be present in a table name. |
|
repeatable_read |
Mode to control which transaction isolation level is used and how long the connector locks tables that are designated for capture. The following values are supported:
The Mode choice also affects data consistency. Only |
|
|
Specifies how the connector should react to exceptions during processing of events.
|
|
|
Positive integer value that specifies the number of milliseconds the connector should wait during each iteration for new change events to appear. Defaults to 500 milliseconds, or 0.5 second. |
|
|
Positive integer value that specifies the maximum number of records that the blocking queue can hold.
When {prodname} reads events streamed from the database, it places the events in the blocking queue before it writes them to Kafka.
The blocking queue can provide backpressure for reading change events from the database
in cases where the connector ingests messages faster than it can write them to Kafka, or when Kafka becomes unavailable.
Events that are held in the queue are disregarded when the connector periodically records offsets.
Always set the value of |
|
|
A long integer value that specifies the maximum volume of the blocking queue in bytes.
By default, volume limits are not specified for the blocking queue.
To specify the number of bytes that the queue can consume, set this property to a positive long value. |
|
|
Positive integer value that specifies the maximum size of each batch of events that should be processed during each iteration of this connector. |
|
|
Controls how frequently heartbeat messages are sent. |
|
No default |
An interval in milliseconds that the connector should wait before taking a snapshot after starting up; |
|
|
Specifies the maximum number of rows that should be read in one go from each table while taking a snapshot. The connector will read the table contents in multiple batches of this size. Defaults to 2000. |
|
No default |
Specifies the number of rows that will be fetched for each database round-trip of a given query. Defaults to the JDBC driver’s default fetch size. |
|
|
An integer value that specifies the maximum amount of time (in milliseconds) to wait to obtain table locks when performing a snapshot. If table locks cannot be acquired in this time interval, the snapshot will fail (also see snapshots). |
|
No default |
Specifies the table rows to include in a snapshot. Use the property if you want a snapshot to include only a subset of the rows in a table. This property affects snapshots only. It does not apply to events that the connector reads from the log. The property contains a comma-separated list of fully-qualified table names in the form From a "snapshot.select.statement.overrides": "customer.orders", "snapshot.select.statement.overrides.customer.orders": "SELECT * FROM [customers].[orders] WHERE delete_flag = 0 ORDER BY id DESC" In the resulting snapshot, the connector includes only the records for which |
|
|
When set to |
|
10000 (10 seconds) |
The number of milliseconds to wait before restarting a connector after a retriable error occurs. |
|
|
A comma-separated list of operation types that will be skipped during streaming.
The operations include: |
|
No default value |
Fully-qualified name of the data collection that is used to send {link-prefix}:{link-signalling}#debezium-signaling-enabling-source-signaling-channel[signals] to the connector. |
|
source |
List of the signaling channel names that are enabled for the connector. By default, the following channels are available:
|
|
No default |
List of notification channel names that are enabled for the connector. By default, the following channels are available:
|
|
|
Allow schema changes during an incremental snapshot. When enabled, the connector detects schema changes during an incremental snapshot and re-selects the current chunk to avoid locking DDLs. |
|
|
The maximum number of rows that the connector fetches and reads into memory during an incremental snapshot chunk. Increasing the chunk size provides greater efficiency, because the snapshot runs fewer snapshot queries of a greater size. However, larger chunk sizes also require more memory to buffer the snapshot data. Adjust the chunk size to a value that provides the best performance in your environment. |
|
|
Specifies the watermarking mechanism that the connector uses during an incremental snapshot to deduplicate events that might be captured by an incremental snapshot and then recaptured after streaming resumes.
|
|
0 |
Specifies the maximum number of transactions per iteration to be used to reduce the memory footprint when streaming changes from multiple tables in a database.
When set to |
|
|
Applies the OPTION(RECOMPILE) query hint to all SELECT statements that are used during an incremental snapshot. This can help to resolve parameter sniffing issues, but can increase CPU load on the source database, depending on the frequency of query execution. |
|
|
The name of the TopicNamingStrategy class that should be used to determine the topic name for data change, schema change, transaction, and heartbeat events; defaults to |
|
|
Specifies the delimiter for topic names; defaults to |
|
|
The size of the bounded concurrent hash map that is used to hold topic names. This cache helps to determine the topic name that corresponds to a given data collection. |
|
|
Controls the name of the topic to which the connector sends heartbeat messages. The topic name has this pattern: |
|
|
Controls the name of the topic to which the connector sends transaction metadata messages. The topic name has this pattern: For more information, see Transaction Metadata. |
|
|
Specifies the number of threads that the connector uses when performing an initial snapshot. To enable parallel initial snapshots, set the property to a value greater than 1. In a parallel initial snapshot, the connector processes multiple tables concurrently. |
|
|
Accepts key-value pairs that customize the MBean object name by appending tags to the end of the regular name. Each key represents a tag for the MBean object name, and the corresponding value is the value of that tag. For example: |
|
|
The maximum number of retries on retriable errors (for example, connection errors) before failing (-1 = no limit, 0 = disabled, > 0 = number of retries). |
When change data capture is enabled for a SQL Server table, as changes occur in the table, event records are persisted to a capture table on the server. If you introduce a change in the structure of the source table, for example, by adding a new column, that change is not dynamically reflected in the change table. For as long as the capture table continues to use the outdated schema, the {prodname} connector is unable to emit data change events for the table correctly. You must intervene to refresh the capture table to enable the connector to resume processing change events.
Because of the way that CDC is implemented in SQL Server, you cannot use {prodname} to update capture tables. Refreshing a capture table requires a SQL Server database operator with elevated privileges. As a {prodname} user, you must coordinate tasks with the SQL Server database operator to complete the schema refresh and restore streaming to Kafka topics.
You can use one of the following methods to update capture tables after a schema change:
-
Offline schema updates require you to stop the {prodname} connector before you can update capture tables.
-
Online schema updates can update capture tables while the {prodname} connector is running.
There are advantages and disadvantages to using each type of procedure.
Warning
|
Whether you use the online or offline update method, you must complete the entire schema update process before you apply subsequent schema updates on the same source table. The best practice is to execute all DDLs in a single batch so the procedure can be run only once. |
Note
|
Some schema changes are not supported on source tables that have CDC enabled. For example, if CDC is enabled on a table, SQL Server does not allow you to change the schema of the table by renaming one of its columns or changing the column type. |
Note
|
After you change a column in a source table from |
Note
|
After you rename a table using |
Offline schema updates provide the safest method for updating capture tables. However, offline updates might not be feasible for use with applications that require high availability.
-
An update was committed to the schema of a SQL Server table that has CDC enabled.
-
You are a SQL Server database operator with elevated privileges.
-
Suspend the application that updates the database.
-
Wait for the {prodname} connector to stream all unstreamed change event records.
-
Stop the {prodname} connector.
-
Apply all changes to the source table schema.
-
Create a new capture table for the updated source table by running the
sys.sp_cdc_enable_table
stored procedure with a unique value for the parameter @capture_instance
, as shown in the sketch that follows this procedure.
-
Resume the application that you suspended in Step 1.
-
Start the {prodname} connector.
-
After the {prodname} connector starts streaming from the new capture table, drop the old capture table by running the stored procedure
sys.sp_cdc_disable_table
with the parameter @capture_instance
set to the old capture instance name.
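The following sketch illustrates Steps 5 and 8 for a hypothetical dbo.customers source table; the capture instance names are placeholders that you replace with your own.
-- Step 5: create a capture table that reflects the new schema by using a new capture instance name.
EXEC sys.sp_cdc_enable_table
    @source_schema = 'dbo',
    @source_name = 'customers',
    @role_name = NULL,
    @capture_instance = 'dbo_customers_v2';
GO
-- Step 8: after the connector streams from the new capture table, drop the old capture instance.
EXEC sys.sp_cdc_disable_table
    @source_schema = 'dbo',
    @source_name = 'customers',
    @capture_instance = 'dbo_customers';
GO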
The procedure for completing an online schema update is simpler than the procedure for running an offline schema update, and you can complete it without requiring any downtime in application and data processing. However, with online schema updates, a potential processing gap can occur after you update the schema in the source database, but before you create the new capture instance. During that interval, change events continue to be captured by the old instance of the change table, and the change data that is saved to the old table retains the structure of the earlier schema. So, for example, if you added a new column to a source table, change events that are produced before the new capture table is ready do not contain a field for the new column. If your application does not tolerate such a transition period, it is best to use the offline schema update procedure.
-
An update was committed to the schema of a SQL Server table that has CDC enabled.
-
You are a SQL Server database operator with elevated privileges.
-
Apply all changes to the source table schema.
-
Create a new capture table for the updated source table by running the
sys.sp_cdc_enable_table
stored procedure with a unique value for the parameter @capture_instance
.
-
When {prodname} starts streaming from the new capture table, you can drop the old capture table by running the
sys.sp_cdc_disable_table
stored procedure with the parameter @capture_instance
set to the old capture instance name.
-
Modify the schema of the
customers
source table by running the following query to add the phone_number
field:
ALTER TABLE customers ADD phone_number VARCHAR(32);
-
Create the new capture instance by running the
sys.sp_cdc_enable_table
stored procedure.
EXEC sys.sp_cdc_enable_table @source_schema = 'dbo', @source_name = 'customers', @role_name = NULL, @supports_net_changes = 0, @capture_instance = 'dbo_customers_v2';
GO
-
Insert new data into the
customers
table by running the following query:
INSERT INTO customers(first_name,last_name,email,phone_number) VALUES ('John','Doe','john.doe@example.com', '+1-555-123456');
GO
The Kafka Connect log reports on configuration updates through entries similar to the following message:
connect_1 | 2019-01-17 10:11:14,924 INFO || Multiple capture instances present for the same table: Capture instance "dbo_customers" [sourceTableId=testDB.dbo.customers, changeTableId=testDB.cdc.dbo_customers_CT, startLsn=00000024:00000d98:0036, changeTableObjectId=1525580473, stopLsn=00000025:00000ef8:0048] and Capture instance "dbo_customers_v2" [sourceTableId=testDB.dbo.customers, changeTableId=testDB.cdc.dbo_customers_v2_CT, startLsn=00000025:00000ef8:0048, changeTableObjectId=1749581271, stopLsn=NULL] [io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource]
connect_1 | 2019-01-17 10:11:14,924 INFO || Schema will be changed for ChangeTable [captureInstance=dbo_customers_v2, sourceTableId=testDB.dbo.customers, changeTableId=testDB.cdc.dbo_customers_v2_CT, startLsn=00000025:00000ef8:0048, changeTableObjectId=1749581271, stopLsn=NULL] [io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource]
...
connect_1 | 2019-01-17 10:11:33,719 INFO || Migrating schema to ChangeTable [captureInstance=dbo_customers_v2, sourceTableId=testDB.dbo.customers, changeTableId=testDB.cdc.dbo_customers_v2_CT, startLsn=00000025:00000ef8:0048, changeTableObjectId=1749581271, stopLsn=NULL] [io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource]
Eventually, the
phone_number
field is added to the schema and its value appears in messages written to the Kafka topic.
...
     {
       "type": "string",
       "optional": true,
       "field": "phone_number"
     }
...
     "after": {
       "id": 1005,
       "first_name": "John",
       "last_name": "Doe",
       "email": "john.doe@example.com",
       "phone_number": "+1-555-123456"
     },
-
Drop the old capture instance by running the
sys.sp_cdc_disable_table
stored procedure.
EXEC sys.sp_cdc_disable_table @source_schema = 'dbo', @source_name = 'customers', @capture_instance = 'dbo_customers';
GO
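To confirm that only the new capture instance remains for the table, you can re-run the configuration query shown earlier in this document; the output should list dbo_customers_v2 as the sole capture instance for dbo.customers. The database name testDB matches the preceding example; substitute your own.
USE testDB;
GO
EXEC sys.sp_cdc_help_change_data_capture
    @source_schema = N'dbo',
    @source_name = N'customers';
GO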
The {prodname} SQL Server connector provides three types of metrics that are in addition to the built-in support for JMX metrics that Zookeeper, Kafka, and Kafka Connect provide. The connector provides the following metrics:
-
Snapshot metrics for monitoring the connector when performing snapshots.
-
Streaming metrics for monitoring the connector when reading CDC table data.
-
Schema history metrics for monitoring the status of the connector’s schema history.
For information about how to expose the preceding metrics through JMX, see the {link-prefix}:{link-debezium-monitoring}#monitoring-debezium[{prodname} monitoring documentation].