Problem
(In incremental stage only)
dtle keeps each table's structure (metadata, e.g. column names and types) in memory, on both the src and dst side. This metadata is used when:
- [src] filtering rows by 'where' predicate
- [dst] constructing DML
- and more...
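As an illustration of the two uses above, here is a minimal sketch (the names `TableMeta`, `row_matches_where`, and `build_insert` are hypothetical, not dtle's actual API): binlog row events carry only positional values, so both predicate evaluation and DML construction depend on the cached column list.

```python
# Hypothetical sketch; not dtle's real types or API.
from dataclasses import dataclass


@dataclass
class ColumnMeta:
    name: str
    type_: str


@dataclass
class TableMeta:
    schema: str
    table: str
    columns: list


def row_matches_where(meta, row_values, predicate):
    """[src] Evaluate a 'where' predicate against a binlog row.

    The row carries only positional values; the column names needed to
    evaluate the predicate come from the in-memory metadata."""
    named = {c.name: v for c, v in zip(meta.columns, row_values)}
    return predicate(named)


def build_insert(meta, row_values):
    """[dst] Construct an INSERT; column names again come from the
    cached metadata, not from the binlog event itself."""
    cols = ", ".join(c.name for c in meta.columns)
    placeholders = ", ".join(["?"] * len(meta.columns))
    sql = f"INSERT INTO {meta.schema}.{meta.table} ({cols}) VALUES ({placeholders})"
    return sql, list(row_values)


meta = TableMeta("db1", "t1", [ColumnMeta("id", "int"), ColumnMeta("name", "varchar(32)")])
print(row_matches_where(meta, (1, "a"), lambda r: r["id"] > 0))  # True
print(build_insert(meta, (1, "a"))[0])
```

If the cached metadata drifts from the structure the row was written under, both functions silently map values to the wrong columns, which is exactly the problem described below.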
Src MySQL Task
The metadata for a table is queried from the src mysqld when a DDL on that table is encountered. Consider the following case:
| mysqld | dtle |
|---|---|
| DDL1 executed | |
| some DML | |
| DDL2 executed | |
| | process DDL1 |
| | - get metadata from mysqld |
| | - but the data is after DDL2 |
| | process some DML with inconsistent metadata |
| more DML | |
| | process DDL2 |
| | ... |
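The race in the table can be simulated in a few lines (illustrative only; the event shapes and the `replay` helper are hypothetical): the live metadata query issued while processing DDL1 returns the post-DDL2 column list, which no longer matches rows logged between DDL1 and DDL2.

```python
def replay(binlog, query_live_metadata):
    """Replay a binlog stream the way the incremental stage does:
    re-query live metadata on each DDL, then use it for the DML that
    follows. Returns (cached_column_count, row_value_count) pairs for
    every DML whose cached metadata no longer matches the logged row."""
    mismatches = []
    cached = None
    for kind, payload in binlog:
        if kind == "DDL":
            cached = query_live_metadata()  # reads the *current* schema
        elif len(cached) != len(payload):
            mismatches.append((len(cached), len(payload)))
    return mismatches


# Stream as in the table: the DML row was written when t1 had 2 columns.
binlog = [
    ("DDL", "DDL1"),
    ("DML", (1, "a")),               # 2 values, schema between DDL1 and DDL2
    ("DDL", "DDL2 adds column c"),
]

# What the server reports *now*, i.e. after DDL2 already ran:
live_columns = ["id", "name", "c"]

print(replay(binlog, lambda: list(live_columns)))  # [(3, 2)]
```

The metadata query is correct for the server's current state but wrong for the position in the binlog being processed; that gap is the whole issue.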
Dst MySQL Task
The metadata for a table is queried from the dst mysqld after a DDL on that table has been replayed. Nothing appears to be wrong here, since the query happens only after the DDL has actually been applied.
Dst Kafka Task
On the dst side (Kafka), the metadata is passed from the src MySQL task, so it might be inconsistent with the actual table structure at the time of the row change.
Possible solutions
- Run a helper mysqld that executes only the DDL statements, and query table structures from it
- Parse the DDL and compute the table structure change
  - like Debezium does
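The second option can be sketched as follows. This is a toy subset handling only `ADD COLUMN` / `DROP COLUMN` via regex; a real implementation would use a full SQL parser (Debezium maintains its own DDL parser for this). The function name and shapes are illustrative, not from dtle.

```python
import re


def apply_ddl(columns, ddl):
    """Evolve a cached column list by parsing the DDL text instead of
    querying the live server. Toy parser: handles only single-column
    ADD/DROP; anything else is passed through unchanged."""
    m = re.match(r"ALTER TABLE\s+\S+\s+ADD COLUMN\s+(\w+)", ddl, re.I)
    if m:
        return columns + [m.group(1)]
    m = re.match(r"ALTER TABLE\s+\S+\s+DROP COLUMN\s+(\w+)", ddl, re.I)
    if m:
        return [c for c in columns if c != m.group(1)]
    return columns  # unhandled statement: a real parser is needed here


cols = ["id", "name"]
cols = apply_ddl(cols, "ALTER TABLE t1 ADD COLUMN c INT")
cols = apply_ddl(cols, "ALTER TABLE t1 DROP COLUMN name")
print(cols)  # ['id', 'c']
```

Because the cached structure is advanced by the DDL event itself, at its position in the binlog, it always matches the rows logged after it, avoiding the race above without any live metadata query.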