The collected data was processed with Jupyter notebooks for the experiment comparing the Adabas Event Replicator Target Adapter with a custom Kafka-based Adabas replication solution.
First, the performance of the replication pipeline was tested without any parallelism: the source and sink connectors ran on one Connect worker each, with a single task each, against a single Kafka broker hosting one partition.
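A minimal sketch of what the baseline connector configurations might look like when submitted to the Kafka Connect REST API. The connector names, classes, and topic name are illustrative placeholders, not the actual ones used in the experiment; only the `tasks.max` setting reflects the scenario described above.

```python
import json

# Hypothetical baseline configs: one task per connector, a single topic.
# Connector classes and names are placeholders for illustration only.
source_config = {
    "name": "adabas-source",
    "config": {
        "connector.class": "example.AdabasSourceConnector",  # placeholder
        "tasks.max": "1",  # no parallelism in the baseline scenario
        "topic": "adabas-replication",
    },
}
sink_config = {
    "name": "adabas-sink",
    "config": {
        "connector.class": "example.AdabasSinkConnector",  # placeholder
        "tasks.max": "1",
        "topics": "adabas-replication",
    },
}

# Each payload would be POSTed to a Connect worker's REST endpoint, e.g.:
#   curl -X POST -H "Content-Type: application/json" \
#        --data @source.json http://localhost:8083/connectors
print(json.dumps(source_config, indent=2))
```

In later scenarios, only `tasks.max` and the topic's partition count need to change; the connector logic itself stays the same.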
The second scenario tested the parallelization capabilities of Kafka. Three source tasks and three sink tasks were used, running on two workers in total: one for the source connector and one for the sink connector. The cluster comprised three brokers, each assigned as partition leader for one of three partitions.
The third scenario tested the parallelization capabilities of Kafka to a higher degree than the previous one: seven source tasks and seven sink tasks, with each connector still running on a single worker. Seven partitions were configured, balanced among the three brokers; the broker count was deliberately kept at three to test whether the brokers could handle the higher partition count.
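With more partitions than brokers, partition leadership can no longer be one-to-one; seven partitions on three brokers yields a 3/2/2 split. A small sketch of that distribution, assuming a simple round-robin assignment (Kafka's actual default assignment is round-robin starting from a random broker, so the split is the same but the starting broker may differ; broker IDs here are illustrative):

```python
from collections import Counter

BROKERS = [0, 1, 2]   # illustrative broker IDs
NUM_PARTITIONS = 7

# Round-robin leader assignment across the brokers.
leaders = {p: BROKERS[p % len(BROKERS)] for p in range(NUM_PARTITIONS)}

# Count how many partitions each broker leads.
load = Counter(leaders.values())

print(leaders)  # partition -> leader broker
print(load)     # one broker leads 3 partitions, the other two lead 2 each
```

The uneven 3/2/2 split means one broker carries slightly more leader load than the others, which is one reason this scenario stresses the brokers more than the 3-partition case.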
The fourth scenario was similar to the previous one, except that the seven source tasks and seven sink tasks were balanced across two Connect workers per connector (four workers in total). The aim was to gauge whether distributing tasks across workers would yield additional throughput improvements.
The last scenario increased parallelization significantly, to see whether performance would continue to scale: the source and sink connectors each ran on five workers, with 20 tasks per connector and 20 partitions hosted on the three brokers.
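The five scenarios form a scaling series. The sketch below merely restates the parameters from the descriptions above and checks one design constraint: since a sink task can only consume from partitions assigned to it, configuring more sink tasks than partitions would leave tasks idle, so every scenario keeps the two numbers equal.

```python
# Parameters of the five scenarios, as described in the text above.
scenarios = [
    {"source_tasks": 1,  "sink_tasks": 1,  "workers": 2,  "partitions": 1,  "brokers": 1},
    {"source_tasks": 3,  "sink_tasks": 3,  "workers": 2,  "partitions": 3,  "brokers": 3},
    {"source_tasks": 7,  "sink_tasks": 7,  "workers": 2,  "partitions": 7,  "brokers": 3},
    {"source_tasks": 7,  "sink_tasks": 7,  "workers": 4,  "partitions": 7,  "brokers": 3},
    {"source_tasks": 20, "sink_tasks": 20, "workers": 10, "partitions": 20, "brokers": 3},
]

# More sink tasks than partitions would leave consumers idle; verify the
# configurations avoid that.
for s in scenarios:
    assert s["sink_tasks"] <= s["partitions"]

print("all scenarios keep sink tasks <= partitions")
```

The partition count is thus the effective ceiling on sink-side parallelism, which is why each scenario raises partitions and tasks together.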