[ST] Fix MigrationST #9954
Conversation
Signed-off-by: Lukas Kral <lukywill16@gmail.com>
/azp run migration
Azure Pipelines successfully started running 1 pipeline(s).
/azp run migration
Azure Pipelines successfully started running 1 pipeline(s).
/azp run migration
Azure Pipelines successfully started running 1 pipeline(s).
Signed-off-by: Lukas Kral <lukywill16@gmail.com>
/azp run migration
Azure Pipelines successfully started running 1 pipeline(s).
Good job! The increase in request.timeout.ms and delivery.timeout.ms seems reasonable to me. Also, the logs now seem to be clean of any errors, so thanks for looking into that.
systemtest/src/test/java/io/strimzi/systemtest/migration/MigrationST.java
LGTM, assuming it works.
At the same time, you should plan investigations into the issues already mentioned in the description. This PR sounds like a temporary workaround to me, if I understood it correctly.
The only thing that needs investigation is the deletion of topics by the UTO.
Type of change
Description
This PR fixes a few things failing in the STs in the MigrationST class:
- Increased delivery.timeout.ms (together with request.timeout.ms) to 30s, which should be sufficient in case of disconnection from the node.
- Extended the wait for KRaftDualWriting and other states to 5 minutes; however, it is still not enough from time to time, depending on the KafkaRoller and when the Pods are rolled. This is being investigated as well.
- Other than this, I decreased the wait timeout for "default" resource deletion from 3 minutes to 2, as that should be a sufficient timeout for resources like CRDs, KafkaTopic, KafkaUser, Secret, ConfigMap, etc.
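For illustration, a minimal sketch (not code from this PR) of what the producer timeout values described above look like as Kafka client configuration. The property keys `request.timeout.ms` and `delivery.timeout.ms` are real Kafka producer configs; the class and method names here are hypothetical. Note that Kafka requires `delivery.timeout.ms` to be at least `request.timeout.ms` plus `linger.ms`:

```java
import java.util.Properties;

// Hypothetical helper showing the 30s timeout values mentioned in the
// PR description, expressed as Kafka producer properties.
public class TimeoutConfigSketch {

    static Properties producerProps() {
        Properties props = new Properties();
        // How long the client waits for a broker response to a single request.
        props.setProperty("request.timeout.ms", "30000");
        // Upper bound on the total time to report success or failure of a send;
        // Kafka enforces delivery.timeout.ms >= request.timeout.ms + linger.ms.
        props.setProperty("delivery.timeout.ms", "30000");
        return props;
    }

    public static void main(String[] args) {
        Properties props = producerProps();
        System.out.println(props.getProperty("request.timeout.ms"));
        System.out.println(props.getProperty("delivery.timeout.ms"));
    }
}
```

With these values, a producer stuck talking to a disconnected node fails fast (within 30s) instead of waiting for the much longer client defaults, which is what the test relies on during the migration rolls.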
Checklist