RHIROS-1117 - Refactor processors #321
base: main
Conversation
Codecov Report

Patch coverage:

```
@@ Coverage Diff @@
##             main     #321      +/-   ##
==========================================
+ Coverage   72.97%   73.70%   +0.73%
==========================================
  Files          27       28       +1
  Lines        1321     1316       -5
==========================================
+ Hits          964      970       +6
+ Misses        357      346      -11
==========================================
```

Flags with carried forward coverage won't be shown. View full report in Codecov by Sentry.
/retest
```python
finally:
    self.consumer.commit()
    LOG.warning("Stopping inventory consumer")
    self.consumer.close()
```
Hello @patilsuraj767 -
In the inventory-related processor we explicitly close the consumer, whereas in the engine-related processor we do not.
Could you share your thoughts, and any scenario you can think of that might cause an issue if we close the consumer on the engine side too?
Closing the consumer connection is good practice, though I believe it is not mandatory. As I understand it, closing the connection mainly speeds up recovery. For example, consider a pod consuming messages from Kafka that suddenly dies without closing its connection. OpenShift spins up a new pod, but the new pod cannot quickly start consuming from the Kafka topic until the session timeout (30s by default) elapses and the Kafka broker terminates the previous connection.
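The shutdown pattern being discussed can be sketched as follows. This is a minimal, self-contained illustration: `FakeConsumer` is a hypothetical stand-in for a real Kafka consumer (e.g. one from kafka-python or confluent-kafka), so it runs without a broker; only the try/finally commit-and-close structure mirrors the diff above.

```python
import logging

LOG = logging.getLogger(__name__)

class FakeConsumer:
    """Hypothetical stand-in for a Kafka consumer, for illustration only."""
    def __init__(self):
        self.committed = False
        self.closed = False

    def commit(self):
        # A real commit() would persist the current offsets to the broker.
        self.committed = True

    def close(self):
        # A real close() leaves the consumer group cleanly, so the broker
        # can rebalance immediately instead of waiting for the session
        # timeout to expire the dead member.
        self.closed = True

def run_consumer(consumer):
    try:
        pass  # poll/process messages here
    finally:
        # Commit offsets and close on the way out, so a replacement pod
        # can join the consumer group without waiting ~30s for the broker
        # to terminate the previous connection.
        consumer.commit()
        LOG.warning("Stopping inventory consumer")
        consumer.close()

consumer = FakeConsumer()
run_consumer(consumer)
assert consumer.committed and consumer.closed
```

The `finally` block guarantees commit-and-close even if message processing raises, which is the scenario that otherwise leaves a stale session on the broker.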
That makes it even better, then! I would close the connections on both sides. Thanks for the input!
I have rebased this! Requesting a review!
PR Title
Why do we need this change?
Please include the context of this change here.
Documentation update?
Security Checklist
Upon raising this PR please go through RedHatInsights/secure-coding-checklist
Checklist
Additional
Feel free to add any other relevant details such as links, notes, screenshots, here.