INDY-2208: Integration of Services: Cleanup #1300
Conversation
Signed-off-by: toktar <renata.toktar@dsr-corporation.com>
…y-plenum into public/indy-2169
- remove `l_update_watermark_from_3pc` from Ordering Service
- simplify `primaries_batch_needed` logic
- remove `requestQueues` from Shared Data
- remove `to_nodes` from Replica's `send`
- fix `_init_internal_bus` subscribed twice

Signed-off-by: toktar <renata.toktar@dsr-corporation.com>
…nto task-2208-refactoring-ordering-service
Signed-off-by: toktar <renata.toktar@dsr-corporation.com>
I'm not opposed to merging this right now, since there was a mass renaming which needs to be merged, but afterwards the problem with quorums should be addressed ASAP, in my opinion.
```diff
@@ -23,7 +23,7 @@ class ConsensusSharedData:
     TODO: Restore primary name from audit ledger instead of passing through constructor
     """

-    def __init__(self, name: str, validators: List[str], inst_id: int, is_master: bool = True):
+    def __init__(self, name: str, validators: List[str], inst_id: int, is_master: bool = True, quorums: Quorums = None):
```
Why do we explicitly pass quorums when it can (and in my opinion should) be calculated from the validators list?
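A minimal sketch of what the reviewer seems to be suggesting: derive the quorums from the validators list inside `ConsensusSharedData` instead of accepting a separate `Quorums` argument. The `Quorums` class below is a simplified stand-in (plenum's real `Quorums` computes many named quorum sizes), and `set_validators` is a hypothetical helper, not the project's actual API.

```python
from typing import List


class Quorums:
    """Simplified stand-in: quorum sizes for an n-node BFT pool, f = (n - 1) // 3."""

    def __init__(self, n: int):
        self.n = n
        self.f = (n - 1) // 3
        self.weak = self.f + 1        # f + 1: contains at least one honest node
        self.strong = n - self.f      # n - f nodes


class ConsensusSharedData:
    def __init__(self, name: str, validators: List[str], inst_id: int,
                 is_master: bool = True):
        self.name = name
        self.inst_id = inst_id
        self.is_master = is_master
        self.set_validators(validators)

    def set_validators(self, validators: List[str]):
        # Recompute quorums from the validator list, so the two
        # can never drift out of sync.
        self.validators = validators
        self.quorums = Quorums(len(validators))
```

With this shape, updating the validators is the single entry point that keeps the quorums consistent, which is the property the review thread keeps returning to.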
plenum/server/node.py (outdated)
```diff
@@ -829,7 +815,7 @@ def setPoolParams(self):
     self.f = getMaxFailures(self.totalNodes)
     self.requiredNumberOfInstances = self.f + 1  # per RBFT
     self.minimumNodes = (2 * self.f) + 1  # minimum for a functional pool
-    self.quorums = Quorums(self.totalNodes)
+    self.quorums.update_quorum(self.totalNodes)
```
Did I understand correctly that this is done in order to make the quorums inside ConsensusSharedData update automatically? If so, I think this is quite a dangerous change, because it is not apparent from the code that this quorums object is actually shared. Moreover, ConsensusSharedData is no longer self-contained, which can become a problem if we start moving replicas into different processes as RBFT requires. Yet another problem is that the quorums inside ConsensusSharedData are now updated out of sync with the validator node set.
In my opinion the correct way to do this is to let ConsensusSharedData contain its own quorums object and update it when the set of validators changes. This piece of code should explicitly update the validators set in ConsensusSharedData and probably send a ValidatorsChanged notification message.
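A hedged sketch of the alternative proposed here: the node publishes a `ValidatorsChanged` message on an internal bus, and `ConsensusSharedData` updates its own state in response, rather than aliasing a shared `Quorums` object. The names `InternalBus` and `ValidatorsChanged` mirror plenum-style internal messaging but are illustrative, not the project's exact API; `quorum_n` stands in for a full quorums object.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Type


@dataclass(frozen=True)
class ValidatorsChanged:
    validators: List[str]


class InternalBus:
    """Tiny publish/subscribe bus keyed by message type."""

    def __init__(self):
        self._subs: Dict[Type, List[Callable]] = {}

    def subscribe(self, msg_type: Type, handler: Callable):
        self._subs.setdefault(msg_type, []).append(handler)

    def send(self, msg):
        for handler in self._subs.get(type(msg), []):
            handler(msg)


class ConsensusSharedData:
    def __init__(self, validators: List[str]):
        self.validators = validators
        self.quorum_n = len(validators)  # stand-in for a full Quorums object

    def on_validators_changed(self, msg: ValidatorsChanged):
        # Self-contained update: recompute local quorum state from the
        # message instead of sharing a mutable object with the node.
        self.validators = msg.validators
        self.quorum_n = len(msg.validators)
```

Because the update travels as an explicit message, replicas could later live in separate processes (as RBFT requires) with the bus swapped for an inter-process channel, and the validator set and quorums always change together.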
```python
self._consensus_data.node_mode = self.node.mode
self._consensus_data.primaries_batch_needed = self.node.primaries_batch_needed
self._consensus_data.quorums = self.quorums
```
We shouldn't be directly updating the quorums object inside the consensus data.
```diff
@@ -15,7 +15,6 @@ def msg(replica):

 def test_unstash_catchup(replica, msg):
     pre_prepare, _ = msg
-    replica.set_view_no(pre_prepare.viewNo + 1)
```
Why remove this?
Because we do not need to change the view_no.
Fix for this comment - #1280 (comment)
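To make the stash/unstash behavior under discussion concrete, here is an illustrative toy model, not plenum's actual Replica API: messages that arrive for a future view are stashed and replayed once the replica catches up, which is the scenario `test_unstash_catchup` exercises without needing to bump `view_no` manually.

```python
class MiniReplica:
    """Toy replica: stashes messages for a future view, replays after catch-up."""

    def __init__(self, view_no: int = 0):
        self.view_no = view_no
        self.stashed = []   # (view_no, payload) pairs waiting for catch-up
        self.ordered = []   # payloads accepted for ordering

    def receive(self, msg_view_no: int, payload: str):
        if msg_view_no > self.view_no:
            # Message belongs to a future view: stash it for later.
            self.stashed.append((msg_view_no, payload))
        else:
            self.ordered.append(payload)

    def catchup_till(self, view_no: int):
        # After catch-up the replica adopts the new view and
        # re-processes everything it stashed.
        self.view_no = view_no
        pending, self.stashed = self.stashed, []
        for v, payload in pending:
            self.receive(v, payload)
```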
Signed-off-by: toktar <renata.toktar@dsr-corporation.com>
Force-pushed from cfa5db4 to cceb702.
- remove `l_update_watermark_from_3pc` from Ordering Service
- simplify `primaries_batch_needed` logic
- remove `requestQueues` from Shared Data
- remove `to_nodes` from Replica's `send`
- fix `_init_internal_bus` subscribed twice
- remove the "l_" prefix from OrderingService's methods
- clean up internal messages (remove unused ones)
- fixes in `_bootstrap_consensus_data` (get rid of doing actions twice)
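The "`_init_internal_bus` subscribed twice" item is worth illustrating, since double subscription is an easy bug to miss. The sketch below is a generic reconstruction of that class of bug (the names `Bus`, `Service`, and the guard flag are hypothetical, not plenum's code): if an init method runs twice, every handler is registered twice and each message is processed twice; guarding the subscription restores exactly-once handling.

```python
from collections import defaultdict


class Bus:
    """Minimal topic-based bus for the demonstration."""

    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.handlers[topic].append(handler)

    def send(self, topic, msg):
        for handler in self.handlers[topic]:
            handler(msg)


class Service:
    def __init__(self, bus: Bus):
        self.bus = bus
        self.received = []
        self._bus_initialized = False
        self._init_internal_bus()
        self._init_internal_bus()  # accidental second call, as in the bug

    def _init_internal_bus(self):
        if self._bus_initialized:
            # Guard: without this check, self.received.append would be
            # registered twice and every message would arrive twice.
            return
        self._bus_initialized = True
        self.bus.subscribe("ordered", self.received.append)
```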