This repository has been archived by the owner on Nov 30, 2019. It is now read-only.
The Pastry paper makes no mention of duplicating information to nearby Nodes, though some applications the authors built on top of Pastry (PAST, for example) reference the ability to store redundant copies of information in nearby nodes.
I'd propose adding a new Duplicate field to the message body. There are three ways I could see this working:
1. When a message is received with the Duplicate field set to n, where n is an integer above 0, the receiving Node dispatches a copy of the message (with the Duplicate field set to 0) to the n Nodes on either side of it in its leaf set. So if a message arrived with the Duplicate field set to 4, the message would be mirrored to the 8 Nodes closest to the receiving Node.
2. When a message is received with the Duplicate field set to true, the receiving Node checks its redundancy configuration (set with SetRedundancy) and uses that as n, copying the message to the n Nodes on either side of it in its leaf set.
3. A combination: the Duplicate field determines the number of Nodes to store the information on, but the redundancy configuration on the Node can set a maximum value.
Option 1 allows for a message to determine its priority on a sliding scale. It could be important enough to store redundantly, but not necessarily important enough to copy to the entire leaf set. Or it could be important enough to copy to the entire leaf set. It's up to the sender to decide that.
Option 2 allows a Node to determine its level of redundancy. A message either needs to be stored redundantly or it does not; there is no sliding scale here. This allows for Nodes to know more about the number of connections they'll be opening when they receive a message like this. I'm not sure what benefit this offers, really, except that it limits the damage a rogue Node in the cluster could do.
Option 3 kind of allows both: a message still gets a say in how important it is, but the Node can lock down how many connections it opens for redundancy messages.
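To make Option 3 concrete, here's a minimal sketch. The types and names here (Message, Node, mirrorTargets, the string leaf set) are placeholders for illustration, not the library's actual API; the only point is the capping logic and the "n on either side" selection:

```go
package main

import "fmt"

// Message sketches the proposed message body; Duplicate is the new field.
type Message struct {
	Duplicate int
	Payload   []byte
}

// Node sketches just the pieces relevant to this proposal: a leaf set
// ordered by position in the ID space, and the cap set via SetRedundancy.
type Node struct {
	leafSet       []string // hypothetical node IDs, ordered in the ID space
	maxRedundancy int
}

func (n *Node) SetRedundancy(max int) { n.maxRedundancy = max }

// mirrorTargets implements Option 3: the message asks for msg.Duplicate
// Nodes on either side of the receiving Node, but the Node's own
// redundancy configuration caps that value.
func (n *Node) mirrorTargets(msg Message, selfIndex int) []string {
	d := msg.Duplicate
	if d > n.maxRedundancy {
		d = n.maxRedundancy
	}
	var targets []string
	for i := selfIndex - d; i <= selfIndex+d; i++ {
		if i == selfIndex || i < 0 || i >= len(n.leafSet) {
			continue
		}
		targets = append(targets, n.leafSet[i])
	}
	return targets
}

func main() {
	node := &Node{leafSet: []string{"A", "B", "C", "D", "E", "F", "G"}}
	node.SetRedundancy(2)
	// The message asks for 4 Nodes on either side; the Node caps it at 2.
	fmt.Println(node.mirrorTargets(Message{Duplicate: 4}, 3)) // prints [B C E F]
}
```

Option 1 falls out of the same sketch by removing the cap, and Option 2 by ignoring msg.Duplicate and always using maxRedundancy.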
Thoughts? Feedback? Anyone feel strongly about this, either way?
Redundancy can also be achieved by applications using the OnForward callback to store copies of the information as it traverses the cluster, but that doesn't carry guarantees quite as strong. If a message is delivered in a single hop, that information now lives only on a single Node, which makes it volatile. This proposal would let applications create a strong guarantee about the redundancy of their information.
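For comparison, the OnForward workaround described above looks roughly like this. The type and method names (Message, NodeID, the OnForward signature) are assumptions for illustration, not the library's actual interface:

```go
package main

import "fmt"

// NodeID is a stand-in for the library's node identifier type.
type NodeID string

// Message is a stand-in for the routed message type.
type Message struct {
	Key     string
	Payload []byte
}

// storingApp stores a copy of every message it sees in transit.
type storingApp struct {
	store map[string][]byte
}

// OnForward is assumed to be invoked at each hop as the message
// traverses the cluster; the application opportunistically keeps a
// copy. Returning true allows routing to continue.
func (a *storingApp) OnForward(msg *Message, next NodeID) bool {
	a.store[msg.Key] = msg.Payload
	return true
}

func main() {
	app := &storingApp{store: map[string][]byte{}}
	// A single-hop delivery only triggers OnForward once, so exactly one
	// copy exists in the cluster: the weakness this proposal addresses.
	app.OnForward(&Message{Key: "k", Payload: []byte("v")}, NodeID("target"))
	fmt.Println(len(app.store)) // prints 1
}
```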
Personally, I don't have a solid use case for this. So while it may be something others are interested in, I'm going to hold off on implementing it until there's a clear reason for doing so.