
feat/core: split up large and deduplicate group messages #1038

Merged
merged 3 commits into maidsafe:async on Jun 9, 2016

Conversation

afck
Contributor

@afck afck commented Jun 7, 2016

This splits up requests and responses into chunks of at most 20 kB to
reduce network latency.

The caching functionality is removed, since hop nodes now only get to
see parts of GetSuccess responses.


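A minimal sketch of the splitting idea, for illustration only; `MessagePart` and `split_into_parts` are made-up names rather than the routing crate's actual types (the real code uses `UserMessagePart`, as seen in the review below):

    const MAX_PART_LEN: usize = 20 * 1024; // 20 kB per part, as described above

    // Hypothetical stand-in for one chunk of a serialised request or response.
    struct MessagePart {
        part_count: u32,
        part_index: u32,
        payload: Vec<u8>,
    }

    // Split a serialised message into parts of at most MAX_PART_LEN bytes.
    fn split_into_parts(serialised: &[u8]) -> Vec<MessagePart> {
        let chunks: Vec<&[u8]> = serialised.chunks(MAX_PART_LEN).collect();
        let part_count = chunks.len() as u32;
        chunks
            .into_iter()
            .enumerate()
            .map(|(index, chunk)| MessagePart {
                part_count,
                part_index: index as u32,
                payload: chunk.to_vec(),
            })
            .collect()
    }

A 50 kB response would then travel as three parts (20 kB + 20 kB + 10 kB) that the recipient reassembles.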

@maidsafe-highfive

@afck: no appropriate reviewer found, use r? to override

@afck
Contributor Author

afck commented Jun 7, 2016

r? @Viv-Rajkumar

@Viv-Rajkumar
Contributor

Review status: 0 of 14 files reviewed at latest revision, 2 unresolved discussions, some commit checks failed.


src/core.rs, line 277 [r2] (raw file):

            stats: Default::default(),
            send_filter: LruCache::with_expiry_duration(Duration::from_secs(60 * 10)),
            user_msg_cache: LruCache::with_expiry_duration(Duration::from_secs(60 * 20)),

why 20mins for this btw? to be consistent with signed_msg_filter?


src/stats.rs, line 62 [r2] (raw file):

            MessageContent::GetNodeNameResponse { .. } => self.msg_get_node_name_rsp += 1,
            MessageContent::Ack(..) => self.msg_ack += 1,
            MessageContent::UserMessagePart { .. } => self.msg_user += 1,

Ouch. Could we maybe look at these stats now getting updated at SignedMessage points (at src/dst auth) rather than at hops?


Comments from Reviewable

@afck
Contributor Author

afck commented Jun 7, 2016

Review status: 0 of 15 files reviewed at latest revision, 2 unresolved discussions.


src/core.rs, line 277 [r2] (raw file):

Previously, Viv-Rajkumar (Viv Rajkumar) wrote…

why 20mins for this btw? to be consistent with signed_msg_filter?

Probably anything between 10 minutes and a day would be fine here. As discussed, let's keep it at 20 minutes for now.
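For context, a rough sketch of the reassembly side this cache supports, assuming parts carry a hash of the whole message plus a part index and count; it uses a plain `HashMap` with stand-in types, whereas core.rs uses the `user_msg_cache` quoted above so that messages whose remaining parts never arrive are dropped after 20 minutes:

    use std::collections::HashMap;

    type MsgHash = u64; // stand-in for the hash identifying the whole message

    // Parts of one user message received so far.
    struct PartialMessage {
        parts: Vec<Option<Vec<u8>>>,
    }

    // Insert one part; return the reassembled message once all parts are in.
    fn add_part(
        cache: &mut HashMap<MsgHash, PartialMessage>,
        hash: MsgHash,
        part_count: usize,
        part_index: usize,
        payload: Vec<u8>,
    ) -> Option<Vec<u8>> {
        let partial = cache.entry(hash).or_insert_with(|| PartialMessage {
            parts: vec![None; part_count],
        });
        partial.parts[part_index] = Some(payload);
        if !partial.parts.iter().all(Option::is_some) {
            return None; // still waiting for more parts
        }
        // All parts received: remove the entry and concatenate the payloads.
        let complete = cache.remove(&hash).expect("entry was just inserted");
        Some(complete.parts.into_iter().flat_map(|part| part.unwrap()).collect())
    }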

src/stats.rs, line 62 [r2] (raw file):

Previously, Viv-Rajkumar (Viv Rajkumar) wrote…

Ouch. Could we maybe look at these stats now getting updated at SignedMessage points (at src/dst auth) rather than at hops?

Done.
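A rough sketch of what that change presumably looks like, with stand-in types (the real variants and counters live in src/stats.rs): the counters are bumped once, where the signed message is handled at its src/dst authority, not in the per-hop path, so a message relayed through several hops is only counted once.

    #[derive(Default)]
    struct Stats {
        msg_ack: usize,
        msg_user: usize,
    }

    enum MessageContent {
        Ack(u64),
        UserMessagePart { part_index: u32, part_count: u32, payload: Vec<u8> },
    }

    impl Stats {
        // Called from the signed-message handler, not from the hop-message handler.
        fn count_routing_message(&mut self, content: &MessageContent) {
            match *content {
                MessageContent::Ack(..) => self.msg_ack += 1,
                MessageContent::UserMessagePart { .. } => self.msg_user += 1,
            }
        }
    }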

Comments from Reviewable

If the source authority is a group, every member used to send the whole
message, which is wasteful for large messages. With this change, all but
one group member will only send a hash instead of the message itself.
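A minimal sketch of that rule, with stand-in types; `XorName` here is just a byte array, and picking whichever member's name sorts first as the one full sender is an assumption made for the sketch, not necessarily the selection logic in core.rs:

    use std::collections::BTreeSet;

    type XorName = [u8; 32]; // stand-in for routing's XorName

    enum Outgoing {
        Full(Vec<u8>), // the whole serialised message
        HashOnly(u64), // just a hash of it
    }

    // Decide what this group member sends: one member sends the full message,
    // every other member sends only the hash.
    fn outgoing_message(our_name: &XorName, group: &BTreeSet<XorName>, msg: Vec<u8>) -> Outgoing {
        if group.iter().next() == Some(our_name) {
            Outgoing::Full(msg)
        } else {
            Outgoing::HashOnly(hash(&msg))
        }
    }

    fn hash(bytes: &[u8]) -> u64 {
        use std::collections::hash_map::DefaultHasher;
        use std::hash::{Hash, Hasher};
        let mut hasher = DefaultHasher::new();
        bytes.hash(&mut hasher);
        hasher.finish()
    }

The destination then needs the full copy from exactly one member and matching hashes from enough of the others, which is what the accumulator discussion below is about.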
@afck afck changed the title feat/core: split up large messages feat/core: split up large and deduplicate group messages Jun 8, 2016
@Viv-Rajkumar
Contributor

Reviewed 2 of 12 files at r1, 2 of 2 files at r2, 1 of 3 files at r3, 2 of 6 files at r5.
Review status: 7 of 15 files reviewed at latest revision, 4 unresolved discussions.


src/core.rs, line 197 [r5] (raw file):

    received_acks: MessageFilter<u64>,
    bucket_filter: MessageFilter<usize>,
    message_accumulator: Accumulator<RoutingMessage, (sign::PublicKey, RoutingMessage)>,

why not make RoutingMessage an Option here?


src/core.rs, line 877 [r5] (raw file):

        }
        let key = *public_id.signing_public_key();
        let hash_msg = if let Ok(hash_msg) = message.to_hash() {

Am prolly missing something here, but when this fn is called at the dst, it's gonna have 1 RoutingMsg which is the full msg, and the remaining grp members are already sending it the hash msg, right? So why are we trying to create a new HashMsg for everyone here?


src/core.rs, line 883 [r5] (raw file):

            return None;
        };
        if let Some(values) = self.message_accumulator.add(hash_msg, (key, message.clone())) {

Why even choose to add (key, message.clone()) instead of, say, (key, None) for all but one grp member, as mentioned above in the type definition? That way we can prolly just iter() the values, see if we have any RoutingMessage, and return it, right?
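A rough sketch of that suggestion, with stand-in types (`Sender`, `Message` and `QUORUM` are placeholders; the real `Accumulator` is a separate type used by core.rs): only one group member contributes `Some(message)`, the rest contribute `None`, and once enough senders have been heard from the values are scanned for the single full message.

    use std::collections::HashMap;

    const QUORUM: usize = 5; // placeholder group quorum size

    type Sender = u64;      // stand-in for sign::PublicKey
    type Message = Vec<u8>; // stand-in for RoutingMessage

    #[derive(Default)]
    struct Accumulator {
        entries: HashMap<u64, Vec<(Sender, Option<Message>)>>,
    }

    impl Accumulator {
        // Accumulate one vote under the message hash; return the full message
        // once a quorum of group members has been heard from. A real
        // implementation would also deduplicate senders and verify signatures.
        fn add(&mut self, hash: u64, sender: Sender, msg: Option<Message>) -> Option<Message> {
            let values = self.entries.entry(hash).or_insert_with(Vec::new);
            values.push((sender, msg));
            if values.len() < QUORUM {
                return None;
            }
            // The full message, if it has arrived, is the single Some(..) entry.
            values.iter().find_map(|(_, m)| m.clone())
        }
    }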


Comments from Reviewable

Messages critical to data relocation have a high priority to prevent
data loss.
Messages that mutate data have a medium priority to protect consensus.
Client `Get` requests have the lowest priority.

This means that under extreme load, if network bandwidth is
insufficient, the network will become temporarily unresponsive instead
of breaking consensus or losing data.
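A minimal sketch of that ordering, with placeholder variants; the actual message types and numeric priorities in routing may differ:

    // Higher value = higher priority when bandwidth is scarce.
    enum UserMessage {
        Refresh, // data relocation / churn handling
        Put,     // mutating request
        Post,    // mutating request
        Delete,  // mutating request
        Get,     // client read
    }

    fn priority(msg: &UserMessage) -> u8 {
        match *msg {
            // High: losing these could lose data during relocation.
            UserMessage::Refresh => 2,
            // Medium: mutations must not be dropped, or consensus could diverge.
            UserMessage::Put | UserMessage::Post | UserMessage::Delete => 1,
            // Lowest: a client Get can simply be retried later.
            UserMessage::Get => 0,
        }
    }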
@Viv-Rajkumar
Contributor

Reviewed 3 of 12 files at r1, 2 of 6 files at r5, 3 of 3 files at r6.
Review status: all files reviewed at latest revision, all discussions resolved.


Comments from Reviewable

@Viv-Rajkumar Viv-Rajkumar merged commit 56214f2 into maidsafe:async Jun 9, 2016