Changed goroutine model of gossip server-side #1211

Merged
21 commits merged into master from feature/goroutine-per-topic on Jul 8, 2019

Conversation

electricmonk (Contributor)

Before this change, DirectTransport had a goroutine per connection, and message handling occurred on that goroutine, effectively blocking further communication from that peer while a message was being processed.

This PR creates a goroutine per gossip topic; the connection goroutines write to the topic goroutines via buffered channels. This essentially serializes all messages from all peers onto a single goroutine per topic, but it frees the connection goroutines and provides QoS per topic.
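A minimal sketch of this dispatch model, for illustration only: the names below (gossipMessage fields, topicDispatcher, runTopic, dispatch), the int topic key, and the buffer size are assumptions, not the PR's actual identifiers.

package gossip

// gossipMessage pairs a message header with its raw payloads (illustrative shape).
type gossipMessage struct {
    header   []byte
    payloads [][]byte
}

// topicDispatcher holds one buffered channel per gossip topic.
type topicDispatcher map[int]chan gossipMessage

// newTopicDispatcher creates a buffered channel for each topic.
func newTopicDispatcher(topics []int, bufferSize int) topicDispatcher {
    d := make(topicDispatcher)
    for _, t := range topics {
        d[t] = make(chan gossipMessage, bufferSize)
    }
    return d
}

// runTopic starts the single consumer goroutine for one topic; a slow handler
// only backs up this topic's buffer, not other topics.
func (d topicDispatcher) runTopic(topic int, handle func(gossipMessage)) {
    go func() {
        for msg := range d[topic] {
            handle(msg)
        }
    }()
}

// dispatch is called from a connection goroutine; it blocks only if this
// topic's buffer is full.
func (d topicDispatcher) dispatch(topic int, msg gossipMessage) {
    d[topic] <- msg
}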

In addition, we create a one-off goroutine per Block Sync request, so that scanning blocks or reading chunks from disk will not block the Block Sync topic goroutine.
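Sketched continuation of the example above (reusing its gossipMessage type); the handler signature is hypothetical, but it shows the hand-off: the Block Sync topic goroutine spawns a short-lived goroutine per request instead of doing the disk work inline.

// runBlockSyncTopic consumes the Block Sync topic channel; each request is
// handled on its own one-off goroutine so block scans and disk reads do not
// stall the topic goroutine.
func runBlockSyncTopic(ch <-chan gossipMessage, handleRequest func(gossipMessage)) {
    for msg := range ch {
        msg := msg // capture the loop variable for the goroutine (pre-Go 1.22 semantics)
        go handleRequest(msg)
    }
}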

The Lean Helix topic is not expected to be blocked, as Lean Helix will have its own goroutine that reads from the topic.

// and Transaction Relay shouldn't block for long anyway.
func makeMessageDispatcher() (d gossipMessageDispatcher) {
    d = make(gossipMessageDispatcher)
    d[gossipmessages.HEADER_TOPIC_TRANSACTION_RELAY] = make(chan gossipMessage, 10)
Contributor

I suggest making the buffer size configurable. We don't know if 10 is the optimal configuration.

electricmonk (Contributor Author)

YAGNI, I think.

        return
    }

    ch <- gossipMessage{header: header, payloads: payloads} // TODO should the channel have *gossipMessage as type?
Contributor

Is there a possibility of a deadlock here if too many messages were sent into the buffered channel?

electricmonk (Contributor Author)

I don't think so. The worst-case scenario is that the transport goroutine is blocked on the channel.
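To illustrate the point being made here (an assumption about the code's shape, not a quote from it): a plain blocking send on a full buffered channel stalls the sending connection goroutine until the topic goroutine drains a slot, which is back-pressure rather than a deadlock; a deadlock would additionally require the topic goroutine to be waiting on that same connection.

// Blocking send: stalls while len(ch) == cap(ch), resumes when the consumer reads.
func dispatchBlocking(ch chan<- gossipMessage, msg gossipMessage) {
    ch <- msg
}

// Contrasting alternative (not what this PR does): a non-blocking send that
// reports a full buffer instead of stalling the connection goroutine.
func dispatchOrDrop(ch chan<- gossipMessage, msg gossipMessage) bool {
    select {
    case ch <- msg:
        return true
    default:
        return false // buffer full; caller may drop or count the message
    }
}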

@noambergIL (Contributor)

As far as I see, it looks good.

@gadcl (Contributor) left a comment

Looks good. Two suggestions: monitor the channel buffer, and add a comment on the component responsible for the non-blocking-incoming assumption.

@electricmonk (Contributor Author)

@gadcl what do you mean, "monitoring"? are you requesting that we add something?

@gadcl (Contributor) commented Jul 7, 2019

> @gadcl what do you mean, "monitoring"? are you requesting that we add something?

We discussed (as suggested by Eran) the option of adding a len(ch) vs. cap(ch) metric to monitor buffer usage.
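A sketch of the kind of metric being discussed, assuming the time package is imported and reusing the gossipMessage type from the earlier sketch; the report callback is a stand-in for whatever metrics registry the project actually uses.

// monitorTopicBuffer periodically samples a topic channel's occupancy so a
// dashboard can compare len(ch) against cap(ch). The sampling goroutine runs
// until the process exits; a real version would take a shutdown signal.
func monitorTopicBuffer(ch chan gossipMessage, interval time.Duration, report func(used, capacity int)) {
    go func() {
        ticker := time.NewTicker(interval)
        defer ticker.Stop()
        for range ticker.C {
            report(len(ch), cap(ch))
        }
    }()
}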

    case gossipmessages.HEADER_TOPIC_BENCHMARK_CONSENSUS:
        return d.benchmarkConsensus, nil
    default:
        return nil, errors.Errorf("no message channel for topic", log.Int("topic", int(topic)))


printf: Errorf call has arguments but no formatting directives (from govet)
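One possible fix for this govet finding, assuming github.com/pkg/errors is the errors package in use here; the lookup helper and its map-of-int shape are purely illustrative, and only the Errorf line is the point.

// channelForTopic is a hypothetical helper; it demonstrates the corrected
// error construction with a %d directive, which satisfies govet's printf check.
func channelForTopic(channels map[int]chan gossipMessage, topic int) (chan gossipMessage, error) {
    if ch, ok := channels[topic]; ok {
        return ch, nil
    }
    return nil, errors.Errorf("no message channel for topic %d", topic)
}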

@electricmonk (Contributor Author)

Added manual draining of topics on shutdown; this seems to resolve the memory issue. Not ideal and probably not needed, but it should act as a workaround until we decide whether to retain the memory leak test (a sketch of the drain follows below).

cc @gadcl @noambergIL @IdoZilberberg
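A minimal sketch of what draining a topic channel on shutdown could look like; the function name and the non-blocking drain loop are assumptions rather than the PR's exact code.

// drainTopic empties a topic's buffered channel without blocking, so queued
// messages no longer hold references after shutdown.
func drainTopic(ch chan gossipMessage) {
    for {
        select {
        case <-ch:
            // discard the buffered message
        default:
            return // buffer is empty
        }
    }
}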

@electricmonk merged commit 96985e6 into master on Jul 8, 2019
@electricmonk deleted the feature/goroutine-per-topic branch on July 8, 2019 at 14:32