Changed goroutine model of gossip server-side #1211
Conversation
reducing buffer size to 10 because why not :)
services/gossip/service.go
Outdated
// and Transaction Relay shouldn't block for long anyway.
func makeMessageDispatcher() (d gossipMessageDispatcher) {
	d = make(gossipMessageDispatcher)
	d[gossipmessages.HEADER_TOPIC_TRANSACTION_RELAY] = make(chan gossipMessage, 10)
I suggest making the buffer size configurable. We don't know if 10 is the optimal configuration.
yagni, I think
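For reference, a configurable buffer size could look like the sketch below. The types and topic constants here are simplified stand-ins for the real `gossipMessageDispatcher` and `gossipmessages` types in `services/gossip`, and the parameter name is illustrative:

```go
package main

import "fmt"

// Hypothetical stand-ins for the real gossipMessage / topic types.
type gossipMessage struct{ payload string }
type topic int

const (
	topicTransactionRelay topic = iota
	topicBlockSync
)

// makeMessageDispatcher builds one buffered channel per topic.
// bufferSize is taken as a parameter instead of the hard-coded 10,
// so it could be wired to configuration if that ever proves necessary.
func makeMessageDispatcher(bufferSize int) map[topic]chan gossipMessage {
	d := make(map[topic]chan gossipMessage)
	d[topicTransactionRelay] = make(chan gossipMessage, bufferSize)
	d[topicBlockSync] = make(chan gossipMessage, bufferSize)
	return d
}

func main() {
	d := makeMessageDispatcher(10)
	fmt.Println(cap(d[topicTransactionRelay])) // 10
}
```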
services/gossip/service.go
Outdated
	return
}

ch <- gossipMessage{header: header, payloads: payloads} //TODO should the channel have *gossipMessage as type?
Is there a possibility of a deadlock here if too many messages were sent into the buffered channel?
I don't think so - the worst case scenario is that the transport goroutine is blocked on the channel.
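The blocking-but-not-deadlocking behavior can be demonstrated with a small sketch (the function name and sizes are illustrative, not from the repo): a producer, standing in for a transport goroutine, sends more messages than the buffer holds; its sends block transiently, but as long as the topic goroutine keeps draining, everything completes.

```go
package main

import "fmt"

// produceAndConsume sends n messages through a channel with the given
// buffer size. The producer stands in for a transport goroutine: once
// the buffer fills, its sends block until the consumer (the topic
// goroutine) drains, but there is no deadlock while the consumer runs.
func produceAndConsume(n, bufferSize int) int {
	ch := make(chan int, bufferSize)
	go func() {
		for i := 1; i <= n; i++ {
			ch <- i // blocks once the buffer is full
		}
		close(ch)
	}()
	sum := 0
	for v := range ch {
		sum += v
	}
	return sum
}

func main() {
	// 5 messages through a 2-slot buffer: sends 3..5 block transiently.
	fmt.Println(produceAndConsume(5, 2)) // 15
}
```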
As far as I see, it looks good.
looks good.
Suggest monitoring the channel buffer, and adding a comment on the component responsible for the assumption that incoming handling is non-blocking.
@gadcl what do you mean, "monitoring"? Are you requesting that we add something?
We discussed (suggested by Eran) the option of adding a len(ch) vs. cap(ch) metric to monitor buffer usage.
added metrics for queue size and used size
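The len(ch) vs. cap(ch) metric discussed above could be sampled as in this sketch; the function name is illustrative and the actual metric registry API in the repo is not shown here:

```go
package main

import "fmt"

// queueMetrics reports buffer usage for one topic channel: how many
// messages are currently queued (len) out of the total buffer
// capacity (cap). A periodic sampler could feed these two numbers
// into gauges for "used size" and "queue size".
func queueMetrics(ch chan int) (used, size int) {
	return len(ch), cap(ch)
}

func main() {
	ch := make(chan int, 10)
	ch <- 1
	ch <- 2
	used, size := queueMetrics(ch)
	fmt.Printf("queue used %d of %d\n", used, size) // queue used 2 of 10
}
```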
… feature/goroutine-per-topic
…f context is done
services/gossip/dispatcher.go
Outdated
case gossipmessages.HEADER_TOPIC_BENCHMARK_CONSENSUS:
	return d.benchmarkConsensus, nil
default:
	return nil, errors.Errorf("no message channel for topic", log.Int("topic", int(topic)))
printf: Errorf call has arguments but no formatting directives (from govet)
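The govet warning is because the format string has no `%` directive for the extra argument. A fix would be to interpolate the topic into the message; this sketch uses the standard library's `fmt.Errorf` to stay dependency-free (the repo's `errors.Errorf` from github.com/pkg/errors formats the same way), and the helper name is illustrative:

```go
package main

import "fmt"

type headerTopic int

// topicError builds the error with a proper %d directive, which is
// what govet is asking for, instead of passing the topic as an
// unformatted extra argument.
func topicError(topic headerTopic) error {
	return fmt.Errorf("no message channel for topic %d", int(topic))
}

func main() {
	fmt.Println(topicError(7)) // no message channel for topic 7
}
```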
Added manual draining of topics on shutdown, which seems to resolve the memory issue. Not ideal and probably not needed, but it should act as a workaround until we decide whether or not to retain the memory leak test.
Before this change, DirectTransport had a goroutine per connection, and message handling occurred on that goroutine, effectively blocking further communication from that peer. This PR creates a goroutine per gossip topic: the connection goroutines write to the topic goroutines via buffered channels. This serializes all messages from all peers onto a single goroutine per topic, but frees the connection goroutines and provides QoS per topic.
In addition, we create a one-off goroutine per Block Sync request, so that scanning blocks or reading chunks from disk will not block the Block Sync topic goroutine.
It is expected that the Lean Helix topic will not be blocked, as it will have a goroutine that deals with reading from the topic.
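The goroutine-per-topic design described above can be sketched as follows. The type names, handler signature, and shutdown behavior are simplified illustrations, not the actual `services/gossip` API:

```go
package main

import (
	"fmt"
	"sync"
)

type gossipMessage struct {
	topic   string
	payload int
}

// dispatcher fans incoming messages out to one buffered channel per
// topic and runs one long-lived goroutine per topic to handle them.
type dispatcher struct {
	channels map[string]chan gossipMessage
	wg       sync.WaitGroup
}

func newDispatcher(topics []string, buf int, handle func(gossipMessage)) *dispatcher {
	d := &dispatcher{channels: make(map[string]chan gossipMessage)}
	for _, t := range topics {
		ch := make(chan gossipMessage, buf)
		d.channels[t] = ch
		d.wg.Add(1)
		go func() { // one goroutine per topic drains its channel
			defer d.wg.Done()
			for m := range ch {
				handle(m)
			}
		}()
	}
	return d
}

// dispatch is called from the per-connection transport goroutines; it
// blocks only while the topic's buffer is full, so in the common case
// the connection goroutine is freed immediately.
func (d *dispatcher) dispatch(m gossipMessage) {
	d.channels[m.topic] <- m
}

// shutdown closes every topic channel and waits for the topic
// goroutines to finish draining (the "manual draining" workaround).
func (d *dispatcher) shutdown() {
	for _, ch := range d.channels {
		close(ch)
	}
	d.wg.Wait()
}

func main() {
	var mu sync.Mutex
	counts := map[string]int{}
	d := newDispatcher([]string{"transaction-relay", "block-sync"}, 10,
		func(m gossipMessage) {
			mu.Lock()
			counts[m.topic] += m.payload
			mu.Unlock()
		})
	d.dispatch(gossipMessage{"transaction-relay", 1})
	d.dispatch(gossipMessage{"block-sync", 2})
	d.shutdown()
	fmt.Println(counts["transaction-relay"], counts["block-sync"]) // 1 2
}
```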