
[DONE] Fix concurrently closing and writing to the same chan #35

Merged
rcmgleite merged 1 commit into master from bugfix/concurrency-chan-close-on-client on Aug 8, 2018

Conversation

@rcmgleite (Contributor) commented Aug 8, 2018

Problem

The Go client spawns two goroutines: handlePackets and pendingRequestsReaper.

Both of them write to client.IncomingMsgChan. The problem is that the client.Disconnect method was closing IncomingMsgChan, and the handlePackets and pendingRequestsReaper goroutines had no way of knowing that this had happened. When either of these two goroutines then tries to write to the closed channel, the application panics.
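To make the failure concrete, here is a minimal, self-contained reproduction of the race (the names below are illustrative, not the client's actual code): one goroutine keeps sending on a channel while another goroutine closes it, and the program dies with a "send on closed channel" panic.

    package main

    import "time"

    func main() {
        incoming := make(chan int)

        // Consumer: drains the channel until it is closed.
        go func() {
            for range incoming {
            }
        }()

        // Writer: stands in for handlePackets / pendingRequestsReaper.
        go func() {
            for i := 0; ; i++ {
                incoming <- i // panics with "send on closed channel" after the close below
                time.Sleep(time.Millisecond)
            }
        }()

        time.Sleep(10 * time.Millisecond)
        close(incoming) // what Disconnect used to do with IncomingMsgChan
        time.Sleep(10 * time.Millisecond)
    }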

Solution

I'm not completely sure that simply not closing IncomingMsgChan is the correct solution, but articles like this one state a 'rule' that a channel should only be closed by its writer (sender) goroutine, never by a receiver. Since IncomingMsgChan has two writers, no single goroutine can safely close it, so this PR just stops closing it in Disconnect.
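As a minimal sketch of that rule in the single-producer case (names made up for illustration): the sending goroutine closes the channel when it is done, and the receiver simply ranges until then.

    package main

    import "fmt"

    // produce is the only writer, so it is the one that closes the channel.
    func produce(out chan<- int, n int) {
        for i := 0; i < n; i++ {
            out <- i
        }
        close(out) // safe: closed by the (sole) sender, never by a receiver
    }

    func main() {
        ch := make(chan int)
        go produce(ch, 3)
        for v := range ch { // the range loop ends cleanly when the sender closes
            fmt.Println(v)
        }
    }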

@coveralls commented Aug 8, 2018

Pull Request Test Coverage Report for Build 347

  • 0 of 0 changed or added relevant lines in 0 files are covered.
  • 2 unchanged lines in 1 file lost coverage.
  • Overall coverage increased (+0.02%) to 75.335%

Files with coverage reduction:

  File             New missed lines   Coverage
  agent/agent.go   2                  81.25%

Totals:

  Change from base Build 338: +0.02%
  Covered lines: 3537
  Relevant lines: 4695

💛 - Coveralls

@henrod (Contributor) commented Aug 8, 2018

That way won't you leak channels?

@rcmgleite (Contributor, Author) commented Aug 8, 2018

@henrod is there a simple way to verify that? I tried monitoring memory usage while running the client with this fix and everything seemed normal.

I also found some articles stating that the GC reclaims unreferenced channels on behalf of the application, so an unclosed channel shouldn't linger once nothing references it.
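For what it's worth, one rough check (a sketch, not the project's test code; doCycle and its channels are hypothetical stand-ins for creating a client, connecting and calling Disconnect) is to compare runtime.NumGoroutine() and heap usage before and after many connect/disconnect-style cycles:

    package main

    import (
        "fmt"
        "runtime"
        "time"
    )

    // doCycle is a hypothetical stand-in for one connect/disconnect cycle:
    // it starts a writer goroutine (like handlePackets / pendingRequestsReaper),
    // then "disconnects" by closing closeChan while leaving incoming open.
    func doCycle() {
        closeChan := make(chan struct{})
        incoming := make(chan int, 16) // never closed, like IncomingMsgChan after this PR

        go func() {
            for {
                select {
                case <-closeChan:
                    return // goroutine exits; the channel becomes garbage once unreferenced
                case incoming <- 0:
                }
            }
        }()

        close(closeChan)
    }

    func main() {
        before := runtime.NumGoroutine()
        for i := 0; i < 10000; i++ {
            doCycle()
        }
        time.Sleep(100 * time.Millisecond) // let the spawned goroutines wind down
        runtime.GC()

        var m runtime.MemStats
        runtime.ReadMemStats(&m)
        fmt.Printf("goroutines before/after: %d/%d, heap in use: %d KiB\n",
            before, runtime.NumGoroutine(), m.HeapInuse/1024)
    }

If the goroutine count returns to roughly its starting value and heap usage stays flat across runs, neither goroutines nor channels are accumulating.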

client/client.go (Outdated)

@@ -300,8 +300,6 @@ func (c *Client) sendHeartbeats(interval int) {
 func (c *Client) Disconnect() {
 	if c.Connected {
 		c.Connected = false
-		close(c.closeChan)
@cscatolini (Contributor) commented on this diff:

case <-c.closeChan:

closeChan is actually being used to stop these goroutines:

	go c.handlePackets()
	go c.pendingRequestsReaper()

so by not closing it you will leak those goroutines.
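Roughly, the pattern being described looks like this (a sketch with approximated field and method names, not the actual client code): the background goroutine selects on closeChan, so closing it in Disconnect unblocks the goroutine and lets it return, while IncomingMsgChan itself is never closed.

    package main

    import (
        "fmt"
        "time"
    )

    type Client struct {
        Connected       bool
        closeChan       chan struct{}
        IncomingMsgChan chan string
    }

    // handlePackets stands in for the client's background writers: it stops as
    // soon as closeChan is closed, and it never closes IncomingMsgChan itself.
    func (c *Client) handlePackets() {
        for {
            select {
            case <-c.closeChan:
                return // Disconnect closed closeChan: stop writing and exit
            case c.IncomingMsgChan <- "packet":
                // safe even around Disconnect, because IncomingMsgChan is never closed
            }
        }
    }

    func (c *Client) Disconnect() {
        if c.Connected {
            c.Connected = false
            close(c.closeChan) // unblocks handlePackets / pendingRequestsReaper
            // IncomingMsgChan is intentionally left open; the GC reclaims it later
        }
    }

    func main() {
        c := &Client{
            Connected:       true,
            closeChan:       make(chan struct{}),
            IncomingMsgChan: make(chan string, 16),
        }
        go c.handlePackets()

        fmt.Println(<-c.IncomingMsgChan) // consume one message
        c.Disconnect()
        time.Sleep(10 * time.Millisecond) // give handlePackets time to return
    }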

@felipejfc (Contributor) commented Aug 8, 2018

can't you solve this with a lock in both of the goroutines + the Disconnect() method?
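For reference, a rough sketch of that lock-based alternative (hypothetical names, not the client's fields): Disconnect would take a mutex, flip a closed flag and close the channel, and every writer would take the same mutex and check the flag before sending, so it could never hit a closed channel.

    package main

    import (
        "fmt"
        "sync"
    )

    // guardedChan pairs the channel with a mutex and a closed flag, so writers
    // and the closer can never race.
    type guardedChan struct {
        mu     sync.Mutex
        closed bool
        ch     chan string
    }

    // send reports whether the message was actually sent; it becomes a no-op
    // once close has been called.
    func (g *guardedChan) send(msg string) bool {
        g.mu.Lock()
        defer g.mu.Unlock()
        if g.closed {
            return false
        }
        g.ch <- msg
        return true
    }

    func (g *guardedChan) close() {
        g.mu.Lock()
        defer g.mu.Unlock()
        if !g.closed {
            g.closed = true
            close(g.ch)
        }
    }

    func main() {
        g := &guardedChan{ch: make(chan string, 1)}
        fmt.Println(g.send("hello")) // true
        g.close()
        fmt.Println(g.send("world")) // false: no panic, the send is simply skipped
    }

The catch in this sketch is that the send happens while holding the lock, so the closer can stall behind a blocked writer when the channel is full and nobody is reading; simply never closing IncomingMsgChan sidesteps that.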

@rcmgleite force-pushed the bugfix/concurrency-chan-close-on-client branch from 724c8df to b3bdd5b on August 8, 2018 18:03
@rcmgleite (Contributor, Author) commented Aug 8, 2018

As @cscatolini pointed out, closeChan must still be closed to avoid leaking goroutines. As for IncomingMsgChan, it seems that not closing it is not a problem.

@felipejfc I could. But isn't it simpler this way?

@felipejfc (Contributor):

I guess we could leak channels by not closing them, but in this case that doesn't seem like a reason to hold the change back, since we're only using this client for tests... (depending on how we implement pitaya-bot this may become a source of memory leaks, though)

@rcmgleite (Contributor, Author):

@felipejfc I think channels are cleaned up by the GC, no? And for now we are using this client to implement pitaya-bot; this problem was actually found while developing it.

@felipejfc (Contributor):

After doing some research I guess they are, @rcmgleite. You can merge this.

@rcmgleite merged commit c05b0ec into master on Aug 8, 2018
@rcmgleite deleted the bugfix/concurrency-chan-close-on-client branch on August 8, 2018 18:55