
Remove reference to global config in monitor.go #251

Merged
2 commits merged into master on Feb 18, 2020

Conversation

@dadrian (Member) commented Feb 18, 2020

This updates MakeMonitor() to take the channel size as a parameter,
instead of reading it from the global config object. Unfortunately,
the caller of MakeMonitor() doesn't actually have access to the global,
since it's in a different package (bin vs the root package). Luckily,
there doesn't appear to be a reason to have a buffer in this channel.
This updates the caller to pass a hardcoded size of 1.
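A minimal sketch of the change, using hypothetical type and field names (Monitor, moduleStatus, statusesChan) rather than the exact identifiers from the diff; the real definitions live in monitor.go:

```go
package zgrab2

// moduleStatus is a stand-in for whatever per-scan status the workers report.
type moduleStatus struct {
	name    string
	success bool
}

// Monitor aggregates statuses delivered over statusesChan.
type Monitor struct {
	statusesChan chan moduleStatus
	successes    map[string]int
}

// MakeMonitor now takes the channel size from its caller instead of reading
// config.Senders from the global config object.
func MakeMonitor(statusChanSize int) *Monitor {
	m := &Monitor{
		statusesChan: make(chan moduleStatus, statusChanSize),
		successes:    make(map[string]int),
	}
	go func() {
		for s := range m.statusesChan {
			if s.success {
				m.successes[s.name]++
			}
		}
	}()
	return m
}
```

The caller in the bin package, which cannot see the root package's global config, would then call something like zgrab2.MakeMonitor(1).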

Notes & Caveats

@codyprime what do you think about channel size here?

@codyprime (Member) commented

> @codyprime what do you think about channel size here?

Generally, I think it is best to have as small a channel size as needed (a larger buffer seems more likely to mask a sync issue, e.g. a case where an empty buffer would cause a deadlock), so I like this change. In this particular case, statusesChan has a writer that is called within a worker goroutine.

I don't see any comments or commit messages in the git history that explain why it is config.Senders*4, so I don't know if there was any reasoning behind a channel depth 4 times larger than the number of workers (that seems rather specific?).

I guess one test would be to see whether it affects throughput, which I could see happening if we end up with workers waiting on the monitor goroutine.
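As a toy illustration of the throughput concern (not project code): with a small buffer, a worker that reports a status blocks until the monitor goroutine has drained the channel.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	statuses := make(chan string, 1) // buffer of 1, as proposed in this PR

	// A deliberately slow "monitor" consumer.
	go func() {
		for s := range statuses {
			time.Sleep(10 * time.Millisecond) // pretend bookkeeping is slow
			fmt.Println("monitor saw:", s)
		}
	}()

	// A "worker" producer: once the buffer is full, each send waits on the monitor.
	start := time.Now()
	for i := 0; i < 5; i++ {
		statuses <- fmt.Sprintf("scan-%d", i)
	}
	fmt.Println("worker spent", time.Since(start), "handing off statuses")

	close(statuses)
	time.Sleep(100 * time.Millisecond) // give the monitor time to finish printing
}
```

If the real monitor's bookkeeping is as cheap as it appears, the hand-off time should stay negligible even with a buffer of 1.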

@codyprime (Member) left a review comment

The only concern I have is whether a single monitor goroutine acts as a bottleneck. Given how lightweight the monitor is, I suspect it does not. Have you done any testing with a large set of hosts?

The goroutine running the monitor isn't actually closed. This PR updates the API to allow that goroutine to properly block program exit. This can be leveraged as we continue to make the configuration non-global.
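A minimal sketch of that pattern (toy names, not the project's actual API): the caller waits on a sync.WaitGroup so the monitor goroutine can drain its channel before the program exits.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	statuses := make(chan string, 1)

	wg.Add(1)
	go func() { // the "monitor" goroutine
		defer wg.Done()
		for s := range statuses {
			fmt.Println("status:", s)
		}
	}()

	statuses <- "scan finished"
	close(statuses) // signal the monitor to stop
	wg.Wait()       // block program exit until the monitor has drained everything
}
```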
@dadrian (Member, Author) commented Feb 18, 2020

@codyprime I have not tested against a large number of hosts yet. I had the same worry as you, but it should be pretty fast. I think having a buffer size of 1 effectively forces the monitor goroutine to be scheduled before the next scan finishes.

@codyprime (Member) left a review comment

The Monitor WaitGroup addition (from commit 178d984) looks good to me as well.

@dadrian merged commit ef33737 into master on Feb 18, 2020
@dadrian deleted the dadrian/no-global-monitor branch on Feb 18, 2020 at 20:46