Added possibility to change number of vm's for MultipleVmsTest #356
Merged: nmeyerhans merged 1 commit into firecracker-microvm:master from Zyqsempai:343-increase-number-of-vms on Dec 6, 2019
I am also curious how many VMs we are actually running in parallel, given the StopVM call. We may need to call StopVM at a later time and have the VMs run some process to ensure we are getting an accurate measure of concurrency. And since 100 is a pretty large number, I'm worried that a lot of the VMs are exiting before we can test the true bottleneck of what runs in parallel. My guess based on the timing is that we are only running 15-20 or so VMs in parallel. Thoughts on this, @sipsma?
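The concern above is measurable. A minimal sketch of how a test could track true peak concurrency, assuming nothing about firecracker-containerd's actual test code: each goroutine stands in for a VM's lifetime, and an atomic counter records the highest number alive at once.

```go
// Hypothetical sketch, not the repository's real test code: measure the peak
// number of concurrently running "VMs" by bracketing each one's lifetime
// with enter/exit calls on a shared tracker.
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

// peakTracker records the highest number of tasks running at the same time.
type peakTracker struct {
	cur  int64
	peak int64
}

func (p *peakTracker) enter() {
	n := atomic.AddInt64(&p.cur, 1)
	for {
		old := atomic.LoadInt64(&p.peak)
		if n <= old || atomic.CompareAndSwapInt64(&p.peak, old, n) {
			return
		}
	}
}

func (p *peakTracker) exit() { atomic.AddInt64(&p.cur, -1) }

// runAll launches numVMs stand-in VMs and returns the observed peak
// concurrency. The sleep is a placeholder for the VM's real workload.
func runAll(numVMs int) int64 {
	var pt peakTracker
	var wg sync.WaitGroup
	for i := 0; i < numVMs; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			pt.enter()
			defer pt.exit()
			time.Sleep(20 * time.Millisecond)
		}()
	}
	wg.Wait()
	return atomic.LoadInt64(&pt.peak)
}

func main() {
	fmt.Println("peak concurrency:", runAll(100))
}
```

If the reported peak is well below the configured VM count, that would confirm the suspicion that VMs are exiting before the test exercises real parallelism.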
I don't think this is meant to be a true scale test, so we don't really need to max out the host capacity. AFAICT, 100 was chosen pretty much arbitrarily, and it is an upper bound on the number of VMs that will run concurrently, not necessarily a goal. Even if we're only running ~20 VMs at any one time, that's still more than we're running today. If we assume that more VMs means more likelihood of catching concurrency-related bugs, then a 3-4x increase in the number of running VMs sounds good to me.
At some point in the future we will need to be performing actual scale tests, but this PR doesn't need to be that.
This very well may be the case, but I agree with Noah that it's fine if so. We are primarily looking to find problems that occur as we actually perform actions on/in the VM (spinning containers up and down, copying I/O, etc.). So even if we modified the test case to wait to call StopVM until the end, we would likely just end up with a bunch of VMs sitting around doing nothing for a while, which doesn't really improve coverage IMO. I agree, though, that if we want scale tests in the future we'd need to be more careful about issues like this.
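The "wait to call StopVM at the end" alternative discussed above can be sketched as a two-phase test: start every VM, run all workloads while everything is alive, and only then stop them, so peak concurrency equals the VM count by construction. The fakeVM type and its methods are stand-ins, not the real fccontrol client API.

```go
// Hypothetical two-phase sketch: all VMs stay alive until every workload
// has finished, deferring the equivalent of StopVM to the end of the test.
package main

import (
	"fmt"
	"sync"
)

// fakeVM stands in for a real VM handle; an actual test would drive
// CreateVM/StopVM through the service client instead.
type fakeVM struct{ id int }

func startVM(id int) *fakeVM      { return &fakeVM{id: id} }
func (v *fakeVM) runWorkload()    {} // e.g. run a container, copy I/O
func (v *fakeVM) stop()           {} // stand-in for StopVM

// runTwoPhase starts numVMs VMs, runs their workloads concurrently, and
// stops every VM only after all workloads complete. It returns the number
// of VMs that were simultaneously alive.
func runTwoPhase(numVMs int) int {
	vms := make([]*fakeVM, 0, numVMs)
	for i := 0; i < numVMs; i++ {
		vms = append(vms, startVM(i))
	}

	var wg sync.WaitGroup
	for _, vm := range vms {
		wg.Add(1)
		go func(v *fakeVM) {
			defer wg.Done()
			v.runWorkload()
		}(vm)
	}
	wg.Wait() // every workload finishes before any VM is stopped

	for _, vm := range vms {
		vm.stop()
	}
	return len(vms)
}

func main() {
	fmt.Println("VMs alive simultaneously:", runTwoPhase(100))
}
```

As the comment notes, the trade-off is that idle VMs waiting on their peers add wall-clock time without exercising new code paths, which is why the thread settles on the simpler approach for this PR.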
Ah, I was under the impression we wanted to actually test load. In that case, this looks fine.