
Support for more device nodes with EBS volumes #598

Conversation

@thevilledev (Author)

Currently libstorage supports device nodes for EBS volumes within the range /dev/xvd[f-p], which is in line with Amazon's recommendations. However, this is just a recommendation, and it is typically not sufficient for environments where volumes are used much more extensively. This is the case with any job scheduler that uses persistent storage, such as Kubernetes or Nomad.

The Kubernetes AWS device allocator uses a much broader device namespace, /dev/xvd[b-c][a-z]. Related source code reference: https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/aws/device_allocator.go#L80-L92

This issue has been discussed previously, at least in rexray/rexray#773. This pull request changes the device range from /dev/xvd[f-p] to the much larger /dev/xvd[b-c][a-z]. To prevent ghost-device issues such as rexray/rexray#410, the device allocator iterates the devices in random order.

Ville Törhönen added 3 commits July 20, 2017 21:41
- Use the Amazon-supported `/dev/xvd[b-c][a-z]` range for device mapping. The previously used device range was only "recommended"; this larger range is supported just as well.
- Shuffle slices to prevent possible phantom device issues.
@CLAassistant commented Jul 27, 2017

CLA assistant check
All committers have signed the CLA.

@kshitizbakshi

@vtorhonen Any idea when this will get merged? This fix is important to us. The 10-volume restriction per agent node in a cluster is a hampering limitation. [Our agent nodes already have some number of EBS volumes assigned.]

@thevilledev (Author)

Hello @kshitizbakshi, I haven't heard anything from the maintainers. I too would like to know when this can be merged.

@clintkitson (Collaborator)

@vtorhonen The libStorage project is being merged back into the REX-Ray project for the 0.10 release. See the notes on the main project page.

We are open to discussing the proposal, but it would be great if you could rebase this over to the head of the RR project under /libstorage.

It might be more broadly applicable if the choice of device paths were made a configuration option. I think the decision for the smaller range was based on compatibility with more EC2 instances. I do recognize that K8s supports the broader range you defined.

@akutz (Collaborator) commented Aug 30, 2017

Hi @vtorhonen,

If you file this PR against REX-Ray per @clintkitson's guidance and make your change one that can be enabled via a configuration property, then I will have zero issue merging it.

@thevilledev (Author)

Sure thing! It'll probably take a day or two to test it, but I'll create a PR once that's done.

@miry commented Aug 31, 2017

@vtorhonen Can you suggest how to use your build for testing with REX-Ray and Kubernetes?

@thevilledev (Author)

I have now created a new PR for REX-Ray: rexray/rexray#996

@miry If you want to test this, I suggest you check out the latest REX-Ray build I made: https://github.com/vtorhonen/rexray/releases/tag/v0.10.0-lds. Just update the binaries in your environment and add `useLargeDeviceRange: true` to both the client and server configs.
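For reference, a sketch of where the flag might sit in a REX-Ray YAML config. Only `useLargeDeviceRange: true` comes from this PR; the surrounding keys are an assumption about a typical EBS setup and may differ in your environment:

```yaml
# Sketch only: the libstorage/ebs key layout here is assumed, not
# taken from this PR. Apply the flag to both client and server configs.
libstorage:
  service: ebs
ebs:
  useLargeDeviceRange: true
```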

I'll close this PR so we can continue on the RR side.

@thevilledev closed this Sep 5, 2017