
Update device plugin to support 4 enclaves per instance #13

Open
wants to merge 1 commit into base: main

Conversation

@shankar95raju shankar95raju commented Dec 4, 2023

Issue #
Increase the nitro_enclaves capacity to 4 as per the documentation #9

Problem
AWS supports running up to 4 enclaves per instance. Currently, the device plugin allows only one pod requiring the nitro_enclaves host device to be scheduled on a k8s worker node/EC2 instance. This is because the device plugin server's ListAndWatch endpoint advertises a device count of 1, preventing the k8s scheduler from scheduling multiple pods requiring the nitro_enclaves resource on the same worker node/EC2 instance.

Solution

Updating the ListAndWatch response to advertise 4 devices lets the kubelet know how many "aws.ec2.nitro/nitro_enclaves" devices are available/allocatable on a given k8s worker node.

@shankar95raju force-pushed the feature/update-device-plugin-increase-capacity branch from 8d61c57 to 1d814b6 on December 4, 2023 19:35
// in a k8s worker node. The number of devices, in this context, does not represent the number of "nitro_enclaves" device files present on the host;
// instead it can be interpreted as the number of pods that can share the same host device file. The same host device file, "nitro_enclaves",
// can be mounted into multiple pods, each of which can use it to run an enclave.
// This lets us schedule 2 or more pods requiring the nitro_enclaves device on the same k8s node/EC2 instance.
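The change the comment above describes can be sketched as building a fixed-size device list for the ListAndWatch response. This is a simplified, self-contained sketch: the `Device` struct here is a stand-in for the real `k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1` API type, and the constant and function names are hypothetical, not the plugin's actual identifiers.

```go
package main

import "fmt"

// Device is a simplified stand-in for the k8s device plugin API's Device message.
type Device struct {
	ID     string
	Health string
}

// maxEnclavesPerInstance mirrors the AWS limit of 4 enclaves per EC2 instance.
const maxEnclavesPerInstance = 4

// enclaveDevices builds the device list that ListAndWatch would stream to the
// kubelet. Every entry ultimately maps to the same /dev/nitro_enclaves host
// device file; the count only controls how many pods the scheduler may place.
func enclaveDevices() []*Device {
	devices := make([]*Device, 0, maxEnclavesPerInstance)
	for i := 0; i < maxEnclavesPerInstance; i++ {
		devices = append(devices, &Device{
			ID:     fmt.Sprintf("nitro_enclaves_%d", i),
			Health: "Healthy",
		})
	}
	return devices
}

func main() {
	for _, d := range enclaveDevices() {
		fmt.Println(d.ID, d.Health)
	}
}
```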


Looks like those 2 or more potentially independent pods would be able to control each other's enclaves. Could that be a problem?


I think this should not be a problem.

The nitro_enclaves device's fops only allow creating new enclaves through ne_ioctl, which creates a new file descriptor for each enclave via ne_create_enclave_ioctl. That enclave-specific file descriptor is what controls the enclave, through ne_enclave_fops.

So as long as an owning pod does not deliberately share its enclave's file descriptor with other pods, control of each enclave belongs to the pod that created it.

@shankar95raju (Author)

If this PR looks good, can I get another approval to merge it?
