Enable a limited tmpfs for shared memory #113
Conversation
This looks good to me.
As for your request to test breaking the memory limit, writing there using `multiprocessing.shared_memory` would have also been my one and only test, and since we're doing that here, I personally can't think of another way to break this. Since it says "File too large": what happens if multiple `shared_memory.SharedMemory` objects are created that exceed the limit together? I would assume that the first file to exceed the limit hits the error?
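To make that scenario concrete, here is a sketch of such a probe (not from the PR's test suite; the 15 MiB segment size and the assumption of a 40 MB `/dev/shm` limit are illustrative):

```python
from multiprocessing import shared_memory

MiB = 1024 * 1024

# Hypothetical probe: three 15 MiB segments would exceed a 40 MB
# /dev/shm together, even though each one fits on its own.
segments = []
try:
    for _ in range(3):
        shm = shared_memory.SharedMemory(create=True, size=15 * MiB)
        segments.append(shm)
        # Touching the pages is what actually consumes tmpfs space;
        # a sparse, untouched segment may not count against the limit.
        shm.buf[:] = b"\x00" * shm.size
except OSError as exc:
    # Inside the constrained mount, the allocation that crosses the
    # limit is the one expected to fail (e.g. ENOSPC or SIGBUS on
    # write, depending on where the limit is enforced).
    print(exc)
finally:
    for shm in segments:
        shm.close()
        shm.unlink()
```

On an unconstrained host this loop simply succeeds, so the interesting behaviour only shows up when run inside the jailed tmpfs.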
Someone made me realise that shared memory could be accessed across NsJail instances. Can you think of a way to have isolated mounts for each instance?
How could that happen? We create a new tmpfs for every nsjail instance — it's not shared.
Is it really a new one, or is it just mounting the same directory over and over?
Is that what `is_bind false` does?
Lines 107 to 113 in e6cf1c4
If this works the same way that it would in the equivalent fstab, it will be a new tmpfs instance for each execution. I'm having trouble reading through how nsjail actually allocates this, though; I can't tell if it is doing something special or just mounting a filesystem in the typical way.
`is_bind` enables:
I'm not sure any of these are required; it might be a dud option for tmpfs mounts.
Are these standard semantics for tmpfs? For the sake of my curiosity, is it even possible to mount the same tmpfs under different directories?
You can achieve that through symlinking. I don't think there is a "same tmpfs" due to the nature of tmpfs allocation (a new tmpfs for every mountpoint, as far as I understand).
Okay, great. Just need to address the comments I left a couple weeks ago.
Thoughts on adding a test that creates lots of empty directories to ensure there are limits on that? Memory taken up by them doesn't count towards the tmpfs size afaict. Currently the limit seems to be due to it hitting `cgroup_mem_max`, which is fine, although we could also consider setting `nr_inodes` in the mount options to explicitly limit that if we wanted to.
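A sketch of such a probe (the function name and the directory count are illustrative, not from the PR): create empty directories until the filesystem refuses, to see whether an inode limit such as tmpfs `nr_inodes` kicks in before the memory limit does.

```python
import os
import tempfile

def count_creatable_dirs(base, limit=10_000):
    """Create empty directories under `base` until an OSError
    (e.g. ENOSPC once nr_inodes is exhausted) or `limit` is hit."""
    created = 0
    try:
        for i in range(limit):
            os.mkdir(os.path.join(base, f"d{i}"))
            created += 1
    except OSError:
        pass
    return created

if __name__ == "__main__":
    # On an unconstrained host this just reaches the limit;
    # inside a tmpfs mounted with nr_inodes=N it should stop early.
    with tempfile.TemporaryDirectory() as base:
        print(count_creatable_dirs(base, limit=100))
```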
```python
for shm_size, buffer_size, return_code in cases:
    with self.subTest(shm_size=shm_size, buffer_size=buffer_size):
        # Need enough memory for buffer and bytearray plus some overhead.
        mem_max = (buffer_size * 2) + (400 * Size.MiB)
```
I don't know why it needs so much overhead 🤷 but in any case, the point of this test isn't to test the normal NsJail OOM killer — there are other tests for that.
@wookie184 Testing for inode limits (and having inode limits) sounds like a good idea. Do you mind writing up a new issue for that? I think that situation would also apply to our temporary file system feature.
Great stuff, thanks for working on this
This PR introduces a small 40 MB tmpfs mounted at `/dev/shm` to allow shared memory, utilised by `multiprocessing`, to work. This means that `multiprocessing.Pool` should work in snekbox (and we have a new test that verifies this). I'd appreciate it if anyone testing this would try to break the 40 MB limit and attempt other general trickery surrounding these new capabilities.
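For context, this is the kind of code the tmpfs unblocks: `Pool` is backed by shared-memory semaphores under `/dev/shm`, so without this mount it fails at construction time. A minimal example (not the PR's actual test):

```python
from multiprocessing import Pool

def square(n):
    return n * n

if __name__ == "__main__":
    # Pool's synchronisation primitives live in /dev/shm, which is
    # why the tmpfs mount from this PR is needed inside the jail.
    with Pool(2) as pool:
        print(pool.map(square, range(5)))  # [0, 1, 4, 9, 16]
```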