Enable a limited tmpfs for shared memory #113

Merged: 3 commits from jb3/shared-mem into main on Aug 30, 2023
Conversation

@jb3 (Member) commented on Jul 21, 2021

This PR introduces a small 40 MB tmpfs mounted at /dev/shm to provide shared memory, which multiprocessing needs in order to work. This means that multiprocessing.Pool should now work in snekbox (and we have a new test that verifies this).

I'd appreciate it if anyone testing this tries to break the 40 MB limit, or attempts any other general trickery surrounding these new capabilities.
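For context, this is roughly the kind of user code that should now run inside snekbox with a writable /dev/shm (a minimal sketch, not the new test itself):

# Minimal sketch: multiprocessing.Pool relies on POSIX semaphores backed by
# shared memory, which previously failed in the sandbox without /dev/shm.
from multiprocessing import Pool

def square(n):
    return n * n

if __name__ == "__main__":
    with Pool(2) as pool:
        print(pool.map(square, range(10)))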

@coveralls commented on Jul 21, 2021

Coverage Status: 90.947%, unchanged when pulling 08e7636 on jb3/shared-mem into afda301 on main.

@jchristgit (Member) left a comment

This looks good to me.

As for your request to test breaking the memory limit, writing there using multiprocessing.shared_memory would have also been my one and only test, and since we're doing that here, I personally can't think of another way to break this. Since it says "File too large", what happens if multiple shared_memory.SharedMemory objects are created that together exceed the limit? I would assume that the first file exceeding the limit hits the error?
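As a rough illustration of that question (a sketch only, not the test in this PR; segment sizes are arbitrary), several smaller segments could be allocated until their combined size passes the 40 MB cap:

# Sketch: allocate several shared memory segments and touch their pages.
# Per the discussion above, the expectation is that the segment whose
# allocation crosses the /dev/shm limit is the one that fails; the exact
# failure mode (an OSError such as "File too large", or the process being
# killed) depends on where the limit is enforced.
from multiprocessing import shared_memory

segments = []
try:
    for i in range(6):
        shm = shared_memory.SharedMemory(create=True, size=10 * 1024 * 1024)  # 10 MB each
        shm.buf[:] = b"\x00" * shm.size  # force the tmpfs to actually back the pages
        segments.append(shm)
        print(f"segment {i} allocated")
except OSError as exc:
    print(f"failed on segment {len(segments)}: {exc}")
finally:
    for shm in segments:
        shm.close()
        shm.unlink()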

Review comments on tests/test_nsjail.py and config/snekbox.cfg (outdated, resolved)
@MarkKoz (Member) commented on Aug 7, 2021

Someone made me realise that shared memory could be accessed across NsJail instances. Can you think of a way to have isolated mounts for each instance?

@jb3 (Member, Author) commented on Aug 7, 2021

How could that happen? We create a new tmpfs for every nsjail instance — it's not shared.

@MarkKoz (Member) commented on Aug 7, 2021

> How could that happen? We create a new tmpfs for every nsjail instance — it's not shared.

Is it really a new one or is it just mounting the same directory over and over?

@MarkKoz (Member) commented on Aug 7, 2021

Is that what is_bind false does?

@jb3 (Member, Author) commented on Aug 7, 2021

> Is it really a new one or is it just mounting the same directory over and over?

snekbox/config/snekbox.cfg

Lines 107 to 113 in e6cf1c4

mount {
    dst: "/dev/shm"
    fstype: "tmpfs"
    rw: true
    is_bind: false
    options: "size=40m"
}

If this works the same way it would in the equivalent fstab (sketched below), it will be a new tmpfs instance for each execution. I'm having trouble reading through how nsjail actually allocates this, though; I can't tell if it is doing something special or just mounting a filesystem in the typical way.
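For reference, the roughly equivalent fstab entry would look like this (illustrative only, not a file in this repository):

tmpfs  /dev/shm  tmpfs  rw,size=40m  0  0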

> Is that what is_bind false does?

is_bind enables:

  • MS_BIND ("A bind mount makes a file or a directory subtree visible at another point within the single directory hierarchy")
  • MS_REC ("Used in conjunction with MS_BIND to create a recursive bind mount, and in conjunction with the propagation type flags to recursively change the propagation type of all of the mounts in a subtree.")
  • MS_PRIVATE ("Make this mount point private. Mount and unmount events do not propagate into or out of this mount point.")

I'm not sure any of these are required; it might be a dud option for tmpfs mounts.

@MarkKoz (Member) commented on Aug 7, 2021

Are these standard semantics for tmpfs? For the sake of my curiosity, is it even possible to mount the same tmpfs under different directories?

@jb3 (Member, Author) commented on Aug 7, 2021

You can achieve that through symlinking, but I don't think there is a "same tmpfs" due to the nature of tmpfs allocation (a new tmpfs for every mountpoint, as far as I understand).

@MarkKoz (Member) commented on Aug 7, 2021

Okay, great. Just need to address the comments I left a couple weeks ago.

@wookie184 (Contributor) left a comment

Thoughts on adding a test that creates lots of empty directories to ensure there are limits on that? Memory taken up by them doesn't count towards the tmpfs size afaict. Currently the limit seems to be due to it hitting cgroup_mem_max, which is fine, although we could also consider setting nr_inodes in the mount options to explicitly limit that if we wanted to.
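The payload for such a test could be as simple as the following (a sketch of code to run inside the sandbox, not the actual test; where it stops depends on cgroup_mem_max or any nr_inodes setting):

# Sketch: create empty directories in /dev/shm until something stops us.
# Each empty directory consumes an inode and a little kernel memory, but
# little or no tmpfs "size", so the loop is expected to end either with an
# OSError (e.g. ENOSPC if nr_inodes is set) or with the process being
# killed once cgroup_mem_max is reached.
import os

count = 0
try:
    while True:
        os.mkdir(f"/dev/shm/dir_{count}")
        count += 1
except OSError as exc:
    print(f"stopped after {count} directories: {exc}")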

Further review comments on tests/test_nsjail.py (outdated, resolved)
for shm_size, buffer_size, return_code in cases:
    with self.subTest(shm_size=shm_size, buffer_size=buffer_size):
        # Need enough memory for buffer and bytearray plus some overhead.
        mem_max = (buffer_size * 2) + (400 * Size.MiB)
A Member left a comment on these lines:

I don't know why it needs so much overhead 🤷 but in any case, the point of this test isn't to test the normal NsJail OOM killer — there are other tests for that.

@MarkKoz (Member) commented on Aug 28, 2023

@wookie184 Testing for inode limits (and having inode limits) sounds like a good idea. Do you mind writing up a new issue for that? I think that situation would also apply to our temporary file system feature.

@wookie184 (Contributor) left a comment

Great stuff, thanks for working on this

@wookie184 merged commit 2b5f8a0 into main on Aug 30, 2023 (6 checks passed).
@wookie184 deleted the jb3/shared-mem branch on August 30, 2023 at 12:55.
@jb3 mentioned this pull request on Sep 2, 2023