docker-compose: bump db's shm_size #5348

Merged: 1 commit merged into master on Mar 31, 2020

Conversation

@cirocosta (Member) commented Mar 23, 2020

Why do we need this PR?

It seems like Postgres, by default, might need to resize the amount of
shared memory that it uses for performing parallel work.

I've hit this when making hundreds of concurrent requests to certain API
endpoints:

 could not resize shared memory segment "/PostgreSQL.<..>"
 to $N bytes: No space left on device

After applying the change (which bumps Docker's default from 64MB to 1GB), I
can then execute those requests with no problems.

Changes proposed in this pull request

Bump the db container's default shared memory device size from 64MB to 1GB (see the sketch below).
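
For reference, the relevant bit of docker-compose.yml ends up looking roughly like this (a sketch; the db service name matches this PR, but the image and surrounding keys are illustrative):

services:
  db:
    image: postgres
    # /dev/shm inside a container defaults to 64MB; Postgres parallel workers
    # place their dynamic shared memory segments there, so raise the ceiling
    shm_size: 1gb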


saw this when testing #5341 (comment) out


Signed-off-by: Ciro S. Costa <cscosta@pivotal.io>
@cirocosta (Member Author) commented:

And it seems like the Helm chart we depend upon when launching, concourse/concourse-chart, already has a higher /dev/shm size by default: helm/charts#19025

@vito (Member) commented Mar 27, 2020

after applying the change (which bumps Docker's default from 64MB to 1GB), I

Does this mean it pre-allocates ~1GB of memory? If so, can we make that lower? It would be nice to keep the overall resource requirements low for development.

@jamieklassen (Member) commented:

I wonder if this can/should take the form of an override? I wonder about folks who are using modest machines - is this change going to mean that one docker container is eating a gig of memory at baseline?

EDIT: @vito beat me to it.
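
If an override were wanted, Compose's standard docker-compose.override.yml mechanism could carry it. A hypothetical sketch, not something proposed in this PR:

services:
  db:
    # pick a smaller ceiling on modest machines; Compose merges this file
    # over docker-compose.yml automatically
    shm_size: 256m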

@cirocosta
Copy link
Member Author

Ooh, sorry for the confusion - having /dev/shm with a particular size does
not mean that memory gets preallocated for it.

This is because /dev/shm is just a tmpfs mount, which means that the size
you specify only sets a ceiling on how much memory the kernel can consume
for files that end up in that mountpoint:

size:      The limit of allocated bytes for this tmpfs instance. The
           default is half of your physical RAM without swap.

e.g.:

$ cat /proc/meminfo  | ag -i available
MemAvailable:   27536700 kB

$ mount -t tmpfs -o size=10G tmpfs /mnt/tmpfs

$ cat /proc/meminfo  | ag -i available
MemAvailable:   27534936 kB

But... it can be problematic to have a size that's too big:

size:      [...]. If you oversize your tmpfs instances the machine will
           deadlock since the OOM handler will not be able to free that memory.

Now, that seems very scary 😅 but it's actually "fine" from what I
tested (after all, I'm the warranty) on Linux 5.3.0:

# I can mount even with size 100G (having only 16G of ram available in the vm),
# with no problems
#
tmpfs            10G     0   10G   0% /mnt/tmpfs-1
tmpfs            10G     0   10G   0% /mnt/tmpfs-2
tmpfs            10G     0   10G   0% /mnt/tmpfs-3
tmpfs           100G     0   100G  0% /mnt/tmpfs-biig

And if I try to put a file under /mnt/tmpfs-biig that's ... big ... it's not
a big deal - before the mount gets full, the regular OOM handling kicks in
(sure, not fun), which seems to prevent disaster.

PS: the OOM killer will start evicting processes based on the OOM score
system, but it will not remove files from the tmpfs-based mountpoint (aside
from swapping things to disk if needed, AFAIK).
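
Roughly, the shape of that test (the exact sizes and paths here are illustrative):

# mount a tmpfs far bigger than the RAM available in the VM
$ mount -t tmpfs -o size=100G tmpfs /mnt/tmpfs-biig

# then try to fill it - well before the mount reports as full, the kernel
# runs out of memory and the OOM killer starts evicting processes
$ dd if=/dev/zero of=/mnt/tmpfs-biig/bigfile bs=1M count=20000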


That's a long way of saying that I think 1G here is fine and costs the user
nothing.
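
To double-check the resulting ceiling on any given setup, something like this should do (assuming the db service name from this PR):

$ docker-compose exec db df -h /dev/shm   # should report a ~1G tmpfs mounted at /dev/shm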


@jamieklassen (Member) left a comment


Thanks for giving the explanation so I didn't have to do that research myself!

@cirocosta cirocosta merged commit 004e924 into master Mar 31, 2020
@cirocosta cirocosta deleted the increase-shm branch March 31, 2020 22:44
@jwntrs jwntrs added the release/no-impact This is an issue that never affected released versions (i.e. a regression caught prior to shipping). label Apr 29, 2020
muntac pushed a commit that referenced this pull request Dec 7, 2020
Guardian defaults to the Linux value, which is half of the memory on the
machine. Docker uses 64 MB. For certain use cases, such as running Concourse
via docker-compose in a task, the postgres container requires much more than
that.

As mentioned in the link below, the memory is not pre-allocated and the
OOM killer will kill the process if it takes up too much memory.
#5348 (comment)

#6246

Signed-off-by: Muntasir Chowdhury <mchowdhury@pivotal.io>