
Far too high RSS memory consumption (memory leak?) with gocryptfs #569

Closed

Slartibartfast27 opened this issue May 30, 2021 · 23 comments

@Slartibartfast27

Slartibartfast27 commented May 30, 2021

I'm about to migrate my backup from EncFS to gocryptfs as gocryptfs IMHO is the best choice you can currently make for encrypted filesystems on Unix.

My version of gocryptfs is the one installed with apt install gocryptfs on my Linux distribution Armbian (basically an Ubuntu 20.04 LTS Focal):

gocryptfs --version
gocryptfs 1.7.1; go-fuse 0.0~git20190214.58dcd77; 2019-12-26 go1.13.5 linux/arm

cat /etc/issue
Armbian 21.05.1 Focal \l

My setup uses https://dirvish.org/ to create frequent rotating hard-link snapshots (see the rsync sketch after this list), which results in approximately these numbers:

  • 500,000 inodes
  • 12,000,000 files (approx 25 hard linked files per inode)
  • Hardware with 2 GB of total RAM (Odroid XU4Q)
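
For context, rotating hard-link snapshots of this kind are typically created with rsync's --link-dest; a minimal sketch (made-up paths, not the actual dirvish invocation):

# Unchanged files in the new snapshot become hard links into the previous
# one, so every rotation adds directory entries but almost no new inodes.
rsync -a --link-dest=../2021-05-29 /data/ /backup/2021-05-30/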

Before migrating from EncFS to gocryptfs there were absolutely no problems with RAM (RSS) usage: EncFS typically used approx. 100 kB or even zero (!).

The RSS memory consumption with EncFS was perfectly low, especially considering that approx. 20 Docker containers are running on the system.

image-20210530070417145

After switching to gocryptfs, the RAM usage exploded. gocryptfs consumes 32.5% of the total RAM as RSS (645 MB of 2 GB):

image-20210529071413801

Even after waiting approx 7 days (without any activity in gocryptfs!), gocryptfs still holds the same amount of RAM.

Here is a historical comparison with EncFS:

How to read it (see also https://github.com/prometheus/node_exporter ):

"RAM Used" = RSS-Sum = ${node_memory_MemTotal_bytes} - ${node_memory_MemAvailable_bytes}
"RAM Free"           = ${node_memory_MemFree_bytes} = ${node_memory_MemTotal_bytes} - ${RAM used}
"RAM Buffer + Cache" = ${node_memory_MemAvailable_bytes} - ${node_memory_MemFree_bytes}
"SWAP used"          = ${node_memory_SwapTotal_bytes} - ${node_memory_SwapFree_bytes}
"RAM + SWAP Used"    = ${RAM used} + ${SWAP used}

image-20210529071714533

image-20210529092539072

I found #132 and tried to force the kernel to drop all caches (taken from https://askubuntu.com/questions/609226/freeing-page-cache-using-echo-3-proc-sys-vm-drop-caches-doesnt-work ):

sync; echo 3 > /proc/sys/vm/drop_caches

Unfortunately this had no effect on the RAM usage of gocryptfs.

I also tried to allocate 2 GB of RAM with a shell script for 10 minutes (taken from https://stackoverflow.com/questions/4964799/write-a-bash-shell-script-that-consumes-a-constant-amount-of-ram-for-a-user-defi ):

#!/bin/bash

echo "Provide sleep time in the form of NUMBER[SUFFIX]"
echo "   SUFFIX may be 's' for seconds (default), 'm' for minutes,"
echo "   'h' for hours, or 'd' for days."
read -p "> " delay

vmstat
echo "begin allocating memory..."
# each iteration builds a ~600 kB string (100,001 zero-padded numbers),
# so the loop pins roughly 600 MB in this shell process
for index in $(seq 1000); do
    value=$(seq -w -s '' $index $(($index + 100000)))
    eval array$index=$value
done
echo "...end allocating memory"
vmstat
echo "sleeping for $delay"
vmstat
sleep $delay

This also had no effect on the RAM usage of gocryptfs.
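
An equivalent test could also be done with stress-ng, if that happens to be installed; a sketch:

# Allocate 1 GB of anonymous memory and keep it resident for 10 minutes,
# putting the same pressure on the kernel's memory reclaim path.
stress-ng --vm 1 --vm-bytes 1G --vm-keep --timeout 10m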

image-20210529092812055

image-20210529092900640

Only remounting the gocryptfs filesystem reduced the RAM usage:

image-20210529093127139

Very nice RAM usage after remount:

image-20210529093102556

When doing an ls -R over only the approx. 500,000 "real" files, not over the hard-link snapshots (which basically matches the behaviour of the daily rsync job), the gocryptfs RAM usage goes up again.

image-20210529094338735

Repeating this several times in a row, basically all memory figures increase steadily, which could be an indicator of a memory leak (detailed log in the attached file rss-mem-log-1.txt):

> umount-gocryptfs.sh ; mount-gocryptfs.sh
> ps -o rss,size,vsize,cmd $(pgrep gocryptfs)
  RSS SIZE   VSZ    CMD
75124 143104 875808 /usr/bin/gocryptfs -fg -notifypid=1787 -nonempty /home/cb/enc-backup-files/gocryptfs-vol /home/cb/private-backup-gocryptfs/

> for i in 1 2 3 4 5 6 7 8 9 ; do
>  echo "Run $i ----------------------------------"
>  ls -R private-backup-gocryptfs/current | wc -l
>  ps -o rss,size,vsize,cmd $(pgrep gocryptfs)
> done
Run RSS SIZE VSZ CMD
1 142104 285888 950848 /usr/bin/gocryptfs -fg -notifypid=1787 -nonempty /home/cb/enc-backup-files/gocryptfs-vol /home/cb/private-backup-gocryptfs/
2 193332 365444 952032 /usr/bin/gocryptfs -fg -notifypid=1787 -nonempty /home/cb/enc-backup-files/gocryptfs-vol /home/cb/private-backup-gocryptfs/
3 205764 365444 952032 /usr/bin/gocryptfs -fg -notifypid=1787 -nonempty /home/cb/enc-backup-files/gocryptfs-vol /home/cb/private-backup-gocryptfs/
4 224436 365444 952032 /usr/bin/gocryptfs -fg -notifypid=1787 -nonempty /home/cb/enc-backup-files/gocryptfs-vol /home/cb/private-backup-gocryptfs/
5 224436 365444 952032 /usr/bin/gocryptfs -fg -notifypid=1787 -nonempty /home/cb/enc-backup-files/gocryptfs-vol /home/cb/private-backup-gocryptfs/
6 223392 365444 952032 /usr/bin/gocryptfs -fg -notifypid=1787 -nonempty /home/cb/enc-backup-files/gocryptfs-vol /home/cb/private-backup-gocryptfs/
7 225236 373636 960228 /usr/bin/gocryptfs -fg -notifypid=1787 -nonempty /home/cb/enc-backup-files/gocryptfs-vol /home/cb/private-backup-gocryptfs/
8 224424 373636 960228 /usr/bin/gocryptfs -fg -notifypid=1787 -nonempty /home/cb/enc-backup-files/gocryptfs-vol /home/cb/private-backup-gocryptfs/
9 230916 373636 960228 /usr/bin/gocryptfs -fg -notifypid=1787 -nonempty /home/cb/enc-backup-files/gocryptfs-vol /home/cb/private-backup-gocryptfs/

Repeating the same thing several times in a row over all files including the hard-link snapshots (approx. 12,000,000), basically all memory figures again increase steadily, which could be an indicator of a memory leak (detailed log in the attached file rss-mem-log-2.txt):

umount-gocryptfs.sh ; mount-gocryptfs.sh
ps -o rss,size,vsize,cmd $(pgrep gocryptfs)
for i in 1 2 3 4 5 6 7 8 9 ; do 
 	echo "Run $i ----------------------------------"
 	ls -R private-backup-gocryptfs/current private-backup-gocryptfs/snapshots | wc -l 
 	ps -o rss,size,vsize,cmd $(pgrep gocryptfs)
done
Run RSS SIZE VSZ CMD
1 833348 993072 1228656 /usr/bin/gocryptfs -fg -notifypid=7788 -nonempty /home/cb/enc-backup-files/gocryptfs-vol /home/cb/private-backup-gocryptfs/
2 832360 1279864 1498928 /usr/bin/gocryptfs -fg -notifypid=7788 -nonempty /home/cb/enc-backup-files/gocryptfs-vol /home/cb/private-backup-gocryptfs/
3 933928 1302148 1519920 /usr/bin/gocryptfs -fg -notifypid=7788 -nonempty /home/cb/enc-backup-files/gocryptfs-vol /home/cb/private-backup-gocryptfs/
4 939984 1302148 1519920 /usr/bin/gocryptfs -fg -notifypid=7788 -nonempty /home/cb/enc-backup-files/gocryptfs-vol /home/cb/private-backup-gocryptfs/
5 941632 1319820 1536560 /usr/bin/gocryptfs -fg -notifypid=7788 -nonempty /home/cb/enc-backup-files/gocryptfs-vol /home/cb/private-backup-gocryptfs/

In case it is of interest: here are some details of the gocryptfs process with an RSS of 918 MB:
pmap.txt

With these numbers it's a really hard decision whether to do the final switch from EncFS to gocryptfs.

Does anybody know whether there have been improvements in this area from the version distributed by apt (gocryptfs 1.7.1) to the current stable version (gocryptfs 1.8.0) or the current beta version (v2.0-beta4)?

Any other hints on how to use gocryptfs without wasting so much RSS?

@lechner
Contributor

lechner commented May 30, 2021

@rfjakob Please let me know if you need assistance via my access to Debian's arm64 porter boxes.

@Slartibartfast27
Author

@rfjakob : Please also let me know if there is anything I can do to assist you in fixing this, e.g. running a profiled or new version on my system...

@rfjakob
Owner

rfjakob commented May 30, 2021

Thanks for the excellent report! Also thanks for the offers @lechner and @Slartibartfast27 . I will see if I can repro on amd64 first.

@Slartibartfast27 could you check what

echo 3 | sudo tee /proc/sys/vm/drop_caches
sleep 5m
grep VmRSS /proc/$(pgrep gocryptfs)/status
grep LazyFree /proc/$(pgrep gocryptfs)/smaps

says? The "sleep 5 minutes" is to give the Go scavenger time to run.


PS: Note to self for testing: How to create 500000 empty files quickly (500 directories with 1000 files each):

for i in $(seq 1 500) ; do mkdir -v $i && cd $i && touch $(seq 1 1000) && cd .. ; done
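
And, to also approximate the hard-link snapshots, a sketch (assumes GNU cp and that the tree above was created inside a directory named current):

# cp -al recreates the directory structure but hard-links the files,
# mimicking dirvish-style rotating snapshots (~25 links per inode).
for s in $(seq 1 24) ; do cp -al current snapshot-$s ; done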

@Slartibartfast27
Author

Slartibartfast27 commented May 30, 2021

Hello @rfjakob, thanks for the extremely quick response.
I'm very happy to assist ;-)

Here are the requested results for the "small" ls (lists 500,000 files).
In case you need the really big RSS size (600 MB+), please let me know. It just takes a couple of hours to get there, as I need to iterate over 12 million file entries....

Initial state after remount:

> umount-gocryptfs.sh ; mount-gocryptfs.sh
root@odroid2020:~# ps -o rss,size,vsize,cmd $(pgrep gocryptfs)
  RSS  SIZE    VSZ CMD
75088 151684 885284 /usr/bin/gocryptfs -fg -notifypid=22622 -nonempty /home/cb/enc-backup-files/gocryptfs-vol /home/cb/private-backup-gocryptfs/
root@odroid2020:~# grep VmRSS /proc/$(pgrep gocryptfs)/status
VmRSS:     75088 kB
root@odroid2020:~# grep LazyFree /proc/$(pgrep gocryptfs)/smaps | grep -v "LazyFree:              0 kB"
LazyFree:          65200 kB

Final state after ls:

> ls -R private-backup-gocryptfs/current | wc -l
493733

> ps -o rss,size,vsize,cmd $(pgrep gocryptfs)
  RSS  SIZE    VSZ CMD
104856 246444 950592 /usr/bin/gocryptfs -fg -notifypid=19009 -nonempty /home/cb/enc-backup-files/gocryptfs-vol /home/cb/private-backup-gocryptfs/


root@odroid2020:~# echo 3 | sudo tee /proc/sys/vm/drop_caches
3
root@odroid2020:~# sleep 5m
root@odroid2020:~# grep VmRSS /proc/$(pgrep gocryptfs)/status
VmRSS:     91008 kB
root@odroid2020:~# grep LazyFree /proc/$(pgrep gocryptfs)/smaps
LazyFree:              0 kB
LazyFree:              0 kB
LazyFree:              0 kB
LazyFree:              0 kB
LazyFree:          29480 kB
LazyFree:              0 kB
... ("0 kB" repeated approx 100 times)
LazyFree:              0 kB


root@odroid2020:~# ps -o rss,size,vsize,cmd $(pgrep gocryptfs)
  RSS  SIZE    VSZ CMD
87736 246444 950592 /usr/bin/gocryptfs -fg -notifypid=19009 -nonempty /home/cb/enc-backup-files/gocryptfs-vol /home/cb/private-backup-gocryptfs/

I will see if I can repro on amd64 first.

Just a remark in case it matters: my architecture is 32-bit, not 64-bit:
root@odroid2020:~# uname -a
Linux odroid2020 5.4.116-odroidxu4 #21.05.1 SMP PREEMPT Thu May 6 21:50:05 UTC 2021 armv7l armv7l armv7l GNU/Linux

@rfjakob
Owner

rfjakob commented May 30, 2021

Thanks! Some background: Go versions 1.12 to 1.15 use MADV_FREE / LazyFree to return memory to the kernel. This memory is still counted in VmRSS, but you can see it marked as LazyFree, so to get the real RSS you have to subtract LazyFree. Go 1.16 stopped using LazyFree, but your gocryptfs version is compiled with Go 1.13.5.

From your data I see:

Initial state:

VmRSS:     75088 kB
LazyFree:  65200 kB
----
Net:        9888 kB ... ~ 10 MB, that's good

The ls -R now fills caches both in the kernel and in gocryptfs. echo 3 > /proc/sys/vm/drop_caches (or the variant with sudo, both work) should clear those caches.

After drop_caches you have:

VmRSS:     91008 kB
LazyFree:  29480 kB
-----
Net:       61528 kB ... ~ 61 MB, that's 51 MB more, and I'm not sure where it goes.
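
The subtraction can be scripted; a minimal sketch (assumes a single running gocryptfs process):

pid=$(pgrep gocryptfs)
rss=$(awk '/^VmRSS/ {print $2}' /proc/$pid/status)
# smaps contains one LazyFree line per mapping, so sum them all
lazy=$(awk '/^LazyFree/ {sum += $2} END {print sum + 0}' /proc/$pid/smaps)
echo "VmRSS $rss kB - LazyFree $lazy kB = net $((rss - lazy)) kB"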

@Slartibartfast27
Author

Would it make sense if I compiled 1.8 and tried again?

@rfjakob
Owner

rfjakob commented May 31, 2021

1.8 should be the same, but could you check what the memory usage of v2.0-beta4-21 (attached) looks like? It is compiled with Go 1.16, so no more LazyFree.

gocryptfs.v2.0-beta4-21-g242cdf9.armv7.gz

On amd64 (=64 bit Intel) I get this:

  • 11 MB fresh mount
  • 287 MB after "ls -R" of 500000 files
  • 340 MB immediately after drop_caches
  • Stays on 340 for 180 seconds
  • Ramp down to 108 MB

image

@rfjakob
Owner

rfjakob commented May 31, 2021

RSS values were collected using

while sleep 1 ; do grep VmRSS /proc/$(pgrep gocryptfs)/status ; done

Also I figured out why RSS does not go back down to "fresh mount" level: The hash tables tracking the inodes cannot shrink back to empty completely ( golang/go#20135 ). There are no tracked inodes after drop_caches, but the memory will be reused on the next "ls -R".

@Slartibartfast27
Author

Wow!
The v2.0-beta4-21-g242cdf9 works perfectly:

Min RSS (directly after mount): 9,760 kB
Max RSS (during ls over 500,000 files): 36,616 kB
RSS before drop caches: 19,216 kB
RSS after drop caches: 19,328 kB
RSS after drop caches + 5m: 15,416 kB

Teaser: I'm going to iterate over all 12,000,000 files overnight, but I don't expect any change in the overall result.

Steps performed:

/tmp/gocryptfs.v2.0-beta4-21-g242cdf9.armv7 -nonempty ~/enc-backup-files/gocryptfs-vol ~/private-backup-gocryptfs/
ls -R private-backup-gocryptfs/current | wc -l
while sleep 10 ; do grep VmRSS /proc/$(pgrep gocryptfs)/status | tee -a /tmp/VmRSS-gocryptfs.v2.0-beta4-21-g242cdf9.log ; done
echo "echo 3 | sudo tee /proc/sys/vm/drop_caches" >> /tmp/VmRSS-gocryptfs.v2.0-beta4-21-g242cdf9.log
echo 3 | sudo tee /proc/sys/vm/drop_caches
sleep 5m
cat /tmp/VmRSS-gocryptfs.v2.0-beta4-21-g242cdf9.log | sort --unique | sort -n --reverse

Detailed logs attached VmRSS-gocryptfs.v2.0-beta4-21-g242cdf9.log

The hash tables tracking the inodes cannot shrink back to empty completely ( golang/go#20135 ).

Just for technical curiosity, although definitely not needed:
The performance impact of changing from a HashMap (O(1) lookups, not shrinking) to a TreeMap (O(log n) lookups, shrinking) would be very interesting, if Go supports TreeMaps...

Which option would you recommend now?
(A) wait until 2.0 is released (not beta anymore)
(B) use the 2.0 beta version (it should really be reliable, as it will hold more than 30 years of data history on my side...)

@rfjakob
Owner

rfjakob commented May 31, 2021

This... looks too good to be true. Did you have little free memory on the system?

I recommend waiting for 2.0 final as I'm still working on some minor issues discovered by xfstests testing.

@Slartibartfast27
Author

This... looks too good to be true. Did you have little free memory on the system?

It is true ;-)
The system had plenty of its 2 GB RAM available during the ls and is currently iterating over the 12,000,000 files...
image

I recommend waiting for 2.0 final as I'm still working on some minor issues discovered by xfstests testing

I'm not in a hurry. Any thoughts on when 2.0 final could be released (best case ... worst case)?

@rfjakob
Owner

rfjakob commented May 31, 2021

I just tried running the gocryptfs.v2.0-beta4-21-g242cdf9.armv7 binary on a Raspberry Pi 4 (armv8 64 bit) that I have. Seems to work fine!

But I see

jakob@192-168-0-106:~$ grep VmRSS /proc/$(pgrep gocryptfs)/status 
VmRSS:	  274484 kB

after "ls -R". I guess it should be a little lower because 32 bit pointers take half as much space as 64 bit pointers, but still, it's in the same range as amd64

@rfjakob
Owner

rfjakob commented May 31, 2021

Maybe the difference is that your system has swap space? That may also explain the fact that you saw EncFS at RSS zero?

rfjakob added a commit to rfjakob/go-fuse that referenced this issue May 31, 2021
Maps do not free all memory when elements get deleted
( golang/go#20135 ).

As a workaround, we recreate our two big maps (kernelNodeIds & stableAttrs)
when they have shrunk dramatically (100 x smaller).

Benchmarking with loopback (go version go1.16.2 linux/amd64) shows:

Before this change:

State              VmRSS (kiB)
-----              -----------
Fresh mount          4000
ls -R 500k files   271100
after drop_cache   336448
wait ~ 3 minutes   101588

After:

State              VmRSS (kiB)
-----              -----------
Fresh mount          4012
ls -R 500k files   271100
after drop_cache    31528

Results for gocryptfs are similar.

Fixes: rfjakob/gocryptfs#569
Change-Id: Idcae1ab953270516735839a034d586717647b8db
@Slartibartfast27
Author

Slartibartfast27 commented May 31, 2021

Maybe the difference is that your system has swap space? That may also explain the fact that you saw EncFS at RSS zero?

Sure: otherwise EncFS could not run with an RSS of zero. I defined 16 GB of SWAP but basically never need to use it.
Anyway: if gocryptfs uses SWAP, that's perfectly OK. Only RSS hurts...

@Slartibartfast27
Author

The ls -R over the 12,000,000 files has now been running for a couple of hours.
SWAP is basically 0 and gocryptfs bounces between 16 MB and 25 MB ;-)
image

@Slartibartfast27
Author

Slartibartfast27 commented May 31, 2021

I just tried running the gocryptfs.v2.0-beta4-21-g242cdf9.armv7 binary on a Raspberry Pi 4 (armv8 64 bit) that I have. Seems to work fine!

But I see

jakob@192-168-0-106:~$ grep VmRSS /proc/$(pgrep gocryptfs)/status 
VmRSS:	  274484 kB

after "ls -R". I guess it should be a little lower because 32 bit pointers take half as much space as 64 bit pointers, but still, it's in the same range as amd64

There could be two reasons:
A) Currently my USB disk /dev/sda is under pretty heavy load, as I'm doing a chown ... over all 12,000,000 files plus a full new sync to my remote friend, which needs to upload 2 TB. Therefore my system has more time to release unused memory before new memory is allocated. If I find time (earliest tomorrow, so as not to disturb the 12,000,000-files test), I'll try it on my eMMC card, which has extremely good performance for an SBC:

TMPFILE=/tmp/speedtest   # <<---- on eMMC

dd if=/dev/zero of=$TMPFILE bs=8M count=100 oflag=direct
838860800 bytes (839 MB, 800 MiB) copied, 9.69906 s, 86.5 MB/s   <<-- write performance
dd if=$TMPFILE of=/dev/null bs=8M count=100 iflag=direct
838860800 bytes (839 MB, 800 MiB) copied, 5.8522 s, 143 MB/s   <<-- read performance
rm -f $TMPFILE

B) If you run the default Raspberry Pi 4 image, you'll have a 32-bit OS. The 64-bit RPi4 OS is only a beta.

@rfjakob
Owner

rfjakob commented May 31, 2021

A) Yes, I guess the other chown is preventing too much caching in gocryptfs.

B) It's been a while since I installed this; I guess it's the beta:

0 jakob@192-168-0-106:~$ uname -a
Linux 192-168-0-106 5.3.0-1021-raspi2 #23-Ubuntu SMP Fri Mar 27 09:12:04 UTC 2020 aarch64 aarch64 aarch64 GNU/Linux

@rfjakob
Owner

rfjakob commented May 31, 2021

PS @ A): I don't think it matters if it is on a different disk; what matters is that there is another process touching a lot of files. This will push files out of the gocryptfs cache.

@Slartibartfast27
Author

Even after iterating over 12,000,000 files overnight, there were absolutely no RSS problems:

Min RSS (directly after mount): 9,760 kB
Max RSS (during ls over 12,000,000 files): 36,616 kB  (yes, same as before with 500,000 files!)
RSS before drop caches: 15,652 kB
RSS after drop caches: 21,164 kB
RSS after drop caches + 5m: 10,792 kB  --> absolutely fantastic!

image

image

Logs: VmRSS-gocryptfs.v2.0-beta4-21-g242cdf9.log

So I'll be happily and patiently waiting for the 2.0 release before I do the switch from EncFS --> gocryptfs ;-)

@Slartibartfast27
Author

PS @ A): I don't think it matters if it is on a different disk; what matters is that there is another process touching a lot of files. This will push files out of the gocryptfs cache.

I'll do some more tests when my system is idle again, likely in approx 1 week...

rfjakob added a commit to rfjakob/go-fuse that referenced this issue Jun 1, 2021
@rfjakob rfjakob added this to the v2.1 milestone Jun 5, 2021
@Slartibartfast27
Author

Slartibartfast27 commented Jun 6, 2021

Hello @rfjakob,

here are the promised results of my tests on an idle system (Odroid XU4Q with Armbian):

When                                                   RSS on eMMC [kB]   RSS on USB-HDD [kB]
after creation of fresh FS                                        8,804                 9,740
max during creation of 500,000 files                             85,380                61,192
after creation of 500,000 files                                  33,416                46,080
immediately after drop caches                                    75,856                59,080
5 minutes after drop caches                                      21,388                16,336
max during ls -R over 500,000 files (no re-mount!)              165,892               151,432
after ls -R over 500,000 files (no re-mount!)                   145,684               112,788
immediately after drop caches                                   165,552               133,292
5 minutes after drop caches                                      39,184                33,444
after fresh re-mount                                             11,076                 9,096
max during ls -R over 500,000 files (after re-mount!)           171,628               169,136
after ls -R over 500,000 files (after re-mount!)                126,376               142,096
immediately after drop caches                                   171,628               169,136
5 minutes after drop caches                                      31,288                33,008

I tested everything with the attached script in 2 ways:

  • with the ciphertext directory on a fast eMMC card
  • with the ciphertext directory on a USB 3.0 HDD

I also wanted to run the test against a tmpfs, but my tmpfs did not have enough free inodes for 500,000 files:

Filesystem     Inodes IUsed  IFree IUse% Mounted on
tmpfs          187507     7 187500    1% /tmp-ram
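
As an aside, the tmpfs inode limit can be raised at mount time via the nr_inodes option; a sketch (run as root):

# Give the tmpfs enough inodes for the 500,000-file test, then verify.
mount -o remount,nr_inodes=1000000 /tmp-ram
df -i /tmp-ram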

So I executed the tests with these commands:

/root/bin/gocryptfs-RAM-test.sh  /tmp/gocryptfs.v2.0-beta4-21-g242cdf9.armv7  500 1000 /tmp/aaa | tee /tmp/500000-files-on-eMMC.log
grep RSS: /tmp/500000-files-on-eMMC.log > /tmp/500000-files-on-eMMC-memonly.log


/root/bin/gocryptfs-RAM-test.sh  /tmp/gocryptfs.v2.0-beta4-21-g242cdf9.armv7  500 1000 /mnt/usbhd02/tmp/aaa | tee /tmp/500000-files-on-USB-HD.log
grep RSS: /tmp/500000-files-on-USB-HD.log > /tmp/500000-files-on-USB-HD-memonly.log


Here are the results as charts:

image
image

During the tests, I always had plenty of RAM available (less than 1 GB RSS-Sum of total 2 GB):
image

Overall, the results of gocryptfs.v2.0-beta4-21-g242cdf9.armv7 are still perfect for me, also on an idle system: always less than 40 MB of remaining RSS.

For me there is no need to optimize anything anymore in terms of RAM usage.

But if you like, give me a compiled binary and I'll be happy to repeat the test on my system with rfjakob/go-fuse@aa124b1 merged...

Here are the scripts I used for testing:
analysis-with-idle-fs.zip

@Slartibartfast27
Author

Just for reference, here are the results for gocryptfs version 1.7.1 (ignoring LazyFree), using the same script as above:

image

--> Version 2.0 is better than 1.7.1 under all circumstances:
1.7.1 already starts at 76 MB, while 2.0 peaks at 39 MB after creation + ls of 500,000 files!

rfjakob added a commit to rfjakob/go-fuse that referenced this issue Jun 10, 2021
rfjakob added a commit to rfjakob/go-fuse that referenced this issue Jun 11, 2021
hanwen pushed a commit to hanwen/go-fuse that referenced this issue Jun 11, 2021
@rfjakob
Owner

rfjakob commented Jun 11, 2021

Memory usage after drop_caches should be much better now. Some numbers are here:
hanwen/go-fuse@24a1dfe

Again thanks for the excellent report & testing!
