
Memory holding issue #14

Closed
iromeo opened this issue Sep 13, 2017 · 3 comments

Comments


iromeo commented Sep 13, 2017

I recently switched from ByteBuffer to this library to be able to work with files > 2 GB in a library for reading *.bw files (the 0.3.6 -> 0.4.x changes in https://github.com/JetBrains-Research/big). Unfortunately I've run into memory issues: it looks like MMapBuffer.close() doesn't actually release the memory. Eventually my process is killed by the Linux OOM killer:

[23946008.796781] Killed process 12309 (java) total-vm:182699968kB, anon-rss:174968140kB, file-rss:0kB

I'm pretty sure I don't have other memory leaks in my application; it works fine with the ByteBuffer-based version of the big library. My application sequentially performs some calculations. Each iteration opens ~30 files (~200 MB each) via MMapBuffer(Path path, FileChannel.MapMode mapMode, ByteOrder order) in parallel (each file in its own thread), does multiple reads of different parts of each file, and then closes the related MMapBuffer, roughly as in the sketch below. With your library, memory consumption grows continuously according to top. lsof shows 30 memory-mapped files during each iteration and, as expected, 0 mapped files at the end of each iteration. I added some logging, so I'm sure that every opened MMapBuffer is closed at the end of each iteration. I can't figure out what is going on and why memory consumption keeps growing. My code finishes on a machine with 378 GB RAM and is killed by the OOM killer on a machine with 190 GB RAM. As I mentioned, the ByteBuffer-based implementation works successfully on the machine with 190 GB RAM and consumes significantly less memory, although there the memory-mapped files are unmapped by the JVM at somewhat random moments and normally stay mapped after all iterations have completed.
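
Roughly, each worker thread does something like this per file (a minimal sketch; the memory()/DirectMemory read calls and the offsets are my assumptions about the util-mmap API, and the real reading logic lives in the big library):

```java
import com.indeed.util.mmap.DirectMemory;
import com.indeed.util.mmap.MMapBuffer;

import java.io.IOException;
import java.nio.ByteOrder;
import java.nio.channels.FileChannel;
import java.nio.file.Path;

public class IterationSketch {
    // One worker thread per *.bw file: open, read several regions, close.
    static void processFile(Path path) throws IOException {
        try (MMapBuffer buffer = new MMapBuffer(path, FileChannel.MapMode.READ_ONLY,
                                                ByteOrder.LITTLE_ENDIAN)) {
            // Assumption: memory() exposes the mapped region as DirectMemory.
            DirectMemory memory = buffer.memory();
            // Multiple reads of different parts of the file (offsets are placeholders).
            int magic = memory.getInt(0L);
            long dataOffset = memory.getLong(8L);
            // ... further reads driven by the file's index ...
        } // close() runs here and is expected to unmap the region.
    }
}
```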

Do you have any recommendations on how to identify or fix this memory leak? The big library doesn't use madvise features or tracked buffers. I really want to use your library to deal with large files, but this issue is stopping me from switching.


iromeo commented Sep 13, 2017

Example from htop:

[htop screenshot, 2017-09-13 12:56]


iromeo commented Sep 13, 2017

And from top:

[top screenshot, 2017-09-13 13:00]

pmap shows huge blocks of [anon] memory, e.g.:

Address           Kbytes     RSS   Dirty Mode  Mapping
0000000000400000       4       4       0 r-x-- java
0000000000600000       4       4       4 r---- java
0000000000601000       4       4       4 rw--- java
000000000205a000     132      16      16 rw---   [ anon ]
00007f29e4000000   65524   65524   65524 rw---   [ anon ]
00007f29e7ffd000      12       0       0 -----   [ anon ]
00007f29e8000000   65536   65460   65460 rw---   [ anon ]
...
00007f319fffd000      12       0       0 -----   [ anon ]
00007f31a0000000   65520   65516   65516 rw---   [ anon ]
00007f31a3ffc000      16       0       0 -----   [ anon ]
00007f31a4000000   65520   65512   65512 rw---   [ anon ]
00007f31a7ffc000      16       0       0 -----   [ anon ]
00007f31a8000000   65520   65512   65512 rw---   [ anon ]
00007f31abffc000      16       0       0 -----   [ anon ]
00007f31ac000000   65528   65508   65508 rw---   [ anon ]
00007f31afffe000       8       0       0 -----   [ anon ]
00007f31b4c7b000  112432   77380       0 r--s- OD10_R1_hg19_rpkm.bw
00007f31bba47000  123316   85612       0 r--s- OD17_R1_hg19_rpkm.bw
00007f31c32b4000  157008   92804       0 r--s- OD11_R1_hg19_rpkm.bw
00007f31ccc08000  129876   79424       0 r--s- OD9_R1_hg19_rpkm.bw
00007f31d4add000  130800   93952       0 r--s- OD5_R1_hg19_rpkm.bw
00007f31dca99000  109272   86172       0 r--s- YD17_R1_hg19_rpkm.bw
00007f31e354f000  122520   86904       0 r--s- OD13_R1_hg19_rpkm.bw
00007f31eacf5000  141144  104268       0 r--s- YD5_R1_hg19_rpkm.bw
...
00007f42f8000000   65524   65516   65516 rw---   [ anon ]
00007f42fbffd000      12       0       0 -----   [ anon ]
00007f42fc000000   65524   65524   65524 rw---   [ anon ]
00007f42ffffd000      12       0       0 -----   [ anon ]
00007f4300000000   65524   64928   64928 rw---   [ anon ]
00007f4303ffd000      12       0       0 -----   [ anon ]
00007f4304000000   65520   65512   65512 rw---   [ anon ]
00007f4307ffc000      16       0       0 -----   [ anon ]
00007f4308000000   65524   65524   65524 rw---   [ anon ]
00007f430bffd000      12       0       0 -----   [ anon ]
...
total kB         158251800 149814060 147203776

Could this be a side effect of frequently opening/closing MMapBuffer?
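
One way to double-check where the growth comes from, directly from the JVM, is to count the process's anonymous mappings before and after each iteration (a minimal sketch, Linux only, parsing /proc/self/maps; the class name is made up):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.stream.Stream;

// Minimal sketch (Linux only): count anonymous mappings of the current JVM process by
// parsing /proc/self/maps. Sampling this before and after an iteration shows whether the
// growth is in anonymous memory (malloc arenas, direct buffers, ...) rather than in the
// file-backed *.bw mappings, which carry a pathname in the last column.
public final class AnonMappingCounter {
    public static long countAnonMappings() throws IOException {
        try (Stream<String> lines = Files.lines(Paths.get("/proc/self/maps"))) {
            // /proc/self/maps columns: address perms offset dev inode [pathname];
            // anonymous mappings have no pathname field.
            return lines
                    .map(line -> line.trim().split("\\s+"))
                    .filter(fields -> fields.length <= 5)
                    .count();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("anonymous mappings: " + countAnonMappings());
    }
}
```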


iromeo commented Sep 13, 2017

It seems this was a bug in the big library: it used a non-static ThreadLocal field for temporary buffers. So please ignore this issue. If the fix in the big library doesn't help, I'll reopen it.
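
For anyone who hits the same symptoms, the leak pattern looked roughly like this (a hypothetical sketch; class and field names are made up, the buffer size is arbitrary, and the real code is in the big library):

```java
import java.nio.ByteBuffer;

class BigWigReader {
    // BUG: non-static ThreadLocal. Every reader instance installs its own entry in each
    // worker thread's ThreadLocalMap. With long-lived pool threads and a new reader per
    // file per iteration, the large tmp buffers pile up and are only reclaimed if the
    // map happens to expunge its stale entries, which for an idle thread may be never.
    private final ThreadLocal<ByteBuffer> tmpBuffer =
            ThreadLocal.withInitial(() -> ByteBuffer.allocateDirect(64 * 1024 * 1024));

    // Fix options: make the field static so there is a single entry per thread, pass an
    // explicit scratch buffer instead of using ThreadLocal, or call tmpBuffer.remove()
    // when the reader is closed.

    int decodeBlock() {
        ByteBuffer buf = tmpBuffer.get();
        buf.clear();
        // ... fill buf from the mapped file and decode it ...
        return buf.remaining();
    }
}
```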

iromeo closed this as completed Sep 13, 2017