
Memory leak after opening maildir mailbox #4087

Closed
jpalus opened this issue Oct 28, 2023 · 8 comments · Fixed by #4090
jpalus commented Oct 28, 2023

Expected Behaviour

Memory use remains constant when repeatedly reopening the same, unmodified maildir mailbox.

Actual Behaviour

Memory use grows with every open.

Steps to Reproduce

  1. Open neomutt, which by default opens a small maildir mailbox
  2. Go to a maildir mailbox with a bigger number of mails (~100K in my case)
  3. Open the big mailbox a couple of times
  4. Go back to the small maildir mailbox

Observe the difference in memory use between the first and last step.

It appears to be caused by memory not being freed:

mx_alloc_memory(m, m->msg_count);

as indicated by massif after step 4:

99.88% (165,580,230B) (heap allocation functions) malloc/new/new[], --alloc-fns, etc.
->98.03% (162,515,648B) 0x518404: mutt_mem_realloc (memory.c:131)
| ->65.31% (108,265,600B) 0x43033F: mx_alloc_memory (mx.c:1260)
| | ->65.31% (108,265,600B) 0x4B6E46: maildir_move_to_mailbox (shared.c:95)
| |   ->65.31% (108,265,600B) 0x4B1E40: maildir_read_dir (maildir.c:731)
| |     ->65.31% (108,265,600B) 0x4B1FEA: maildir_mbox_open (maildir.c:1127)
| |       ->65.31% (108,265,600B) 0x431563: mx_mbox_open (mx.c:388)
| |         ->65.31% (108,265,600B) 0x43BCAC: change_folder_mailbox (dlg_index.c:696)
| |         | ->65.31% (108,265,600B) 0x44035F: op_main_change_folder (functions.c:1156)
| |         |   ->65.31% (108,265,600B) 0x4428F3: index_function_dispatcher (functions.c:3323)
| |         |     ->65.31% (108,265,600B) 0x43C782: dlg_index (dlg_index.c:1343)
| |         |       ->65.31% (108,265,600B) 0x40C7B6: main (main.c:1392)
| |         |         
| |         ->00.00% (0B) in 1+ places, all below ms_print's threshold (01.00%)
| |         
| ->32.65% (54,132,800B) 0x430350: mx_alloc_memory (mx.c:1261)
| | ->32.65% (54,132,800B) 0x4B6E46: maildir_move_to_mailbox (shared.c:95)
| |   ->32.65% (54,132,800B) 0x4B1E40: maildir_read_dir (maildir.c:731)
| |     ->32.65% (54,132,800B) 0x4B1FEA: maildir_mbox_open (maildir.c:1127)
| |       ->32.65% (54,132,800B) 0x431563: mx_mbox_open (mx.c:388)
| |         ->32.65% (54,132,800B) 0x43BCAC: change_folder_mailbox (dlg_index.c:696)
| |         | ->32.65% (54,132,800B) 0x44035F: op_main_change_folder (functions.c:1156)
| |         |   ->32.65% (54,132,800B) 0x4428F3: index_function_dispatcher (functions.c:3323)
| |         |     ->32.65% (54,132,800B) 0x43C782: dlg_index (dlg_index.c:1343)
| |         |       ->32.65% (54,132,800B) 0x40C7B6: main (main.c:1392)

I'm also not sure whether having 470MB allocated when viewing a ~100K mailbox is fine or not, but if needed I can share a massif breakdown of that too.

How often does this happen?

  • Always

When did it start to happen?

  • When I upgraded
    I first observed it with version 20231006, but didn't confirm whether that's really the first version it occurred in.

NeoMutt Version

NeoMutt 20231023
Copyright (C) 1996-2022 Michael R. Elkins and others.
NeoMutt comes with ABSOLUTELY NO WARRANTY; for details type 'neomutt -vv'.
NeoMutt is free software, and you are welcome to redistribute it
under certain conditions; type 'neomutt -vv' for details.

System: Linux 6.5.5-1 (x86_64)
ncurses: ncurses 6.4.20231001 (compiled with 6.4.20231001)
libidn2: 2.3.4 (compiled with 2.3.4)
GPGME: 1.23.0
OpenSSL: OpenSSL 3.1.4 24 Oct 2023
PCRE2: 10.42 2022-12-11
storage: bdb, lmdb
compression: lz4, zlib, zstd

Configure options: {LDFLAGS=-Wl,--as-needed -Wl,--no-copy-dt-needed-entries -Wl,-z,relro -Wl,-z,combreloc } {CFLAGS=-O2 -fwrapv -pipe -Wformat -Werror=format-security -gdwarf-4 -fno-debug-types-section -fvar-tracking-assignments -g2 -Wp,-D_FORTIFY_SOURCE=2 -fstack-protector-strong --param=ssp-buffer-size=4 -Werror=trampolines -fPIC -march=x86-64-v3 -mtune=generic -gdwarf-4 -fno-debug-types-section -fvar-tracking-assignments -g2 } {CXXFLAGS=-O2 -fwrapv -pipe -Wformat -Werror=format-security -gdwarf-4 -fno-debug-types-section -fvar-tracking-assignments -g2 -Wp,-D_FORTIFY_SOURCE=2 -fstack-protector-strong --param=ssp-buffer-size=4 -Werror=trampolines -fPIC -march=x86-64-v3 -mtune=generic -gdwarf-4 -fno-debug-types-section -fvar-tracking-assignments -g2 } {FFLAGS=-O2 -fwrapv -pipe -Wformat -Werror=format-security -gdwarf-4 -fno-debug-types-section -fvar-tracking-assignments -g2 -Wp,-D_FORTIFY_SOURCE=2 -fstack-protector-strong --param=ssp-buffer-size=4 -Werror=trampolines -fPIC -march=x86-64-v3 -mtune=generic -gdwarf-4 -fno-debug-types-section -fvar-tracking-assignments -g2 } {FCFLAGS=-O2 -fwrapv -pipe -Wformat -Werror=format-security -gdwarf-4 -fno-debug-types-section -fvar-tracking-assignments -g2 -Wp,-D_FORTIFY_SOURCE=2 -fstack-protector-strong --param=ssp-buffer-size=4 -Werror=trampolines -fPIC -march=x86-64-v3 -mtune=generic -gdwarf-4 -fno-debug-types-section -fvar-tracking-assignments -g2 } CPPFLAGS= CC=x86_64-pld-linux-gcc CXX=x86_64-pld-linux-g++ --host=x86_64-pld-linux --build=x86_64-pld-linux --prefix=/usr --disable-debug --gpgme --bdb --with-bdb=/usr --lmdb --lua --lz4 --with-mailpath=/var/mail --mixmaster --pcre2 --sasl --ssl --zlib --zstd

Compilation CFLAGS: -O2 -fwrapv -pipe -Wformat -Werror=format-security -gdwarf-4 -fno-debug-types-section -fvar-tracking-assignments -g2 -Wp,-D_FORTIFY_SOURCE=2 -fstack-protector-strong --param=ssp-buffer-size=4 -Werror=trampolines -fPIC -march=x86-64-v3 -mtune=generic -gdwarf-4 -fno-debug-types-section -fvar-tracking-assignments -g2  -fno-delete-null-pointer-checks -D_ALL_SOURCE=1 -D_GNU_SOURCE=1 -D__EXTENSIONS__ -D_XOPEN_SOURCE_EXTENDED -I/usr/include/lua5.4 -I/usr/include -DNCURSES_WIDECHAR -I/include -I/usr/include/ -I/usr/include -O2

Default options:
  +attach_headers_color +compose_to_sender +compress +cond_date +debug 
  +encrypt_to_self +forgotten_attachments +forwref +ifdef +imap +index_color 
  +initials +limit_current_thread +multiple_fcc +nested_if +new_mail +nntp +pop 
  +progress +quasi_delete +regcomp +reply_with_xorig +sensible_browser +sidebar 
  +skip_quoted +smtp +status_color +timeout +tls_sni +trash 

Compile options:
  -autocrypt +fcntl -flock -fmemopen +futimens +getaddrinfo -gnutls +gpgme 
  -gsasl -gss +hcache -homespool +idn +inotify -locales_hack +lua +mixmaster 
  +nls -notmuch +openssl +pcre2 +pgp +sasl +smime -sqlite +sun_attachment 
  +truecolor 

MAILPATH="/var/mail"
MIXMASTER="mixmaster"
PKGDATADIR="/usr/share/neomutt"
SENDMAIL="/usr/sbin/sendmail"
SYSCONFDIR="/etc"

To learn more about NeoMutt, visit: https://neomutt.org
If you find a bug in NeoMutt, please raise an issue at:
    https://github.com/neomutt/neomutt/issues
or send an email to: <neomutt-devel@neomutt.org>
jpalus added the type:bug label on Oct 28, 2023
flatcap commented Oct 28, 2023

I'm not sure that NeoMutt is leaking, but it does seem to be using quite a lot of memory.

When you open NeoMutt, obviously it allocates memory for all the mailboxes in the sidebar and all the emails in the open mailbox.
Even mailboxes that aren't open take up memory.

I did a quick test whilst watching the memory in top (low-tech, I know).

I created two macros:

macro index <f5> "<change-folder>-<enter>"
macro index <f6> "<f5><f5><f5><f5><f5><f5><f5><f5><f5><f5>"

Then, I opened a small folder, then opened a very large folder.
Changing folder to - means open the previous folder.

I hit <f6> five times and waited.
That should open each folder 25 times.

The memory usage fluctuated as folders were opened and closed, but crucially it didn't creep upwards.
At least, not by any obvious amount.


I'll leave this open and keep thinking

jpalus commented Oct 28, 2023

Maybe some more data to compare:

Memory use after opening only the initial small mailbox, without opening the big one, is negligible: less than 2MB.

Memory use after opening small mx -> big mx -> small mx:

99.45% (34,998,419B) (heap allocation functions) malloc/new/new[], --alloc-fns, etc.
->92.71% (32,624,676B) 0x518404: mutt_mem_realloc (memory.c:131)
| ->61.58% (21,669,400B) 0x43033F: mx_alloc_memory (mx.c:1260)
| | ->61.58% (21,669,400B) 0x4B6E46: maildir_move_to_mailbox (shared.c:95)
| |   ->61.58% (21,669,400B) 0x4B1E40: maildir_read_dir (maildir.c:731)
| |     ->61.58% (21,669,400B) 0x4B1FEA: maildir_mbox_open (maildir.c:1127)
| |     | ->61.58% (21,669,400B) 0x431563: mx_mbox_open (mx.c:388)
| |     |   ->61.58% (21,669,400B) 0x43BCAC: change_folder_mailbox (dlg_index.c:696)
| |     |   | ->61.58% (21,669,400B) 0x44035F: op_main_change_folder (functions.c:1156)
| |     |   |   ->61.58% (21,669,400B) 0x4428F3: index_function_dispatcher (functions.c:3323)
| |     |   |     ->61.58% (21,669,400B) 0x43C782: dlg_index (dlg_index.c:1343)
| |     |   |       ->61.58% (21,669,400B) 0x40C7B6: main (main.c:1392)
| |     |   |         
| |     |   ->00.00% (0B) in 1+ places, all below ms_print's threshold (01.00%)
| |     |   
| |     ->00.00% (0B) in 1+ places, all below ms_print's threshold (01.00%)
| |     
| ->30.79% (10,834,700B) 0x430350: mx_alloc_memory (mx.c:1261)
| | ->30.79% (10,834,700B) 0x4B6E46: maildir_move_to_mailbox (shared.c:95)
| |   ->30.79% (10,834,700B) 0x4B1E40: maildir_read_dir (maildir.c:731)
| |     ->30.79% (10,834,700B) 0x4B1FEA: maildir_mbox_open (maildir.c:1127)
| |     | ->30.79% (10,834,700B) 0x431563: mx_mbox_open (mx.c:388)
| |     |   ->30.79% (10,834,700B) 0x43BCAC: change_folder_mailbox (dlg_index.c:696)
| |     |   | ->30.79% (10,834,700B) 0x44035F: op_main_change_folder (functions.c:1156)
| |     |   |   ->30.79% (10,834,700B) 0x4428F3: index_function_dispatcher (functions.c:3323)
| |     |   |     ->30.79% (10,834,700B) 0x43C782: dlg_index (dlg_index.c:1343)
| |     |   |       ->30.79% (10,834,700B) 0x40C7B6: main (main.c:1392)
| |     |   |         
| |     |   ->00.00% (0B) in 1+ places, all below ms_print's threshold (01.00%)
| |     |   
| |     ->00.00% (0B) in 1+ places, all below ms_print's threshold (01.00%)
| |     
| ->00.34% (120,576B) in 1+ places, all below ms_print's threshold (01.00%)

Memory use after opening small mx -> big mx -> big mx -> small mx:

99.71% (67,434,423B) (heap allocation functions) malloc/new/new[], --alloc-fns, etc.
->96.26% (65,101,648B) 0x518404: mutt_mem_realloc (memory.c:131)
| ->64.05% (43,320,800B) 0x43033F: mx_alloc_memory (mx.c:1260)
| | ->64.05% (43,320,800B) 0x4B6E46: maildir_move_to_mailbox (shared.c:95)
| |   ->64.05% (43,320,800B) 0x4B1E40: maildir_read_dir (maildir.c:731)
| |     ->64.05% (43,320,800B) 0x4B1FEA: maildir_mbox_open (maildir.c:1127)
| |     | ->64.05% (43,320,800B) 0x431563: mx_mbox_open (mx.c:388)
| |     |   ->64.05% (43,320,800B) 0x43BCAC: change_folder_mailbox (dlg_index.c:696)
| |     |   | ->64.05% (43,320,800B) 0x44035F: op_main_change_folder (functions.c:1156)
| |     |   |   ->64.05% (43,320,800B) 0x4428F3: index_function_dispatcher (functions.c:3323)
| |     |   |     ->64.05% (43,320,800B) 0x43C782: dlg_index (dlg_index.c:1343)
| |     |   |       ->64.05% (43,320,800B) 0x40C7B6: main (main.c:1392)
| |     |   |         
| |     |   ->00.00% (0B) in 1+ places, all below ms_print's threshold (01.00%)
| |     |   
| |     ->00.00% (0B) in 1+ places, all below ms_print's threshold (01.00%)
| |     
| ->32.03% (21,660,400B) 0x430350: mx_alloc_memory (mx.c:1261)
| | ->32.03% (21,660,400B) 0x4B6E46: maildir_move_to_mailbox (shared.c:95)
| |   ->32.03% (21,660,400B) 0x4B1E40: maildir_read_dir (maildir.c:731)
| |     ->32.03% (21,660,400B) 0x4B1FEA: maildir_mbox_open (maildir.c:1127)
| |     | ->32.03% (21,660,400B) 0x431563: mx_mbox_open (mx.c:388)
| |     |   ->32.03% (21,660,400B) 0x43BCAC: change_folder_mailbox (dlg_index.c:696)
| |     |   | ->32.03% (21,660,400B) 0x44035F: op_main_change_folder (functions.c:1156)
| |     |   |   ->32.03% (21,660,400B) 0x4428F3: index_function_dispatcher (functions.c:3323)
| |     |   |     ->32.03% (21,660,400B) 0x43C782: dlg_index (dlg_index.c:1343)
| |     |   |       ->32.03% (21,660,400B) 0x40C7B6: main (main.c:1392)

So it seems pretty linear, with ~30MB more allocated per big mx open, and it matches pretty well the 157MB allocation after opening the big mx 5 times shown in the initial comment.

flatcap commented Oct 28, 2023

OK, thanks.

If you've got the time, could you please build with AddressSanitizer?
It tracks all memory allocations and frees.

If there are any leaks, it prints out a report on exit telling us exactly where the allocations occurred.

jpalus commented Oct 28, 2023

I don't think ASan would show anything different from what massif already does.

The reason I used valgrind's massif in the first place is that valgrind's memcheck does not show any leaks, so my assumption was that the memory ultimately gets freed, but only at exit. That's why I'm monitoring runtime memory use rather than memory left unfreed at exit.

Anyway I found the issue:

neomutt/mx.c

Lines 1245 to 1249 in fa83255

const int grow = 25;
// Sanity checks
req_size = MAX(req_size, m->email_max);
req_size = ROUND_UP(req_size + 1, grow);

This logic is wrong as far as I can tell. The calculated, rounded req_size is assigned to m->email_max, which will then always be selected by the MAX() above and increased by another 25 (due to the +1 and the rounding), so it keeps growing by 25 with each call. Since it is called once per email, and every call allocates 25 more of sizeof(void*) + sizeof(int), which on my machine happens to be 12 bytes, the 108,260 emails in my big mailbox give 108260 * 25 * 12 = 32,478,000 bytes. The 30MB leak that I'm seeing.

@jpalus
Copy link
Contributor Author

jpalus commented Oct 28, 2023

I guess the growth should occur only if the original req_size (as passed into the function) is higher than m->email_max, not unconditionally.

@flatcap
Copy link
Member

flatcap commented Oct 28, 2023

Damn! You're good!

Thanks for the debugging.
I'll create a fix after I've done my chores -- lots of things need merging.

jpalus added a commit to jpalus/neomutt that referenced this issue Oct 29, 2023
jpalus commented Oct 29, 2023

Made an attempt to fix the issue in #4089.

Memory use in the situation from the description (small mx -> 5x big mx -> small mx):

  • before fix: ~157MB
  • after fix: 3.5MB

Although memory use while the big mx is open is possibly still concerning: 500MB. To be analyzed independently, though.

I will create one more PR for reducing allocation overhead a little.

flatcap commented Oct 30, 2023

Thanks for diving in; I'll have a play in a second.

when having big mx open is possibly still concerning: 500MB

Long ago, I had an idea to create a :memory page, like Firefox's about:memory.
I'd be very interested in knowing where it's all being used.

flatcap pushed a commit to jpalus/neomutt that referenced this issue Oct 30, 2023
Fixes neomutt#4087

- Fix overallocation of memory
- Reduce number of re-allocations

Reduce mx allocation overhead by gradually increasing grow step

so far memory allocated for emails grew by capacity of 25 entries per
each call. for big mailboxes this causes quite a lot of reallocations.
change the logic to initially allocate 25 entries while further
expansions double current capacity. cap number of new entries to 1000
though.
flatcap linked a pull request on Oct 30, 2023 that will close this issue
flatcap pushed a commit that referenced this issue Oct 30, 2023
Fixes #4087