Thumbnail generator creates too many processes #1369
Comments
Those aren't threads but processes.
Ok, thanks. Nevertheless there are too many of them.
What version do you use? By design, the web front-end should send thumbnail generation requests one after another. There should not be too many concurrent processes.
I saw the problem for the first time in 5.1.3, and it persisted in all later versions I used: 5.1.4, 6.0.3 and now 6.0.4. For now I have turned off thumbnail creation completely in ../conf/seahub_settings.py using:
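(The exact line was not quoted above; based on Seafile's documented Seahub options, it was presumably something like the following in seahub_settings.py:)

```python
# Presumed setting: ENABLE_THUMBNAIL is Seahub's documented switch for
# thumbnail generation; setting it to False disables thumbnails entirely.
ENABLE_THUMBNAIL = False
```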
So I did some further testing, and it turns out that it's not the generation of thumbnails itself but the loading of existing thumbnails which kills my machine. What I did to test this:
By the way, I only used a very tiny thumbnail size: THUMBNAIL_DEFAULT_SIZE = 24
Would it be best to just limit the Nginx connections then?
I think so.
I have the same problem with my small server (Centrino-based CPU). Overall Seafile runs fine on this hardware, but when opening folders with a lot of images the thumbnail creation processes consume all the memory. Anyway, this is my Nginx configuration to work around the problem for now:
The burst value has to be very high, otherwise Nginx will drop a lot of requests, resulting in missing thumbnails.
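(The configuration itself was not quoted. A sketch of the limit_req approach described in this comment could look like the following; the zone name, rate, burst value, and upstream address are illustrative, not the poster's actual settings:)

```nginx
# Illustrative only: throttle thumbnail requests per client IP so Seahub
# is not hit with hundreds of concurrent thumbnail requests at once.
limit_req_zone $binary_remote_addr zone=thumb:10m rate=5r/s;

server {
    listen 80;

    location /thumbnail/ {
        # A high burst queues (delays) excess requests instead of
        # rejecting them, which would leave thumbnails missing.
        limit_req zone=thumb burst=200;
        proxy_pass http://127.0.0.1:8000;
    }
}
```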
Then it looks like the directive doesn't have any effect. It would be better to handle request limits via JS. A good solution would be to add a setting where the admin can define how many parallel requests each client is allowed to make. (To make sure an attacker cannot simply ignore these limits to run DoS attacks, one could additionally add request limits in Nginx without affecting normal users; most users won't need this extra step.) The default could be something like 10. The Raspberry Pi version could be packaged with a lower default limit (e.g. 3).
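(The client-side limit proposed in this comment could be sketched roughly as below. This is a hypothetical helper, not Seahub's actual front-end code; the name `createLimiter` and the limit of 10 are illustrative:)

```javascript
// Hypothetical sketch: run at most `limit` async tasks in parallel,
// queueing the rest until a running task finishes.
function createLimiter(limit) {
  let active = 0;
  const queue = [];

  function pump() {
    // Start queued tasks while there is free capacity.
    while (active < limit && queue.length > 0) {
      const { task, resolve, reject } = queue.shift();
      active++;
      Promise.resolve()
        .then(task)
        .then(resolve, reject)
        .finally(() => { active--; pump(); });
    }
  }

  // Returns a promise for the task's result; the task itself only
  // starts when a slot is free.
  return function run(task) {
    return new Promise((resolve, reject) => {
      queue.push({ task, resolve, reject });
      pump();
    });
  };
}

// Usage: wrap each thumbnail request, e.g.
//   const limited = createLimiter(10);
//   limited(() => fetch(thumbnailUrl));
```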
It has an effect: it limits the request processing rate per second. Otherwise an unlimited number of requests passes through to Seahub, and it starts a thumbnail creation process for every request.
Ah okay, I see (reading the documentation is often helpful ;) I misinterpreted the value without doing so: https://nginx.org/en/docs/http/ngx_http_limit_req_module.html#limit_req). But then the Nginx solution looks like a good one to me.
Hello,
when opening a library/folder containing many pictures (several hundred), the Seahub thumbnail generator creates too many processes and makes my tiny server run out of memory. I use an ARM A20 machine with 1 GB RAM running Seafile for Raspberry Pi.
When opening a folder with many pictures, first I see lots of processes in top:
top - 11:50:22 up 24 min, 1 user, load average: 20,49, 6,27, 2,75
Tasks: 167 total, 47 running, 120 sleeping, 0 stopped, 0 zombie
%Cpu(s): 85,3 us, 14,1 sy, 0,0 ni, 0,0 id, 0,0 wa, 0,0 hi, 0,6 si, 0,0 st
KiB Mem: 1008516 total, 957004 used, 51512 free, 100684 buffers
KiB Swap: 0 total, 0 used, 0 free. 90960 cached Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1906 seafile+ 20 0 502420 7564 2824 S 9,6 0,8 0:28.33 seaf-server
986 mysql 20 0 328716 60628 5380 S 4,8 6,0 0:34.45 mysqld
2150 seafile+ 20 0 48192 25248 1852 R 4,5 2,5 0:00.99 python2.7
2157 seafile+ 20 0 47936 25036 1852 R 4,5 2,5 0:00.92 python2.7
2158 seafile+ 20 0 47936 25224 1852 R 4,5 2,5 0:00.97 python2.7
2195 seafile+ 20 0 47308 24664 1744 R 4,5 2,4 0:00.81 python2.7
2107 seafile+ 20 0 52168 29348 2000 R 4,1 2,9 0:02.23 python2.7
2133 seafile+ 20 0 47936 25092 1852 R 4,1 2,5 0:00.97 python2.7
2137 seafile+ 20 0 47936 25116 1852 R 4,1 2,5 0:00.98 python2.7
2138 seafile+ 20 0 48192 25364 1852 R 4,1 2,5 0:01.02 python2.7
... many more ...
2171 seafile+ 20 0 47576 24888 1792 R 3,7 2,5 0:00.85 python2.7
2172 seafile+ 20 0 47308 24616 1740 R 3,7 2,4 0:00.84 python2.7
2173 seafile+ 20 0 47936 25040 1852 R 3,7 2,5 0:00.89 python2.7
2176 seafile+ 20 0 47308 24552 1692 R 3,7 2,4 0:00.85 python2.7
2182 seafile+ 20 0 47572 24892 1796 R 3,7 2,5 0:00.84 python2.7
2183 seafile+ 20 0 47680 24904 1808 R 3,7 2,5 0:00.83 python2.7
2186 seafile+ 20 0 47576 24896 1792 R 3,7 2,5 0:00.82 python2.7
2187 root 20 0 84180 15608 8564 R 3,7 1,5 0:00.76 horde-alarms
2190 seafile+ 20 0 47572 24888 1792 R 3,7 2,5 0:00.82 python2.7
After less than a minute, I see in syslog that my server runs out of memory and starts to kill processes:
[ 1510.748932] lowmemorykiller: Killing 'mysqld' (1004), adj 0,
[ 1510.748941] to free 56168kB on behalf of 'python2.7' (2195) because
[ 1510.748946] cache 6084kB is below limit 6144kB for oom_score_adj 0
[ 1510.748951] Free memory is -2644kB above reserved
...
The weirdest thing about this is that it doesn't happen in grid mode; there I see 5 processes at most. It runs slowly but doesn't crash. But I cannot prevent my users from using list mode.
I can reproduce this issue any time and provide more output, just tell me what you need.
Any help is appreciated!