Too much RAM consumption on specific ARM hardware (odroid-hc2 from hardkernel) #893
Comments
@MHCraftbeer I also do not use Docker myself. @nrandon, @jkt628, @pauliacomi, @mnagaku - any suggestions / ideas here to diagnose / troubleshoot?
Thank you very much for your reply!
I hope this helps?
@MHCraftbeer It also looks like it is not doing much. You may have to increase the verbosity (update the Docker init scripts to include the relevant option). The images you have posted are 'interesting' but do not help, as they only show the Docker process and not the processes inside the container. You most likely need to log into the Docker container itself, whilst the client is running, and perform actions from inside the container to see what the memory utilisation is actually doing and what is consuming it. This will most likely require you to rebuild the Docker container so that you can use something like valgrind (or similar) to inspect the running onedrive client inside Docker. I have no idea, however, whether this is going to be possible given your limited hardware resources. What I will do, however, is get a similar file set organised and run it in monitor mode, so that you at least have some comparison available from a resource-usage perspective.
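The advice above can be sketched roughly as follows; the container name `onedrive` is an assumption, so substitute whatever `docker ps` reports for your setup:

```shell
# Open a shell inside the running container (container name assumed).
docker exec -it onedrive /bin/sh

# Inside the container, look at per-process memory rather than the
# host-side Docker process:
top -o %MEM                             # sort processes by memory usage
grep -E 'Vm(RSS|Size)' /proc/1/status   # resident/virtual size of PID 1
```

From the host, `docker stats onedrive` also gives a rolling view of the container's total memory usage.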
Thanks for your reply! I will try to rebuild the container with verbose output enabled. It would be very interesting to see how much memory the client normally requires.
@MHCraftbeer Very lightweight thus far.
@MHCraftbeer
@abraunegg After 2 hours:
It started 2 hours ago with ~50 MB and is now at ~200 MB. Is it possible that all network traffic gets stuck in RAM?
@abraunegg It "manages temporary filesystems across reboots, to decrease writes on permanent storage. This allows the installation of OMV on [...] SD cards". Could this be the issue?
@abraunegg
@abraunegg
top - 16:16:10 up 5 days, 14:09, 3 users, load average: 1.83, 0.67, 0.50
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
@abraunegg With a few minutes between 'Applying changes ...' and 'sync complete', the used RAM increases by ~30 MB. Any ideas?
Sorry - no idea. Best ask openmediavault.
@MHCraftbeer
@abraunegg do you still have the client running? How much RAM does it use now, after >10 hours?
At this point, no. I will wait for my upload to complete and see what happens during the same stage for a similar data set.
I started the client at ~7pm my time; it has now been running for ~11 hrs:
@abraunegg
Update: The CPU is now also >50% occupied.
@MHCraftbeer
I will wait to see how my client behaves once all the data is uploaded, and whether I see a similar 30 MB memory increase per cycle. If I do not see any similar behaviour, the avenues of potential action for you are:
@pauliacomi - As the original author of
@abraunegg thanks for the roadmap and all the effort.
I would test without Docker first, to confirm what the application will do before then testing with Docker - do not jump straight to testing with Docker if you don't know how it is going to act / operate.
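A minimal way to follow that advice: run the client directly on the host and sample its resident set size over time, so any growth can be attributed to the application itself rather than to the container. The binary path and flags below are assumptions based on the 2.4.x client:

```shell
# Run the client on the host in monitor mode (path and flags assumed).
./onedrive --monitor --verbose &
CLIENT_PID=$!

# Sample the resident set size (in KB) every 5 minutes while it runs.
while kill -0 "$CLIENT_PID" 2>/dev/null; do
    ps -o rss= -p "$CLIENT_PID"
    sleep 300
done
```

Comparing these samples against the in-container numbers would show whether the growth is Docker-specific.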
Hi @MHCraftbeer, just a quick comment regarding my setup:
My docker run command is … so I can use a previously-created config file with some slight changes from the defaults. I also had no problems with the initial sync; everything hummed along nicely until it finished after a few days. I've not seen any memory leaks. Admittedly, I have not updated the client since I started using Docker (if it isn't broke...), so if this is related to any recent changes to the codebase I would not know. I would definitely go with what @abraunegg suggested and try to remove as many variables from the equation as possible:
Hope it helps!
@MHCraftbeer After another 10 sync processes: I am not seeing any memory increase, or anything even close to a 30 MB increase per cycle. I am going to mark this as a local environment issue & unable to replicate.
@pauliacomi thank you very much for this insight. My docker build command was … My docker run command was … Before I headed off to work I ran the container using your settings. I'll report in the afternoon and start removing variables when I have more time.
@abraunegg thank you very much, it works!
It seems I don't have any sqlite library installed - since I did not build the client yet? Should I install it?
@MHCraftbeer
The next aspect is to run 'valgrind' against a debug version of the client. To do this:
Once 'valgrind' has fully completed, provide the txt file via email.
You need to refer to https://github.com/abraunegg/onedrive/blob/master/docs/INSTALL.md and install the dependencies for your OS as per these requirements. Whilst you are at it, install the debug symbols too. |
@abraunegg sorry, up to now I was fully occupied with the previous tasks:
Tomorrow I will build the client from the current version.
@abraunegg I am trying to compile the client on my armhf Debian Buster server. First, the 'Dependencies: Raspbian (ARMHF)' steps, which work perfectly fine; and second, the 'ARMHF Architecture' steps, which result in an error regarding the compiler: Do you know why this happens? Am I missing an important step?
I was able to check these two tasks by running version 2.4.1 dockerized with flashmemory removed. |
Your system is missing a symbolic link to this file somewhere. To fix you may need to do the following: |
OK, but what does it do without Docker? Hopefully you can compile and get that issue sorted. What does it do on other hardware that you own - x86_64 / i686?
Unfortunately it does not find "libtinfo.so"; it only finds "libtinfo.so.#" files. What should I do?
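For reference, the missing-symlink fix usually amounts to pointing an unversioned `libtinfo.so` at whichever versioned file is actually present; the directory and version number below are assumptions, so check the `find` output first:

```shell
# Locate the versioned library files that do exist.
find /lib /usr/lib -name 'libtinfo.so*' 2>/dev/null

# Create the unversioned symlink the linker expects (version and
# directory are assumptions; match them to the find output above).
sudo ln -s /lib/arm-linux-gnueabihf/libtinfo.so.6 /usr/lib/libtinfo.so
```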
@MHCraftbeer
My current status. Home server: Intel(R) Core(TM) i7-3635QM CPU @ 2.40GHz. For context, I handle a mix of small files (a few KB) and large files (a few GB; maybe this is the reason for the high cache).
@LordPato thank you very much for the information. Your cache is indeed very high, but at least the used memory is in the range of everyone except me. @abraunegg thank you so much for your help so far. However, over the last few days it became clear to me that I lack the knowledge and skills to pursue this issue further.
While that might be true, I am not willing to sacrifice much more RAM for synchronizing my files. For now my workaround is to increase the full sync interval, since it is mainly responsible for the memory leak. I know this result is not satisfying at all (especially for me), but I am afraid more is not possible for me.
In my installation there are no real memory issues, although its memory use gradually increases over time (see below). Some 20 minutes later: Hope this helps...
Please can you validate the following PR to further diagnose your issue: You will then need to either 'install' or run the updated application binary from the PR folder to validate the fix, or build a specific Docker instance using this PR version. It would be great to obtain some feedback as to whether this PR improves your situation or not. @LordPato
Sorry, I'm not running ARM. I'm using an Intel Core i7.
@LordPato
@MHCraftbeer
@abraunegg I ran the test PR #910 with fresh Docker images, using this config file: This is the RAM log I made using a cron job: It restarted a few minutes ago on its own, since the assigned memory was occupied. Unfortunately there is still no improvement. However, thank you so much for your effort @abraunegg!
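For anyone reproducing this, the cron-based RAM log could be a small script along these lines; the container name, log path, and schedule are all assumptions, not details from the thread:

```shell
#!/bin/sh
# ramlog.sh - append a timestamped memory sample for the container.
# Container name and log path are illustrative only.
{
    date +'%F %T'
    docker stats --no-stream --format '{{.Name}} {{.MemUsage}}' onedrive
} >> /var/log/onedrive-ram.log

# Example crontab entry, sampling every 5 minutes:
# */5 * * * * /usr/local/bin/ramlog.sh
```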
Hmm, that is strange. I have been running onedrive from the respective PR for quite a few hours now, and there is an increase in memory (as reported by onedrive itself): These are all the different numbers I see in my log, besides intermediate peaks which occur during the periodic sync but drop back to the previous value immediately. This is all in monitor mode. Systemd reports … There is some increase when files are downloaded, though. So something seems strange when running in a Docker image.
I can't think of any good explanation for why running in a container would matter, except for differences in libraries from the base image. Alpine for example uses musl instead of glibc, and so has a completely different malloc implementation, etc. You could well expect to see differences in behavior from a program in an alpine-based image and one running outside of a container on glibc. If both the image and OS being compared are glibc based, there could still be differences from version to version or in any other libraries used. |
@tsarna
Given that other folk who use Docker (on any architecture) are not seeing this sort of behaviour, I think there is something 100% environmental on that system which is the contributing factor. Also, chatting with folk off here, another potential reason could be OS corruption - given the issues around not being able to run the application outside of Docker, this is a possibility. I don't have ARM hardware to test, but would 100% like to get to the bottom of this and have folk test as much as possible to provide more data points.
@MHCraftbeer No other memory issues are being raised - so I would like to understand what you are seeing, otherwise I will close this issue ticket as a local environment issue that cannot be replicated. Please can you advise.
Closing issue.
This issue has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.







Bug Report Details
Describe the bug
Memory consumption in the Docker container keeps growing until synchronization no longer works.
Application and Operating System Details:
Release-Date: 2019-02-06
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtmp rtsp scp sftp smb smbs smtp smtps telnet tftp
Features: AsynchDNS IDN IPv6 Largefile GSS-API Kerberos SPNEGO NTLM NTLM_WB SSL libz TLS-SRP HTTP2 UnixSockets HTTPS-proxy PSL
To Reproduce
Steps to reproduce the behavior if not causing an application crash:
Additional context
The odroid-hc2 only has 2 GB of memory. After a few hours the onedrive Docker container uses 500 MB of RAM. After 24 hours it uses 1 GB, and it keeps slowly increasing until synchronization no longer works. In this time only a few small files have been synchronized.
I have 40 GB of OneDrive space with 4,000 directories and 30,000 files. Synchronization works until too much RAM is used.
Any ideas?
I tried to limit the usable memory of the Docker container with -m, but this only shortened the time until synchronization stopped working.
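When capping memory with `-m`, it is worth confirming afterwards whether the kernel actually OOM-killed the container, which distinguishes "the client grew past the cap" from other failure modes. Container and image names here are illustrative:

```shell
# Start the container with a 500 MB memory cap (names are illustrative).
docker run -d --name onedrive -m 500m driveone/onedrive

# After a failure, check whether the kernel OOM-killed the container.
docker inspect --format '{{.State.OOMKilled}}' onedrive
```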