insufficient RAM available #98
Comments
Mh. The program may be trying to load an unexpectedly large image file.
To be a little bit more specific: last night it started to have this issue with JavaScript mode disabled. Then I enabled JavaScript mode and it did about another 100 images over the night, but then it happened again, and now I am not able to run it in either mode. Is there some logging somewhere I can check for this file, or something else?
You should basically have a minimum of 10 GB of swap (I needed it).
Wait a sec... you might've also added that to your Nextcloud configuration.
This issue/problem is new |
I'm not sure where this 10% limitation comes from; @marcelklehr might have some insights. EDIT: just noticed that it's trying to allocate only 806 MB, which is apparently much less than 10% of your free RAM.
Wait a sec, I'll make a video.
https://cloud.privacyy.ch/index.php/s/DE4ZSRx67k2BnXy

What... I don't see the memory error anymore.
I noticed that the "10% of free RAM" message is only a warning; the classifier just has to wait until the system releases/frees RAM. We should concentrate more on the error message. I don't know how it works internally, I'm just sharing my experience, so what I say might be completely wrong.
A memory leak is very possible. Still, that should not prevent the program from starting :/
@parsupo do you have any other log? Is it killed by the timeout? If so, it should be fixed in the next release |
I rather think that memory which was used once isn't being freed (like when you create a variable and never delete it; you really have to delete it). Wait... you invoke node with "-" instead of a filename, which means it reads everything from stdin (the terminal, a pipe, or whatever), and it's not clear what ends up in there. Might that be a problem? Are you sure that there's a function to clean up unused variables?
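For what it's worth: if the classifier uses @tensorflow/tfjs, tensor memory lives outside the JavaScript heap and is not garbage-collected, so every tensor has to be released explicitly with dispose() (or created inside tf.tidy()). A minimal sketch of the difference, modelling tensors with a plain registry instead of real tfjs objects (all names here are illustrative, not from the recognize code):

```javascript
// Toy "tensor" registry standing in for tfjs's internal bookkeeping.
const live = new Set();
function makeTensor(name) { live.add(name); return name; }
function dispose(t) { live.delete(t); }

// Leaky pattern: the tensor is never disposed, so it stays registered.
function classifyLeaky(img) {
  const t = makeTensor(img);
  return t.length;
}

// Clean pattern: dispose in finally, analogous to wrapping in tf.tidy().
function classifyClean(img) {
  const t = makeTensor(img);
  try { return t.length; } finally { dispose(t); }
}

['a.jpg', 'b.jpg'].forEach(classifyLeaky);
console.log(live.size); // 2 — both leaky tensors are still alive
['c.jpg', 'd.jpg'].forEach(classifyClean);
console.log(live.size); // still 2 — the clean path freed its tensors
```

If the real classifier misses a dispose() on even one intermediate tensor per image, memory grows linearly with the number of images processed, which would match a crash after a roughly fixed number of photos.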
Yeah... after 100 pictures there are like 7 (looked into htop).
@debian-user-france1 Can you try again with the latest version of recognize? |
Do you mean @parsupo ? |
@marcelklehr, alright, I'll make a backup of my server first and then I will update to the newest version. It might take a few days before I have some results.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. |
@marcelklehr It seems to be working again; it is now classifying photos.
In that case, I'm closing this for now :) Thanks for getting back to us!
Hi,
I am running recognize and it has classified about 30,000 photos so far. I have to start it using the occ command, but that is not an issue for me. This worked up until now. When I try to run it now, it gets killed after a minute. From the log I understand that it exceeds the allowed amount of RAM.
My system (a VM) has about 8 GB of RAM and an Intel quad-core CPU (i5). It is only running object recognition, and JavaScript mode is disabled.
The problem happens even with JavaScript mode enabled.
Classifier process output:

2021-10-08 11:02:48.254659: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-10-08 11:02:57.837144: W tensorflow/core/framework/cpu_allocator_impl.cc:80] Allocation of 806249910 exceeds 10% of free system memory.
2021-10-08 11:03:02.888040: W tensorflow/core/framework/cpu_allocator_impl.cc:80] Allocation of 3224999640 exceeds 10% of free system memory.
2021-10-08 11:03:03.950179: W tensorflow/core/framework/cpu_allocator_impl.cc:80] Allocation of 3224999640 exceeds 10% of free system memory.
2021-10-08 11:03:04.692891: W tensorflow/core/framework/cpu_allocator_impl.cc:80] Allocation of 3224999640 exceeds 10% of free system memory.
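A quick sanity check on those numbers (assuming the 806 MB allocation is a decoded uint8 RGB image buffer and the 3.2 GB allocations are its float32 conversion; that is an inference from the sizes, not from the recognize code):

```javascript
// Allocation sizes taken verbatim from the log above.
const uint8Bytes = 806249910;
const float32Bytes = 3224999640;

// float32 is 4 bytes per element, so converting a uint8 buffer
// element-for-element should exactly quadruple it.
console.log(float32Bytes / uint8Bytes); // 4

// Assuming 3 channels (RGB), that buffer corresponds to:
const megapixels = uint8Bytes / 3 / 1e6;
console.log(Math.round(megapixels)); // 269
```

A ~269-megapixel input would be an enormous image (or a panorama/scan), which fits the first comment's guess that one unexpectedly large file is what blows up the process.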
Before this issue happened, the task was killed after every ~1000 photos with a different memory allocation error:
Classifier process output:

============================
Hi there 👋. Looks like you are running TensorFlow.js in Node.js. To speed things up dramatically, install our node backend, which binds to TensorFlow C++, by running npm i @tensorflow/tfjs-node, or npm i @tensorflow/tfjs-node-gpu if you have CUDA. Then call require('@tensorflow/tfjs-node'); (-gpu suffix for CUDA) at the start of your program. Visit https://github.com/tensorflow/tfjs-node for more details.
============================
Error: maxMemoryUsageInMB limit exceeded by at least 6MB
    at requestMemoryAllocation (/var/www/nextcloud/apps/recognize/node_modules/jpeg-js/lib/decoder.js:1051:13)
    at prepareComponents (/var/www/nextcloud/apps/recognize/node_modules/jpeg-js/lib/decoder.js:601:13)
    at constructor.parse (/var/www/nextcloud/apps/recognize/node_modules/jpeg-js/lib/decoder.js:755:13)
    at Object.decode [as image/jpeg] (/var/www/nextcloud/apps/recognize/node_modules/jpeg-js/lib/decoder.js:1096:11)
    at Jimp.parseBitmap (/var/www/nextcloud/apps/recognize/node_modules/@jimp/core/dist/utils/image-bitmap.js:196:53)
    at Jimp.parseBitmap (/var/www/nextcloud/apps/recognize/node_modules/@jimp/core/dist/index.js:431:32)
    at /var/www/nextcloud/apps/recognize/node_modules/@jimp/core/dist/index.js:373:15
    at FSReqCallback.readFileAfterClose [as oncomplete]
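The stack trace points at requestMemoryAllocation in jpeg-js's decoder.js: the decoder keeps a running total of bytes it has allocated for the current image and throws once that total passes a configurable budget, rather than exhausting system RAM. A simplified sketch of that check (variable names paraphrased from the library, not copied):

```javascript
// Simplified model of jpeg-js's decoder memory budget.
// Assumption: 512 MB is the default limit; check the library for the real value.
let maxMemoryUsageBytes = 512 * 1024 * 1024;
let totalBytesAllocated = 0;

function requestMemoryAllocation(increaseAmount) {
  const total = totalBytesAllocated + increaseAmount;
  if (total > maxMemoryUsageBytes) {
    const exceededMB = Math.ceil((total - maxMemoryUsageBytes) / 1024 / 1024);
    throw new Error(`maxMemoryUsageInMB limit exceeded by at least ${exceededMB}MB`);
  }
  totalBytesAllocated = total;
}

// Decoding a sufficiently large photo blows the budget:
requestMemoryAllocation(400 * 1024 * 1024); // fine, under the limit
let failed = false;
try {
  requestMemoryAllocation(200 * 1024 * 1024); // 600 MB total > 512 MB limit
} catch (e) {
  failed = true;
  console.log(e.message); // maxMemoryUsageInMB limit exceeded by at least 88MB
}
```

If this reading is right, one oversized JPEG is enough to trigger the error; raising the limit via the decoder's maxMemoryUsageInMB option, or skipping oversized originals, would avoid this particular crash. Whether recognize exposes either knob is something the maintainers would have to confirm.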
Any idea on what is causing this issue?