exception: expected result >= minBytes; Premature EOF #1130
Approx. total numbers:
Non-default configuration options:
Thanks for the expansive report! I see that the error has occurred with other Sandstorm users, in other applications. I think the grain is doing something bad to the Sandstorm environment. I will keep you posted. See also: sandstorm-io/sandstorm#3022
Thank you! Glad to learn it's a known issue.
(Sandstorm developer here -- but I don't work on Firefly specifically.) FWIW, the "Premature EOF" error usually indicates that the application server crashed (and hence Sandstorm's connection to the server received EOF). There should be more information in the debug log (click the monitor icon in the Sandstorm top bar). You may need to restart the grain (restart button in the top bar) before you can open the debug log. One possibility is that the system ran out of RAM and killed the app.
It fits @julianfoad's description: applying rules is a memory-intensive routine.
This grain shows the same exception immediately every time I try to access it. I have rebooted the server machine. There is 2 GB of free RAM and 9 GB free disk on the server's root filesystem and my other (newer) Firefly grain is working fine. The "Restart App" button in this grain says
The "Debug log" button says
The "Download Backup" button successfully gives me an archive, in which the "log" file ends with:
Does that mean some internal filesystem within the grain is full? The "/data/storage/cache/" directory within the archive is about 11 MB, "/data/lib/mysql/" is 33 MB, and the whole archive is 46 MB (uncompressed sizes).
I am not sure yet, but it is helpful! Thanks! If you could:
Here is the grain's log file: 'sandbox/storage/framework/cache/data/c9/be' is a folder containing one file, 'c9be39155cf2ec6149991f9817f040831b61d4d9', containing 17 bytes of text:
And here is one of the two log files in '/data/storage/logs/'. The other is named 'ff3-fpm-fcgi-2018-01-12.log' (700 KB) and contains details of my imported transactions, so I am not attaching it. It contains a number of different error messages, but none that look directly related to the out-of-space exception. The last 20 or so entries are all:
This is very helpful, thanks!
This indicates that you've reached the per-user limit on inotify watches. This is weird. On my system, the default limit is 65536. The grain supervisor uses one inotify watch per directory inside the grain's private storage. Surely there are not over 65536 directories in the grain storage? When you download the backup zip, does it contain an excessively large number of directories? You can check the limit on your system with:
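On Linux the current limit is exposed through procfs; a minimal way to read it (standard kernel interface, not specific to Sandstorm):

```shell
# Read the per-user inotify watch limit from procfs
cat /proc/sys/fs/inotify/max_user_watches
```

The same value is available via `sysctl fs.inotify.max_user_watches`.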
Looking at a backup from a blank Firefly instance I created, I see it does contain quite a few directories... but "quite a few" is 68. Does this manage to grow by 1000x somehow?
Mmmm, there is a cache routine that could hit that limit, assuming Laravel (the underlying framework) is a fan of creating directories. I can fix that. @Kentov, could I use memcached on a sandstorm grain? In addition to the fix of course. |
max_user_watches is 8192 (system default, not manually set, Ubuntu 16.04). How many watches are currently in use (other services are running, including Sandstorm with a small new Firefly grain open)?
How many dirs are in the grain backup zip? So I tried to raise the limit:
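For reference, the limit can be raised temporarily with sysctl or persistently via /etc/sysctl.conf; a sketch, using 65536 as an illustrative value (the exact value used in the thread is not recorded here):

```
# Temporary, until reboot:
#   sudo sysctl fs.inotify.max_user_watches=65536
# Persistent: add this line to /etc/sysctl.conf, then run `sudo sysctl -p`:
fs.inotify.max_user_watches=65536
```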
Bingo! Now the grain loads and works. Hurray! I have got my data back. Thank you both.
That's curious. No additional watches are in use once the grain has loaded. Indeed, a backup archive downloaded now contains no excessively large number of directories. So it was a temporary use of lots of cache files.
Which I will take upon myself to fix for the next release. Thanks @julianfoad, @kentonv for debugging this!
Maybe lsof only lists the inotify file descriptors, not the inotify watch descriptors? |
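One way to count watches rather than descriptors is to sum the `inotify wd:` lines the kernel exposes in each file descriptor's fdinfo; a rough sketch (run as root to cover every process):

```shell
# Count inotify watches system-wide: each active watch appears as an
# 'inotify wd:...' line under /proc/<pid>/fdinfo/<fd>.
cat /proc/*/fdinfo/* 2>/dev/null | grep -c '^inotify'
```

By contrast, `lsof` shows only the inotify file descriptors themselves, which is consistent with the low count observed above.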
I am running Firefly III version 4.6.13 in self-hosted Sandstorm.
Description of my issue:
On attempting to view my Firefly-III grain in Sandstorm, the iframe now shows only this error message:
Closing and re-opening the grain shows the same error. I don't know how to proceed to debug or fix it.
Steps to reproduce
The last things I did leading up to this happening were:
Earlier I had created one rule in the default group and applied it, successfully.
I haven't tried to reproduce, yet.
Other important details (log files, system info):
The broken instance doesn't have any GUI to access debug log details.
I can successfully create a new instance within Sandstorm. The first time I clicked the version number in the right corner of the Firefly III home page in that new instance, I got:
but on a second attempt I got this:
Debug information generated at 2018-01-12 11:06:17 UTC for Firefly III version 4.6.13.
I am not familiar with debugging in Sandstorm nor PHP, but am a software developer and willing to learn it if you can guide me where to begin.