
exception: expected result >= minBytes; Premature EOF #1130

Closed · julianfoad opened this issue Jan 12, 2018 · 16 comments
Assignee: JC5 · Labels: bug, fixed · Milestone: 4.7.0

@julianfoad commented Jan 12, 2018

I am running Firefly III version 4.6.13 in self-hosted Sandstorm.

Description of my issue:

On attempting to view my Firefly-III grain in Sandstorm, the iframe now shows only this error message:

Error: remote exception: expected result >= minBytes; Premature EOF
C++ location:(remote):??
type: failed

Closing and re-opening the grain just shows the same. I don't know how to proceed to debug or fix it.

Steps to reproduce

The last things I did leading up to this happening were:

  • create 3 more rules (making 4 in total) in the default group,
  • "apply the rules in the default group",
  • select just one account (containing ~6000 transactions) to apply them to,
  • saw progress followed by a "success" message,
  • clicked on some other Firefly UI element (probably "accounts" or "asset accounts").

Earlier I had created one rule in the default group and applied it, successfully.

I haven't tried to reproduce it yet.

Other important details (log files, system info):

The broken instance doesn't have any GUI to access debug log details.

I can successfully create a new instance within Sandstorm. The first time I clicked the version number in the corner of the Firefly III home page in that new instance, I got:

Whoops! An error occurred.
Unfortunately, this error was not recoverable. Firefly III broke. The error is:
file_get_contents(/opt/app/storage/logs/ff3-fpm-fcgi-2018-01-12.log): failed to open stream: No such file or directory

but on a second attempt I got this:

Debug information generated at 2018-01-12 11:06:17 UTC for Firefly III version 4.6.13.

Variable Content
FF version 4.6.13
App environment local
App debug mode false
App cache driver file
App logging info, syslog
PHP version 7.1.12-3+0~20180105140140.13+jessie~1.gbp691a82
Display errors Off
Error reporting ALL errors
Host Linux sandbox 4.4.0-104-generic #127-Ubuntu SMP Mon Dec 11 12:16:42 UTC 2017 x86_64
Interface fpm-fcgi
UserID 1
DB drivers mysql
Current driver mysql
Using Sandstorm? yes
Sandstorm anon? no
Is Docker (.env) false
Trusted proxies (.env)
User agent Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:57.0) Gecko/20100101 Firefox/57.0
Loaded extensions Core, date, libxml, openssl, pcre, zlib, filter, hash, Reflection, SPL, session, standard, cgi-fcgi, mysqlnd, PDO, xml, bcmath, calendar, ctype, curl, dom, mbstring, fileinfo, ftp, gd, gettext, iconv, intl, json, exif, mysqli, pdo_mysql, Phar, posix, readline, shmop, SimpleXML, sockets, sysvmsg, sysvsem, sysvshm, tokenizer, wddx, xmlreader, xmlwriter, xsl, zip, Zend OPcache
Installed packages vlucas/phpdotenv@v2.4.0, symfony/css-selector@v3.4.3, tijsverkoyen/css-to-inline-styles@2.2.0, symfony/polyfill-mbstring@v1.6.0, symfony/var-dumper@v3.4.3, symfony/routing@v3.4.3, symfony/process@v3.4.3, paragonie/random_compat@v2.0.11, symfony/polyfill-php70@v1.6.0, symfony/http-foundation@v3.4.3, symfony/event-dispatcher@v4.0.3, psr/log@1.0.2, symfony/debug@v3.4.3, symfony/http-kernel@v3.4.3, symfony/finder@v3.4.3, symfony/console@v3.4.3, doctrine/lexer@v1.0.1, egulias/email-validator@2.1.3, swiftmailer/swiftmailer@v6.0.2, ramsey/uuid@3.7.1, psr/simple-cache@1.0.0, psr/container@1.0.0, symfony/translation@v3.4.3, nesbot/carbon@1.22.1, mtdowling/cron-expression@v1.2.1, monolog/monolog@1.23.0, league/flysystem@1.0.41, erusev/parsedown@1.6.4, doctrine/inflector@v1.2.0, laravel/framework@v5.5.28, symfony/polyfill-util@v1.6.0, symfony/polyfill-php56@v1.6.0, paragonie/constant_time_encoding@v2.2.0, pragmarx/google2fa@v2.0.7, pragmarx/google2fa-laravel@v0.1.4, bacon/bacon-qr-code@1.0.3, davejamesmiller/laravel-breadcrumbs@4.2.0, doctrine/collections@v1.5.0, doctrine/cache@v1.7.1, doctrine/annotations@v1.6.0, doctrine/common@v2.8.1, doctrine/dbal@v2.6.3, fideloper/proxy@3.3.4, twig/twig@v1.35.0, rcrowe/twigbridge@v0.9.5, laravelcollective/html@v5.5.1, league/commonmark@0.17.0, league/csv@9.1.1, rmccue/requests@v1.7.0, watson/validating@3.1.2,

I am not familiar with debugging in Sandstorm or PHP, but I am a software developer and willing to learn if you can guide me on where to begin.

@julianfoad (Author)

Approx. total numbers:

  • 8 asset accounts, 6000 txns (nearly all are in one account)
  • 4 rules, 2 categories
  • no other accounts, budgets, reports, tags, bills, piggy banks.
  • grain size in Sandstorm: 85 MB

Non-default configuration options:

  • Fiscal year start date: 05/04/2018
  • View range: one year

@JC5 (Member) commented Jan 12, 2018

Thanks for the extensive report! I see that the error has occurred for other Sandstorm users, in other applications. I think the grain is doing something bad to the Sandstorm environment. I will keep you posted. See also: sandstorm-io/sandstorm#3022

@julianfoad (Author)

Thank you! Glad to learn it's a known issue.

JC5 self-assigned this Jan 12, 2018
JC5 added the bug (Verified and replicated bugs and issues.) label Jan 12, 2018
JC5 added this to the 4.7.0 milestone Jan 12, 2018
@kentonv commented Jan 13, 2018

(Sandstorm developer here -- but I don't work on Firefly specifically.)

FWIW, the "Premature EOF" error usually indicates that the application server crashed (and hence Sandstorm's connection to the server received EOF).

There should be more information in the debug log (click the monitor icon in the Sandstorm top bar). You may need to restart the grain (restart button in the top bar) before you can open the debug log.

One possibility is that the system ran out of RAM and killed the app.
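
(If the OOM killer was responsible, the host's kernel log usually records it. A minimal check from a shell on the server — the grep pattern is a heuristic, since the exact message text varies by kernel version:)

$ sudo dmesg | grep -iE 'out of memory|killed process'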

@JC5 (Member) commented Jan 14, 2018

That fits @julianfoad's description: applying rules is a memory-intensive routine.

@julianfoad (Author)

This grain shows the same exception immediately every time I try to access it. I have rebooted the server machine. There is 2 GB of free RAM and 9 GB free disk on the server's root filesystem and my other (newer) Firefly grain is working fine.

The "Restart App" button in this grain says

Restart failed: Error: Internal server error [500]

The "Debug log" button says

...*** couldn't connect to grain (Error: remote exception: expected result >= minBytes; Premature EOF) ***

The "Download Backup" button successfully gives me an archive, in which the "log" file ends with:

** SANDSTORM SUPERVISOR: Starting up grain. Sandbox type: userns
*** Uncaught exception ***
sandstorm/supervisor.c++:232: overloaded: inotify_add_watch: No space left on device; path = sandbox/storage/framework/cache/data/c9/be
stack: 0x4170f8 0x416df6 0x41639c 0x40b4a3 0x40a749 0x41b6f9 0x4b45b1

Does that mean some internal filesystem within the grain is full?

The "/data/storage/cache/" directory within the archive is about 11 MB, and "/data/lib/mysql/" 33 MB and the whole archive 46 MB (uncompressed sizes).

@JC5 (Member) commented Jan 15, 2018

I am not sure yet, but it is helpful! Thanks!

If you could:

  1. Can you access any log entries from before the error? They might point me to the cause.
  2. Is sandbox/storage/framework/cache/data/c9/be a folder or a file? Is it readable?

@julianfoad (Author)

Here is the grain's log file:
log

'sandbox/storage/framework/cache/data/c9/be' is a folder containing one file 'c9be39155cf2ec6149991f9817f040831b61d4d9' containing 17 bytes of text:

9999999999s:0:"";
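
(That content is consistent with Laravel's file cache format: a 10-digit expiry timestamp — 9999999999 meaning "never expires" — followed by a PHP-serialized value, here an empty string. If the PHP CLI is available, the payload part can be checked in one line:)

$ php -r 'var_dump(unserialize("s:0:\"\";"));'
string(0) ""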

@julianfoad (Author)

And here is one of the two log files in '/data/storage/logs/':
ff3-cli-2018-01-12.log

The other is named 'ff3-fpm-fcgi-2018-01-12.log' (700 KB) and contains details of my imported transactions, so I am not attaching it. It contains a number of different error messages, but none that look directly related to the out-of-space exception. The last 20 or so entries are all:

local.INFO: Rule has no actions!

@JC5 (Member) commented Jan 15, 2018

This is very helpful, thanks!

@kentonv commented Jan 16, 2018

inotify_add_watch: No space left on device

This indicates that you've reached the per-user limit on inotify watches. This is weird. On my system, the default limit is 65536. The grain supervisor uses one inotify watch per directory inside the grain's private storage. Surely there are not over 65536 directories in the grain storage? When you download the backup zip, does it contain an excessively large number of directories?

You can check the limit on your system with:

cat /proc/sys/fs/inotify/max_user_watches
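
(And to count the directory entries in the downloaded backup — assuming it was saved as backup.zip — something like:)

$ zipinfo -1 backup.zip | grep -c '/$'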

@kentonv commented Jan 16, 2018

Looking at a backup from a blank Firefly instance I created, I see it does contain quite a few directories... but "quite a few" is 68. Does this manage to grow by 1000x somehow?

@JC5 (Member) commented Jan 16, 2018

Mmmm, there is a cache routine that could hit that limit, assuming Laravel (the underlying framework) is a fan of creating directories.

I can fix that.

@kentonv, could I use memcached on a Sandstorm grain? In addition to the fix, of course.
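
(One possible direction for such a fix — an assumption on my part, not necessarily what will ship — is moving Firefly III off Laravel's file cache driver, since that driver is what creates the sharded cache directories. In Laravel 5.5 the driver is selected in .env:)

CACHE_DRIVER=array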

@julianfoad (Author)

max_user_watches is 8192 (system default, not manually set, Ubuntu 16.04)

How many watches are currently in use (other services are running, including Sandstorm with a small new Firefly grain open)?

$ sudo lsof | grep -i inotify | wc -l
181

How many dirs in the grain backup zip?
8216 dirs and 8583 files in the whole backup zip;
8195 dirs and 8437 files in 'data/storage/';
8130 dirs and 8375 files in 'data/storage/framework/cache';
more or less every file in 'cache' is in its own shard directory.

So I tried to up the limit:

$ sudo sysctl fs.inotify.max_user_watches=65536
fs.inotify.max_user_watches = 65536
(this won't survive reboot, I know)
$ cat /proc/sys/fs/inotify/max_user_watches
65536
(no joy at this point loading the grain, using the "Restart App" button, etc. within Sandstorm)
$ sudo service sandstorm restart

Bingo! Now the grain loads and works. Hurray! I have got my data back. Thank you both.

$ sudo lsof | grep -i inotify | wc -l
181

That's curious. No additional watches are in use once the grain has loaded.

Indeed, a backup archive downloaded now contains
558 dirs and 488 files in the whole backup zip;
509 dirs and 323 files in 'data/storage/framework/cache';

So it was a temporary use of lots of cache files.
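
(To make the raised limit survive a reboot — the file name below is just a convention — the setting can go into sysctl.d:)

$ echo 'fs.inotify.max_user_watches=65536' | sudo tee /etc/sysctl.d/60-inotify.conf
$ sudo sysctl --system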

@JC5 (Member) commented Jan 16, 2018

Which I will take it upon myself to fix for the next release. Thanks @julianfoad and @kentonv for debugging this!

@kentonv commented Jan 16, 2018

Maybe lsof only lists the inotify file descriptors, not the inotify watch descriptors?
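
(Likely so: lsof shows one entry per inotify file descriptor, while the individual watches only appear in /proc/<pid>/fdinfo, one "inotify wd:" line per watch. A rough system-wide count, run as root so every process's fdinfo is readable:)

$ sudo sh -c 'cat /proc/*/fdinfo/* 2>/dev/null | grep -c "^inotify"'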

JC5 added the fixed (Bugs that are fixed (in a coming release).) label Jan 17, 2018
JC5 closed this as completed Jan 31, 2018
lock bot locked as resolved and limited conversation to collaborators Jan 24, 2020