Confusing error: Logstash failed to create queue {"exception"=>"Page file size is too small to hold elements" ... #8480
Just happened to me on 6.6.2; I had to purge the queue to fix the problem.
@davtex Next time, please back up your queue dir somewhere else so we can run some diagnostics on it. Also, if you could provide more contextual information, it would help us understand what events led to this problem (any warnings or error logs? any crash? etc.). In 6.6.2 there is …
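The "back up your queue dir before purging it" advice above can be scripted so the evidence survives a repair attempt. A minimal sketch; the queue path and backup location in the example are placeholders, not Logstash defaults this thread confirms:

```python
import shutil
import time
from pathlib import Path

def backup_queue(queue_dir: str, backup_root: str) -> Path:
    """Copy the persistent-queue directory aside so pqcheck/pqrepair
    (or a manual purge) can be attempted without losing evidence."""
    src = Path(queue_dir)
    dest = Path(backup_root) / f"queue-backup-{time.strftime('%Y%m%d-%H%M%S')}"
    shutil.copytree(src, dest)  # copies contents and file metadata
    return dest

# Example (paths are hypothetical placeholders):
# backup_queue("/var/lib/logstash/queue", "/var/backups")
```

Running this before any repair means a failed pqrepair can still be diagnosed later from the copy.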
@colinsurprenant Our primary Logstash has been inoperable since 18 March, and I only noticed today because the secondary was able to keep up with the load until now, when Logstash started struggling and a 3-minute gap appeared in log processing. The initial issue was this:
This was fixed by setting the correct permissions on /xxx/data/logstash (logstash:logstash; it was root:logstash for some reason). The next issue:
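Since the wrong-ownership scenario above comes up repeatedly in this thread, a pre-flight check can catch it before startup. A sketch only; the data path and the expected `logstash:logstash` owner are assumptions taken from this comment, not a Logstash-provided tool:

```python
import os
import pwd
import grp
from pathlib import Path

def owner_of(path) -> str:
    """Return 'user:group' ownership of a path, in the form this
    thread reports it (e.g. 'logstash:logstash' vs 'root:logstash')."""
    st = os.stat(path)
    return f"{pwd.getpwuid(st.st_uid).pw_name}:{grp.getgrgid(st.st_gid).gr_name}"

def check_ownership(data_dir: str, expected: str = "logstash:logstash"):
    """Yield every file or directory under data_dir whose ownership
    differs from the expected value."""
    root = Path(data_dir)
    for p in [root, *root.rglob("*")]:
        if owner_of(p) != expected:
            yield p, owner_of(p)

# Example (path is the redacted one from this comment):
# for path, owner in check_ownership("/xxx/data/logstash"):
#     print(f"{path} is owned by {owner}")
```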
I checked the queue size manually and it was low, so I did not think twice; everything was OK afterwards. I have since checked monitoring in Kibana and can now say that the queue was empty. I suspect the PQ could have been corrupted for the same reason as our problem with the .lock file: a change of file ownership mid-flight. David.
Also happening on 6.7.2; pqcheck/pqrepair did not help. File permissions were all correct.
Logstash 6.7.1 here, stuck with the same issue. Output of the PQ* utilities:
Permissions:
No space problems detected (50 GB free on the root filesystem, which holds /opt):
I had to delete the files from the queue directory to make Logstash work again.
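Free space is one of the first things commenters rule out here, and (as a later comment shows) a full filesystem is one way the PQ ends up unable to start. A small sketch of that check; the threshold and example path are assumptions:

```python
import shutil

def free_gb(path: str) -> float:
    """Free space, in GB, on the filesystem holding `path`."""
    return shutil.disk_usage(path).free / 1e9

# Example: warn before the persistent queue runs out of room,
# since page writes on a full disk are a plausible corruption path.
# The 1.0 GB threshold and the path are illustrative only.
# if free_gb("/opt/logstash/data/queue") < 1.0:
#     print("low disk space: persistent queue writes may fail")
```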
Also happens on 7.4.2
@nicenemo could you please provide more information, like the exact error/exception log, the content of the queue dir, and the output of bin/pqcheck. Also, all contextual information about what might have led to this: did Logstash crash? Was it restarted? Were there any errors or warnings? etc.
Since I had a backup of all data as JSON files, I restarted after clearing the queues. There was nothing meaningful in the logs.
@colinsurprenant

:backtrace=>["org/logstash/execution/ConvergeResultExt.java:103:in `create'", "org/logstash/execution/ConvergeResultExt.java:34:in `add'", "/usr/share/logstash/logstash-core/lib/lo
[ERROR] 2019-12-10 06:07:18.755 [LogStash::Runner] Logstash - java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
Same here, using Logstash 7.4.2. This keeps happening quite frequently.
Logstash version 7.6.2, using the persistent queue. Is this resolved?
Same here on 7.6.2
The same thing happened to me on Logstash 7.5. Once the filesystem the FS pipeline writes to got full, Logstash started reporting errors. When I noticed, we made some space on the filesystem, but Logstash never started again; I had to remove the queues before it was able to start. The filesystem that was full was the one the FS pipeline writes to, not the one holding the Logstash queues. I am also using persistent queues. Before deleting the queues I ran pqcheck and pqrepair, but that didn't work; pqcheck reported pages not fully acked, so I guess I have lost some information in the queues. Is this a bug? Is there any indication of when it will be solved? Thanks.
Hi,
Tests:
This will confirm that the queue is no longer corrupted and that Logstash has permissions to the new queue folder.
Empty the queue. Stop all data input into the node. Wait for 15 minutes and check the size of the queue; it should be less than the queue backup (step 2). Restart the container again when you suspect the queue is stuck.
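The drain test above (stop inputs, wait, compare queue sizes) can be sketched as a small helper. The 15-minute wait comes from the comment; the function names and the shrink-or-equal criterion are assumptions:

```python
import time
from pathlib import Path

def queue_size_bytes(queue_dir: str) -> int:
    """Total size of all files under the queue directory."""
    return sum(p.stat().st_size
               for p in Path(queue_dir).rglob("*") if p.is_file())

def is_draining(queue_dir: str, wait_seconds: int = 15 * 60) -> bool:
    """With all inputs stopped, a healthy queue should shrink (or at
    least not grow) while buffered events are flushed downstream."""
    before = queue_size_bytes(queue_dir)
    time.sleep(wait_seconds)
    return queue_size_bytes(queue_dir) <= before
```

A stuck queue would show the same (or a growing) size after the wait even with no new input, which is the symptom this test is meant to surface.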
Still happening in 7.14.2 @richardgilm
The issue is fixed in v7.17.1
The error is confusing for end users. We should state more clearly that this is potentially a corruption problem. See #8098