
Protocol error - packet too large #40

Closed
sysmonk opened this issue Sep 4, 2014 · 4 comments

@sysmonk (Contributor)
sysmonk commented Sep 4, 2014

Hi,

Recently I started seeing "packet too large" errors in Logstash:

{:timestamp=>"2014-09-04T13:57:56.467000+0000", :message=>"[LogCourierServer] Protocol error on connection from 1.2.3.4:17009: Packet too large (1435480)", :level=>:warn}

Log-courier side:

Sep  4 14:04:45 es13 log-courier[25423]: Connected to 2.3.4.5
Sep  4 14:04:45 es13 log-courier[25423]: Transport error, will try again: write tcp 2.3.4.5:9001: connection reset by peer
Sep  4 14:04:55 es13 log-courier[25423]: Attempting to connect to 2.3.4.5:9001 (logstash2.sat.wordpress.com)

The client tries to send the too-large packet, gets a protocol error, reconnects, and hits the same error again: an infinite loop. Unfortunately, switching to smaller spool sizes would only let smaller packets through until it hit the huge log message that is too big.

Not sure what's the best solution here:
1) Try smaller spools until the packet gets through.
2) Split the message if it's too big.
3) Log-courier knows the biggest packet it can send: don't send anything bigger, and fall back to 1) and/or 2) when it hits this.

@driskell (Owner)
driskell commented Sep 4, 2014

It's something I will be working on next. It was originally found in #28

There is a hard-coded limit in the logstash plugin that triggers this as a sanity check, and it needs adjusting. Limits also need placing on the courier side, for multiline collection, line length, and spool size. I'm just working out the best way to handle them.

The only question is what is better: skip the long line or split it. My initial instinct is to skip - if it's so big I would generally expect it to be a corruption or mistake anyway.

@sysmonk (Contributor, Author)
sysmonk commented Sep 4, 2014

Currently I'm in the position where I'd happily skip them, but don't know how ...

If log-courier skipped it, it'd be nice if it logged something about it ("line YY in file XX too big - skipping").

The best solution, of course, would be a configuration option to skip or split the log line. Hosts that need to be PCI compliant would want the line split rather than skipped; hosts that aren't PCI compliant don't care if one or two log lines are skipped...

@driskell (Owner)
driskell commented Sep 4, 2014

If you look at the offset in the ".log-courier" file it tells you the last offset. From there you may be able to look in your log files and spot a really big line. In #28 a developer had written a PDF to the log file! That should hopefully get things moving forward. The other option is to modify the log-courier gem to remove or increase the limit. It's set to 1_048_576 (1 MB), which is the compressed size, so the raw message must be quite large; chances are it's binary too, as that compresses less well.

@driskell (Owner)
driskell commented Sep 4, 2014

An option to split/skip makes perfect sense. Sorry about the pain - hopefully I can get something implemented sooner rather than later.
