Error during socket read: End of file; 0 bytes read so far #17

Open
gil-blau opened this issue Aug 10, 2015 · 12 comments

@gil-blau

Hi,

I am running KPL v0.10.0 and keep getting the following error message:
[error][io_service_socket.h:229] Error during socket read: End of file; 0 bytes read so far.
I am trying to run a Logstash plugin that is based on the KPL (logstash-output-kinesis).

Your support is much appreciated.

@kevincdeng
Contributor

Does the error say which endpoint it is? How frequently are you seeing the errors?

These errors usually occur because the remote closed the socket. There should be no loss of data since there are automatic retries.

@gil-blau
Author

Hi, and thank you for the quick answer!
The error comes up every second. It seems to me that we are losing a significant number of events: we read into the Logstash server from one stream and write to another stream, and the records written to the second stream are only ~10% of the records Logstash reads from the first stream.
The topology is something like: Kinesis stream A -> Logstash output -> Kinesis stream B.

@jbarrajon

I'm getting the same error using the Logstash plugin logstash-output-kinesis with KPL v0.10.1 when sending log data to a Kinesis stream.

@samcday

samcday commented Oct 8, 2015

We've been observing this error in a few logstash-output-kinesis rollouts. As far as I can tell, data is still making it into Kinesis fine, so I think this message is just noise. It would be nice to drop it down to DEBUG/INFO level.

@samcday

samcday commented Oct 20, 2015

@Gil-Bl it's worth noting that the default KPL record TTL is quite low (30 s). So if you're losing data, it's not necessarily because of the connection resets; it may be because the retry after a connection reset is happening too late.

@gil-blau
Author

Thanks, Sam!
We made some modifications, and now we get this error message after several hours. Due to this error the Logstash service stops and no records are sent to Kinesis.
In the error file we see the following:
[2015-10-22 00:22:27.679701] [0x00007f6ca799f700] [error] [io_service_socket.h:229] Error during socket read: Operation canceled; 0 bytes read so far (kinesis.us-west-2.amazonaws.com:443)
terminate called after throwing an instance of 'std::runtime_error'
what(): EOF reached while reading from ipc channel

Sam, can you also please explain where I can modify the KPL record TTL, and what the considerations are when changing this value?

Any support will be much appreciated.

@samcday

samcday commented Oct 24, 2015

terminate called after throwing an instance of 'std::runtime_error'
what(): EOF reached while reading from ipc channel

That actually sounds like a different problem that you should raise a separate issue for ;)

As for the record TTL question, see the documentation here
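For later readers, a minimal sketch of raising the record TTL from the Logstash side is shown below. It assumes the plugin passes a record_ttl option (in milliseconds) through to the KPL's RecordTtl setting; the stream name and region are placeholders, so check the plugin's README for the exact option name before relying on it.

output {
        kinesis {
                stream_name => "my-stream"   # placeholder
                region => "us-west-2"        # placeholder
                # Assumed pass-through to the KPL RecordTtl (default 30000 ms).
                # A larger value gives retries after connection resets more
                # time to succeed, at the cost of buffering records for longer.
                record_ttl => 120000
        }
}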

@robinjmurphy

robinjmurphy commented Aug 1, 2016

Hi. I'm getting this error every second or so when using the Logstash output.

[2016-08-01 13:02:17.805727] [0x00007f6d72a12700] [error] [io_service_socket.h:200] Error during socket write: Connection reset by peer; 49152 bytes out of 59582 written (kinesis.eu-west-1.amazonaws.com:443)
[2016-08-01 13:02:25.604829] [0x00007f6d72a12700] [error] [io_service_socket.h:200] Error during socket write: Connection reset by peer; 49152 bytes out of 63738 written (kinesis.eu-west-1.amazonaws.com:443)
[2016-08-01 13:02:32.555636] [0x00007f6d72a12700] [error] [io_service_socket.h:229] Error during socket read: End of file; 0 bytes read so far (kinesis.eu-west-1.amazonaws.com:443)
[2016-08-01 13:02:40.517302] [0x00007f6d72a12700] [error] [io_service_socket.h:229] Error during socket read: End of file; 0 bytes read so far (kinesis.eu-west-1.amazonaws.com:443)
[2016-08-01 13:02:47.885180] [0x00007f6d72a12700] [error] [io_service_socket.h:229] Error during socket read: End of file; 0 bytes read so far (kinesis.eu-west-1.amazonaws.com:443)
[2016-08-01 13:03:03.307367] [0x00007f6d72a12700] [error] [io_service_socket.h:229] Error during socket read: End of file; 0 bytes read so far (kinesis.eu-west-1.amazonaws.com:443)
[2016-08-01 13:03:03.359123] [0x00007f6d72a12700] [error] [io_service_socket.h:229] Error during socket read: End of file; 0 bytes read so far (kinesis.eu-west-1.amazonaws.com:443)
[2016-08-01 13:03:11.088866] [0x00007f6d72a12700] [error] [io_service_socket.h:229] Error during socket read: End of file; 0 bytes read so far (kinesis.eu-west-1.amazonaws.com:443)
[2016-08-01 13:03:17.527024] [0x00007f6d72a12700] [error] [io_service_socket.h:229] Error during socket read: End of file; 0 bytes read so far (kinesis.eu-west-1.amazonaws.com:443)
[2016-08-01 13:03:32.369491] [0x00007f6d72a12700] [error] [io_service_socket.h:229] Error during socket read: End of file; 0 bytes read so far (kinesis.eu-west-1.amazonaws.com:443)

Is this to be expected?

@AdamTheAnalyst

We are seeing this too, and it's causing us to drop data. We have a relatively low test throughput of 1 event per second, so it's not related to Kinesis volume.

Oddly, we only have this issue when using logstash-output-kinesis with large message sizes: we have a working system with an average message size of ~300 characters, while the system we are having problems with has an average message size of ~3500 characters.

I seem to have had some temporary success by disabling aggregation with:

aggregation_enabled => false

Now I see intermittent:

Failed to open connection to kinesis.eu-west-2.amazonaws.com:443 : Operation canceled

But logs continue to flow. My guess is that I was having intermittent connection issues that caused back pressure to build up; Logstash then aggregated the backlog into even larger messages that didn't stand a chance of being accepted by Kinesis, which caused the pipe to become blocked.

I am testing with this today and will report back if it fixes things for me.

@AdamTheAnalyst

We added the rate_limit setting shown below and this behavior seems to have subsided.

output {
        kinesis {
                stream_name => "xxx"
                region => "eu-west-2"
                access_key => "xxx"
                secret_key => "xxx"
                metrics_level => "none"
                aggregation_enabled => false
                rate_limit => 80
        }
        file {
                path => "/tmp/debug.log"
        }
}

@pfifer
Contributor

pfifer commented Feb 15, 2017

Which version of the KPL are you using?

@ryangeno

I'm seeing a similar issue. Data flows from Filebeat to Logstash. I write the records out to a file and get a count of 127.

Then, when they go through Kinesis, I get 84 records.

This is just a subset of the data. I'm probably sending 40k records in total every time I run a Spark job.

I'm running these as SysV services on AWS Linux:
Filebeat - 5.2.1
Logstash - 2.4.1 (I need the Kinesis output, which is why I'm still on this version) - 8 workers

Updated logstash-input-beats from 3.1.8 to 3.1.12

kinesis {
        stream_name => "${KINESIS_STREAM:DEFAULT_KINESIS_STREAM}"
        region => "us-east-1"
        randomized_partition_key => true
        aggregation_enabled => false
        max_pending_records => 10000
        rate_limit => 80
}

if ("application_master" in [tags]) and ([type] == "yarn-container-stdout-logs") {
        file {
                path => "/tmp/app-mstr-logs.log"
                flush_interval => 0
        }
}

No errors in the logs, and I'm pushing to 18 Kinesis shards. Not sure what to do.
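For comparison, here is a hedged sketch of how the mitigations discussed earlier in this thread (rate limiting, disabled aggregation, a longer record TTL) could be combined with the output shown above. The record_ttl option is an assumption about the plugin's naming, and the values are illustrative only.

output {
        kinesis {
                stream_name => "${KINESIS_STREAM:DEFAULT_KINESIS_STREAM}"
                region => "us-east-1"
                randomized_partition_key => true
                aggregation_enabled => false   # avoid building oversized aggregated records
                max_pending_records => 10000
                rate_limit => 80               # throttle to a fraction of the per-shard limit
                # Assumed pass-through to the KPL RecordTtl (milliseconds); the default
                # of 30000 can expire records before retries after connection resets succeed.
                record_ttl => 60000
        }
}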
