Hello, I updated my brokers to Kafka 0.10.1.1, and now the plugin throws the errors below on startup.
Version: logstash-input-kafka 6.2.x, Logstash 2.4, and Kafka brokers at version 0.10.1.1
Operating System: Amazon Linux AMI release 2015.03
Sample Data:
$ tail -n100 /var/log/logstash/logstash.err
Exception in thread "Ruby-0-Thread-36: /opt/logstash/vendor/local_gems/5a1a2485/logstash-input-kafka-6.2.2/lib/logstash/inputs/kafka.rb:231" java.lang.NullPointerException
at org.apache.kafka.common.record.ByteBufferInputStream.read(org/apache/kafka/common/record/ByteBufferInputStream.java:34)
at java.util.zip.CheckedInputStream.read(java/util/zip/CheckedInputStream.java:59)
at java.util.zip.GZIPInputStream.readUByte(java/util/zip/GZIPInputStream.java:266)
at java.util.zip.GZIPInputStream.readUShort(java/util/zip/GZIPInputStream.java:258)
at java.util.zip.GZIPInputStream.readHeader(java/util/zip/GZIPInputStream.java:164)
at java.util.zip.GZIPInputStream.<init>(java/util/zip/GZIPInputStream.java:79)
at java.util.zip.GZIPInputStream.<init>(java/util/zip/GZIPInputStream.java:91)
at org.apache.kafka.common.record.Compressor.wrapForInput(org/apache/kafka/common/record/Compressor.java:280)
at org.apache.kafka.common.record.MemoryRecords$RecordsIterator.<init>(org/apache/kafka/common/record/MemoryRecords.java:247)
at org.apache.kafka.common.record.MemoryRecords$RecordsIterator.makeNext(org/apache/kafka/common/record/MemoryRecords.java:316)
at org.apache.kafka.common.record.MemoryRecords$RecordsIterator.makeNext(org/apache/kafka/common/record/MemoryRecords.java:222)
at org.apache.kafka.common.utils.AbstractIterator.maybeComputeNext(org/apache/kafka/common/utils/AbstractIterator.java:79)
at org.apache.kafka.common.utils.AbstractIterator.hasNext(org/apache/kafka/common/utils/AbstractIterator.java:45)
at org.apache.kafka.clients.consumer.internals.Fetcher.parseFetchedData(org/apache/kafka/clients/consumer/internals/Fetcher.java:685)
at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(org/apache/kafka/clients/consumer/internals/Fetcher.java:424)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(org/apache/kafka/clients/consumer/KafkaConsumer.java:1045)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(org/apache/kafka/clients/consumer/KafkaConsumer.java:979)
at java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:498)
at RUBY.thread_runner(/opt/logstash/vendor/local_gems/5a1a2485/logstash-input-kafka-6.2.2/lib/logstash/inputs/kafka.rb:241)
at java.lang.Thread.run(java/lang/Thread.java:745)
Exception in thread "Ruby-0-Thread-23: /opt/logstash/vendor/local_gems/5a1a2485/logstash-input-kafka-6.2.2/lib/logstash/inputs/kafka.rb:231" java.lang.NullPointerException
at org.apache.kafka.common.record.ByteBufferInputStream.read(org/apache/kafka/common/record/ByteBufferInputStream.java:34)
at java.util.zip.CheckedInputStream.read(java/util/zip/CheckedInputStream.java:59)
at java.util.zip.GZIPInputStream.readUByte(java/util/zip/GZIPInputStream.java:266)
at java.util.zip.GZIPInputStream.readUShort(java/util/zip/GZIPInputStream.java:258)
at java.util.zip.GZIPInputStream.readHeader(java/util/zip/GZIPInputStream.java:164)
at java.util.zip.GZIPInputStream.<init>(java/util/zip/GZIPInputStream.java:79)
at java.util.zip.GZIPInputStream.<init>(java/util/zip/GZIPInputStream.java:91)
at org.apache.kafka.common.record.Compressor.wrapForInput(org/apache/kafka/common/record/Compressor.java:280)
at org.apache.kafka.common.record.MemoryRecords$RecordsIterator.<init>(org/apache/kafka/common/record/MemoryRecords.java:247)
at org.apache.kafka.common.record.MemoryRecords$RecordsIterator.makeNext(org/apache/kafka/common/record/MemoryRecords.java:316)
at org.apache.kafka.common.record.MemoryRecords$RecordsIterator.makeNext(org/apache/kafka/common/record/MemoryRecords.java:222)
at org.apache.kafka.common.utils.AbstractIterator.maybeComputeNext(org/apache/kafka/common/utils/AbstractIterator.java:79)
at org.apache.kafka.common.utils.AbstractIterator.hasNext(org/apache/kafka/common/utils/AbstractIterator.java:45)
at org.apache.kafka.clients.consumer.internals.Fetcher.parseFetchedData(org/apache/kafka/clients/consumer/internals/Fetcher.java:685)
at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(org/apache/kafka/clients/consumer/internals/Fetcher.java:424)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(org/apache/kafka/clients/consumer/KafkaConsumer.java:1045)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(org/apache/kafka/clients/consumer/KafkaConsumer.java:979)
at java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:498)
at RUBY.thread_runner(/opt/logstash/vendor/local_gems/5a1a2485/logstash-input-kafka-6.2.2/lib/logstash/inputs/kafka.rb:241)
at java.lang.Thread.run(java/lang/Thread.java:745)
And the Kafka brokers report this error:
$ tail -n100 /opt/kafka/logs/kafkaServer.out
[2017-01-20 15:40:00,115] WARN Failed to send SSL Close message (org.apache.kafka.common.network.SslTransportLayer)
java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
at sun.nio.ch.IOUtil.write(IOUtil.java:65)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
at org.apache.kafka.common.network.SslTransportLayer.flush(SslTransportLayer.java:195)
at org.apache.kafka.common.network.SslTransportLayer.close(SslTransportLayer.java:163)
at org.apache.kafka.common.utils.Utils.closeAll(Utils.java:690)
at org.apache.kafka.common.network.KafkaChannel.close(KafkaChannel.java:47)
at org.apache.kafka.common.network.Selector.close(Selector.java:487)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:368)
at org.apache.kafka.common.network.Selector.poll(Selector.java:291)
at kafka.network.Processor.poll(SocketServer.scala:476)
at kafka.network.Processor.run(SocketServer.scala:416)
at java.lang.Thread.run(Thread.java:745)
[2017-01-20 15:40:00,400] WARN Failed to send SSL Close message (org.apache.kafka.common.network.SslTransportLayer)
java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
at sun.nio.ch.IOUtil.write(IOUtil.java:65)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
at org.apache.kafka.common.network.SslTransportLayer.flush(SslTransportLayer.java:195)
at org.apache.kafka.common.network.SslTransportLayer.close(SslTransportLayer.java:163)
at org.apache.kafka.common.utils.Utils.closeAll(Utils.java:690)
at org.apache.kafka.common.network.KafkaChannel.close(KafkaChannel.java:47)
at org.apache.kafka.common.network.Selector.close(Selector.java:487)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:368)
at org.apache.kafka.common.network.Selector.poll(Selector.java:291)
at kafka.network.Processor.poll(SocketServer.scala:476)
at kafka.network.Processor.run(SocketServer.scala:416)
at java.lang.Thread.run(Thread.java:745)
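In case it helps narrow things down: when brokers are upgraded ahead of their consumers, Kafka's upgrade notes describe pinning the on-disk message format so that older clients keep working. A sketch of the relevant `server.properties` lines (the version values here are assumptions about this upgrade path, not taken from the actual broker config, which is not shown):

```properties
# server.properties (sketch; adjust to the broker's actual upgrade path)
# Keep the message format at the level the oldest consumer understands
# until all clients are upgraded, then raise it.
inter.broker.protocol.version=0.10.1
log.message.format.version=0.10.0
```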
My Logstash configuration:
And the configuration of one of the Kafka brokers:
Does anyone have a hint about what causes this problem?
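One way to start narrowing this down (a sketch, not a confirmed fix) is to check which `kafka-clients` version the plugin actually vendors and compare it against the broker version, since a 6.2.x plugin may ship a client older than 0.10.1. The default `/opt/logstash` path below is taken from the stack trace above and may differ on other installs:

```shell
#!/bin/sh
# Sketch: list the kafka-clients jar(s) vendored under a Logstash install,
# so the client version can be compared with the broker version (0.10.1.1).
list_kafka_client_jars() {
  # Default install root comes from the stack trace above; override via $1.
  ls_home="${1:-/opt/logstash}"
  # find prints nothing (and is silenced) if the path does not exist
  find "$ls_home/vendor" -name 'kafka-clients-*.jar' 2>/dev/null || true
}

list_kafka_client_jars "$@"
```

If the jar it prints is older than the broker (e.g. `kafka-clients-0.10.0.x`), a client/broker version mismatch is a plausible suspect.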