
Netty4 Exception #19893

Closed
aleph-zero opened this issue Aug 9, 2016 · 12 comments
Labels
blocker >bug :Distributed/Network Http and internode communication implementations

Comments

@aleph-zero
Contributor

Elasticsearch version:
alpha5

Plugins installed: []
None

JVM version:
java version "1.8.0_71"
Java(TM) SE Runtime Environment (build 1.8.0_71-b15)
Java HotSpot(TM) 64-Bit Server VM (build 25.71-b15, mixed mode)

OS version:
OSX

Description of the problem including expected versus actual behavior:
Exception thrown in netty4 module while importing dashboards for metricbeat.

Steps to reproduce:

  1. Start ES alpha5
  2. Import the Kibana dashboard for metricbeat: cd metricbeat-5.0.0-alpha5-darwin-x86_64/kibana && ./import_dashboards.sh
  3. Observe ES logs:
[2016-08-09 11:31:01,091][WARN ][http.netty4              ] [wtOV9Vb] caught exception while handling client http traffic, closing connection [id: 0x1320b717, L:/0:0:0:0:0:0:0:1:9200 - R:/0:0:0:0:0:0:0:1:54732]
java.lang.UnsupportedOperationException: unsupported message type: DefaultFullHttpResponse (expected: ByteBuf, FileRegion)
    at io.netty.channel.nio.AbstractNioByteChannel.filterOutboundMessage(AbstractNioByteChannel.java:260)
    at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:799)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1291)
    at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:748)
    at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:811)
    at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:824)
    at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:804)
    at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:841)
    at io.netty.handler.codec.MessageAggregator.decode(MessageAggregator.java:222)
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:88)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:372)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:358)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:350)
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:372)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:358)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:350)
    at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:372)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:358)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:350)
    at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:372)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:358)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:350)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:372)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:358)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:129)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:571)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:474)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:428)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:398)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:877)
    at java.lang.Thread.run(Thread.java:745)
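The exception says the channel head received a structured HTTP response object where it only accepts raw bytes (`ByteBuf`) or a `FileRegion`, i.e. the `DefaultFullHttpResponse` reached the socket without passing through an HTTP encoder. A toy Python model of that outbound type check (an illustration of the failure mode, not Netty's actual code):

```python
# Toy model of Netty's AbstractNioByteChannel.filterOutboundMessage:
# the outbound head accepts only raw bytes, so a structured response
# object that reaches it un-encoded is rejected, mirroring
# "unsupported message type: DefaultFullHttpResponse (expected: ByteBuf, FileRegion)".

class FullHttpResponse:             # stands in for DefaultFullHttpResponse
    status = "100 Continue"

def filter_outbound_message(msg):
    # Raw bytes pass through untouched; anything else is rejected.
    if isinstance(msg, (bytes, bytearray)):
        return msg
    raise TypeError("unsupported message type: " + type(msg).__name__)

try:
    filter_outbound_message(FullHttpResponse())
except TypeError as e:
    print(e)  # unsupported message type: FullHttpResponse
```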
@aleph-zero aleph-zero added :Distributed/Network Http and internode communication implementations v5.0.0-alpha5 labels Aug 9, 2016
@jasontedor
Member

Thanks for reporting @aleph-zero, I'll attempt to reproduce and will investigate.

@tsg

tsg commented Aug 9, 2016

Some additional information from our testing:

  • This works with the internal RC3 alpha5 but doesn't work with RC4.
  • A more minimal way to reproduce it:
curl -XPUT 'http://localhost:9200/.kibana/index-pattern/filebeat-*' -d @./index-pattern/filebeat.json

Where ./index-pattern/filebeat.json has the following contents:

{
  "fields": "[{\"name\":\"offset\",\"type\":\"number\",\"count\":0,\"scripted\":false,\"indexed\":true,\"analyzed\":false,\"doc_values\":true},{\"name\":\"_index\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"indexed\":false,\"analyzed\":false,\"doc_values\":false},{\"name\":\"line\",\"type\":\"number\",\"count\":0,\"scripted\":false,\"indexed\":true,\"analyzed\":false,\"doc_values\":true},{\"name\":\"_type\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"indexed\":true,\"analyzed\":false,\"doc_values\":false},{\"name\":\"message\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"indexed\":true,\"analyzed\":true,\"doc_values\":false},{\"name\":\"_source\",\"type\":\"_source\",\"count\":0,\"scripted\":false,\"indexed\":false,\"analyzed\":false,\"doc_values\":false},{\"name\":\"_id\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"indexed\":false,\"analyzed\":false,\"doc_values\":false},{\"name\":\"@timestamp\",\"type\":\"date\",\"count\":0,\"scripted\":false,\"indexed\":true,\"analyzed\":false,\"doc_values\":false},{\"name\":\"beat.name\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"indexed\":true,\"analyzed\":false,\"doc_values\":true},{\"name\":\"count\",\"type\":\"number\",\"count\":0,\"scripted\":false,\"indexed\":true,\"analyzed\":false,\"doc_values\":true},{\"name\":\"source\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"indexed\":true,\"analyzed\":false,\"doc_values\":true},{\"name\":\"type\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"indexed\":true,\"analyzed\":false,\"doc_values\":true}]"
}

@imotov
Contributor

imotov commented Aug 9, 2016

Just stumbled upon the same issue. It fails for any body larger than 1k:

curl -XPUT localhost:9200/test/doc/1 -d '{"text": "blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah"}'

If you reduce the last blah to bl it works.
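For context (an editorial note, not from the thread): curl of that era adds an `Expect: 100-continue` header automatically once the upload body exceeds 1024 bytes, which is why the failure threshold sits right at ~1k. A quick check of the two payload sizes:

```python
# Illustration of the ~1k threshold: curl switches to "Expect: 100-continue"
# only when the request body is larger than 1024 bytes, so the small body
# below succeeds while the large one triggers the bug.
body_small = '{"text": "' + "blah " * 40 + '"}'   # well under 1 KiB
body_large = '{"text": "' + "blah " * 250 + '"}'  # just over 1 KiB
print(len(body_small), len(body_large))  # 212 1262
```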

@dakrone
Member

dakrone commented Aug 9, 2016

Whoa, it's unfortunate that none of our tests exercise this; it means that no _bulk ingestions with bodies over 1k will work :(

@jaymode
Member

jaymode commented Aug 9, 2016

The curl reproduction looks like the same issue as #19834, where Netty 4 doesn't handle an Expect: 100-continue header properly (@tlrx is looking into it):

curl -XPUT localhost:9200/test/doc/1 -d '{"text": "blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah"}' -v

*   Trying ::1...
* Connected to localhost (::1) port 9200 (#0)
> PUT /test/doc/1 HTTP/1.1
> Host: localhost:9200
> User-Agent: curl/7.43.0
> Accept: */*
> Content-Length: 1026
> Content-Type: application/x-www-form-urlencoded
> Expect: 100-continue
> 
* Empty reply from server
* Connection #0 to host localhost left intact
curl: (52) Empty reply from server
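The verbose output shows the handshake that goes wrong: the client announces `Expect: 100-continue`, waits for an interim `HTTP/1.1 100 Continue` before uploading the body, and instead the connection dies. A minimal sketch of correct server-side handling using Python's standard library (a stand-in for what Elasticsearch's Netty 4 layer should do, not its actual code):

```python
# Sketch: a server that honors "Expect: 100-continue". The interim 100
# response tells the client to go ahead and send the body; a server that
# mishandles this step leaves clients like curl with an empty reply.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"   # Expect handling requires HTTP/1.1

    def handle_expect_100(self):
        # Acknowledge the Expect header with an interim 100 Continue,
        # then let request processing proceed.
        self.send_response_only(100)
        self.end_headers()
        return True

    def do_PUT(self):
        n = int(self.headers.get("Content-Length", 0))
        self.rfile.read(n)                       # consume the body
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.send_header("Connection", "close")  # let the demo shut down
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):                # keep demo output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("PUT", "/test/doc/1", body=b"x" * 2048,
             headers={"Expect": "100-continue"})
resp = conn.getresponse()   # http.client skips the interim 100 for us
status = resp.status
resp.read()
server.shutdown()
print(status)  # 200
```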

@jasontedor
Member

This works with the internal RC3 alpha5 but doesn't work with RC4.

@tsg RC3 did not have Netty 4 as the default.

@jasontedor
Member

It fails for any body larger than 1k:

@imotov Sending the curl request with -H "Expect:" prevents curl from sending the Expect: 100-continue header and works around the issue.

@jasontedor
Member

jasontedor commented Aug 9, 2016

Whoa, it's unfortunate that none of our tests exercise this; it means that no _bulk ingestions with bodies over 1k will work :(

The benchmarks send large bodies, the issue is not large bodies, it's the 100 continue header.

@jasontedor
Member

Duplicates #19834

@jasontedor
Member

jasontedor commented Aug 9, 2016

@aleph-zero and @tsg Here's a workaround for the metricbeat import script:

diff --git a/dev-tools/import_dashboards.sh b/dev-tools/import_dashboards.sh
index 231c97e..b7a1400 100755
--- a/dev-tools/import_dashboards.sh
+++ b/dev-tools/import_dashboards.sh
@@ -151,7 +151,7 @@ if [ -d "${DIR}/visualization" ]; then
     do
         NAME=`basename ${file} .json`
         echo "Import visualization ${NAME}:"
-        ${CURL} -XPUT ${ELASTICSEARCH}/${KIBANA_INDEX}/visualization/${NAME} \
+        ${CURL} -H "Expect:" -XPUT ${ELASTICSEARCH}/${KIBANA_INDEX}/visualization/${NAME} \
             -d @${file} || exit 1
         echo
     done
@@ -162,7 +162,7 @@ if [ -d "${DIR}/dashboard" ]; then
     do
         NAME=`basename ${file} .json`
         echo "Import dashboard ${NAME}:"
-        ${CURL} -XPUT ${ELASTICSEARCH}/${KIBANA_INDEX}/dashboard/${NAME} \
+        ${CURL} -H "Expect:" -XPUT ${ELASTICSEARCH}/${KIBANA_INDEX}/dashboard/${NAME} \
             -d @${file} || exit 1
         echo
     done
@@ -174,7 +174,7 @@ if [ -d "${DIR}/index-pattern" ]; then
         NAME=`awk '$1 == "\"title\":" {gsub(/[",]/, "", $2); print $2}' ${file}`
         echo "Import index pattern ${NAME}:"

-        ${CURL} -XPUT ${ELASTICSEARCH}/${KIBANA_INDEX}/index-pattern/${NAME} \
+        ${CURL} -H "Expect:" -XPUT ${ELASTICSEARCH}/${KIBANA_INDEX}/index-pattern/${NAME} \
             -d @${file} || exit 1
         echo
     done

We just need to tell curl not to send the "Expect: 100-continue" header. I tested this locally and it works fine.

@aleph-zero
Contributor Author

thanks @jasontedor

@jasontedor
Member

I have verified that the patch in #19904 fixes the issue reported here.
