NotSerializableExceptionWrapper exception #128

Closed
hleb-albau opened this Issue Oct 1, 2017 · 11 comments

@hleb-albau

hleb-albau commented Oct 1, 2017

I have the following query:

curl -XGET "http://localhost:9200/blockchains/_search?pretty&q=0000000018920212d4d4dcddb6e24f37d23b35a0078d270227c83051bb350049" 

If the value I supply in q causes Elasticsearch to find a single bitcoin_tx document, everything is OK. If I supply a value in q that should match both bitcoin_tx and bitcoin_block items, I get the following error:

 org.elasticsearch.action.search.SearchPhaseExecutionException: all shards failed
	at org.elasticsearch.action.search.AbstractSearchAsyncAction.onFirstPhaseResult(AbstractSearchAsyncAction.java:206) ~[elasticsearch-2.4.6.jar:2.4.6]
	at org.elasticsearch.action.search.AbstractSearchAsyncAction$1.onFailure(AbstractSearchAsyncAction.java:152) ~[elasticsearch-2.4.6.jar:2.4.6]
	at org.elasticsearch.action.ActionListenerResponseHandler.handleException(ActionListenerResponseHandler.java:46) ~[elasticsearch-2.4.6.jar:2.4.6]
	at org.elasticsearch.transport.TransportService$DirectResponseChannel.processException(TransportService.java:874) ~[elasticsearch-2.4.6.jar:2.4.6]
	at org.elasticsearch.transport.TransportService$DirectResponseChannel.sendResponse(TransportService.java:852) ~[elasticsearch-2.4.6.jar:2.4.6]
	at org.elasticsearch.transport.TransportService$4.onFailure(TransportService.java:389) ~[elasticsearch-2.4.6.jar:2.4.6]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) ~[elasticsearch-2.4.6.jar:2.4.6]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_131]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[na:1.8.0_131]
	at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_131]
Caused by: org.elasticsearch.common.io.stream.NotSerializableExceptionWrapper: : line 1:8 no viable alternative at input 'FROM' (SELECT  [FROM]...)
	at org.elasticsearch.ElasticsearchException.guessRootCauses(ElasticsearchException.java:386) ~[elasticsearch-2.4.6.jar:2.4.6]
	at org.elasticsearch.action.search.SearchPhaseExecutionException.guessRootCauses(SearchPhaseExecutionException.java:152) ~[elasticsearch-2.4.6.jar:2.4.6]
	at org.elasticsearch.action.search.SearchPhaseExecutionException.getCause(SearchPhaseExecutionException.java:99) ~[elasticsearch-2.4.6.jar:2.4.6]
	at org.elasticsearch.ElasticsearchException.writeTo(ElasticsearchException.java:226) ~[elasticsearch-2.4.6.jar:2.4.6]
	at org.elasticsearch.action.search.SearchPhaseExecutionException.writeTo(SearchPhaseExecutionException.java:64) ~[elasticsearch-2.4.6.jar:2.4.6]
	at org.elasticsearch.common.io.stream.StreamOutput.writeThrowable(StreamOutput.java:590) ~[elasticsearch-2.4.6.jar:2.4.6]
	at org.elasticsearch.ElasticsearchException.writeTo(ElasticsearchException.java:226) ~[elasticsearch-2.4.6.jar:2.4.6]
	at org.elasticsearch.transport.ActionTransportException.writeTo(ActionTransportException.java:64) ~[elasticsearch-2.4.6.jar:2.4.6]
	at org.elasticsearch.common.io.stream.StreamOutput.writeThrowable(StreamOutput.java:590) ~[elasticsearch-2.4.6.jar:2.4.6]
	at org.elasticsearch.transport.netty.NettyTransportChannel.sendResponse(NettyTransportChannel.java:137) ~[elasticsearch-2.4.6.jar:2.4.6]
	at org.elasticsearch.transport.DelegatingTransportChannel.sendResponse(DelegatingTransportChannel.java:68) ~[elasticsearch-2.4.6.jar:2.4.6]
	at org.elasticsearch.transport.RequestHandlerRegistry$TransportChannelWrapper.sendResponse(RequestHandlerRegistry.java:152) ~[elasticsearch-2.4.6.jar:2.4.6]
	at org.elasticsearch.action.support.HandledTransportAction$TransportHandler$1.onFailure(HandledTransportAction.java:82) ~[elasticsearch-2.4.6.jar:2.4.6]
	at org.elasticsearch.action.search.AbstractSearchAsyncAction.raiseEarlyFailure(AbstractSearchAsyncAction.java:294) ~[elasticsearch-2.4.6.jar:2.4.6]
	... 10 common frames omitted
Caused by: org.elasticsearch.common.io.stream.NotSerializableExceptionWrapper: syntax_exception: line 1:8 no viable alternative at input 'FROM' (SELECT  [FROM]...)
	at org.apache.cassandra.cql3.ErrorCollector.throwFirstSyntaxError(ErrorCollector.java:101) ~[na:na]
	at org.apache.cassandra.cql3.CQLFragmentParser.parseAnyUnhandled(CQLFragmentParser.java:80) ~[na:na]
	at org.apache.cassandra.cql3.QueryProcessor.parseStatement(QueryProcessor.java:580) ~[na:na]
	at org.apache.cassandra.cql3.QueryProcessor.getStatement(QueryProcessor.java:550) ~[na:na] 
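(For reference, the q URI parameter should be equivalent to an explicit query_string request body, roughly like the one below — same hash value as above:)

curl -XGET "http://localhost:9200/blockchains/_search?pretty" -d '{
  "query": {
    "query_string": {
      "query": "0000000018920212d4d4dcddb6e24f37d23b35a0078d270227c83051bb350049"
    }
  }
}'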

CQL tables:

CREATE KEYSPACE IF NOT EXISTS blockchains
    WITH REPLICATION = {'class' : 'NetworkTopologyStrategy','LOCAL_DEVELOPMENT' : 1};


CREATE TYPE IF NOT EXISTS blockchains.bitcoin_block_tx_io (
    address text,
    amount text
);


CREATE TYPE IF NOT EXISTS blockchains.bitcoin_block_tx (
    fee text,
    lock_time bigint,
    hash text,
    ins list<FROZEN <blockchains.bitcoin_block_tx_io>>,
    outs list<FROZEN <blockchains.bitcoin_block_tx_io>>
);


CREATE TABLE IF NOT EXISTS blockchains.bitcoin_block (
     hash text,
     height bigint PRIMARY KEY,
     time timestamp,
     nonce bigint,
     merkleroot text,
     size int,
     version int,
     weight int,
     bits text,
     tx_number int,
     total_outputs_value text,
     difficulty varint,
     txs list<FROZEN <blockchains.bitcoin_block_tx>>
);


CREATE TYPE IF NOT EXISTS blockchains.bitcoin_tx_out (
    address text,
    amount text,
    asm text,
    out int,
    required_signatures int
);


CREATE TYPE IF NOT EXISTS blockchains.bitcoin_tx_in (
    address text,
    amount text,
    asm text,
    tx_id text,
    tx_out int
);


CREATE TABLE IF NOT EXISTS blockchains.bitcoin_tx (
     txid text PRIMARY KEY,
     block_number bigint,
     block_hash text,
     block_time timestamp,
     size int,
     coinbase text,
     lock_time bigint,
     fee text,
     total_input text,
     total_output text,
     ins list<FROZEN <blockchains.bitcoin_tx_in>>,
     outs list<FROZEN <blockchains.bitcoin_tx_out>>
);

Elastic mapping

curl -XPUT "http://localhost:9200/blockchains/" -d '{
   "settings" : { "keyspace" : "blockchains" } },
}'

curl -XPUT "http://localhost:9200/blockchains/_mapping/bitcoin_tx" -d '{
    "bitcoin_tx" : {
      "properties": {
        "txid": {"type": "string", "index": "not_analyzed", "include_in_all": true, "cql_collection" : "singleton"},
        "block_hash": {"type": "string", "index": "not_analyzed", "include_in_all": true, "cql_collection" : "singleton"},
        "block_number": {"type": "long", "index": "no", "include_in_all": false, "cql_collection" : "singleton"},
        "block_time": {"type": "date", "index": "no", "include_in_all": false, "cql_collection" : "singleton"},
        "fee": {"type": "string", "index": "no", "include_in_all": false, "cql_collection" : "singleton"},
        "total_output": {"type": "string", "index": "no", "include_in_all": false, "cql_collection" : "singleton"}
      }
    }
}'


curl -XPUT "http://localhost:9200/blockchains/_mapping/bitcoin_block" -d '{
    "bitcoin_block" : {
      "properties": {
        "hash": {"type": "string", "index": "not_analyzed", "include_in_all": true, "cql_collection" : "singleton"},
        "height": {"type": "long", "index": "no", "include_in_all": true, "cql_collection" : "singleton"},
        "time": {"type": "date", "index": "no", "include_in_all": false, "cql_collection" : "singleton"},
        "tx_number": {"type": "integer", "index": "no", "include_in_all": false, "cql_collection" : "singleton"},
        "total_outputs_value": {"type": "string", "index": "no", "include_in_all": false, "cql_collection" : "singleton"}
      }
    }
}'
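To double-check that the keyspace setting and both type mappings were registered on the single index, something like:

curl -XGET "http://localhost:9200/blockchains/_settings?pretty"
curl -XGET "http://localhost:9200/blockchains/_mapping?pretty"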

elassandra 2.4.5.5

@vroyer


vroyer commented Oct 1, 2017

@hleb-albau


hleb-albau commented Oct 1, 2017

Hi, is it possible to adjust the log level just via environment properties for the elassandra docker image?

@DBarthe


DBarthe commented Oct 1, 2017

Hi @hleb-albau,

No, this is not yet possible. It is based on the official cassandra image, which does not provide such a feature.
A workaround is to call nodetool setLoggingLevel at runtime.

Hope this helps...
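For example, against a running container (the container name is just a placeholder):

docker exec -it my-elassandra nodetool setlogginglevel org.elassandra.cluster TRACE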

@hleb-albau


hleb-albau commented Oct 1, 2017

@DBarthe I use ./nodetool setlogginglevel org.elassandra.cluster TRACE and see the following message:

StorageService.java:3792 setLoggingLevel set log level to TRACE for classes under 'org.elassandra.cluster' (if the level doesn't look like 'TRACE' then the logger couldn't parse 'TRACE')

But when I search, there are no additional log entries with CQL in the system.log file :(
Just:

WARN  [elasticsearch[127.0.1.1][search][T#6]] BytesRestResponse.java:134 convert path: /blockchains/_search, params: {q=000000003fd0fa5f78eea07b6daf176bfab63fb28a56768e7bbce39f047a7c14, pretty=, index=blockchains}

Update: The same happens with the following logback.xml file:

<!--
 Licensed to the Apache Software Foundation (ASF) under one
 or more contributor license agreements.  See the NOTICE file
 distributed with this work for additional information
 regarding copyright ownership.  The ASF licenses this file
 to you under the Apache License, Version 2.0 (the
 "License"); you may not use this file except in compliance
 with the License.  You may obtain a copy of the License at

   http://www.apache.org/licenses/LICENSE-2.0

 Unless required by applicable law or agreed to in writing,
 software distributed under the License is distributed on an
 "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 KIND, either express or implied.  See the License for the
 specific language governing permissions and limitations
 under the License.
-->

<configuration scan="true" debug="true">
    <jmxConfigurator/>
    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${cassandra.logdir}/system_${cassandra.node_ordinal:-0}.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
            <fileNamePattern>${cassandra.logdir}/system_${cassandra.node_ordinal:-0}.log.%i.zip</fileNamePattern>
            <minIndex>1</minIndex>
            <maxIndex>20</maxIndex>
        </rollingPolicy>

        <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
            <maxFileSize>500MB</maxFileSize>
        </triggeringPolicy>
        <encoder>
            <pattern>%date{ISO8601} %-5level [%thread] %F:%L %M %msg%n</pattern>
            <!-- old-style log format
            <pattern>%5level [%thread] %date{ISO8601} %F (line %L) %msg%n</pattern>
            -->
        </encoder>
    </appender>

    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%date{ISO8601} %-5level [%thread] %C.%M\(%F:%L\) %msg%n</pattern>
        </encoder>
    </appender>


    <logger name="com.thinkaurelius.thrift" level="ERROR"/>
    <logger name="org.apache" level="WARN"/>

    <logger name="org.apache.cassandra" level="WARN"/>
    <logger name="org.apache.cassandra.service" level="WARN"/>
    <logger name="org.apache.cassandra.db.commitlog" level="WARN"/>
    <logger name="org.apache.cassandra.db.compaction" level="WARN"/>
    <logger name="org.apache.cassandra.config.DatabaseDescriptor" level="WARN"/>
    <logger name="org.apache.cassandra.service.CassandraDaemon" level="DEBUG"/>
    <logger name="org.apache.cassandra.service.ElassandraDaemon" level="TRACE"/>

    <!--    <logger name="org.elasticsearch.cluster.service" level="DEBUG" />
        " />
        <logger name="org.elassandra.index" level="DEBUG" />
        <logger name="org.elassandra.discovery" level="TRACE" />
        <logger name="org.elassandra.cluster.routing" level="WARN" />
        <logger name="org.elasticsearch.index" level="TRACE" />
        <logger name="org.elasticsearch.indices" level="TRACE" />
        <logger name="org.elassandra.index.shard" level="DEBUG" />
        <logger name="org.elassandra.shard" level="DEBUG" />
         />

        <logger name="org.elasticsearch" level="TRACE" />
        <logger name="org.elasticsearch.http" level="WARN"/>
        <logger name="org.elasticsearch.transport" level="WARN"/>-->

    <logger name="org.elassandra.cluster" level="TRACE"/>
    <logger name="org.elassandra.search" level="TRACE"/>
    <logger name="org.elassandra.cluster.routing" level="DEBUG"/>
    <logger name="org.elasticsearch.indices.cluster" level="DEBUG"/>


    <root level="INFO">
        <appender-ref ref="FILE"/>
        <!--<appender-ref ref="STDOUT" />-->
    </root>

</configuration>
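For completeness, this logback.xml is picked up by bind-mounting it into the container, assuming the image follows the official cassandra layout with config under /etc/cassandra (container name, host path and image tag below are placeholders):

docker run --name my-elassandra -v "$PWD/logback.xml:/etc/cassandra/logback.xml" strapdata/elassandra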

@hleb-albau


hleb-albau commented Oct 2, 2017

@vroyer @DBarthe What did I miss?

@hleb-albau


hleb-albau commented Oct 5, 2017

Also, when I created separate indexes for bitcoin_tx and bitcoin_block, everything works fine.
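For example, something along these lines (the index names are arbitrary, and the mapping bodies are trimmed to one field each just to illustrate the split):

curl -XPUT "http://localhost:9200/bitcoin_tx_index/" -d '{ "settings" : { "keyspace" : "blockchains" } }'
curl -XPUT "http://localhost:9200/bitcoin_tx_index/_mapping/bitcoin_tx" -d '{
    "bitcoin_tx" : {
      "properties": {
        "txid": {"type": "string", "index": "not_analyzed", "include_in_all": true, "cql_collection" : "singleton"}
      }
    }
}'

curl -XPUT "http://localhost:9200/bitcoin_block_index/" -d '{ "settings" : { "keyspace" : "blockchains" } }'
curl -XPUT "http://localhost:9200/bitcoin_block_index/_mapping/bitcoin_block" -d '{
    "bitcoin_block" : {
      "properties": {
        "hash": {"type": "string", "index": "not_analyzed", "include_in_all": true, "cql_collection" : "singleton"}
      }
    }
}'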

@DBarthe


DBarthe commented Oct 5, 2017

@hleb-albau, that's interesting. We'll most likely try to reproduce this bug on v5.5.0.

@vroyer


vroyer commented Oct 11, 2017

Do you still have an issue?
Using q=0000000018920212d4d4dcddb6e24f37d23b35a0078d270227c83051bb350049 is incorrect elasticsearch syntax...

@hleb-albau


hleb-albau commented Oct 11, 2017

Why is it incorrect? It's correct! If the result contains only one document type, everything works fine. We only have problems when the result contains two or more document types.
Also, I have the same problem using the Java API:

        val elasticResponse = elasticClient.prepareSearch("blockchains")
                .setQuery(elasticQuery)
                .setFrom(page * pageSize).setSize(pageSize).setExplain(true)
                .execute()
                .actionGet()

For now we decided to separate the entities into separate indexes, and that fixes our problem, but for the case described here the bug still remains.
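For reference, two ways to narrow the request through the Java API: restricting the shared index to a single document type, or targeting one of the split indexes (index name taken from the sketch above). Whether restricting with setTypes alone avoids the error on 2.4 I have not verified.

        // Restrict the search on the shared index to a single document type.
        val txOnlyResponse = elasticClient.prepareSearch("blockchains")
                .setTypes("bitcoin_tx")
                .setQuery(elasticQuery)
                .setFrom(page * pageSize).setSize(pageSize)
                .execute()
                .actionGet()

        // Or search one of the per-entity indexes created as a workaround.
        val splitIndexResponse = elasticClient.prepareSearch("bitcoin_tx_index")
                .setQuery(elasticQuery)
                .setFrom(page * pageSize).setSize(pageSize)
                .execute()
                .actionGet()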

@vroyer vroyer closed this in 819e4f1 Oct 18, 2017

@vroyer


vroyer commented Oct 18, 2017

Sorry, you're right, your query is OK.
I have reproduced and fixed your issue in the latest release (5.5.04).
This fix can be backported to 2.4+ if you really need it.
Thanks.

@hleb-albau


hleb-albau commented Oct 18, 2017

@vroyer
Hi, thanks for the fix. The backport is up to you; right now we are migrating to the 5.x branch. Thanks!
