This repository has been archived by the owner on Sep 2, 2020. It is now read-only.

[Webservice] Cassandra has data but API does not return results #12

Closed
ghost opened this issue Feb 24, 2014 · 13 comments

Comments

@ghost

ghost commented Feb 24, 2014

Hi,
I set up Cyanite and it worked fine when I pushed a few metrics. But after pushing a lot of data over two or three days, Cyanite stopped displaying new metrics in graphite-web, and after I restarted Cyanite no metrics were displayed at all. Checking the Cyanite API: http://10.30.12.133:8080/paths?query=* returns []; fetching data from http://10.30.12.133:8080/metrics?path=UP_ZME_Test_30_14_10_30_12_42.loadavg.1min&from=1393150441&to=1393200441 returns {"error":"LIMIT must be strictly positive"}. Please help me. I also don't understand the rollups ("period" and "rollup") in the config, and why there are two rollups. I have set Cyanite up three times and hit this webservice error every time.

Result of the Cassandra query:
cqlsh:metric> select path,data,time from metric where path in ('UP_ZME_Test_30_14_10_30_12_42.loadavg.1min') and rollup = 600 and period = 105120 and time >= 1393212735 and time <= 1394212735 order by time asc limit 1;

path | data | time
--------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------
UP_ZME_Test_30_14_10_30_12_42.loadavg.1min | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] | 1393213200
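Regarding the rollup question: each rollup/period pair in the config defines one retention tier, where rollup is the point interval in seconds and period is the number of points kept, so having two rollups simply means two resolutions (fine-grained recent data plus coarse long-term history). A minimal Clojure sketch with illustrative values, the second of which matches the rollup = 600 and period = 105120 used in the query above:

    ;; Illustrative rollup definitions: retention in seconds = rollup * period.
    (def rollups
      [{:rollup 10  :period 60480}    ; 10-second points kept for one week
       {:rollup 600 :period 105120}]) ; 10-minute points kept for ~2 years

    ;; Total retention of each tier, in seconds:
    (map (fn [{:keys [rollup period]}] (* rollup period)) rollups)
    ;;=> (604800 63072000)  ; i.e. 7 days and 730 days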

@pyr
Owner

pyr commented Feb 28, 2014

Hi @hiepsikhokhao, can I get the full stacktrace from the cyanite server log, please?

@ghost
Author

ghost commented Mar 10, 2014

Hi,
I have been running Cyanite for a few days. I could fetch data from the API without problems, but after restarting Cyanite the same URL now returns {"error":"LIMIT must be strictly positive"}. I can still put new data into Cassandra, but I can no longer get it back through the webservice. Please fix it; I need Cyanite for production. Thanks so much.

DEBUG [2014-03-10 15:21:23,702] New I/O worker #19 - so.grep.cyanite.http - got request: {:remote-addr 10.30.56.131, :scheme :http, :request-method :get, :query-string query=%2A, :action :paths, :content-type nil, :keep-alive? true, :uri /paths, :server-name localhost, :params {:query }, :headers {user-agent python-requests/2.2.1 CPython/2.6.6 Linux/2.6.32-279.5.2.el6.x86_64, accept */, accept-encoding gzip, deflate, compress, host 10.30.12.133:8080}, :content-length nil, :server-port 8080, :character-encoding nil, :body nil}
DEBUG [2014-03-10 15:21:24,621] New I/O worker #20 - so.grep.cyanite.http - got request: {:remote-addr 10.30.56.131, :scheme :http, :request-method :get, :query-string query=TicketSystem-133_10_30_12_133.%2A, :action :paths, :content-type nil, :keep-alive? true, :uri /paths, :server-name localhost, :params {:query TicketSystem-133_10_30_12_133.}, :headers {user-agent python-requests/2.2.1 CPython/2.6.6 Linux/2.6.32-279.5.2.el6.x86_64, accept */, accept-encoding gzip, deflate, compress, host 10.30.12.133:8080}, :content-length nil, :server-port 8080, :character-encoding nil, :body nil}
DEBUG [2014-03-10 15:21:25,887] New I/O worker #21 - so.grep.cyanite.http - got request: {:remote-addr 10.30.56.131, :scheme :http, :request-method :get, :query-string query=TicketSystem-133_10_30_12_133.cpu.%2A, :action :paths, :content-type nil, :keep-alive? true, :uri /paths, :server-name localhost, :params {:query TicketSystem-133_10_30_12_133.cpu.}, :headers {user-agent python-requests/2.2.1 CPython/2.6.6 Linux/2.6.32-279.5.2.el6.x86_64, accept */, accept-encoding gzip, deflate, compress, host 10.30.12.133:8080}, :content-length nil, :server-port 8080, :character-encoding nil, :body nil}
4-03-10 15:21:14,910] Cassandra Java Driver worker-0 - com.datastax.driver.core.Session - Adding cassandra/10.30.12.133 to list of queried hosts
INFO [2014-03-10 15:21:15,141] main - so.grep.cyanite.carbon - starting carbon handler
INFO [2014-03-10 15:21:15,163] main - so.grep.cyanite.carbon - starting carbon handler
DEBUG [2014-03-10 15:21:57,257] main - so.grep.cyanite.config - building :store with so.grep.cyanite.store/cassandra-metric-store
INFO [2014-03-10 15:21:57,258] main - so.grep.cyanite.store - connecting to cassandra cluster
DEBUG [2014-03-10 15:21:57,276] main - com.datastax.driver.core.Cluster - Starting new cluster with contact points [cassandra/10.30.12.133]
DEBUG [2014-03-10 15:21:58,477] main - com.datastax.driver.core.ControlConnection - [Control connection] Refreshing node list and token map
DEBUG [2014-03-10 15:21:58,586] main - com.datastax.driver.core.ControlConnection - [Control connection] Refreshing schema
DEBUG [2014-03-10 15:21:58,891] main - com.datastax.driver.core.ControlConnection - [Control connection] Successfully connected to cassandra/10.30.12.133
DEBUG [2014-03-10 15:21:58,900] Cassandra Java Driver worker-0 - com.datastax.driver.core.Session - Adding cassandra/10.30.12.133 to list of queried hosts
INFO [2014-03-10 15:22:02,566] main - so.grep.cyanite.carbon - starting carbon handler
DEBUG [2014-03-10 15:22:03,285] New I/O worker #19 - so.grep.cyanite.http - got request: {:remote-addr 10.74.40.173, :scheme :http, :request-method :get, :query-string path=TicketSystem-133_10_30_12_133.cpu.guest&from=1393302320&to, :action :metrics, :content-type nil, :keep-alive? true, :uri /metrics, :server-name localhost, :params {:path TicketSystem-133_10_30_12_133.cpu.guest, :from 1393302320, :to nil}, :headers {cookie PHPSESSID=ch8rkcl5l0l33s8bc3b81lo481; zpauth=eyJsYXN0dGltZSI6MTM5NDQxODgzNjA4MywidXNlcl9hZ2VudCI6IkE0Q0E3NTRDNTI2MTY4NzM4MDk4N0NFQTRDRTMxM0ZDIiwiYWNjb3VudCI6ImxvY3RoMiIsImNsaWVudF9pcCI6IjExOC4xMDIuNy4xNDYifQ, accept-language en-US,en;q=0.8,vi;q=0.6, accept-encoding gzip,deflate,sdch, user-agent Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/32.0.1700.107 Chrome/32.0.1700.107 Safari/537.36, accept text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,/;q=0.8, cache-control max-age=0, connection keep-alive, host 10.30.12.133:8080}, :content-length nil, :server-port 8080, :character-encoding nil, :body nil}
DEBUG [2014-03-10 15:22:03,288] New I/O worker #19 - so.grep.cyanite.http - fetching paths: TicketSystem-133_10_30_12_133.cpu.guest
DEBUG [2014-03-10 15:22:03,317] New I/O worker #19 - so.grep.cyanite.store - fetching paths from store: () 600 105120 1393302320 1394439723 0
ERROR [2014-03-10 15:22:05,811] New I/O worker #19 - so.grep.cyanite.http - could not process request
com.datastax.driver.core.exceptions.InvalidQueryException: LIMIT must be strictly positive
at com.datastax.driver.core.exceptions.InvalidQueryException.copy(InvalidQueryException.java:35)
at com.datastax.driver.core.ResultSetFuture.extractCauseFromExecutionException(ResultSetFuture.java:269)
at com.datastax.driver.core.ResultSetFuture.getUninterruptibly(ResultSetFuture.java:183)
at com.datastax.driver.core.Session.execute(Session.java:111)
at qbits.alia$execute.doInvoke(alia.clj:190)
at clojure.lang.RestFn.invoke(RestFn.java:457)
at so.grep.cyanite.store$fetch.invoke(store.clj:228)
at so.grep.cyanite.http$fn__14067.invoke(http.clj:79)
at clojure.lang.MultiFn.invoke(MultiFn.java:227)
at so.grep.cyanite.http$wrap_process$fn__14077.invoke(http.clj:97)
at so.grep.cyanite.http$wrap_process.invoke(http.clj:93)
at so.grep.cyanite.http$start$handler__14088.invoke(http.clj:115)
at aleph.http.netty$start_http_server$fn$reify__13462$stage0_13448__13463.invoke(netty.clj:77)
at aleph.http.netty$start_http_server$fn$reify__13462.run(netty.clj:77)
at lamina.core.pipeline$fn__3632$run__3639.invoke(pipeline.clj:31)
at lamina.core.pipeline$resume_pipeline.invoke(pipeline.clj:61)
at lamina.core.pipeline$start_pipeline.invoke(pipeline.clj:78)
at aleph.http.netty$start_http_server$fn$reify__13462.invoke(netty.clj:77)
at aleph.http.netty$start_http_server$fn__13445.invoke(netty.clj:77)
at lamina.connections$server_generator_$this$reify__13241$stage0_13227__13242.invoke(connections.clj:376)
at lamina.connections$server_generator_$this$reify__13241.run(connections.clj:376)
at lamina.core.pipeline$fn__3632$run__3639.invoke(pipeline.clj:31)
at lamina.core.pipeline$resume_pipeline.invoke(pipeline.clj:61)
at lamina.core.pipeline$start_pipeline.invoke(pipeline.clj:78)
at lamina.connections$server_generator_$this$reify__13241.invoke(connections.clj:376)
at lamina.connections$server_generator_$this__13224.invoke(connections.clj:376)
at lamina.connections$server_generator_$this__13224.invoke(connections.clj:371)
at lamina.trace.instrument$instrument_fn$fn__6340$fn__6374.invoke(instrument.clj:140)
at lamina.trace.instrument$instrument_fn$fn__6340.invoke(instrument.clj:140)
at clojure.lang.AFn.applyToHelper(AFn.java:161)
at clojure.lang.RestFn.applyTo(RestFn.java:132)
at clojure.lang.AFunction$1.doInvoke(AFunction.java:29)
at clojure.lang.RestFn.invoke(RestFn.java:408)
at lamina.connections$server_generator$fn$reify__13288.run(connections.clj:407)
at lamina.core.pipeline$fn__3632$run__3639.invoke(pipeline.clj:31)
at lamina.core.pipeline$resume_pipeline.invoke(pipeline.clj:61)
at lamina.core.pipeline$subscribe$fn__3665.invoke(pipeline.clj:118)
at lamina.core.result.ResultChannel.success_BANG_(result.clj:388)
at lamina.core.result$fn__1315$success_BANG___1318.invoke(result.clj:37)
at lamina.core.queue$dispatch_consumption.invoke(queue.clj:111)
at lamina.core.queue.EventQueue.enqueue(queue.clj:327)
at lamina.core.queue$fn__1946$enqueue__1961.invoke(queue.clj:131)
at lamina.core.graph.node.Node.propagate(node.clj:282)
at lamina.core.graph.core$fn__1875$propagate__1880.invoke(core.clj:34)
at lamina.core.graph.node.Node.propagate(node.clj:282)
at lamina.core.graph.core$fn__1875$propagate__1880.invoke(core.clj:34)
at lamina.core.channel.Channel.enqueue(channel.clj:63)
at lamina.core.utils$fn__1070$enqueue__1071.invoke(utils.clj:74)
at lamina.core$enqueue.invoke(core.clj:107)
at aleph.http.core$collapse_reads$fn__12303.invoke(core.clj:229)
at lamina.core.graph.propagator$bridge$fn__2919.invoke(propagator.clj:194)
at lamina.core.graph.propagator.BridgePropagator.propagate(propagator.clj:61)
at lamina.core.graph.core$fn__1875$propagate__1880.invoke(core.clj:34)
at lamina.core.graph.node.Node.propagate(node.clj:282)
at lamina.core.graph.core$fn__1875$propagate__1880.invoke(core.clj:34)
at lamina.core.channel.SplicedChannel.enqueue(channel.clj:111)
at lamina.core.utils$fn__1070$enqueue__1071.invoke(utils.clj:74)
at lamina.core$enqueue.invoke(core.clj:107)
at aleph.netty.server$server_message_handler$reify__9192.handleUpstream(server.clj:135)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.handler.codec.http.HttpContentEncoder.messageReceived(HttpContentEncoder.java:81)
at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:459)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:536)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at aleph.netty.core$upstream_traffic_handler$reify__8884.handleUpstream(core.clj:258)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at aleph.netty.core$connection_handler$reify__8877.handleUpstream(core.clj:240)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at aleph.netty.core$upstream_error_handler$reify__8867.handleUpstream(core.clj:199)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:109)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:90)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at aleph.netty.core$cached_thread_executor$reify__8830$fn__8831.invoke(core.clj:78)
at clojure.lang.AFn.run(AFn.java:24)
at java.lang.Thread.run(Thread.java:722)
Caused by: com.datastax.driver.core.exceptions.InvalidQueryException: LIMIT must be strictly positive
at com.datastax.driver.core.ResultSetFuture.convertException(ResultSetFuture.java:307)
at com.datastax.driver.core.ResultSetFuture$ResponseCallback.onSet(ResultSetFuture.java:125)
at com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:213)
at com.datastax.driver.core.RequestHandler.onSet(RequestHandler.java:334)
at com.datastax.driver.core.Connection$Dispatcher.messageReceived(Connection.java:534)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:68)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:109)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:90)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
... 1 more

@pyr
Owner

pyr commented Mar 11, 2014

Thanks for the report, will look into it


@neilprosser
Contributor

Here's another example with some formatting. This is using master as of yesterday evening (it's got the /ping route and the Carbon-shorthand rollups).

DEBUG [2014-03-26 08:43:10,195] New I/O worker #7 - org.spootnik.cyanite.http - got request:  {:remote-addr 1.1.1.1, :params {:path some.lovely.*.metrics, :from 1395770400}, :headers {host redacted-host.somewhere.com, user-agent Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/33.0.1750.152 Safari/537.36, x-forwarded-port 80, connection keep-alive, accept text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8, accept-language en-GB,en-US;q=0.8,en;q=0.6, x-forwarded-for 0.0.0.0, x-bluecoat-via 657b615249652dce, accept-encoding gzip,deflate,sdch, x-forwarded-proto http, cache-control max-stale=0}, :server-port 8080, :content-length nil, :keep-alive? true, :content-type nil, :character-encoding nil, :action :metrics, :uri /metrics/, :server-name redacted-server.somewhere.com, :query-string path=some.lovely.*.metrics&from=1395770400, :body nil, :scheme :http, :request-method :get}
DEBUG [2014-03-26 08:43:10,196] New I/O worker #7 - org.spootnik.cyanite.http - fetching paths:  some.lovely.*.metrics
DEBUG [2014-03-26 08:43:10,197] New I/O worker #7 - org.spootnik.cyanite.store - fetching paths from store:  () 60 4320 1395770400 1395823390 0
ERROR [2014-03-26 08:43:10,206] New I/O worker #7 - org.spootnik.cyanite.http - could not process request
com.datastax.driver.core.exceptions.InvalidQueryException: LIMIT must be strictly positive
    at com.datastax.driver.core.exceptions.InvalidQueryException.copy(InvalidQueryException.java:35)
    at com.datastax.driver.core.ResultSetFuture.extractCauseFromExecutionException(ResultSetFuture.java:269)
    at com.datastax.driver.core.ResultSetFuture.getUninterruptibly(ResultSetFuture.java:183)
    at com.datastax.driver.core.Session.execute(Session.java:111)
    at qbits.alia$execute.doInvoke(alia.clj:190)
    at clojure.lang.RestFn.invoke(RestFn.java:457)
    at org.spootnik.cyanite.store$fetch.invoke(store.clj:228)
    at org.spootnik.cyanite.http$fn__14073.invoke(http.clj:80)
    at clojure.lang.MultiFn.invoke(MultiFn.java:227)
    at org.spootnik.cyanite.http$wrap_process$fn__14085.invoke(http.clj:102)
    at org.spootnik.cyanite.http$wrap_process.invoke(http.clj:98)
    at org.spootnik.cyanite.http$start$handler__14096.invoke(http.clj:120)
    at aleph.http.netty$start_http_server$fn$reify__13468$stage0_13454__13469.invoke(netty.clj:77)
    at aleph.http.netty$start_http_server$fn$reify__13468.run(netty.clj:77)
    at lamina.core.pipeline$fn__3632$run__3639.invoke(pipeline.clj:31)
    at lamina.core.pipeline$resume_pipeline.invoke(pipeline.clj:61)
    at lamina.core.pipeline$start_pipeline.invoke(pipeline.clj:78)
    at aleph.http.netty$start_http_server$fn$reify__13468.invoke(netty.clj:77)
    at aleph.http.netty$start_http_server$fn__13451.invoke(netty.clj:77)
    at lamina.connections$server_generator_$this$reify__13247$stage0_13233__13248.invoke(connections.clj:376)
    at lamina.connections$server_generator_$this$reify__13247.run(connections.clj:376)
    at lamina.core.pipeline$fn__3632$run__3639.invoke(pipeline.clj:31)
    at lamina.core.pipeline$resume_pipeline.invoke(pipeline.clj:61)
    at lamina.core.pipeline$start_pipeline.invoke(pipeline.clj:78)
    at lamina.connections$server_generator_$this$reify__13247.invoke(connections.clj:376)
    at lamina.connections$server_generator_$this__13230.invoke(connections.clj:376)
    at lamina.connections$server_generator_$this__13230.invoke(connections.clj:371)
    at lamina.trace.instrument$instrument_fn$fn__6340$fn__6374.invoke(instrument.clj:140)
    at lamina.trace.instrument$instrument_fn$fn__6340.invoke(instrument.clj:140)
    at clojure.lang.AFn.applyToHelper(AFn.java:154)
    at clojure.lang.RestFn.applyTo(RestFn.java:132)
    at clojure.lang.AFunction$1.doInvoke(AFunction.java:29)
    at clojure.lang.RestFn.invoke(RestFn.java:408)
    at lamina.connections$server_generator$fn$reify__13294.run(connections.clj:407)
    at lamina.core.pipeline$fn__3632$run__3639.invoke(pipeline.clj:31)
    at lamina.core.pipeline$resume_pipeline.invoke(pipeline.clj:61)
    at lamina.core.pipeline$subscribe$fn__3665.invoke(pipeline.clj:118)
    at lamina.core.result.ResultChannel.success_BANG_(result.clj:388)
    at lamina.core.result$fn__1315$success_BANG___1318.invoke(result.clj:37)
    at lamina.core.queue$dispatch_consumption.invoke(queue.clj:111)
    at lamina.core.queue.EventQueue.enqueue(queue.clj:327)
    at lamina.core.queue$fn__1946$enqueue__1961.invoke(queue.clj:131)
    at lamina.core.graph.node.Node.propagate(node.clj:282)
    at lamina.core.graph.core$fn__1875$propagate__1880.invoke(core.clj:34)
    at lamina.core.graph.node.Node.propagate(node.clj:282)
    at lamina.core.graph.core$fn__1875$propagate__1880.invoke(core.clj:34)
    at lamina.core.channel.Channel.enqueue(channel.clj:63)
    at lamina.core.utils$fn__1070$enqueue__1071.invoke(utils.clj:74)
    at lamina.core$enqueue.invoke(core.clj:107)
    at aleph.http.core$collapse_reads$fn__12309.invoke(core.clj:229)
    at lamina.core.graph.propagator$bridge$fn__2919.invoke(propagator.clj:194)
    at lamina.core.graph.propagator.BridgePropagator.propagate(propagator.clj:61)
    at lamina.core.graph.core$fn__1875$propagate__1880.invoke(core.clj:34)
    at lamina.core.graph.node.Node.propagate(node.clj:282)
    at lamina.core.graph.core$fn__1875$propagate__1880.invoke(core.clj:34)
    at lamina.core.channel.SplicedChannel.enqueue(channel.clj:111)
    at lamina.core.utils$fn__1070$enqueue__1071.invoke(utils.clj:74)
    at lamina.core$enqueue.invoke(core.clj:107)
    at aleph.netty.server$server_message_handler$reify__9192.handleUpstream(server.clj:135)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.handler.codec.http.HttpContentEncoder.messageReceived(HttpContentEncoder.java:81)
    at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:459)
    at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:536)
    at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at aleph.netty.core$upstream_traffic_handler$reify__8884.handleUpstream(core.clj:258)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at aleph.netty.core$connection_handler$reify__8877.handleUpstream(core.clj:240)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at aleph.netty.core$upstream_error_handler$reify__8867.handleUpstream(core.clj:199)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:109)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:90)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at aleph.netty.core$cached_thread_executor$reify__8830$fn__8831.invoke(core.clj:78)
    at clojure.lang.AFn.run(AFn.java:22)
    at java.lang.Thread.run(Thread.java:724)
Caused by: com.datastax.driver.core.exceptions.InvalidQueryException: LIMIT must be strictly positive
    at com.datastax.driver.core.ResultSetFuture.convertException(ResultSetFuture.java:307)
    at com.datastax.driver.core.ResultSetFuture$ResponseCallback.onSet(ResultSetFuture.java:125)
    at com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:213)
    at com.datastax.driver.core.RequestHandler.onSet(RequestHandler.java:334)
    at com.datastax.driver.core.Connection$Dispatcher.messageReceived(Connection.java:534)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:68)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:109)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:90)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    ... 1 more

@neilprosser
Contributor

It looks like we're using a 0 limit because (in my case) (max-points '() 60 1395770400 1395823390) is returning 0. I wonder whether that fetch could be skipped if (count paths) is 0? That doesn't explain why my list of paths is empty given I've got a whole Cassandra store full of stuff! That's next.
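A minimal sketch of how the zero arises, assuming max-points multiplies the number of rollup intervals in the window by the number of paths (a hypothetical reconstruction, not the actual store.clj definition):

    ;; Hypothetical reconstruction: an upper bound on the rows a fetch
    ;; can return, one point per rollup interval per path.
    (defn max-points
      [paths rollup from to]
      (* (count paths)
         (inc (quot (- to from) rollup))))

    ;; With no paths the bound collapses to zero, and that zero ends up
    ;; as the CQL LIMIT, which Cassandra rejects as not strictly positive.
    (max-points '() 60 1395770400 1395823390)  ;=> 0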

@pyr
Owner

pyr commented Mar 26, 2014

@hiepsikhokhao , @neilprosser should be fixed in eeb2d58

@neilprosser
Contributor

Boom! That was quick. I'll deploy that now.

@pyr
Owner

pyr commented Mar 26, 2014

I think I know why your query might not work; that's a separate issue.


@neilprosser
Contributor

That hasn't worked for me. paths is empty but not nil.

Changing line 224 to check (pos? (count paths)) instead of paths short-circuits it straight to the result. Equally could use (not-empty paths).
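The underlying gotcha is that in Clojure an empty seq is truthy (only nil and false are falsy), so a bare (if paths ...) never short-circuits on (). A minimal sketch of the guard, using hypothetical names for the fetch and the empty response:

    ;; paths may be () rather than nil, and () is truthy in Clojure,
    ;; so emptiness has to be tested explicitly.
    (if (not-empty paths)
      ;; fetch-from-store stands in for the real store query
      (fetch-from-store paths rollup period from to)
      ;; otherwise short-circuit to an empty result instead of issuing
      ;; a CQL query with LIMIT 0
      {:from from :to to :step rollup :series {}})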

@pyr
Owner

pyr commented Mar 26, 2014

Indeed, sorry about this. Updated in ccaf3fb; I'll try to cook up a test for this.

@neilprosser
Contributor

This looks fixed now.

@pyr
Owner

pyr commented Mar 28, 2014

ok, thanks!

@pyr pyr closed this as completed Mar 28, 2014
@ghost
Author

ghost commented Apr 4, 2014

Thanks a lot @pyr for your help :)
