
runtime error: integer divide by zero. #413

Merged 1 commit into master from fix-413-small-group-by-time on Apr 8, 2014

Conversation

jvshahid
Contributor

@jvshahid jvshahid commented Apr 8, 2014

After creating a lot of continuous queries, I keep getting "divide by zero" messages.

********************************BUG********************************
Database: tracking
Query: [SELECT MEDIAN(total) as total, MEDIAN(total_kB) as total_kB, MEDIAN(uptime) as uptime, MEDIAN(reqs) as reqs, MEDIAN(bytes_per_sec) as bytes_per_sec, MEDIAN(bytes_per_req) as bytes_per_req, MEDIAN(busy_workers) as busy_workers, MEDIAN(idle_workers) as idle_workers FROM /^project.(\d+)\.apache\.16.*/ GROUP BY time(1440) where time > 1396863886702460u and time < 1396863887702444u]
Error: runtime error: integer divide by zero. Stacktrace: goroutine 18 [running]:
common.RecoverFunc(0xc222cd7aa0, 0x8, 0xc212ff5000, 0x17b, 0x0)
        /home/vagrant/influxdb/src/common/recover.go:13 +0x106
runtime.panic(0x884c40, 0x1004a9d)
        /home/vagrant/bin/go/src/pkg/runtime/panic.c:248 +0x106
cluster.(*ShardData).QueryResponseBufferSize(0xc210134460, 0xc21064eaa0, 0x64, 0x0)
        /home/vagrant/influxdb/src/cluster/shard.go:361 +0x12d
coordinator.(*CoordinatorImpl).queryShards(0xc2100c88c0, 0xc21064eaa0, 0xc2209d20f0, 0x1, 0x1, ...)
        /home/vagrant/influxdb/src/coordinator/coordinator.go:374 +0x111
coordinator.(*CoordinatorImpl).runQuerySpec(0xc2100c88c0, 0xc21064eaa0, 0x7fc57c0a7678, 0xc22dd08fc8, 0x0, ...)
        /home/vagrant/influxdb/src/coordinator/coordinator.go:414 +0x440
coordinator.(*CoordinatorImpl).runQuery(0xc2100c88c0, 0xc21cf602c0, 0x7fc57c0a7620, 0xc2100d8000, 0xc222cd7aa0, ...)
        /home/vagrant/influxdb/src/coordinator/coordinator.go:146 +0xdb
coordinator.(*CoordinatorImpl).RunQuery(0xc2100c88c0, 0x7fc57c0a7620, 0xc2100d8000, 0xc222cd7aa0, 0x8, ...)
        /home/vagrant

I suspect the query doesn't return any data, so some calculations afterwards fail.

@jvshahid jvshahid added bug and removed bug labels Apr 7, 2014
@jvshahid jvshahid added this to the 0.5.6 milestone Apr 7, 2014
@@ -32,6 +32,8 @@ const (
HOST_ID_OFFSET = uint64(10000)

SHARDS_TO_QUERY_FOR_LIST_SERIES = 10

MAX_BUFFER_SIZE = 100000
Member

Maybe make this a config option? Remember to apply it as the default even when the option is absent from the user's config file.
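The suggested pattern could look something like this, a minimal sketch assuming a hypothetical `Config` struct and option name (`MaxBufferSize` is illustrative, not the actual InfluxDB config field):

```go
package main

import "fmt"

// Config is a hypothetical configuration struct; MaxBufferSize stands in
// for the buffer-size option discussed above.
type Config struct {
	MaxBufferSize int
}

const defaultMaxBufferSize = 100000

// applyDefaults fills in the default when the option is missing (zero)
// from the user's config file, so the cap is always set.
func applyDefaults(c *Config) {
	if c.MaxBufferSize <= 0 {
		c.MaxBufferSize = defaultMaxBufferSize
	}
}

func main() {
	var c Config // option not present in the config file
	applyDefaults(&c)
	fmt.Println(c.MaxBufferSize) // 100000
}
```

The zero-value check means a user-supplied positive value is kept, while an omitted option silently falls back to the compiled-in default.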

@jvshahid
Copy link
Contributor

jvshahid commented Apr 8, 2014

Hey @nichdiekuh we found the bug that you were hitting. This was caused by an assumption that the group by interval is always greater than a second. The group by interval you're using is 1440 nanoseconds (which is probably not what you wanted to do), which was truncated to 0 seconds and caused the divide by zero error. That said, as I mentioned earlier, you probably meant a day and not 1440 nanoseconds. In order to achieve this you can do group by time(1d) or group by time(24h). Read http://influxdb.org/docs/query_language/ under the Group By section for more information on group by intervals.

jvshahid added a commit that referenced this pull request Apr 8, 2014
@jvshahid jvshahid merged commit a04aed4 into master Apr 8, 2014
@jvshahid jvshahid deleted the fix-413-small-group-by-time branch April 8, 2014 20:45
@nichdiekuh
Author

That's funny, that was actually a bug in my code as well; it was supposed to be "1400m". I'm glad both sides now have one less bug. Thank you! :-)
