fill(previous) should find most recent value, even if outside query time range #6878

Open
beckettsean opened this Issue Jun 20, 2016 · 36 comments

@beckettsean
Contributor

beckettsean commented Jun 20, 2016

Feature Request

Proposal:

When executing fill(previous), the query should always have a value for previous, even if no point with that field falls inside the query time range.

Current behavior:

> select * from fp
name: fp
--------
time            value
2016-06-20T16:09:13Z    10
2016-06-20T16:19:13Z    100

> select max(value) from fp where time > now() - 20m group by time(5m) fill(previous)
name: fp
--------
time            max
2016-06-20T16:05:00Z    10
2016-06-20T16:10:00Z    10
2016-06-20T16:15:00Z    100
2016-06-20T16:20:00Z    100
2016-06-20T16:25:00Z    100

> select max(value) from fp where time > now() - 18m group by time(5m) fill(previous)
name: fp
--------
time            max
2016-06-20T16:10:00Z    
2016-06-20T16:15:00Z    100
2016-06-20T16:20:00Z    100
2016-06-20T16:25:00Z    100

Note the null value for the 16:10-16:15 bucket, despite there being a point at 16:09 with a value.

Desired behavior:

> select * from fp
name: fp
--------
time            value
2016-06-20T16:09:13Z    10
2016-06-20T16:19:13Z    100

> select max(value) from fp where time > now() - 20m group by time(5m) fill(previous)
name: fp
--------
time            max
2016-06-20T16:05:00Z    10
2016-06-20T16:10:00Z    10
2016-06-20T16:15:00Z    100
2016-06-20T16:20:00Z    100
2016-06-20T16:25:00Z    100

> select max(value) from fp where time > now() - 18m group by time(5m) fill(previous)
name: fp
--------
time            max
2016-06-20T16:10:00Z    10
2016-06-20T16:15:00Z    100
2016-06-20T16:20:00Z    100
2016-06-20T16:25:00Z    100

Use case:

Currently customers have to know when the last value was recorded in order to make sure that point is included in the time range. For irregular series that's a significant burden. If the system can always find the most recent value regardless of the lower time bound, then many state change queries become useful.

@jwheeler-gs
jwheeler-gs commented Jun 23, 2016
Are there any good work-arounds for this right now? I'm collecting sparse data and trying to graph it using fill(previous), which results in the first several values being null because the previous value falls outside the desired query time range.

The only thing I can think to do right now is to execute a second query to get last(field) with the time ending at the start time of the above query, then use that result to fill in the null values.

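A minimal sketch of that two-query workaround, reusing the fp measurement and the 18m window from the report above (the exact bounds are assumptions):

> select last(value) from fp where time <= now() - 18m
> select max(value) from fp where time > now() - 18m group by time(5m) fill(previous)

The last() result here would be the point at 16:09:13, and the client substitutes it for the leading nulls in the grouped result.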

@beckettsean
Contributor
beckettsean commented Jun 23, 2016

@jwheeler-gs that's the best workaround for now.

@jsternberg
Contributor
jsternberg commented Jun 27, 2016

I have a crude solution to this in the branch for #5943. The proper solution to this requires #5943 to be implemented, but the crude solution will be good enough for 1.0 if I can't get the proper solution working in time.

@jsternberg self-assigned this Jun 27, 2016

@beckettsean
Contributor
beckettsean commented Jul 6, 2016

What happens if fill(previous) is used but there was no data in any interval? Presumably we could pull the tag set for the previous point and use that? Specifically wondering about #6967.

@beckettsean referenced this issue in influxdata/docs.influxdata.com Jul 6, 2016: FEI entry #522 (Closed)

@jsternberg
Contributor
jsternberg commented Jul 6, 2016

I... have no idea. I will have to get back to you on that.

@jwheeler-gs
jwheeler-gs commented Jul 6, 2016

If there was no previous data, wouldn't it make sense to just return null until the first value is encountered? That would be the same as the current behavior but only in the case where there are no previous values.

@jsternberg
Contributor
jsternberg commented Jul 6, 2016

@jwheeler-gs yes, that is the current behavior and would be for this too. The issue is what happens when there is no data in that interval at all. Right now, fill will not fill a series that doesn't exist in the interval. @beckettsean was asking what would happen if fill(previous) was used and there was no data in the interval, but there was data in the past at some point for that series.

@jwheeler-gs
jwheeler-gs commented Jul 6, 2016

Aaah, I get it now. That's actually going to be a possible case for my use as well. I'd expect (hope for) fill to return that previous value for the entire interval. But then again, I'd also expect it not to return any data past the current point in time.

I'm using fill to provide data directly to a chart which can be adjusted to show any time window: past, present, or even future (where future data is filled in using prediction data from another dataset). What I really want is to see the previous value up to some specified point (the present timestamp) and then no data past that. This is probably a bit much to ask of Influx and is outside the scope of what it really needs to provide. It can still be handled easily enough on the receiving end by nulling out all values past the current timestamp.

@retorquere
retorquere commented Aug 25, 2016

Would it be possible to add fill(previous) for non-grouped queries? Something like

select GPS.latitude, GPS.longitude, Weather.temperature from car where time >= ... and time <= ... fill(previous)

Weather.temperature is written to the DB sparsely, and the reader must assume that as long as no new temperature is reported, the previous value holds.

I know I can do

select GPS.latitude, GPS.longitude, Weather.temperature from car where time >= ... and time <= ... group by time(1s) fill(previous)

but that slows down the request considerably, and I get loads of empty lines for times in the time range where there are no measurements.

@beckettsean
Contributor
beckettsean commented Aug 25, 2016

@retorquere the issue with having fill(previous) with no GROUP BY time() clause is that the system doesn't know when to fill. Should it return one point per nanosecond? Per second? A group by interval is needed to create a regular time series, so that the "missing" points are clear.
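To make the ambiguity concrete, here is the fp measurement from the report filled at two different intervals (the 1m variant is an illustrative assumption):

> select max(value) from fp where time > now() - 20m group by time(5m) fill(previous)
> select max(value) from fp where time > now() - 20m group by time(1m) fill(previous)

The first returns one filled row per five-minute bucket; the second returns roughly five times as many. Without a GROUP BY time() interval there is no defined set of timestamps to fill at all.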

@retorquere
retorquere commented Aug 25, 2016

@beckettsean I will defer to your superior knowledge on the matter of course, but conceptually, I'd figure it would return exactly the same points as with a regular non-grouped select, just with the nulls filled in by the value in the column in one of the rows already selected.

@beckettsean
Contributor
beckettsean commented Aug 25, 2016

@retorquere InfluxDB does not store nulls. There are no nulls returned in a non-grouped query.

@retorquere
retorquere commented Aug 25, 2016

If I submit this, however:

curl -G 'http://localhost:8086/query?pretty=true' --data-urlencode "db=mydb" --data-urlencode "q=select GPS.latitude, Weather.temperature from car where time >= '2015-10-13T14:16:13Z' limit 10"

I get this (where I'd love for there to be a way to have those nulls replaced by 5):
{
    "results": [
        {
            "series": [
                {
                    "name": "car",
                    "columns": [
                        "time",
                        "GPS.latitude",
                        "Weather.temperature"
                    ],
                    "values": [
                        [
                            "2015-10-13T14:16:14Z",
                            51.9893696,
                            5
                        ],
                        [
                            "2015-10-13T14:16:15Z",
                            51.9893696,
                            null
                        ],
                        [
                            "2015-10-13T14:16:16Z",
                            51.9893696,
                            null
                        ],
                        [
                            "2015-10-13T14:16:17Z",
                            51.9893696,
                            null
                        ],
                        [
                            "2015-10-13T14:16:18Z",
                            51.9893696,
                            null
                        ],
                        [
                            "2015-10-13T14:16:19Z",
                            51.9893696,
                            null
                        ],
                        [
                            "2015-10-13T14:16:20Z",
                            51.9893696,
                            null
                        ],
                        [
                            "2015-10-13T14:16:21Z",
                            51.9893696,
                            null
                        ],
                        [
                            "2015-10-13T14:16:22Z",
                            51.9893696,
                            null
                        ],
                        [
                            "2015-10-13T14:16:23Z",
                            51.9893696,
                            5
                        ]
                    ]
                }
            ]
        }
    ]
}

@beckettsean
Contributor
beckettsean commented Aug 25, 2016

That's an interesting use case, where one field is more densely populated than another. It might make sense to have fill(previous) in that case. Can you open a feature request (https://github.com/influxdata/influxdb/issues/new) describing that use case?

@richardeaxon
richardeaxon commented Nov 7, 2016

+1 for this feature.

@njbuch
njbuch commented Feb 11, 2017

+1 for this feature!

@titilambert
titilambert commented May 10, 2017

Yes please! This could be really useful!

@jeremyleijssen
jeremyleijssen commented May 19, 2017

Any updates on this? Or does anyone know a workaround?

I currently need this function.

@yellowpattern
yellowpattern commented Aug 12, 2017

+1 for this feature. Matter of fact, I thought "fill(previous)" and "fill(linear)" would do the job already.

@solars
solars commented Dec 30, 2017

+1, this would be very convenient if you have sensors that report only on change.

@rtmaster
rtmaster commented Jan 19, 2018

+1, very much needed.

@wonderful123
wonderful123 commented Jan 24, 2018

It doesn't make sense to return nulls with fill(previous) until data is found within the first GROUP BY range.

@NateZimmer
NateZimmer commented Mar 10, 2018

Super useful for "change of value" systems. If the state rarely changes, then at any given time the odds are there are no data points within the query window. That kinda cripples plotting this stuff.

@solars
solars commented Mar 10, 2018

Open since 20 Jun 2016, for such a ridiculously basic feature when talking about time series...

@rtmaster
rtmaster commented Mar 11, 2018

Any time series database should have this; it's really a basic concept.

@milannnnn
milannnnn commented Mar 28, 2018

Really a core function for any time series data logging. It should be implemented as soon as possible.

@kevingarman

+1

@philomatic
philomatic commented May 22, 2018

+1

This is really needed; it's a bit sad that there isn't even a proper workaround.

Is there a possibility to put a bounty on this?

@Kortenbach
Kortenbach commented May 23, 2018

I think there are 2 cases to consider:
1 - Logging is done at regular time intervals and the query covers more than one of these intervals.
2 - Logging is done at irregular time intervals.

In case #1, if data is missing then the data source was unavailable. Queries for that time period should probably return NIL or something similar.
In case #2, if data is missing then you cannot know what is going on: either the data source was unavailable, or there simply wasn't any data produced in the requested period of time.

I think the system behaviour of case #2 should be well defined before solving this problem. How can you be sure whether the data source was unavailable or not? Do we need a data source health indicator (and a link from every series to a data source)?

@Kortenbach
Kortenbach commented May 24, 2018

I am logging data "on change", so this is a BIG issue for me. I'm directly interacting with InfluxDB, so I have some room for workarounds.
Possible workarounds (I can think of):
1 - Insert a value into the database just after the starting point of the query, and delete it after the query is done. (Will this impact database performance in the long run because of the repeated inserts/deletes?)
2 - Shift the start time of the query backwards to a point where data is available, then delete the extra samples (from the new start up to the original start) from the JSON result. (I personally make sure that every day at 0:00 ALL series are logged once.) See the sketch after this comment.
3 - Find and read the previous sample and use fill() to insert it. (You will run into another fill issue with this approach.)
4 - Like 3, but manipulate the JSON result manually to emulate the fill.

What do you use as a workaround?
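A minimal sketch of workaround 2, assuming the daily 0:00 anchor described above; the measurement name power, the field value, and the originally requested 04:00 start are all hypothetical:

> select max(value) from power where time >= '2018-05-24T00:00:00Z' and time <= '2018-05-24T06:00:00Z' group by time(5m) fill(previous)

The window is widened back to midnight, where a point is guaranteed to exist; the rows before the 04:00 start actually requested are then dropped from the JSON result client-side.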

@yellowpattern
yellowpattern commented May 24, 2018

When you are directly interacting with the database, some of the above workarounds might work, but when your experience with the database is through another layer (such as Grafana), some of the workarounds you mention make no sense at all.

@Kortenbach
Kortenbach commented May 24, 2018

@yellowpattern: I agree. I am directly interacting with the database; I will include that in my original post.
The idea behind my post is to find out what people are using as a workaround, cuz I'm really struggling with this "bug".

@macrosak
macrosak commented May 24, 2018

@Kortenbach we are using approach (4), which allows us to execute the last() query (for finding the last value before the start of the query) and the fill query at the same time.
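A minimal sketch of that combined request, reusing retorquere's car measurement and time bound from earlier in the thread; InfluxQL accepts multiple semicolon-separated statements in a single request:

> select last("Weather.temperature") from car where time < '2015-10-13T14:16:13Z'; select last("Weather.temperature") from car where time >= '2015-10-13T14:16:13Z' group by time(1s) fill(previous)

Both result sets come back in one response; the client uses the standalone last() value to replace any leading nulls in the filled series.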

@Kortenbach
Kortenbach commented May 24, 2018

@macrosak Thank you for your reply. Good to hear that people are actually implementing workarounds!
I'm currently working on approach (2). I have a checkbox that can switch the workaround on or off...
I hope someone comes up with a permanent solution soon, cuz this doesn't feel right.

@ronomal
ronomal commented Jun 7, 2018

I expected this to be the default behavior for fill(previous) and fill(linear). I'm querying a dozen fields where only changes are stored; the workarounds really aren't ideal, and it would be great to see this feature implemented.

@yellowpattern
yellowpattern commented Jun 19, 2018

Where I get bitten by this bug most is when I use a query that has difference() or derivative() in it.

For queries beyond a certain time span, rather than getting a delta that is ~1k, I get a first delta that is ~14 million (or rather, the query uses "0" from outside the time window, takes the difference between that and the first value in the time window, and gets 14,000,000).

I'm getting so pissed off with this that I'm thinking of dropping InfluxDB for something else.
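A hedged sketch of the kind of query that trips over this, with a hypothetical net measurement and bytes counter field; per the report above, when the point preceding the window is excluded, the first difference can come out as the full counter value rather than the ~1k step:

> select difference(max(bytes)) from net where time > now() - 6h group by time(5m) fill(previous)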
