Added ability to set maximum returned datapoints for json requests #170

Merged
merged 2 commits into graphite-project:master

7 participants

@philiphoy

If you use a web GUI for Graphite that renders charts in the browser, such as Giraffe or Graphene, the browser struggles when a chart spans a long historical period with a large number of datapoints.

This pull request allows the client to set a limit on the number of datapoints returned, for example capping it at the pixel width of the graph canvas. When the limit is exceeded, the datapoints are consolidated by some ratio over the whole period, akin to what happens when a chart image is rendered on the server.
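From the client side, a dashboard could cap the response at the canvas width like this (a minimal sketch; the host, metric name, and width are placeholders, not part of this PR):

```python
from urllib.parse import urlencode

# Hypothetical example: limit the JSON response to one datapoint per pixel.
canvas_width = 800
params = urlencode({
    "target": "servers.web01.loadavg",  # placeholder metric name
    "from": "-7d",
    "format": "json",
    "maxDataPoints": canvas_width,
})
url = "http://graphite.example.com/render?" + params
print(url)
```

When the selected week of data holds more than 800 points per series, the server consolidates them down before serialising, instead of shipping every raw point to the browser.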

@gingerlime

+1 - this looks like a very good idea, and would improve performance of external dashboards that pick up json data rather than perform the rendering inside graphite-web.

@SEJeff
Collaborator

The code looks pretty sane to me as well. I'd also LOVE to see this with giraffe

@obfuscurity
Owner

Without having actually tested the code, :+1: from me.

@jyoo

+1 +1 +1 please! This would be wonderful for sending a fixed number of points for long time periods - an autosummary if you will.

@obfuscurity obfuscurity merged commit a328fad into graphite-project:master
@gingerlime

Thanks for merging @obfuscurity. Does this also go into the 0.9.x branch / PyPI, or would that require a separate pull request? In other words, when and how does this land in the pip-installed version?

(have to admit, I don't understand the graphite project organization / workflow...)

@obfuscurity
Owner

This doesn't cherry-pick cleanly into 0.9.x. If someone has the time, please submit a new PR against the 0.9.x branch and we can review that separately.

@drawks
Collaborator
@obfuscurity
Owner

@drawks You're absolutely right. I didn't mean to imply that it would be merged in immediately, only that we'd need a separate PR if it were to go into 0.9.x.

s/doesn't/wouldn't/
@gingerlime gingerlime referenced this pull request from a commit in gingerlime/graphite-web
@gingerlime gingerlime "backporting" #170 into 0.9.x 5b1015d
@gingerlime gingerlime referenced this pull request from a commit in gingerlime/graphite-web
@gingerlime gingerlime "backporting" #170 into 0.9.x updated docs be16210
@gingerlime

I've created a pull request for 0.9.x, hope it can get accepted. I tested it and it works as expected. The code / change is identical to the one that went into master.

@tomerpeled

Hi,
I think we should improve this maxDataPoints behaviour or create another option, lastXDataPoints, which won't be consolidated.
There are cases where you want Graphite to calculate some function (such as Holt-Winters) over a large period of time, but only return the last X datapoints.
For example: I use the holtWintersConfidenceBands function over a period of several days (for a precise calculation) but am only interested in the results for the last several minutes. Note that in this case it is important to run holtWintersConfidenceBands over a large period, but it is not important to return every datapoint from that period. Returning only the last X datapoints (without any consolidation) would greatly improve performance for the calling client.

We could add another filter option, such as lastXDataPoints.

The code could look like this:

if 'lastXDataPoints' in requestOptions and any(data):
  lastXDataPoints = requestOptions['lastXDataPoints']
  for series in data:
    timestamps = range(series.start, series.end, series.step)
    datapoints = zip(series[-lastXDataPoints:], timestamps[-lastXDataPoints:])
    series_data.append( dict(target=series.name, datapoints=datapoints) )

What do you think?
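The intended behaviour of that snippet can be shown standalone (plain lists stand in for graphite's TimeSeries here; all names and values are illustrative):

```python
# Standalone sketch of the proposed lastXDataPoints option: keep only the
# last X (value, timestamp) pairs, with no consolidation.
start, end, step = 0, 100, 10      # ten points at t = 0, 10, ..., 90
values = list(range(10))           # illustrative values 0..9
lastXDataPoints = 3

timestamps = list(range(start, end, step))
datapoints = list(zip(values[-lastXDataPoints:], timestamps[-lastXDataPoints:]))
print(datapoints)  # only the final three points survive
```

The full range of values would still be available to any function evaluated earlier in the pipeline; only the serialised output is truncated.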

@kamaradclimber kamaradclimber referenced this pull request from a commit in criteo-forks/graphite-web
@gingerlime gingerlime "backporting" #170 into 0.9.x 71155c0
@kamaradclimber kamaradclimber referenced this pull request from a commit in criteo-forks/graphite-web
@gingerlime gingerlime "backporting" #170 into 0.9.x updated docs 39c1f69
@fessyfoo fessyfoo referenced this pull request from a commit in opentable/graphite-web
@gingerlime gingerlime "backporting" #170 into 0.9.x
(cherry picked from commit 5b1015d)
141b97e
@fessyfoo fessyfoo referenced this pull request from a commit in opentable/graphite-web
@gingerlime gingerlime "backporting" #170 into 0.9.x updated docs
(cherry picked from commit be16210)
9972c73
Commits on Mar 12, 2013
  1. @philiphoy
Commits on Mar 13, 2013
  1. @philiphoy
Showing with 36 additions and 4 deletions.
  1. +5 −0 docs/render_api.rst
  2. +31 −4 webapp/graphite/render/views.py
5 docs/render_api.rst
@@ -662,6 +662,11 @@ max
 .. deprecated:: 0.9.0
    See yMax_
+maxDataPoints
+-------------
+Set the maximum number of datapoints returned when using json content.
+
+If the number of datapoints in a selected range exceeds the maxDataPoints value, the datapoints over the whole period are consolidated.
 minorGridLineColor
 ------------------
35 webapp/graphite/render/views.py
@@ -12,6 +12,7 @@
 See the License for the specific language governing permissions and
 limitations under the License."""
 import csv
+import math
 from datetime import datetime
 from time import time
 from random import shuffle
@@ -131,10 +132,34 @@ def renderView(request):
   if format == 'json':
     series_data = []
-    for series in data:
-      timestamps = range(series.start, series.end, series.step)
-      datapoints = zip(series, timestamps)
-      series_data.append( dict(target=series.name, datapoints=datapoints) )
+    if 'maxDataPoints' in requestOptions and any(data):
+      startTime = min([series.start for series in data])
+      endTime = max([series.end for series in data])
+      timeRange = endTime - startTime
+      maxDataPoints = requestOptions['maxDataPoints']
+      for series in data:
+        numberOfDataPoints = timeRange/series.step
+        if maxDataPoints < numberOfDataPoints:
+          valuesPerPoint = math.ceil(float(numberOfDataPoints) / float(maxDataPoints))
+          secondsPerPoint = int(valuesPerPoint * series.step)
+          # Nudge start over a little bit so that the consolidation bands align with each call
+          # removing 'jitter' seen when refreshing.
+          nudge = secondsPerPoint + (series.start % series.step) - (series.start % secondsPerPoint)
+          series.start = series.start + nudge
+          valuesToLose = int(nudge/series.step)
+          for r in range(1, valuesToLose):
+            del series[0]
+          series.consolidate(valuesPerPoint)
+          timestamps = range(series.start, series.end, secondsPerPoint)
+        else:
+          timestamps = range(series.start, series.end, series.step)
+        datapoints = zip(series, timestamps)
+        series_data.append(dict(target=series.name, datapoints=datapoints))
+    else:
+      for series in data:
+        timestamps = range(series.start, series.end, series.step)
+        datapoints = zip(series, timestamps)
+        series_data.append(dict(target=series.name, datapoints=datapoints))

   if 'jsonp' in requestOptions:
     response = HttpResponse(
@@ -233,6 +258,8 @@ def parseOptions(request):
     requestOptions['jsonp'] = queryParams['jsonp']
   if 'noCache' in queryParams:
     requestOptions['noCache'] = True
+  if 'maxDataPoints' in queryParams and queryParams['maxDataPoints'].isdigit():
+    requestOptions['maxDataPoints'] = int(queryParams['maxDataPoints'])
   requestOptions['localOnly'] = queryParams.get('local') == '1'
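The consolidation step in the hunk above can be illustrated as a simplified standalone function (the real patch mutates graphite's TimeSeries in place and its `consolidate` handles None values; the function name and the averaging here are illustrative):

```python
import math

def consolidate_for_max(values, start, step, max_points):
    """Sketch of the patch's logic: average groups of valuesPerPoint
    samples so that at most roughly max_points remain."""
    number_of_points = len(values)
    if number_of_points <= max_points:
        return values, start, step
    values_per_point = math.ceil(number_of_points / max_points)
    seconds_per_point = values_per_point * step
    # Nudge the start so consolidation buckets align between calls,
    # removing the "jitter" otherwise seen on refresh.
    nudge = seconds_per_point + (start % step) - (start % seconds_per_point)
    values_to_lose = nudge // step
    values = values[values_to_lose:]
    start += nudge
    consolidated = [
        sum(chunk) / len(chunk)
        for chunk in (values[i:i + values_per_point]
                      for i in range(0, len(values), values_per_point))
    ]
    return consolidated, start, seconds_per_point

# Twenty one-minute samples, capped at five points:
out, new_start, new_step = consolidate_for_max(list(range(20)), 0, 60, 5)
print(out, new_start, new_step)
```

Because the nudge anchors bucket boundaries to multiples of secondsPerPoint rather than to the query's start time, two refreshes a few seconds apart consolidate the same raw samples together and return stable values.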