Set up time-to-compute and render benchmarks #12

@heckj

Description

The time to render a chart could be affected by how a Mark's declaration is implemented, and by how that declaration is used as a template against the data to render a series of glyphs or shapes onto a Canvas.

It's worthwhile to set up a benchmark for the pathological case of huge charts (10,000 to 1,000,000 or more data points), measuring the time spent on the data processing that happens through the channel: picking the visual property, applying any potential transformations, and scaling the resulting value into the ranges appropriate for the canvas, categories, etc.

Ideally the goal would be to capture baselines as we develop, and most importantly to ensure that we don't accidentally introduce any O(n^2) (or worse) algorithms when processing a chart into its visual end result.

- [ ] A baseline that shows processing time vs. the size (number of values) of the data set would be a good starting point.
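A minimal sketch of what that baseline could look like, timing a stand-in for the channel's processing step at a few data-set sizes. `scaleToCanvas` here is a hypothetical placeholder for the real pipeline (pick visual property, transform, scale into the canvas range), not the library's API:

```swift
import Foundation

// Hypothetical stand-in for the channel's data-processing step:
// scale each raw value into the canvas's coordinate range.
// (Name and signature are illustrative, not the library's API.)
func scaleToCanvas(_ values: [Double], canvasWidth: Double) -> [Double] {
    guard let lo = values.min(), let hi = values.max(), hi > lo else {
        return values.map { _ in 0 }
    }
    return values.map { ($0 - lo) / (hi - lo) * canvasWidth }
}

// Baseline: wall-clock time vs. data-set size, to spot super-linear growth.
var results: [(count: Int, seconds: Double)] = []
for count in [10_000, 100_000, 1_000_000] {
    let data = (0..<count).map { _ in Double.random(in: 0...1) }
    let start = Date()
    let scaled = scaleToCanvas(data, canvasWidth: 800)
    let elapsed = Date().timeIntervalSince(start)
    precondition(scaled.count == count)
    results.append((count, elapsed))
    print("n=\(count): \(elapsed)s")
}
```

Plotting seconds against count (or checking the ratio between successive runs) makes accidental O(n^2) behavior obvious: a linear pipeline should roughly 10x in time when the data 10xs.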
