Rendering time metric #18
A bit of history
Current implementation

See the current implementation. It spits results to an output stream (`std::clog` in the example). This is the typical usage:

```cpp
#ifdef MAPNIK_STATS
mapnik::progress_timer __stats__(std::clog, "postgis_datasource::init");
#endif
```
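For illustration, the same scope-based timing idea can be sketched in JavaScript (the names `progressTimer` and the output format here are my own, not part of mapnik or node-mapnik): a wrapper measures the wall and CPU time of a section and writes the result to a stream when the section finishes, analogous to the C++ timer's destructor.

```js
// Hypothetical JS analogue of mapnik::progress_timer: time a section of
// code and report cpu/wall time to a stream when it ends.
function progressTimer(stream, name, fn) {
  const wallStart = process.hrtime.bigint();   // wall clock, nanoseconds
  const cpuStart = process.cpuUsage();         // cpu time, microseconds
  try {
    return fn();
  } finally {
    // runs even if fn throws, like a C++ destructor at end of scope
    const wallMs = Number(process.hrtime.bigint() - wallStart) / 1e6;
    const cpu = process.cpuUsage(cpuStart);
    const cpuMs = (cpu.user + cpu.system) / 1e3;
    stream.write(`${name}: cpu ${cpuMs.toFixed(3)} ms, wall ${wallMs.toFixed(3)} ms\n`);
  }
}

// Usage: wrap the section to be timed, mirroring the C++ scope above.
progressTimer(process.stderr, 'postgis_datasource::init', () => {
  for (let i = 0; i < 1e6; i++) Math.sqrt(i); // stand-in workload
});
```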
Requirements

Functional requirements

Other requirements:

Non-functional requirements
Proposed roadmap

Milestone 1: Coarse-grain DB vs rendering times (not per map yet)

At this point I'm pretty confident we can achieve this, useful both for benchmarking and for getting overall metrics in production.
Milestone 2: Finer-grain metrics (per map)
Milestone 3: Extended API and configurable storage strategy

Some ideas for a future milestone:
cc/ @jorgesancha
Why this is important

This is needed to better understand our platform and its usage. That will allow us to:
I'm working right now on different branches with the hope of getting the first iteration complete soon:
This is still WIP and probably lacks a lock to avoid concurrency issues... but the hardest parts of the integration are pretty much under control. I'll probably add some notes about the internals of mapnik and node-mapnik somewhere, which could be helpful to newcomers. Edit: added the Windshaft-cartodb branch.
A small test I use to check it actually gets data from node-mapnik (to be added to the corresponding branch):

```js
// file my-test.js
var mapnik = require('.');
var fs = require('fs');

// register fonts and datasource plugins
mapnik.register_default_fonts();
mapnik.register_default_input_plugins();

var map = new mapnik.Map(256, 256);
map.load('./test/stylesheet.xml', function(err, map) {
    if (err) throw err;
    map.zoomAll();
    var im = new mapnik.Image(256, 256);
    map.render(im, function(err, im) {
        if (err) throw err;
        console.log(mapnik.TimerStats.flush());
    });
});
```

and the output:
A bit ugly, but it starts to work in my development environment. This is just an example after tinkering a little with one simple map (based on world borders):
The same output beautified:
Some things I'd need to do:
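As an aside, the accumulate-then-flush structure suggested by the `TimerStats.flush()` call and its output above could be sketched like this (a minimal, illustrative model; the real node-mapnik implementation lives in C++ and this class name and `add()` method are assumptions):

```js
// Minimal sketch of a TimerStats-style registry: each timed section
// accumulates cpu_time/wall_time (ms) under its name, and flush()
// returns the totals and resets the store.
class TimerStats {
  constructor() {
    this.stats = {};
  }

  // record one measurement; repeated calls with the same name accumulate
  add(name, cpuMs, wallMs) {
    const entry = this.stats[name] || { cpu_time: 0, wall_time: 0 };
    entry.cpu_time += cpuMs;
    entry.wall_time += wallMs;
    this.stats[name] = entry;
  }

  // return the accumulated stats and start over
  flush() {
    const snapshot = this.stats;
    this.stats = {};
    return snapshot;
  }
}

const stats = new TimerStats();
stats.add('total_map_rendering', 44.79, 66.33);
stats.add('total_map_rendering', 9.75, 25.88); // second render accumulates
console.log(JSON.stringify(stats.flush(), null, 2));
console.log(JSON.stringify(stats.flush())); // prints {} after flushing
```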
Using
Something worth noting: if we wanted to deploy these changes before they were accepted upstream (or backported to the mapnik v3.0.x series, for that matter), we'd need to set up our own building and packaging infrastructure, as we're currently using vanilla mapnik and node-mapnik.
Thanks @dgaubert. I could run a few tests and validate that the metrics are consistent. I disabled cache and metatiling to get them.

```sh
# reset the logs
$ truncate -s0 /tmp/postgres.log

$ curl -s localhost:8181/stats | jsonlint -p
{
  "stats": {},
  "ok": true
}

# request a tile
$ curl -v -s -o /dev/null "http://cdb.localhost.lan:8181/api/v1/map/cdb@99ca6525@ee41f1f57f9852eea1910c4f84f4e5ac:1486641153877/1/11/524/761.png" 2>&1 | grep X-Tiler-Profiler
< X-Tiler-Profiler: {"setDBAuth":7,"res":159,"getTileOrGrid":159,"getRenderer":30,"render-png":129,"render":67,"encode":62,"total":166}

# get the stats
$ curl -s localhost:8181/stats | jsonlint -p
{
  "stats": {
    "total_map_rendering": {
      "cpu_time": 44.790000000000006,
      "wall_time": 66.33305549621582
    },
    "postgis_datasource::features_with_context::get_resultset": {
      "cpu_time": 0.522,
      "wall_time": 19.935131072998047
    }
  },
  "ok": true
}

# get the postgres query times from postgres itself, quick'n'dirty way
$ cat /tmp/postgres.log | grep duration: | grep -v parse | sed 's/.*duration: \([0-9.]*\) ms.*/\1/g' | awk '{SUM+=$1}END{print SUM}'
19.647
```

(Note: I'm using a debug build, so no optimizations and a lot of traces. The important point here is understanding the process and checking that the measurements make sense.)
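The quick'n'dirty `grep | sed | awk` pipeline above can also be written as a small function, e.g. if the cross-check were to be automated in a test. A sketch in JavaScript (the log lines below are made up for illustration; real postgres log formatting varies with settings):

```js
// Extract every "duration: N ms" value from a postgres log, skipping
// parse lines (mirrors `grep duration: | grep -v parse | sed | awk`).
function sumQueryDurations(logText) {
  let total = 0;
  for (const line of logText.split('\n')) {
    if (line.includes('parse')) continue; // mirror `grep -v parse`
    const m = line.match(/duration: ([0-9.]+) ms/);
    if (m) total += parseFloat(m[1]);
  }
  return total;
}

// illustrative log excerpt
const log = [
  'LOG:  duration: 12.300 ms  execute <unnamed>: SELECT ...',
  'LOG:  duration: 0.100 ms  parse <unnamed>: SELECT ...', // skipped
  'LOG:  duration: 7.347 ms  execute <unnamed>: SELECT ...',
].join('\n');
console.log(sumQueryDurations(log).toFixed(3)); // prints 19.647
```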
I added a new metric there, `save_to_string`. Taking into account that postgis queries are part of the `total_map_rendering` metric, here is another example:

```sh
$ curl -v -s -o /dev/null "http://cdb.localhost.lan:8181/api/v1/map/cdb@99ca6525@ee41f1f57f9852eea1910c4f84f4e5ac:1486641153877/1/11/524/761.png" 2>&1 | grep X-Tiler-Profiler
< X-Tiler-Profiler: {"req2params_setup":1,"setDBAuth":3,"res":38,"getTileOrGrid":38,"render-png":38,"render":26,"encode":11,"total":42}

$ curl -s localhost:8181/stats | jsonlint -p
{
  "stats": {
    "total_map_rendering": {
      "cpu_time": 9.753,
      "wall_time": 25.885820388793945
    },
    "save_to_string": {
      "cpu_time": 11.009,
      "wall_time": 11.010885238647461
    },
    "postgis_datasource::features_with_context::get_resultset": {
      "cpu_time": 1.195,
      "wall_time": 17.41814613342285
    }
  },
  "ok": true
}
```

I'll stop there until further evaluation.
cc/ @javisantana
Under review by upstream project maintainers: mapnik#3705
The review above is stalled. I'll try a different approach: collect the stats and pass them back to the caller, when the caller asks for them. It seems like node-mapnik passes a "closure" to the mapnik renderer, which turns out to be a pretty open structure.
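The "pass the stats back to the caller" idea can be sketched as follows. This is only an illustration of the calling convention, not node-mapnik's actual API: `render`, `collectStats`, and the attached `stats` field are all hypothetical names, and the renderer body is a stand-in.

```js
// Sketch: the caller hands an options object to render(); the renderer
// attaches its timing stats to that object (only when asked to) before
// invoking the callback, so opt-in callers can read them afterwards.
function render(image, options, callback) {
  const start = Date.now();
  // ... actual rendering work would happen here ...
  if (options.collectStats) {
    options.stats = {
      total_map_rendering: { wall_time: Date.now() - start },
    };
  }
  callback(null, image);
}

// Usage: request stats via the options object, read them in the callback.
const options = { collectStats: true };
render({ width: 256, height: 256 }, options, (err, im) => {
  if (err) throw err;
  console.log(options.stats); // filled in only when collectStats is set
});
```

The appeal of this shape is that callers that don't ask for stats pay nothing and see no API change.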
Systemtap

I just wanted to add a quick note here about a possible different approach: use SystemTap. Basically, with it we can instrument both kernel and user-space code and get metrics from there. I have very little experience with it, but I could easily make it run in my local setup.

I got the identifiers by getting symbols from the binaries (not stripped):

the script:

(I think it supports C++ symbol demangling, so the script could be simplified)

then running it:

As you can see in the examples, it is a very powerful tool and would not require changes to the source code, as long as the binaries are not stripped (contain the function symbols), which is the case with our current mapnik version. For this to work, a kernel with the matching headers and debugging symbols is required, AFAIK.

@pllopis if you have any experience with this tool, I'd appreciate a "crash course" :)
Here's another example that I wrote while checking metrics for the MVT blog:
and here's the output:
Superseded by mapnik#3767
Add metric to measure the time spent on rendering vs postgres