support OpenGL GPU profiling through GL_ARB_timer_query extension #82

Closed
gregory38 opened this Issue May 27, 2012 · 3 comments


@gregory38
Contributor

Hello,

OpenGL 3.3 allows querying a timer object, so you can measure the real time spent on the GPU instead of the inaccurate time spent on the CPU. In short, the extension can work in 2 ways (sketched below):
1/ You can get the delta (in ns) between glBeginQuery(GL_TIME_ELAPSED, ...) and glEndQuery(GL_TIME_ELAPSED).
2/ You can get an absolute timestamp (in ns) with glQueryCounter. You can emulate the first mode with 2 glQueryCounter calls and a manual diff.
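
A minimal sketch of both modes, assuming a current GL 3.3 context with entry points already loaded; error checking omitted:

```c
GLuint q, ts[2];
GLuint64 elapsed, t0, t1;

glGenQueries(1, &q);
glGenQueries(2, ts);

/* Mode 1: elapsed time between Begin/End. */
glBeginQuery(GL_TIME_ELAPSED, q);
/* ... GL commands to measure ... */
glEndQuery(GL_TIME_ELAPSED);
glGetQueryObjectui64v(q, GL_QUERY_RESULT, &elapsed); /* blocks until the result is available */

/* Mode 2: two absolute timestamps, diffed manually. */
glQueryCounter(ts[0], GL_TIMESTAMP);
/* ... GL commands to measure ... */
glQueryCounter(ts[1], GL_TIMESTAMP);
glGetQueryObjectui64v(ts[0], GL_QUERY_RESULT, &t0);
glGetQueryObjectui64v(ts[1], GL_QUERY_RESULT, &t1);
elapsed = t1 - t0; /* nanoseconds */
```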

Extension: http://www.opengl.org/registry/specs/ARB/timer_query.txt
An example tutorial: http://www.lighthouse3d.com/cg-topics/opengl-timer-query/

Additional note: the query commands go down the GPU pipeline, so you must wait long enough before reading back the value, otherwise the readback can stall the pipeline. Alternatively, you can use a ping-pong buffer and read the data of the previous frame (which has already completed, and therefore its query counter too).
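
One way to avoid the stall (a sketch, reusing the query object q from above) is to poll GL_QUERY_RESULT_AVAILABLE and only fetch the result once the GPU has produced it:

```c
GLint available = 0;
GLuint64 result;

glGetQueryObjectiv(q, GL_QUERY_RESULT_AVAILABLE, &available);
if (available) {
    /* Safe: the GPU has already written the result. */
    glGetQueryObjectui64v(q, GL_QUERY_RESULT, &result);
} else {
    /* Not ready yet: skip and retry later, instead of
     * blocking the pipeline on GL_QUERY_RESULT. */
}
```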

Now the question remains: what to profile? I feel it would be too heavy to annotate all GL commands, and maybe not useful. For the moment my ideas are (see the sketch after this list):
1/ Use glBeginQuery/glEndQuery around the dump of a call.
2/ Use some glQueryCounter calls (with a ping-pong buffer) around the swap-buffers command to get the performance per frame.
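
A sketch of idea 2/, where swap_buffers() is a hypothetical placeholder for the platform's swap call: two timestamp queries are ping-ponged so each frame issues a new query and reads back the one issued a frame earlier, which should be complete by then.

```c
#include <stdio.h>

static GLuint frame_ts[2];  /* created once with glGenQueries(2, frame_ts) */
static GLuint64 prev = 0;
static unsigned frame = 0;

glQueryCounter(frame_ts[frame % 2], GL_TIMESTAMP);
swap_buffers();             /* hypothetical swap call */

if (frame > 0) {
    GLuint64 now;
    /* Read the timestamp issued last frame; it is a frame old,
     * so this readback is unlikely to stall. */
    glGetQueryObjectui64v(frame_ts[(frame - 1) % 2], GL_QUERY_RESULT, &now);
    if (prev)
        printf("frame GPU time: %llu ns\n", (unsigned long long)(now - prev));
    prev = now;
}
frame++;
```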

Gregory

@exjam exjam was assigned Jul 24, 2012
@jrfonseca
Member

James is working on this in https://github.com/exjam/apitrace

@jrfonseca
Member

This has now been merged into master. Let us know if you run into any issues.

@jrfonseca jrfonseca closed this Aug 6, 2012