This page describes the benchmarking results from our Technical Report (TR) and from our conference submission, and provides all tools needed to reproduce them.

Benchmark repository

We keep all raw result files and scripts in a benchmark repository.

Sound call graphs (Section 6.1 of TR, Section 4.1 of Conference Paper)

To produce dynamic call graphs, use our JVMTI agent. To produce static call graphs, use Soot with Probe, via the main class probe.soot.CallGraph. To compare the two call graphs, use the main class probe.CallGraphDiff.
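The exact invocations depend on your setup; as a minimal sketch, assuming probe.jar and soot.jar on the classpath and call graphs stored as dynamic.gxl and static.gxl (the jar names, file names, and argument order are placeholders, not taken from our scripts), the two Probe main classes could be driven from Java like this:

```java
import java.io.IOException;

/**
 * Minimal sketch: invoke Probe's main classes as external processes.
 * Assumptions (not from this page): probe.jar/soot.jar locations, the
 * Unix-style classpath separator, and the call-graph file names.
 */
public class RunCallGraphComparison {
    static void run(String... cmd) throws IOException, InterruptedException {
        new ProcessBuilder(cmd).inheritIO().start().waitFor();
    }

    public static void main(String[] args) throws Exception {
        // Build a static call graph with Soot + Probe
        // (arguments are placeholders; consult Probe's documentation for the actual options).
        run("java", "-cp", "probe.jar:soot.jar", "probe.soot.CallGraph", "static.gxl");
        // Compare the static call graph against a dynamically recorded one
        // (argument order is an assumption).
        run("java", "-cp", "probe.jar", "probe.CallGraphDiff", "static.gxl", "dynamic.gxl");
    }
}
```

Running the classes directly from the command line works equally well; the sketch merely fixes the order of the two steps.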

Stability of Log Files (Section 6.2 of TR)

We provide all 10 iterations of log files here. You can reproduce them using the Play-Out Agent (see Downloads).
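The Play-Out Agent is a JVMTI agent and is therefore attached at JVM start-up via the standard -agentpath option. A minimal sketch, assuming a library name libplayout.so and the DaCapo jar as the target program (both names are placeholders):

```java
import java.io.IOException;

/**
 * Sketch: run one benchmark iteration with the Play-Out Agent attached.
 * The library name (libplayout.so), its location, and the DaCapo jar and
 * benchmark name are assumptions for illustration only.
 */
public class RecordLogFiles {
    public static void main(String[] args) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(
                "java",
                "-agentpath:/path/to/libplayout.so", // standard JVM flag for native (JVMTI) agents
                "-jar", "dacapo.jar", "avrora")      // benchmark to execute
            .inheritIO()
            .start();
        p.waitFor();
    }
}
```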

Effect of Input Size and Code Coverage (Section 6.3 of TR, Section 4.2 of Conference Paper)

The computed intersections of call graphs are available here (in Probe's call-graph format). This is the Java program (an extension to Probe) that we used to create these intersections.
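The linked program operates on Probe's call-graph format; purely to illustrate the idea of intersecting call graphs, the following sketch intersects edge sets read from plain text files with one "caller -> callee" edge per line (an assumed format, not Probe's):

```java
import java.nio.file.*;
import java.util.*;

/**
 * Illustration only: intersect call-graph edge sets.
 * Input format (assumed, not Probe's): one "caller -> callee" edge per line.
 */
public class IntersectCallGraphs {
    public static void main(String[] args) throws Exception {
        Set<String> intersection = null;
        for (String file : args) {
            Set<String> edges = new HashSet<>(Files.readAllLines(Paths.get(file)));
            if (intersection == null) {
                intersection = edges;              // first graph initializes the set
            } else {
                intersection.retainAll(edges);     // keep only edges present in every graph
            }
        }
        if (intersection != null) {
            intersection.forEach(System.out::println); // edges common to all input graphs
        }
    }
}
```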

You can see the impact of the input size on the log files by inspecting the different log files here. For the TR: the number of phantom classes can be validated through the soot.log files in this directory.

The log files for the interactive programs can be found here.

Performance overhead (Section 6.4 of TR, Section 4.3 of Conference Paper)

To measure the performance overhead of the agents, run the DaCapo benchmarks with both agents. To measure the runtime of Soot, run Soot on the directory produced by the Play-Out Agent. We provide scripts that automate these tasks. Our runtime logs can be found in the same place as the call-graph results (the soot.log files).
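Our scripts live in the benchmark repository; as a rough sketch of the measurement itself (the jar name, agent path, and benchmark name are placeholders), one could compare wall-clock times with and without the agent as follows:

```java
import java.io.IOException;

/**
 * Sketch: compare wall-clock time of a benchmark run with and without an agent.
 * Jar name, agent path, and benchmark name are placeholders.
 */
public class MeasureOverhead {
    static long time(String... cmd) throws IOException, InterruptedException {
        long start = System.nanoTime();
        new ProcessBuilder(cmd).inheritIO().start().waitFor();
        return (System.nanoTime() - start) / 1_000_000; // milliseconds
    }

    public static void main(String[] args) throws Exception {
        long baseline  = time("java", "-jar", "dacapo.jar", "avrora");
        long withAgent = time("java", "-agentpath:/path/to/libplayout.so",
                              "-jar", "dacapo.jar", "avrora");
        System.out.printf("baseline: %d ms, with agent: %d ms, overhead: %.1f%%%n",
                baseline, withAgent, 100.0 * (withAgent - baseline) / baseline);
    }
}
```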