From Tom Kelly on Slack: "one thing that would be awesome in sandmark is the ability to configure the benchmark wrapper that collects the stats. Right now we have `orun -o <output> -- <program-to-run> <program-arguments>`, which is static in the dune file. It would be nice if we could have the user configure in a central place `<command> -o <output> -- <program-to-run> <program-arguments>`. This can be powerful as you can then get off-the-shelf wrappers in there like `ocperf.py` and `strace`. It should also allow the user to define the arguments they want to pass to `perf`. For example, they could record all the benchmarks for a given target."
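A minimal sketch of the idea: the wrapper command becomes a single configurable template rather than a fixed `orun` invocation in the dune file. The variable names below (`WRAPPER`, `OUTPUT`, `PROG`, `PROG_ARGS`) are hypothetical and not actual sandmark configuration keys; they only illustrate how swapping the wrapper in one central place would change every benchmark invocation.

```shell
# Hypothetical central configuration (names are assumptions, not sandmark's):
WRAPPER="orun"                  # could be swapped for e.g. "strace" or a perf wrapper
OUTPUT="bench.result"
PROG="./my_benchmark"           # hypothetical benchmark binary
PROG_ARGS="--iters 10"

# Expand the template: <command> -o <output> -- <program-to-run> <program-arguments>
CMD="$WRAPPER -o $OUTPUT -- $PROG $PROG_ARGS"
echo "$CMD"
```

Changing only `WRAPPER` (and any wrapper-specific arguments folded into it) would then redirect every benchmark through the new collector without touching the per-benchmark dune rules.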