This may be an adjunct to #378. It would be very useful to be able to run jobs via a job launcher in cluster/HPC environments. My environment frequently involves cross-compilation and/or differing operational modes that are configured by a job launcher. For instance, on our Intel Xeon Phi system, I can shift binary performance by 30% by submitting with:
Do you mean running many single-core benchmarks in parallel, or benchmarking a single multi-core program? If the latter, you can probably run the necessary commands from your benchmark suite, and use `setup_cache`/`track_*` to record information reported by the batch system.
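As a rough sketch of that workaround: in asv, a module-level `setup_cache` runs once and its return value is passed to the benchmarks, and a `track_*` function's return value is recorded as a tracked quantity. The `SLURM_*` environment variables below are SLURM-specific examples, not something asv itself defines; substitute whatever your batch system exports.

```python
import os

def setup_cache():
    # Runs once per benchmark run; capture batch-system context here.
    # SLURM_JOB_ID / SLURM_CPUS_ON_NODE are illustrative SLURM variables;
    # other schedulers (PBS, LSF, ...) export different names.
    return {
        "job_id": os.environ.get("SLURM_JOB_ID", "local"),
        "cpus": int(os.environ.get("SLURM_CPUS_ON_NODE", "0")),
    }

def track_allocated_cpus(info):
    """asv records the returned number alongside the other results."""
    return info["cpus"]

track_allocated_cpus.unit = "cpus"
```

This doesn't launch anything through the scheduler itself; it only records the environment the benchmarks happened to run under, which is enough to correlate results with submission mode after the fact.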
AFAIK there's a large amount of variety in batch systems, so I'm not sure an out-of-the-box solution is feasible. (There's a plugin system available, so in principle each user could write their own support, but it's undocumented and the API is probably not stable. Also, there's no concept of running different benchmarks in parallel.)
Generally, I'm running a single multi-core program, and usually another project's benchmarks rather than my own. The historical case has been running the built-in NumPy and SciPy benchmarks to track issues with vendor-provided NumPy and SciPy packages and to catch environmental regressions.
The job launcher itself launches from a system that is architecturally different from the Intel Xeon Phi.