Running in a batch system or job launcher #510

Open

wscullin opened this issue Apr 18, 2017 · 3 comments
Labels: idea (Low-priority enhancement suggestion)

wscullin commented Apr 18, 2017
This may be an adjunct to #378. It would be very useful to be able to run jobs via a job launcher in cluster / HPC environments. My environment frequently involves cross-compilation and/or differing operational modes that are set up by the job launcher. For instance, on our Intel Xeon Phi system, I can shift binary performance by 30% by submitting with:

```sh
OMP_NUM_THREADS=64 aprun -n 1 -N 1 numactl --membind 1 python sample_bench.py
```

where the job launcher itself runs on a system that is architecturally different from the Xeon Phi.

pv (Collaborator) commented Apr 18, 2017

Do you mean running many single-core benchmarks in parallel, or benchmarking a single multi-core program? If the latter, you can probably run the commands you need from within your benchmark suite, and use setup_cache/track_* to record the information output by the batch system.

AFAIK there's a large amount of variety among batch systems, so I'm not sure an out-of-the-box solution is feasible. (There is a plugin system, so it's in principle possible for each user to write their own integration, but it's undocumented and the API is probably not stable. Also, asv has no concept of running different benchmarks in parallel.)
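
As a rough illustration of the setup_cache/track_* pattern described above, here is a minimal sketch of an asv benchmark file. It assumes the launcher invocation from the original comment, and a hypothetical sample_bench.py that prints a single timing (in seconds) as the last line of its output:

```python
# Sketch: setup_cache runs once per environment, and its return value is
# passed to each benchmark as the first argument. The aprun/numactl flags
# mirror the command from this issue; sample_bench.py and its output
# format are assumptions.
import os
import subprocess

def setup_cache():
    env = dict(os.environ, OMP_NUM_THREADS="64")
    result = subprocess.run(
        ["aprun", "-n", "1", "-N", "1", "numactl", "--membind", "1",
         "python", "sample_bench.py"],
        env=env, capture_output=True, text=True, check=True,
    )
    return result.stdout

def track_launcher_time(output):
    # Report whatever figure the batch job printed, here assumed to be
    # a float number of seconds on the last line of stdout.
    return float(output.strip().splitlines()[-1])

track_launcher_time.unit = "seconds"
```

asv discovers the setup_cache and track_ functions by name, so the recorded number shows up alongside ordinary benchmark results.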

wscullin (Author) commented

Generally I'm running a single multi-core program, and usually another project's benchmarks rather than my own. The historical case has been running the built-in NumPy and SciPy benchmarks to track issues with vendor-provided NumPy and SciPy packages and to catch environmental regressions.
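
One launcher-agnostic workaround for that use case, sketched here rather than taken from asv, is a shim placed ahead of the real interpreter on PATH, so that any harness shelling out to python lands inside the launcher. REAL_PYTHON and the launcher flags are assumptions:

```python
#!/usr/bin/env python3
# Hypothetical shim named `python`: re-exec the invocation through the
# job launcher from this issue. REAL_PYTHON must point at the actual
# binary, not this shim, or the shim would recurse.
import os
import sys

REAL_PYTHON = "/usr/bin/python3"  # assumption: adjust for your system
LAUNCHER = ["aprun", "-n", "1", "-N", "1", "numactl", "--membind", "1"]

env = dict(os.environ, OMP_NUM_THREADS="64")
os.execvpe(LAUNCHER[0], LAUNCHER + [REAL_PYTHON] + sys.argv[1:], env)
```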

pv (Collaborator) commented Apr 18, 2017

Patches adding such a feature are of course welcome, if you find a clean way to do it that's not specific to a certain batch system.

pv added the "enhancement" (Triaged as an enhancement request) label on Apr 18, 2017
pv added the "idea" (Low-priority enhancement suggestion) label on Jun 1, 2019
pv removed the "enhancement" label on Jun 30, 2019