Extend HW image decoder bench script to support multiple GPUs #5065
Conversation
- adds the ability to run multiple data processing pipelines on multiple GPUs inside the HW image decoder bench script Signed-off-by: Janusz Lisiecki <jlisiecki@nvidia.com>
Force-pushed from 3349de9 to 30c8798
!build
CI MESSAGE: [10021551]: BUILD STARTED
Added a tiny suggestion, LGTM regardless.
parser.add_argument('-n', dest='gpu_num',
                    help='Number of GPUs used starting from device_id', default=1, type=int)
I recall some issue where device_ids do not have to be consecutive (I might be wrong though). How about providing a list of devices instead? Something along these lines:
parser.add_argument('-n', dest='gpu_num',
                    help='Number of GPUs used starting from device_id', default=1, type=int)
parser.add_argument('-d', dest='device_id', help='device_id', default=0, nargs='*')
And the usage would be:
python hw_decoder_bench.py -d 1 3 -b ...
python hw_decoder_bench.py -d 0 -b ...
for di in args.device_id:
    pipes.append(DecoderPipeline(device_id=di))
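The list-based idea above can be sketched as a minimal, self-contained example. `DecoderPipeline` is a hypothetical stand-in for the real DALI pipeline, and the `-d` default is adjusted to a list (with `type=int`) so the loop works whether or not the flag is given:

```python
import argparse

# Hypothetical stand-in for the real DALI decoding pipeline.
class DecoderPipeline:
    def __init__(self, device_id):
        self.device_id = device_id

def make_pipelines(argv=None):
    parser = argparse.ArgumentParser()
    # '-d' accepts an arbitrary, possibly non-consecutive list of device ids.
    parser.add_argument('-d', dest='device_id', help='device_id',
                        default=[0], nargs='*', type=int)
    args = parser.parse_args(argv)
    # Iterate over the list directly -- no assumption that ids are consecutive.
    return [DecoderPipeline(device_id=di) for di in args.device_id]

pipes = make_pipelines(['-d', '1', '3'])
print([p.device_id for p in pipes])  # -> [1, 3]
```

This matches the suggested usage `python hw_decoder_bench.py -d 1 3 -b ...`: one pipeline per listed device, in the given order.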
The numbers are consecutive, but the order may differ from the PCI order.
I'd rather imagine this script being used to test on heterogeneous systems, where the user will just decide how many GPUs they are willing to use.
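The `-n` approach that the PR keeps can be sketched the same way. Again, `DecoderPipeline` is a hypothetical stand-in for the real pipeline; device ids are assumed consecutive starting at `device_id`, per the author's comment:

```python
import argparse

# Hypothetical stand-in for the real DALI decoding pipeline.
class DecoderPipeline:
    def __init__(self, device_id):
        self.device_id = device_id

def make_pipelines(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument('-d', dest='device_id', help='device_id',
                        default=0, type=int)
    parser.add_argument('-n', dest='gpu_num',
                        help='Number of GPUs used starting from device_id',
                        default=1, type=int)
    args = parser.parse_args(argv)
    # Device ids are assumed consecutive, starting at device_id.
    return [DecoderPipeline(device_id=di)
            for di in range(args.device_id, args.device_id + args.gpu_num)]

pipes = make_pipelines(['-d', '0', '-n', '2'])
print([p.device_id for p in pipes])  # -> [0, 1]
```

With this shape, the user picks only how many GPUs to use; the script derives the ids.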
CI MESSAGE: [10021551]: BUILD PASSED
…#5065) - adds the ability to run multiple data processing pipelines on multiple GPUs inside the HW image decoder bench script Signed-off-by: Janusz Lisiecki <jlisiecki@nvidia.com>
Category:
Other (e.g. Documentation, Tests, Configuration)
Description:
Adds the ability to run multiple data processing pipelines on multiple GPUs inside the HW image decoder bench script.
Additional information:
Affected modules and functionalities:
Key points relevant for the review:
Tests:
Checklist
Documentation
DALI team only
Requirements
REQ IDs: N/A
JIRA TASK: N/A