
Extend HW image decoder bench script to support multiple GPUs #5065

Merged

JanuszL merged 1 commit into NVIDIA:main from multigpu_hw_dec_bench on Sep 28, 2023

Conversation

@JanuszL (Contributor) commented on Sep 28, 2023

  • Adds the ability to run multiple data processing pipelines on
    multiple GPUs inside the HW image decoder bench script

Category:

Other (e.g. Documentation, Tests, Configuration)

Description:

  • Adds the ability to run multiple data processing pipelines on
    multiple GPUs inside the HW image decoder bench script (see the
    illustrative sketch below)
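
For illustration only: a minimal sketch of what one hardware-accelerated
decoding pipeline per GPU could look like with DALI's Python API. The data
path, batch size, and hw_decoder_load value are made-up placeholders, and
this is not the actual contents of tools/hw_decoder_bench.py.

    from nvidia.dali import pipeline_def, fn, types

    @pipeline_def
    def decoder_pipeline(data_path):
        # Read encoded JPEGs and decode them on the "mixed" backend, which can
        # offload part of the work to the GPU's hardware JPEG decoder.
        jpegs, _ = fn.readers.file(file_root=data_path, random_shuffle=True)
        return fn.decoders.image(jpegs, device="mixed", output_type=types.RGB,
                                 hw_decoder_load=0.75)

    # One pipeline per GPU, with consecutive ids starting from device_id.
    device_id, gpu_num = 0, 2                            # illustrative values
    pipes = [decoder_pipeline(data_path="/data/images",  # illustrative path
                              batch_size=64, num_threads=4, device_id=d)
             for d in range(device_id, device_id + gpu_num)]
    for p in pipes:
        p.build()
    results = [p.run() for p in pipes]                   # one batch per GPU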

Additional information:

Affected modules and functionalities:

  • tools/hw_decoder_bench.py

Key points relevant for the review:

  • N/A

Tests:

  • Existing tests apply
  • New tests added
    • Python tests
    • GTests
    • Benchmark
    • Other
  • N/A

Checklist

Documentation

  • Existing documentation applies
  • Documentation updated
    • Docstring
    • Doxygen
    • RST
    • Jupyter
    • Other
  • N/A

DALI team only

Requirements

  • Implements new requirements
  • Affects existing requirements
  • N/A

REQ IDs: N/A

JIRA TASK: N/A

Commit 5fd490d:

- adds an ability to run multiple data processing pipelines on
  multiple GPUs inside HW image decoder bench script

Signed-off-by: Janusz Lisiecki <jlisiecki@nvidia.com>
@JanuszL (Contributor, Author) commented on Sep 28, 2023

!build

@dali-automaton (Collaborator) commented:

CI MESSAGE: [10021551]: BUILD STARTED

@szalpal (Member) left a comment:

Added a tiny suggestion, LGTM regardless.

Comment on lines +24 to +25:

    parser.add_argument('-n', dest='gpu_num',
                        help='Number of GPUs used starting from device_id', default=1, type=int)

@szalpal (Member) commented:

I recall some issue that device_ids do not have to be consecutive (I might be wrong though). How about providing a list of devices instead? Something along these lines:

Suggested change:

    - parser.add_argument('-n', dest='gpu_num',
    -                     help='Number of GPUs used starting from device_id', default=1, type=int)
    + parser.add_argument('-d', dest='device_id', help='device_id', default=0, nargs='*')

And the usage would be:

    python hw_decoder_bench.py -d 1 3 -b ...
    python hw_decoder_bench.py -d 0 -b ...

    for di in args.device_id:
        pipes.append(DecoderPipeline(device_id=di))
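
To make the list-based variant concrete, here is a small self-contained
sketch (not the actual script): it adds type=int so the ids parse as
integers, uses a list default, and iterates over the parsed list.
DecoderPipeline is a hypothetical stand-in for the benchmark's pipeline
factory, included only so the sketch runs on its own.

    import argparse

    # Hypothetical stand-in for the benchmark's pipeline factory.
    class DecoderPipeline:
        def __init__(self, device_id):
            self.device_id = device_id

    parser = argparse.ArgumentParser()
    parser.add_argument('-d', dest='device_id', nargs='*', type=int, default=[0],
                        help='GPU ids to use, e.g. -d 1 3 (need not be consecutive)')
    args = parser.parse_args()

    # One pipeline per listed device id.
    pipes = [DecoderPipeline(device_id=di) for di in args.device_id]
    print([p.device_id for p in pipes])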

@JanuszL (Contributor, Author) replied:

The numbers are consecutive, but the order may differ from the PCI order.
I rather imagine this script being used for testing on heterogeneous systems, where the user just decides how many GPUs they are willing to use.
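
For comparison, a minimal sketch of the count-based interface described
above; the --device_id flag spelling is assumed here, while -n/gpu_num
matches the diff under review.

    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument('--device_id', type=int, default=0,
                        help='First GPU to use')
    parser.add_argument('-n', dest='gpu_num', type=int, default=1,
                        help='Number of GPUs used starting from device_id')
    args = parser.parse_args()

    # Device ids are consecutive, so the user only picks the start id and the count.
    device_ids = list(range(args.device_id, args.device_id + args.gpu_num))
    print(device_ids)  # e.g. --device_id 2 -n 3  ->  [2, 3, 4]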

@szalpal self-assigned this on Sep 28, 2023
@dali-automaton (Collaborator) commented:

CI MESSAGE: [10021551]: BUILD PASSED

@JanuszL merged commit 5fd490d into NVIDIA:main on Sep 28, 2023
5 checks passed
@JanuszL deleted the multigpu_hw_dec_bench branch on September 28, 2023 at 15:25
JanuszL added a commit to JanuszL/DALI that referenced this pull request on Oct 13, 2023:

Extend HW image decoder bench script to support multiple GPUs (#5065)

- adds an ability to run multiple data processing pipelines on
  multiple GPUs inside HW image decoder bench script

Signed-off-by: Janusz Lisiecki <jlisiecki@nvidia.com>