Enable CUDA 11.8 and Hopper support #4308

Merged: 1 commit into NVIDIA:main from amp_next on Oct 5, 2022

Conversation

@JanuszL (Contributor) commented Oct 4, 2022

  • adds a CUDA 11.8-based build
  • extends the image decoder to support H100 (Hopper)

Signed-off-by: Janusz Lisiecki jlisiecki@nvidia.com

Category:

Other (e.g. Documentation, Tests, Configuration)

Description:

  • adds a CUDA 11.8-based build (see the version-check sketch below)
  • extends the image decoder to support H100 (Hopper)

Additional information:

Affected modules and functionalities:

Key points relevant for the review:

Tests:

  • Existing tests apply
    • HwDecoderUtilizationTest*
    • HwDecoderSliceUtilizationTest*
    • HwDecoderCropUtilizationTest*
  • New tests added
    • Python tests
    • GTests
    • Benchmark
    • Other
  • N/A

Checklist

Documentation

  • Existing documentation applies
  • Documentation updated
    • Docstring
    • Doxygen
    • RST
    • Jupyter
    • Other
  • N/A

DALI team only

Requirements

  • Implements new requirements
  • Affects existing requirements
  • N/A

REQ IDs: N/A

JIRA TASK: N/A
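
For context on the CUDA 11.8 bullet above: the diff further down relies on nvjpegGetHardwareDecoderInfo, which, as this PR implies, is only available starting with the nvJPEG that ships with CUDA 11.8. Below is a minimal sketch (not DALI's build tooling) for confirming that a build really compiles against an 11.8-era toolkit, assuming the headers come from a standard CUDA toolkit install.

// Minimal sketch, not DALI's build tooling: check which CUDA / nvJPEG headers
// a "CUDA 11.8 based build" actually compiles against.
// CUDART_VERSION comes from cuda_runtime_api.h (11080 for CUDA 11.8);
// NVJPEG_VER_MAJOR / NVJPEG_VER_MINOR come from nvjpeg.h.
#include <cstdio>
#include <cuda_runtime_api.h>
#include <nvjpeg.h>

int main() {
  std::printf("CUDA runtime headers: %d\n", CUDART_VERSION);
  std::printf("nvJPEG headers: %d.%d\n", NVJPEG_VER_MAJOR, NVJPEG_VER_MINOR);
#if CUDART_VERSION < 11080
  std::printf("pre-11.8 toolkit: nvjpegGetHardwareDecoderInfo is not declared\n");
#endif
  return 0;
}

A header-only check like this needs no linking against nvJPEG; compiling with the toolkit's include path (e.g. -I/usr/local/cuda/include) is enough.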

@dali-automaton (Collaborator): CI MESSAGE: [6085306]: BUILD STARTED

@stiepan self-assigned this Oct 4, 2022
@@ -115,6 +118,14 @@ class nvJPEGDecoder : public Operator<MixedBackend>, CachedDecoderImpl {
#endif
LOG_LINE << "Using NVJPEG_BACKEND_HARDWARE" << std::endl;
CUDA_CALL(nvjpegJpegStateCreate(handle_, &state_hw_batched_));
if (nvjpegIsSymbolAvailable("nvjpegGetHardwareDecoderInfo")) {
nvjpegGetHardwareDecoderInfo(handle_, &num_hw_engines_, &num_hw_cores_per_engine_);
// ToDo adjust hw_decoder_load_ based on num_hw_engines_ and num_hw_cores_per_engine_

Contributor commented:

Suggested change:
- // ToDo adjust hw_decoder_load_ based on num_hw_engines_ and num_hw_cores_per_engine_
+ // TODO(jlisiecki) adjust hw_decoder_load_ based on num_hw_engines_ and num_hw_cores_per_engine_

won't the linter complain here?

Contributor Author (JanuszL) replied:
Apparently not
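
To make the thread above concrete, here is a hedged sketch of the kind of adjustment the ToDo comment points at: read the hardware decoder topology reported by nvjpegGetHardwareDecoderInfo (CUDA 11.8+ nvJPEG) and scale the share of images routed to the hardware decoder accordingly. The baseline load value and the scaling rule are illustrative assumptions rather than DALI's shipped behavior, and nvjpegIsSymbolAvailable in the diff above is a DALI-side runtime check rather than a documented nvJPEG entry point.

// Illustrative sketch of what the ToDo above could grow into: scale the
// fraction of work sent to the hardware decoder with the engine and core
// counts reported by nvJPEG. Baseline value and scaling rule are assumptions
// for illustration only, not DALI's implementation. Link with -lnvjpeg.
#include <algorithm>
#include <cstdio>
#include <nvjpeg.h>

struct HwDecoderConfig {
  unsigned int num_engines = 0;
  unsigned int num_cores_per_engine = 0;
  float hw_decoder_load = 0.65f;  // example baseline for a single-engine GPU
};

inline void AdjustHwDecoderLoad(nvjpegHandle_t handle, HwDecoderConfig *cfg) {
  // Requires the nvJPEG shipped with CUDA 11.8 or newer; on failure the
  // baseline load is kept unchanged.
  if (nvjpegGetHardwareDecoderInfo(handle, &cfg->num_engines,
                                   &cfg->num_cores_per_engine) != NVJPEG_STATUS_SUCCESS) {
    return;
  }
  if (cfg->num_engines > 1) {
    // More engines: let the hardware path absorb a larger share of the batch,
    // capped at 100% of the work.
    cfg->hw_decoder_load = std::min(
        1.0f, cfg->hw_decoder_load * static_cast<float>(cfg->num_engines));
  }
}

int main() {
  nvjpegHandle_t handle;
  if (nvjpegCreateSimple(&handle) != NVJPEG_STATUS_SUCCESS) return 1;
  HwDecoderConfig cfg;
  AdjustHwDecoderLoad(handle, &cfg);
  std::printf("engines=%u cores/engine=%u hw_decoder_load=%.2f\n",
              cfg.num_engines, cfg.num_cores_per_engine, cfg.hw_decoder_load);
  nvjpegDestroy(handle);
  return 0;
}

Whether and how hw_decoder_load should scale with the engine and core counts is exactly what the ToDo leaves open; the sketch only shows where such a heuristic would plug in.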

@dali-automaton (Collaborator): CI MESSAGE: [6086992]: BUILD STARTED

@dali-automaton (Collaborator): CI MESSAGE: [6085306]: BUILD PASSED

@dali-automaton (Collaborator): CI MESSAGE: [6086992]: BUILD FAILED

- adds CUDA 11.8 based build
- extend the image decoder to support H100 Hopper

Signed-off-by: Janusz Lisiecki <jlisiecki@nvidia.com>

@dali-automaton (Collaborator): CI MESSAGE: [6096383]: BUILD STARTED

@dali-automaton (Collaborator): CI MESSAGE: [6096383]: BUILD PASSED

@JanuszL merged commit cd16f63 into NVIDIA:main on Oct 5, 2022
@JanuszL deleted the amp_next branch on October 5, 2022 at 16:59
stiepan pushed a commit that referenced this pull request Oct 5, 2022
- adds CUDA 11.8 based build
- extend the image decoder to support H100 Hopper

Signed-off-by: Janusz Lisiecki <jlisiecki@nvidia.com>

@JanuszL mentioned this pull request on Jan 11, 2023