
Fix unchecked CUDA API calls in utility functions #2517

Merged
merged 7 commits into from
Dec 7, 2020

Conversation

klecki
Contributor

@klecki klecki commented Dec 1, 2020

16512016, 16512037, 16512220, 16512041, ....

Signed-off-by: Krzysztof Lecki <klecki@nvidia.com>

Why do we need this PR?

Sprinkle the utility functions with CUDA_CALL(...);

What happened in this PR?

  • What solution was applied: CUDA_CALL(...) when sensible
  • Affected modules and functionalities:
only utils; the *_test.* files and operators/** are already checked, and kernels/** are the responsibility of the user
  • Key points relevant for the review:
    NA
  • Validation and testing:
    CI
  • Documentation (including examples):
    NO

JIRA TASK: [Use DALI-1724 or NA]


Signed-off-by: Krzysztof Lecki <klecki@nvidia.com>
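The pattern this PR sprinkles over the utility functions can be sketched as follows. This is a minimal sketch of a CUDA_CALL-style macro, not DALI's actual implementation (which integrates with DALI's own error types and also covers driver-API statuses). cudaError_t and cudaGetErrorString are mocked here so the snippet compiles without the CUDA toolkit, and fakeMemset is a hypothetical stand-in for an unchecked runtime call.

```cpp
#include <sstream>
#include <stdexcept>

// Mocked subset of the CUDA runtime so this sketch compiles without the toolkit.
enum cudaError_t { cudaSuccess = 0, cudaErrorInvalidValue = 1 };
inline const char *cudaGetErrorString(cudaError_t e) {
  return e == cudaSuccess ? "no error" : "invalid argument";
}

// Sketch of a CUDA_CALL-style macro: evaluate the expression exactly once,
// and throw with file/line context if the call did not return cudaSuccess.
#define CUDA_CALL(expr)                                                  \
  do {                                                                   \
    cudaError_t status_ = (expr);                                        \
    if (status_ != cudaSuccess) {                                        \
      std::ostringstream ss_;                                            \
      ss_ << "CUDA error \"" << cudaGetErrorString(status_) << "\" at "  \
          << __FILE__ << ":" << __LINE__;                                \
      throw std::runtime_error(ss_.str());                               \
    }                                                                    \
  } while (0)

// Hypothetical stand-in for a CUDA runtime call that may fail.
inline cudaError_t fakeMemset(bool fail) {
  return fail ? cudaErrorInvalidValue : cudaSuccess;
}
```

At a call site the fix is mechanical: a previously unchecked `cudaStreamSynchronize(stream);` becomes `CUDA_CALL(cudaStreamSynchronize(stream));`, so errors surface where they happen instead of as "unchecked return value" Coverity findings.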
@klecki klecki marked this pull request as ready for review December 4, 2020 16:42
@klecki
Contributor Author

klecki commented Dec 4, 2020

!build

@dali-automaton
Collaborator

CI MESSAGE: [1864225]: BUILD STARTED

@dali-automaton
Collaborator

CI MESSAGE: [1864225]: BUILD FAILED

@@ -69,7 +69,7 @@ bool MFCC<GPUBackend>::SetupImpl(std::vector<OutputDesc> &output_desc,
     output_desc = detail::SetupKernel<T>(kmgr_, ctx_, input, make_cspan(args_), axis_);
   ), DALI_FAIL(make_string("Unsupported data type: ", input.type().id()))); // NOLINT
   int64_t max_ndct = 0;
-  for (int i = 0; i < nsamples_; ++i) {
+  for (int i = 0; i < input.ntensor(); ++i) {
Contributor


In such instances we should always take the output size, not the input.
Rationale: if we use the output size, all we risk is reading garbage or an illegal read; if we use the input size, we risk corrupting memory, and the function becomes a potential attack route.
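The rationale above can be illustrated outside of DALI with plain vectors (the names below are illustrative, not DALI's API): bounding a write loop by the output's sample count keeps every write in bounds, so a mismatched input can at worst trigger a bad read, never a buffer overrun on the output.

```cpp
#include <cstddef>
#include <vector>

// Illustrative only: record per-sample sizes of `input` into a fixed-size
// output. The loop is bounded by the *output* count, so writes can never go
// out of bounds even if the caller passes more input samples than expected.
std::vector<int> CollectSizes(const std::vector<std::vector<int>> &input,
                              std::size_t num_output_samples) {
  std::vector<int> sizes(num_output_samples, 0);
  for (std::size_t i = 0; i < sizes.size(); ++i) {
    // If input has *fewer* samples than the output expects, input[i] is a
    // bad read (detectable, and the failure mode the comment deems safer);
    // bounding the loop by input.size() instead would let an oversized
    // input overwrite memory past `sizes`.
    sizes[i] = static_cast<int>(input[i].size());
  }
  return sizes;
}
```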

Signed-off-by: Krzysztof Lecki <klecki@nvidia.com>
@klecki
Contributor Author

klecki commented Dec 4, 2020

!build

@dali-automaton
Collaborator

CI MESSAGE: [1865197]: BUILD STARTED

@dali-automaton
Collaborator

CI MESSAGE: [1865197]: BUILD PASSED

@@ -69,7 +69,7 @@ bool MFCC<GPUBackend>::SetupImpl(std::vector<OutputDesc> &output_desc,
     output_desc = detail::SetupKernel<T>(kmgr_, ctx_, input, make_cspan(args_), axis_);
   ), DALI_FAIL(make_string("Unsupported data type: ", input.type().id()))); // NOLINT
   int64_t max_ndct = 0;
-  for (int i = 0; i < nsamples_; ++i) {
+  for (int i = 0; i < output_desc[0].shape.num_samples(); ++i) {
Contributor


Why this change?

Contributor Author


I removed the member field; Coverity was complaining that it was not initialized, and I didn't see it in @szalpal's PR, so I decided to go with the flow of the rework.

@klecki klecki merged commit 49779a8 into NVIDIA:master Dec 7, 2020