ICM fixes (2/n) #27922

Merged
hariharans29 merged 39 commits into main from hari/icm_2 on Apr 13, 2026

Conversation


@github-actions github-actions bot left a comment

You can commit the suggested changes from lintrunner.

Copilot AI left a comment

Pull request overview

This PR addresses multiple ICM-reported robustness issues by hardening input/format validation and preventing integer overflow in buffer sizing across CPU GRU, ORT flatbuffer initializer loading, and SparseAttention input checks.

Changes:

  • Add SafeInt-based element count calculation to prevent buffer size overflow in DeepCpu GRU allocations, plus a regression test.
  • Improve ORT flatbuffer initializer loading validation (missing dims, null string entries) and adjust tensor-size computation to return Status, plus new tests.
  • Fix SparseAttention key_total_sequence_lengths shape check and add range validation, plus new unit tests.

Reviewed changes

Copilot reviewed 7 out of 7 changed files in this pull request and generated 2 comments.

Summary per file:

  • onnxruntime/test/providers/cpu/rnn/deep_cpu_gru_op_test.cc: Adds a regression test asserting overflow in buffer element count throws.
  • onnxruntime/test/flatbuffers/flatbuffer_utils_test.cc: Adds tests for rejecting invalid flatbuffer initializers (null string entry, missing dims with external data).
  • onnxruntime/test/contrib_ops/sparse_attention_op_test.cc: Adds unit tests for SparseAttention key_total_sequence_lengths shape/value validation.
  • onnxruntime/core/providers/cpu/rnn/deep_cpu_gru.h: Declares a helper to compute buffer element counts safely.
  • onnxruntime/core/providers/cpu/rnn/deep_cpu_gru.cc: Implements SafeInt-based buffer element count helper and uses it for allocations.
  • onnxruntime/core/graph/graph_flatbuffers_utils.cc: Makes fbs tensor byte-size computation return Status, adds null checks, and hardens initializer loading.
  • onnxruntime/contrib_ops/cpu/sparse/sparse_attention_helper.h: Fixes shape validation logic and adds per-batch value range checks for total key lengths.

Copilot AI left a comment

Pull request overview

Copilot reviewed 7 out of 7 changed files in this pull request and generated 1 comment.

Comments suppressed due to low confidence (1)

onnxruntime/core/graph/graph_flatbuffers_utils.cc:223

  • GetSizeInBytesFromFbsTensor now returns Status, but it still uses SafeInt arithmetic (std::accumulate with SafeInt and later multiplication). SafeInt overflows throw OnnxRuntimeException (see core/common/safeint.h), which would bypass the Status return path and can terminate model loading unexpectedly. Consider catching SafeInt exceptions (or using a non-throwing checked-multiply helper) and returning an INVALID_ARGUMENT/FAIL Status when dims multiplication overflows (and optionally validating dims are non-negative).
Status GetSizeInBytesFromFbsTensor(const fbs::Tensor& tensor, size_t& size_in_bytes) {
  const auto* fbs_dims = tensor.dims();
  ORT_RETURN_IF(nullptr == fbs_dims, "Missing dimensions for tensor. Invalid ORT format model.");

  auto num_elements = std::accumulate(fbs_dims->cbegin(), fbs_dims->cend(), SafeInt<size_t>(1),
                                      std::multiplies<>());


hariharans29 and others added 2 commits March 31, 2026 22:39
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
@github-actions github-actions bot left a comment

You can commit the suggested changes from lintrunner.

Copilot AI left a comment

Pull request overview

Copilot reviewed 7 out of 7 changed files in this pull request and generated 3 comments.


Copilot AI left a comment

Pull request overview

Copilot reviewed 7 out of 7 changed files in this pull request and generated 1 comment.

Comments suppressed due to low confidence (1)

onnxruntime/core/graph/graph_flatbuffers_utils.cc:223

  • GetSizeInBytesFromFbsTensor now returns Status, but it still uses SafeInt arithmetic (std::accumulate and num_elements * byte_size_of_one_element) that will throw OnnxRuntimeException on overflow. That breaks the function’s contract and can bypass the caller’s ORT_RETURN_IF_ERROR flow. Consider catching overflow exceptions inside this function and returning an INVALID_ARGUMENT/FAIL Status instead (e.g., using ORT_TRY/ORT_CATCH).
Status GetSizeInBytesFromFbsTensor(const fbs::Tensor& tensor, size_t& size_in_bytes) {
  const auto* fbs_dims = tensor.dims();
  ORT_RETURN_IF(nullptr == fbs_dims, "Missing dimensions for tensor. Invalid ORT format model.");

  auto num_elements = std::accumulate(fbs_dims->cbegin(), fbs_dims->cend(), SafeInt<size_t>(1),
                                      std::multiplies<>());


Copilot AI left a comment

Pull request overview

Copilot reviewed 7 out of 7 changed files in this pull request and generated 3 comments.


Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
tianleiwu
tianleiwu previously approved these changes Apr 10, 2026
Copilot AI left a comment

Pull request overview

Copilot reviewed 8 out of 8 changed files in this pull request and generated 1 comment.


Copilot AI left a comment

Pull request overview

Copilot reviewed 9 out of 9 changed files in this pull request and generated 2 comments.


@github-actions github-actions bot left a comment

You can commit the suggested changes from lintrunner.

hariharans29 and others added 3 commits April 10, 2026 17:52
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Copilot AI left a comment

Pull request overview

Copilot reviewed 10 out of 10 changed files in this pull request and generated no new comments.


@hariharans29 hariharans29 enabled auto-merge (squash) April 12, 2026 05:48
@hariharans29 hariharans29 requested a review from tianleiwu April 12, 2026 05:56
tianleiwu
tianleiwu previously approved these changes Apr 12, 2026
@hariharans29 hariharans29 merged commit daa74b4 into main Apr 13, 2026
101 of 102 checks passed
@hariharans29 hariharans29 deleted the hari/icm_2 branch April 13, 2026 17:18