feat: added checks for support of activations on target #18102
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/18102
Note: Links to docs will display an error until the docs builds have been completed.
❌ 1 Awaiting Approval, 1 New Failure, 2 Unrelated Failures as of commit 2013e79 with merge base 28b4813.
NEW FAILURE: the following job has failed.
BROKEN TRUNK: the following jobs failed but were present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@pytorchbot label "release notes: nxp"

@pytorchbot label "module: nxp"
```python
num_macs = neutron_target_spec.get_num_macs()

# activations in Neutron are delegable only
# if `num_channels` % `num_macs` == 0 and `num_batches` == 1
```
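Based on the documented constraints quoted above, such a check could be sketched as follows (the function name mirrors the PR's `activation_supported_on_target()`; the exact signature and the channels-last shape layout are assumptions, not the actual implementation):

```python
def activation_supported_on_target(input_shape: tuple[int, ...], num_macs: int) -> bool:
    """Sketch of the documented Neutron constraint for activations:
    delegable only if num_channels % num_macs == 0 and num_batches == 1.
    Assumes a channels-last shape layout (N, ..., C)."""
    if len(input_shape) < 2:
        # No separate batch/channel dimensions to validate.
        return False
    num_batches, num_channels = input_shape[0], input_shape[-1]
    return num_batches == 1 and num_channels % num_macs == 0
```

Note the reviewer's point below: a shape like `(23,)` delegated successfully in `test_leaky_relu_converter.py`, so applying such a check unconditionally is too strict; it should only gate activation-only partitions.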
This is incorrect.
You call this function in every activation converter's is_supported_on_target() method. So this condition has to always be satisfied to delegate an activation. But operators which can use the Custom Activation feature of Neutron aren't restricted like this.
For example test_leaky_relu_converter.py contained tests with input shape (23, ), and it delegated successfully.
Your task was aimed only at solving the issue when the activations were the only ops in the model. But here you are introducing a check which affects every case (including models with multiple nodes), and it breaks existing working functionality.
I see you changed all existing tests to satisfy this condition. But the tests were passing, and they should still be passing if Neutron supports them.
Our goal is to delegate as many operators to Neutron as possible. Therefore, we cannot introduce arbitrary restrictions like this. Every single test that started failing when you introduced this change must still be passing. You cannot just modify the tests and pretend it didn't happen. If a test fails -> there is an issue. In this case, the issue was caused by your changes.
It appears you have misunderstood the task. In clamp_converter.py, you have removed the method supports_partitioning_result(). What was the motivation for this? Instead, you should have added this method to every "activation_converter" and used it to prohibit delegation if the activation was the only node in the partition (and perhaps consider some operator-specific conditions).
Unless I'm missing something, a complete rework is required.
As I told you and the team multiple times at the regular meetups, I noticed that the converters did NOT check whether the node is delegable. Such functionality was not implemented, when in my opinion it should have been. I also believe that the absence of such functionality was the root cause of ReLU not being delegated when there are no other nodes in the partition (aside from QDQ nodes of course).
In the Neutron Converter documentation, there is a list of conditions, such as num_channels % num_macs == 0 that are not being checked. That is the whole motivation behind all these changes.
The tests needed to be modified to correctly reflect the newly imposed restrictions. I wanted to adhere to the documentation the Neutron team provided; however, if you think it is better to start checking the requirements only when we encounter a situation where it stops working, that is fine by me. I just wanted to do the due diligence of properly implementing the conversion of some of the most important operators there are, such as ReLU or Sigmoid.
As I am looking at it right now, some of the code is quite old, for example the functionality concerning ReLU. Maybe the restrictions did not exist back then.
Let's not escalate this.
The bottom line is this:
- You wanted to adhere to the Neutron documentation, even in cases where we have a working implementation that contradicts the documentation. So you prohibited these cases, which solved the issue of your ticket, but it also broke some supported cases.
- I think we should support as many cases as possible. We know the Neutron documentation is not 100% accurate, so I think prohibiting working cases just because they contradict the docs is not the way to go. Based on this opinion, the solution should have been different.
We can discuss it tomorrow if you want.
P.S.
I didn't mean to sound aggressive in my first comment. I apologize if my phrasing was inappropriate.
Did major rework, please review the changes.
Force-pushed 4e97a6e to f75ac40 (compare)
New PR description after major rework: Introduces stricter checks for support in cases where there might be a `relu` or `relu6` operator alone in a partition.
Pull request overview
Adds stricter target-specific validation for delegating certain activation operators to the NXP Neutron backend, plus new/updated tests that assert delegation vs non-delegation based on shape/partition context.
Changes:
- Introduces `activation_supported_on_target()` to gate delegation of activation-only partitions based on channel count vs. target `NUM_MACS`.
- Updates ReLU/HardTanh/Clamp converters to apply the new "alone in partition" activation-support validation.
- Expands converter tests to assert activation delegation (or non-delegation) by inspecting the delegated graph.
Reviewed changes
Copilot reviewed 9 out of 9 changed files in this pull request and generated 5 comments.
Show a summary per file
| File | Description |
|---|---|
| backends/nxp/tests/models.py | Adds a HardTanhModule helper model for hardtanh-only test cases. |
| backends/nxp/tests/ir/converter/node_converter/test_relu_converter.py | Adds graph-level assertions to verify when ReLU is delegated vs kept. |
| backends/nxp/tests/ir/converter/node_converter/test_hardtanh_converter.py | Updates preprocessing helpers and adds an “unsupported” hardtanh delegation test. |
| backends/nxp/tests/ir/converter/node_converter/test_clamp_converter.py | Adds an “unsupported shape” case for clamp-as-(relu/relu6) when alone in a partition. |
| backends/nxp/backend/neutron_operator_support.py | Adds activation_supported_on_target() helper implementing the channel divisibility check. |
| backends/nxp/backend/ir/converter/node_converters/ops_converters/relu_converter.py | Applies activation support check when ReLU is alone in a partition. |
| backends/nxp/backend/ir/converter/node_converters/ops_converters/hardtanh_converter.py | Refactors hardtanh mode handling and applies activation support check for Relu/Relu6 modes when alone. |
| backends/nxp/backend/ir/converter/node_converters/ops_converters/clamp_converter.py | Replaces previous single-node partition logic with shared “alone in partition” + activation support check. |
| backends/nxp/backend/ir/converter/node_converter.py | Adds is_node_alone_in_partition() helper used by multiple converters. |
MartinPavella left a comment:
Very nice solution 👍🏻
I pretty much only have cosmetic suggestions, and Copilot had a couple of good points. With those details implemented, LGTM.
Force-pushed f75ac40 to 96aeb57 (compare)
Fixed all the issues mentioned by @MartinPavella and Copilot. Please review. Thank you for your feedback! :D
MartinPavella left a comment:
LGTM 👍🏻
Once tests pass, I'll merge
@novak-vaclav the linting error is preventing the merge. Can you please fix it?
Force-pushed a65ea91 to f5ec441 (compare)
Linting errors should be fixed @MartinPavella, and I performed a rebase.
Pull request overview
This PR tightens NXP Neutron delegation validation for activation ops (notably relu, relu6-equivalent hardtanh, and clamp) to avoid creating partitions that Neutron can’t reliably delegate (e.g., single-op activation partitions with unsupported channel counts), and extends test coverage to assert delegation/non-delegation at the graph level.
Changes:
- Add `activation_supported_on_target()` and use it from activation converters when the activation is alone in a partition.
- Introduce `NodeConverter.is_node_alone_in_partition()` helper and refactor activation converters to use it.
- Expand/adjust tests to assert whether activations are delegated (presence of `executorch_call_delegate`) or remain in the graph.
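A graph-level assertion of that kind might look like the following (a hypothetical standalone helper; the real tests inspect the lowered `ExportedProgram`, but the idea reduces to checking node target names):

```python
def check_delegation(node_target_names: list[str], op_name: str) -> bool:
    """Return True if the op was swallowed by a delegate call, i.e. the
    lowered graph contains `executorch_call_delegate` and no longer
    contains the original aten op."""
    delegated = "executorch_call_delegate" in node_target_names
    op_still_present = any(op_name in n for n in node_target_names)
    return delegated and not op_still_present

# Example: a delegated graph vs. one where relu was kept in the graph.
assert check_delegation(["executorch_call_delegate", "getitem"], "aten.relu") is True
assert check_delegation(["aten.relu.default"], "aten.relu") is False
```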
Reviewed changes
Copilot reviewed 9 out of 9 changed files in this pull request and generated 3 comments.
Show a summary per file
| File | Description |
|---|---|
| backends/nxp/tests/models.py | Adds `HardTanhModule` for targeted hardtanh/relu6 test cases. |
| backends/nxp/tests/ir/converter/node_converter/test_relu_converter.py | Adds graph-level assertions for ReLU delegation and a new unsupported-shape test. |
| backends/nxp/tests/ir/converter/node_converter/test_hardtanh_converter.py | Updates preprocess helpers, strengthens delegation assertions, adds unsupported hardtanh cases. |
| backends/nxp/tests/ir/converter/node_converter/test_clamp_converter.py | Adjusts unsupported clamp test to focus on channel/MAC incompatibility cases. |
| backends/nxp/backend/neutron_operator_support.py | Introduces `activation_supported_on_target()` used by converters for stricter target checks. |
| backends/nxp/backend/ir/converter/node_converters/ops_converters/relu_converter.py | Adds partition-context validation for standalone ReLU partitions. |
| backends/nxp/backend/ir/converter/node_converters/ops_converters/hardtanh_converter.py | Fixes hardtanh arg handling, adds partition-context checks for relu/relu6-equivalent bounds. |
| backends/nxp/backend/ir/converter/node_converters/ops_converters/clamp_converter.py | Refactors standalone clamp partition logic to use common helper + target support check. |
| backends/nxp/backend/ir/converter/node_converter.py | Adds `is_node_alone_in_partition()` helper used by multiple converters. |
Force-pushed f5ec441 to 0d44977 (compare)
Also fixed a small type issue found by Copilot.
@novak-vaclav unfortunately the rebase introduced some failures, so I cannot merge. Please take a look at them.
Force-pushed 0d44977 to e57be77 (compare)
Force-pushed e57be77 to 2013e79 (compare)
Pushed the correct changes and rebased onto main, please run the tests now @MartinPavella
Pull request overview
Adds stricter, target-specific activation delegation checks for NXP Neutron—particularly addressing cases where relu/relu6-equivalent ops appear alone in a partition and may fail delegation—plus expands test coverage for these scenarios.
Changes:
- Introduces `activation_supported_on_target()` and wires it into `relu`/`clamp`/`hardtanh` partition validation when the activation is alone in its partition.
- Fixes HardTanh argument/bounds handling and refactors supported-mode maps.
- Extends NXP backend tests to assert (non-)delegation behavior for unsupported shapes/partitions.
Reviewed changes
Copilot reviewed 9 out of 9 changed files in this pull request and generated 3 comments.
Show a summary per file
| File | Description |
|---|---|
| backends/nxp/backend/neutron_operator_support.py | Adds activation_supported_on_target() used to gate delegation based on channel/MAC constraints. |
| backends/nxp/backend/ir/converter/node_converter.py | Adds is_node_alone_in_partition() helper used by multiple converters. |
| backends/nxp/backend/ir/converter/node_converters/ops_converters/relu_converter.py | Adds partition-result validation for standalone ReLU partitions using target checks. |
| backends/nxp/backend/ir/converter/node_converters/ops_converters/clamp_converter.py | Reuses the new standalone-partition + target support gating for clamp-as-relu/relu6 cases. |
| backends/nxp/backend/ir/converter/node_converters/ops_converters/hardtanh_converter.py | Fixes bounds extraction, renames supported maps, and adds standalone-partition + target support gating for relu/relu6-like modes. |
| backends/nxp/tests/models.py | Adds HardTanhModule test model. |
| backends/nxp/tests/ir/converter/node_converter/test_relu_converter.py | Adds assertions that ReLU is delegated (or not) and a new unsupported-case test. |
| backends/nxp/tests/ir/converter/node_converter/test_hardtanh_converter.py | Updates delegation assertions and adds unsupported hardtanh cases for relu/relu6-like bounds. |
| backends/nxp/tests/ir/converter/node_converter/test_clamp_converter.py | Updates unsupported-case coverage to target shape/channel constraints for standalone clamp. |
```python
partitions = [p for p in partition_list if node in p.nodes]
if len(partitions) != 1:
    raise ValueError(
        "Cannot find a partition of a node in graph. This should not occur."
    )
```
`is_node_alone_in_partition()` raises `ValueError` when the node is found in 0 or multiple partitions. This method is called from `supports_partitioning_result()` during `NeutronPartitioner.validate_partitioning_result()`, where the expected behavior is typically to return `False` and mark the node as `NXP_DO_NOT_DELEGATE` rather than abort partition validation. Consider returning `False` (or otherwise handling this case without throwing) to keep partitioning robust even if the partition list is unexpected.
```python
# Neutron cannot delegate a partition where ReLU or ReLU6 is the only operator
# and at the same time the node does not satisfy delegation requirements.
# In contrast, ReLUN1To1 and ReLU0To1 are supported and delegated successfuly.
```

Typo in comment: "successfuly" should be "successfully".

```python
# In contrast, ReLUN1To1 and ReLU0To1 are supported and delegated successfully.
```
Please @MartinPavella run the non-passing checks again; looking at the logs, all tests seem to be passing, but for some reason the job timed out.
@novak-vaclav as the Linux unittests are passing and you didn't modify any non-NXP files, I don't see the need to wait for the windows unittest time-out fix.
Summary
Introduces stricter checks for activation support on the target.
The updated validation logic applies to:
`clamp`, `hardtanh`, `leaky_relu`, `relu`, `sigmoid`, `tanh`.

Edit after rework:
Introduces stricter checks for support in cases where there might be
a `relu` or `relu6` operator alone in a partition. In such cases, the Neutron converter fails to delegate the operator. Also fixed a small issue in `hardtanh_converter` related to reading out the arguments from the edge operator.

This additional check applies to `relu_converter`, `clamp_converter` and `hardtanh_converter`. Also checked whether the issue affects other activations, such as `leaky_relu`, `tanh` and `sigmoid`, but empirically the issue did not occur there.
Tests can be manually run using `pytest -c /dev/null backends/nxp/tests/`

cc @robert-kalmar @JakeStevens @digantdesai @MartinPavella @roman-janik-nxp