
Conversation

cccclai (Contributor) commented Sep 30, 2025

Summary: In PyTorch, when `dim` is not set, `argmax` flattens the input and computes the index over the flattened tensor (i.e. as if `dim=0` on the 1-D view). Add a pass that inserts a reshape when `dim` is not set, and consolidate the test cases.

edit:

  1. Apply the same handling to argmin.
  2. Add exir_ops.edge.aten.adaptive_max_pool3d.default to the to-be-implemented op list so it no longer raises an error.

Differential Revision: D83606497
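For context, the PyTorch behavior described above can be verified directly in eager mode (illustrative snippet, not code from this PR):

```python
import torch

x = torch.tensor([[1.0, 9.0], [3.0, 2.0]])

# With no dim, argmax returns the index into the flattened tensor ...
print(torch.argmax(x))                     # tensor(1)

# ... which is exactly argmax over the 1-D reshaped view with dim=0.
print(torch.argmax(x.reshape(-1), dim=0))  # tensor(1)
```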

pytorch-bot bot commented Sep 30, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/14710

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

❌ 1 New Failure, 23 Pending

As of commit 3a5bff8 with merge base 0e74a17:

NEW FAILURE - The following job has failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

meta-cla bot added the CLA Signed label Sep 30, 2025
facebook-github-bot (Contributor) commented:

@cccclai has exported this pull request. If you are a Meta employee, you can view the originating Diff in D83606497.

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

cccclai added a commit to cccclai/executorch-1 that referenced this pull request Sep 30, 2025
Summary:

As title, in PyTorch, when dim is not set, it will flatten the input and get argmax as dim=0. Add a pass to reshape the input when dim is not set and consolidate test case

Differential Revision: D83606497

```python
from executorch.exir.pass_base import ExportPass, PassResult
from executorch.exir.passes import dead_code_elimination_pass


class InsertReshapeForArgmax(ExportPass):
```
Collaborator:

I think we might have other ops, like argmin, that could leverage this pass as well. Do you mind renaming the class to something more generic, like InsertReshapeForReduceOp?

Author:

updated

Collaborator:

I think the new commit wasn't pushed successfully. Could you try again? Thanks

Author:

oh sorry, just pushed
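For readers without access to the internal diff, here is a minimal sketch of what such a pass could look like after the rename. It is an illustration under assumptions (edge-dialect targets, `view_copy` for the reshape, meta/val propagation omitted), not the actual implementation from D83606497:

```python
import torch
from executorch.exir.dialects._ops import ops as exir_ops
from executorch.exir.pass_base import ExportPass, PassResult


class InsertReshapeForReduceOp(ExportPass):
    """Rewrite argmax/argmin without an explicit dim into
    view_copy(input, [-1]) followed by the op with dim=0."""

    reduce_targets = {
        exir_ops.edge.aten.argmax.default,
        exir_ops.edge.aten.argmin.default,
    }

    def call(self, graph_module: torch.fx.GraphModule) -> PassResult:
        graph = graph_module.graph
        modified = False
        for n in graph.nodes:
            if n.op != "call_function" or n.target not in self.reduce_targets:
                continue
            # Schema is (input, dim=None, keepdim=False); a missing or None
            # dim means "flatten, then take the index along dim 0".
            if len(n.args) < 2 or n.args[1] is None:
                with graph.inserting_before(n):
                    flat = graph.call_function(
                        exir_ops.edge.aten.view_copy.default,
                        (n.args[0], [-1]),
                    )
                n.args = (flat, 0)
                modified = True
        if modified:
            graph_module.recompile()
        return PassResult(graph_module, modified)
```

The rewrite is behavior-preserving because reshape-to-1-D plus `dim=0` is exactly how PyTorch defines argmax/argmin when `dim` is omitted.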

```python
graph = graph_module.graph
modified = False

for n in list(graph.nodes):
```
Collaborator:

I think we can use the iterator directly: `for n in graph.nodes:`.

Author:

updated
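(Side note on the iterator suggestion: torch.fx stores nodes in a doubly linked list, so iterating `graph.nodes` directly remains valid while inserting new nodes; the `list(...)` copy is mainly a defensive measure when erasing nodes mid-walk. A standalone illustration, not code from the PR:)

```python
import operator

import torch
import torch.fx


class M(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x) + 1


gm = torch.fx.symbolic_trace(M())
graph = gm.graph

# Direct iteration is safe here even though we insert nodes along the way.
for n in graph.nodes:
    if n.op == "call_function" and n.target is torch.relu:
        with graph.inserting_before(n):
            scaled = graph.call_function(operator.mul, (n.args[0], 2.0))
        n.args = (scaled,)

graph.lint()
gm.recompile()
print(gm(torch.tensor([-1.0, 1.0])))  # relu(2 * x) + 1 -> tensor([1., 3.])
```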

```python
import unittest

import torch
from executorch.backends.qualcomm._passes import InsertReshapeForArgmax


class TestPasses(unittest.TestCase):
```
Collaborator:

Thank you for adding this file, it's helpful for us to test all the passes thoroughly.
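As an illustration of the invariant such tests can assert (a sketch under the same flattening assumption, not the actual test file from this PR):

```python
import unittest

import torch


class TestReduceOpReshape(unittest.TestCase):
    def test_argmax_argmin_flatten_equivalence(self):
        x = torch.rand(2, 3, 4)
        # argmax/argmin without dim must match the explicit
        # reshape-to-1-D + dim=0 form that the pass inserts.
        self.assertEqual(torch.argmax(x), torch.argmax(x.reshape(-1), dim=0))
        self.assertEqual(torch.argmin(x), torch.argmin(x.reshape(-1), dim=0))


if __name__ == "__main__":
    unittest.main()
```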

cccclai force-pushed the export-D83606497 branch 2 times, most recently from 214554c to 5077d00 on October 3, 2025 at 22:48
cccclai (Author) commented Oct 7, 2025

Can I get another round of review on this? I fixed a few more failing tests, including argmin and adaptive_max_pool3d.

pytorch-bot pushed a commit that referenced this pull request Oct 7, 2025

cccclai added a commit to cccclai/executorch-1 that referenced this pull request Oct 7, 2025
haowhsu-quic (Collaborator) left a comment:

Thank you!

```python
to_be_implemented_operator = [
    exir_ops.edge.aten._adaptive_avg_pool3d.default,
    exir_ops.edge.aten.adaptive_max_pool2d.default,
    exir_ops.edge.aten.adaptive_max_pool3d.default,
```
Contributor:

Is this PR just about argmax? I see maxpool added here. If it is about reduce ops more broadly, please update the title.

Author:

Yeah, updated.

cccclai changed the title from "support argmax without dim kwargs" to "support argmax/argmin without dim kwargs and fix adaptive_max_pool3d" Oct 7, 2025
cccclai merged commit e09abea into pytorch:main Oct 7, 2025 (175 of 178 checks passed)
cccclai (Author) commented Oct 7, 2025:
@pytorchbot cherry-pick --onto release/1.0 -c regression

pytorchbot pushed a commit that referenced this pull request Oct 7, 2025 (cherry picked from commit e09abea)
pytorchbot (Collaborator) commented:

Cherry picking #14710

The cherry pick PR is at #14868. It is recommended to link a regression cherry-pick PR with an issue. The following tracker issues are updated:

Details for Dev Infra team (raised by workflow job)

Labels: ciflow/nightly, CLA Signed, fb-exported, meta-exported
Projects: None yet
7 participants