graph: backend: dnnl: backend refactor of adding fusion info attr #3424

Open · wants to merge 4 commits into main from xiang/backend-refactor-part1
Conversation

@xiang1guo xiang1guo commented Jun 16, 2025

Background

Preliminary work for MFDNN-13442. The main purpose of this PR is to bind the fusion_info as an attribute directly onto the op, rather than using a fusion_info_key to reference a container (currently a vector stored in fusion_info_mgr) that manages the fusion info.

Works

  • Add a new fusion_info attribute.
  • Update all ops' attributes with the new attribute.
  • Update gtests.
  • Rename fusion_info_mgr_t to reflect its updated function.

@xiang1guo xiang1guo self-assigned this Jun 16, 2025
@xiang1guo xiang1guo requested a review from a team as a code owner June 16, 2025 06:54
@xiang1guo xiang1guo added the component:graph-api Codeowner: @oneapi-src/onednn-graph label Jun 16, 2025
@github-actions github-actions bot added the component:tests Codeowner: @oneapi-src/onednn-arch label Jun 16, 2025
// TODO(GX): the operator here is mainly for making the compiler happy; it's
// not actually used because fusion_info_t is an internal attribute. We may
// need to extend the logic here if it becomes necessary in the future.
bool operator==(const fusion_info_t &other) const {
Contributor:
If it's not used, shall we mark it as = delete?

Contributor Author:

We cannot mark it as `= delete` because it is called in attribute_value_t for comparison purposes (code). Namely, every attribute should define an == operator, and the operator== in attribute_value_t is in turn called here in op.hpp for comparison. Since fusion_info_t is an internal attribute, it won't actually be used for comparison, so I just implemented a rough one to make the compiler happy (no compilation issue).

Contributor:

What do we need to extend?

Contributor Author:

Oh, the extension I mentioned refers to a future scenario where we need to compare internal attributes. In that case, it would be necessary to strictly compare all the member meta ops within the fusion info.

Contributor:

What's the potential effort if we want to complete the comparison function here?

Contributor Author:

Not much; it just looks a bit redundant. I had already written it, but it seemed too messy, so I didn't include it in the current PR. :) I will add it later.

Contributor Author:

I have added two new commits: the first is related to operator==, and the second cleans up redundant code discovered while reviewing subgraph_info_mgr.

// info key to query it out from the manager.
class fusion_info_mgr_t {
// This class is used to store subgraph info such as fpmath_mode and flag of
// using block_layout for subgraph.
Contributor:

How about directly saving these info in the subgraph_t class?

Contributor Author (@xiang1guo, Jun 17, 2025):

I was thinking about this, and I think it's doable, but I didn't do it because:

  1. That subgraph info is used in op_executable when creating descriptors, see here. I feel passing a cleaner subgraph_info instead of the full subgraph_t is clearer and simpler.
  2. This involves too many changes, and the first thing I want to do is just split fusion_info out of this mgr to support compressed SDPA.
    cc @TaoLv any suggestions?

Contributor:

I see there are only two extra flags: fpmath and use-block-layout. Is it possible to move them along with the fusion info? I know these are applied graph-wise, but eventually they're used by each op and each primitive. It also seems to simplify the surrounding logic.

Contributor Author:

> I see there are only two extra flags: fpmath and use block layout. Is it possible to move them along with fusion info? I know these are applied graph-wise. But eventually it's used by each op and each primitive. It also seems to simplify the logic around.

OK, I will take a look at this suggestion.

Contributor Author:

I chose to save the fpmath mode and block-layout info in the subgraph for now and pass that info to op_executable and layout_propagator independently; please refer to the last commit and review again. Thanks!
BTW, binding this info at the op fusion_info level can be a long-term plan depending on real usage scenarios, as it involves too many changes now (transformation pass handling is not straightforward after I looked into the implementation). :)

@xiang1guo xiang1guo force-pushed the xiang/backend-refactor-part1 branch 4 times, most recently from 6d9a78f to d4d3632 Compare June 19, 2025 06:31
@xiang1guo xiang1guo force-pushed the xiang/backend-refactor-part1 branch from d4d3632 to da1bf66 Compare June 20, 2025 00:47
@xiang1guo xiang1guo force-pushed the xiang/backend-refactor-part1 branch from da1bf66 to 91e9481 Compare June 20, 2025 00:52
Contributor Author (@xiang1guo):
make test
set test_scope=NIGHTLY
disable benchdnn_all
enable benchdnn_graph

4 participants