graph: backend: dnnl: backend refactor of adding fusion info attr #3424
Conversation
// TODO(GX): the operator here is mainly for making compiler happy, and it's
// not actually used because fusion_info_t is an internal attribute. We may
// need to extend the logic here in case of future necessity.
bool operator==(const fusion_info_t &other) const {
If it's not used, shall we mark it as = delete?
We cannot mark it as = delete because it is called in attribute_value_t for comparison purposes (code). Namely, every attribute should define an == operator, and the == operator in attribute_value_t is in turn called in op.hpp for comparison. Since fusion_info_t is an internal attribute, it won't actually be used for comparison, so I just implemented a rough one to make the compiler happy (no compilation issue).
What do we need to extend?
Oh, the extension I mentioned refers to the future scenario where we need to compare internal attributes. In that case, it would be necessary to strictly compare all the member meta ops within the fusion info.
What's the potential effort if we want to complete the comparison function here?
Not much, it just looks a bit redundant. I had already written it, but I thought it was too messy, so I didn't put it in the current PR. :) I will add it later.
I have added two new commits: the first one is related to the == operator, and the second one involves cleaning up redundant code discovered while reviewing subgraph_info_mgr.
// info key to query it out from the manager.
class fusion_info_mgr_t {
// This class is used to store subgraph info such as fpmath_mode and flag of
// using block_layout for subgraph.
How about directly saving these info in the subgraph_t class?
I was thinking about this, and I think it's doable, but I didn't do that because:
- That subgraph info is used in op_executable when creating the desc, see here. I feel passing a cleaner subgraph_info instead of the full subgraph_t is clearer and simpler.
- This involves too much change, and the first thing I want to do is just split fusion_info from this mgr to serve compressed SDPA support.
cc @TaoLv any suggestions?
I see there are only two extra flags: fpmath and use block layout. Is it possible to move them along with fusion info? I know these are applied graph-wise. But eventually it's used by each op and each primitive. It also seems to simplify the logic around.
OK, I will take a look at this suggestion.
I chose to save the fpmath_t and block layout info in the subgraph for now and pass that info to op_executable and layout_propagator independently; please refer to the last commit and review again. Thanks!
BTW, binding this info at the op fusion_info level can be a long-term plan that depends on real usage scenarios, as it involves too much change now (transformation pass handling is not straightforward after I looked into the implementation) :)
make test
Background
Preliminary work for MFDNN-13442. The main purpose of this PR is to bind the fusion_info as an attribute directly onto the op, rather than using a fusion_info_key to reference a container (currently a vector stored in fusion_info_mgr) that manages the fusion info.
Works
- Bind the fusion_info onto the op as an attr.
- Update fusion_info_mgr_t to reflect the updated function.