
New ir support legacy kernel instruction #55880

Merged

Conversation

@phlrain (Collaborator) commented Aug 1, 2023

PR types

New features

PR changes

Others

Description

Support fluid ops in the new IR executor's BetaRun flow (BetaRun is already the default interface).

Others

Pcard-67164

@paddle-bot

paddle-bot bot commented Aug 1, 2023

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI results first; see the Paddle CI Manual for details.

@paddle-bot

paddle-bot bot commented Aug 1, 2023

❌ This PR was not created using the PR template. You can refer to this Demo.
Please use the PR template; it saves our maintainers' time so that more developers can get help.

namespace paddle {
namespace framework {
class Scope;
class Value;
Contributor: Value belongs to the ir namespace.

@@ -21,6 +21,8 @@

#include "paddle/fluid/framework/new_executor/new_executor_defs.h"
#include "paddle/fluid/platform/event.h"
#include "paddle/ir/core/operation.h"
Contributor: Wouldn't a forward declaration of Operation be enough here? (See the sketch below.)
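A minimal sketch of the reviewer's suggestion, assuming the header only stores pointers or references to ir::Operation; the class and member names are illustrative, not Paddle's actual declarations:

// The forward declaration replaces #include "paddle/ir/core/operation.h" in
// the header; the .cc file, which calls methods on the object, would keep the
// full include.
namespace ir {
class Operation;
}  // namespace ir

class LegacyKernelInstruction {
 public:
  explicit LegacyKernelInstruction(::ir::Operation* op) : op_(op) {}

 private:
  ::ir::Operation* op_{nullptr};  // pointer member: the full type is not needed
};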


phi::InferMetaContext infer_meta_context_;

paddle::framework::ExecutionContext* kernel_context_{nullptr};
Contributor: Wouldn't it be more intuitive to just call this execution_context, to distinguish it from phi's kernel_context?

std::shared_ptr<framework::RuntimeContext> runtime_context_;
std::shared_ptr<paddle::framework::OperatorBase> operator_base_;

phi::Kernel* phi_kernel_{nullptr}; // not owned
Contributor: Do fluid ops also need a phi_kernel?

@Aurelius84 (Contributor) left a comment:

LGTM overall. The review comments can be fixed in a separate PR.

@@ -21,6 +21,7 @@

#include "paddle/fluid/framework/new_executor/new_executor_defs.h"
#include "paddle/fluid/platform/event.h"
#include "paddle/ir/core/value.h"
Contributor: There is actually no need to include the value.h header here; Value already has a forward declaration.

#include <unordered_map>
#include <vector>

#include "paddle/fluid/framework/new_executor/instruction/instruction_util.h"
Contributor: By convention, instruction_util.h should be the first include; see the ordering sketch below.
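For reference, a sketch of the conventional ordering in the .cc file: the file's own header first (which also verifies that it is self-contained), then system headers, then other project headers:

#include "paddle/fluid/framework/new_executor/instruction/instruction_util.h"

#include <unordered_map>
#include <vector>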

Contributor: Also, couldn't the file simply be named utils.h? There is no need for the instruction prefix; the path is already long enough.

const std::unordered_map<const paddle::framework::Variable*, std::string>&
variable_2_var_name) {
std::vector<int> ids;
std::string var_name = value_2_var_name.at(value);
Contributor suggested change:
-    std::string var_name = value_2_var_name.at(value);
+    auto& var_name = value_2_var_name.at(value);

// computing. They execute serially in device thread and block CUDA kernel
// launching in other GPU OPs. To improve performance, set them as kGpuSync
// and so that they would be dispatched to host thread.
auto op_attributes = op->attributes();
Contributor suggested change:
-    auto op_attributes = op->attributes();
+    auto& op_attributes = op->attributes();

This eliminates one implicit copy construction (see the sketch below).
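A minimal standalone sketch of the copy being flagged; Op and its attribute map are stand-ins for Paddle's real types, assuming attributes() returns a const reference to an internal map:

#include <string>
#include <unordered_map>

// Stand-in for ir::Operation::attributes(), assumed to return a const
// reference to an internal attribute map.
struct Op {
  std::unordered_map<std::string, int> attrs_;
  const std::unordered_map<std::string, int>& attributes() const {
    return attrs_;
  }
};

int main() {
  Op op;
  auto copied = op.attributes();      // auto deduces the value type: copies the whole map
  const auto& ref = op.attributes();  // auto& binds a reference: no copy
  return static_cast<int>(copied.size() + ref.size());
}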

const platform::Place& place,
const std::string& execution_stream,
const int stream_priority) {
auto op_attributes = op->attributes();
Contributor suggested change:
-    auto op_attributes = op->attributes();
+    auto& op_attributes = op->attributes();

auto and auto& are not the same thing.

const std::unordered_map<const paddle::framework::Variable*, std::string>&
variable_2_var_name)
: InstructionBase(id, place) {
auto op_attributes = op->attributes();
Contributor suggested change:
-    auto op_attributes = op->attributes();
+    auto& op_attributes = op->attributes();

.data();
auto kernel_result = phi::KernelFactory::Instance().SelectKernelOrThrowError(
kernel_name, kernel_key);
phi_kernel_ = new phi::Kernel(kernel_result.kernel);
Contributor: The destructor also needs to delete phi_kernel_.
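A minimal sketch of the ownership fix, assuming the instruction really does own the kernel it allocates with new; Kernel is a stand-in for phi::Kernel. Either add delete phi_kernel_; to the destructor as the reviewer suggests, or hold the kernel in a std::unique_ptr so cleanup is automatic:

#include <memory>

struct Kernel {};  // stand-in for phi::Kernel

class InstructionSketch {
 public:
  InstructionSketch() : phi_kernel_(std::make_unique<Kernel>()) {}
  // No hand-written destructor needed: unique_ptr deletes the kernel that a
  // raw `new` without a matching `delete` would leak.

 private:
  std::unique_ptr<Kernel> phi_kernel_;  // owned; note that the "// not owned"
                                        // comment in the diff above would
                                        // then be stale
};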

@@ -73,6 +63,8 @@ class PhiKernelInstruction : public InstructionBase {
phi::KernelContext kernel_context_;

phi::Kernel* phi_kernel_{nullptr}; // not owned

std::string phi_op_name_;
Contributor: This member was added here, but its construction doesn't seem to be handled anywhere?
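A standalone sketch of handling the new member's construction in the initializer list; the names mirror the diff, but the real constructor signature differs:

#include <cstddef>
#include <string>
#include <utility>

class InstructionSketch {
 public:
  InstructionSketch(std::size_t id, std::string op_name)
      : id_(id), phi_op_name_(std::move(op_name)) {}  // member set at construction

 private:
  std::size_t id_;
  std::string phi_op_name_;  // the member flagged as never initialized
};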

variable_2_var_name_));

if (op_name == "pd.fused_softmax_mask_upper_triangle" ||
op_name == "pd.fused_softmax_mask_upper_triangle_grad") {
Contributor: This should later be pulled out into a separate function as its own branch. LegacyOpList is defined further down; could it be reused here? (See the sketch below.)
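A hedged sketch of the reuse the reviewer suggests: a LegacyOpList-style set lookup in place of chained string comparisons. The set contents come from the snippet above; the helper name is illustrative:

#include <string>
#include <unordered_set>

static const std::unordered_set<std::string> kLegacyOpList = {
    "pd.fused_softmax_mask_upper_triangle",
    "pd.fused_softmax_mask_upper_triangle_grad",
};

// One reusable branch instead of an `op_name == ...` clause per legacy op.
bool IsLegacyOp(const std::string& op_name) {
  return kLegacyOpList.count(op_name) > 0;
}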

@phlrain merged commit f9c2f4c into PaddlePaddle:develop Aug 8, 2023
27 checks passed