
[PHI Decoupling] Remove memory header (Part 1) #50419

Merged
merged 13 commits into from Feb 21, 2023

Conversation


@YuanRisheng YuanRisheng commented Feb 10, 2023

PR types

Others

PR changes

Others

Describe

This PR decouples the PHI operator library from a Fluid header it depends on: "paddle/fluid/memory/malloc.h"

Background:
The PHI operator library is being decoupled from Fluid. Among its dependencies is a family of memory-related Fluid headers that depend so deeply on Fluid that they cannot be decoupled from PHI by the usual approach. This PR decouples PHI from Fluid's memory headers by designing an abstract interface layer.

Solution:
A MemoryInterface is designed to achieve the decoupling. The overall design is shown in the figure below:
[Figure: overall design of the MemoryInterface / MemoryUtils interface layer]
MemoryUtils manages the MemoryInterface object; the other components of the PHI operator library call the functionality defined by MemoryInterface through MemoryUtils.
MemoryInterface defines the memory-management functionality that an external library must implement in order to use the PHI operator library. The interface is exposed to libraries outside PHI; different libraries can implement it according to the defined interface standard and thereby plug their own memory management into the PHI operator library.
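The design described above can be sketched as a struct of function-pointer hooks managed by a singleton. This is a minimal, hypothetical illustration: the names MemoryInterface and MemoryUtils come from the PR description, but the member signatures here are simplified stand-ins, not the real Paddle API.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>

namespace phi {

// Sketch of the interface layer: a bundle of memory-management hooks that an
// external library (e.g. Fluid) fills in. PHI itself never includes Fluid
// headers; it only sees this struct. Signatures are simplified for illustration.
struct MemoryInterface {
  void* (*alloc)(std::size_t size) = nullptr;
  void (*free)(void* ptr) = nullptr;
};

// MemoryUtils owns the MemoryInterface object; PHI components call the
// registered hooks through it rather than calling Fluid directly.
class MemoryUtils {
 public:
  static MemoryUtils& Instance() {
    static MemoryUtils instance;  // Meyers singleton
    return instance;
  }
  void SetMemoryInterface(const MemoryInterface& iface) { iface_ = iface; }
  void* Alloc(std::size_t size) {
    assert(iface_.alloc != nullptr && "alloc hook not registered");
    return iface_.alloc(size);
  }
  void Free(void* ptr) {
    assert(iface_.free != nullptr && "free hook not registered");
    iface_.free(ptr);
  }

 private:
  MemoryInterface iface_;
};

}  // namespace phi
```

An embedding library would register its allocator once at startup (here plain `malloc`/`free` stand in for Fluid's allocator), after which PHI code allocates only through `MemoryUtils`.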

paddle-bot bot commented Feb 10, 2023

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

jiahy0825 previously approved these changes Feb 17, 2023

@jiahy0825 (Contributor) left a comment


LGTM except for a small question.


namespace phi {

struct MemoryInterface {

Is it necessary to implement all of the functions inside MemoryInterface when initializing it?

Contributor Author

At present the main framework uses all of the interface functions here, so they all have to be implemented. I will improve the error messages, though, to support the case where not all of them are implemented.
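The improved error reporting the author mentions could look like the following sketch: instead of crashing on an unset hook, check it at the call site and raise a readable error naming the missing function. The helper name `CheckAvailable` is hypothetical, not Paddle's actual API.

```cpp
#include <cassert>
#include <cstddef>
#include <stdexcept>
#include <string>

// Hypothetical helper: verify that a MemoryInterface hook was registered
// before using it, and report which one is missing if not. This lets an
// embedding library implement only the hooks it actually needs.
template <typename Fn>
Fn CheckAvailable(Fn fn, const std::string& name) {
  if (fn == nullptr) {
    throw std::runtime_error(
        "MemoryInterface hook '" + name +
        "' is not registered; the embedding library must provide it "
        "before calling this PHI facility.");
  }
  return fn;
}
```

A call site would then use `CheckAvailable(iface_.alloc, "alloc")(size)` rather than dereferencing the pointer blindly.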

@@ -0,0 +1,175 @@
// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
Contributor

2022 -> 2023

Contributor Author

done

@@ -0,0 +1,175 @@
// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
//
Contributor

Wouldn't memory_register be a more descriptive name for this file?

Contributor Author

The main purpose of this file is to provide memory-operation functionality for the PHI operator library; I feel the name memory_register would be easy to misread.

@@ -1564,7 +1565,7 @@ void CommonGradBroadcastCUDA(const DenseTensor &x,
   int y_block_size = std::min(ELEMWISE_MAX_BLOCK_DIM, y_threads);
   if (dx) {
     size_t dx_total_bytes = bytes * 2;
-    auto dx_tmp_buffer = paddle::memory::Alloc(
+    auto dx_tmp_buffer = phi::MemoryUtils::Instance().Alloc(
Contributor

Using the MemoryUtils global singleton this heavily inside kernels does not look great. Wrapping it in an ordinary function call, the way paddle::memory::Alloc does, would be better; it would also lower the cost of changing the global singleton later.

Contributor Author

Thanks, the calls have been wrapped into functions.
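The wrapping the reviewer asked for can be sketched as follows: kernel code calls plain free functions, and only those wrappers touch the singleton, so replacing the singleton later means editing one place. The `memory_utils` namespace and the trimmed `MemoryUtils` class here are illustrative stand-ins, not the exact Paddle code.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>

namespace phi {

// Trimmed stand-in for the real MemoryUtils class (which would dispatch to a
// registered MemoryInterface); here it just forwards to malloc/free.
class MemoryUtils {
 public:
  static MemoryUtils& Instance() {
    static MemoryUtils instance;
    return instance;
  }
  void* Alloc(std::size_t size) { return std::malloc(size); }
  void Free(void* ptr) { std::free(ptr); }
};

namespace memory_utils {

// Kernels depend on these ordinary functions, not on the singleton itself.
// Swapping out the singleton later only requires changing these wrappers.
inline void* Alloc(std::size_t size) {
  return MemoryUtils::Instance().Alloc(size);
}
inline void Free(void* ptr) { MemoryUtils::Instance().Free(ptr); }

}  // namespace memory_utils
}  // namespace phi
```

A kernel call site then reads `phi::memory_utils::Alloc(bytes)` instead of `phi::MemoryUtils::Instance().Alloc(bytes)`.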

@jiahy0825 (Contributor) left a comment

LGTM

@ZzSean (Contributor) left a comment

LGTM for CI-OP-Benchmark

@XiaoguangHu01 (Contributor) left a comment

LGTM

6 participants