Add dropout and log_loss for kunlun #27790

Merged

5 commits merged into PaddlePaddle:develop on Oct 13, 2020

Conversation

tink2123 (Contributor) commented Oct 9, 2020

PR types

New features

PR changes

OPs

Describe

Add dropout and log_loss operators for Kunlun.

paddle-bot-old (bot) commented Oct 9, 2020

Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

@tink2123 tink2123 changed the title from "Add conv2d dropout log_loss for kunlun" to "Add dropout and log_loss for kunlun" on Oct 10, 2020
LDOUBLEV (Contributor) previously approved these changes Oct 12, 2020

lgtm

@tink2123 tink2123 closed this Oct 12, 2020
@tink2123 tink2123 reopened this Oct 12, 2020
yghstill (Contributor) previously approved these changes Oct 13, 2020

LGTM

qingqing01 (Contributor) left a comment

Help to merge

@qingqing01 qingqing01 merged commit ae01801 into PaddlePaddle:develop Oct 13, 2020
#ifdef PADDLE_WITH_XPU
// Process-wide cache of host-side dropout mask buffers, keyed by an int
// that packs dev_id, prop and is_upscale (see the key construction below).
static std::map<int, float*> mask_data_tables;
static const int max_data_size = 32 * 1024 * 1024;  // buffer length in floats
static std::mutex s_mask_data_table_lock;           // guards mask_data_tables
Reviewer comment (Contributor):

Is this mutex specific to this op?
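
For context, a minimal self-contained sketch of the pattern the quoted snippet implements (the helper name GetOrCreateMaskTable is hypothetical, not from the PR). The mutex only needs to cover every access to mask_data_tables, so it is op-local here; any other op sharing the same table would have to share the same lock.

#include <map>
#include <mutex>

// Translation-unit-local cache guarded by its own mutex, mirroring the
// declarations quoted above.
static std::map<int, float*> mask_data_tables;
static const int max_data_size = 32 * 1024 * 1024;
static std::mutex s_mask_data_table_lock;

// Hypothetical helper illustrating the lazy-init-under-lock pattern.
float* GetOrCreateMaskTable(int index) {
  std::lock_guard<std::mutex> lock(s_mask_data_table_lock);
  auto it = mask_data_tables.find(index);
  if (it == mask_data_tables.end()) {
    // First request for this key: allocate once and cache; the buffer
    // then lives for the rest of the process.
    it = mask_data_tables.emplace(index, new float[max_data_size]).first;
  }
  return it->second;
}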


#ifdef PADDLE_WITH_XPU
static std::map<int, float*> mask_data_tables;
static const int max_data_size = 32 * 1024 * 1024;
Reviewer comment (Contributor):

The naming does not follow the Google C++ style guide.
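
As a hedged illustration of what the reviewer likely means (the renamed identifiers below are suggestions, not names adopted by the PR): the Google style guide writes constants as kMixedCase, while ordinary file-scope variables stay snake_case without prefixes such as s_.

#include <map>
#include <mutex>

static std::map<int, float*> mask_data_tables;     // variable: snake_case is fine
static const int kMaxDataSize = 32 * 1024 * 1024;  // constant: kMixedCase
static std::mutex mask_data_table_mutex;           // no s_ prefix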

int index = (dev_id << 16) + (prop << 8) + is_upscale;  // pack three fields into one cache key
std::lock_guard<std::mutex> lock(s_mask_data_table_lock);
if (mask_data_tables.find(index) == mask_data_tables.end()) {
  float* mask_data_host = new float[max_data_size];
Reviewer comment (Contributor):

This raw new could go through Paddle's underlying unified memory management instead.
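
The reviewer is presumably pointing at Paddle's own allocator (paddle::memory::Alloc in paddle/fluid/memory/malloc.h). Since the exact call the PR should use is not shown here, the sketch below is only a standard-C++ stand-in demonstrating the same ownership fix: let an RAII handle own the buffer instead of a raw new[] that is never freed. GetMaskData is a hypothetical helper name.

#include <map>
#include <memory>
#include <mutex>

// Illustrative alternative only, not the fix adopted by the PR: the map owns
// the buffers, so nothing leaks and nothing can be double-freed.
static std::map<int, std::unique_ptr<float[]>> mask_data_tables;
static const int max_data_size = 32 * 1024 * 1024;
static std::mutex s_mask_data_table_lock;

float* GetMaskData(int index) {
  std::lock_guard<std::mutex> lock(s_mask_data_table_lock);
  auto& slot = mask_data_tables[index];
  if (!slot) {
    // Zero-initializes max_data_size floats; freed automatically when the
    // map entry is destroyed.
    slot = std::make_unique<float[]>(max_data_size);
  }
  return slot.get();
}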

chen-zhiyu pushed a commit to chen-zhiyu/Paddle that referenced this pull request on Oct 15, 2020

* add dropout,log_loss, test=kunlun
* fix dropout, test=kunlun
* polish error message, test=kunlun
* change boost::get to BOOST_GET_CONST, test=kunlun
* fix copyright, test=kunlun
Labels: None yet
Projects: None yet
Successfully merging this pull request may close these issues: None yet

4 participants