Enable is_test attr of batch norm and drop out op for test program #8642
Conversation
Great work!
@@ -186,9 +186,7 @@ void Prune(const proto::ProgramDesc& input, proto::ProgramDesc* output) {
   prune_impl(input, output, 0, -1, dependent_vars);
 }

 void inference_optimize_impl(const proto::ProgramDesc& input,
                              proto::ProgramDesc* output, int block_id) {
   *output = input;
output -> inout
Done.
@@ -205,7 +203,12 @@ void inference_optimize_impl(const proto::ProgramDesc& input,
Looking at this, the original inference_program already had this change; I simply hadn't noticed. As I understand it, any op that has the is_test attribute should have it set to true in the inference_program, regardless of op type. So can the check on op type at lines 192 - 193 be removed?
Yes, the inference_program has already been processed by the inference_optimize function by the time it is generated.
You make a very good point; fixed~
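The suggestion above is to drop the per-op-type whitelist and instead flip is_test on any op that carries the attribute. A minimal plain-Python sketch of that idea (not the actual Paddle implementation; the dict-based op representation here is an illustrative assumption):

```python
def make_test_ops(ops):
    """Return a copy of the op list with is_test forced to True wherever
    the attribute is present, leaving all other ops untouched."""
    optimized = []
    for op in ops:
        op = dict(op)  # shallow copy so the training ops stay unchanged
        if "is_test" in op.get("attrs", {}):
            # rebuild the attrs dict with is_test overridden to True
            op["attrs"] = dict(op["attrs"], is_test=True)
        optimized.append(op)
    return optimized

train_ops = [
    {"type": "batch_norm", "attrs": {"is_test": False}},
    {"type": "dropout", "attrs": {"is_test": False, "dropout_prob": 0.5}},
    {"type": "mul", "attrs": {}},  # no is_test attribute: passes through as-is
]
test_ops = make_test_ops(train_ops)
```

No op-type check is needed: batch_norm and dropout are handled simply because they declare is_test, and any future op with the attribute is covered automatically.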
Lines 30 - 31 can be deleted.
Done
@@ -115,7 +115,7 @@ def train(net_type, use_cuda, save_dirname, is_local):
     acc = fluid.layers.accuracy(input=predict, label=label)

     # Test program
-    test_program = fluid.default_main_program().clone()
+    test_program = fluid.default_main_program().clone_as_test_program()
It would be best to change every place where a test_program is generated; otherwise users may get confused.
All three places that generate a test_program have been changed~
Oh, I had assumed every example came with a test...
python/paddle/fluid/framework.py
Outdated
    p.sync_with_cpp()
    p.copy_param_info_from(self)
    return p
Perhaps we could add a parameter to the clone function, turning it into clone(for_test=False) with a default value of False. That would avoid duplicating the implementation. What do you think? Also, please add some comments to explain it.
Can't agree more. Done.
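To illustrate the clone(for_test=False) API agreed on above, here is a hypothetical, self-contained sketch; the Program class and dict-based ops are simplified stand-ins for the real fluid.Program, not its actual implementation:

```python
import copy

class Program:
    """Toy stand-in for fluid.Program, holding a list of op dicts."""

    def __init__(self, ops):
        self.ops = ops

    def clone(self, for_test=False):
        """Deep-copy this program.

        When for_test is True, set is_test=True on every op that has the
        attribute (e.g. batch_norm, dropout), so the clone runs in
        inference mode while the original keeps its training behavior.
        """
        p = Program(copy.deepcopy(self.ops))
        if for_test:
            for op in p.ops:
                if "is_test" in op.get("attrs", {}):
                    op["attrs"]["is_test"] = True
        return p

main = Program([{"type": "dropout", "attrs": {"is_test": False}}])
# one method with a flag, instead of a separate clone_as_test_program()
test_program = main.clone(for_test=True)
```

A single clone entry point with a default-False flag keeps the common deep-copy path in one place, which is the deduplication benefit the reviewer pointed out.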
LGTM. Thanks for your work.
fix #8372