
Implement basic Load() and modify example based on updated inference design #7690

Merged
merged 18 commits into from
Jan 30, 2018

Conversation

sidgoyal78
Contributor

@sidgoyal78 sidgoyal78 commented Jan 19, 2018

This PR basically modifies the current example.cc and inference.* files (in addition to changes in program_desc.*) to implement a basic Load() function which follows the current design doc by @Xreki (PR #7315). The example code is modified accordingly.

The example code runs successfully, but there is another TODO item.

TODO: The new getters and setters for feed and fetch vars in program_desc should have corresponding implementations in protobuf.cc.

@sidgoyal78 sidgoyal78 added the 预测 (Inference: originally named Inference; includes C-API inference issues, etc.) label Jan 19, 2018

void LoadInferenceModel(framework::Executor& executor,
framework::Scope& scope,
const std::string& dirname);
Contributor

The definition of InferenceEngine will be deleted (or you can delete it in this PR). I suggest implementing all load and save functions as global functions in a separate file named io.cc, just like io.py in the Python API.

About the function's name, I suggest we use Load(...) directly.

Contributor Author

Yes, I just kept this to have a minimal InferenceEngine for now to get the load() working. Just to clarify, this calls LoadModelAndParam, which is in a new namespace infer (in inference.cc). So, as you suggested, I will move this to a separate file in the future. Do you think the name LoadModelAndParam is not okay?

Contributor

Do you think the name LoadModelAndParam is not okay?

I think we can use Load(...) for simplicity.

in the future, i will move this to a separate file

I suggest doing this now to fix the interface for users as soon as possible.

Contributor Author

Okay, will fix naming and file structure. Thanks.

@@ -45,10 +45,19 @@ class ProgramDesc {

proto::ProgramDesc *Proto();

// 4 utility functions for inference (TODO: Better comment)
const std::vector<std::string> &GetFeedVarNames() const;
const std::vector<std::string> &GetFetchVarNames() const;
Contributor

I think the two functions are not specific to inference. When there is a feed_op in the ProgramDesc, we can always call this function to get the feed var names. The same goes for GetFetchVarNames().

I am thinking of another implementation: not keeping a feed_var_names_ member, but instead traversing all the operators to find the feed and fetch variable names. Like:

framework::BlockDesc* global_block = program_->MutableBlock(0);
feed_var_names_.clear();
fetch_var_names_.clear();
for (auto* op : global_block->AllOps()) {
  if (op->Type() == "feed") {
    feed_var_names_.insert(feed_var_names_.begin(), op->Output("Out")[0]);
  } else if (op->Type() == "fetch") {
    fetch_var_names_.push_back(op->Input("X")[0]);
  }
}

Which way looks better?

Contributor Author

Thanks for the nice suggestion. But if we go with option 2 (your suggestion), then we will need some placeholder class to keep these 2 lists, right? (For example, ProgramBuilder in your initial design.) Otherwise, these 2 lists will have to float around in the program.
Or maybe I misunderstood?

Contributor

we will need some placeholder class for keeping these 2 lists right?

There is no need to keep the 2 lists. When we want them, we can always traverse all the operators of the given ProgramDesc to get them; it is not time-consuming. This way we can make sure that the feed_var_names and fetch_var_names obtained from the two interfaces are consistent with the user's ProgramDesc.

Contributor Author

Sorry, I don't understand this:

We can make sure that the feed_var_names and fetch_var_names got from the two interfaces are consistent

If we don't keep feed_var_names and fetch_var_names as private members of the ProgramDesc class, then we won't need the 4 functions (GetFetch, GetFeed, InsertFeed, InsertFetch), right?

Contributor
@kexinzhao kexinzhao Jan 24, 2018

@sidgoyal78 I prefer @Xreki's suggestion because we don't need to add extra members to ProgramDesc class.

In your code, it is possible for the feed_var_names data member to become inconsistent with the actual info in the ProgramDesc, because you construct feed_var_names using InsertFeedVarName.

Following @Xreki's suggestion:
We don't change any code in the ProgramDesc class at all, so we don't need these four functions.
Instead, we provide general free functions (not member functions of the ProgramDesc class):

std::vector<std::string> GetFeedVarNames(const ProgramDesc&);
std::vector<std::string> GetFetchVarNames(const ProgramDesc&);

which obtain the vectors of names on the fly by traversing the global block. This way we also make sure the obtained names are consistent with the program desc.

Contributor Author

I think your suggestion does not take InsertFeed and InsertFetch into account. Maybe we can do without those inserts in this PR (assuming the user is not given insert functionality).

Contributor

Yes. I don't think InsertFetch and InsertFeed are good utility functions because they are error-prone. We can provide SetFeed/SetFetch functions instead, to overwrite the feed/fetch info contained in the ProgramDesc if the user wants to.

@@ -17,35 +17,31 @@ limitations under the License. */
#include "paddle/framework/block_desc.h"
#include "paddle/framework/lod_tensor.h"
#include "paddle/framework/program_desc.h"
#include "paddle/framework/executor.h"
Contributor

Please follow alphabetical order when including files, i.e., put executor.h under block_desc.h

@Xreki Xreki added this to DOING in Inference Framework Jan 23, 2018
@sidgoyal78
Contributor Author

sidgoyal78 commented Jan 26, 2018

I think the example runs successfully now:

GLOG_v=3 ./build/paddle/inference/example --dirname=recognize_digits_mlp.inference.model/
WARNING: Logging before InitGoogleLogging() is written to STDERR
W0126 05:50:01.332662 39100 init.cc:56] 'GPU' is not supported, Please re-compile with WITH_GPU option
FLAGS_dirname: recognize_digits_mlp.inference.model/
I0126 05:50:01.332861 39100 io.cc:77] loading model from recognize_digits_mlp.inference.model//__model__
I0126 05:50:01.332942 39100 io.cc:83] program_desc_str's size: 2877
I0126 05:50:01.333258 39100 io.cc:53] parameter's name: fc_2.w_0
I0126 05:50:01.333325 39100 io.cc:53] parameter's name: fc_1.b_0
I0126 05:50:01.333370 39100 io.cc:53] parameter's name: fc_1.w_0
I0126 05:50:01.333428 39100 io.cc:53] parameter's name: fc_2.b_0
I0126 05:50:01.333467 39100 io.cc:53] parameter's name: fc_0.w_0
I0126 05:50:01.333493 39100 io.cc:53] parameter's name: fc_0.b_0
I0126 05:50:01.333515 39100 scope.cc:48] Create variable fc_0.b_0
I0126 05:50:01.333528 39100 executor.cc:100] Create Variable fc_0.b_0 global, which pointer is 0x2beda20
I0126 05:50:01.333542 39100 scope.cc:48] Create variable fc_1.b_0
I0126 05:50:01.333550 39100 executor.cc:100] Create Variable fc_1.b_0 global, which pointer is 0x2bef220
I0126 05:50:01.333560 39100 scope.cc:48] Create variable fc_2.b_0
I0126 05:50:01.333569 39100 executor.cc:100] Create Variable fc_2.b_0 global, which pointer is 0x2bef480
I0126 05:50:01.333578 39100 scope.cc:48] Create variable fc_0.w_0
I0126 05:50:01.333586 39100 executor.cc:100] Create Variable fc_0.w_0 global, which pointer is 0x2bef590
I0126 05:50:01.333596 39100 scope.cc:48] Create variable fc_2.w_0
I0126 05:50:01.333604 39100 executor.cc:100] Create Variable fc_2.w_0 global, which pointer is 0x2bef6a0
I0126 05:50:01.333613 39100 scope.cc:48] Create variable fc_1.w_0
I0126 05:50:01.333626 39100 executor.cc:100] Create Variable fc_1.w_0 global, which pointer is 0x2bef7f0
I0126 05:50:01.333919 39100 executor.cc:127] Op(load), inputs:{}, outputs:{Out[fc_2.w_0[64, 10]({})]}.
I0126 05:50:01.333997 39100 executor.cc:127] Op(load), inputs:{}, outputs:{Out[fc_1.b_0[64]({})]}.
I0126 05:50:01.334085 39100 executor.cc:127] Op(load), inputs:{}, outputs:{Out[fc_1.w_0[128, 64]({})]}.
I0126 05:50:01.334131 39100 executor.cc:127] Op(load), inputs:{}, outputs:{Out[fc_2.b_0[10]({})]}.
I0126 05:50:01.334601 39100 executor.cc:127] Op(load), inputs:{}, outputs:{Out[fc_0.w_0[784, 128]({})]}.
I0126 05:50:01.334651 39100 executor.cc:127] Op(load), inputs:{}, outputs:{Out[fc_0.b_0[128]({})]}.
I0126 05:50:01.336215 39100 feed_fetch_method.cc:26] SetFeedVariable name=feed index=0
I0126 05:50:01.336239 39100 scope.cc:48] Create variable feed
I0126 05:50:01.336266 39100 executor.cc:100] Create Variable fc_0.b_0 global, which pointer is 0x2beda20
I0126 05:50:01.336277 39100 scope.cc:48] Create variable learning_rate_3
I0126 05:50:01.336285 39100 executor.cc:100] Create Variable learning_rate_3 global, which pointer is 0x2c06cf0
I0126 05:50:01.336294 39100 executor.cc:100] Create Variable fc_0.w_0 global, which pointer is 0x2bef590
I0126 05:50:01.336304 39100 scope.cc:48] Create variable accuracy_1.tmp_1
I0126 05:50:01.336314 39100 executor.cc:105] Create Variable accuracy_1.tmp_1 locally, which pointer is 0x2c06ec0
I0126 05:50:01.336325 39100 scope.cc:48] Create variable fc_0.tmp_2
I0126 05:50:01.336334 39100 executor.cc:105] Create Variable fc_0.tmp_2 locally, which pointer is 0x2c06f60
I0126 05:50:01.336343 39100 scope.cc:48] Create variable fc_0.w_0@GRAD
I0126 05:50:01.336354 39100 executor.cc:105] Create Variable fc_0.w_0@GRAD locally, which pointer is 0x2c07320
I0126 05:50:01.336362 39100 scope.cc:48] Create variable fc_1.tmp_0@GRAD
I0126 05:50:01.336370 39100 executor.cc:105] Create Variable fc_1.tmp_0@GRAD locally, which pointer is 0x2c07430
I0126 05:50:01.336380 39100 scope.cc:48] Create variable velocity_1
I0126 05:50:01.336387 39100 executor.cc:100] Create Variable velocity_1 global, which pointer is 0x2c07540
I0126 05:50:01.336396 39100 scope.cc:48] Create variable velocity_5
I0126 05:50:01.336405 39100 executor.cc:100] Create Variable velocity_5 global, which pointer is 0x2c07650
I0126 05:50:01.336414 39100 scope.cc:48] Create variable mean_0.tmp_0
I0126 05:50:01.336422 39100 executor.cc:105] Create Variable mean_0.tmp_0 locally, which pointer is 0x2c07760
I0126 05:50:01.336434 39100 scope.cc:48] Create variable velocity_3
I0126 05:50:01.336443 39100 executor.cc:100] Create Variable velocity_3 global, which pointer is 0x2c078b0
I0126 05:50:01.336453 39100 scope.cc:48] Create variable _generated_var_2
I0126 05:50:01.336462 39100 executor.cc:105] Create Variable _generated_var_2 locally, which pointer is 0x2bef740
I0126 05:50:01.336472 39100 scope.cc:48] Create variable fc_1.w_0@GRAD
I0126 05:50:01.336479 39100 executor.cc:105] Create Variable fc_1.w_0@GRAD locally, which pointer is 0x2c07b50
I0126 05:50:01.336489 39100 scope.cc:48] Create variable accuracy_1.tmp_2
I0126 05:50:01.336498 39100 executor.cc:105] Create Variable accuracy_1.tmp_2 locally, which pointer is 0x2c07c80
I0126 05:50:01.336506 39100 scope.cc:48] Create variable velocity_2
I0126 05:50:01.336515 39100 executor.cc:100] Create Variable velocity_2 global, which pointer is 0x2c07d90
I0126 05:50:01.336524 39100 scope.cc:48] Create variable fc_2.tmp_0@GRAD
I0126 05:50:01.336534 39100 executor.cc:105] Create Variable fc_2.tmp_0@GRAD locally, which pointer is 0x2c07ea0
I0126 05:50:01.336549 39100 scope.cc:48] Create variable velocity_4
I0126 05:50:01.336557 39100 executor.cc:100] Create Variable velocity_4 global, which pointer is 0x2c07fb0
I0126 05:50:01.336566 39100 scope.cc:48] Create variable x
I0126 05:50:01.336575 39100 executor.cc:105] Create Variable x locally, which pointer is 0x2c080c0
I0126 05:50:01.336585 39100 executor.cc:100] Create Variable fc_1.w_0 global, which pointer is 0x2bef7f0
I0126 05:50:01.336593 39100 scope.cc:48] Create variable fc_1.tmp_1
I0126 05:50:01.336601 39100 executor.cc:105] Create Variable fc_1.tmp_1 locally, which pointer is 0x2c081d0
I0126 05:50:01.336611 39100 scope.cc:48] Create variable cast_0.tmp_0
I0126 05:50:01.336619 39100 executor.cc:105] Create Variable cast_0.tmp_0 locally, which pointer is 0x2c077e0
I0126 05:50:01.336628 39100 scope.cc:48] Create variable learning_rate_2
I0126 05:50:01.336635 39100 executor.cc:100] Create Variable learning_rate_2 global, which pointer is 0x2c08450
I0126 05:50:01.336644 39100 scope.cc:48] Create variable fetch
I0126 05:50:01.336654 39100 executor.cc:100] Create Variable fetch global, which pointer is 0x2c08560
I0126 05:50:01.336663 39100 scope.cc:48] Create variable velocity_0
I0126 05:50:01.336671 39100 executor.cc:100] Create Variable velocity_0 global, which pointer is 0x2c08600
I0126 05:50:01.336683 39100 scope.cc:48] Create variable fc_2.tmp_2
I0126 05:50:01.336693 39100 executor.cc:105] Create Variable fc_2.tmp_2 locally, which pointer is 0x2c08710
I0126 05:50:01.336701 39100 scope.cc:48] Create variable fc_1.tmp_0
I0126 05:50:01.336712 39100 executor.cc:105] Create Variable fc_1.tmp_0 locally, which pointer is 0x2c08820
I0126 05:50:01.336721 39100 scope.cc:48] Create variable y
I0126 05:50:01.336730 39100 executor.cc:105] Create Variable y locally, which pointer is 0x2c08930
I0126 05:50:01.336740 39100 executor.cc:100] Create Variable fc_2.w_0 global, which pointer is 0x2bef6a0
I0126 05:50:01.336750 39100 executor.cc:100] Create Variable fc_1.b_0 global, which pointer is 0x2bef220
I0126 05:50:01.336760 39100 scope.cc:48] Create variable cross_entropy_0.tmp_0@GRAD
I0126 05:50:01.336769 39100 executor.cc:105] Create Variable cross_entropy_0.tmp_0@GRAD locally, which pointer is 0x2c08a70
I0126 05:50:01.336779 39100 scope.cc:48] Create variable accuracy_1.tmp_0
I0126 05:50:01.336793 39100 executor.cc:105] Create Variable accuracy_1.tmp_0 locally, which pointer is 0x2c08bd0
I0126 05:50:01.336802 39100 scope.cc:48] Create variable learning_rate_4
I0126 05:50:01.336812 39100 executor.cc:100] Create Variable learning_rate_4 global, which pointer is 0x2c08ce0
I0126 05:50:01.336822 39100 scope.cc:48] Create variable cast_1.tmp_0
I0126 05:50:01.336829 39100 executor.cc:105] Create Variable cast_1.tmp_0 locally, which pointer is 0x2c08dd0
I0126 05:50:01.336835 39100 scope.cc:48] Create variable learning_rate_5
I0126 05:50:01.336843 39100 executor.cc:100] Create Variable learning_rate_5 global, which pointer is 0x2c08ee0
I0126 05:50:01.336855 39100 scope.cc:48] Create variable fc_2.tmp_0
I0126 05:50:01.336863 39100 executor.cc:105] Create Variable fc_2.tmp_0 locally, which pointer is 0x2c08ff0
I0126 05:50:01.336874 39100 scope.cc:48] Create variable cross_entropy_0.tmp_0
I0126 05:50:01.336881 39100 executor.cc:105] Create Variable cross_entropy_0.tmp_0 locally, which pointer is 0x2c09120
I0126 05:50:01.336891 39100 scope.cc:48] Create variable fc_2.tmp_2@GRAD
I0126 05:50:01.336900 39100 executor.cc:105] Create Variable fc_2.tmp_2@GRAD locally, which pointer is 0x2c09230
I0126 05:50:01.336910 39100 scope.cc:48] Create variable accuracy_0_1_correct
I0126 05:50:01.336918 39100 executor.cc:100] Create Variable accuracy_0_1_correct global, which pointer is 0x2c09360
I0126 05:50:01.336928 39100 scope.cc:48] Create variable mean_0.tmp_0@GRAD
I0126 05:50:01.336936 39100 executor.cc:105] Create Variable mean_0.tmp_0@GRAD locally, which pointer is 0x2c09490
I0126 05:50:01.336946 39100 scope.cc:48] Create variable fc_0.tmp_1@GRAD
I0126 05:50:01.336954 39100 executor.cc:105] Create Variable fc_0.tmp_1@GRAD locally, which pointer is 0x2c095a0
I0126 05:50:01.336962 39100 scope.cc:48] Create variable fc_0.tmp_1
I0126 05:50:01.336971 39100 executor.cc:105] Create Variable fc_0.tmp_1 locally, which pointer is 0x2c09780
I0126 05:50:01.336983 39100 scope.cc:48] Create variable fc_2.w_0@GRAD
I0126 05:50:01.336992 39100 executor.cc:105] Create Variable fc_2.w_0@GRAD locally, which pointer is 0x2c09890
I0126 05:50:01.337000 39100 scope.cc:48] Create variable fc_1.tmp_2@GRAD
I0126 05:50:01.337008 39100 executor.cc:105] Create Variable fc_1.tmp_2@GRAD locally, which pointer is 0x2c099a0
I0126 05:50:01.337018 39100 scope.cc:48] Create variable accuracy_0.tmp_1
I0126 05:50:01.337025 39100 executor.cc:105] Create Variable accuracy_0.tmp_1 locally, which pointer is 0x2c09ad0
I0126 05:50:01.337034 39100 scope.cc:48] Create variable learning_rate_1
I0126 05:50:01.337043 39100 executor.cc:100] Create Variable learning_rate_1 global, which pointer is 0x2c09be0
I0126 05:50:01.337050 39100 scope.cc:48] Create variable fc_2.tmp_1
I0126 05:50:01.337059 39100 executor.cc:105] Create Variable fc_2.tmp_1 locally, which pointer is 0x2c09cf0
I0126 05:50:01.337069 39100 scope.cc:48] Create variable fc_0.tmp_0@GRAD
I0126 05:50:01.337077 39100 executor.cc:105] Create Variable fc_0.tmp_0@GRAD locally, which pointer is 0x2c09e00
I0126 05:50:01.337085 39100 scope.cc:48] Create variable fc_1.tmp_1@GRAD
I0126 05:50:01.337096 39100 executor.cc:105] Create Variable fc_1.tmp_1@GRAD locally, which pointer is 0x2c09f10
I0126 05:50:01.337106 39100 scope.cc:48] Create variable _generated_var_0
I0126 05:50:01.337116 39100 executor.cc:105] Create Variable _generated_var_0 locally, which pointer is 0x2c0a040
I0126 05:50:01.337124 39100 scope.cc:48] Create variable fc_0.tmp_0
I0126 05:50:01.337132 39100 executor.cc:105] Create Variable fc_0.tmp_0 locally, which pointer is 0x2c0a150
I0126 05:50:01.337141 39100 scope.cc:48] Create variable accuracy_0_0_total
I0126 05:50:01.337150 39100 executor.cc:100] Create Variable accuracy_0_0_total global, which pointer is 0x2c0a280
I0126 05:50:01.337159 39100 executor.cc:100] Create Variable fc_2.b_0 global, which pointer is 0x2bef480
I0126 05:50:01.337168 39100 executor.cc:100] Create Variable feed global, which pointer is 0x2c06bd0
I0126 05:50:01.337177 39100 scope.cc:48] Create variable fc_0.tmp_2@GRAD
I0126 05:50:01.337185 39100 executor.cc:105] Create Variable fc_0.tmp_2@GRAD locally, which pointer is 0x2c0a390
I0126 05:50:01.337193 39100 scope.cc:48] Create variable accuracy_0.tmp_0
I0126 05:50:01.337201 39100 executor.cc:105] Create Variable accuracy_0.tmp_0 locally, which pointer is 0x2c0a4c0
I0126 05:50:01.337210 39100 scope.cc:48] Create variable fc_1.b_0@GRAD
I0126 05:50:01.337218 39100 executor.cc:105] Create Variable fc_1.b_0@GRAD locally, which pointer is 0x2c0a5d0
I0126 05:50:01.337227 39100 scope.cc:48] Create variable learning_rate_0
I0126 05:50:01.337235 39100 executor.cc:100] Create Variable learning_rate_0 global, which pointer is 0x2c0a6e0
I0126 05:50:01.337244 39100 scope.cc:48] Create variable fc_1.tmp_2
I0126 05:50:01.337255 39100 executor.cc:105] Create Variable fc_1.tmp_2 locally, which pointer is 0x2c0a7f0
I0126 05:50:01.337265 39100 scope.cc:48] Create variable fc_0.b_0@GRAD
I0126 05:50:01.337272 39100 executor.cc:105] Create Variable fc_0.b_0@GRAD locally, which pointer is 0x2c0a900
I0126 05:50:01.337280 39100 scope.cc:48] Create variable fc_2.b_0@GRAD
I0126 05:50:01.337290 39100 executor.cc:105] Create Variable fc_2.b_0@GRAD locally, which pointer is 0x2c0aa10
I0126 05:50:01.337297 39100 scope.cc:48] Create variable fc_2.tmp_1@GRAD
I0126 05:50:01.337306 39100 executor.cc:105] Create Variable fc_2.tmp_1@GRAD locally, which pointer is 0x2c0ab20
I0126 05:50:01.337316 39100 scope.cc:48] Create variable _generated_var_1
I0126 05:50:01.337323 39100 executor.cc:105] Create Variable _generated_var_1 locally, which pointer is 0x2c0ac50
I0126 05:50:01.337345 39100 feed_op.cc:44] Feed Var feed's 0 column to var x
I0126 05:50:01.337357 39100 tensor_util.h:36] Copy 1, 784 from CPUPlace to CPUPlace
I0126 05:50:01.337416 39100 executor.cc:127] Op(feed), inputs:{X[feed[-1]({{}})]}, outputs:{Out[x[1, 784]({})]}.
I0126 05:50:01.337446 39100 mul_op.cc:36] mul operator x.shape=1, 784 y.shape=784, 128 x_num_col_dims=1 y_num_col_dims=1
I0126 05:50:01.337482 39100 operator.cc:488] expected_kernel_key:data_type[5]:data_layout[ANY_LAYOUT]:place[CPUPlace]:library_type[PLAIN]
I0126 05:50:01.355481 39100 executor.cc:127] Op(mul), inputs:{X[x[1, 784]({})], Y[fc_0.w_0[784, 128]({})]}, outputs:{Out[fc_0.tmp_0[1, 128]({})]}.
I0126 05:50:01.355557 39100 operator.cc:488] expected_kernel_key:data_type[5]:data_layout[ANY_LAYOUT]:place[CPUPlace]:library_type[PLAIN]
I0126 05:50:01.355615 39100 executor.cc:127] Op(elementwise_add), inputs:{X[fc_0.tmp_0[1, 128]({})], Y[fc_0.b_0[128]({})]}, outputs:{Out[fc_0.tmp_1[1, 128]({})]}.
I0126 05:50:01.355644 39100 operator.cc:488] expected_kernel_key:data_type[5]:data_layout[ANY_LAYOUT]:place[CPUPlace]:library_type[PLAIN]
I0126 05:50:01.355680 39100 executor.cc:127] Op(relu), inputs:{X[fc_0.tmp_1[1, 128]({})]}, outputs:{Out[fc_0.tmp_2[1, 128]({})]}.
I0126 05:50:01.355707 39100 mul_op.cc:36] mul operator x.shape=1, 128 y.shape=128, 64 x_num_col_dims=1 y_num_col_dims=1
I0126 05:50:01.355726 39100 operator.cc:488] expected_kernel_key:data_type[5]:data_layout[ANY_LAYOUT]:place[CPUPlace]:library_type[PLAIN]
I0126 05:50:01.355789 39100 executor.cc:127] Op(mul), inputs:{X[fc_0.tmp_2[1, 128]({})], Y[fc_1.w_0[128, 64]({})]}, outputs:{Out[fc_1.tmp_0[1, 64]({})]}.
I0126 05:50:01.355815 39100 operator.cc:488] expected_kernel_key:data_type[5]:data_layout[ANY_LAYOUT]:place[CPUPlace]:library_type[PLAIN]
I0126 05:50:01.355844 39100 executor.cc:127] Op(elementwise_add), inputs:{X[fc_1.tmp_0[1, 64]({})], Y[fc_1.b_0[64]({})]}, outputs:{Out[fc_1.tmp_1[1, 64]({})]}.
I0126 05:50:01.355867 39100 operator.cc:488] expected_kernel_key:data_type[5]:data_layout[ANY_LAYOUT]:place[CPUPlace]:library_type[PLAIN]
I0126 05:50:01.355895 39100 executor.cc:127] Op(relu), inputs:{X[fc_1.tmp_1[1, 64]({})]}, outputs:{Out[fc_1.tmp_2[1, 64]({})]}.
I0126 05:50:01.355913 39100 mul_op.cc:36] mul operator x.shape=1, 64 y.shape=64, 10 x_num_col_dims=1 y_num_col_dims=1
I0126 05:50:01.355927 39100 operator.cc:488] expected_kernel_key:data_type[5]:data_layout[ANY_LAYOUT]:place[CPUPlace]:library_type[PLAIN]
I0126 05:50:01.355969 39100 executor.cc:127] Op(mul), inputs:{X[fc_1.tmp_2[1, 64]({})], Y[fc_2.w_0[64, 10]({})]}, outputs:{Out[fc_2.tmp_0[1, 10]({})]}.
I0126 05:50:01.355994 39100 operator.cc:488] expected_kernel_key:data_type[5]:data_layout[ANY_LAYOUT]:place[CPUPlace]:library_type[PLAIN]
I0126 05:50:01.356025 39100 executor.cc:127] Op(elementwise_add), inputs:{X[fc_2.tmp_0[1, 10]({})], Y[fc_2.b_0[10]({})]}, outputs:{Out[fc_2.tmp_1[1, 10]({})]}.
I0126 05:50:01.356048 39100 operator.cc:488] expected_kernel_key:data_type[5]:data_layout[ANY_LAYOUT]:place[CPUPlace]:library_type[PLAIN]
I0126 05:50:01.356119 39100 executor.cc:127] Op(softmax), inputs:{X[fc_2.tmp_1[1, 10]({})]}, outputs:{Out[fc_2.tmp_2[1, 10]({})]}.
I0126 05:50:01.356142 39100 tensor_util.h:36] Copy 1, 10 from CPUPlace to CPUPlace
I0126 05:50:01.356159 39100 fetch_op.cc:62] Fetch variable fc_2.tmp_2 to fetch
I0126 05:50:01.356179 39100 executor.cc:127] Op(fetch), inputs:{X[fc_2.tmp_2[1, 10]({})]}, outputs:{Out[fetch[-1]({{}})]}.
I0126 05:50:01.356210 39100 feed_fetch_method.cc:49] Fetch fetch with index 0 shape= 1, 10
I0126 05:50:01.356258 39102 scope.cc:33] Destroy variable _generated_var_1
dims_i: 1 10
I0126 05:50:01.356452 39102 scope.cc:33] Destroy variable fc_2.tmp_1@GRAD
I0126 05:50:01.356475 39102 scope.cc:33] Destroy variable fc_2.b_0@GRAD
I0126 05:50:01.356490 39102 scope.cc:33] Destroy variable fc_0.b_0@GRAD
I0126 05:50:01.356500 39102 scope.cc:33] Destroy variable fc_1.b_0@GRAD
I0126 05:50:01.356510 39102 scope.cc:33] Destroy variable accuracy_0.tmp_0
I0126 05:50:01.356519 39102 scope.cc:33] Destroy variable fc_0.tmp_2@GRAD
I0126 05:50:01.356529 39102 scope.cc:33] Destroy variable fc_0.tmp_0
I0126 05:50:01.356551 39102 scope.cc:33] Destroy variable _generated_var_0
I0126 05:50:01.356561 39102 scope.cc:33] Destroy variable fc_1.tmp_1@GRAD
I0126 05:50:01.356571 39102 scope.cc:33] Destroy variable accuracy_0.tmp_1
I0126 05:50:01.356582 39102 scope.cc:33] Destroy variable fc_1.tmp_2@GRAD
I0126 05:50:01.356592 39102 scope.cc:33] Destroy variable fc_2.w_0@GRAD
result: 0.117167 0.105163 0.0693661 0.232421 0.0753762 0.0512843 0.088262 0.0687214 0.0841529 0.108085
I0126 05:50:01.356642 39102 scope.cc:33] Destroy variable fc_0.tmp_1
I0126 05:50:01.356701 39102 scope.cc:33] Destroy variable fc_1.tmp_2
I0126 05:50:01.356720 39102 scope.cc:33] Destroy variable fc_0.tmp_1@GRAD
I0126 05:50:01.356735 39102 scope.cc:33] Destroy variable accuracy_1.tmp_2
I0126 05:50:01.356745 39102 scope.cc:33] Destroy variable fc_1.w_0@GRAD
I0126 05:50:01.356755 39102 scope.cc:33] Destroy variable accuracy_1.tmp_0
I0126 05:50:01.356763 39102 scope.cc:33] Destroy variable _generated_var_2
I0126 05:50:01.356777 39102 scope.cc:33] Destroy variable fc_0.tmp_0@GRAD
I0126 05:50:01.356787 39102 scope.cc:33] Destroy variable fc_2.tmp_1
I0126 05:50:01.356798 39102 scope.cc:33] Destroy variable mean_0.tmp_0
I0126 05:50:01.356808 39102 scope.cc:33] Destroy variable fc_0.w_0@GRAD
I0126 05:50:01.356817 39102 scope.cc:33] Destroy variable x
I0126 05:50:01.356830 39102 scope.cc:33] Destroy variable fc_1.tmp_1
I0126 05:50:01.356843 39102 scope.cc:33] Destroy variable cast_1.tmp_0
I0126 05:50:01.356858 39102 scope.cc:33] Destroy variable fc_0.tmp_2
I0126 05:50:01.356869 39102 scope.cc:33] Destroy variable fc_2.tmp_0@GRAD
I0126 05:50:01.356878 39102 scope.cc:33] Destroy variable cast_0.tmp_0
I0126 05:50:01.356887 39102 scope.cc:33] Destroy variable fc_2.tmp_0
I0126 05:50:01.356900 39102 scope.cc:33] Destroy variable cross_entropy_0.tmp_0@GRAD
I0126 05:50:01.356910 39102 scope.cc:33] Destroy variable accuracy_1.tmp_1
I0126 05:50:01.356920 39102 scope.cc:33] Destroy variable fc_1.tmp_0@GRAD
I0126 05:50:01.356928 39102 scope.cc:33] Destroy variable y
I0126 05:50:01.356943 39102 scope.cc:33] Destroy variable fc_1.tmp_0
I0126 05:50:01.356961 39102 scope.cc:33] Destroy variable fc_2.tmp_2
I0126 05:50:01.356971 39102 scope.cc:33] Destroy variable cross_entropy_0.tmp_0
I0126 05:50:01.356986 39102 scope.cc:33] Destroy variable fc_2.tmp_2@GRAD
I0126 05:50:01.357002 39102 scope.cc:33] Destroy variable mean_0.tmp_0@GRAD
I0126 05:50:01.357206 39100 scope.cc:33] Destroy variable learning_rate_1
I0126 05:50:01.357295 39100 scope.cc:33] Destroy variable accuracy_0_1_correct
I0126 05:50:01.357352 39100 scope.cc:33] Destroy variable learning_rate_5
I0126 05:50:01.357589 39100 scope.cc:33] Destroy variable learning_rate_4
I0126 05:50:01.357800 39100 scope.cc:33] Destroy variable fetch
I0126 05:50:01.357902 39100 scope.cc:33] Destroy variable velocity_0
I0126 05:50:01.357919 39100 scope.cc:33] Destroy variable learning_rate_2
I0126 05:50:01.357933 39100 scope.cc:33] Destroy variable learning_rate_0
I0126 05:50:01.357944 39100 scope.cc:33] Destroy variable velocity_4
I0126 05:50:01.357955 39100 scope.cc:33] Destroy variable velocity_2
I0126 05:50:01.357966 39100 scope.cc:33] Destroy variable fc_0.w_0
I0126 05:50:01.357985 39100 scope.cc:33] Destroy variable fc_2.w_0
I0126 05:50:01.358002 39100 scope.cc:33] Destroy variable fc_2.b_0
I0126 05:50:01.358021 39100 scope.cc:33] Destroy variable accuracy_0_0_total
I0126 05:50:01.358052 39100 scope.cc:33] Destroy variable fc_1.b_0
I0126 05:50:01.358265 39100 scope.cc:33] Destroy variable fc_0.b_0
I0126 05:50:01.358417 39100 scope.cc:33] Destroy variable feed
I0126 05:50:01.358518 39100 scope.cc:33] Destroy variable learning_rate_3
I0126 05:50:01.358721 39100 scope.cc:33] Destroy variable velocity_3
I0126 05:50:01.358928 39100 scope.cc:33] Destroy variable fc_1.w_0
I0126 05:50:01.358964 39100 scope.cc:33] Destroy variable velocity_5
I0126 05:50:01.358983 39100 scope.cc:33] Destroy variable velocity_1

The model was generated with the following script (which seems to have been deleted as of now):

# Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import print_function
import numpy as np
import paddle.v2 as paddle
import paddle.v2.fluid as fluid

BATCH_SIZE = 128
image = fluid.layers.data(name='x', shape=[784], dtype='float32')

regularizer = fluid.regularizer.L2Decay(0.0005 * BATCH_SIZE)

hidden1 = fluid.layers.fc(input=image,
                          size=128,
                          act='relu',
                          param_attr=fluid.ParamAttr(
                              regularizer=regularizer))

hidden2 = fluid.layers.fc(input=hidden1,
                          size=64,
                          act='relu',
                          param_attr=regularizer)

predict = fluid.layers.fc(input=hidden2,
                          size=10,
                          act='softmax',
                          param_attr=regularizer)

label = fluid.layers.data(name='y', shape=[1], dtype='int64')

cost = fluid.layers.cross_entropy(input=predict, label=label)
avg_cost = fluid.layers.mean(x=cost)

optimizer = fluid.optimizer.Momentum(learning_rate=0.001, momentum=0.9)
opts = optimizer.minimize(avg_cost)

accuracy = fluid.evaluator.Accuracy(input=predict, label=label)

inference_program = fluid.default_main_program().clone()
with fluid.program_guard(inference_program):
    test_accuracy = fluid.evaluator.Accuracy(input=predict, label=label)
    test_target = [avg_cost] + test_accuracy.metrics + test_accuracy.states
    inference_program = fluid.io.get_inference_program(test_target)

train_reader = paddle.batch(
    paddle.reader.shuffle(
        paddle.dataset.mnist.train(), buf_size=8192),
    batch_size=BATCH_SIZE)

test_reader = paddle.batch(paddle.dataset.mnist.test(), batch_size=128)

place = fluid.CPUPlace()
exe = fluid.Executor(place)
feeder = fluid.DataFeeder(feed_list=[image, label], place=place)
exe.run(fluid.default_startup_program())

PASS_NUM = 100
break_out = False
for pass_id in range(PASS_NUM):
    accuracy.reset(exe)
    for data in train_reader():
        out, acc = exe.run(fluid.default_main_program(),
                           feed=feeder.feed(data),
                           fetch_list=[avg_cost] + accuracy.metrics)
        pass_acc = accuracy.eval(exe)

        test_accuracy.reset(exe)
        for data in test_reader():
            out, acc = exe.run(inference_program,
                               feed=feeder.feed(data),
                               fetch_list=[avg_cost] + test_accuracy.metrics)

        test_pass_acc = test_accuracy.eval(exe)
        print("pass_id=" + str(pass_id) + " train_cost=" + str(
            out) + " train_acc=" + str(acc) + " train_pass_acc=" + str(pass_acc)
              + " test_acc=" + str(test_pass_acc))

        if test_pass_acc > 0.7:
            fluid.io.save_inference_model(
                "./recognize_digits_mlp.inference.model/", ["x"], [predict],
                exe)
            break_out = True
            break
    if break_out:
        break

# Use load_inference_model to obtain the inference program desc,
# the feed_target_names (the names of variables that will be fed
# data using feed operators), and the fetch_targets (variables that
# we want to obtain data from using fetch operators).
[infer_prog, feed_target_names, fetch_targets] = fluid.io.load_inference_model(
    "./recognize_digits_mlp.inference.model/", exe)

tensor_x = np.random.rand(1, 784).astype("float32")
# Construct feed as a dictionary of {feed_target_name: feed_target_data}
# and results will contain a list of data corresponding to fetch_targets.
results = exe.run(infer_prog,
                  feed={feed_target_names[0]: tensor_x},
                  fetch_list=fetch_targets)
print(results[0])

exit(0)

@@ -28,12 +30,29 @@ int main(int argc, char** argv) {
exit(1);
}

// 1. Define place, executor, scope
auto* place = new paddle::platform::CPUPlace();
Contributor:

Change this line to

auto place = paddle::platform::CPUPlace();

Contributor Author:

Thanks for the review. Quick question: why would we want to use auto over auto*, or vice versa?

Contributor:

I think the question is really about when you want to create an instance of a class directly versus `new` one on the heap; auto and auto* then follow naturally. Below is just my personal opinion.

For this place example, because place is only used as a variable to distinguish CPU/GPU, we don't need to modify its state or call its member functions. To simplify the code, we prefer to construct paddle::platform::CPUPlace() directly.

For other cases, where you may want to modify the state of an instance or use its member functions, it may be better to use auto* x = new xxx.

Contributor Author:

Done

auto* place = new paddle::platform::CPUPlace();
paddle::framework::InitDevices();
paddle::framework::Executor* executor =
new paddle::framework::Executor(*place);
Contributor:

Change to

auto* executor = new paddle::framework::Executor(place);

Contributor Author:

Done

@@ -62,6 +99,9 @@ int main(int argc, char** argv) {
std::cout << std::endl;
}

delete engine;
delete place;
Contributor:

delete this line

Contributor Author:

Done

framework::Scope& scope,
const std::string& dirname);

std::vector<std::string> GetFeedVarNames(framework::ProgramDesc* main_program);
Contributor:

I think we prefer to keep this function as a public member of the ProgramDesc class.
So that we can use the following code:

std::vector<std::string>& feed_var_names = main_program->GetFeedVarNames();

We still get the vector on the fly, meaning we don't add private data members to program desc.

Contributor Author:

Okay, I have a related question: do you think GetFeedVarNames() will be useful elsewhere, apart from the current inference use case?

Contributor:

I also think GetFeedVarNames and GetFetchVarNames are not io interfaces. @sidgoyal78 @kexinzhao I'm wondering whether we can simplify the new Executor.Run(...) using these two functions? I am not sure, just thinking out loud...

Contributor Author:

Okay. As of now, based on your and @kexinzhao's suggestions, I will put the GetFeed and GetFetch methods in ProgramDesc. We can then think about simplifying.

const std::string& dirname);

std::vector<std::string> GetFeedVarNames(framework::ProgramDesc* main_program);
std::vector<std::string> GetFetchVarNames(framework::ProgramDesc* main_program);
Contributor:

Same here.

paddle::framework::InitDevices();
paddle::framework::Executor* executor =
new paddle::framework::Executor(*place);
paddle::framework::Scope* scope = new paddle::framework::Scope();
Contributor:

paddle::framework::Scope* -> auto*

Contributor Author:

Done

// 3. Optional: perform optimization on the inference_program

// 4. Get the feed_var_names and fetch_var_names
std::vector<std::string> feed_var_names =
Contributor:

Maybe change feed/fetch_var_names to feed/fetch_target_names to make them consistent with the Python-side code.

Contributor Author:

Sure, will modify. Thanks.

Contributor Author:

I think it gets a bit confusing with variables feed_targets and fetch_targets. Maybe I will leave this for now.

Contributor:

There are feed_var_names and feed_var_name in the Python implementation, but they are not the same thing: feed_var_names holds the output names of the feed_op, while feed_var_name is the input name of the feed_op. So we should distinguish them. We now use feed_targets and feed_holder, respectively. Do you have any other suggestion?

@@ -15,7 +15,9 @@ limitations under the License. */
#include <time.h>
#include <iostream>
#include "gflags/gflags.h"
#include "paddle/inference/inference.h"
#include "io.h"
Contributor:

io.h -> paddle/inference/io.h

Contributor Author:

Done

See the License for the specific language governing permissions and
limitations under the License. */

#include "io.h"
Contributor:

io.h -> paddle/inference/io.h

Contributor Author:

Done


namespace paddle {

namespace io {
Contributor:

namespace io -> namespace inference, the namespace's name should be the same as the directory tree.
Remove blank line 19

Contributor Author:

Done

namespace io {

bool IsParameter(const framework::VarDesc* var,
const framework::ProgramDesc* main_program) {
Contributor:

Just a reminder that this function is introduced to make the current inference ProgramDesc run. Normally, there should not be any unreferenced variables. We should also check the existence of all parameters and print a prompt message if any parameters are absent. This can be done in the next PR.

Contributor Author:

Noted.

op->CheckAttrs();
}
}
executor.Run(*load_program, &scope, 0, true, true);
Contributor:

Delete load_program here.

Contributor Author:

Done

return feed_var_names;
}

std::vector<std::string> GetFetchVarNames(
Contributor:

std::vector<std::string> -> const std::vector<std::string>&

std::vector<std::string> fetch_var_names;

for (auto* op : global_block->AllOps()) {
if (op->Type() == "fetch") {
Contributor:

fetch -> kFetchOpType

#include <vector>
#include "paddle/framework/block_desc.h"
#include "paddle/framework/executor.h"
#include "paddle/framework/init.h"
Contributor:

Line 21 can be removed.

Contributor Author:

Done



namespace paddle {

namespace io {
Contributor:

Remove blank line 27
Rename namespace to inference

Contributor Author:

Done

Contributor @Xreki left a comment:

There are some minor aspects to fix, but we can merge this PR first to avoid blocking other PRs. Will fix those minor aspects in the next PR.

BlockDesc *global_block = blocks_[0].get();
std::vector<std::string> feed_var_names;
for (auto *op : global_block->AllOps()) {
if (op->Type() == "feed") {
Contributor:

feed -> kFeedOpType
and let's rename feed_var_names to feed_target_names if there is no better candidate.

BlockDesc *global_block = blocks_[0].get();
std::vector<std::string> fetch_var_names;
for (auto *op : global_block->AllOps()) {
if (op->Type() == "fetch") {
Contributor:

fetch -> kFetchOpType
and let's rename fetch_var_names to fetch_target_names if there is no better candidate.

@@ -45,6 +45,10 @@ class ProgramDesc {

proto::ProgramDesc *Proto();

const std::vector<std::string> GetFeedVarNames();

Contributor:

remove this blank line.

inputfs.close();

framework::ProgramDesc* main_program =
new framework::ProgramDesc(program_desc_str);
Contributor:

We need to use std::unique_ptr here and not leave the delete task to users.

@Xreki Xreki merged commit 311334e into PaddlePaddle:develop Jan 30, 2018
Labels: 预测 (formerly "Inference"; covers C-API inference issues, etc.)
Projects: Inference Framework / Basic Usage (DONE)
