
demo/mobilenet inference #11510

Closed
wants to merge 7 commits

Conversation

@Superjomn (Contributor) commented Jun 15, 2018

I0625 14:11:18.414057 11213 mobilenet.cc:81] init predictor
I0625 14:11:19.366003 11213 mobilenet.cc:85] begin to process data
I0625 14:11:19.582965 11213 mobilenet.cc:46] process a line
I0625 14:11:19.664599 11213 mobilenet.cc:64] data size 270000
I0625 14:11:19.664620 11213 mobilenet.cc:65] data shape 4
I0625 14:11:19.666687 11213 mobilenet.cc:100] run executor
I0625 14:11:20.865559 11213 mobilenet.cc:104] output.size 1
I0625 14:11:20.865605 11213 mobilenet.cc:106] output: data[:10]	3 0.0267651 0.595492 0.553865 0.710751 0.837597 5 0.0162649 0.303328 -0.00768404 

@Superjomn Superjomn changed the title demo/mobilenet inference WIP demo/mobilenet inference Jun 15, 2018
@Superjomn Superjomn changed the title WIP demo/mobilenet inference demo/mobilenet inference Jun 25, 2018
cc_test(${TARGET} SRCS "${tests_SRCS}"
DEPS paddle_inference_api paddle_fluid
ARGS --data="${test_dir}/data.txt"
--modeldir="${test_dir}/model"
Contributor
The quotes on lines 43 and 44 must not be added; after removing them, it runs.
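A sketch of the corrected invocation the reviewer is asking for, with the quotes dropped from the ARGS values (target and variable names taken from the diff above):

```cmake
# Hypothetical corrected form: the quotes around the --data and
# --modeldir values are removed so the flags parse correctly.
cc_test(${TARGET} SRCS "${tests_SRCS}"
        DEPS paddle_inference_api paddle_fluid
        ARGS --data=${test_dir}/data.txt
             --modeldir=${test_dir}/model)
```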

Contributor Author
ok

config.param_file = FLAGS_modeldir + "/__params__";
config.prog_file = FLAGS_modeldir + "/__model__";
config.use_gpu = use_gpu;
config.fraction_of_gpu_memory = 0.15;
Contributor

This should be FLAGS_fraction_of_gpu_memory_to_use
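A sketch of the fix the reviewer suggests: take the value from the existing gflags flag instead of hard-coding 0.15. This is a fragment, not a complete program; it assumes the flag is defined elsewhere in the binary, and the field names are those shown in the diff above.

```cpp
// Make the gflags flag visible in this translation unit (sketch;
// the flag itself is assumed to be defined elsewhere).
DECLARE_double(fraction_of_gpu_memory_to_use);

config.param_file = FLAGS_modeldir + "/__params__";
config.prog_file = FLAGS_modeldir + "/__model__";
config.use_gpu = use_gpu;
// Use the flag value rather than a hard-coded literal.
config.fraction_of_gpu_memory = FLAGS_fraction_of_gpu_memory_to_use;
```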

*/
void Main(bool use_gpu) {
NativeConfig config;
// config.model_dir = FLAGS_modeldir;
Contributor

Can remove comment.

@@ -14,3 +14,38 @@
#

inference_api_test(simple_on_word2vec ARGS test_word2vec)

set(mobilenet_url "xxx")
Contributor

What's this for? I did not see any usage.

And remove the magic name "xxx".


inference_download_test_demo (mobilenet_inference_demo
SRCS mobilenet.cc
URL http://paddlemodels.bj.bcebos.com/inference-vis-demos%2Fmobilenet.tar.gz)
Contributor

Maybe you want to use mobilenet_url here?
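If the mobilenet_url variable from the earlier hunk is kept, the download rule could reuse it instead of repeating the URL. A sketch, using the variable name and model URL that appear in the diff:

```cmake
# Sketch: define the URL once and reuse it in the download rule,
# replacing both the "xxx" placeholder and the duplicated literal.
set(mobilenet_url "http://paddlemodels.bj.bcebos.com/inference-vis-demos%2Fmobilenet.tar.gz")

inference_download_test_demo(mobilenet_inference_demo
    SRCS mobilenet.cc
    URL ${mobilenet_url})
```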

DEPS paddle_inference_api paddle_fluid
ARGS --data=${test_dir}/data.txt
--modeldir=${test_dir}/model
--fraction_of_gpu_memory_to_use=0.5)
Contributor

BTW, this is a GPU flag; maybe it should not be placed here as a common flag.

Since I build with CPU only, it fails with ERROR: unknown command line flag 'fraction_of_gpu_memory_to_use'.
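One way to address this, sketched under the assumption that the build already exposes a WITH_GPU option (as PaddlePaddle's CMake does): pass the GPU memory flag only when building with GPU support.

```cmake
# Sketch: assemble the demo arguments in a list and append the
# GPU-only flag conditionally, so CPU-only binaries never see
# the unknown flag.
set(demo_args --data=${test_dir}/data.txt
              --modeldir=${test_dir}/model)
if(WITH_GPU)
  list(APPEND demo_args --fraction_of_gpu_memory_to_use=0.5)
endif()
```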
