
[Do not merge] Profiling inference unit-test for recognize digits #8497

Conversation

@sidgoyal78 commented Feb 22, 2018

We use the profiler created by @qingqing01 to time the C++ inference code for the network corresponding to the recognize-digits example.

For the results, please see: https://github.com/sidgoyal78/paddle_notes/blob/master/benchmark/recoginze_digits.md

Do not Merge!

@sidgoyal78 changed the title [WIP] Profiling inference unit-test for recognize digits → [Do not merge] Profiling inference unit-test for recognize digits Feb 26, 2018
@Xreki added the 预测 label (formerly named "Inference"; covers C-API prediction issues, etc.) Feb 26, 2018
@kexinzhao kexinzhao added this to Basic Usage (DOING) in Inference Framework Feb 27, 2018
@sidgoyal78 (author) commented

@kexinzhao, @Xreki: Could you take a look at my usage of the profiling tool to make sure I did things properly?

@kexinzhao commented

@sidgoyal78 The usage of profiling tools generally looks good to me.

@Xreki commented Mar 5, 2018

Sorry for the late review. I have several suggestions:

  • There is no need to add timing code for each operator; profiling code already exists in OperatorWithKernel::RunImpl:

void OperatorWithKernel::RunImpl(const Scope& scope,
                                 const platform::Place& place) const {
  RuntimeInferShapeContext infer_shape_ctx(*this, scope);
  this->InferShape(&infer_shape_ctx);
  platform::DeviceContextPool& pool = platform::DeviceContextPool::Instance();
  auto dev_ctx = pool.Get(place);
  // profile
  platform::RecordEvent record_event(Type(), dev_ctx);
  // ... (rest of RunImpl elided)

  • The most important measurement is the running time, i.e., the cost of Executor.Run(). There is no need to include the time spent initializing the inference program, because users only initialize it once and then run as many times as they need.

  • It is suggested to repeat the run many times when timing, especially for small models.

For one version of how to use the profiling tools, see #8748. A rough sketch combining the points above follows.
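For concreteness, this sketch assumes the fluid profiler API of that era (paddle/fluid/platform/profiler.h) and elides the one-time setup; executor, inference_program, and scope stand in for whatever the test initializes, so treat it as illustrative rather than the actual test code:

#include "paddle/fluid/framework/executor.h"
#include "paddle/fluid/platform/profiler.h"

// One-time initialization (loading the inference program, creating the
// scope, setting feed/fetch targets) stays outside the timed region.
// ... setup elided ...

constexpr int kRepeat = 100;  // repeat so that small, fast models are measurable

paddle::platform::EnableProfiler(paddle::platform::ProfilerState::kCPU);
for (int i = 0; i < kRepeat; ++i) {
  // Only Run() is inside the profiled region; per-operator events are
  // already recorded by the RecordEvent in RunImpl shown above.
  executor.Run(*inference_program, scope, 0 /*block_id*/);
}
paddle::platform::DisableProfiler(paddle::platform::EventSortingKey::kTotal,
                                  "/tmp/profile");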

@sidgoyal78 (author) commented

Thanks for your reply, @Xreki.

  • I agree with the first two points.
  • As for repeated runs, the results/tables I presented are averages over 10 runs.

@Xreki commented Mar 7, 2018

"… the results/tables I presented are averages over 10 runs."

  • Do you mean manually running the executable multiple times? That is indeed needed.
  • For some small models which run very fast, timing a single call to Run may not be precise; we also need to add a for-loop that calls Run many times and time the total, as in the sketch below.
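A minimal sketch of that for-loop timing with wall-clock time (std::chrono); again, executor, inference_program, and scope are placeholders for whatever the test already sets up:

#include <chrono>
#include <iostream>

// Assume executor, inference_program, and scope are already initialized.
const int kRepeat = 1000;  // large enough that the total dwarfs timer resolution
auto start = std::chrono::steady_clock::now();
for (int i = 0; i < kRepeat; ++i) {
  executor.Run(*inference_program, scope, 0 /*block_id*/);
}
auto end = std::chrono::steady_clock::now();
double total_ms = std::chrono::duration<double, std::milli>(end - start).count();
std::cout << "Average Run() latency: " << total_ms / kRepeat << " ms\n";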

@sidgoyal78 (author) commented

I run the test case 10 times programmatically; see this line: https://github.com/PaddlePaddle/Paddle/pull/8497/files#diff-cd4ed9c3186a34c7d89b06ba5d5c932dR155

So the TestInference function is called 10 times (I ignore the result of the first run and average only the remaining 9 runs).
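For illustration, the discard-the-first-run averaging could look like the hypothetical helper below (not the actual test code):

#include <numeric>
#include <vector>

// Average per-run latencies, discarding the first (warm-up) run.
// With 10 recorded runs this averages the remaining 9.
// Expects at least two entries.
double AverageExcludingWarmup(const std::vector<double>& latencies_ms) {
  double sum = std::accumulate(latencies_ms.begin() + 1, latencies_ms.end(), 0.0);
  return sum / static_cast<double>(latencies_ms.size() - 1);
}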

@Xreki commented Mar 8, 2018

"So the TestInference function is called 10 times (I ignore the result of the first run and average only the remaining 9 runs)."

Sorry for not noticing that. So I think we mean the same thing, and it is indeed better to ignore the result of the first run.

@sidgoyal78 closed this Jun 8, 2018