
Multi-Thread computing on mobile end #4678

Closed · hedaoyuan opened this issue Oct 10, 2017 · 1 comment

hedaoyuan (Contributor) commented Oct 10, 2017

When single-thread computing on a mobile device cannot meet the performance requirements of model inference, the natural expectation is to accelerate it with multiple threads. Most mobile phones today are multi-core systems, but their hardware architecture differs from that of a typical multi-core server. So, in practice, the multi-thread acceleration methods used on servers do not achieve the same speedup on mobile. The main factors that limit multi-thread acceleration on mobile are as follows.

  1. The big.LITTLE architecture. Because the computational power of the big and little cores differs, evenly distributing the work across all cores lets the little cores drag down the overall performance. This was encountered in earlier experiments: https://github.com/PaddlePaddle/Paddle/wiki/2017-07-19#hedaoyuan. A sketch of one common mitigation, binding worker threads to the big cores, is shown after this list.
  2. The interactive governor. On mobile, the CPU governor is usually set to interactive rather than performance. When a CPU core is woken up for computation, it starts at a low frequency and takes some time to ramp up to a high frequency. As a result, in multi-thread computing on mobile, the newly woken worker threads perform worse than the main thread.
  3. Power limits. Mobile devices generally have a power budget, and running multiple threads may exceed it; the CPU frequency is then throttled, which hurts computing performance.
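
A common mitigation for point 1 is to bind the worker threads to the big cores so that no thread lands on a little core. The following is a minimal sketch only, not code from this issue or from Paddle: it assumes a Linux/Android device, and the big-core IDs in `kBigCores` are hypothetical; on a real device they would have to be probed at runtime, e.g. by comparing `cpuinfo_max_freq` across cores.

```cpp
// Minimal sketch (not Paddle code): restrict worker threads to the big cores
// of a big.LITTLE SoC so that evenly partitioned work is not dragged down by
// the little cores.
#ifndef _GNU_SOURCE
#define _GNU_SOURCE  // needed for CPU_SET / sched_setaffinity on glibc
#endif
#include <sched.h>

#include <thread>
#include <vector>

// Hypothetical big-core IDs; on many SoCs the big cores are the
// higher-numbered ones, but this is device-specific and should be probed.
static const std::vector<int> kBigCores = {4, 5, 6, 7};

// Pin the calling thread to the big cores only.
void BindCurrentThreadToBigCores() {
  cpu_set_t mask;
  CPU_ZERO(&mask);
  for (int core : kBigCores) CPU_SET(core, &mask);
  sched_setaffinity(0, sizeof(mask), &mask);  // pid 0 == calling thread
}

void Worker(int tid) {
  BindCurrentThreadToBigCores();
  // ... run this thread's partition of the inference workload ...
  (void)tid;
}

int main() {
  // Spawn one worker per big core.
  std::vector<std::thread> workers;
  for (int i = 0; i < static_cast<int>(kBigCores.size()); ++i) {
    workers.emplace_back(Worker, i);
  }
  for (auto& t : workers) t.join();
  return 0;
}
```

Note that this only addresses point 1; the interactive governor and the power budget (points 2 and 3) still limit how much speedup additional threads can provide.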
hedaoyuan self-assigned this Oct 10, 2017
hedaoyuan changed the title Multi-Threads computing on mobile end Multi-Thread computing on mobile end Oct 10, 2017
hedaoyuan added this to Multi-Threads & Mobile GPU in Embedded and Mobile Deployment Oct 10, 2017
Xreki (Contributor) commented Apr 26, 2018

Closing because there is a new version, Fluid. We won't support multi-threading based on Paddle v2.

Xreki closed this as completed Apr 26, 2018