Consider adding a PADDLE_INFERENCE option and a PADDLE_MOBILE macro #4221

Closed
hedaoyuan opened this issue Sep 20, 2017 · 0 comments

Comments

@hedaoyuan
Contributor

hedaoyuan commented Sep 20, 2017

At present, when we do model inference in a mobile environment, we hope Paddle can be as small as possible. So, when compiling Paddle for mobile environments (Android, iOS), we need to be able to strip out Paddle modules that are not needed, thereby reducing the size of the inference program.

Based on the previous survey in #1845, we found several modules (such as libpaddle_pserver.a, libpaddle_trainer_lib.a, libpaddle_api.a, and so on) that are not related to inference but still take up space in the final inference program. So, consider adding a PADDLE_INFERENCE switch to strip these modules at compile time. At present, some of the CMakeLists.txt files already use WITH_C_API for a similar purpose. The remaining work is to replace WITH_C_API with PADDLE_INFERENCE and to refine the CMakeLists.txt files so that these modules can be stripped.
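For illustration, a minimal sketch of how the two switches could fit together is shown below. Only PADDLE_INFERENCE and PADDLE_MOBILE come from this proposal; the other variable names (e.g. WITH_PSERVER, WITH_TRAINER, WITH_SWIG_API) are placeholders and do not necessarily match the real CMakeLists.txt:

```cmake
# Sketch only: a top-level option that strips non-inference modules,
# plus a PADDLE_MOBILE macro defined for Android/iOS builds.
option(PADDLE_INFERENCE "Build only the modules required for inference" OFF)

if(PADDLE_INFERENCE)
  # Hypothetical sub-options guarding modules such as libpaddle_pserver.a,
  # libpaddle_trainer_lib.a and libpaddle_api.a.
  set(WITH_PSERVER OFF)
  set(WITH_TRAINER OFF)
  set(WITH_SWIG_API OFF)
endif()

if(ANDROID OR IOS)
  # Let C++ sources strip mobile-irrelevant code paths with
  # #ifndef PADDLE_MOBILE ... #endif.
  add_definitions(-DPADDLE_MOBILE)
endif()
```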

hedaoyuan self-assigned this Sep 20, 2017
hedaoyuan added this to Build System & Build Optimize in Embedded and Mobile Deployment Oct 10, 2017
Xreki closed this as completed Oct 23, 2017
Labels: None yet
Projects: Embedded and Mobile Deployment (Build System & Build Optimize)
Development: No branches or pull requests
2 participants