
compile and install the shared library of fluid inference #7572

Merged
merged 15 commits into from
Jan 22, 2018

Conversation

@luotao1 (Contributor) commented Jan 16, 2018

related #7231

  • The structure of the installed shared library is as follows. It is built with the default cmake options plus -DWITH_PYTHON=OFF:
├── bin
│   └── paddle
├── include
│   └── paddle
│       ├── framework
│       ├── inference
│       ├── memory
│       └── platform
├── lib
│   └── libpaddle_fluid.so
├── opt
│   └── paddle
│       └── bin
└── third_party
    ├── eigen3
    │   ├── Eigen
    │   └── unsupported
    ├── gflags
    │   ├── include
    │   └── lib
    ├── glog
    │   ├── include
    │   └── lib
    └── protobuf
        ├── include
        └── lib
du -sh *
12K     bin
560K    include
37M     lib
98M     opt
12M     third_party
  • Add a WITH_FLUID option to avoid changing the current paddle.tgz when WITH_C_API=ON.
  • Examples will be updated later.
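For illustration, a downstream project could consume the installed tree above roughly like this. This is a sketch, not part of the PR: the PADDLE_ROOT variable and the example.cc target are hypothetical, while the include and library paths follow the installed directory structure shown above.

```cmake
cmake_minimum_required(VERSION 3.0)
project(fluid_inference_demo)

# Hypothetical variable pointing at the install tree shown above.
set(PADDLE_ROOT "/path/to/paddle_install" CACHE PATH "paddle install dir")

# Headers of paddle itself plus the bundled third-party dependencies.
include_directories(${PADDLE_ROOT}/include)
include_directories(${PADDLE_ROOT}/third_party/eigen3)
include_directories(${PADDLE_ROOT}/third_party/gflags/include)
include_directories(${PADDLE_ROOT}/third_party/glog/include)
include_directories(${PADDLE_ROOT}/third_party/protobuf/include)

add_executable(example example.cc)
# Link directly against the installed shared library.
target_link_libraries(example ${PADDLE_ROOT}/lib/libpaddle_fluid.so)
```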

@luotao1 luotao1 requested a review from Xreki January 16, 2018 11:51
@@ -107,6 +108,10 @@ if (WITH_C_API AND WITH_PYTHON)
"different Python interpreter from compiling.")
endif()

if (WITH_C_API)
Collaborator:

How about we rename WITH_C_API into PADDLE_BUILD_INFERENCE_LIB ?

Contributor (Author):

Since WITH_C_API has been used for a long time in the old PaddlePaddle and users are familiar with it, we may keep it. For fluid, we will add a WITH_FLUID option covering both training and inference (see #7578 (comment)).

@@ -84,3 +84,9 @@ cc_test(op_kernel_type_test SRCS op_kernel_type_test.cc DEPS place device_contex
cc_test(cow_ptr_tests SRCS details/cow_ptr_test.cc)
nv_test(data_device_transform_test SRCS data_device_transform_test.cu
DEPS operator op_registry init math_function)

if(NOT WITH_C_API AND WITH_FLUID)
Collaborator:

Do we have to call CMake's install function in these CMakeLists.txt files?

I am afraid that this would break the Bazel-style of the CMakeLists.txt files, which, was achieved by inventing the generic.cmake file.

Could we not install these header and library files, but leave them in the source directory and the build directory, but let users just build PaddlePaddle and include/link to these header files and library files?

Contributor (Author):

Do we have to call CMake's install function in these CMakeLists.txt files?

Yes, we need to install the *.h files for the inference package.

I am afraid that this would break the Bazel-style of the CMakeLists.txt files, which, was achieved by inventing the generic.cmake file.

The style of CMake's install function is indeed different from the Bazel style. However, Bazel doesn't have a similar install function, which makes deployment hard for users. For example, building TensorFlow as a standalone project requires users to manually copy the include files and libraries themselves.

Could we not install these header and library files, but leave them in the source directory and the build directory, but let users just build PaddlePaddle and include/link to these header files and library files?

I think that make install is necessary and will make deployment much easier.

  • Users can deploy without building PaddlePaddle. For example, as with the C-API, users can directly download the paddle.tgz from here.
  • For deployment on mobile, without make install, users would have to manually copy the corresponding header files and library files into their project, as when building TensorFlow as a standalone project.
  • We will keep working to minimize the library size and the set of header files; without an automatic make install, users may not get the updated inference package promptly.
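As a sketch of the build-and-install flow implied above (the -DWITH_PYTHON=OFF flag comes from the PR description; the install prefix and the out-of-source build directory are assumptions):

```shell
# Build with the default cmake options plus -DWITH_PYTHON=OFF,
# then install into a chosen prefix (path is an assumption).
mkdir -p build && cd build
cmake .. -DWITH_PYTHON=OFF -DCMAKE_INSTALL_PREFIX=/opt/paddle_fluid
make -j$(nproc)
make install
```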

@Xreki Xreki added the 预测 原名Inference,包含Capi预测问题等 label Jan 17, 2018
@@ -84,3 +84,9 @@ cc_test(op_kernel_type_test SRCS op_kernel_type_test.cc DEPS place device_contex
cc_test(cow_ptr_tests SRCS details/cow_ptr_test.cc)
nv_test(data_device_transform_test SRCS data_device_transform_test.cu
DEPS operator op_registry init math_function)

if(NOT WITH_C_API AND WITH_FLUID)
Contributor:

In a later PR, WITH_FLUID will be added to paddle/CMakeLists.txt, changing it to something like this:

  if(WITH_FLUID)
    if(NOT Boost_FOUND)
      message(FATAL_ERROR "...")
    endif()
    add_subdirectory(string)
    add_subdirectory(memory)
    add_subdirectory(platform)
    add_subdirectory(framework)
    add_subdirectory(operators)
    add_subdirectory(pybind)
    add_subdirectory(inference)
  endif()

Then the if statement in paddle/framework/CMakeLists.txt would no longer be needed.

Also, let's see later whether the set of header files to install can be trimmed down.

Contributor (Author):

Then the if statement in paddle/framework/CMakeLists.txt would no longer be needed

Yes. #6346 already considers adding a WITH_FLUID option to build a fluid-only version. But if it builds fluid only, shouldn't the default be OFF?

Let's see later whether the set of header files to install can be trimmed down

OK.

cc_library(paddle_fluid DEPS paddle_fluid_api ${FLUID_CORE_MODULES} ${GLOB_OP_LIB})

# Create shared library
add_library(paddle_fluid_shared SHARED inference.cc)
Contributor:

Can't the shared library be named libpaddle_fluid.so directly?

Contributor (Author):

Two considerations:

  1. It follows the naming of libpaddle_capi_shared.so.
  2. If it were named libpaddle_fluid.so, the cc_library(paddle_fluid ...) on line 8 would have to be renamed; otherwise the two libraries would have the same name.

Contributor:

One is a static library and the other is a shared library; can't they have the same name?

Contributor (Author) @luotao1, Jan 17, 2018:

For example, if written as follows:

add_library(paddle_fluid_shared SHARED inference.cc)
add_library(paddle_fluid_shared STATIC inference.cc)

cmake reports an error:

add_library cannot create target "paddle_fluid_shared" because another
  target with the same name already exists.  The existing target is a shared
  library created in source directory

However, perhaps we can build them first with different target names and then give them the same name in cmake.
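That rename approach can be sketched in CMake: give the two targets distinct names, then set the same OUTPUT_NAME on both so the produced files are libpaddle_fluid.a and libpaddle_fluid.so. This is only a sketch under the assumptions of the snippets above — inference.cc and the paddle_fluid_shared target come from this PR, while the paddle_fluid_static target name and the OUTPUT_NAME properties are the assumed fix, not code from the PR.

```cmake
# Distinct target names avoid the "target already exists" error.
add_library(paddle_fluid_static STATIC inference.cc)
add_library(paddle_fluid_shared SHARED inference.cc)

# Both targets emit files named libpaddle_fluid.* despite the
# different target names.
set_target_properties(paddle_fluid_static PROPERTIES OUTPUT_NAME paddle_fluid)
set_target_properties(paddle_fluid_shared PROPERTIES OUTPUT_NAME paddle_fluid)
```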

@Xreki Xreki added this to DOING in Inference Framework Jan 17, 2018
@luotao1 (Contributor, Author) commented Jan 17, 2018

A simple independent example project is updated in https://github.com/luotao1/fluid_inference_example.

@Xreki (Contributor) left a comment:

I can compile and successfully link the example project, but running it reports the following error:

$ ../run.sh 
./example: error while loading shared libraries: libwarpctc.so: cannot open shared object file: No such file or directory

I think the cause of this error is that with the current cc_library approach, libwarpctc.so is linked directly when the library is generated, even though the code loads it dynamically. This is actually a known issue and can be resolved in a later PR.
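Until that linking issue is fixed, a common workaround for this class of loader error is to point the dynamic loader at the directory containing libwarpctc.so before running; the placeholder path below is an assumption, not a path from the PR.

```shell
# Workaround sketch: tell the dynamic loader where libwarpctc.so lives.
# Replace the placeholder with the actual directory in your build tree.
export LD_LIBRARY_PATH=/path/to/dir/containing/libwarpctc:$LD_LIBRARY_PATH
./example
```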

CMakeLists.txt Outdated
@@ -55,6 +55,7 @@ option(WITH_COVERAGE "Compile PaddlePaddle with code coverage" OFF)
option(COVERALLS_UPLOAD "Package code coverage data to coveralls" OFF)
option(ON_TRAVIS "Exclude special unit test on Travis CI" OFF)
option(WITH_C_API "Compile PaddlePaddle with C-API(Prediction)" OFF)
option(WITH_FLUID "Compile PaddlePaddle fluid only" ON)
Contributor:

One thing I'm not sure about: when the WITH_FLUID option is set, does it compile only Fluid, rather than both the old Paddle and Fluid?

We may need to add a comment here, or change the description string, because this PR introduces WITH_FLUID without actually implementing the "Compile PaddlePaddle fluid only" behavior.

Contributor (Author):

DONE. Comments have been added and the description changed to "Compile PaddlePaddle fluid only (TODO)".

When WITH_FLUID is set, it compiles only Fluid, rather than both the old Paddle and Fluid

Yes. As for the default value of WITH_FLUID, we can decide whether it should be OFF or ON after the remaining functionality is finished.

With the current cc_library approach, libwarpctc.so is linked directly when the library is generated, even though the code loads it dynamically; this is a known issue and can be resolved in a later PR

OK, it will be fixed in the next PR.

@Xreki (Contributor) left a comment:

LGTM

@luotao1 luotao1 merged commit 1575c2c into PaddlePaddle:develop Jan 22, 2018
@luotao1 luotao1 deleted the fluid_infer branch January 22, 2018 15:27
@luotao1 luotao1 moved this from DOING to DONE in Inference Framework Jan 23, 2018
@luotao1 luotao1 restored the fluid_infer branch January 23, 2018 05:58
Labels
预测 原名Inference,包含Capi预测问题等
Projects
Inference Framework: Basic Usage (DONE)

3 participants