
Compiler error when the TENSORRT library is not present #11050

Closed
luotao1 opened this issue May 30, 2018 · 0 comments · Fixed by #11051
Labels
预测 (Prediction; originally named Inference — covers C-API inference issues, etc.)

Comments

luotao1 (Contributor) commented May 30, 2018

../../libpaddle_fluid.a(tensorrt_engine_op.cc.o): In function `paddle::operators::TensorRTEngineKernel<paddle::platform::CPUDeviceContext, long>::Prepare(paddle::framework::ExecutionContext const&) const':
tensorrt_engine_op.cc:(.text._ZNK6paddle9operators20TensorRTEngineKernelINS_8platform16CPUDeviceContextElE7PrepareERKNS_9framework16ExecutionContextE[_ZNK6paddle9operators20TensorRTEngineKernelINS_8platform16CPUDeviceContextElE7PrepareERKNS_9framework16ExecutionContextE]+0xf2): undefined reference to `vtable for paddle::inference::tensorrt::TensorRTEngine'
tensorrt_engine_op.cc:(.text._ZNK6paddle9operators20TensorRTEngineKernelINS_8platform16CPUDeviceContextElE7PrepareERKNS_9framework16ExecutionContextE[_ZNK6paddle9operators20TensorRTEngineKernelINS_8platform16CPUDeviceContextElE7PrepareERKNS_9framework16ExecutionContextE]+0x2be): undefined reference to `paddle::inference::tensorrt::TensorRTEngine::FreezeNetwork()'
../../libpaddle_fluid.a(tensorrt_engine_op.cc.o): In function `paddle::operators::TensorRTEngineKernel<paddle::platform::CPUDeviceContext, long>::Compute(paddle::framework::ExecutionContext const&) const':
tensorrt_engine_op.cc:(.text._ZNK6paddle9operators20TensorRTEngineKernelINS_8platform16CPUDeviceContextElE7ComputeERKNS_9framework16ExecutionContextE[_ZNK6paddle9operators20TensorRTEngineKernelINS_8platform16CPUDeviceContextElE7ComputeERKNS_9framework16ExecutionContextE]+0x181): undefined reference to `paddle::inference::tensorrt::TensorRTEngine::SetInputFromCPU(std::string const&, void const*, unsigned long)'
tensorrt_engine_op.cc:(.text._ZNK6paddle9operators20TensorRTEngineKernelINS_8platform16CPUDeviceContextElE7ComputeERKNS_9framework16ExecutionContextE[_ZNK6paddle9operators20TensorRTEngineKernelINS_8platform16CPUDeviceContextElE7ComputeERKNS_9framework16ExecutionContextE]+0x1fd): undefined reference to `paddle::inference::tensorrt::TensorRTEngine::SetInputFromGPU(std::string const&, void const*, unsigned long)'
tensorrt_engine_op.cc:(.text._ZNK6paddle9operators20TensorRTEngineKernelINS_8platform16CPUDeviceContextElE7ComputeERKNS_9framework16ExecutionContextE[_ZNK6paddle9operators20TensorRTEngineKernelINS_8platform16CPUDeviceContextElE7ComputeERKNS_9framework16ExecutionContextE]+0x2c0): undefined reference to `paddle::inference::tensorrt::TensorRTEngine::GetITensor(std::string const&)'
tensorrt_engine_op.cc:(.text._ZNK6paddle9operators20TensorRTEngineKernelINS_8platform16CPUDeviceContextElE7ComputeERKNS_9framework16ExecutionContextE[_ZNK6paddle9operators20TensorRTEngineKernelINS_8platform16CPUDeviceContextElE7ComputeERKNS_9framework16ExecutionContextE]+0x430): undefined reference to `paddle::inference::tensorrt::TensorRTEngine::GetOutputInCPU(std::string const&, void*, unsigned long)'
tensorrt_engine_op.cc:(.text._ZNK6paddle9operators20TensorRTEngineKernelINS_8platform16CPUDeviceContextElE7ComputeERKNS_9framework16ExecutionContextE[_ZNK6paddle9operators20TensorRTEngineKernelINS_8platform16CPUDeviceContextElE7ComputeERKNS_9framework16ExecutionContextE]+0x4d9): undefined reference to `paddle::inference::tensorrt::TensorRTEngine::GetOutputInGPU(std::string const&, void*, unsigned long)'
../../libpaddle_fluid.a(tensorrt_engine_op.cc.o): In function `paddle::operators::TensorRTEngineKernel<paddle::platform::CPUDeviceContext, int>::Prepare(paddle::framework::ExecutionContext const&) const':
tensorrt_engine_op.cc:(.text._ZNK6paddle9operators20TensorRTEngineKernelINS_8platform16CPUDeviceContextEiE7PrepareERKNS_9framework16ExecutionContextE[_ZNK6paddle9operators20TensorRTEngineKernelINS_8platform16CPUDeviceContextEiE7PrepareERKNS_9framework16ExecutionContextE]+0xf2): undefined reference to `vtable for paddle::inference::tensorrt::TensorRTEngine'
tensorrt_engine_op.cc:(.text._ZNK6paddle9operators20TensorRTEngineKernelINS_8platform16CPUDeviceContextEiE7PrepareERKNS_9framework16ExecutionContextE[_ZNK6paddle9operators20TensorRTEngineKernelINS_8platform16CPUDeviceContextEiE7PrepareERKNS_9framework16ExecutionContextE]+0x2be): undefined reference to `paddle::inference::tensorrt::TensorRTEngine::FreezeNetwork()'
../../libpaddle_fluid.a(tensorrt_engine_op.cc.o): In function `paddle::operators::TensorRTEngineKernel<paddle::platform::CPUDeviceContext, int>::Compute(paddle::framework::ExecutionContext const&) const':
...