
fluid.io.load_inference_model raises an error when loading multiple models: Error Occurs, info:enforce version == 0U failed, 1015534361 != 0 Only version 0 is supported #1164

Open
zyfo2 opened this issue Aug 17, 2018 · 2 comments


zyfo2 commented Aug 17, 2018

We have multiple models and need to run generic inference, so we need to load more than one model.
Loading any single model on its own works fine.
But if we load one model and then load another, and one of the two models uses a lod_tensor, an error is raised.
The code is as follows:
    place = fluid.CPUPlace()
    exe = fluid.Executor(place)
    [inference_program, _, fetch_targets] = (
        fluid.io.load_inference_model(dirname=model_path[0], executor=exe,
                                      model_filename=model_path[1],
                                      params_filename=params_path[1]))

The error is as follows:
Error Occurs, info:enforce version == 0U failed, 1015534361 != 0
Only version 0 is supported at [/paddle/paddle/fluid/framework/lod_tensor.cc:276]
PaddlePaddle Call Stacks:
0 0x7f3e15df5376p paddle::platform::EnforceNotMet::EnforceNotMet(std::exception_ptr::exception_ptr, char const*, int) + 486
1 0x7f3e16652872p paddle::framework::DeserializeFromStream(std::istream&, paddle::framework::LoDTensor*, paddle::platform::DeviceContext const&) + 1330
2 0x7f3e1645fe7ap paddle::operators::LoadCombineOp::RunImpl(paddle::framework::Scope const&, boost::variant<paddle::platform::CUDAPlace, paddle::platform::CPUPlace, paddle::platform::CUDAPinnedPlace, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_> const&) const + 778
3 0x7f3e1660e450p paddle::framework::OperatorBase::Run(paddle::framework::Scope const&, boost::variant<paddle::platform::CUDAPlace, paddle::platform::CPUPlace, paddle::platform::CUDAPinnedPlace, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_> const&) + 208
4 0x7f3e15e88cdfp paddle::framework::Executor::RunPreparedContext(paddle::framework::ExecutorPrepareContext*, paddle::framework::Scope*, bool, bool, bool) + 255
5 0x7f3e15e89d30p paddle::framework::Executor::Run(paddle::framework::ProgramDesc const&, paddle::framework::Scope*, int, bool, bool) + 128
6 0x7f3e15e0bfabp void pybind11::cpp_function::initialize<pybind11::cpp_function::initialize<void, paddle::framework::Executor, paddle::framework::ProgramDesc const&, paddle::framework::Scope*, int, bool, bool, pybind11::name, pybind11::is_method, pybind11::sibling>(void (paddle::framework::Executor::)(paddle::framework::ProgramDesc const&, paddle::framework::Scope, int, bool, bool), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&)::{lambda(paddle::framework::Executor*, paddle::framework::ProgramDesc const&, paddle::framework::Scope*, int, bool, bool)#1}, void, paddle::framework::Executor*, paddle::framework::ProgramDesc const&, paddle::framework::Scope*, int, bool, bool, pybind11::name, pybind11::is_method, pybind11::sibling>(pybind11::cpp_function::initialize<void, paddle::framework::Executor, paddle::framework::ProgramDesc const&, paddle::framework::Scope*, int, bool, bool, pybind11::name, pybind11::is_method, pybind11::sibling>(void (paddle::framework::Executor::)(paddle::framework::ProgramDesc const&, paddle::framework::Scope, int, bool, bool), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&)::{lambda(paddle::framework::Executor*, paddle::framework::ProgramDesc const&, paddle::framework::Scope*, int, bool, bool)#1}&&, void ()(paddle::framework::Executor, paddle::framework::ProgramDesc const&, paddle::framework::Scope*, int, bool, bool), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&)::{lambda(pybind11::detail::function_call&)#3}::_FUN(pybind11::detail::function_call) + 555
7 0x7f3e15e0446cp pybind11::cpp_function::dispatcher(_object*, _object*, _object*) + 2540
8 0x7f3efe641fc7p PyEval_EvalFrameEx + 28695
9 0x7f3efe6444e9p PyEval_EvalCodeEx + 2025
10 0x7f3efe6419b8p PyEval_EvalFrameEx + 27144
11 0x7f3efe6444e9p PyEval_EvalCodeEx + 2025
12 0x7f3efe6419b8p PyEval_EvalFrameEx + 27144
13 0x7f3efe6444e9p PyEval_EvalCodeEx + 2025
14 0x7f3efe6419b8p PyEval_EvalFrameEx + 27144
15 0x7f3efe6444e9p PyEval_EvalCodeEx + 2025
16 0x7f3efe6419b8p PyEval_EvalFrameEx + 27144
17 0x7f3efe6444e9p PyEval_EvalCodeEx + 2025
18 0x7f3efe6419b8p PyEval_EvalFrameEx + 27144
19 0x7f3efe642f9ep PyEval_EvalFrameEx + 32750
20 0x7f3efe642f9ep PyEval_EvalFrameEx + 32750
21 0x7f3efe642f9ep PyEval_EvalFrameEx + 32750
22 0x7f3efe6444e9p PyEval_EvalCodeEx + 2025
23 0x7f3efe5cd377p
24 0x7f3efe5a87a3p PyObject_Call + 67
25 0x7f3efe63d4bep PyEval_EvalFrameEx + 9486
26 0x7f3efe642f9ep PyEval_EvalFrameEx + 32750
27 0x7f3efe642f9ep PyEval_EvalFrameEx + 32750
28 0x7f3efe6444e9p PyEval_EvalCodeEx + 2025
29 0x7f3efe5cd28ap
30 0x7f3efe5a87a3p PyObject_Call + 67
31 0x7f3efe5b763dp
32 0x7f3efe5a87a3p PyObject_Call + 67
33 0x7f3efe63aa58p PyEval_CallObjectWithKeywords + 72
34 0x7f3efe673f36p
35 0x7f3efe34be25p
36 0x7f3efd96cbadp clone + 109
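For reference, the failing pattern described above amounts to calling load_inference_model twice against the same executor, i.e. into the same global scope. A minimal sketch, assuming two placeholder model directories (filenames omitted for brevity):

    import paddle.fluid as fluid

    place = fluid.CPUPlace()
    exe = fluid.Executor(place)

    # Both calls below load into the executor's default global scope.
    # Per the report above, either load works on its own, but the second
    # load fails with the version-check error when one model uses a lod_tensor.
    prog_a, feed_a, fetch_a = fluid.io.load_inference_model(dirname='./model_a', executor=exe)
    prog_b, feed_b, fetch_b = fluid.io.load_inference_model(dirname='./model_b', executor=exe)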


zyfo2 commented Aug 17, 2018

Solved it myself. For those who need it: use a new scope for every model.

        scope = fluid.Scope()
        with fluid.scope_guard(scope):
            place = fluid.CPUPlace()
            exe = fluid.Executor(place)
            [inference_program, _, fetch_targets] = (
                fluid.io.load_inference_model(dirname=model_path[0], executor=exe,
                                          model_filename=model_path[1],
                                          params_filename=params_path[1]))

And for prediction:

        with fluid.scope_guard(scope):
            results = exe.run(inference_program,
                              feed=inputs,
                              fetch_list=fetch_targets)
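Putting the two snippets together, a minimal sketch of this workaround for two models, each loaded and run inside its own scope (the model directories and the input shape below are placeholders):

    import numpy as np
    import paddle.fluid as fluid

    place = fluid.CPUPlace()
    exe = fluid.Executor(place)

    models = []
    for model_dir in ['./model_a', './model_b']:   # hypothetical directories
        scope = fluid.Scope()                      # one fresh scope per model
        with fluid.scope_guard(scope):
            program, feed_names, fetch_targets = fluid.io.load_inference_model(
                dirname=model_dir, executor=exe)
        models.append((scope, program, feed_names, fetch_targets))

    # At prediction time, re-enter the scope the model was loaded into.
    fake_input = np.random.random((1, 3, 224, 224)).astype('float32')   # placeholder shape
    for scope, program, feed_names, fetch_targets in models:
        with fluid.scope_guard(scope):
            results = exe.run(program,
                              feed={feed_names[0]: fake_input},
                              fetch_list=fetch_targets)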

@1138886114

When I load two different models in a service, I get this error:

`PreconditionNotMetError: Tensor not initialized yet when Tensor::type() is called.

[Hint: holder_ should not be null.] (at /paddle/paddle/fluid/framework/tensor.h:202)

[operator < conv2d > error]`

scope = fluid.Scope()
with fluid.scope_guard(scope):
    exe = fluid.Executor(fluid.CPUPlace())
    [inference_program, feed_target_names, fetch_targets] = (
        fluid.io.load_inference_model(dirname=path,
                                      executor=exe,
                                      model_filename='res152_model',
                                      params_filename='res152_params'))


with fluid.scope_guard(scope):
    res = exe.run(inference_program, feed={feed_target_names[0]: im_pre}, fetch_list=fetch_targets)


scope_label = fluid.Scope()
with fluid.scope_guard(scope_label):
    exe_label = fluid.Executor(fluid.CPUPlace())
    [inference_label, feed_target_label, fetch_label] = (
        fluid.io.load_inference_model(dirname=path,
                                      executor=exe_label,
                                      model_filename='res152_model2',
                                      params_filename='res152_params2'))

with fluid.scope_guard(scope_label):
    res_classlabel = exe_label.run(inference_label,
                                   feed={feed_target_label[0]: im_pre},
                                   fetch_list=fetch_label)
