
fluid vgg16: excessive GPU memory usage on the flowers dataset (latest image) #61

Open
leanna62 opened this issue Jan 16, 2018 · 1 comment

@leanna62
Contributor

Image version: dzhwinter/benchmark:latest
Command: python vgg16.py --device GPU --data_set flowers --batch_size 64 (latest code already pulled)
GPU info: single card with 11439 MiB of device memory
Error message below; GPU memory usage still looks too high.
Traceback (most recent call last):
File "vgg16.py", line 182, in
main()
File "vgg16.py", line 156, in main
fetch_list=[avg_cost] + accuracy.metrics)
File "/usr/local/lib/python2.7/dist-packages/paddle/v2/fluid/executor.py", line 164, in run
self.executor.run(program.desc, scope, 0, True, True)
paddle.v2.fluid.core.EnforceNotMet: enforce allocating <= available failed, 10484135494 > 1370095360
at [/paddle/Paddle/paddle/platform/gpu_info.cc:89]
PaddlePaddle Call Stacks:
0 0x7f7a41602c28p paddle::platform::GpuMaxChunkSize() + 5080
1 0x7f7a40b7fdfcp void* paddle::memory::Allocpaddle::platform::CUDAPlace(paddle::platform::CUDAPlace, unsigned long) + 476
2 0x7f7a40b0240ep paddle::framework::Tensor::mutable_data(boost::variant<paddle::platform::CUDAPlace, paddle::platform::CPUPlace, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_>, std::type_index) + 446
3 0x7f7a40b9bffap float* paddle::framework::Tensor::mutable_data(boost::variant<paddle::platform::CUDAPlace, paddle::platform::CPUPlace, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_>) + 90
4 0x7f7a40e9e549p paddle::operators::GPUDropoutKernel<paddle::platform::CUDADeviceContext, float, float>::Compute(paddle::framework::ExecutionContext const&) const + 279
5 0x7f7a41573cf4p paddle::framework::OperatorWithKernel::Run(paddle::framework::Scope const&, boost::variant<paddle::platform::CUDAPlace, paddle::platform::CPUPlace, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_> const&) const + 2084
6 0x7f7a40b86747p paddle::framework::Executor::Run(paddle::framework::ProgramDesc const&, paddle::framework::Scope*, int, bool, bool) + 983
7 0x7f7a40aec113p void pybind11::cpp_function::initialize<pybind11::cpp_function::initialize<void, paddle::framework::Executor, paddle::framework::ProgramDesc const&, paddle::framework::Scope*, int, bool, bool, pybind11::name, pybind11::is_method, pybind11::sibling>(void (paddle::framework::Executor::)(paddle::framework::ProgramDesc const&, paddle::framework::Scope, int, bool, bool), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&)::{lambda(paddle::framework::Executor*, paddle::framework::ProgramDesc const&, paddle::framework::Scope*, int, bool, bool)#1}, void, paddle::framework::Executor*, paddle::framework::ProgramDesc const&, paddle::framework::Scope*, int, bool, bool, pybind11::name, pybind11::is_method, pybind11::sibling>(pybind11::cpp_function::initialize<void, paddle::framework::Executor, paddle::framework::ProgramDesc const&, paddle::framework::Scope*, int, bool, bool, pybind11::name, pybind11::is_method, pybind11::sibling>(void (paddle::framework::Executor::)(paddle::framework::ProgramDesc const&, paddle::framework::Scope, int, bool, bool), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&)::{lambda(paddle::framework::Executor*, paddle::framework::ProgramDesc const&, paddle::framework::Scope*, int, bool, bool)#1}&&, void ()(paddle::framework::Executor, paddle::framework::ProgramDesc const&, paddle::framework::Scope*, int, bool, bool), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&)::{lambda(pybind11::detail::function_call&)#3}::_FUN(pybind11::detail::function_call) + 579
8 0x7f7a40ae9d64p pybind11::cpp_function::dispatcher(_object*, _object*, _object*) + 1236
9 0x4cad00p PyEval_EvalFrameEx + 28048
10 0x4c2705p PyEval_EvalCodeEx + 597
11 0x4ca088p PyEval_EvalFrameEx + 24856
12 0x4c2705p PyEval_EvalCodeEx + 597
13 0x4ca7dfp PyEval_EvalFrameEx + 26735
14 0x4c2705p PyEval_EvalCodeEx + 597
15 0x4c24a9p PyEval_EvalCode + 25
16 0x4f19efp
17 0x4ec372p PyRun_FileExFlags + 130
18 0x4eaaf1p PyRun_SimpleFileExFlags + 401
19 0x49e208p Py_Main + 1736
20 0x7f7a5ad33830p __libc_start_main + 240
21 0x49da59p _start + 41
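For scale, the two byte counts in the enforce message convert to binary gigabytes as follows (plain arithmetic on the numbers reported above, nothing Paddle-specific):

```python
# Interpret the enforce message: the allocator requested 10484135494 bytes
# while only 1370095360 bytes of device memory were still available.
GIB = 1024.0 ** 3  # bytes per GiB

requested = 10484135494 / GIB
available = 1370095360 / GIB

print("requested %.2f GiB, available %.2f GiB" % (requested, available))
```

So the run asked for roughly 9.76 GiB in one chunk on a card with 11439 MiB total, of which only about 1.28 GiB remained free at that point.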

@leanna62 leanna62 changed the title from "fluid vgg16 flowers dataset" to "fluid vgg16: excessive GPU memory usage on the flowers dataset (latest image)" Jan 16, 2018
@dzhwinter
Owner

This problem is fixed in PaddlePaddle/Paddle#7443.
Please try again after it is merged.
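As an interim workaround sketch (not suggested in this thread, and assuming fluid's `FLAGS_fraction_of_gpu_memory_to_use` environment flag behaves as documented), one could cap the allocator's up-front memory grab and shrink the batch:

```python
import os

# Hedged workaround sketch while waiting for PaddlePaddle/Paddle#7443:
# FLAGS_fraction_of_gpu_memory_to_use caps the share of device memory the
# fluid allocator pre-reserves; it must be set before paddle is imported.
os.environ["FLAGS_fraction_of_gpu_memory_to_use"] = "0.5"

# A smaller batch also shrinks activation memory roughly linearly, e.g.:
#   python vgg16.py --device GPU --data_set flowers --batch_size 16
print(os.environ["FLAGS_fraction_of_gpu_memory_to_use"])
```

Whether 0.5 is enough headroom here is a guess; the right value depends on what else occupies the card.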
