
Reading dangerously large protocol message. #934

Closed
rahulsharma11 opened this issue Mar 31, 2020 · 4 comments

Comments

@rahulsharma11

Hi,
I tried this project on my Jetson Nano.
Configuration:
RAM: 4 GB
CUDA: 10.0
protobuf: 3.0.0

While running "tools/demo.py" with the default model, I am getting the following error:
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:604] Reading dangerously large protocol message. If the message turns out to be larger than 2147483647 bytes, parsing will be halted for security reasons. To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:81] The total number of bytes read was 548317115
I0331 09:47:05.258278 20249 net.cpp:816] Ignoring source layer data
I0331 09:47:05.440400 20249 net.cpp:816] Ignoring source layer drop6
I0331 09:47:05.465526 20249 net.cpp:816] Ignoring source layer drop7
I0331 09:47:05.465593 20249 net.cpp:816] Ignoring source layer fc7_drop7_0_split
I0331 09:47:05.466341 20249 net.cpp:816] Ignoring source layer loss_cls
I0331 09:47:05.466392 20249 net.cpp:816] Ignoring source layer loss_bbox
I0331 09:47:05.470049 20249 net.cpp:816] Ignoring source layer silence_rpn_cls_score
I0331 09:47:05.470110 20249 net.cpp:816] Ignoring source layer silence_rpn_bbox_pred

Loaded network /home/nano2/rahul/orientation-aware-firearm-detection/py-faster-rcnn/data/faster_rcnn_models/VGG16_faster_rcnn_final.caffemodel
Killed

Any suggestions?
Thanks
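
For context, the two libprotobuf lines above are only a warning: the model read was 548317115 bytes, well under the 2147483647-byte hard limit they mention. A minimal sketch of that check, assuming the model path from the log (illustrative, relative to the repository root):

import os

# Path copied from the log above; adjust to your checkout.
caffemodel = 'data/faster_rcnn_models/VGG16_faster_rcnn_final.caffemodel'

size = os.path.getsize(caffemodel)   # ~548317115 bytes per the warning above
limit = 2147483647                   # hard limit quoted by libprotobuf
print('%d bytes = %.1f%% of the limit' % (size, 100.0 * size / limit))

The final "Killed" line is more likely the Linux out-of-memory killer ending the process, which is plausible on a 4 GB Jetson Nano once the VGG16 weights and the CUDA context are loaded.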

@rahulsharma11
Author

Hi, the issue is resolved.

@Blueberry0317

Excuse me, I am running into a similar problem. May I know the solution?

@rahulsharma11
Author

Hi,
In my case, I was providing the wrong prototxt file.
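
For anyone who hits the same thing: tools/demo.py builds the network from a test/deploy prototxt plus the .caffemodel, and the two must describe the same architecture. A minimal sketch of that pairing with pycaffe, using illustrative paths from the py-faster-rcnn layout (adjust to your checkout):

import caffe

# Illustrative paths; the test prototxt must describe the same network
# that the .caffemodel was trained for.
prototxt   = 'models/pascal_voc/VGG16/faster_rcnn_alt_opt/faster_rcnn_test.pt'
caffemodel = 'data/faster_rcnn_models/VGG16_faster_rcnn_final.caffemodel'

caffe.set_mode_gpu()
caffe.set_device(0)
net = caffe.Net(prototxt, caffemodel, caffe.TEST)

# Quick sanity check that the expected blobs came up.
print(list(net.blobs.keys())[:5])

Note that the "Ignoring source layer ..." lines in the log are typically normal when training-time weights are loaded into a test-time prototxt (data, dropout, and loss layers do not exist at test time), so on their own they do not indicate the wrong prototxt.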

@Blueberry0317

Hi,
In my case, I was providing the wrong prototxt file.

Thank you for your reply. It may be that I am hitting an 'out of memory' problem; I have changed my device and am now using TensorFlow to train my dataset. Thanks again.
