
Process killed in onnx_to_tensorrt.py Demo#5 #344

Closed
pNAIA opened this issue Feb 12, 2021 · 7 comments
Comments

pNAIA commented Feb 12, 2021

Demo #5 Step #5
$ python3 onnx_to_tensorrt.py -m yolov4-416
.......
[TensorRT] VERBOSE: Graph construction and optimization completed in 1.30692 seconds.
Killed
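
The "Killed" message almost always means the kernel's out-of-memory (OOM) killer terminated the process, not TensorRT itself. You can confirm this in the kernel log right after it happens (the exact wording varies by kernel version, and you may need sudo for dmesg):

dmesg | grep -i -E "out of memory|killed process"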

Fix this by ensuring the swap file is large enough. Please follow the steps below and consider adding them to the list of steps, @jkjung-avt. Apologies if you have already mentioned this in your exhaustive series of steps.

Check the current swap size:

free -m
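
For a per-device breakdown (stock JetPack configures several zram swap devices), the standard util-linux tools also work:

swapon --show
zramctl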

Disable ZRAM:

sudo systemctl disable nvzramconfig
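
To verify it stuck (nvzramconfig is the service name NVIDIA uses on JetPack; adjust if yours differs):

systemctl is-enabled nvzramconfig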

Create a 4 GB swap file:

sudo fallocate -l 4G /mnt/4GB.swap
sudo chmod 600 /mnt/4GB.swap
sudo mkswap /mnt/4GB.swap
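
If you don't want to wait for the reboot, you can also activate the new swap file immediately (the fstab entry below still makes it permanent):

sudo swapon /mnt/4GB.swap
free -m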

Append the following line to /etc/fstab. Note that sudo echo "..." >> /etc/fstab does not work as written: the >> redirection is performed by your non-root shell, so it fails with "Permission denied". Piping through sudo tee -a avoids this:

echo "/mnt/4GB.swap swap swap defaults 0 0" | sudo tee -a /etc/fstab
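
Before rebooting, you can sanity-check the new fstab entry by asking the system to activate every swap device it lists (skip this if you already ran swapon above; an already-active device will just be reported as busy):

sudo swapon -a
free -m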

REBOOT!

Reference: https://courses.nvidia.com/courses/course-v1:DLI+S-RX-02+V2/info

Now go ahead and run!
$ python3 onnx_to_tensorrt.py -m yolov4-416
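
While the engine builds, it is worth watching memory and swap from a second terminal to confirm the new swap file is actually being used, e.g.:

watch -n 2 free -m

(tegrastats, NVIDIA's own monitoring tool on Jetson, shows similar information.)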

Thanks,
Arun
pNaia Tech

@jkjung-avt (Owner)

Thanks for the suggestion. I've added a link to this issue in the README.

@dashos18

Hello! Even after doing all the steps above, I still get the "Killed" error... Do you know what else I can try?

@jkjung-avt (Owner)

@dashos18 What platform are you using?

@dashos18

Jetson Nano.
I used your code before on a Jetson Xavier and it worked amazingly! However, with the Nano it is a bit tricky for me.

@jkjung-avt (Owner)

I'm able to run the code on my Jetson Nano DevKit for all of the YOLO models mentioned in README.md. I don't know why it doesn't work for you.

As a last resort, you might try to conserve system memory by going into "text mode" (i.e. freeing up system memory consumed by the graphical interface) before executing "onnx_to_tensorrt.py": #386 (comment).
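
For reference, a generic way to do that on Ubuntu-based JetPack (this may differ from the exact steps in the linked comment) is via systemd targets:

sudo systemctl isolate multi-user.target      # drop to text mode for the current session
sudo systemctl set-default multi-user.target  # boot into text mode from now on
sudo systemctl set-default graphical.target   # restore the graphical default later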

@liruichao-eon

Thanks a lot!

@ViktorPavlovA

thanks!
