Unable to run ./yolo_inference on GPU #132
Comments
Update: I might have to add
Hi @mattpopovich , The codes in development are truly outdated; we plan to update them this week.
That'd be great! Thank you. I'd be happy to test once they are updated.
Hi @mattpopovich , Seems that you should add the following:

```shell
git clone https://github.com/pytorch/vision.git
cd vision
git checkout release/0.8.0
mkdir build && cd build
cmake .. -DTorch_DIR=$TORCH_PATH/share/cmake/Torch -DWITH_CUDA=ON
make -j4
sudo make install
```

I guess that the above modification will temporarily solve your problem. FYI, TorchVision has updated the C++ interface in pytorch/vision#3146, so the codes in the development tree (82d6afb) only work with the older C++ interface.
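A quick way to confirm that the resulting TorchVision build actually registered its CUDA kernels is to call `torchvision.ops.nms` on CUDA tensors from Python. A minimal sketch (not from this thread): it reports any failure, including missing packages or a CPU-only build, as `False`:

```python
def cuda_nms_available() -> bool:
    """Report whether torchvision's NMS can run on a CUDA tensor.

    Returns False when torch/torchvision are not importable, no GPU is
    visible, or the torchvision build lacks the CUDA operator.
    """
    try:
        import torch
        from torchvision.ops import nms
    except ImportError:
        return False
    if not torch.cuda.is_available():
        return False
    # Two overlapping boxes in (x1, y1, x2, y2) form, on the GPU.
    boxes = torch.tensor([[0.0, 0.0, 10.0, 10.0],
                          [1.0, 1.0, 11.0, 11.0]], device="cuda")
    scores = torch.tensor([0.9, 0.8], device="cuda")
    try:
        nms(boxes, scores, 0.5)  # raises RuntimeError if the CUDA op is absent
    except RuntimeError:
        return False
    return True
```

If this returns `False` while a GPU is present, the build most likely went through without `-DWITH_CUDA=ON` (or against a CPU-only libtorch).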
Hi @mattpopovich , we've updated the C++ interfaces in #136 , and we've tested the new interfaces. Just for the above problem, I guess it's because you forgot to add the following:

```shell
git clone https://github.com/pytorch/vision.git
cd vision
git checkout release/0.9  # Assume that you're using PyTorch 1.8.0; replace this with `release/0.10` if you're using PyTorch 1.9.0
mkdir build && cd build
cmake .. -DTorch_DIR=$TORCH_PATH/share/cmake/Torch -DWITH_CUDA=ON
make -j4
sudo make install
```

BTW, we didn't impose overly strong restrictions on the versions of CUDA and Ubuntu. I believe this will solve your problem, so I'm closing this issue; feel free to reopen it or file a new issue if you have further questions.
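The PyTorch-to-TorchVision branch pairing described above can be captured in a small lookup. This is only a sketch covering the versions mentioned in this thread (torch 1.7 pairs with torchvision 0.8):

```python
# Maps a torch major.minor version to the matching torchvision release
# branch, per the pairing described in the comment above.
RELEASE_BRANCH = {
    "1.7": "release/0.8.0",
    "1.8": "release/0.9",
    "1.9": "release/0.10",
}

def torchvision_branch(torch_version: str) -> str:
    """Return the torchvision branch to check out for a torch version."""
    major_minor = ".".join(torch_version.split(".")[:2])
    if major_minor not in RELEASE_BRANCH:
        raise ValueError(f"no known torchvision branch for torch {torch_version}")
    return RELEASE_BRANCH[major_minor]
```

For example, `torchvision_branch("1.8.0")` returns `release/0.9`.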
Thanks @zhiqwang. I think that solves my problem. I'm still having issues building but I've raised those concerns with pytorch/vision: pytorch/vision#4175 |
@zhiqwang @mattpopovich How do I convert a customized yolov5 model to yolov5rt? I trained a customized yolov5 model with my own dataset, and now I need to convert it to yolov5rt because I need to use the torch.jit.script function to export the weights to TorchScript format. Your help will be appreciated.
Hi @Jelly123456 , we provide a notebook, https://github.com/zhiqwang/yolov5-rt-stack/blob/master/notebooks/how-to-align-with-ultralytics-yolov5.ipynb , that shows how to convert a customized yolov5 model trained with ultralytics.
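The export step the notebook covers can be sketched roughly as below. This is only an illustration: the `yolort.models` import path and `load_from_yolov5` are assumed names, not confirmed by this thread, so check the notebook for the actual conversion API; the function returns `False` when the optional dependencies are missing:

```python
def export_customized_checkpoint(checkpoint_path: str, out_path: str) -> bool:
    """Sketch: load a converted checkpoint, script it, save as TorchScript.

    `yolort.models.YOLOv5` and `load_from_yolov5` are assumed names; see
    the notebook linked above for the real conversion workflow.
    """
    try:
        import torch
        from yolort.models import YOLOv5  # assumed import path
    except ImportError:
        return False
    model = YOLOv5.load_from_yolov5(checkpoint_path)  # assumed API
    model.eval()
    torch.jit.script(model).save(out_path)
    return True
```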
Hi, thanks for putting this repo together. I am working with it because I am trying to run inference on my yolov5 model in C++ with pre- and post-processing on the GPU, as I mentioned here.
I converted my model from yolov5 to yolov5-rt-stack and it seemed to work without issue, but I was having issues trying to run it. Before diving into that issue too deeply, I decided to try and run your sample code first to see if that worked.
I followed your README and I was able to run inference via CPU without issue. However, when I try to run using the `--gpu` flag, I get the following error:
I think the main thing to note in that error log is the following:

My takeaway from that is either I am building TorchVision for CPU and not CUDA... or `torchvision::nms` does not support CUDA?
I installed TorchVision via your instructions listed under number 2 here. I've tried checking out `release/0.8.0`, `v0.8.1`, and `v0.8.2`, all with the same issue. I've also tried `v0.9.0` and `v0.10.0`, but your build instructions do not work for them, so I ignored them for the time being.

Also worth noting, there are two dependencies that I don't meet:
Similar issues that I've found:
pytorch/vision#3058
WongKinYiu/PyTorch_YOLOv4#169
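For issues like these, the relevant version details can be gathered with a small helper before filing a report. A sketch (not from this thread); optional packages are reported as missing rather than raising:

```python
def environment_report() -> dict:
    """Collect version info worth pasting into a GPU/build bug report."""
    import platform

    report = {
        "python": platform.python_version(),
        "platform": platform.platform(),
    }
    try:
        import torch
        report["torch"] = torch.__version__
        report["torch_cuda"] = torch.version.cuda  # None on CPU-only builds
        report["cuda_available"] = torch.cuda.is_available()
    except ImportError:
        report["torch"] = "not installed"
    try:
        import torchvision
        report["torchvision"] = torchvision.__version__
    except ImportError:
        report["torchvision"] = "not installed"
    return report
```

A `torch_cuda` of `None` alongside `torchvision` built from source is a strong hint the whole stack is CPU-only.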
Any thoughts or ideas? Does the `--gpu` flag work for you?

Thanks,
Matt