Error : Input type (torch.FloatTensor) and weight type (torch.HalfTensor) should be the same #101
Hello @niharpatel1999, thank you for your interest in our work! Please visit our Custom Training Tutorial to get started, and see our Jupyter Notebook, Docker Image, and Google Cloud Quickstart Guide for example environments. If this is a bug report, please provide screenshots and minimum viable code to reproduce your issue; otherwise we cannot help you. If this is a custom model or data training question, please note that Ultralytics does not provide free personal support. As a leader in vision ML and AI, we do offer professional consulting, from simple expert advice up to delivery of fully customized, end-to-end production solutions for our clients. For more information please visit https://www.ultralytics.com.
@niharpatel1999 ok, thanks for the bug report. I will try to reproduce.
There is no issue if we run inference on Colab. But if you download the last.pt file trained on a GPU-accelerated Colab runtime and run inference on a desktop, then it shows the error.
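The situation described above (a last.pt checkpoint trained in half precision on a Colab GPU, then loaded on a CPU-only desktop) can be sketched with a stand-in layer; the layer, buffer, and shapes here are illustrative, not YOLOv5 code:

```python
import io

import torch

# Simulate a checkpoint saved after half-precision GPU training
# (a small stand-in conv layer, not the actual YOLOv5 model).
buf = io.BytesIO()
torch.save(torch.nn.Conv2d(3, 8, kernel_size=3).half().state_dict(), buf)
buf.seek(0)

# On the CPU-only desktop: map the saved storages to CPU when loading.
state = torch.load(buf, map_location="cpu")
model = torch.nn.Conv2d(3, 8, kernel_size=3).half()
model.load_state_dict(state)

# The weights are still float16 after loading, which is what later
# collides with float32 inputs during CPU inference.
print(next(model.parameters()).dtype)  # torch.float16
```

Note that `map_location="cpu"` only relocates the storages; it does not change the saved dtype, so half-precision weights stay half precision.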
On Tue, Jun 16, 2020 at 10:57 PM Glenn Jocher wrote:
> I used our colab notebook to train a model for 3 epochs, then used last.pt to run inference. No problem at all, everything works fine.
> [screenshot: https://user-images.githubusercontent.com/26833433/84807300-e2f9fa80-afbb-11ea-8601-c6c5bd5a7ea8.png]
@niharpatel1999 ah, ok I see. I will try to reproduce your use case here.
@niharpatel1999 I repeated your steps and everything works fine for me; I don't see any problems. You may want to repeat your steps with the latest version of the repo, as perhaps your bug has already been resolved. You can also run models on CPU in Colab like this, by the way:
python detect.py --device cpu

Thanks.
@glenn-jocher, I am @niharpatel1999's teammate. The weights tensor in our case is of type torch.HalfTensor because we trained on GPU, so we get the error that the input type (torch.FloatTensor) and weight type (torch.HalfTensor) should be the same. We resolved the error by changing one line of code in detect.py: line 23 had model.to(device).eval(), and we changed it to model.to(device).float().eval(). This casts the weight tensors to float, and inference then executed successfully on the CPU. Thanks for your support.
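The mismatch and the .float() fix described above can be reproduced in isolation; the conv layer and tensor shapes here are illustrative stand-ins, not the YOLOv5 model:

```python
import torch

# Stand-in conv layer: weights cast to float16, as they would be after
# half-precision GPU training.
conv = torch.nn.Conv2d(3, 8, kernel_size=3).half()
x = torch.randn(1, 3, 32, 32)  # CPU inference input arrives as float32

try:
    conv(x)
except RuntimeError as e:
    # Raises a RuntimeError about mismatched input/weight types, e.g.
    # "Input type (torch.FloatTensor) and weight type (torch.HalfTensor)
    # should be the same"
    print("dtype mismatch:", type(e).__name__)

# The one-line fix from the comment above: cast the weights back to
# float32 before CPU inference, i.e. model.to(device).float().eval()
conv = conv.float()
print(conv(x).dtype)  # torch.float32
```

The same effect can also be had by casting the input down with x.half() instead, but float32 weights are the safer choice on CPU, where many half-precision kernels are slow or unsupported.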
This should have been resolved in recent commits. Please git pull and try again. |
@deeppatel4557 Thank you very much. Your proposal is very helpful to me |
I trained my YOLOv5l model on Colab and downloaded the last.pt file to run inference on my local machine, but it shows the following error:
RuntimeError: Input type (torch.FloatTensor) and weight type (torch.HalfTensor) should be the same
while executing
python detect.py --weights weights/last.pt --img 416 --conf 0.4 --source ../test --device cpu