v8Detection loss backward #13285
Comments
Hello! It looks like you're encountering an issue with the backward pass of your loss calculation. In your case, you should only call `backward()` on the first returned loss tensor (`loss1`); the second return value holds detached per-component values intended for logging. Make sure that your model's parameters require gradients, and that your inputs to the model are also set to require gradients if necessary. You can check whether your model's parameters require gradients by printing `param.requires_grad` for each parameter. If the problem persists, ensure that all operations in your custom training loop support gradient tracking and that no tensors are inadvertently detached from the computation graph before the loss calculation. If you need further assistance, feel free to share more details about your training loop and model setup!
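As a quick sanity check for the advice above, you can print `requires_grad` for every parameter; the model below is a hypothetical stand-in, since any `nn.Module` behaves the same way:

```python
import torch.nn as nn

# Hypothetical stand-in for a detection model; any nn.Module works the same way.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Conv2d(8, 4, 3))

# Inspect which parameters are trainable.
for name, param in model.named_parameters():
    print(name, param.requires_grad)

# If every parameter reported False, the loss would have no grad_fn and
# loss.backward() would raise the RuntimeError described in the question.
```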
Hi, it is working now when I freeze all the parameters except the 22nd layer. But the problem is that I don't want to update the parameters of the pretrained YOLO models, only the gate weights.
Hello,

Great to hear that freezing the parameters worked for you! To update only the gate weights without altering the pretrained model parameters, you can selectively disable gradient computation for the model parameters:

```python
for param in model.parameters():
    param.requires_grad = False
```

Apply this to each of your pretrained models. This way, only the parameters of the gate (assuming they have `requires_grad=True`) will be updated during training. If you need further assistance or have more questions, feel free to ask. Happy coding! 🚀
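The advice above can be sketched end-to-end. The snippet below is a minimal illustration, not the actual YOLO setup: `model_1`, `model_2`, and `gate` are hypothetical stand-in `nn.Linear` modules for the pretrained models and the gating layer.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins: two "pretrained" models feeding a small trainable gate.
model_1 = nn.Linear(16, 4)
model_2 = nn.Linear(16, 4)
gate = nn.Linear(8, 4)  # the only module we want to train

# Freeze every parameter of the pretrained models.
for m in (model_1, model_2):
    for param in m.parameters():
        param.requires_grad = False

# Pass only the gate's parameters to the optimizer, so only they are updated.
optimizer = torch.optim.SGD(gate.parameters(), lr=0.01)

x = torch.randn(2, 16)
combined = torch.cat([model_1(x), model_2(x)], dim=1)  # frozen features
out = gate(combined)                                   # gradients flow only into the gate
out.sum().backward()
optimizer.step()
```

After `backward()`, the gate's parameters carry gradients while the frozen models' parameters do not, so `optimizer.step()` leaves the pretrained weights untouched.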
👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help. For additional resources and information, please see the links below:
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcome! Thank you for your contributions to YOLO 🚀 and Vision AI ⭐
Search before asking
Question
I am creating a custom training loop for the YOLO model to apply ensemble learning. I am initialising v8DetectionLoss with the model.
```python
criterion = v8DetectionLoss(model_1)
loss1, lossitem1 = criterion(pred1, batch)
```
What is the right way to do `loss.backward()`? I tried both `loss1.backward()` and `lossitem1.backward()`, and for both approaches I am getting:
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
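This error is what PyTorch raises when `backward()` is called on a tensor that is detached from the computation graph. A minimal sketch, using a plain tensor in place of the detection loss, reproduces the situation: `loss` (attached, with a `grad_fn`) backpropagates fine, while a `detach()`ed copy, analogous to the per-component loss items returned for logging, does not.

```python
import torch

x = torch.randn(3, requires_grad=True)
loss = (x ** 2).sum()       # attached to the graph: has a grad_fn
loss_items = loss.detach()  # detached copy, analogous to the logged loss items

loss.backward()             # works: gradients flow back into x

try:
    loss_items.backward()   # no grad_fn -> raises the RuntimeError above
except RuntimeError as e:
    print("backward on detached tensor failed:", e)
```

So the fix is to call `backward()` only on the first returned tensor; the second return value is detached by design and meant for logging, not optimization.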
Additional
No response