Problems encountered running the train.py file #1
This is a problem caused by the PyTorch version; the reasons behind it are complex, and you may find some solutions here: https://www.google.com/search?q=inplace+operation+pytorch
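For illustration, here is a minimal, hypothetical reproduction of the in-place error reported in this issue (this is not code from this repository), together with the usual fix: replace the in-place mutation of a tensor that autograd has saved with an out-of-place version.

```python
import torch

x = torch.randn(4, requires_grad=True)

# ReluBackward0 saves ReLU's output in order to compute the gradient.
h = torch.relu(x)
h += 1  # in-place add bumps the saved tensor's version from 0 to 1
try:
    h.sum().backward()
except RuntimeError as e:
    # "... modified by an inplace operation ... output 0 of ReluBackward0,
    #  is at version 1; expected version 0 instead."
    print(type(e).__name__)

# Fix: use the out-of-place op so the saved output stays untouched.
x2 = torch.randn(4, requires_grad=True)
h2 = torch.relu(x2) + 1  # out-of-place add
h2.sum().backward()      # succeeds; x2.grad is populated
```

The same applies to `nn.ReLU(inplace=True)`, `tensor.relu_()`, `+=`, `*=`, and similar in-place operations: switching to their out-of-place counterparts is usually enough to resolve the error.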
Thank you very much for your answer. Originally, I thought the source code of the ReLU activation function had changed; with your help, I successfully solved the problem. In addition, the datasets you provide are all encoded with numpy. Could you upload a copy of the original data (or part of it), so that we can understand more intuitively what data your model works on?
You can refer to https://github.com/Shen-Lab/CPAC to see what the data is and how to process it.
I would like to ask how to fix this problem. I hope to get your help.
Hello, when I run the line loss.backward(), I get an error:
```
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [32, 1000, 256]], which is output 0 of ReluBackward0, is at version 1; expected version 0 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
```
This is the first time I have encountered this error, and the message seems to point to the relu function. I really don't know how to solve it. Due to server graphics-card issues, I cannot use Torch 1.6.0 and must choose a higher CUDA version (Torch 1.13.0); I don't know if this is the reason. I am looking forward to your reply and hope to get your help.
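To locate which forward operation produced the tensor that was later modified in place, PyTorch's anomaly detection can help. This is a generic debugging sketch, not code from train.py:

```python
import torch

# Anomaly mode records the forward traceback of each op, so when
# backward fails it also prints where the offending op ran in the
# forward pass. It slows training noticeably: enable it only to debug.
with torch.autograd.set_detect_anomaly(True):
    x = torch.randn(4, requires_grad=True)
    h = torch.relu(x)   # op whose saved output gets clobbered
    h += 1              # the in-place culprit
    try:
        h.sum().backward()
    except RuntimeError:
        # The printed warning names ReluBackward0 and shows the
        # forward-pass traceback of the relu call above.
        pass
```

With the real training script, one would wrap the forward pass and the loss.backward() call in the same context manager, then read the extra traceback that anomaly mode prints.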