RuntimeError: Trying to backward through the graph a second time, but the saved intermediate results have already been freed. Specify retain_graph=True when calling backward the first time.
#45 · Closed · XieZixiUSTC opened this issue on Sep 10, 2021 · 3 comments
I encountered this problem, please help me. Thanks a lot.
The code is:
```python
# compute output
output = model(images)
# loss = criterion(output, target)

# SAM
# first forward-backward pass
loss = criterion(output, target)  # use this loss for any training statistics
loss.backward()
optimizer.first_step(zero_grad=True)

# second forward-backward pass
criterion(output, target).backward()  # make sure to do a full forward pass
optimizer.second_step(zero_grad=True)

# measure accuracy and record loss
acc1, acc5 = accuracy(output, target, topk=(1, 5))
losses.update(loss.item(), images.size(0))
top1.update(acc1[0], images.size(0))
top5.update(acc5[0], images.size(0))
```
The problem is:
```
Traceback (most recent call last):
  File "C:/softWare/SAM/main.py", line 557, in
    main()
  File "C:/softWare/SAM/main.py", line 156, in main
    main_worker(args.gpu, ngpus_per_node, args)
  File "C:/softWare/SAM/main.py", line 340, in main_worker
    acc2, loss2 = train(train_loader, model, criterion, optimizer, epoch, args, f)
  File "C:/softWare/SAM/main.py", line 413, in train
    criterion(output, target).backward()  # make sure to do a full forward pass
  File "C:\softWare\Python37\lib\site-packages\torch\tensor.py", line 221, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "C:\softWare\Python37\lib\site-packages\torch\autograd\__init__.py", line 132, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: Trying to backward through the graph a second time, but the saved intermediate results have already been freed. Specify retain_graph=True when calling backward the first time.
```
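The traceback points at the actual bug: the second `backward()` call reuses `output` from the first forward pass, but that graph was already freed by the first `loss.backward()`. SAM's second step needs a fresh forward pass at the perturbed weights, i.e. `criterion(model(images), target).backward()` instead of reusing `output`. A minimal sketch of the failure and the fix, using a plain `torch.nn.Linear` stand-in rather than the full SAM training loop (the model, data, and shapes here are assumptions for illustration):

```python
import torch

# stand-ins for the model/criterion/data in the report
model = torch.nn.Linear(4, 2)
criterion = torch.nn.MSELoss()
images = torch.randn(8, 4)
target = torch.randn(8, 2)

output = model(images)
criterion(output, target).backward()      # first backward frees the graph

caught = False
try:
    criterion(output, target).backward()  # reuses the freed graph
except RuntimeError:
    caught = True  # "Trying to backward through the graph a second time"

# fix: run a fresh forward pass before the second backward
criterion(model(images), target).backward()
```

With SAM this becomes `loss.backward(); optimizer.first_step(zero_grad=True)` followed by `criterion(model(images), target).backward(); optimizer.second_step(zero_grad=True)`.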
Sorry, I ran into a similar issue. I actually did the full second forward pass, but it still reported: "Trying to backward through the graph a second time (or directly access saved variables after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved variables after calling backward."
Here is my code:
```python
output = backbone(imgs)
loss = self.criterion(output, labs)
backbone_opt.zero_grad()
loss.backward()
backbone_opt.first_step(zero_grad=True)
```
But if I substitute `output2 = backbone(imgs)` with `output2 = backbone(imgs.detach())`, it works.
Does this implementation make sense, or did I also make a mistake here?
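One plausible explanation, assuming `imgs` is itself the output of another differentiable module (the snippet above does not show where `imgs` comes from): even with a full second forward through `backbone`, the second backward still propagates into the autograd history attached to `imgs`, and that upstream graph was freed by the first `backward()`. Calling `imgs.detach()` cuts that link, which is why it works; the trade-off is that the upstream module only receives gradients from the first pass. A hedged sketch, where `feat_net` is a hypothetical upstream module:

```python
import torch

feat_net = torch.nn.Linear(4, 4)  # hypothetical upstream module producing imgs
backbone = torch.nn.Linear(4, 2)
criterion = torch.nn.MSELoss()
x = torch.randn(8, 4)
labs = torch.randn(8, 2)

imgs = feat_net(x)  # imgs carries a graph back into feat_net

criterion(backbone(imgs), labs).backward()  # frees feat_net's graph as well

caught = False
try:
    # fresh forward through backbone, but gradients still flow into the
    # history of imgs, whose graph was already freed
    criterion(backbone(imgs), labs).backward()
except RuntimeError:
    caught = True

# detaching stops backprop at imgs, so the freed upstream graph is never touched
criterion(backbone(imgs.detach()), labs).backward()
```

If `imgs` is just a batch loaded from disk with no grad history, this explanation would not apply, and the cause would have to be elsewhere in the training step.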