This repository has been archived by the owner on Oct 31, 2023. It is now read-only.

memory leak #2

Closed
wassname opened this issue Sep 23, 2022 · 1 comment

Comments


wassname commented Sep 23, 2022

Thanks for sharing; this is a simple and interesting way to use auxiliary losses.

When using it on a large dataset I get a memory leak: it uses up more and more CUDA memory until it crashes. I think this is because the graph is never cleared, since every backward pass is called as loss.backward(retain_graph=True).
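To illustrate what I mean, here is a minimal sketch of the pattern (the model and losses are illustrative, not the actual MetaBalance code):

```python
import torch

# Illustrative two-loss setup: one shared forward pass,
# two losses backpropagated separately.
model = torch.nn.Sequential(
    torch.nn.Linear(10, 10), torch.nn.ReLU(), torch.nn.Linear(10, 1)
)
opt = torch.optim.SGD(model.parameters(), lr=0.01)

for step in range(1000):
    x = torch.randn(32, 10)
    out = model(x)
    loss_main = out.pow(2).mean()
    loss_aux = out.abs().mean()

    opt.zero_grad()
    # retain_graph=True keeps the saved activations alive so the second
    # backward can reuse the same graph...
    loss_main.backward(retain_graph=True)
    # ...but if the last backward also retains the graph, it is only
    # freed once every reference to it is dropped. In this minimal loop
    # that still happens when out/losses are rebound, but if anything
    # (e.g. a balancing step storing tensors with grad history) holds a
    # reference, each iteration's graph stays alive and CUDA memory grows.
    loss_aux.backward(retain_graph=True)
    opt.step()
```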

The obvious next step is to clear the graph with loss.backward(retain_graph=False), but then I get an error that the variables have been modified (image below). I assume this is intentional in MetaBalance, but I can't find where, and I can't find a way to clear the graph manually.
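For reference, this self-contained sketch (again not MetaBalance's actual code) reproduces the same class of error: an in-place parameter update between two backward passes over a retained graph invalidates the saved tensors, so the second backward fails.

```python
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(10, 10), torch.nn.ReLU(), torch.nn.Linear(10, 1)
)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(32, 10)
out = model(x)
loss_a = out.pow(2).mean()
loss_b = out.abs().mean()

loss_a.backward(retain_graph=True)
opt.step()         # modifies the weights in place between backwards
loss_b.backward()  # RuntimeError: one of the variables needed for
                   # gradient computation has been modified by an
                   # inplace operation
```

So something in my setup seems to be mutating a tensor the retained graph still needs, which is presumably why simply switching to retain_graph=False is not enough.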

Any tips?

[screenshot of the error traceback]

@wassname (Author)

I'm going to close this, because the issue seems more complicated than I originally thought. Perhaps it's my torch version, or perhaps I do have an in-place operation somewhere. It's really quite hard to pin down into a clear question, I'm afraid.
