model file too big after batch learning #112
Comments
Below is a little code snippet I use to recursively clean up the model before saving it to disk. It's not overly robust (it fails for some container classes), but it works on 99% of my models. All I'm saying is that you may need to modify it a little.
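(The snippet itself didn't survive in this copy of the thread. For reference, a minimal sketch of that kind of recursive cleanup, assuming the standard torch/nn convention that containers such as `nn.Sequential` expose a `modules` table; the helper name `lightenModule` is illustrative, not from the original comment:)

```lua
-- Sketch of a recursive cleanup before torch.save (helper name is illustrative).
-- Replaces the cached activation/gradient tensors with empty tensors of the
-- same type so they are not serialized with the model.
local function lightenModule(module)
   if module.output then
      module.output = module.output.new()
   end
   if module.gradInput then
      module.gradInput = module.gradInput.new()
   end
   -- Recurse into containers such as nn.Sequential / nn.Parallel.
   if module.modules then
      for _, m in ipairs(module.modules) do
         lightenModule(m)
      end
   end
end

lightenModule(model)
torch.save('model.t7', model)
```

As the commenter notes, a simple walk like this can miss container classes that store child modules somewhere other than a `modules` table, so it may need per-model tweaks.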
FYI: I don't think this is an issue, and you should probably close it. There are plenty of examples I can think of where I would want to save the gradInput and output tensors.
Thanks. I am using the following code for now.
Then you should close the issue.
Is this by design?
Yes. Torch serializes everything in those module instances when writing to disk, including the output and gradInput values and any other tensors in the table. It's up to the user to clean up the module if all you want are the weight and bias values.
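(Editor's note: newer versions of torch/nn also ship a built-in helper, `Module:clearState()`, which recursively empties the cached `output`/`gradInput` tensors; whether it is available depends on your nn version. A usage sketch:)

```lua
-- If your nn version provides it, clearState() recursively clears the
-- cached output/gradInput tensors, so only the parameters (weights,
-- biases) and module structure are written to disk.
model:clearState()
torch.save('model.t7', model)
```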
Thanks |
This issue is probably caused by Module#output and Module#gradInput.