Layers do not clear inputs, even when they are invalidated by workspaces #4291

Closed
schrum2 opened this Issue Nov 13, 2017 · 5 comments

schrum2 commented Nov 13, 2017

Issue Description

This short gist runs VGG16 with ImageNet weights and then tries to retrieve the intermediate activations of a particular layer:
https://gist.github.com/schrum2/254dec555ba6908c8f8a21f3141e023c

I have successfully used VGG16 before, but only for its final outputs. However, when I run this code, it crashes with a fatal error:

https://gist.github.com/schrum2/1cc1fb41971c9a4d0a284f366ecc7706

The full contents of the error report file are here:

https://gist.github.com/schrum2/5e47cf3d7023b8d06b5291f6d0e38943

It's possible I'm using the activate() method incorrectly, but even if I am, it seems inappropriate to receive a fatal error rather than a more straightforward exception.

Version Information


  • Deeplearning4j version: 0.9.1
  • Platform information (OS, etc.): Windows 10
AlexDBlack commented Nov 13, 2017

This appears to be due to workspaces, which are enabled by default for inference.
In short, it's trying to do forward pass with memory that has already been deallocated.
If you aren't familiar with workspaces, read this: https://deeplearning4j.org/workspaces

Moving forward, no-arg methods like activate() will be for internal use only - in part because this sort of "stateful" behaviour will be changed.

Anyway, to get the activations you want, you can use the feedForward methods on ComputationGraph, which return a Map<String, INDArray> of activations.
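A minimal sketch of that approach (the helper name and the layer name "block5_pool" are illustrative, not from this thread; it assumes a loaded pretrained ComputationGraph and a preprocessed input array):

```java
import java.util.Map;

import org.deeplearning4j.nn.graph.ComputationGraph;
import org.nd4j.linalg.api.ndarray.INDArray;

public class IntermediateActivations {

    // Hypothetical helper: run a forward pass and return the output of one
    // named layer. feedForward(input, false) does an inference-mode forward
    // pass and returns a map from layer name to that layer's activations;
    // the returned arrays stay valid after the call, unlike arrays tied to
    // a workspace scope that has already closed.
    static INDArray activationOf(ComputationGraph net, INDArray input, String layerName) {
        Map<String, INDArray> activations = net.feedForward(input, false);
        return activations.get(layerName); // e.g. "block5_pool" for VGG16
    }
}
```

The available layer names can typically be inspected (e.g. via the graph's summary) before picking one.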

Edit: another workaround (though beware of the memory cost in general) is to disable the inference workspace:
vgg16.getConfiguration().setInferenceWorkspaceMode(WorkspaceMode.NONE);
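Spelled out slightly (everything except the setter call itself is an illustrative wrapper), the workaround is applied once after loading the model and before any forward passes:

```java
import org.deeplearning4j.nn.conf.WorkspaceMode;
import org.deeplearning4j.nn.graph.ComputationGraph;

public class DisableInferenceWorkspace {

    // Turn off the inference workspace on an already-initialized graph.
    // With the workspace disabled, activations are allocated as ordinary
    // arrays that are not invalidated when a workspace scope closes, at
    // the cost of higher peak memory use during inference.
    static void apply(ComputationGraph net) {
        net.getConfiguration().setInferenceWorkspaceMode(WorkspaceMode.NONE);
    }
}
```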

@AlexDBlack AlexDBlack changed the title Fatal Exception when calling layer activate method Layers do not clear inputs, even when they are invalidated by workspaces Nov 13, 2017

@AlexDBlack AlexDBlack added the Bug label Nov 13, 2017

schrum2 commented Nov 13, 2017

The feedForward method seems to work fine. Thanks for the quick response.

From my perspective, the issue is resolved, but I don't know if you want to leave it open until the activate method is made private.

AlexDBlack commented Nov 13, 2017

Yeah, I'll leave the issue open as a reminder to fix this - regardless of what we do here, we don't want users running into this in the future.

AlexDBlack commented Nov 29, 2017

@AlexDBlack AlexDBlack closed this Nov 29, 2017

lock bot commented Sep 24, 2018

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

@lock lock bot locked and limited conversation to collaborators Sep 24, 2018
