
DL4J: Add 'output for these layers only' method for ComputationGraph #6736

AlexDBlack opened this issue Nov 20, 2018 · 1 comment



AlexDBlack commented Nov 20, 2018

We have a protected method for this, but no public method:

```java
/**
 * Provide the output of the specified layers, detached from any workspace. This is most commonly used at inference/test
 * time, and is more memory efficient than {@link #ffToLayerActivationsDetached(boolean, FwdPassType, boolean, int, int[], INDArray[], INDArray[], INDArray[], boolean)}
 * and {@link #ffToLayerActivationsInWS(boolean, int, int[], FwdPassType, boolean, INDArray[], INDArray[], INDArray[], boolean)}.<br>
 * This method clears all layer inputs.<br>
 * NOTE: in general, no workspaces should be activated externally for this method!
 * This method handles the workspace activation as required.
 *
 * @param train            Training mode (true) or test/inference mode (false)
 * @param fwdPassType      Type of forward pass to perform (STANDARD or RNN_TIMESTEP only)
 * @param layerIndexes     Indexes of the layers to get the activations for
 * @param features         Input features for the network
 * @param fMask            Input/feature mask arrays. May be null.
 * @param lMasks           Label mask arrays. May be null.
 * @param clearLayerInputs If true: the layer input fields will be cleared
 * @param detachedInputs   If true: the layer input fields will be detached. Usually used for external errors cases
 * @param outputWorkspace  Optional - if provided, outputs should be placed in this workspace. NOTE: this workspace must be open
 * @return Output of the specified layers, detached from any workspace
 */
protected INDArray[] outputOfLayersDetached(boolean train, @NonNull FwdPassType fwdPassType, @NonNull int[] layerIndexes, @NonNull INDArray[] features,
                                            INDArray[] fMask, INDArray[] lMasks, boolean clearLayerInputs, boolean detachedInputs, MemoryWorkspace outputWorkspace){
```
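The requested public method would essentially be a thin overload delegating to the protected worker with sensible defaults (inference mode, standard forward pass, inputs cleared). The sketch below is a self-contained mock of that delegation pattern only: `GraphSketch`, the use of `double[]` in place of `INDArray`, and the toy per-layer computation are all hypothetical stand-ins, not the actual DL4J `ComputationGraph` API.

```java
// Hypothetical sketch of a public "output for these layers only" overload
// that delegates to a protected worker method. double[] stands in for
// INDArray; workspace handling is omitted for brevity.
class GraphSketch {
    enum FwdPassType { STANDARD, RNN_TIMESTEP }

    // Protected worker, analogous (in shape only) to outputOfLayersDetached(...).
    protected double[][] outputOfLayersDetached(boolean train, FwdPassType fwdPassType,
                                                int[] layerIndexes, double[][] features) {
        double[][] out = new double[layerIndexes.length][];
        for (int i = 0; i < layerIndexes.length; i++) {
            // Toy stand-in for a forward pass: "layer" k scales the input by (k + 1).
            int idx = layerIndexes[i];
            double[] activations = new double[features[0].length];
            for (int j = 0; j < activations.length; j++) {
                activations[j] = features[0][j] * (idx + 1);
            }
            out[i] = activations;
        }
        return out;
    }

    // The proposed public overload: test/inference mode, standard forward pass.
    public double[][] output(int[] layerIndexes, double[]... features) {
        return outputOfLayersDetached(false, FwdPassType.STANDARD, layerIndexes, features);
    }

    public static void main(String[] args) {
        GraphSketch g = new GraphSketch();
        // Activations of "layers" 0 and 2 only, for a single input vector.
        double[][] out = g.output(new int[]{0, 2}, new double[]{1.0, 2.0});
        System.out.println(out[0][1] + " " + out[1][1]); // prints "2.0 6.0"
    }
}
```

The point of the pattern is that the public overload fixes the parameters callers rarely need to vary (train flag, forward-pass type, workspace arguments), so existing callers of the protected method are unaffected.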

AlexDBlack added a commit that referenced this issue Nov 27, 2018


Misc DL4J/ND4J fixes (#6774)
* #6770 Fix issue with incorrect EWS for TAD resulting in column vectors

* #6748 Add NaN checks in evaluation

* #6735 Add validation for invalid evaluation

* #6734 Fix ComputationGraph layer name inference from param name (get/setParam etc methods)

* #6736 additional CompGraph output overload

* Add commit() in ParallelInference

* Small fix

* put commit() in proper location

* Layer config cloning should clone INDArray fields to avoid excessive relocation issues on CUDA

* ParallelInference: Add commit before passing input array(s) between threads

* Trigger CI


lock bot commented Dec 28, 2018

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

@lock lock bot locked and limited conversation to collaborators Dec 28, 2018
