Add methods to determine input size #4670

AlexDBlack opened this Issue Feb 18, 2018 · 1 comment


AlexDBlack commented Feb 18, 2018

Output/layer sizes are (now) easy to determine:

/**
 * Return the layer size (number of units) for the specified layer.
 * Note that the meaning of "layer size" can depend on the type of layer. For example:
 * - DenseLayer, OutputLayer, recurrent layers: number of units (nOut configuration option)
 * - ConvolutionLayer: the depth (number of channels)
 * - Subsampling layers, global pooling layers, etc.: a size of 0 is always returned
 *
 * @param layer Index of the layer to get the size of. Must be in range 0 to nLayers-1 inclusive
 * @return Size of the layer
 */
public int layerSize(int layer) {
    if (layer < 0 || layer >= layers.length) {
        throw new IllegalArgumentException("Invalid layer index: " + layer + ". Layer index must be between 0 and "
                + (layers.length - 1) + " inclusive");
    }
    org.deeplearning4j.nn.conf.layers.Layer conf = layers[layer].conf().getLayer();
    if (!(conf instanceof FeedForwardLayer)) {
        return 0;
    }
    FeedForwardLayer ffl = (FeedForwardLayer) conf;
    return ffl.getNOut();
}

But the input size/type of a network is not as easy to infer, and there is currently no equivalent method for it.
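For context, the bounds-check-plus-instanceof pattern used by layerSize() above can be sketched as a standalone program. The class names here (LayerConf, FeedForwardConf, SubsamplingConf) are simplified stand-ins for the DL4J configuration classes, not the real API:

```java
// Standalone sketch of the layerSize() logic above.
// LayerConf / FeedForwardConf / SubsamplingConf are simplified
// stand-ins for the DL4J configuration classes, not the real API.
public class LayerSizeSketch {
    interface LayerConf {}

    // Only feed-forward-style layers carry an nOut setting.
    static final class FeedForwardConf implements LayerConf {
        final int nOut;
        FeedForwardConf(int nOut) { this.nOut = nOut; }
    }

    // E.g. a subsampling/pooling layer: it has no units of its own.
    static final class SubsamplingConf implements LayerConf {}

    static int layerSize(LayerConf[] layers, int layer) {
        // Validate the index first; note >= rather than >, so that
        // layer == layers.length is rejected instead of overflowing.
        if (layer < 0 || layer >= layers.length) {
            throw new IllegalArgumentException("Invalid layer index: " + layer
                    + ". Layer index must be between 0 and "
                    + (layers.length - 1) + " inclusive");
        }
        LayerConf conf = layers[layer];
        // instanceof is false for null, so no separate null check is needed.
        if (!(conf instanceof FeedForwardConf)) {
            return 0;   // by convention, size 0 for non-feed-forward layers
        }
        return ((FeedForwardConf) conf).nOut;
    }

    public static void main(String[] args) {
        LayerConf[] layers = {
            new FeedForwardConf(128),   // e.g. a DenseLayer with nOut = 128
            new SubsamplingConf(),      // pooling layer: size 0
            new FeedForwardConf(10)     // e.g. an OutputLayer with nOut = 10
        };
        System.out.println(layerSize(layers, 0)); // 128
        System.out.println(layerSize(layers, 1)); // 0
        System.out.println(layerSize(layers, 2)); // 10
    }
}
```

The design choice worth noting is that non-feed-forward layers report 0 rather than throwing, so callers can iterate over all layers without special-casing pooling or subsampling layers.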



lock bot commented Sep 22, 2018

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

@lock lock bot locked and limited conversation to collaborators Sep 22, 2018
