I'm trying to get the activation values of the neurons in each layer and store them. I have tried many variations of the dense_layer method in mode_base so that I can store activations in a dictionary, just like the way weights are stored for each layer. But unlike weights, the dimensions of the activations depend on the batch size and are therefore unknown before training, so all my attempts have failed, including those that define a dynamically sized tf variable. Is there another way to do layer-wise activation extraction? Thanks.
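One way around the batch-size problem is to not store activations in fixed-size variables at all, but simply collect each layer's output at run time in a dict keyed by layer name, the same way weights are keyed. Here is a minimal NumPy sketch of that idea (the names `forward_with_activations`, `dense_1`, etc. are hypothetical and not part of the library's `dense_layer`/`mode_base` API):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def forward_with_activations(x, weights):
    """Run a stack of dense layers, returning {layer_name: activation}.

    No preallocation is needed: each dict entry just takes whatever
    batch size the input happens to have.
    """
    activations = {}
    h = x
    for name, (w, b) in weights.items():
        h = relu(h @ w + b)
        activations[name] = h  # shape: (batch_size, layer_width)
    return activations

rng = np.random.default_rng(0)
weights = {
    "dense_1": (rng.standard_normal((4, 8)), np.zeros(8)),
    "dense_2": (rng.standard_normal((8, 3)), np.zeros(3)),
}

# Works for any batch size (here 5) without declaring it ahead of time.
acts = forward_with_activations(rng.standard_normal((5, 4)), weights)
print(acts["dense_1"].shape)  # (5, 8)
print(acts["dense_2"].shape)  # (5, 3)
```

In TensorFlow itself, the analogous approach is to pass the intermediate activation tensors as fetches to `session.run` (or, with Keras, to build a secondary model whose outputs are the intermediate layers) and store the returned arrays per batch, rather than trying to write them into fixed-shape `tf.Variable`s.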