Thanks for taking the time to write a clear question. However, it has been more than a year since I made the video, so I honestly can't remember the details of the tutorial. You do ask for anybody to help you, but it is mostly only me who responds to issues here, so it is unlikely that anyone will answer, and I've closed the issue again. You could try posting this as a comment on the video on YouTube and see if anyone can help you there.
What may also be helpful is to add a lot of print statements, so you can see the shapes of the tensors being passed through the neural network.
In general I can say that the convolution operator is a bit tricky to understand for multi-channel inputs. Perhaps it would help to watch the video a second time? As I recall, there is a part that discusses convolution for multi-channel inputs.
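To illustrate the multi-channel case, here is a minimal numpy sketch (not the tutorial's actual code, which uses TensorFlow's `conv2d`): each of the 36 filters in layer 2 spans all 16 input channels, and the per-channel products are summed into a single output channel, so the layer produces 36 output maps rather than 16*36 = 576.

```python
import numpy as np

def conv2d_same(x, filters):
    """Naive 'SAME'-padded 2D convolution.
    x: (height, width, in_channels)
    filters: (fh, fw, in_channels, out_channels)"""
    h, w, cin = x.shape
    fh, fw, _, cout = filters.shape
    pad_h, pad_w = fh // 2, fw // 2
    xp = np.pad(x, ((pad_h, pad_h), (pad_w, pad_w), (0, 0)))
    out = np.zeros((h, w, cout))
    for i in range(h):
        for j in range(w):
            patch = xp[i:i + fh, j:j + fw, :]  # (fh, fw, cin)
            # Sum over filter height, filter width AND input channels:
            # each filter collapses all 16 input channels into one number.
            out[i, j, :] = np.tensordot(patch, filters,
                                        axes=([0, 1, 2], [0, 1, 2]))
    return out

x = np.random.rand(14, 14, 16)          # layer-2 input: 16 channels
filters = np.random.rand(5, 5, 16, 36)  # 36 filters, each of shape 5x5x16
y = conv2d_same(x, filters)
print(y.shape)                          # (14, 14, 36), not (14, 14, 576)
```

The same shape logic applies in TensorFlow, where the filter tensor for a convolutional layer has shape `[filter_height, filter_width, in_channels, out_channels]`.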
I understand that with time it becomes difficult to keep track of minute details. I was just asking for a simple intuitive explanation, not rigorous details. By the way, I have posted it on Facebook; let's see.
Small confusion about how convolution is applied in
"TensorFlow-Tutorials/02_Convolutional_Neural_Network.ipynb"
Layer 1
28*28(1) -> Convolution (# Filters = 16) -> 28*28(16) -> Max Pooling -> 14*14(16) -> ReLU -> 14*14(16)
Layer 2
14*14(16) -> Convolution (# Filters = 36) -> ? -> Max Pooling -> 7*7(36) -> ReLU -> 7*7(36)
In Layer 1, if we give an input of size 28*28 to the convolution (with 16 filters), we get 16 outputs of size 28*28, since each filter is applied to the input image. Now in Layer 2, if we give 16 inputs of size 14*14 to the convolution (with 36 filters), then by the same logic we should get 16*36 = 576 outputs of size 14*14, since each filter is applied to every input image. But according to the diagram above, the final output of Layer 2 is different. Can anybody tell me how conv2d is applied in the second layer?