#####################################################
In the code below, why are we computing the values of layer_1_delta and layer_2_delta again and again inside the per-example loop? Shouldn't computing them once per batch suffice? What is the purpose? This is the code from the regularization chapter for MNIST digit classification with mini-batched SGD; I changed some of it, as shown here:
####################################################
```python
# Compute the deltas once per batch instead of once per example.
layer_2_delta = (labels[batch_start:batch_end] - layer_2) / batch_size
layer_1_delta = layer_2_delta.dot(weights_1_2.T) * relu2deriv(layer_1)
layer_1_delta *= dropout_mask  # apply the dropout mask before the weight update

# Scale alpha to compensate for making one update instead of batch_size of them.
weights_1_2 += (batch_size - 1) * alpha * layer_1.T.dot(layer_2_delta)
weights_0_1 += (batch_size - 1) * alpha * layer_0.T.dot(layer_1_delta)

for k in range(batch_size):
    correct_cnt += int(np.argmax(layer_2[k:k+1]) == np.argmax(labels[batch_start+k:batch_start+k+1]))
```
##############################
This seems much faster and reaches the same benchmarks.
##############################
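For context, here is a sketch of what the chapter's original inner loop looks like, reconstructed from the snippets quoted in this thread (not copied verbatim from the book). The deltas and weight updates sit inside the `for k` loop, so they execute batch_size times per batch with nearly identical gradients; that repeated update is presumably why the modified version above scales alpha up when it updates only once:

```python
# Reconstructed sketch of the chapter's per-example loop (assumed, not verbatim):
for k in range(batch_size):
    correct_cnt += int(np.argmax(layer_2[k:k+1]) == np.argmax(labels[batch_start+k:batch_start+k+1]))

    # These lines do not depend on k, so every pass recomputes (almost) the same values.
    layer_2_delta = (labels[batch_start:batch_end] - layer_2) / batch_size
    layer_1_delta = layer_2_delta.dot(weights_1_2.T) * relu2deriv(layer_1)
    layer_1_delta *= dropout_mask

    weights_1_2 += alpha * layer_1.T.dot(layer_2_delta)
    weights_0_1 += alpha * layer_0.T.dot(layer_1_delta)
```

The only k-dependence is indirect: each update nudges weights_1_2, so layer_1_delta shifts slightly on the next pass, but layer_2 and therefore layer_2_delta stay fixed for the whole batch.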
I also struggled with that part. But if you look at the batching in the next chapter (on GitHub), you'll see it's done differently: only the correct_cnt calculation is inside the loop:
```python
for k in range(batch_size):
    correct_cnt += int(np.argmax(layer_2[k:k+1]) == np.argmax(labels[batch_start+k:batch_start+k+1]))
```
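Putting the pieces together, here is a minimal self-contained sketch of the hoisted-out version under stated assumptions: synthetic data stands in for the chapter's images/labels arrays, and the relu/relu2deriv helpers and hyperparameters are filled in from memory of the chapter rather than quoted from this thread:

```python
import numpy as np

np.random.seed(1)

def relu(x):
    return (x >= 0) * x   # ReLU activation

def relu2deriv(output):
    return output > 0     # derivative of ReLU: 1 where the unit fired, else 0

# Synthetic stand-ins for the chapter's MNIST arrays (784-pixel images, 10 classes).
images = np.random.rand(1000, 784)
labels = np.eye(10)[np.random.randint(0, 10, 1000)]   # one-hot labels

alpha, hidden_size, batch_size = 0.001, 100, 100

weights_0_1 = 0.2 * np.random.random((784, hidden_size)) - 0.1
weights_1_2 = 0.2 * np.random.random((hidden_size, 10)) - 0.1

for j in range(10):
    correct_cnt = 0
    for i in range(len(images) // batch_size):
        batch_start, batch_end = i * batch_size, (i + 1) * batch_size

        layer_0 = images[batch_start:batch_end]
        layer_1 = relu(layer_0.dot(weights_0_1))
        dropout_mask = np.random.randint(2, size=layer_1.shape)
        layer_1 *= dropout_mask * 2                    # inverted dropout, keep prob 0.5
        layer_2 = layer_1.dot(weights_1_2)

        # Deltas computed once per batch; the mask is applied *before* the update.
        layer_2_delta = (labels[batch_start:batch_end] - layer_2) / batch_size
        layer_1_delta = layer_2_delta.dot(weights_1_2.T) * relu2deriv(layer_1)
        layer_1_delta *= dropout_mask

        # Scaled alpha mirrors the thread's change, compensating for the single update.
        weights_1_2 += (batch_size - 1) * alpha * layer_1.T.dot(layer_2_delta)
        weights_0_1 += (batch_size - 1) * alpha * layer_0.T.dot(layer_1_delta)

        # Only the accuracy bookkeeping stays in the per-example loop.
        for k in range(batch_size):
            correct_cnt += int(np.argmax(layer_2[k:k+1]) == np.argmax(labels[batch_start+k:batch_start+k+1]))
    print(j, correct_cnt / len(images))
```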