Hi Sebastian,

There are comments about the dimensions of the hidden and output layers for the MLP on page 348 and in the repository:

machine-learning-book/ch11/neuralnet.py
Lines 43 to 45 in baf2513

machine-learning-book/ch11/neuralnet.py
Lines 49 to 51 in baf2513

Based on the description in the same section above (page 342), it seems that:

For the hidden layer:
x has dimension [n_examples, n_features] (but [n_hidden, n_features] is specified in the comment)
self.weight_h.T has dimension [n_hidden, n_features].T (but [n_features, n_examples].T is specified)
So z_h = np.dot(x, self.weight_h.T) + self.bias_h will have the specified dimension [n_examples, n_hidden]

For the output layer:
a_h has dimension [n_examples, n_hidden] (but [n_classes, n_hidden] is specified)
self.weight_out.T has dimension [n_classes, n_hidden].T (but [n_hidden, n_examples].T is specified)
So z_out = np.dot(a_h, self.weight_out.T) + self.bias_out will have the specified dimension [n_examples, n_classes]

Is this correct, or am I misinterpreting the comments?

Thank you.

Oh, good call. I think the code comments are still the old ones (argh, that's what I hate about code comments: it is easy to forget to update them). I initially had it as
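For what it's worth, the dimension analysis above can be verified directly with NumPy. The following is a minimal standalone sketch (toy layer sizes chosen for illustration, not the book's actual NeuralNetMLP class) whose comments state the shapes as derived in the issue:

```python
import numpy as np

# Toy sizes, chosen only for this illustration.
n_examples, n_features, n_hidden, n_classes = 5, 4, 3, 2

rng = np.random.default_rng(0)
x = rng.normal(size=(n_examples, n_features))        # [n_examples, n_features]
weight_h = rng.normal(size=(n_hidden, n_features))   # [n_hidden, n_features]
bias_h = np.zeros(n_hidden)
weight_out = rng.normal(size=(n_classes, n_hidden))  # [n_classes, n_hidden]
bias_out = np.zeros(n_classes)

# Hidden layer:
# [n_examples, n_features] @ [n_features, n_hidden] -> [n_examples, n_hidden]
z_h = np.dot(x, weight_h.T) + bias_h
a_h = 1.0 / (1.0 + np.exp(-z_h))  # sigmoid activation

# Output layer:
# [n_examples, n_hidden] @ [n_hidden, n_classes] -> [n_examples, n_classes]
z_out = np.dot(a_h, weight_out.T) + bias_out

print(z_h.shape)    # (5, 3)  i.e. (n_examples, n_hidden)
print(z_out.shape)  # (5, 2)  i.e. (n_examples, n_classes)
```

So the matrix products do produce [n_examples, n_hidden] and [n_examples, n_classes], as the issue concludes; only the shape comments on the inputs were stale.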