Hi, and thanks for these great deep learning examples!
There appears to be a bug in the ch3 example linear_regression_tf.py.
At line 42, y has shape (100,), but I think it should be (100, 1): since y_pred has shape (100, 1), broadcasting turns y - y_pred into a (100, 100) matrix, so the loss is greatly overestimated.
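To make the shape issue concrete, here is a quick NumPy check of that broadcasting (the zero arrays are just stand-ins for the real data):

```python
import numpy as np

y = np.zeros((100,))         # shape the current placeholder gives y
y_pred = np.zeros((100, 1))  # shape of the model prediction
diff = y - y_pred
print(diff.shape)            # (100, 100): broadcasting, not elementwise subtraction
```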
Simply changing line 42 as follows makes training converge to the global minimum (W=5, b=2):
y = tf.placeholder(tf.float32, (N,))  -->  y = tf.placeholder(tf.float32, (N, 1))
Of course, the np.reshape call on line 28 should be removed as well, and a few additional code changes are needed to make the script runnable (see the sketch below).
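For reference, here is a minimal end-to-end sketch of what the corrected shapes could look like. This is not the book's script, just an illustration assuming the usual TF 1.x tf.placeholder / GradientDescentOptimizer pattern; the synthetic data, learning rate, and step count are made up for the example:

```python
import numpy as np
import tensorflow as tf

N = 100
# Synthetic data around the true parameters mentioned above (W = 5, b = 2).
x_np = np.random.rand(N, 1).astype(np.float32)
y_np = 5.0 * x_np + 2.0 + 0.1 * np.random.randn(N, 1).astype(np.float32)

# Keep x and y both (N, 1) so y - y_pred stays (N, 1).
x = tf.placeholder(tf.float32, (N, 1))
y = tf.placeholder(tf.float32, (N, 1))

W = tf.Variable(tf.random_normal((1, 1)))
b = tf.Variable(tf.random_normal((1,)))

y_pred = tf.matmul(x, W) + b             # (N, 1)
loss = tf.reduce_sum((y - y_pred) ** 2)  # no accidental (N, N) blow-up
train_op = tf.train.GradientDescentOptimizer(0.001).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(8000):
        sess.run(train_op, feed_dict={x: x_np, y: y_np})
    print(sess.run([W, b]))  # should come out close to W = 5, b = 2
```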
So I don't think this example is a proper demonstration of gradient descent failing to converge to the global minimum.
That said, I deeply appreciate these great TensorFlow examples; they have helped me a lot in studying deep learning.
Thanks,
Yes, I believe you're correct (#9 points this out as well). This is a bug on our part that slipped through. We'll fix it in a future release for sure! Apologies for the error in the meantime.