
Bug in ch3 example of linear regression #14

Open
chobing97 opened this issue Jun 19, 2018 · 1 comment

Comments

chobing97 commented Jun 19, 2018

Hi.
Thanks for these great deep learning examples!

There appears to be a bug in the ch3 example linear_regression_tf.py.
At line 42, y has shape (100,), but I think it should be (100, 1): given that y_pred is (100, 1), y - y_pred broadcasts to (100, 100), so the loss function is greatly overestimated.
Simply changing line 42 as follows makes the training converge to the global minimum (W=5, b=2):

y = tf.placeholder(tf.float32, (N,)) --> tf.placeholder(tf.float32, (N,1))

Of course, np.reshape should be removed from line 28, and some additional code changes are needed to make the script runnable.
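The shape mismatch described above can be reproduced in plain NumPy, whose broadcasting rules match TensorFlow's; a minimal sketch (the array values are placeholders, only the shapes matter):

```python
import numpy as np

N = 100
y = np.linspace(0.0, 1.0, N)          # shape (N,), as in the buggy line 42
y_pred = np.zeros((N, 1))             # shape (N, 1), as produced by the model

# Broadcasting expands the elementwise difference to an (N, N) matrix,
# so a mean-squared loss would average N*N terms instead of N.
bad_diff = y - y_pred
print(bad_diff.shape)                 # (100, 100)

# With matching shapes the difference stays (N, 1), as intended.
good_diff = y.reshape(N, 1) - y_pred
print(good_diff.shape)                # (100, 1)
```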

So I don't think this example is a proper demonstration of gradient descent failing to converge to the global minimum.

Still, I deeply appreciate these great TensorFlow examples; they have helped me a lot in studying deep learning.
Thanks,

@rbharath
Collaborator

Yes, I believe you're correct (#9 points this out as well). This is a bug on our part that slipped through. We'll fix this in a future release for sure! Apologies for the error in the meantime.
