I just want to clarify my understanding before making any clarifying changes. In the Linear Regression article under 'Bias Term', it reads:
Below we add a constant 1 to our features matrix. By setting this value to 1, it turns our bias term into a constant.
import numpy as np
bias = np.ones(shape=(len(features), 1))
features = np.append(bias, features, axis=1)
So the purpose of adding the 1 along with the other features in each example is so that the 1 will be multiplied by the 'bias weight' when the dot product of the features and weights is performed in the predict() function. Is that accurate?
joelgenter changed the title from "Linear Regression Bias Clarification" to "Linear regression bias clarification" on Jan 25, 2019
So the purpose of adding the 1 along with the other features in each example is so that the 1 will be multiplied by the 'bias weight' when the dot product of the features and weights is performed in the predict() function. Is that accurate?
Yes, exactly. We "augment" the data with a column of constant 1s so that the whole expression can be computed as a single dot product rather than handling the bias manually. Here is an example showing the equivalence:
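A minimal sketch of that equivalence, using made-up numbers (the `features` values, `weights_no_bias`, and `bias_weight` below are hypothetical, not from the article):

```python
import numpy as np

# Hypothetical data: 3 samples, 2 features each.
features = np.array([[2.0, 3.0],
                     [1.0, 5.0],
                     [4.0, 0.5]])
weights_no_bias = np.array([0.5, -1.0])
bias_weight = 2.0

# Option 1: handle the bias manually, outside the dot product.
manual = features @ weights_no_bias + bias_weight

# Option 2: prepend a column of 1s and fold the bias into the weight vector.
bias = np.ones(shape=(len(features), 1))
augmented = np.append(bias, features, axis=1)
weights = np.append(bias_weight, weights_no_bias)  # bias weight comes first
folded = augmented @ weights

print(np.allclose(manual, folded))  # True
```

Because each row of `augmented` starts with a 1, the dot product multiplies that 1 by the bias weight, which is exactly the behavior you described.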