From aa578bfa13463769fc10273a636718031ac271e3 Mon Sep 17 00:00:00 2001
From: Fred Pearce
Date: Sat, 27 Sep 2025 18:03:12 -0700
Subject: [PATCH 1/9] Add EOL escape to fix eq formatting

---
 ...08_y97mg42O7d_homework-q7-what-do-the-weights-represent.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/_questions/machine-learning-zoomcamp/module-1-homework/008_y97mg42O7d_homework-q7-what-do-the-weights-represent.md b/_questions/machine-learning-zoomcamp/module-1-homework/008_y97mg42O7d_homework-q7-what-do-the-weights-represent.md
index a248bec..ba113c2 100644
--- a/_questions/machine-learning-zoomcamp/module-1-homework/008_y97mg42O7d_homework-q7-what-do-the-weights-represent.md
+++ b/_questions/machine-learning-zoomcamp/module-1-homework/008_y97mg42O7d_homework-q7-what-do-the-weights-represent.md
@@ -4,8 +4,8 @@ question: 'Homework Q7: What do the weights represent?'
 sort_order: 8
 ---
 
-The weight vector, `w`, contains the coefficients for a linear model fit between the target variable, `y`, and the input features in `X`, with the model estimate of `y`, `y_est` defined as follows:
+The weight vector, `w`, contains the coefficients for a linear model fit between the target variable, `y`, and the input features in `X`, with the model estimate of `y`, `y_est` defined as follows:\
 $$ y_{est} = w[0]*X[0] + w[1]*X[1] $$
-where the values in brackets refer to each column of the feature matrix, `X`, and the corresponding row of the weight vector, `w`. Each value in `w` describes the slope of the trend line that fits `y` the best for each feature. As we'll learn in Module 2, least squares yields a "best" fit that minimizes the squared difference between `y` and `y_est`. The weights, `w`, can be checked to see if they're reasonable by multiplying `X` by the weight vector, `w`:
+where the values in brackets refer to each column of the feature matrix, `X`, and the corresponding row of the weight vector, `w`. Each value in `w` describes the slope of the trend line that fits `y` the best for each feature. As we'll learn in Module 2, least squares yields a "best" fit that minimizes the squared difference between `y` and `y_est`. The weights, `w`, can be checked to see if they're reasonable by multiplying `X` by the weight vector, `w`:\
 $$ y_{est} = X.dot(w) $$
 This should produce a vector, `y_est` that is similar, plus or minus some error, to the original target variable, `y`.

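The answer text in the patch above checks the weights by multiplying `X` by `w`. A minimal NumPy sketch of that check, using small made-up values for `X`, `y`, and `w` (none of these numbers come from the homework itself), might look like:

```python
import numpy as np

# Hypothetical toy data: two feature columns and a target that roughly follows
# y = 3.0*X[:, 0] + 0.5*X[:, 1] (values made up purely for illustration).
X = np.array([
    [1.0, 10.0],
    [2.0, 14.0],
    [3.0, 22.0],
    [4.0, 25.0],
])
y = np.array([8.1, 12.9, 20.2, 24.3])

# Candidate weight vector: one coefficient per column of X.
w = np.array([3.0, 0.5])

# Element-wise form of the estimate: y_est = w[0]*X[0] + w[1]*X[1]
y_est_elementwise = w[0] * X[:, 0] + w[1] * X[:, 1]

# Matrix form of the same estimate: y_est = X.dot(w)
y_est = X.dot(w)

print(np.allclose(y_est, y_est_elementwise))  # True: both forms agree
print(y_est - y)  # small residuals suggest w is a reasonable fit
```

The element-wise sum `w[0]*X[:, 0] + w[1]*X[:, 1]` and the matrix product `X.dot(w)` compute the same estimate; the matrix form simply handles every row of `X` at once.
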
From 806f5eff7667fa7b695b656d9d61449a01f43a5c Mon Sep 17 00:00:00 2001
From: Fred Pearce
Date: Sat, 27 Sep 2025 18:46:19 -0700
Subject: [PATCH 2/9] More EOL escape to fix formatting

---
 ...08_y97mg42O7d_homework-q7-what-do-the-weights-represent.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/_questions/machine-learning-zoomcamp/module-1-homework/008_y97mg42O7d_homework-q7-what-do-the-weights-represent.md b/_questions/machine-learning-zoomcamp/module-1-homework/008_y97mg42O7d_homework-q7-what-do-the-weights-represent.md
index ba113c2..fe9560d 100644
--- a/_questions/machine-learning-zoomcamp/module-1-homework/008_y97mg42O7d_homework-q7-what-do-the-weights-represent.md
+++ b/_questions/machine-learning-zoomcamp/module-1-homework/008_y97mg42O7d_homework-q7-what-do-the-weights-represent.md
@@ -5,7 +5,7 @@ sort_order: 8
 ---
 
 The weight vector, `w`, contains the coefficients for a linear model fit between the target variable, `y`, and the input features in `X`, with the model estimate of `y`, `y_est` defined as follows:\
-$$ y_{est} = w[0]*X[0] + w[1]*X[1] $$
+$$ y_{est} = w[0]*X[0] + w[1]*X[1] $$\
 where the values in brackets refer to each column of the feature matrix, `X`, and the corresponding row of the weight vector, `w`. Each value in `w` describes the slope of the trend line that fits `y` the best for each feature. As we'll learn in Module 2, least squares yields a "best" fit that minimizes the squared difference between `y` and `y_est`. The weights, `w`, can be checked to see if they're reasonable by multiplying `X` by the weight vector, `w`:\
-$$ y_{est} = X.dot(w) $$
+$$ y_{est} = X.dot(w) $$\
 This should produce a vector, `y_est` that is similar, plus or minus some error, to the original target variable, `y`.

From d98dda6bde095107418ae178233051d8757a9e89 Mon Sep 17 00:00:00 2001
From: Fred Pearce
Date: Sat, 27 Sep 2025 18:48:41 -0700
Subject: [PATCH 3/9] Remove spaces in equations

---
 ...08_y97mg42O7d_homework-q7-what-do-the-weights-represent.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/_questions/machine-learning-zoomcamp/module-1-homework/008_y97mg42O7d_homework-q7-what-do-the-weights-represent.md b/_questions/machine-learning-zoomcamp/module-1-homework/008_y97mg42O7d_homework-q7-what-do-the-weights-represent.md
index fe9560d..1b35588 100644
--- a/_questions/machine-learning-zoomcamp/module-1-homework/008_y97mg42O7d_homework-q7-what-do-the-weights-represent.md
+++ b/_questions/machine-learning-zoomcamp/module-1-homework/008_y97mg42O7d_homework-q7-what-do-the-weights-represent.md
@@ -5,7 +5,7 @@ sort_order: 8
 ---
 
 The weight vector, `w`, contains the coefficients for a linear model fit between the target variable, `y`, and the input features in `X`, with the model estimate of `y`, `y_est` defined as follows:\
-$$ y_{est} = w[0]*X[0] + w[1]*X[1] $$\
+$$y_{est} = w[0]*X[0] + w[1]*X[1]$$\
 where the values in brackets refer to each column of the feature matrix, `X`, and the corresponding row of the weight vector, `w`. Each value in `w` describes the slope of the trend line that fits `y` the best for each feature. As we'll learn in Module 2, least squares yields a "best" fit that minimizes the squared difference between `y` and `y_est`. The weights, `w`, can be checked to see if they're reasonable by multiplying `X` by the weight vector, `w`:\
-$$ y_{est} = X.dot(w) $$\
+$$y_{est} = X.dot(w)$$\
 This should produce a vector, `y_est` that is similar, plus or minus some error, to the original target variable, `y`.

From 9f8f8654be6169f299ac08c724e3fbb1f1b189aa Mon Sep 17 00:00:00 2001
From: Fred Pearce
Date: Sat, 27 Sep 2025 19:55:34 -0700
Subject: [PATCH 4/9] Place equation on separate lines to center, hopefully...

---
 ...2O7d_homework-q7-what-do-the-weights-represent.md | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/_questions/machine-learning-zoomcamp/module-1-homework/008_y97mg42O7d_homework-q7-what-do-the-weights-represent.md b/_questions/machine-learning-zoomcamp/module-1-homework/008_y97mg42O7d_homework-q7-what-do-the-weights-represent.md
index 1b35588..004b012 100644
--- a/_questions/machine-learning-zoomcamp/module-1-homework/008_y97mg42O7d_homework-q7-what-do-the-weights-represent.md
+++ b/_questions/machine-learning-zoomcamp/module-1-homework/008_y97mg42O7d_homework-q7-what-do-the-weights-represent.md
@@ -4,8 +4,12 @@ question: 'Homework Q7: What do the weights represent?'
 sort_order: 8
 ---
 
-The weight vector, `w`, contains the coefficients for a linear model fit between the target variable, `y`, and the input features in `X`, with the model estimate of `y`, `y_est` defined as follows:\
-$$y_{est} = w[0]*X[0] + w[1]*X[1]$$\
-where the values in brackets refer to each column of the feature matrix, `X`, and the corresponding row of the weight vector, `w`. Each value in `w` describes the slope of the trend line that fits `y` the best for each feature. As we'll learn in Module 2, least squares yields a "best" fit that minimizes the squared difference between `y` and `y_est`. The weights, `w`, can be checked to see if they're reasonable by multiplying `X` by the weight vector, `w`:\
-$$y_{est} = X.dot(w)$$\
+The weight vector, `w`, contains the coefficients for a linear model fit between the target variable, `y`, and the input features in `X`, with the model estimate of `y`, `y_est` defined as follows:
+$$
+y_{est} = w[0]*X[0] + w[1]*X[1]
+$$
+where the values in brackets refer to each column of the feature matrix, `X`, and the corresponding row of the weight vector, `w`. Each value in `w` describes the slope of the trend line that fits `y` the best for each feature. As we'll learn in Module 2, least squares yields a "best" fit that minimizes the squared difference between `y` and `y_est`. The weights, `w`, can be checked to see if they're reasonable by multiplying `X` by the weight vector, `w`:
+$$
+y_{est} = X.dot(w)
+$$
 This should produce a vector, `y_est` that is similar, plus or minus some error, to the original target variable, `y`.

From 3c5afc88c0c899c6f377e814cac22b5772131b35 Mon Sep 17 00:00:00 2001
From: Fred Pearce
Date: Sat, 27 Sep 2025 19:57:13 -0700
Subject: [PATCH 5/9] Add EOL escapes back in to see if it centers eqs

---
 ...97mg42O7d_homework-q7-what-do-the-weights-represent.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/_questions/machine-learning-zoomcamp/module-1-homework/008_y97mg42O7d_homework-q7-what-do-the-weights-represent.md b/_questions/machine-learning-zoomcamp/module-1-homework/008_y97mg42O7d_homework-q7-what-do-the-weights-represent.md
index 004b012..ba090ec 100644
--- a/_questions/machine-learning-zoomcamp/module-1-homework/008_y97mg42O7d_homework-q7-what-do-the-weights-represent.md
+++ b/_questions/machine-learning-zoomcamp/module-1-homework/008_y97mg42O7d_homework-q7-what-do-the-weights-represent.md
@@ -4,12 +4,12 @@ question: 'Homework Q7: What do the weights represent?'
 sort_order: 8
 ---
 
-The weight vector, `w`, contains the coefficients for a linear model fit between the target variable, `y`, and the input features in `X`, with the model estimate of `y`, `y_est` defined as follows:
+The weight vector, `w`, contains the coefficients for a linear model fit between the target variable, `y`, and the input features in `X`, with the model estimate of `y`, `y_est` defined as follows:\
 $$
 y_{est} = w[0]*X[0] + w[1]*X[1]
-$$
+$$\
-where the values in brackets refer to each column of the feature matrix, `X`, and the corresponding row of the weight vector, `w`. Each value in `w` describes the slope of the trend line that fits `y` the best for each feature. As we'll learn in Module 2, least squares yields a "best" fit that minimizes the squared difference between `y` and `y_est`. The weights, `w`, can be checked to see if they're reasonable by multiplying `X` by the weight vector, `w`:
+where the values in brackets refer to each column of the feature matrix, `X`, and the corresponding row of the weight vector, `w`. Each value in `w` describes the slope of the trend line that fits `y` the best for each feature. As we'll learn in Module 2, least squares yields a "best" fit that minimizes the squared difference between `y` and `y_est`. The weights, `w`, can be checked to see if they're reasonable by multiplying `X` by the weight vector, `w`:\
 $$
 y_{est} = X.dot(w)
-$$
+$$\
 This should produce a vector, `y_est` that is similar, plus or minus some error, to the original target variable, `y`.

From 6907fd5ac24ef00bc8fba09c56c38f67445b79b7 Mon Sep 17 00:00:00 2001
From: Fred Pearce
Date: Sat, 27 Sep 2025 19:59:11 -0700
Subject: [PATCH 6/9] Put $ back on same line, add blank lines

---
 ...2O7d_homework-q7-what-do-the-weights-represent.md | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/_questions/machine-learning-zoomcamp/module-1-homework/008_y97mg42O7d_homework-q7-what-do-the-weights-represent.md b/_questions/machine-learning-zoomcamp/module-1-homework/008_y97mg42O7d_homework-q7-what-do-the-weights-represent.md
index ba090ec..74db855 100644
--- a/_questions/machine-learning-zoomcamp/module-1-homework/008_y97mg42O7d_homework-q7-what-do-the-weights-represent.md
+++ b/_questions/machine-learning-zoomcamp/module-1-homework/008_y97mg42O7d_homework-q7-what-do-the-weights-represent.md
@@ -5,11 +5,11 @@ sort_order: 8
 ---
 
 The weight vector, `w`, contains the coefficients for a linear model fit between the target variable, `y`, and the input features in `X`, with the model estimate of `y`, `y_est` defined as follows:\
-$$
-y_{est} = w[0]*X[0] + w[1]*X[1]
-$$\
+
+$$y_{est} = w[0]*X[0] + w[1]*X[1]$$\
+
 where the values in brackets refer to each column of the feature matrix, `X`, and the corresponding row of the weight vector, `w`. Each value in `w` describes the slope of the trend line that fits `y` the best for each feature. As we'll learn in Module 2, least squares yields a "best" fit that minimizes the squared difference between `y` and `y_est`. The weights, `w`, can be checked to see if they're reasonable by multiplying `X` by the weight vector, `w`:\
-$$
-y_{est} = X.dot(w)
-$$\
+
+$$y_{est} = X.dot(w)$$\
+
 This should produce a vector, `y_est` that is similar, plus or minus some error, to the original target variable, `y`.

From 8fc55362d6b23235c2cce8d612f9bf236f394ef7 Mon Sep 17 00:00:00 2001
From: Fred Pearce
Date: Sat, 27 Sep 2025 20:01:25 -0700
Subject: [PATCH 7/9] Revert to previous state that worked ok

---
 ...08_y97mg42O7d_homework-q7-what-do-the-weights-represent.md | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/_questions/machine-learning-zoomcamp/module-1-homework/008_y97mg42O7d_homework-q7-what-do-the-weights-represent.md b/_questions/machine-learning-zoomcamp/module-1-homework/008_y97mg42O7d_homework-q7-what-do-the-weights-represent.md
index 74db855..1b35588 100644
--- a/_questions/machine-learning-zoomcamp/module-1-homework/008_y97mg42O7d_homework-q7-what-do-the-weights-represent.md
+++ b/_questions/machine-learning-zoomcamp/module-1-homework/008_y97mg42O7d_homework-q7-what-do-the-weights-represent.md
@@ -5,11 +5,7 @@ sort_order: 8
 ---
 
 The weight vector, `w`, contains the coefficients for a linear model fit between the target variable, `y`, and the input features in `X`, with the model estimate of `y`, `y_est` defined as follows:\
-
 $$y_{est} = w[0]*X[0] + w[1]*X[1]$$\
-
 where the values in brackets refer to each column of the feature matrix, `X`, and the corresponding row of the weight vector, `w`. Each value in `w` describes the slope of the trend line that fits `y` the best for each feature. As we'll learn in Module 2, least squares yields a "best" fit that minimizes the squared difference between `y` and `y_est`. The weights, `w`, can be checked to see if they're reasonable by multiplying `X` by the weight vector, `w`:\
-
 $$y_{est} = X.dot(w)$$\
-
 This should produce a vector, `y_est` that is similar, plus or minus some error, to the original target variable, `y`.

From e5dbe922fca190c09b9d73193dcfb5e149713fae Mon Sep 17 00:00:00 2001
From: Fred Pearce
Date: Sat, 27 Sep 2025 20:02:56 -0700
Subject: [PATCH 8/9] Add new lines back in, remove slashes, fingers crossed...

---
 ...2O7d_homework-q7-what-do-the-weights-represent.md | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/_questions/machine-learning-zoomcamp/module-1-homework/008_y97mg42O7d_homework-q7-what-do-the-weights-represent.md b/_questions/machine-learning-zoomcamp/module-1-homework/008_y97mg42O7d_homework-q7-what-do-the-weights-represent.md
index 1b35588..a274cdf 100644
--- a/_questions/machine-learning-zoomcamp/module-1-homework/008_y97mg42O7d_homework-q7-what-do-the-weights-represent.md
+++ b/_questions/machine-learning-zoomcamp/module-1-homework/008_y97mg42O7d_homework-q7-what-do-the-weights-represent.md
@@ -4,8 +4,12 @@ question: 'Homework Q7: What do the weights represent?'
 sort_order: 8
 ---
 
-The weight vector, `w`, contains the coefficients for a linear model fit between the target variable, `y`, and the input features in `X`, with the model estimate of `y`, `y_est` defined as follows:\
-$$y_{est} = w[0]*X[0] + w[1]*X[1]$$\
-where the values in brackets refer to each column of the feature matrix, `X`, and the corresponding row of the weight vector, `w`. Each value in `w` describes the slope of the trend line that fits `y` the best for each feature. As we'll learn in Module 2, least squares yields a "best" fit that minimizes the squared difference between `y` and `y_est`. The weights, `w`, can be checked to see if they're reasonable by multiplying `X` by the weight vector, `w`:\
-$$y_{est} = X.dot(w)$$\
+The weight vector, `w`, contains the coefficients for a linear model fit between the target variable, `y`, and the input features in `X`, with the model estimate of `y`, `y_est` defined as follows:
+
+$$y_{est} = w[0]*X[0] + w[1]*X[1]$$
+
+where the values in brackets refer to each column of the feature matrix, `X`, and the corresponding row of the weight vector, `w`. Each value in `w` describes the slope of the trend line that fits `y` the best for each feature. As we'll learn in Module 2, least squares yields a "best" fit that minimizes the squared difference between `y` and `y_est`. The weights, `w`, can be checked to see if they're reasonable by multiplying `X` by the weight vector, `w`:
+
+$$y_{est} = X.dot(w)$$
+
 This should produce a vector, `y_est` that is similar, plus or minus some error, to the original target variable, `y`.

From 3d8e251e6f3429b7a3aa253c682e54c9f715125d Mon Sep 17 00:00:00 2001
From: Fred Pearce
Date: Sat, 27 Sep 2025 20:04:28 -0700
Subject: [PATCH 9/9] Equation formatting looks good, add missing comma

---
 .../008_y97mg42O7d_homework-q7-what-do-the-weights-represent.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/_questions/machine-learning-zoomcamp/module-1-homework/008_y97mg42O7d_homework-q7-what-do-the-weights-represent.md b/_questions/machine-learning-zoomcamp/module-1-homework/008_y97mg42O7d_homework-q7-what-do-the-weights-represent.md
index a274cdf..26739cd 100644
--- a/_questions/machine-learning-zoomcamp/module-1-homework/008_y97mg42O7d_homework-q7-what-do-the-weights-represent.md
+++ b/_questions/machine-learning-zoomcamp/module-1-homework/008_y97mg42O7d_homework-q7-what-do-the-weights-represent.md
@@ -4,7 +4,7 @@ question: 'Homework Q7: What do the weights represent?'
 sort_order: 8
 ---
 
-The weight vector, `w`, contains the coefficients for a linear model fit between the target variable, `y`, and the input features in `X`, with the model estimate of `y`, `y_est` defined as follows:
+The weight vector, `w`, contains the coefficients for a linear model fit between the target variable, `y`, and the input features in `X`, with the model estimate of `y`, `y_est`, defined as follows:
 
 $$y_{est} = w[0]*X[0] + w[1]*X[1]$$
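
The answer defers the details of least squares to Module 2. As a rough preview, and assuming hypothetical data (the feature matrix, noise level, and "true" weights below are made up, not taken from the homework), the fit and the same sanity check could be sketched with `numpy.linalg.lstsq` like this:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical data: 50 rows, two feature columns, and a target generated from
# made-up "true" weights plus a little noise (nothing here comes from the homework).
X = rng.uniform(0.0, 10.0, size=(50, 2))
true_w = np.array([3.0, 0.5])
y = X.dot(true_w) + rng.normal(0.0, 0.1, size=50)

# Least-squares fit: choose w to minimize the squared difference between y and X.dot(w).
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# The same check as in the answer: the model estimate should track y closely.
y_est = X.dot(w)
print(w)                          # close to the made-up true weights [3.0, 0.5]
print(np.max(np.abs(y_est - y)))  # roughly the size of the added noise
```

Because `lstsq` minimizes the squared difference between `y` and `X.dot(w)`, the recovered `w` should land near the weights used to generate `y`, and the residuals should stay on the order of the added noise.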