
Commit

Added Week 4 L1
Harsh-0986 committed Mar 5, 2024
1 parent 3f1a47a commit 07d3b5e
Showing 3 changed files with 15 additions and 2 deletions.
12 changes: 11 additions & 1 deletion docs/Linear Algebra/05 Linear regression.md
@@ -33,4 +33,14 @@ $\to (A^TA)\theta=A^TY$
If $A$ is full rank, then $\theta=(A^TA)^{-1}A^TY$.

!!! tip
    This $\theta$ also arises as the maximum likelihood estimate, assuming Gaussian noise.
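
A minimal NumPy sketch of solving the normal equations above (the toy data and variable names are illustrative, not from the notes):

```python
import numpy as np

# Least squares via the normal equations (A^T A) theta = A^T Y.
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 3))             # 20 samples, 3 features
theta_true = np.array([2.0, -1.0, 0.5])
Y = A @ theta_true + 0.1 * rng.normal(size=20)

# Assumes A has full column rank, so A^T A is invertible.
theta = np.linalg.solve(A.T @ A, A.T @ Y)
```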

## Ridge Regression (Regularised Version of Linear Regression)
Instead of minimising the **loss function** $L(\theta)=\sum_{i=1}^{n}(x^T_i\theta-y_i)^2,$ we minimise the following regularised version:

$\bar{L}(\theta)=\sum_{i=1}^{n}(x^T_i\theta-y_i)^2+\lambda\|\theta\|^2,$ where $\lambda\|\theta\|^2$ is the **regularisation term**.

Now, setting the gradient to zero as before, $\nabla\bar{L}(\theta)=2A^T(A\theta-Y)+2\lambda\theta=0,$ we get $(A^TA+\lambda I)\theta_{reg}=A^TY\to\theta_{reg}=(A^TA+\lambda I)^{-1}A^TY.$

!!! note
    $(A^TA+\lambda I)$ is invertible even if $A$ is not full rank: $A^TA$ is positive semi-definite, so $A^TA+\lambda I$ is positive definite for any $\lambda>0$.
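
A hedged NumPy sketch of the closed form above (the helper name `ridge_theta` and the toy data are ours, not from the notes); note that it succeeds even on a rank-deficient $A$:

```python
import numpy as np

def ridge_theta(A, Y, lam):
    """Closed-form ridge estimate: (A^T A + lam*I)^{-1} A^T Y."""
    d = A.shape[1]
    # For lam > 0, A^T A + lam*I is positive definite, hence invertible,
    # even when A is rank-deficient.
    return np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ Y)

# Rank-deficient A: the second column duplicates the first.
A = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
Y = np.array([1.1, 2.0, 3.2])
theta_reg = ridge_theta(A, Y, lam=0.1)   # plain least squares would fail here
```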
4 changes: 3 additions & 1 deletion docs/Linear Algebra/06 Polynomial Regression.md
@@ -14,8 +14,10 @@ Therefore, for a given $x,$ $\phi(x)=(1,x,x^2,\dots,x^n)$

Now, $\hat{y}(x) = \theta^T\phi(x)$

## Applying Linear Regression
Now we apply linear regression on $\phi(x)$.

So here $A=\begin{bmatrix}\phi(x_1)^T\\\phi(x_2)^T\\\vdots\\\phi(x_n)^T\end{bmatrix}.$

Now $(A^TA)\theta=A^TY.$
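
A minimal NumPy sketch of this pipeline, using `np.vander` to build the rows $\phi(x_i)^T$ (the degree and data are illustrative, not from the notes):

```python
import numpy as np

# Polynomial features phi(x) = (1, x, x^2, ..., x^n) for each sample,
# then ordinary linear regression on the resulting matrix A.
n = 3                                    # polynomial degree (illustrative)
x = np.linspace(-1.0, 1.0, 30)
y = 1.0 + 2.0 * x - 0.5 * x**3 + 0.05 * np.random.default_rng(0).normal(size=30)

A = np.vander(x, N=n + 1, increasing=True)   # row i is phi(x_i)^T
theta = np.linalg.solve(A.T @ A, A.T @ y)    # (A^T A) theta = A^T y
y_hat = A @ theta                            # predictions theta^T phi(x)
```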

1 change: 1 addition & 0 deletions docs/Linear Algebra/07 EigenValues and EigenVectors.md
@@ -0,0 +1 @@
# Eigenvalues and Eigenvectors
