Commit

fixing ReadMe img links
am-3 committed Dec 8, 2022
1 parent c6a1b90 commit 310581a
Showing 1 changed file with 5 additions and 5 deletions.
10 changes: 5 additions & 5 deletions README.md
@@ -18,7 +18,7 @@ Demonstrating using House Price Prediction example.

### UML

-![uml](\imgs\uml.jpeg)
+![uml](/imgs/uml.jpeg)



@@ -48,7 +48,7 @@ CSV file with n columns; of which n-1 are features and the nth column is target.

To train the model, the Stochastic Gradient Descent (SGD) algorithm is applied. In this algorithm, one data point is taken at a time to train the model. The features are multiplied by the appropriate weights and summed together with a bias term.

-![gdgif](\imgs\gradient_descent_gif.gif)
+![gdgif](/imgs/gradient_descent_gif.gif)

This gives the predicted value. The loss between the true and predicted values is then calculated. To reduce the loss, the gradient of the loss is first obtained *(which is the derivative of the loss function with respect to the weights and bias)*; the gradients are then multiplied by the learning rate and subtracted from the corresponding weights and bias.

@@ -62,7 +62,7 @@ A Linear Regression model is created that predicts the house price with user giv

#### Terminologies and techniques used:

-![lr](\imgs\linear_regression_graph2.png)
+![lr](/imgs/linear_regression_graph2.png)

**Linear Regression:** Linear Regression is a linear approach for modelling the relationship between a scalar response and one or more explanatory variables (also known as dependent and independent variables). Essentially, all the data points are plotted on a graph, and initially a random line (y = mx + c) is drawn, which may or may not pass through any of the points. The goal is to make the line pass as close as possible to the majority of the data points. For that, a loss is calculated: the sum over all data points of the difference between the actual value and the predicted value (y), typically squared so that positive and negative errors do not cancel. This loss is reduced by changing the parameters of the line (m and c). Finally, values of m and c are reached such that the loss does not decrease any further; these values give the best possible line.
Here, m is called the weight, since every feature contributes some weight to the predicted value, and c is called the bias.
@@ -93,7 +93,7 @@ You can check if it is ready to go by running ,

and seeing a similar output on the console.

-![g++](\imgs\usuage_g++.png)
+![g++](/imgs/usuage_g++.png)



@@ -147,7 +147,7 @@ On Windows,

### Sample Output

-![output](\imgs\output.png)
+![output](/imgs/output.png)



