This repository contains a Python implementation of linear regression using the gradient descent algorithm. The goal of this project is to find the best-fitting line for a given dataset by determining the optimal values for the intercept and coefficients.
- Linear regression using the gradient descent optimization algorithm.
- Calculates the mean squared error (MSE) between predicted and actual target values (see the example below the feature list).
- Finds the best-fitting line for a given dataset.
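As a quick illustration of the MSE feature, the error could be computed along the following lines. This is a minimal sketch; the function and variable names are illustrative and not necessarily those used in the repository.

```python
import numpy as np

def mean_squared_error(y_true, y_pred):
    """Average of squared differences between actual and predicted targets."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean((y_true - y_pred) ** 2)

# Example: MSE between actual targets and a rough set of predictions
print(mean_squared_error([3.0, 5.0, 7.0], [2.5, 5.5, 6.0]))  # 0.5
```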
To use this code, ensure the required dependencies, NumPy and Matplotlib, are installed. The implementation is written in Python and can be executed directly as a script.
The algorithm performs the following steps (a sketch of a possible implementation follows the list):
- Define the training dataset with input features and corresponding target values.
- Initialize the intercept and coefficient values.
- Implement functions to predict target values, compute the cost function, and compute the gradient.
- Perform gradient descent optimization to find the optimal values for the intercept and coefficients.
- Print the optimal values and plot the cost history over iterations.
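A minimal sketch of these steps might look like the code below. The toy dataset, learning rate, iteration count, and function names are placeholders chosen for illustration, not necessarily those used in this repository.

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy training data: a single input feature and corresponding targets (illustrative values only)
X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])
y = np.array([3.1, 4.9, 7.2, 9.1, 10.8])

def predict(X, intercept, coef):
    """Linear model: y_hat = intercept + X @ coef."""
    return intercept + X @ coef

def cost(X, y, intercept, coef):
    """Mean squared error between predictions and targets."""
    residuals = predict(X, intercept, coef) - y
    return np.mean(residuals ** 2)

def gradients(X, y, intercept, coef):
    """Partial derivatives of the MSE cost with respect to the intercept and coefficients."""
    residuals = predict(X, intercept, coef) - y
    grad_intercept = 2.0 * np.mean(residuals)
    grad_coef = 2.0 * X.T @ residuals / len(y)
    return grad_intercept, grad_coef

# Initialize parameters and run gradient descent
intercept, coef = 0.0, np.zeros(X.shape[1])
learning_rate, n_iterations = 0.01, 1000
cost_history = []

for _ in range(n_iterations):
    g_intercept, g_coef = gradients(X, y, intercept, coef)
    intercept -= learning_rate * g_intercept
    coef -= learning_rate * g_coef
    cost_history.append(cost(X, y, intercept, coef))

print("intercept:", intercept, "coefficients:", coef)

# Plot how the cost decreases over iterations
plt.plot(cost_history)
plt.xlabel("Iteration")
plt.ylabel("MSE cost")
plt.title("Cost history during gradient descent")
plt.show()
```

With a sufficiently small learning rate, the plotted cost should decrease steadily and flatten out as the parameters approach the best-fitting line; if the cost grows instead, the learning rate is too large.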
This project is intended as a simple, educational implementation of linear regression with gradient descent, demonstrating how the best-fitting line for a dataset can be found iteratively. The code can be further optimized and extended to handle more complex datasets and problems.
Feel free to explore and use this Python implementation of linear regression with gradient descent for your projects. It provides a starting point to understand optimization algorithms and how they can be applied to regression problems. Happy learning!