Polynomial regression using the normal equation and gradient descent methods.
This code is licensed under the MIT License - see the LICENSE.md file for details.
The goal of polynomial regression is to fit an n-th degree polynomial to data in order to establish a general relationship between the independent variable x and the dependent variable y. Polynomial regression is a special form of multiple linear regression, in which the objective is to minimize the cost function given by:

$$J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2$$

and the hypothesis is given by the linear model:

$$h_\theta(x) = \theta_0 + \theta_1 x + \theta_2 x^2 + \dots + \theta_n x^n$$
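As a concrete illustration of the cost function (a standalone NumPy sketch, not part of this library), J(θ) can be evaluated for a candidate parameter vector against a Vandermonde design matrix:

```python
import numpy as np

def cost(theta, X, y):
    """J(theta) = 1/(2m) * sum((X @ theta - y)^2) for design matrix X."""
    residuals = X @ theta - y
    return (residuals @ residuals) / (2 * len(y))

# Quadratic hypothesis h_theta(x) = theta0 + theta1*x + theta2*x^2
x = np.array([0.0, 1.0, 2.0])
X = np.vander(x, N=3, increasing=True)   # rows: [1, x, x^2]
y = np.array([5.0, 7.0, 11.0])           # exactly 5 + x + x^2, no noise

print(cost(np.array([5.0, 1.0, 1.0]), X, y))  # 0.0 at the true parameters
print(cost(np.zeros(3), X, y))                # positive away from them
```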
The PolynomialRegression class can perform polynomial regression using two different methods: the normal equation and gradient descent. The normal equation method uses the closed-form solution to linear regression:

$$\theta = (X^T X)^{-1} X^T y$$

where X is the design (Vandermonde) matrix built from the input points.
```python
x_pts, y_pts = generatePolyPoints(0, 50, 100, [5, 1, 1],
                                  noiseLevel=2, plot=1)
PR = PolynomialRegression(x_pts, y_pts)
theta = PR.fit(method='normal_equation', order=2)
PR.plot_predictedPolyLine()
```
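For reference, the normal-equation solve itself can be sketched in plain NumPy (this is an independent illustration, not the class's internals; the Vandermonde construction and the use of `np.linalg.solve` are assumptions):

```python
import numpy as np

# Hypothetical standalone sketch: fit theta for y ≈ 5 + x + x^2 with noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 50, 100)
y = 5 + x + x**2 + rng.normal(scale=2, size=x.size)

order = 2
X = np.vander(x, N=order + 1, increasing=True)  # columns: 1, x, x^2

# theta = (X^T X)^{-1} X^T y, solved without forming the inverse explicitly
theta = np.linalg.solve(X.T @ X, X.T @ y)
print(theta)  # roughly recovers the true coefficients [5, 1, 1]
```

Solving the linear system directly is preferred over computing the matrix inverse, which is slower and less numerically stable.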
The gradient descent method instead minimizes the cost iteratively; on each step every parameter is updated simultaneously by

$$\theta_j := \theta_j - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)}$$

where $\alpha$ is the learning rate. Iteration stops after numIters steps or once the cost converges to within tol:

```python
x_pts, y_pts = generatePolyPoints(0, 50, 100, [5, 1, 1],
                                  noiseLevel=2, plot=1)
PR = PolynomialRegression(x_pts, y_pts)
theta = PR.fit(method='gradient_descent', order=2, tol=10**-3,
               numIters=100, learningRate=10**-4)
PR.plot_predictedPolyLine()
PR.plotCost()
```
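The gradient descent loop can be sketched as follows (again a hypothetical NumPy illustration, not the class's internals; note that the non-constant features are z-scored here, an assumption made so the demo converges with a large step size on inputs spanning 0 to 2500):

```python
import numpy as np

# Hypothetical sketch of batch gradient descent for the same quadratic fit.
rng = np.random.default_rng(0)
x = np.linspace(0, 50, 100)
y = 5 + x + x**2 + rng.normal(scale=2, size=x.size)

X = np.vander(x, N=3, increasing=True)      # columns: 1, x, x^2
mu = X[:, 1:].mean(axis=0)
sigma = X[:, 1:].std(axis=0)
Xs = X.copy()
Xs[:, 1:] = (X[:, 1:] - mu) / sigma         # z-score the non-constant columns

m = len(y)
theta = np.zeros(3)
alpha = 0.1                                 # learning rate
cost_history = []
for _ in range(2000):
    err = Xs @ theta - y                    # h_theta(x) - y for every point
    cost_history.append((err @ err) / (2 * m))  # J(theta) before the update
    theta -= alpha * (Xs.T @ err) / m       # simultaneous update of all theta_j

# For a suitable learning rate the recorded cost decreases toward the
# noise floor, which is what PR.plotCost() visualizes.
```

Without the feature scaling, a learning rate small enough to keep the x² column stable makes progress on the other parameters extremely slow, which is why scaled features are standard practice for gradient descent.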