Gradient Descent in Linear Regression
In linear regression, the model aims to find the best-fit regression line to predict the value of y for a given input value x. While training, the model computes a cost function that measures the Mean Squared Error between the predicted values (pred) and the true values (y). The model aims to minimize this cost function.
To minimize the cost function, the model needs to find the best values of θ1 and θ2. Initially, the model selects θ1 and θ2 at random and then iteratively updates these values to reduce the cost function until it reaches the minimum. Once the model achieves the minimum cost, it has the best θ1 and θ2 values. Using these final values of θ1 and θ2 in the hypothesis equation of linear regression, the model predicts the value of y as accurately as it can.
Therefore, the question arises – How do θ1 and θ2 values get updated?
Linear Regression Cost Function:
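The original cost-function figure is not reproduced here; as a reference, a standard form consistent with the compute_cost method in the code below is the half mean squared error

J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \bigl(h_\theta(x_i) - y_i\bigr)^2

where m is the number of training examples.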
Gradient Descent Algorithm For Linear Regression
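The algorithm figure is likewise not reproduced; the standard simultaneous update rule it refers to is

\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta) = \theta_j - \alpha \cdot \frac{1}{m} \sum_{i=1}^{m} \bigl(h_\theta(x_i) - y_i\bigr) \, x_{i,j}

repeated until convergence, with all θj updated simultaneously (here x_{i,j} denotes the jth feature of the ith example, and x_{i,0} = 1 for the intercept).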
- θj : weights of the hypothesis.
- hθ(xi) : predicted y value for the ith input.
- j : feature index number (can be 0, 1, 2, ..., n).
- α : learning rate of Gradient Descent.
We can graph the cost function as a function of the parameter estimates, i.e. over the parameter range of our hypothesis function, showing the cost that results from selecting each particular set of parameters. We move downward toward the pits (minima) of this graph to find the minimum value. The way to do this is to take the derivative (the slope of the tangent) of the cost function, as in the update rule above. Gradient Descent steps down the cost function in the direction of steepest descent, and the size of each step is determined by the parameter α, known as the learning rate.
In the Gradient Descent algorithm, one can infer two points (a short numerical sketch follows this list):
- If slope is +ve : θj = θj – (+ve value). Hence value of θj decreases.
- If slope is -ve : θj = θj – (-ve value). Hence value of θj increases.
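As a minimal sketch of these two cases (the toy data, single-weight hypothesis, and variable names here are illustrative assumptions, not part of the article's code):

Python3

import numpy as np

# Toy data: y = 2x; single-weight hypothesis h(x) = theta * x (no intercept)
X = np.array([1.0, 2.0, 3.0])
Y = 2.0 * X
alpha = 0.1  # learning rate

def gradient(theta):
    # Slope of the half-MSE cost: (1/m) * sum((h(x) - y) * x)
    return np.mean((theta * X - Y) * X)

# If theta starts too large, the slope is positive and the update decreases theta
theta = 3.0
print(gradient(theta) > 0)               # True
print(theta - alpha * gradient(theta))   # about 2.53, moving down toward 2

# If theta starts too small, the slope is negative and the update increases theta
theta = 1.0
print(gradient(theta) < 0)               # True
print(theta - alpha * gradient(theta))   # about 1.47, moving up toward 2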
The choice of a correct learning rate is very important, as it determines whether Gradient Descent converges in a reasonable time (a small comparison sketch follows this list):
- If we choose α to be very large, Gradient Descent can overshoot the minimum. It may fail to converge or even diverge.
- If we choose α to be very small, Gradient Descent takes tiny steps and therefore needs a long time to reach the minimum.
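A minimal sketch of both effects, assuming an arbitrary toy dataset and step count (not from the article):

Python3

import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0])
Y = 2.0 * X  # true slope is 2; hypothesis h(x) = theta * x

def run_gd(alpha, steps=50):
    # Run plain gradient descent on the half-MSE cost and report (theta, cost)
    theta = 0.0
    m = len(X)
    for _ in range(steps):
        grad = np.mean((theta * X - Y) * X)
        theta -= alpha * grad
    cost = (1 / (2 * m)) * np.sum((theta * X - Y) ** 2)
    return theta, cost

print(run_gd(alpha=0.01))  # very small alpha: still short of theta = 2 after 50 steps
print(run_gd(alpha=0.05))  # moderate alpha: essentially converged to theta = 2
print(run_gd(alpha=0.30))  # too-large alpha: overshoots and diverges, cost explodes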
For linear regression, the cost function graph is always convex (bowl-shaped), so there is a single global minimum.
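One standard way to see this (a sketch of the usual linear-algebra argument, not spelled out in the article): writing the cost in matrix form as J(\theta) = \frac{1}{2m}\lVert X\theta - y\rVert^2, its Hessian is

\nabla^2_\theta J(\theta) = \frac{1}{m} X^\top X,

which is positive semi-definite, so J is convex and any minimum reached by gradient descent is a global minimum.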
Python3
# Implementation of gradient descent in linear regression
import numpy as np
import matplotlib.pyplot as plt


class Linear_Regression:
    def __init__(self, X, Y):
        self.X = X
        self.Y = Y
        # b[0] is the intercept, b[1] is the slope; both start at 0
        self.b = [0, 0]

    def update_coeffs(self, learning_rate):
        # One gradient descent step on both coefficients
        Y_pred = self.predict()
        Y = self.Y
        m = len(Y)
        self.b[0] = self.b[0] - (learning_rate *
                                 ((1 / m) * np.sum(Y_pred - Y)))
        self.b[1] = self.b[1] - (learning_rate *
                                 ((1 / m) * np.sum((Y_pred - Y) * self.X)))

    def predict(self, X=[]):
        # Predict y for the given inputs (defaults to the training inputs)
        Y_pred = np.array([])
        if not X:
            X = self.X
        b = self.b
        for x in X:
            Y_pred = np.append(Y_pred, b[0] + (b[1] * x))
        return Y_pred

    def get_current_accuracy(self, Y_pred):
        # 1 minus the mean relative error (skipping points where y == 0)
        p, e = Y_pred, self.Y
        n = len(Y_pred)
        return 1 - sum(
            [abs(p[i] - e[i]) / e[i]
             for i in range(n) if e[i] != 0]
        ) / n

    def compute_cost(self, Y_pred):
        # Half mean squared error cost J
        m = len(self.Y)
        J = (1 / (2 * m)) * np.sum((Y_pred - self.Y) ** 2)
        return J

    def plot_best_fit(self, Y_pred, fig):
        f = plt.figure(fig)
        plt.scatter(self.X, self.Y, color='b')
        plt.plot(self.X, Y_pred, color='g')
        f.show()


def main():
    X = np.array([i for i in range(11)])
    Y = np.array([2 * i for i in range(11)])
    regressor = Linear_Regression(X, Y)

    iterations = 0
    steps = 100
    learning_rate = 0.01
    costs = []

    # original best-fit line
    Y_pred = regressor.predict()
    regressor.plot_best_fit(Y_pred, 'Initial Best Fit Line')

    while 1:
        Y_pred = regressor.predict()
        cost = regressor.compute_cost(Y_pred)
        costs.append(cost)
        regressor.update_coeffs(learning_rate)

        iterations += 1
        if iterations % steps == 0:
            print(iterations, "epochs elapsed")
            print("Current accuracy is :",
                  regressor.get_current_accuracy(Y_pred))

            stop = input("Do you want to stop (y/*)??")
            if stop == "y":
                break

    # final best-fit line
    regressor.plot_best_fit(Y_pred, 'Final Best Fit Line')

    # plot to verify that the cost decreases over iterations
    h = plt.figure('Verification')
    plt.plot(range(iterations), costs, color='b')
    h.show()

    # if the user wants to predict using the regressor:
    regressor.predict([i for i in range(10)])


if __name__ == '__main__':
    main()
Output:
Note: Gradient descent is sometimes also combined with regularization (a hedged sketch follows).
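As one common example (a sketch of L2/ridge regularization; the function name, penalty strength lam, and toy data are illustrative assumptions, not from the article):

Python3

import numpy as np

def ridge_gd_step(theta, X, Y, alpha=0.01, lam=0.1):
    # One gradient descent step on the L2-regularized (ridge) half-MSE cost.
    # theta[0] is the intercept and, as is conventional, is not regularized.
    m = len(Y)
    error = (theta[0] + theta[1] * X) - Y
    grad0 = np.mean(error)
    grad1 = np.mean(error * X) + (lam / m) * theta[1]  # extra shrinkage term
    return np.array([theta[0] - alpha * grad0,
                     theta[1] - alpha * grad1])

# Example: one regularized step on toy data following y = 1 + 2x
X = np.array([0.0, 1.0, 2.0, 3.0])
Y = np.array([1.0, 3.0, 5.0, 7.0])
print(ridge_gd_step(np.array([0.0, 0.0]), X, Y))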
Gradient Descent is a popular optimization algorithm for linear regression models that involves iteratively adjusting the model parameters to minimize the cost function. Here are some advantages and disadvantages of using Gradient Descent for linear regression:
Advantages:
- Flexibility: Gradient Descent can be used with various cost functions and can handle non-linear regression problems.
- Scalability: Gradient Descent scales to large datasets, especially in its stochastic (per-example) variant, which updates the parameters one training example at a time.
- Convergence: Gradient Descent can converge to the global minimum of the cost function, provided that the learning rate is set appropriately.
Disadvantages:
- Sensitivity to Learning Rate: The choice of learning rate can be critical in Gradient Descent since using a high learning rate can cause the algorithm to overshoot the minimum, while a low learning rate can make the algorithm converge slowly.
- Slow Convergence: Gradient Descent may require many iterations to converge to the minimum, particularly when the learning rate is small or when parameters are updated one training example at a time.
- Local Minima: Gradient Descent can get stuck in local minima if the cost function has multiple local minima (not an issue for the convex linear regression cost, but relevant when the same algorithm is applied to other models).
- Noisy updates: In the stochastic (per-example) variant, the updates are noisy and have high variance, which can make the optimization process less stable and lead to oscillations around the minimum (a hedged sketch of this variant follows the list).
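A minimal sketch of that per-example (stochastic) variant, assuming toy data and arbitrary learning rate and epoch count (not part of the article's code):

Python3

import numpy as np

rng = np.random.default_rng(0)
X = np.arange(20, dtype=float)
Y = 2.0 * X + rng.normal(scale=0.5, size=X.shape)  # noisy data around y = 2x

theta0, theta1 = 0.0, 0.0  # intercept and slope
alpha = 0.001

for epoch in range(50):
    for i in rng.permutation(len(X)):      # visit examples in random order
        error = (theta0 + theta1 * X[i]) - Y[i]
        theta0 -= alpha * error            # per-example (hence noisy) updates
        theta1 -= alpha * error * X[i]

print(theta0, theta1)  # theta1 ends up close to 2, with some jitter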
Overall, Gradient Descent is a useful optimization algorithm for linear regression, but it has some limitations and requires careful tuning of the learning rate to ensure convergence.