Optimization techniques for Gradient Descent


Gradient Descent is an iterative optimization algorithm used to find the minimum value of a function. The general idea is to initialize the parameters to random values and then take small steps in the direction opposite to the “slope” (gradient) at each iteration. Gradient descent is widely used in supervised learning to minimize the error function and find the optimal values for the parameters. Various extensions have been designed for the gradient descent algorithm. Some of them are discussed below:
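For concreteness, here is a minimal NumPy sketch of plain gradient descent on the toy objective f(w) = (w - 3)², whose gradient is 2(w - 3). The objective and the function name are illustrative choices, not part of a fixed convention.

import numpy as np

def gradient_descent(alpha=0.1, iterations=100):
    w = np.random.randn()        # initialize the parameter to a random value
    for _ in range(iterations):
        dW = 2 * (w - 3)         # gradient of f(w) = (w - 3)^2
        w = w - alpha * dW       # step opposite to the slope
    return w

print(gradient_descent())        # ends up very close to the minimum at w = 3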

Momentum method: This method accelerates the gradient descent algorithm by taking an exponentially weighted average of the gradients into account. Using this average makes the algorithm converge towards the minima faster, since gradient components along oscillating directions cancel out while components along the consistent direction accumulate. The pseudocode for the momentum method is given below.

V = 0
for each iteration i:
    compute dW
    V = β V + (1 - β) dW
    W = W - α V

V and dW are analogous to velocity and acceleration, respectively. α is the learning rate, and β is the momentum term, normally kept at 0.9. The physical interpretation is that a ball rolling downhill builds up velocity according to the slope (gradient) of the hill, which helps the ball reach a minimum faster (in our case, a minimum of the loss).
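The following is a minimal runnable sketch of the momentum update above, applied to the same illustrative toy objective f(w) = (w - 3)² with β = 0.9.

import numpy as np

def momentum_gradient_descent(alpha=0.1, beta=0.9, iterations=200):
    w = np.random.randn()                 # random initialization
    V = 0.0                               # exponentially weighted average of gradients
    for _ in range(iterations):
        dW = 2 * (w - 3)                  # compute the gradient
        V = beta * V + (1 - beta) * dW    # accumulate velocity
        w = w - alpha * V                 # move along the averaged direction
    return w

print(momentum_gradient_descent())        # approaches the minimum at w = 3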

RMSprop: RMSprop was proposed by the University of Toronto’s Geoffrey Hinton. The intuition is to apply an exponentially weighted average to the second moment of the gradients (dW²). The pseudocode for this is as follows:

S = 0
for each iteration i:
    compute dW
    S = β S + (1 - β) dW²
    W = W - α dW / (√S + ε)
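Below is a minimal runnable sketch of the RMSprop update above on the same illustrative toy objective; the values α = 0.01, β = 0.9, and ε = 10⁻⁸ are typical defaults rather than values prescribed here.

import numpy as np

def rmsprop_gradient_descent(alpha=0.01, beta=0.9, eps=1e-8, iterations=1000):
    w = np.random.randn()                        # random initialization
    S = 0.0                                      # running average of squared gradients
    for _ in range(iterations):
        dW = 2 * (w - 3)                         # compute the gradient
        S = beta * S + (1 - beta) * dW ** 2      # second-moment estimate
        w = w - alpha * dW / (np.sqrt(S) + eps)  # step scaled by the gradient magnitude
    return w

print(rmsprop_gradient_descent())                # approaches the minimum at w = 3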

Adam Optimization: The Adam optimization algorithm combines the momentum method and RMSprop, along with bias correction. The pseudocode for this approach is as follows:

V = 0
S = 0
for each iteration i:
    compute dW
    V = β1 V + (1 - β1) dW
    S = β2 S + (1 - β2) dW²
    V_corrected = V / (1 - β1^i)
    S_corrected = S / (1 - β2^i)
    W = W - α V_corrected / (√S_corrected + ε)

Kingma and Ba, the proposers of Adam, recommended the following values for the hyperparameters.

α = 0.001
β1 = 0.9
β2 = 0.999
ε = 10⁻⁸
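Putting the pieces together, here is a minimal runnable sketch of Adam with bias correction, using the recommended hyperparameter values above on the same illustrative toy objective f(w) = (w - 3)².

import numpy as np

def adam_gradient_descent(alpha=0.001, beta1=0.9, beta2=0.999, eps=1e-8, iterations=10000):
    w = np.random.randn()                              # random initialization
    V, S = 0.0, 0.0                                    # first and second moment estimates
    for i in range(1, iterations + 1):
        dW = 2 * (w - 3)                               # compute the gradient
        V = beta1 * V + (1 - beta1) * dW               # momentum term
        S = beta2 * S + (1 - beta2) * dW ** 2          # RMSprop term
        V_corrected = V / (1 - beta1 ** i)             # bias correction
        S_corrected = S / (1 - beta2 ** i)
        w = w - alpha * V_corrected / (np.sqrt(S_corrected) + eps)
    return w

print(adam_gradient_descent())                         # approaches the minimum at w = 3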