Steepest-Descent Gradient Method Applied to the Griewank Function
1. Gradient Method Application on the Griewank Function
Talk by
Imane HAFNAOUI
University of M’Hamed Bouguara - IGEE
2. OUTLINE
• Introduction
• Griewank Function
• Gradient (steepest descent) Method
• Simulation and Results
• Improvements
• Conclusion
3. INTRODUCTION
Optimization is the task of finding the "best available"
values of an objective function over a defined domain.
Optimization problems can be found everywhere:
• Increasing market profit
• Operations research (decision science)
• Minimizing losses in power grids
Many optimization methods and algorithms have been
developed to solve these problems:
• The Particle Swarm Optimization algorithm (PSO),
presented by Frans van den Bergh in his work.
• Self-adaptive Differential Evolution (SaDE),
introduced by Qin et al. in their publication.
• Etc.
4. GRIEWANK FUNCTION
Test functions are special functions, known in the
literature, that are used as benchmarks.
They come in different classes, each designed for a
specific purpose.
One of the best-known is the Griewank function.
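For reference, the standard n-variable Griewank function is

f(\mathbf{x}) = 1 + \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\!\left( \frac{x_i}{\sqrt{i}} \right)

It has a global minimum f = 0 at the origin, surrounded by a large number of regularly spaced local minima created by the cosine product.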
6. GRADIENT METHOD
Steepest descent iteratively performs line searches
in the local downhill gradient direction.
Steps:
1. Evaluate the gradient vector
2. Compute the search direction
3. Construct the next point
4. Perform the termination test for minimization
5. Repeat the process
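Below is a minimal Python sketch of these five steps applied to the Griewank function. The backtracking (Armijo) line search, the tolerance, and the iteration cap are illustrative assumptions; the slides appear to use a different line search, so the iterates will not match the results table exactly.

```python
import numpy as np

def griewank(x):
    """Standard Griewank function (global minimum f = 0 at the origin)."""
    i = np.arange(1, len(x) + 1)
    return 1 + np.sum(x**2) / 4000 - np.prod(np.cos(x / np.sqrt(i)))

def griewank_grad(x):
    """Analytic gradient of the Griewank function."""
    i = np.arange(1, len(x) + 1)
    c = np.cos(x / np.sqrt(i))
    # d/dx_j = x_j/2000 + (product of the other cosines) * sin(x_j/sqrt(j)) / sqrt(j)
    return x / 2000 + (np.prod(c) / c) * np.sin(x / np.sqrt(i)) / np.sqrt(i)

def steepest_descent(x0, tol=1e-3, max_iter=100):
    """Steps 1-5 from the slide: gradient, direction, next point, test, repeat."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        s = -griewank_grad(x)                 # search direction: downhill gradient
        if np.linalg.norm(s) < tol:           # termination test
            return x, k
        # Backtracking (Armijo) line search along s
        alpha, f0 = 1.0, griewank(x)
        while griewank(x + alpha * s) > f0 - 1e-4 * alpha * np.dot(s, s):
            alpha *= 0.5
        x = x + alpha * s                     # construct the next point
    return x, max_iter

x_min, iters = steepest_descent([1.0, 1.0])
print(x_min, iters)   # converges to a point near the origin
```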
7. SIMULATION AND RESULTS
Results with starting point x0 = (1, 1)
k   x_k                 x_{k+1}              s_k                  ||s_k||
0   ( 1.0000,  1.0000)  ( 0.1677,  0.6767)   (-0.6402, -0.2487)   0.6868
1   ( 0.1677,  0.6767)  (-0.0250,  0.2589)   ( 0.1483,  0.3214)   0.3539
2   (-0.0250,  0.2589)  ( 0.0070,  0.0915)   (-0.0246,  0.1288)   0.1312
3   ( 0.0070,  0.0915)  (-0.0021,  0.0320)   ( 0.0070,  0.0457)   0.0463
4   (-0.0021,  0.0320)  ( 0.0006,  0.0112)   (-0.0021,  0.0160)   0.0161
5   ( 0.0006,  0.0112)  (-0.0002,  0.0039)   ( 0.0006,  0.0056)   0.0056
6   (-0.0002,  0.0039)  ( 0.0001,  0.0014)   (-0.0002,  0.0020)   0.0020
7   ( 0.0001,  0.0014)  (-0.0000,  0.0005)   ( 0.0001,  0.0007)   0.0007
(s_k is the search direction at iteration k; ||s_k|| is its norm.)
[Figure: the change in the function value after each iteration.]
Only four iterations are needed to reach a solution.
9. IMPROVEMENTS
We need to ensure that the algorithm converges to the
global optimum regardless of the starting point.
The idea is to keep the algorithm running, searching
for the smallest of the local minima (the global
minimum), even after it reaches a local minimum.
To achieve this, the gradient method is applied once
again, but this time to the quadratic term that
"governs" the overall shape of the Griewank function
(a sketch follows below).
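As one possible reading of this idea, here is a minimal sketch reusing griewank and steepest_descent from the sketch above. The jump rule (contracting x toward the origin, i.e. stepping against the gradient of the governing quadratic term) and all parameter names are illustrative assumptions, not the authors' exact procedure.

```python
def global_search(x0, jump_step=0.35, tol=1e-3, max_jumps=50):
    """Steepest descent with jumps guided by the governing quadratic term."""
    x, _ = steepest_descent(x0, tol=tol)      # settle into the nearest local minimum
    best = x.copy()
    for _ in range(max_jumps):
        # The gradient of the governing term sum(x_i^2)/4000 is x/2000 and
        # points away from the origin; stepping against it contracts x toward
        # the bottom of the bowl, out of the current basin.
        x = (1.0 - jump_step) * x
        x, _ = steepest_descent(x, tol=tol)   # descend into the next basin
        if griewank(x) < griewank(best):      # keep the smallest minimum so far
            best = x.copy()
        if np.linalg.norm(x) < tol:           # reached the bottom of the bowl
            break
    return best
```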
12. RESULTS
Table showing the results and the jumps over the local
minima on the way to the global optimum (each column
x_1 ... x_4 is the next point reached after a jump).
x_0           step   x_1                   x_2                   x_3                   x_4
(   7,   25)  0.25   (  6.280,   26.630)   (  3.1400,  13.3153)  ( 3.1400,   4.4385)   (0.00e-4, 0.41e-4)
( -80, -120)  0.35   (-78.500, -155.345)   (-25.120,  -44.384)   (-6.280,  -17.753)    (0.00e-4, 0.41e-4)
(-100,  120)  0.45   (-97.340,  119.837)   (-12.560,    8.8769)  (-3.1400,   4.4384)   (0.19e-6, 0.57e-4)
( 534, -120)  0.50   (530.659, -119.833)   (  6.2800,  -0.0001)  (-0.0e-4,   0.48e-4)
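For illustration only, the starting points and step sizes from the table can be fed to the hypothetical global_search sketch above (its iterates will differ from the authors' exact values):

```python
for x0, step in [([7, 25], 0.25), ([-80, -120], 0.35),
                 ([-100, 120], 0.45), ([534, -120], 0.50)]:
    print(x0, '->', global_search(x0, jump_step=step))
```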
13. RESULTS
[Figure: contour plot of the Griewank function showing the jumps
the algorithm performs each time it reaches a local minimum.
Starting point x0 = (120, -120).]
14. CONCLUSION
• The gradient method is good at finding the
local minimum closest to the starting point.
• The additions to the steepest-descent method
have proven successful in locating the global
minimum of the Griewank function regardless
of how far away the starting point is.
• These extensions cannot be guaranteed to
produce the same results if applied to other test
functions, or if the number of variables is
increased.