NUMERICAL ANALYSIS USING SCILAB:
SOLVING NONLINEAR EQUATIONS
In this tutorial we provide a collection of numerical methods for solving nonlinear
equations using Scilab.
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License.
Step 1: The purpose of this tutorial
The purpose of this Scilab tutorial is to provide a collection of numerical
methods for finding the zeros of scalar nonlinear functions. The methods
that we present are:
- Bisection;
- Secant;
- Newton-Raphson;
- Fixed point iteration method.
These classical methods are typical topics of a numerical analysis course
at university level.
Step 2: Roadmap
This tutorial is composed of two main parts: the first one (Steps 3-10)
introduces the problem of solving nonlinear equations, presents some
solution strategies, and discusses the properties and issues of such problems
and of their solutions. The second part (Steps 11-23) is dedicated to the
specific methods, each equipped with Scilab examples.
Descriptions Steps
Introduction and solution strategies 3-6
Conditioning and convergence 7-10
Bisection method 11-12
Secant method 13-14
Newton method 15-18
Fixed point iteration method 19-22
Conclusions and remarks 23-25
Step 3: Introduction
Many problems that arise in different areas of engineering lead to the
solution of scalar nonlinear equations of the form

f(x) = 0,

i.e. to finding a zero of a nonlinear function f.
Nonlinear equations can have none, one, two, or an infinite number of
solutions. Some examples are presented on the right.
Note: A special class of nonlinear equations is constituted by polynomials
of the form

p_n(x) = a_0 + a_1·x + a_2·x² + ... + a_n·x^n = 0.

(A linear chirp function, with infinitely many zeros)

(A function with a single zero)
The code of the examples is available in the file ex1.sce
Step 4: Solution strategies
Many solution methods exist and the correct choice depends on the type
of function f. For example, different methods are used depending on whether f is
a polynomial or a continuous function whose derivatives are not
available.

Moreover, the problem can be stated in equivalent formulations. For
example, the original formulation f(x) = 0 can be converted into a fixed
point formulation of the form x = g(x) or into a minimization problem of
the form min_x |f(x)|.
It is important to note that even if these formulations are mathematically
equivalent (they have the same solutions), the numerical methods used
to approximate the solution do not all have the same behavior.
Hence, the numerical solution strategy should take into account the kind
of problem we are trying to solve.
Example of equivalent formulations:

Original problem:
f(x) = 0

Examples of fixed point formulations:
x = x + f(x)   or   x = x - f(x)

Example of minimization formulation:
min_x f(x)²
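As a quick check of the equivalence, the following sketch (with an assumed test function f(x) = x - cos(x); the tutorial files may use a different one) verifies numerically that the three formulations share the same solution:

function y=f(x), y = x - cos(x), endfunction    // original problem: f(x) = 0
function y=g(x), y = cos(x), endfunction        // fixed point form: x = g(x)
function y=h(x), y = f(x).^2, endfunction       // minimization form: min h(x)

xsol = fsolve(0.5, f);                          // zero of f computed by Scilab
mprintf("f(xsol)        = %e\n", f(xsol));          // ~ 0
mprintf("g(xsol) - xsol = %e\n", g(xsol) - xsol);   // ~ 0 (fixed point)
mprintf("h(xsol)        = %e\n", h(xsol));          // ~ 0 (minimum value)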
Step 5: Graphical interpretation and separation of zeros
The first step of many numerical methods for solving nonlinear equations
is to identify a starting point or an interval in which to search for a single zero:
this is called "separation of zeros". If no other information is available, this
can be done by evaluating the function f at several points x and plotting
the resulting pairs (x, f(x)).
Solving the problem f(x) = 0 is equivalent to finding the solutions of the
following system

{ y = f(x)
{ y = 0

i.e., graphically, to determining, in the Cartesian plane, the intersections of the
graph of the function y = f(x) with the x-axis.

In the case of the fixed point formulation x = g(x), the graphical interpretation
is related to the system

{ y = g(x)
{ y = x

i.e. the solutions are given by the intersections of the graph of y = g(x)
with the bisector y = x.

(Separation of zeros of the original problem f(x) = 0)

(Fixed point equivalent formulation x = g(x))
The code of the example is available in the file ex2.sce
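A minimal plotting sketch for the separation of zeros (an assumption: ex2.sce may use a different function and range):

function y=f(x), y = x - cos(x), endfunction
x = linspace(-10, 10, 1000);
plot(x, f(x), 'b-');            // graph of y = f(x)
plot(x, zeros(x), 'k--');       // the x-axis: the zeros are the intersections
xlabel("x"); ylabel("f(x)");
xtitle("Separation of zeros");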
Step 6: Example of a bracketing strategy
Bracketing is an automatic strategy for finding intervals containing a zero
of a given function f. An example of bracketing is given in the following
lines of code; the idea is to identify the subintervals over which the function
changes sign:
function xsol=fintsearch(f, xmin, xmax, neval)
// Generate x vector
x = linspace(xmin, xmax, neval)';
// Evaluate function
y = f(x);
// Check for zeros
indz = find(abs(y)<=1000*%eps);
y(indz) = 0;
// Compute signs
s = sign(y);
// Find where f changes sign
inds = find(diff(s)~=0);
// Compute intervals
xsol = [x(inds),x(inds+1)];
endfunction
The code is also available in the file fintsearch.sci, while the example
can be found in fintsearch_test.sce.
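A minimal usage sketch (the actual fintsearch_test.sce may differ; the test function below is an assumption):

exec("fintsearch.sci", -1);                    // load the function above
function y=f(x), y = sin(x.^2), endfunction    // vectorized test function
intervals = fintsearch(f, 0, 5, 200);          // sign-change intervals in [0,5]
disp(intervals);                               // each row brackets one zero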
(Separation of zeros for the test function)
Step 7: Conditioning of a zero-finding problem
The conditioning of a zero-finding problem is a measure of how it is
sensitive to perturbations of the equation.
Here we denote by α a zero of the function f(x), i.e. f(α) = 0.
From the first figure on the right we can intuitively see that if the derivative
|f'(α)| is "large", the problem is well-conditioned. In this case we can
clearly identify the zero of f, even if there are rounding errors. Conversely,
if the derivative is "small", the zero-finding problem is said to be ill-
conditioned and there is no clear identification of the zero. In this case, if
rounding errors are present, the zero is spread over a "large" interval of
uncertainty.
In summary, we can state the following:
- The condition number of the root-finding problem is 1/|f'(α)|;
- The problem is ill-conditioned if 1/|f'(α)| is large, i.e. if |f'(α)| is small.
In the lower graphic on the right we can see that the zero of a very
ill-conditioned test function cannot be identified because of ill-conditioning.
The code of the example is available in the file ex3.sce
(Example of well- and ill-conditioned root-finding problems)

(Given a very ill-conditioned problem, the unique zero cannot be
identified)
Estimating the conditioning of the problem of finding a (single) zero α of a
(continuously differentiable) function f(x) means providing an estimate of
the relative error of a perturbed solution.

Finding the zero of a function f(x), i.e. solving f(α) = 0, is equivalent (for
continuously differentiable, invertible functions) to solving the inverse problem

α = f⁻¹(0).

If we consider a perturbed solution α̃, i.e. f(α̃) = δ or, equivalently,
α̃ = f⁻¹(δ), where δ is the error made in the evaluation of f, we have the
following error:

α̃ - α = f⁻¹(δ) - f⁻¹(0).

Using the following Taylor expansion

f⁻¹(δ) = f⁻¹(0) + (f⁻¹)'(0)·δ + O(δ²)

and the relation obtained on the right

(f⁻¹)'(0) = 1/f'(α),

the error can be written as

α̃ - α ≈ (f⁻¹)'(0)·δ = δ/f'(α).

Hence, the relative error can be stated as

|α̃ - α|/|α| ≈ |δ|/(|α|·|f'(α)|).
The code of the example on the right is available in the file ex4.sce
Original problem:
Find α such that f(α) = 0, i.e. α = f⁻¹(0).

Perturbed problem:
Find α̃ such that f(α̃) = δ, i.e. α̃ = f⁻¹(δ).

Inverse function:

(Example of an inverse function)

The function and its inverse are related by the relation

f⁻¹(f(x)) = x,   f(f⁻¹(y)) = y,

and the derivative of the inverse function satisfies the relation

(f⁻¹)'(f(x))·f'(x) = 1,   i.e.   (f⁻¹)'(f(x)) = 1/f'(x)

(where y = f(x)).
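A minimal numerical check of this estimate (an assumption, not the ex4.sce code): for f(x) = x² - 2, the perturbed problem f(x) = δ has the exact solution √(2 + δ), which can be compared with the first-order estimate δ/f'(α):

delta  = 1e-6;                       // perturbation on the value of f
alpha  = sqrt(2);                    // exact zero of f(x) = x^2 - 2
alphap = sqrt(2 + delta);            // exact zero of the perturbed problem
mprintf("actual error    = %e\n", abs(alphap - alpha));
mprintf("estimated error = %e\n", delta/abs(2*alpha));   // delta/|f'(alpha)|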
Step 8: Convergence rates of iterative methods
Typically, methods for approximating the solution of nonlinear equations are
based on iterative strategies: starting from an initial guess x_0, they compute a
sequence of approximations x_1, x_2, ... that converges to the desired
solution α (where f(α) = 0), i.e.

x_k → α   as k → ∞.

We define the rate (order) of convergence p of the sequence through

|x_{k+1} - α| / |x_k - α|^p = |e_{k+1}| / |e_k|^p → C   as k → ∞,

for some constants C > 0 and p ≥ 1. C is called the asymptotic error
constant, while e_k = x_k - α is the error at the k-th iteration.

If p = 1 and C < 1, the convergence is called linear. We require C < 1 to
ensure the convergence of the method: indeed, the error must be reduced
at each iteration, as explained by this relation:

|e_{k+1}| ≈ C·|e_k| ≈ C·(C·|e_{k-1}|) ≈ ... ≈ C^(k+1)·|e_0|.

Here we compare the (k+1)-th step error with the initial error.

If p > 1 (or p = 1 and C = 0), the convergence is called superlinear.

If p = 2, the convergence is called quadratic.
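When a sequence of errors e_k is available, the order p can be estimated numerically; a minimal sketch (an assumption, not part of the tutorial files):

// p ~ log(e_{k+1}/e_k) / log(e_k/e_{k-1}) for consecutive errors
e = [1e-1 1e-2 1e-4 1e-8 1e-16];    // errors of a quadratically convergent run
p = log(e(3:$)./e(2:$-1)) ./ log(e(2:$-1)./e(1:$-2));
disp(p);                            // equal to 2 for this exactly quadratic sequence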
The following figure shows a typical convergence rate profile where we can
identify three different regions:
- an exploration region: in this region the solution space is explored in
search of an initial guess (starting point) for which the convergence
properties are guaranteed; moreover, in this region there is no significant
reduction of the error;
- a convergence region: the basin of attraction of the solution;
- a stagnation region: this last region is due to the unavoidable round-off
errors of the floating point system.
This figure stresses the fact that the definition of the convergence rate is
valid only “in the convergence region”, hence it is a local definition.
(Typical behavior of a convergence rate profile)
Step 9: Examples of convergence rates
Let us suppose we are looking for a zero α, with an initial error e_0 = |x_0 - α| = 1.
Linear rate:

Consider the following error model:

e_{k+1} = 0.1·e_k   with e_0 = 1.

In this case we get the following errors:

e_0 = 1       zero significant figures
e_1 = 0.1     one significant figure
e_2 = 0.01    two significant figures
e_3 = 0.001   three significant figures

With a linear rate of convergence, the number of significant figures the
method gains is constant at each step (the total is a multiple of the iteration number).
Quadratic rate:

Consider the following error model:

e_{k+1} = 0.1·e_k²   with e_0 = 1.

In this case we get the following errors:

e_0 = 1       zero significant figures
e_1 = 0.1     one significant figure (one figure gained)
e_2 = 0.001   three significant figures (two figures gained)
e_3 = 1e-7    seven significant figures (four figures gained)

With a quadratic rate of convergence, the number of significant figures the
method gains at each iteration is roughly twice the number gained at the
previous iteration.
A comparison of the typical rate of convergence (when rounding errors are
present) is shown in the following figure:
(Comparison between linear, superlinear and quadratic rate of
convergence)
The number of figures gained per iteration can be summarized in the
following table:
Convergence rate Figures gained per iteration
Linear Constant
Superlinear Increasing
Quadratic Double
The code of the example is available in the file ex5.sce
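A minimal sketch reproducing the two error models above (an assumption; ex5.sce may differ), printing the number of significant figures -log10(e_k):

n  = 4;
el = ones(1, n);  eq = ones(1, n);      // e_0 = 1 for both models
for k = 1:n-1
    el(k+1) = 0.1*el(k);                // linear model
    eq(k+1) = 0.1*eq(k)^2;              // quadratic model
end
disp(-log10(el));                       // 0 1 2 3  (constant gain)
disp(-log10(eq));                       // 0 1 3 7  (gain doubles)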
Step 10: Convergence criteria
When we approximate a solution with an iterative method, it is necessary
to choose how to properly stop the algorithm and hence return the
solution. Since each evaluation of the function can be computationally
expensive, it is important to avoid unnecessary evaluations (for instance,
evaluations in the stagnation region).
The convergence criteria reported on the right refer to the following
problem:

- Find a solution α such that f(α) = 0, starting from an initial guess x_0
(with value f(x_0)) in a given interval.

The design criteria are based on the absolute or relative error, either for the
variable x or for the value of the function f. The choice between a
criterion based on x and one based on f depends on the conditioning of the
nonlinear problem, while the choice between the absolute and the relative error
depends on the scaling of the nonlinear equations.

In our example, we consider the relative errors for f and x since they are
adimensional, i.e. they allow us to avoid multiplicative constants.

Example of convergence criteria:

- Absolute error between two iterates on x:
|x_{k+1} - x_k| ≤ xtol
- Relative error between two iterates on x:
|x_{k+1} - x_k| / xref ≤ xtol    (e.g. xref = |x_{k+1}|)
- Absolute residual on f:
|f(x_{k+1})| ≤ ftol
- Relative residual on f:
|f(x_{k+1})| / fref ≤ ftol    (e.g. fref = |f(x_0)|)
Example of implementation of a stopping criterion:
// Check for convergence
if (abs(fxnew)/fref < ftol) | (abs(dx)/xref < xtol)
// The root is found
x = xnew;
fx = fxnew;
end
The code checks for convergence both on f and on x. If we are dealing with an
ill-conditioned problem, it is likely that the iterates x_k will not converge, so
the check on f will stop the iterative method.
Step 11: Bisection method
Supposing we are looking for a zero of a continuous function, this method
starts from an interval [a,b] containing the solution and then evaluates the
function at the midpoint m=(a+b)/2. Then, according to the sign of the
function, it moves to the subinterval [a,m] or [m,b] containing the solution
and it repeats the procedure until convergence.
The main pseudo-code of the algorithm is the following:
Algorithm pseudo-code
while ((b - a) > tol) do
m = (a + b)/2
if sign(f(a)) = sign(f(m)) then
a = m
else
b = m
end
end
The figure on the right refers to the first 4 iterations of the bisection
method applied to the function f(x) = x² - 2 in the interval [1,2]. The
method starts from the initial interval [a,b]=[1,2] and evaluates the function
at the midpoint m=1.5. Since the sign of the function in m=1.5 is equal to
the sign of the function in b=2, the method moves to the interval
[a,m]=[1,1.5], which contains the zero. At the second step, it starts from
the interval [a,b]=[1,1.5], it evaluates the function at the midpoint m=1.25
and it moves to the interval [1.25, 1.5]. And so on.
The function is available in the file bisection.sci, while the example can
be found in bisection_test.sce.

(Example of the first four iterations of the bisection method)
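A minimal Scilab implementation consistent with the pseudo-code above (a sketch; the distributed bisection.sci may differ):

function [x, niter]=bisect(f, a, b, tol)
    niter = 0;
    while (b - a) > tol
        m = (a + b)/2;
        if sign(f(a)) == sign(f(m)) then
            a = m;
        else
            b = m;
        end
        niter = niter + 1;
    end
    x = (a + b)/2;                       // midpoint of the final interval
endfunction

// Example: zero of f(x) = x^2 - 2 in [1,2]
function y=f(x), y = x.^2 - 2, endfunction
[x, niter] = bisect(f, 1, 2, 1e-10);
mprintf("x = %.12f after %d iterations\n", x, niter);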
Step 12: Convergence of the bisection method
At each iteration of the method the search interval is halved (and still
contains the zero), i.e.

(b_{k+1} - a_{k+1}) = (b_k - a_k)/2.

Hence, taking x_n as the midpoint of the n-th interval, the absolute error at
the n-th iteration satisfies

|x_n - α| ≤ (b_n - a_n)/2 = (b - a)/2^(n+1),

and the convergence |x_n - α| → 0 is guaranteed for n → ∞.

Observe that, although the interval is halved at each iteration, this relation
does not guarantee that |x_{n+1} - α| ≤ |x_n - α| (i.e. monotone
convergence to α), as explained in the figure below. Nevertheless, the rate
of convergence of this method is conventionally considered linear (with
asymptotic constant C = 1/2).
The figure on the right shows the relative error over the iterations
(reported in the table below) of the method applied to the function
f(x) = x² - 2 in the interval [1,2], where the analytic solution is √2. As
expected, the method gains about one significant figure every 3-4 iterations
(the error is divided by 2 at each step, and 2³ < 10 < 2⁴).
(Relative error of the bisection method)
(Iterations of the bisection method)
Step 13: Secant method
Supposing we are looking for a zero of a continuous function f, this method
starts from two approximations (x_0, f(x_0)) and (x_1, f(x_1)) of the unknown
zero α (with f(α) = 0) and computes the new approximation x_2 as the zero
of the straight line passing through the two given points. Hence, x_2 can be
obtained by solving the following system:

{ y = f(x_1) + (f(x_1) - f(x_0))/(x_1 - x_0)·(x - x_1)
{ y = 0

giving

x_2 = x_1 - f(x_1)·(x_1 - x_0)/(f(x_1) - f(x_0)).

Once the new approximation is computed, we repeat the same procedure
with the new initial points (x_1, f(x_1)) and (x_2, f(x_2)).

The iterative formula is

x_{k+1} = x_k - f(x_k)·(x_k - x_{k-1})/(f(x_k) - f(x_{k-1})).

The figures on the right refer to the first four iterations of the method for
the function f(x) = x² - 2 with the two initial guess values x_0 and x_1.
The function is available in the file secant.sci, while the example can be
found in secant_test.sce.
(First four iterations of the secant method)
Step 14: Convergence of the secant method
The main pseudo-code of the algorithm is the following:
Algorithm
xkm1 = x0; fkm1 = f(x0) // Step: k-1
xk = x1; fk = f(x1) // Step: k
xkp1 = xk // Initialization
iter = 1 // Current iteration
while iter <= itermax do
iter = iter+1
xkp1 = xk-(fk*(xk-xkm1))/(fk-fkm1)
if abs(xkp1-xk)<tol break // Converg. test
xkm1 = xk; fkm1 = fk
xk = xkp1; fk = f(xkp1)
end
The algorithm iterates until convergence or until the maximum number of
iterations is reached. At each iteration only one function evaluation is
required. The “break” statement terminates the execution of the while
loop.
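A minimal runnable Scilab version of the algorithm above (a sketch; the distributed secant.sci may differ):

function x=secantm(f, x0, x1, tol, itermax)
    xkm1 = x0; fkm1 = f(x0);
    xk   = x1; fk   = f(x1);
    xkp1 = xk;
    for iter = 1:itermax
        xkp1 = xk - fk*(xk - xkm1)/(fk - fkm1);
        if abs(xkp1 - xk) < tol then break, end   // convergence test
        xkm1 = xk;   fkm1 = fk;
        xk   = xkp1; fk   = f(xkp1);
    end
    x = xkp1;
endfunction

function y=f(x), y = x.^2 - 2, endfunction
disp(secantm(f, 1, 2, 1e-12, 50));                // ~ 1.414213562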
For the secant method it is possible to prove the following result: if the
function f is continuous with continuous derivatives up to order 2 near the
zero, the zero is simple (it has multiplicity 1), and the initial guesses x_0 and
x_1 are picked in a sufficiently small neighborhood of the zero, then the method
converges and the convergence rate is equal to

p = (1 + √5)/2 ≈ 1.618

(superlinear).
The figure on the right shows the relative error over the iterations
(reported in the table below) of the method applied to the function
f(x) = x² - 2 in the interval [1,2], where the analytic solution is √2.
(Relative error of the secant method)
(Iterations of the secant method)
Step 15: Newton method
Supposing we are looking for a zero of a continuous function f with a
continuous derivative, this method starts from an approximation x_0 of the
unknown zero and computes the new approximation x_1 as the zero of
the straight line passing through the point (x_0, f(x_0)) and tangent to the
function. Hence, x_1 can be obtained by solving the following system:

{ y = f(x_0) + f'(x_0)·(x - x_0)
{ y = 0

giving

x_1 = x_0 - f(x_0)/f'(x_0).

Once the new approximation is computed, we repeat the same procedure
with the new initial point x_1. The iterative formula is then

x_{k+1} = x_k - f(x_k)/f'(x_k).

The figures on the right refer to the first four iterations of the method for
the function f(x) = x² - 2 with a given initial guess x_0.
The function is available in the file newton.sci, while all the examples
related to this method can be found in newton_test.sce.
(First four iterations of the Newton method)
Step 16: Convergence of the Newton method
The main pseudo-code of the algorithm is the following:
Algorithm
xk = x0;
iter = 1 // Current iteration
while iter <= itermax do
iter = iter+1
xkp1 = xk-f(xk)/f'(xk)
if abs(xkp1-xk)<tol break // Converg. test
xk = xkp1;
end
The algorithm iterates until convergence or until the maximum number of
iterations is reached. At each iteration one evaluation of the function and
one of its derivative are required. The "break" statement terminates the
execution of the while loop.
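A minimal runnable Scilab version (a sketch; the distributed newton.sci may differ), with the derivative passed explicitly:

function x=newtonm(f, df, x0, tol, itermax)
    xk   = x0;
    xkp1 = xk;
    for iter = 1:itermax
        xkp1 = xk - f(xk)/df(xk);
        if abs(xkp1 - xk) < tol then break, end   // convergence test
        xk = xkp1;
    end
    x = xkp1;
endfunction

function y=f(x),  y = x.^2 - 2, endfunction
function y=df(x), y = 2*x,      endfunction
disp(newtonm(f, df, 2, 1e-14, 50));               // ~ 1.414213562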
For the Newton method it is possible to prove the following result: if the
function f is continuous with continuous derivatives up to order 2 near the
zero, the zero is simple (it has multiplicity 1), and the initial guess x_0 is
picked in a sufficiently small neighborhood of the zero, then the method
converges and the convergence rate is equal to p = 2 (quadratic).
The figure on the right shows the relative error over the iterations
(reported in the table below) of the method applied to the function
f(x) = x² - 2 in the interval [1,2], where the analytic solution is √2. As
expected, the number of significant figures roughly doubles at each iteration.
(Relative error of the Newton method)
(Iterations of the Newton method)
Step 17: Newton method (loss of quadratic convergence)
A zero α is said to be multiple with multiplicity m > 1 if

f(α) = f'(α) = ... = f^(m-1)(α) = 0   and   f^(m)(α) ≠ 0.

In the Newton method, if the zero is multiple, the convergence rate
decreases from quadratic to linear. As an example, consider the function
f(x) = x^m with zero α = 0. The Newton iteration becomes

x_{k+1} = x_k - x_k^m/(m·x_k^(m-1)) = (1 - 1/m)·x_k,

giving the error

e_{k+1} = (1 - 1/m)·e_k,

which is clearly linear.

On the right we report an example of loss of quadratic convergence for a
function which has a zero of multiplicity 2.

The relative error shows a linear behavior: indeed, the method gains a
constant number of significant figures every 3 or 4 iterations.
(Function with a zero of multiplicity 2)
(Loss of quadratic convergence)
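A minimal sketch of the loss of quadratic convergence (the test function here, f(x) = (x - 1)², is an assumption and not necessarily the one used in the tutorial figure):

function y=f(x),  y = (x - 1).^2, endfunction     // double zero at x = 1
function y=df(x), y = 2*(x - 1),  endfunction
xk = 2;
for k = 1:10
    xk = xk - f(xk)/df(xk);                       // Newton step
    mprintf("k = %2d   error = %e\n", k, abs(xk - 1));   // error halves each step
end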
Step 18: Newton method (Global convergence)
If the function of which we are looking for the zeros and its derivatives until
order 2 are continuous, it is possible to ensure the global convergence of
the Newton method by choosing a proper initial guess point.
As reported in the table on the right, under the above mentioned
hypotheses it is possible to identify a neighborhood I of the zero such
that, for each initial guess x_0 in I, the sequence x_k converges monotonically
(decreasing or increasing) to the zero.
For instance, if the function is convex and increasing (second row and
second column case in the table), the Newton method with an initial guess
picked on the right of the zero converges monotonically decreasing to the
unknown zero.
With the above criterion for choosing the initial guess, the Newton method also
ensures that all the iterates x_k are well defined, i.e. the method does not
generate any point in which the function is not defined (an example in which
the monotonicity of the convergence is important is the logarithm function,
which is not defined for negative values).
Function properties      f'(x) < 0 (decreasing)   f'(x) > 0 (increasing)
f''(x) < 0 (concave)     right of the zero        left of the zero
f''(x) > 0 (convex)      left of the zero         right of the zero

(Each table entry indicates on which side of the zero the initial guess x_0
should be picked.)
(How to choose the initial guess for global convergence)
Step 19: Fixed point iteration method
The fixed point iteration method transforms the original problem f(x) = 0
into the equivalent problem x = g(x) and solves it using an iterative scheme
of the form

x_{k+1} = g(x_k).

If the iterative scheme converges to the value α, i.e. α = g(α), then α is
also a zero of f(x), since the two formulations are equivalent (f(α) = 0 if
and only if α = g(α)).

Solving the equation x = g(x) is equivalent to solving the following system:

{ y = g(x)
{ y = x

On the right we report some graphical examples of iterations applied to
6 different functions g. The three examples on the left show cases in
which the method converges to the unknown zero, while for the
examples on the right there is no convergence of the method, even if the
functions seem to be quite similar to the ones on the left.

Note: The Newton method is a particular case of the fixed point iteration
method, with

g(x) = x - f(x)/f'(x).
(Graphical examples of iterations applied to 6 different functions )
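A minimal fixed point iteration sketch (the distributed fixedpoint.sci may differ), applied here to the assumed test problem x = cos(x):

function x=fixpoint(g, x0, tol, itermax)
    xk   = x0;
    xkp1 = xk;
    for iter = 1:itermax
        xkp1 = g(xk);
        if abs(xkp1 - xk) < tol then break, end   // convergence test
        xk = xkp1;
    end
    x = xkp1;
endfunction

function y=g(x), y = cos(x), endfunction    // |g'(x)| = |sin(x)| < 1 near the fixed point
disp(fixpoint(g, 1, 1e-10, 200));           // ~ 0.7390851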
Step 20: Fixed point iteration method - example #1
The example on the right refers to a test function f(x) whose zero is
approximated with a fixed point iteration of the form x_{k+1} = g(x_k).

Here the relative error shows the linear behavior of the method, while on the
right we can see the first four iterations of the method.
(Relative error and iterations of the fixed point iteration method)
The function is available in the file fixedpoint.sci, while the example can
be found in fixedpoint_test.sce.
(First four iterations of the fixed point iteration method)
Step 21: Convergence of the fixed point iteration method
It is possible to prove that the sequence x_{k+1} = g(x_k) converges to α if
|g'(x)| < 1 (i.e. g is a contraction) in an interval I containing the
initial guess x_0 and the fixed point, and if g maps I into itself (i.e. g(I) ⊆ I).
Moreover, in this case, it is possible to prove that α is the unique solution in
the interval I of the equation x = g(x). Conversely, if |g'(x)| > 1 in the whole
interval I, the sequence does not converge to the solution (even if we start
very close to the zero α).

The convergence rate of the fixed point iteration method is as follows:

- If g'(α) ≠ 0 (and |g'(α)| < 1), the method has a linear convergence rate;
- If g'(α) = 0 and g''(α) ≠ 0, the method has a quadratic
convergence rate.

Example: The fixed point iteration corresponding to the Newton method has
the fixed point function

g(x) = x - f(x)/f'(x),

with f(α) = 0 and f'(α) ≠ 0, and shows a quadratic rate of convergence:
indeed we have

g'(x) = 1 - (f'(x)² - f(x)·f''(x))/f'(x)² = f(x)·f''(x)/f'(x)²,

so that g'(α) = 0 (since f(α) = 0), while in general g''(α) ≠ 0.
Proof of convergence (idea)

Using the Taylor (mean value) formula we have

x_{k+1} - α = g(x_k) - g(α) = g'(ξ_k)·(x_k - α)
x_k - α = g(x_{k-1}) - g(α) = g'(ξ_{k-1})·(x_{k-1} - α)
...
x_1 - α = g(x_0) - g(α) = g'(ξ_0)·(x_0 - α)

where the ξ_i are unknown values in the intervals between x_i and α. If the
derivative is bounded by the relation

|g'(x)| ≤ K < 1,

the error can be written as

|x_{k+1} - α| ≤ K·|x_k - α| ≤ ... ≤ K^(k+1)·|x_0 - α|,

i.e.

|e_{k+1}| ≤ K^(k+1)·|e_0|,

where e_0 is the initial error; hence the error tends to zero.
Proof of convergence rate (idea)

Assuming α = g(α) and using the Taylor formula we have

x_{k+1} - α = g(x_k) - g(α) = g'(α)·(x_k - α) + (1/2)·g''(η_k)·(x_k - α)²

where η_k is an unknown value between x_k and α.

If g'(α) ≠ 0 we have a linear rate of convergence of the sequence, i.e.

|e_{k+1}|/|e_k| → |g'(α)|.

If g'(α) = 0 and g''(α) ≠ 0 we have a quadratic rate of convergence of the
sequence, i.e.

|e_{k+1}|/|e_k|² → |g''(α)|/2.
Step 22: Fixed point iteration method - example #2
This example refers to the zero α of a nonlinear function f(x) (shown in the
figure on the right).

Starting from this function it is possible to write different kinds of fixed
point iterations:

1. x_{k+1} = g_1(x_k): this choice shows a linear behavior of convergence
(blue line), indeed 0 < |g_1'(α)| < 1;
2. x_{k+1} = g_2(x_k): this choice does not converge (red line), indeed
|g_2'(α)| > 1;
3. x_{k+1} = g_3(x_k): this choice shows a quadratic rate of convergence
(purple line), indeed g_3'(α) = 0.

The initial guess x_0 is the same for the three iterations.
Also this second example can be found in the file fixedpoint_test.sce.
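A minimal sketch of a comparable experiment on an assumed test problem (f(x) = x² - x - 2, zero α = 2; the tutorial's actual function and fixed point maps may differ):

function y=g1(x), y = sqrt(x + 2), endfunction                   // |g1'(2)| = 1/4 < 1: linear
function y=g2(x), y = x.^2 - 2, endfunction                      // |g2'(2)| = 4 > 1: no convergence
function y=g3(x), y = x - (x.^2 - x - 2)./(2*x - 1), endfunction // Newton map: g3'(2) = 0, quadratic
alpha = 2; x1 = 1.5; x2 = 1.5; x3 = 1.5;
for k = 1:8
    x1 = g1(x1); x2 = g2(x2); x3 = g3(x3);
    mprintf("k=%d  e1=%.1e  e2=%.1e  e3=%.1e\n", k, abs(x1-alpha), abs(x2-alpha), abs(x3-alpha));
end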
(Original function)
(Relative error for different fixed point iterations)
Step 23: Comparison of the methods – an example
This example refers to the zero of a nonlinear test function (shown in the
figure on the right).
In the figure on the right we can see a typical result for the relative errors,
consistent with the rates of convergence discussed earlier for the different
methods.
At first glance, one might think that the Newton method should always be
chosen, since its convergence rate is the best one, but it has to be
considered that this method needs the computation of the derivative,
whose evaluation (for instance by finite differences) may require additional
evaluations of the original function and can therefore be numerically expensive.
The source code of this example can be found in the file main_fcn.sce.
(Nonlinear test function)
(Relative error of the 4 methods)
Step 24: Concluding remarks and References
In this tutorial we have collected a series of numerical examples written in
Scilab to tackle the problem of finding the zeros of scalar nonlinear
functions, using the main methods on which the state-of-the-art
algorithms are based.
On the right-hand column you may find a list of references for further
studies.
1. Scilab Web Page: www.scilab.org.
2. Openeering: www.openeering.com.
3. K. Atkinson, An Introduction to Numerical Analysis, John Wiley, New
York, 1989.
Step 25: Software content
To report a bug or suggest some improvement please contact the
Openeering team at the web site www.openeering.com.
Thank you for your attention,
Anna Bassi and Manolo Venturin
--------------
Main directory
--------------
ex1.sce : Examples of zero-finding problems
ex2.sce : Examples of separation and fixed point
ex3.sce : Example on conditioning
ex4.sce : Example of inverse function
ex5.sce : Examples of convergence rates
fintsearch.sci : Search intervals
fintsearch_test.sce: Test for search intervals
bisection.sci : Bisection method
bisection_test.sce : Test for the bisection method
secant.sci : Secant method
secant_test.sce : Test for the secant method
newton.sci : Newton-Raphson method
newton_test.sce : Test for the Newton-Raphson method
fixedpoint.sci : Fixed point iteration method
fixedpoint_test.sce: Tests for the fixed point iteration method
main_fcn.sce : Comparison of methods
license.txt : License file
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
 
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINEMANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
 
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
 
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
 
KubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghlyKubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghly
 
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
 
DJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINE
DJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINEDJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINE
DJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINE
 
Roadmap to Membership of RICS - Pathways and Routes
Roadmap to Membership of RICS - Pathways and RoutesRoadmap to Membership of RICS - Pathways and Routes
Roadmap to Membership of RICS - Pathways and Routes
 
UNIT-III FMM. DIMENSIONAL ANALYSIS
UNIT-III FMM.        DIMENSIONAL ANALYSISUNIT-III FMM.        DIMENSIONAL ANALYSIS
UNIT-III FMM. DIMENSIONAL ANALYSIS
 

Numerical analysis using Scilab: Solving nonlinear equations

  • 1. Nonlinear equations www.openeering.com page 1/25 NUMERICAL ANALYSIS USING SCILAB: SOLVING NONLINEAR EQUATIONS In this tutorial we provide a collection of numerical methods for solving nonlinear equations using Scilab. Level This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License.
  • 2. Nonlinear equations www.openeering.com page 2/25 Step 1: The purpose of this tutorial The purpose of this Scilab tutorial is to provide a collection of numerical methods for finding the zeros of scalar nonlinear functions. The methods that we present are:  Bisection;  Secant;  Newton-Raphson;  Fixed point iteration method. These classical methods are typical topics of a numerical analysis course at university level. An introduction to NUMERICAL ANALYSIS USING SCILAB solving nonlinear equations Step 2: Roadmap This tutorial is composed of two main parts: the first one (Steps 3-10) contains an introduction about the problem of solving nonlinear equations, presents some solution strategies and introduces properties and issues of such problems and solutions. The second part (Steps 11-23) is dedicated to the specific methods, equipped with many Scilab examples. Descriptions Steps Introduction and solution strategies 3-6 Conditioning and convergence 7-10 Bisection method 11-12 Secant method 13-14 Newton method 15-18 Fixed point iteration method 19-22 Conclusions and remarks 23-25
  • 3. Nonlinear equations www.openeering.com page 3/25 Step 3: Introduction Many problems that arise in different areas of engineering lead to the solution of scalar nonlinear equations of the form ( ) i.e. to find a zero of a nonlinear function. Nonlinear equations can have none, one, two, or an infinite number of solutions. Some examples are presented on the right. Note: A special class of nonlinear equations is constituted by polynomials of the form ( ) . (Linear chirp function ( ( )) with infinite zeros) (Function ( ) ( ) with one zero) The code of the examples is available in the file ex1.sce
  • 4. Nonlinear equations www.openeering.com page 4/25 Step 4: Solution strategies Many solution methods exist and the correct choice depends on the type of function . For example, different methods are used whether is a polynomial or it is a continuous function whose derivatives are not available. Moreover, the problem can be stated in equivalent formulations. For example, the original formulation ( ) can be converted into a fixed point formulation of the form ( ) or into a minimization problem of the form ( ). It is important to note that even if these formulations are mathematically equivalent (their zeros are the same ones), the numerical methods used to approximate the solution do not have all the same behavior. Hence, the numerical solution strategy should take into account the kind of problem we try to solve. Example of equivalent formulations: Original problem: ( ) Examples of fixed point formulation: or ( ) Example of minimization formulation: ( )
following system

   y = f(x)
   y = 0

i.e., graphically, to determine, in a Cartesian plane, the intersections of the graph of the function y = f(x) with the x-axis.

In the case of the fixed point formulation x = g(x), the graphical interpretation is related to the system

   y = g(x)
   y = x

i.e. the solutions are given by the intersections of the graph of g(x) with the bisector y = x.

(Separation of zeros of the original problem)
(Fixed point equivalent formulation x = g(x))

The code of the example is available in the file ex2.sce
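A minimal Scilab sketch of this graphical separation is reported below; it assumes the illustrative function f(x) = x - cos(x), which is not necessarily the one used in ex2.sce, and simply plots y = f(x) together with the x-axis.

// Separation of zeros by plotting (assumed example function, not the tutorial's)
deff('y=fsep(x)', 'y = x - cos(x)');
x = linspace(-2, 2, 200);
clf();
plot(x, fsep(x), 'b-');      // graph of y = f(x)
plot(x, zeros(x), 'k--');    // the x-axis y = 0
xgrid();
// The zero is separated where the solid curve crosses the dashed line (near x = 0.739)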
Nonlinear equations www.openeering.com page 6/25

Step 6: Example of a bracketing strategy

Bracketing is an automatic strategy for finding intervals containing a zero of a given function f. An example of bracketing is given in the following lines of code; the idea is to identify the points in which the function changes sign:

function xsol=fintsearch(f, xmin, xmax, neval)
    // Generate x vector
    x = linspace(xmin, xmax, neval)';
    // Evaluate function
    y = f(x);
    // Check for zeros
    indz = find(abs(y)<=1000*%eps);
    y(indz) = 0;
    // Compute signs
    s = sign(y);
    // Find where f changes sign
    inds = find(diff(s)~=0);
    // Compute intervals
    xsol = [x(inds),x(inds+1)];
endfunction

The code is also available in the file fintsearch.sci, while the example can be found in fintsearch_test.sce.

(Separation of zeros for the example function)
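A possible usage sketch of fintsearch is reported below; the test function and the search interval are assumptions chosen for illustration, not the ones of fintsearch_test.sce.

// Usage sketch: bracket the zeros of cos(x) on [0, 10] (assumed test problem)
function y=ftest(x)
    y = cos(x);
endfunction

intervals = fintsearch(ftest, 0, 10, 100);
disp(intervals);   // each row [a, b] is an interval where ftest changes sign
                   // (the true zeros are %pi/2, 3*%pi/2 and 5*%pi/2)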
Nonlinear equations www.openeering.com page 7/25

Step 7: Conditioning of a zero-finding problem

The conditioning of a zero-finding problem is a measure of how sensitive it is to perturbations of the equation. Here we denote by α a zero of the function f(x), i.e. f(α) = 0.

From the first figure on the right we can intuitively see that if the derivative |f'(α)| is "large" the problem is well-conditioned. In this case we can clearly identify the zero of f, even if there are rounding errors. Conversely, if the derivative is "small", the zero-finding problem is said to be ill-conditioned and there is no clear identification of the zero. In this case, if rounding errors are present, the zero is spread over a "large" interval of uncertainty.

In summary, we can state the following:
- the condition number of the root-finding problem is 1/|f'(α)|;
- the problem is ill-conditioned if 1/|f'(α)| is large, i.e. |f'(α)| is small.

In the second figure on the right we can see that the zero of f cannot be identified because of ill-conditioning.

The code of the example is available in the file ex3.sce

(Example of well- and ill-conditioned root-finding problems)
(Given a very ill-conditioned problem, the unique zero cannot be identified)
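The following sketch illustrates the effect of conditioning numerically; the two test functions are assumptions chosen for illustration and are not necessarily the ones used in ex3.sce.

// Perturbing f(x) = 0 into f(x) = delta moves the zero by roughly delta/|f'(alpha)|
delta = 1e-10;

// Well-conditioned: f1(x) = x - 1, f1'(1) = 1 -> the zero moves by about delta
x1 = 1 + delta;                 // solution of f1(x) = delta
mprintf("well-conditioned shift: %e\n", abs(x1 - 1));

// Ill-conditioned: f2(x) = (x - 1)^5, f2'(1) = 0 -> the zero moves by delta^(1/5)
x2 = 1 + delta^(1/5);           // solution of f2(x) = delta
mprintf("ill-conditioned shift:  %e\n", abs(x2 - 1));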
Nonlinear equations www.openeering.com page 8/25

Estimating the conditioning of the problem of finding a (single) zero of a (continuously differentiable) function f(x) means to provide an estimate of the relative error of a perturbed solution.

Finding the zero of a function f(x), i.e. solving f(α) = 0, is equivalent (for continuously differentiable functions) to solving the inverse problem α = f^(-1)(0).

If we consider a perturbed solution x̃, i.e. f(x̃) = δ or, equivalently, x̃ = f^(-1)(δ), where δ is the error made on the evaluation of f, we have the following error:

   x̃ - α = f^(-1)(δ) - f^(-1)(0)

Using the Taylor expansion

   f^(-1)(δ) = f^(-1)(0) + (f^(-1))'(0)·δ + O(δ²)

and the relation obtained on the right

   (f^(-1))'(0) = 1/f'(α)

the error can be written as

   x̃ - α ≈ (f^(-1))'(0)·δ = δ/f'(α)

Hence, the relative error can be stated as

   |x̃ - α|/|α| ≈ |δ|/(|f'(α)|·|α|)

The code of the example on the right is available in the file ex4.sce

Original problem: find α such that f(α) = 0, i.e. α = f^(-1)(0)

Perturbed problem: find x̃ such that f(x̃) = δ, i.e. x̃ = f^(-1)(δ)

(Example of an inverse function)

The function and its inverse are related by the relations f^(-1)(f(x)) = x and f(f^(-1)(y)) = y, and the derivative of the inverse function satisfies the relation

   (f^(-1))'(y) = 1/f'(f^(-1)(y)),

i.e. (f^(-1))'(f(x)) = 1/f'(x) (where y = f(x)).
Nonlinear equations www.openeering.com page 9/25

Step 8: Convergence rates of iterative methods

Typically, methods for approximating the solution of nonlinear equations are based on iterative strategies: starting from an initial guess x_0, they compute a sequence of solutions x_k that converges to the desired solution α (where f(α) = 0), i.e.

   lim_{k→∞} x_k = α.

We define the rate of convergence p of the sequence through

   lim_{k→∞} |e_{k+1}| / |e_k|^p = C

for some constant C > 0 and p ≥ 1. C is called the asymptotic error constant, while e_k = x_k - α is the error at the k-th iteration.

If p = 1 and C < 1 the convergence is called linear. We require C < 1 to ensure the convergence of the method: indeed, the error must be reduced at each iteration, as explained by this relation, which compares the (n+1)-th step error with the initial error:

   |e_{n+1}| ≤ C|e_n| ≤ C²|e_{n-1}| ≤ ... ≤ C^(n+1)|e_0|.

If p = 1 and C = 0 (or 1 < p < 2) the convergence is called superlinear. If p = 2 the convergence is called quadratic.

The following figure shows a typical convergence rate profile where we can identify three different regions:
- an exploration region: in this region the solution space is explored to find an initial guess (starting point) for which the convergence properties are guaranteed; there is no significant reduction of the error;
- a convergence region: the basin of attraction of the solution;
- a stagnation region: this last region is due to round-off errors of the floating point system, which are unavoidable.

This figure stresses the fact that the definition of the convergence rate is valid only "in the convergence region", hence it is a local definition.

(Typical behavior of a convergence rate profile)
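A small sketch of how the convergence order p can be estimated from a sequence of errors is reported below; the artificial quadratic error model is an assumption used only for illustration.

// Estimating the order p from errors e_k via p ~ log(e_{k+1}/e_k)/log(e_k/e_{k-1})
// (the errors are generated artificially with the quadratic model e_{k+1} = e_k^2)
e = 1e-1;
err = e;
for k = 1:4
    e = e^2;            // quadratic error model
    err = [err, e];
end

for k = 2:length(err)-1
    p = log(err(k+1)/err(k)) / log(err(k)/err(k-1));
    mprintf("estimated order at step %d: %f\n", k, p);
end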
Nonlinear equations www.openeering.com page 10/25

Step 9: Examples of convergence rates

Let us suppose we are looking for the zero α.

Linear rate: Consider the error model e_{k+1} = C·e_k with C = 0.1 and e_0 = 1. In this case we get the following errors:

   e_0 = 1        zero significant figures
   e_1 = 1e-1     one significant figure
   e_2 = 1e-2     two significant figures
   e_3 = 1e-3     three significant figures

With a linear rate of convergence, the number of significant figures the method gains is constant at each step (a multiple of the iteration number).

Quadratic rate: Consider the error model e_{k+1} = C·e_k² with C = 0.1 and e_0 = 1. In this case we get the following errors:

   e_0 = 1        zero significant figures
   e_1 = 1e-1     one significant figure (one figure gained)
   e_2 = 1e-3     three significant figures (two figures gained)
   e_3 = 1e-7     seven significant figures (four figures gained)

With a quadratic rate of convergence, the number of significant figures the method gains at each iteration is twice that gained at the previous iteration.

A comparison of the typical rates of convergence (when rounding errors are present) is shown in the following figure:

(Comparison between linear, superlinear and quadratic rates of convergence)

The number of figures gained per iteration can be summarized in the following table:

Convergence rate   Figures gained per iteration
Linear             Constant
Superlinear        Increasing
Quadratic          Double

The code of the example is available in the file ex5.sce
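The two error models above can be reproduced with the short sketch below; the constants C = 0.1 and e_0 = 1 are a reconstruction chosen so that the significant-figure counts match the ones listed in the text.

// Linear versus quadratic error models (assumed constants C = 0.1, e_0 = 1)
C  = 0.1;
el = 1;   // error of the linear model
eq = 1;   // error of the quadratic model
mprintf("iter   linear error   quadratic error\n");
for k = 1:4
    el = C*el;      // linear:    e_{k+1} = C*e_k
    eq = C*eq^2;    // quadratic: e_{k+1} = C*e_k^2
    mprintf("%4d   %12.1e   %15.1e\n", k, el, eq);
end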
Nonlinear equations www.openeering.com page 11/25

Step 10: Convergence criteria

When we approximate a solution with an iterative method, it is necessary to choose how to properly stop the algorithm and hence provide the solution. As each evaluation of the function f can be computationally expensive, it is important to avoid unnecessary evaluations (for instance, avoiding evaluations in the stagnation region).

The convergence criteria reported on the right refer to the following problem: find a solution x* such that f(x*) = 0, starting from an initial guess x_0 in the search interval.

The design criteria are based on the absolute or relative error for the variable x or for the value of the function f. The difference between a criterion based on x or on f depends on the conditioning of the nonlinear problem, while a choice based on the absolute or relative error depends on the scaling of the nonlinear equations. In our example we consider the relative errors for f and x, since they are dimensionless, i.e. they allow us to avoid multiplicative constants.

Examples of convergence criteria:
- absolute error between two iterates on x: |x_{k+1} - x_k| < xtol
- relative error between two iterates on x: |x_{k+1} - x_k| / xref < xtol
- absolute residual on f: |f(x_{k+1})| < ftol
- relative residual on f: |f(x_{k+1})| / fref < ftol

Example of implementation of a stopping criterion:

// Check for convergence
if (abs(fxnew)/fref < ftol) | (abs(dx)/xref < xtol)
    // The root is found
    x = xnew;
    fx = fxnew;
end

The code checks the convergence both on f and on x. If we are dealing with an ill-conditioned problem it is likely that f will not converge, so the check on x will stop the iterative method.
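A self-contained sketch of such a test is given below; the helper name has_converged and the reference values are assumptions that mirror the snippet above, not code from the tutorial.

// Minimal relative stopping test (assumed helper, mirroring the snippet above)
function conv=has_converged(dx, fxnew, xref, fref, xtol, ftol)
    // relative step on x small enough OR relative residual on f small enough
    conv = (abs(dx)/xref < xtol) | (abs(fxnew)/fref < ftol);
endfunction

// Example use inside a generic iteration (with x, xnew, f and x0 defined there):
// if has_converged(xnew - x, f(xnew), max(abs(xnew), 1), max(abs(f(x0)), 1), 1e-12, 1e-12)
//     break
// end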
Nonlinear equations www.openeering.com page 12/25

Step 11: Bisection method

Supposing we are looking for a zero of a continuous function, this method starts from an interval [a,b] containing the solution and evaluates the function at the midpoint m = (a+b)/2. Then, according to the sign of the function, it moves to the subinterval [a,m] or [m,b] containing the solution, and it repeats the procedure until convergence.

The main pseudo-code of the algorithm is the following:

Algorithm pseudo-code
while ((b - a) > tol) do
    m = (a + b)/2
    if sign(f(a)) = sign(f(m)) then
        a = m
    else
        b = m
    end
end

The figure on the right refers to the first 4 iterations of the bisection method applied to the example function in the interval [1,2]. The method starts from the initial interval [a,b] = [1,2] and evaluates the function at the midpoint m = 1.5. Since the sign of the function at m = 1.5 is equal to the sign of the function at b = 2, the method moves to the interval [a,m] = [1,1.5], which contains the zero. At the second step, it starts from the interval [a,b] = [1,1.5], evaluates the function at the midpoint m = 1.25 and moves to the interval [1.25, 1.5]. And so on.

The function is available in the file bisection.sci, while the example can be found in bisection_test.sce.

(Example of the first four iterations of the bisection method)
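A runnable sketch of the method is given below; it is a minimal version, not necessarily identical to the bisection.sci shipped with the tutorial, and the test problem x² - 2 = 0 on [1,2] is an assumption.

// Minimal bisection sketch (assumed implementation and test problem)
function [x, niter]=bisect(f, a, b, tol)
    if sign(f(a)) == sign(f(b)) then
        error("bisect: f(a) and f(b) must have opposite signs");
    end
    niter = 0;
    while (b - a) > tol
        niter = niter + 1;
        m = (a + b)/2;
        if sign(f(a)) == sign(f(m)) then
            a = m;          // the zero lies in [m, b]
        else
            b = m;          // the zero lies in [a, m]
        end
    end
    x = (a + b)/2;
endfunction

// Example: zero of x^2 - 2 in [1, 2]
function y=fquad(x)
    y = x^2 - 2;
endfunction
[x, n] = bisect(fquad, 1, 2, 1e-10);
mprintf("x = %.10f after %d iterations\n", x, n);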
Nonlinear equations www.openeering.com page 13/25

Step 12: Convergence of the bisection method

At each iteration of the method the search interval is halved (and still contains the zero), i.e.

   (b_n - a_n) = (b - a)/2^n.

Hence, taking x_n as the midpoint of the n-th interval, the absolute error at the n-th iteration satisfies

   |e_n| = |x_n - α| ≤ (b_n - a_n)/2 = (b - a)/2^(n+1)

and the convergence |e_n| → 0 is guaranteed for n → ∞.

Observe that at each iteration the interval is halved, i.e. (b_n - a_n) = (b_{n-1} - a_{n-1})/2, but this relation does not guarantee that |e_{n+1}| ≤ |e_n| (i.e. monotone convergence to α), as explained in the figure below. However, the rate of convergence of this method is defined as linear.

The figure on the right shows the relative error over the iterations (reported in the table below) of the method applied to the example function in the interval [1,2], where the exact solution is known analytically. As expected, the method gains 1 significant figure every 3-4 iterations.

(Relative error of the bisection method)
(Iterations of the bisection method)
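The error bound above also tells us in advance how many iterations are needed for a given tolerance; the small sketch below is a direct consequence of that bound, not code taken from the tutorial.

// Number of bisection steps needed on [a, b] for a tolerance tol
a = 1; b = 2; tol = 1e-10;
nsteps = ceil(log2((b - a)/tol));
mprintf("about %d bisection steps are needed\n", nsteps);
// Since log2(10) ~ 3.3, roughly 3-4 steps are needed per decimal digit of accuracy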
Nonlinear equations www.openeering.com page 14/25

Step 13: Secant method

Supposing we are looking for a zero of a continuous function, this method starts from two approximations (x_{k-1}, f(x_{k-1})) and (x_k, f(x_k)) of the unknown zero α (f(α) = 0) and computes the new approximation x_{k+1} as the zero of the straight line passing through the two given points.

Hence, x_{k+1} can be obtained by solving the system

   y = f(x_k) + ((f(x_k) - f(x_{k-1}))/(x_k - x_{k-1}))·(x - x_k)
   y = 0

giving

   x_{k+1} = x_k - f(x_k)(x_k - x_{k-1})/(f(x_k) - f(x_{k-1})).

Once the new approximation is computed, we repeat the same procedure with the new points (x_k, f(x_k)) and (x_{k+1}, f(x_{k+1})). The iterative formula is therefore

   x_{k+1} = x_k - f(x_k)(x_k - x_{k-1})/(f(x_k) - f(x_{k-1})).

The figures on the right refer to the first four iterations of the method for the example function with two initial guess values.

The function is available in the file secant.sci, while the example can be found in secant_test.sce.

(First four iterations of the secant method)
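A runnable sketch of the secant iteration is given below; it is a minimal version, not necessarily identical to the secant.sci shipped with the tutorial, and the test problem x² - 2 = 0 with guesses 1 and 2 is an assumption.

// Minimal secant sketch (assumed implementation and test problem)
function [x, niter]=secant_sketch(f, x0, x1, tol, itermax)
    xkm1 = x0; fkm1 = f(x0);
    xk   = x1; fk   = f(x1);
    niter = 0;
    while niter < itermax
        niter = niter + 1;
        xkp1 = xk - fk*(xk - xkm1)/(fk - fkm1);   // secant update
        if abs(xkp1 - xk) < tol then
            break
        end
        xkm1 = xk;   fkm1 = fk;
        xk   = xkp1; fk   = f(xkp1);
    end
    x = xkp1;
endfunction

// Example: zero of x^2 - 2 with initial guesses 1 and 2
function y=fquad2(x)
    y = x^2 - 2;
endfunction
[x, n] = secant_sketch(fquad2, 1, 2, 1e-12, 50);
mprintf("x = %.12f after %d iterations\n", x, n);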
Nonlinear equations www.openeering.com page 15/25

Step 14: Convergence of the secant method

The main pseudo-code of the algorithm is the following:

Algorithm
xkm1 = x0; fkm1 = f(x0)   // Step: k-1
xk = x1;   fk = f(x1)     // Step: k
xkp1 = xk                 // Initialization
iter = 1                  // Current iteration
while iter <= itermax do
    iter = iter + 1
    xkp1 = xk - (fk*(xk - xkm1))/(fk - fkm1)
    if abs(xkp1 - xk) < tol break   // Convergence test
    xkm1 = xk; fkm1 = fk
    xk = xkp1; fk = f(xkp1)
end

The algorithm iterates until convergence or until the maximum number of iterations is reached. At each iteration only one function evaluation is required. The "break" statement terminates the execution of the while loop.

For the secant method it is possible to prove the following result: if the function is continuous with continuous derivatives up to order 2 near the zero, the zero is simple (has multiplicity 1) and the initial guesses x_0 and x_1 are picked in a neighborhood of the zero, then the method converges and the convergence rate is equal to p = (1+√5)/2 ≈ 1.618 (superlinear).

The figure on the right shows the relative error over the iterations (reported in the table below) of the method applied to the example function in the interval [1,2], where the exact solution is known analytically.

(Relative error of the secant method)
(Iterations of the secant method)
Nonlinear equations www.openeering.com page 16/25

Step 15: Newton method

Supposing we are looking for a zero of a continuous function with continuous derivatives, this method starts from an approximation x_k of the unknown zero and computes the new approximation x_{k+1} as the zero of the straight line passing through the point (x_k, f(x_k)) and tangent to the function.

Hence, x_{k+1} can be obtained by solving the system

   y = f(x_k) + f'(x_k)(x - x_k)
   y = 0

giving

   x_{k+1} = x_k - f(x_k)/f'(x_k).

Once the new approximation is computed, we repeat the same procedure with the new initial point x_{k+1}. The iterative formula is then

   x_{k+1} = x_k - f(x_k)/f'(x_k).

The figures on the right refer to the first four iterations of the method for the example function with a given initial guess value.

The function is available in the file newton.sci, while all the examples related to this method can be found in newton_test.sce.

(First four iterations of the Newton method)
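A runnable sketch of the Newton iteration is given below; it is a minimal version, not necessarily identical to the newton.sci shipped with the tutorial. The derivative is passed explicitly as a second function handle, and the test problem x² - 2 = 0 is an assumption.

// Minimal Newton sketch (assumed implementation and test problem)
function [x, niter]=newton_sketch(f, df, x0, tol, itermax)
    xk = x0;
    niter = 0;
    while niter < itermax
        niter = niter + 1;
        xkp1 = xk - f(xk)/df(xk);   // Newton update
        if abs(xkp1 - xk) < tol then
            break
        end
        xk = xkp1;
    end
    x = xkp1;
endfunction

// Example: zero of x^2 - 2 with derivative 2x
deff('y=g(x)', 'y = x^2 - 2');
deff('y=dg(x)', 'y = 2*x');
[x, n] = newton_sketch(g, dg, 1.5, 1e-14, 50);
mprintf("x = %.14f after %d iterations\n", x, n);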
Nonlinear equations www.openeering.com page 17/25

Step 16: Convergence of the Newton method

The main pseudo-code of the algorithm is the following:

Algorithm
xk = x0; iter = 1   // Current iteration
while iter <= itermax do
    iter = iter + 1
    xkp1 = xk - f(xk)/f'(xk)
    if abs(xkp1 - xk) < tol break   // Convergence test
    xk = xkp1
end

The algorithm iterates until convergence or until the maximum number of iterations is reached. At each iteration a function evaluation and a derivative evaluation are required. The "break" statement terminates the execution of the while loop.

For the Newton method it is possible to prove the following result: if the function f is continuous with continuous derivatives up to order 2 near the zero, the zero is simple (has multiplicity 1) and the initial guess x_0 is picked in a neighborhood of the zero, then the method converges and the convergence rate is equal to p = 2 (quadratic).

The figure on the right shows the relative error over the iterations (reported in the table below) of the method applied to the example function in the interval [1,2], where the exact solution is known analytically. As expected, the number of significant figures doubles at each iteration.

(Relative error of the Newton method)
(Iterations of the Newton method)
Nonlinear equations www.openeering.com page 18/25

Step 17: Newton method (loss of quadratic convergence)

A zero α is said to be multiple with multiplicity m if

   f(α) = f'(α) = ... = f^(m-1)(α) = 0 and f^(m)(α) ≠ 0.

In the Newton method, if the zero is multiple, the convergence rate decreases from quadratic to linear.

As an example, consider the function f(x) = x^m with zero α = 0. Then the Newton iteration can be stated as

   x_{k+1} = x_k - x_k^m/(m·x_k^(m-1)) = (1 - 1/m)·x_k,

giving the error

   e_{k+1} = (1 - 1/m)·e_k,

which is clearly linear.

On the right we report an example of loss of quadratic convergence applied to a function which has a zero of multiplicity 2. The relative error shows a linear behavior: indeed, the method gains a constant number of significant figures every 3 or 4 iterations.

(Function with a zero of multiplicity 2)
(Loss of quadratic convergence)
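The loss of quadratic convergence can be observed numerically with the sketch below; the function (x - 1)², with a double zero at x = 1, is an illustrative assumption and is not necessarily the one used in the tutorial.

// Plain Newton on a double zero: the error is only halved at each step,
// in agreement with e_{k+1} = (1 - 1/m) e_k for m = 2
deff('y=fm(x)',  'y = (x - 1)^2');
deff('y=dfm(x)', 'y = 2*(x - 1)');

x = 2;
for k = 1:10
    x = x - fm(x)/dfm(x);                  // plain Newton step
    mprintf("iter %2d: error = %e\n", k, abs(x - 1));
end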
Nonlinear equations www.openeering.com page 19/25

Step 18: Newton method (global convergence)

If the function of which we are looking for the zeros and its derivatives up to order 2 are continuous, it is possible to ensure the global convergence of the Newton method by choosing a proper initial guess point.

As reported in the table on the right, under the above mentioned hypotheses it is possible to identify a neighborhood of the zero such that, for each initial guess in it, the sequence is monotonically decreasing (or increasing) to the zero. For instance, if the function is convex and increasing (second row and second column in the table), the Newton method with an initial guess picked on the right of the zero converges, monotonically decreasing, to the unknown zero.

The Newton method with the above choosing criterion also ensures that all the iterates are well defined, i.e. the method does not generate any point in which the function is not defined (an example in which the monotonicity of the convergence is important is the logarithm function, which is not defined for negative values).

(How to choose the initial guess for global convergence)

Function properties   f' < 0 (decreasing)   f' > 0 (increasing)
f'' < 0 (concave)     Right domain          Left domain
f'' > 0 (convex)      Left domain           Right domain
Nonlinear equations www.openeering.com page 20/25

Step 19: Fixed point iteration method

The fixed point iteration method transforms the original problem f(x) = 0 into an equivalent problem x = g(x) and solves it using an iterative scheme of the form

   x_{k+1} = g(x_k).

If the iterative scheme converges to a value α, i.e. α = g(α), then α is also a zero of f(x), since the two formulations share the same solutions.

Solving the equation x = g(x) is equivalent to solving the system

   y = g(x)
   y = x

On the right we report some graphical examples of iterations applied to 6 different functions g. The three examples on the left show cases in which the method converges to the unknown zero, while in the examples on the right there is no convergence of the method, even if the functions seem to be quite similar to the ones on the left.

Note: The Newton method is a particular case of the fixed point iteration method, with

   g(x) = x - f(x)/f'(x).

(Graphical examples of iterations applied to 6 different functions g)
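A runnable sketch of the fixed point iteration is given below; it is a minimal version, not necessarily identical to the fixedpoint.sci shipped with the tutorial, and the map g(x) = cos(x) is an assumed test problem.

// Minimal fixed point iteration sketch (assumed implementation and test problem)
function [x, niter]=fixedpoint_sketch(g, x0, tol, itermax)
    xk = x0;
    niter = 0;
    while niter < itermax
        niter = niter + 1;
        xkp1 = g(xk);                // fixed point update x_{k+1} = g(x_k)
        if abs(xkp1 - xk) < tol then
            break
        end
        xk = xkp1;
    end
    x = xkp1;
endfunction

// Example: x = cos(x) has a unique fixed point near 0.739
deff('y=gcos(x)', 'y = cos(x)');
[x, n] = fixedpoint_sketch(gcos, 1, 1e-12, 200);
mprintf("x = %.12f after %d iterations\n", x, n);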
Nonlinear equations www.openeering.com page 21/25

Step 20: Fixed point iteration method - example #1

The example on the right refers to the example function, for which a fixed point iteration x_{k+1} = g(x_k) is considered. Here the relative error shows a linear behavior of the method, while on the right we can see the first four iterations of the method.

(Relative error and iterations of the fixed point iteration method)

The function is available in the file fixedpoint.sci, while the example can be found in fixedpoint_test.sce.

(First four iterations of the fixed point iteration method)
Nonlinear equations www.openeering.com page 22/25

Step 21: Convergence of the fixed point iteration method

It is possible to prove that the sequence x_k converges to α if |g'(x)| < 1 (i.e. g is a contraction) in an interval containing the initial guess, and if g maps the interval into itself (i.e. g([a,b]) ⊆ [a,b]). Moreover, in this case, it is possible to prove that α is the unique solution in the interval of the equation x = g(x). On the contrary, if |g'(x)| > 1 in the whole interval, the sequence does not converge to the solution (even if we start very close to the zero α).

The convergence rate of the fixed point iteration method is:
- if g'(α) ≠ 0, the method has a linear convergence rate;
- if g'(α) = 0 and g''(α) ≠ 0, the method has a quadratic convergence rate.

Example: The fixed point formulation of the Newton method, with fixed point function g(x) = x - f(x)/f'(x), where f(α) = 0 and f'(α) ≠ 0, shows a quadratic rate of convergence: indeed we have g'(α) = 0 and g''(α) = f''(α)/f'(α) (since g'(x) = f(x)f''(x)/f'(x)²).

Proof of convergence (idea)
Using the Taylor formula we have

   e_{k+1} = x_{k+1} - α = g(x_k) - g(α) = g'(ξ_k)(x_k - α) = g'(ξ_k)e_k
           = g'(ξ_k)g'(ξ_{k-1})e_{k-1} = ... = g'(ξ_k)g'(ξ_{k-1})...g'(ξ_0)e_0,

where the ξ_i are unknown values in the intervals between x_i and α. If the derivative is bounded by |g'(x)| ≤ K < 1, the error can be written as

   |e_{k+1}| ≤ K^(k+1)|e_0|,

i.e. |e_{k+1}| → 0, where e_0 is the initial error.

Proof of convergence rate (idea)
Using the Taylor formula we have

   x_{k+1} = g(x_k) = g(α) + g'(α)(x_k - α) + (1/2)g''(η_k)(x_k - α)²,

where η_k lies between x_k and α. If g'(α) ≠ 0 we have a linear rate of convergence of the sequence, i.e. |e_{k+1}| ≈ C|e_k| with C = |g'(α)|. If g'(α) = 0 and g''(α) ≠ 0 we have a quadratic rate of convergence of the sequence, i.e. |e_{k+1}| ≈ C|e_k|² with C = |g''(α)|/2.
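A quick numerical check of the contraction condition is sketched below; the map g(x) = cos(x) and its fixed point are an assumed example, not taken from the tutorial files.

// Check |g'(x)| < 1 near a fixed point with a finite-difference estimate
deff('y=gmap(x)', 'y = cos(x)');
xstar = 0.7390851332;      // approximate fixed point of cos(x)
h = 1e-6;
dg = (gmap(xstar + h) - gmap(xstar - h)) / (2*h);   // central difference
mprintf("estimated derivative magnitude at the fixed point: %f\n", abs(dg));
if abs(dg) < 1 then
    mprintf("contraction: the fixed point iteration converges (linearly) near the fixed point\n");
else
    mprintf("not a contraction: the fixed point iteration does not converge near the fixed point\n");
end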
Nonlinear equations www.openeering.com page 23/25

Step 22: Fixed point iteration method - example #2

This example refers to the zero of the example function. Starting from this function it is possible to write different kinds of fixed point iterations x_{k+1} = g(x_k):

1. a first choice of g shows a linear behavior of convergence (blue line), since 0 < |g'(α)| < 1;
2. a second choice of g does not converge (red line), since |g'(α)| > 1;
3. a third choice of g shows a quadratic rate of convergence (purple line), since g'(α) = 0.

The initial guess is the same for the three iterations. Also this second example can be found in the file fixedpoint_test.sce.

(Original function)
(Relative error for different fixed point iterations)
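The effect of the choice of g can be reproduced with the sketch below; the problem x² - 2 = 0 and the three maps are illustrative assumptions, not the ones used in fixedpoint_test.sce.

// Three fixed point maps for the assumed problem x^2 - 2 = 0 (zero sqrt(2))
deff('y=g1(x)', 'y = x - (x^2 - 2)/4');     // |g1 derivative| < 1 at the zero -> linear
deff('y=g2(x)', 'y = x^2 + x - 2');         // |g2 derivative| > 1 at the zero -> diverges
deff('y=g3(x)', 'y = (x + 2/x)/2');         // Newton map -> quadratic

alpha = sqrt(2);
x1 = 1.5; x2 = 1.5; x3 = 1.5;   // same initial guess for the three iterations
for k = 1:5
    x1 = g1(x1); x2 = g2(x2); x3 = g3(x3);
    mprintf("iter %d: err1 = %8.1e, err2 = %8.1e, err3 = %8.1e\n", ..
            k, abs(x1 - alpha), abs(x2 - alpha), abs(x3 - alpha));
end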
Nonlinear equations www.openeering.com page 24/25

Step 23: Comparison of the methods - an example

This example refers to the zero of the nonlinear test function shown on the right. In the figure we can see a typical result for the relative errors, consistent with the rates of convergence discussed earlier for the different methods.

At a first glance, one might think that the Newton method should always be chosen, since its convergence is the best one, but it has to be considered that this method needs the computation of the derivative, which could require additional evaluations of the original function and can be numerically very expensive.

The source code of this example can be found in the file main_fcn.sce.

(Nonlinear test function)
(Relative error of the 4 methods)
Nonlinear equations www.openeering.com page 25/25

Step 24: Concluding remarks and References

In this tutorial we have collected a series of numerical examples written in Scilab to tackle the problem of finding the zeros of scalar nonlinear equations with the main methods on which state-of-the-art algorithms are based. In the right-hand column you may find a list of references for further studies.

1. Scilab Web Page: www.scilab.org.
2. Openeering: www.openeering.com.
3. K. Atkinson, An Introduction to Numerical Analysis, John Wiley, New York, 1989.

Step 25: Software content

To report a bug or suggest an improvement please contact the Openeering team at the web site www.openeering.com.

Thank you for your attention,
Anna Bassi and Manolo Venturin

--------------
Main directory
--------------
ex1.sce             : Examples of zero-finding problems
ex2.sce             : Examples of separation and fixed point
ex3.sce             : Example on conditioning
ex4.sce             : Example of inverse function
ex5.sce             : Examples of convergence rates
fintsearch.sci      : Search intervals
fintsearch_test.sce : Test for search intervals
bisection.sci       : Bisection method
bisection_test.sce  : Test for the bisection method
secant.sci          : Secant method
secant_test.sce     : Test for the secant method
newton.sci          : Newton-Raphson method
newton_test.sce     : Test for the Newton-Raphson method
fixedpoint.sci      : Fixed point iteration method
fixedpoint_test.sce : Tests for the fixed point iteration method
main_fcn.sce        : Comparison of methods
license.txt         : License file