CHAPTER 3

                         Method of Successive Approximations

3.1. Iterative Scheme
Ordinary first-order differential equations can be solved by the well-known
Picard method of successive approximations. An iterative scheme based on the
same principle is also available for linear integral equations of the second kind:

    g(s) = f(s) + λ ∫ K(s, t) g(t) dt .                              (3.1.1)

In this chapter, we present this method. We assume the functions f(s) and
K(s, t) to be ℒ₂-functions as defined in Chapter 1.
     As a zero-order approximation to the desired function g(s), the function g₀(s),

    g₀(s) = f(s) ,                                                   (3.1.2)

is taken. This is substituted into the right side of Equation (3.1.1) to give the
first-order approximation

    g₁(s) = f(s) + λ ∫ K(s, t) g₀(t) dt .                            (3.1.3)

This function, when substituted into Equation (3.1.1), yields the second approx-
imation. This process is then repeated; the (n + 1)th approximation is obtained
by substituting the nth approximation in the right side of (3.1.1). There results
the recurrence relation

    g_{n+1}(s) = f(s) + λ ∫ K(s, t) g_n(t) dt .                      (3.1.4)

    If g_n(s) tends uniformly to a limit as n → ∞, then this limit is the required
solution. To study such a limit, let us examine the iterative procedure (3.1.4) in
detail. The first- and second-order approximations are

    g₁(s) = f(s) + λ ∫ K(s, t) f(t) dt                               (3.1.5)

and

    g₂(s) = f(s) + λ ∫ K(s, t) f(t) dt
                 + λ² ∫ K(s, t) [ ∫ K(t, x) f(x) dx ] dt .           (3.1.6)

R.P. Kanwal, Linear Integral Equations: Theory & Technique, Modern Birkhäuser Classics,        25
DOI 10.1007/978-1-4614-6012-1_3, © Springer Science+Business Media New York 2013


This formula can be simplified by setting

    K₂(s, t) = ∫ K(s, x) K(x, t) dx                                  (3.1.7)

and by changing the order of integration. The result is

    g₂(s) = f(s) + λ ∫ K(s, t) f(t) dt + λ² ∫ K₂(s, t) f(t) dt .     (3.1.8)

Similarly,

    g₃(s) = f(s) + λ ∫ K(s, t) f(t) dt
                 + λ² ∫ K₂(s, t) f(t) dt + λ³ ∫ K₃(s, t) f(t) dt ,   (3.1.9)

where

    K₃(s, t) = ∫ K(s, x) K₂(x, t) dx .                               (3.1.10)

By continuing this process, and denoting

    K_m(s, t) = ∫ K(s, x) K_{m−1}(x, t) dx ,                         (3.1.11)

we get the nth approximate solution of integral equation (3.1.1) as

    g_n(s) = f(s) + Σ_{m=1}^{n} λ^m ∫ K_m(s, t) f(t) dt .            (3.1.12)

We call the expression K_m(s, t) the mth iterate, where K₁(s, t) = K(s, t).
Passing to the limit as n → ∞, we obtain the so-called Neumann series

    g(s) = lim_{n→∞} g_n(s) = f(s) + Σ_{m=1}^{∞} λ^m ∫ K_m(s, t) f(t) dt .   (3.1.13)

    It remains to determine the conditions under which this convergence is
achieved. For this purpose, we attend to the partial sum (3.1.12) and apply the
Schwarz inequality (1.6.3) to the general term of this sum. This gives

    |∫ K_m(s, t) f(t) dt|² ≤ ( ∫ |K_m(s, t)|² dt ) ∫ |f(t)|² dt .    (3.1.14)

Let D be the norm of f,

    D² = ∫ |f(t)|² dt ,                                              (3.1.15)


and let C_m² denote the upper bound of the integral

    ∫ |K_m(s, t)|² dt .

Then, the inequality (3.1.14) becomes

    |∫ K_m(s, t) f(t) dt|² ≤ C_m² D² .                               (3.1.16)

The next step is to connect the estimate C_m with the estimate C₁. This is achieved
by applying the Schwarz inequality to the relation (3.1.11):

    |K_m(s, t)|² ≤ ∫ |K_{m−1}(s, x)|² dx ∫ |K(x, t)|² dx ,

which, when integrated with respect to t, yields

    C_m² ≤ B² C_{m−1}² ,                                             (3.1.17)

where

    B² = ∫∫ |K(x, t)|² dx dt .                                       (3.1.18)
The inequality (3.1.17) sets up the recurrence relation

    C_m² ≤ B^{2m−2} C₁² .                                            (3.1.19)

From Equations (3.1.16) and (3.1.19), we have the inequality

    |∫ K_m(s, t) f(t) dt|² ≤ C₁² D² B^{2m−2} .                       (3.1.20)

Therefore, the general term of the partial sum (3.1.12) has a magnitude less
than the quantity D C₁ |λ|^m B^{m−1}, and it follows that the infinite series (3.1.13)
converges faster than the geometric series with common ratio |λ|B. Hence, if

    |λ|B < 1 ,                                                       (3.1.21)
the uniform convergence of this series is assured.
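The iteration (3.1.4) and the bound (3.1.21) are easy to try numerically. Below is a minimal sketch, assuming the illustrative choices K(s, t) = st on (0, 1), f(s) = 1, and λ = 1 (so that |λ|B = 1/3 < 1), with plain midpoint quadrature; the closed form used for comparison is the one quoted in Exercise 2 at the end of the chapter.

```python
# A minimal numerical sketch of the recurrence (3.1.4), assuming the
# illustrative choices K(s,t) = s*t on (0,1), f(s) = 1 and lambda = 1,
# for which |lambda|*B = 1/3 < 1.  Plain midpoint quadrature throughout.
N = 200
h = 1.0 / N
nodes = [(i + 0.5) * h for i in range(N)]
lam = 1.0
K = lambda s, t: s * t
f = lambda s: 1.0

# B^2 is the double integral of |K|^2, as in (3.1.18)
B = (sum(K(s, t) ** 2 for s in nodes for t in nodes) * h * h) ** 0.5
print("lambda*B =", lam * B)                 # about 1/3, inside (3.1.21)

g = [f(s) for s in nodes]                    # zeroth approximation g0 = f
for _ in range(30):                          # iterate (3.1.4)
    g = [f(s) + lam * h * sum(K(s, t) * gt for t, gt in zip(nodes, g))
         for s in nodes]

# closed form quoted in Exercise 2: g(s) = 1 + 3*lam*s / (2*(3 - lam))
err = max(abs(gi - (1.0 + 3 * lam * s / (2 * (3 - lam))))
          for gi, s in zip(g, nodes))
print("max deviation from closed form:", err)
```

The deviation that remains is due only to the quadrature; the iterates themselves contract geometrically with ratio |λ|B.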
     We now prove that, for given λ, Equation (3.1.1) has a unique solution.
Suppose the contrary, and let g₁(s) and g₂(s) be two solutions of Equation
(3.1.1):

    g₁(s) = f(s) + λ ∫ K(s, t) g₁(t) dt ,

    g₂(s) = f(s) + λ ∫ K(s, t) g₂(t) dt .


By subtracting these equations and setting g₁(s) − g₂(s) = φ(s), there results
the homogeneous integral equation

    φ(s) = λ ∫ K(s, t) φ(t) dt .

Apply the Schwarz inequality to this equation and get

    |φ(s)|² ≤ |λ|² ∫ |K(s, t)|² dt ∫ |φ(t)|² dt ,

or

    ∫ |φ(s)|² ds ≤ |λ|² B² ∫ |φ(t)|² dt .                            (3.1.22)

In view of the inequality (3.1.21) and the nature of the function φ(s) = g₁(s) −
g₂(s), we readily conclude that φ(s) ≡ 0, that is, g₁(s) ≡ g₂(s).
     What is the estimate of the error for neglecting terms after the nth term in
the Neumann series (3.1.13)? This is found by writing this series as

    g(s) = f(s) + Σ_{m=1}^{n} λ^m ∫ K_m(s, t) f(t) dt + R_n(s) .     (3.1.23)

Then, it follows from the preceding analysis that

    |R_n(s)| ≤ D C₁ |λ|^{n+1} B^n / (1 − |λ|B) .                     (3.1.24)
     Finally, we can evaluate the resolvent kernel as defined in the previous
chapter, in terms of the iterated kernels K_m(s, t). Indeed, by changing the order
of integration and summation in the Neumann series (3.1.13), we obtain

    g(s) = f(s) + λ ∫ [ Σ_{m=1}^{∞} λ^{m−1} K_m(s, t) ] f(t) dt .

Comparing this with (2.3.7),

    g(s) = f(s) + λ ∫ Γ(s, t; λ) f(t) dt ,                           (3.1.25)

we have

    Γ(s, t; λ) = Σ_{m=1}^{∞} λ^{m−1} K_m(s, t) .                     (3.1.26)

From the previous discussion, we can infer (see Section 3.5) that the series
(3.1.26) is also convergent at least for |λ|B < 1. Hence, the resolvent kernel is
an analytic function of λ, regular at least inside the circle |λ| < B⁻¹.
     From the uniqueness of the solution of (3.1.1), we can prove that the resol-
vent kernel Γ(s, t; λ) is unique. In fact, let Equation (3.1.1) have, with λ = λ₀,


two resolvent kernels Γ₁(s, t; λ₀) and Γ₂(s, t; λ₀). In view of the uniqueness of
the solution of (3.1.1), an arbitrary function f(s) satisfies the identity

    f(s) + λ₀ ∫ Γ₁(s, t; λ₀) f(t) dt = f(s) + λ₀ ∫ Γ₂(s, t; λ₀) f(t) dt .   (3.1.27)

Setting Δ(s, t; λ₀) = Γ₁(s, t; λ₀) − Γ₂(s, t; λ₀), we have, from Equation (3.1.27),

    ∫ Δ(s, t; λ₀) f(t) dt = 0 ,

for an arbitrary function f(t). Let us take f(t) = Δ*(s, t; λ₀), with fixed s. This
implies that

    ∫ |Δ(s, t; λ₀)|² dt = 0 ,

which means that Δ(s, t; λ₀) ≡ 0, proving the uniqueness of the resolvent kernel.
    The preceding analysis can be summed up in the following basic theorem.


Theorem. To each ℒ₂-kernel K(s, t), there corresponds a unique resolvent
kernel Γ(s, t; λ) which is an analytic function of λ, regular at least inside the
circle |λ| < B⁻¹, and represented by the power series (3.1.26). Furthermore,
if f(s) is also an ℒ₂-function, then the unique ℒ₂-solution of the Fredholm
equation (3.1.1), valid in the circle |λ| < B⁻¹, is given by the formula (3.1.25).

     The method of successive approximations has many drawbacks. In addition
to being a cumbersome process, the Neumann series, in general, cannot be
summed in closed form. Furthermore, a solution of the integral Equation (3.1.1)
may exist even if |λ|B > 1, as evidenced in the previous chapter. In fact, we
saw that the resolvent kernel is a quotient of two polynomials of nth degree in
λ, and therefore the only possible singular points of Γ(s, t; λ) are the roots of
the denominator D(λ) = 0. But, for |λ|B > 1, the Neumann series does not
converge and as such does not provide the desired solution. We have more to
say about these ideas in the next chapter.



3.2. Examples

Example 1. Solve the integral equation

    g(s) = f(s) + λ ∫₀¹ e^{s−t} g(t) dt .                            (3.2.1)


     Following the method of the previous section, we have

    K₁(s, t) = e^{s−t} ,

    K₂(s, t) = ∫₀¹ e^{s−x} e^{x−t} dx = e^{s−t} .


Proceeding in this way, we find that all the iterated kernels coincide with K(s, t).
Using Equation (3.1.26), we obtain the resolvent kernel as

    Γ(s, t; λ) = K(s, t)(1 + λ + λ² + ···) = e^{s−t}/(1 − λ) .       (3.2.2)

Although the series (1 + λ + λ² + ···) converges only for |λ| < 1, the resolvent
kernel is, in fact, an analytic function of λ, regular in the whole plane except at
the point λ = 1, which is a simple pole of the kernel Γ. The solution g(s) then
follows from (3.1.25):

    g(s) = f(s) − [λ/(λ − 1)] ∫₀¹ e^{s−t} f(t) dt .                  (3.2.3)
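The closed form (3.2.3) can be checked by substituting it back into (3.2.1). A minimal sketch, assuming the illustrative choices f(s) = s and λ = 1/2 (any f and any λ ≠ 1 would do):

```python
import math

# Residual check of the closed-form solution (3.2.3) for the illustrative
# choices f(s) = s and lambda = 1/2, using midpoint quadrature on (0,1).
N = 400
h = 1.0 / N
nodes = [(i + 0.5) * h for i in range(N)]
lam = 0.5
f = lambda s: s

I = h * sum(math.exp(-t) * f(t) for t in nodes)               # int e^{-t} f(t) dt
g = lambda s: f(s) - (lam / (lam - 1.0)) * math.exp(s) * I    # formula (3.2.3)

# g should satisfy g(s) = f(s) + lam * int e^{s-t} g(t) dt, i.e. (3.2.1)
resid = max(abs(g(s) - f(s)
                - lam * h * sum(math.exp(s - t) * g(t) for t in nodes))
            for s in nodes)
print("max residual in (3.2.1):", resid)
```

Because the kernel is separable, the quadrature errors cancel and the residual is essentially at machine precision.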

Example 2. Solve the Fredholm integral equation

    g(s) = 1 + λ ∫₀¹ (1 − 3st) g(t) dt                               (3.2.4)

and evaluate the resolvent kernel.
    Starting with g₀(s) = 1, we have

    g₁(s) = 1 + λ ∫₀¹ (1 − 3st) dt = 1 + λ(1 − (3/2)s) ,

    g₂(s) = 1 + λ ∫₀¹ (1 − 3st)[1 + λ(1 − (3/2)t)] dt
          = 1 + λ(1 − (3/2)s) + (1/4)λ² ,

or, continuing the process,

    g(s) = [1 + λ(1 − (3/2)s)][1 + (λ²/4) + (λ²/4)² + ···] .         (3.2.5)

The geometric series in Equation (3.2.5) is convergent provided |λ| < 2. Then,

    g(s) = [4 + 2λ(2 − 3s)]/(4 − λ²) ,                               (3.2.6)
3.2. Examples                               31


and precisely the same remarks apply to the region of validity of this solution
as given in Example 1.
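The closed form (3.2.6) can also be reached numerically by iterating (3.2.4) directly; a sketch with the illustrative value λ = 1 (inside |λ| < 2) and midpoint quadrature:

```python
# Successive approximations for g(s) = 1 + lam * int (1 - 3st) g(t) dt,
# compared against the closed form (3.2.6); the illustrative lam = 1 lies
# inside the circle |lam| < 2.
N = 200
h = 1.0 / N
nodes = [(i + 0.5) * h for i in range(N)]
lam = 1.0

g = [1.0] * N                                      # g0(s) = 1
for _ in range(40):
    a = h * sum(g)                                 # int_0^1 g(t) dt
    b = h * sum(t * gt for t, gt in zip(nodes, g)) # int_0^1 t g(t) dt
    g = [1.0 + lam * (a - 3.0 * s * b) for s in nodes]

exact = [(4.0 + 2.0 * lam * (2.0 - 3.0 * s)) / (4.0 - lam ** 2)
         for s in nodes]
err = max(abs(x - y) for x, y in zip(g, exact))
print("max deviation from (3.2.6):", err)
```

The separable kernel reduces each sweep to two quadratures, which is why the iteration is so cheap here.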
      To evaluate the resolvent kernel, we find the iterated kernels.



    K₁(s, t) = 1 − 3st ,

    K₂(s, t) = ∫₀¹ (1 − 3sx)(1 − 3xt) dx = 1 − (3/2)(s + t) + 3st ,

    K₃(s, t) = ∫₀¹ (1 − 3sx)[1 − (3/2)(x + t) + 3xt] dx
             = (1/4)(1 − 3st) = (1/4)K₁(s, t) .

Similarly,

    K₄(s, t) = (1/4)K₂(s, t) ,

and

    K₅(s, t) = (1/16)K₁(s, t) ,   K₆(s, t) = (1/16)K₂(s, t) ,   and so on.

Hence,

    Γ(s, t; λ) = K₁ + λK₂ + λ²K₃ + ···
               = (1 + (1/4)λ² + (1/16)λ⁴ + ···)K₁ + λ(1 + (1/4)λ² + (1/16)λ⁴ + ···)K₂
               = [(1 + λ) − (3/2)λ(s + t) − 3(1 − λ)st] / (1 − (1/4)λ²) ,   |λ| < 2.
                                                                     (3.2.7)


Example 3. Solve the integral equation



    g(s) = 1 + λ ∫₀^π [sin(s + t)] g(t) dt .                         (3.2.8)


Let us first evaluate the iterated kernels in this example:

    K₁(s, t) = K(s, t) = sin(s + t) ,

    K₂(s, t) = ∫₀^π [sin(s + x)] sin(x + t) dx
             = (π/2)[sin s sin t + cos s cos t] = (π/2) cos(s − t) ,

    K₃(s, t) = (π/2) ∫₀^π [sin(s + x)] cos(x − t) dx
             = (π/2) ∫₀^π (sin s cos x + sin x cos s)(cos x cos t + sin x sin t) dx
             = (π/2)² [sin s cos t + cos s sin t] = (π/2)² sin(s + t) .

Proceeding in this manner, we obtain

    K₄(s, t) = (π/2)³ cos(s − t) ,

    K₅(s, t) = (π/2)⁴ sin(s + t) ,

    K₆(s, t) = (π/2)⁵ cos(s − t) ,   etc.
Substituting these values in the formula (3.1.13) and integrating, there results
the solution

    g(s) = 1 + 2λ(cos s)[1 + (πλ/2)² + (πλ/2)⁴ + ···]
             + λ²π(sin s)[1 + (πλ/2)² + (πλ/2)⁴ + ···] ,             (3.2.9)

or

    g(s) = 1 + [2λ cos s + λ²π sin s] / [1 − (πλ/2)²] .              (3.2.10)
Since

    B² = ∫₀^π ∫₀^π sin²(s + t) ds dt = π²/2 ,

the interval of convergence of the series (3.2.9) lies between −√2/π and √2/π.
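The closed form (3.2.10) can be checked by substituting it back into (3.2.8); a sketch with the illustrative value λ = 0.4, inside the convergence interval:

```python
import math

# Residual check of the closed form (3.2.10) for Example 3 at the
# illustrative value lam = 0.4, with midpoint quadrature on (0, pi).
N = 400
h = math.pi / N
nodes = [(i + 0.5) * h for i in range(N)]
lam = 0.4

S = 1.0 / (1.0 - (math.pi * lam / 2.0) ** 2)      # geometric-series sum
g = lambda s: 1.0 + S * (2.0 * lam * math.cos(s)
                         + math.pi * lam ** 2 * math.sin(s))

# g should satisfy g(s) = 1 + lam * int_0^pi sin(s+t) g(t) dt, i.e. (3.2.8)
resid = max(abs(g(s) - 1.0
                - lam * h * sum(math.sin(s + t) * g(t) for t in nodes))
            for s in nodes)
print("max residual in (3.2.8):", resid)
```

The residual that remains is the midpoint-rule quadrature error, which shrinks as O(h²).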
Example 4. Prove that the mth iterated kernel K_m(s, t) satisfies the following
relation:
    K_m(s, t) = ∫ K_r(s, x) K_{m−r}(x, t) dx ,                       (3.2.11)

where r is any positive integer less than m.
    By successive applications of Equation (3.1.11), we have

    K_m(s, t) = ∫···∫ K(s, x₁) K(x₁, x₂) ··· K(x_{m−1}, t) dx_{m−1} ··· dx₁ .
                                                                     (3.2.12)

Thus, K_m(s, t) is an (m − 1)-fold integral. Similarly, K_r(s, x) and K_{m−r}(x, t)
are (r − 1)- and (m − r − 1)-fold integrals. This means that

    ∫ K_r(s, x) K_{m−r}(x, t) dx

is an (m − 1)-fold integral, and the result follows.
     One can take the zeroth approximation g₀(s) different from f(s), as we demon-
strate by the following example.
Example 5. Solve the inhomogeneous Fredholm integral equation of the second
kind,

    g(s) = 2s + λ ∫₀¹ (s + t) g(t) dt ,                              (3.2.13)

by the method of successive approximations to the third order.
     For this equation, we take g₀(s) = 1. Then,

    g₁(s) = 2s + λ ∫₀¹ (s + t) dt = 2s + λ(s + 1/2) ,

    g₂(s) = 2s + λ ∫₀¹ (s + t){2t + λ[t + (1/2)]} dt
          = 2s + λ[s + (2/3)] + λ²[s + (7/12)] ,

    g₃(s) = 2s + λ ∫₀¹ (s + t){2t + λ[t + (2/3)] + λ²[t + (7/12)]} dt
          = 2s + λ[s + (2/3)] + λ²[(7/6)s + (2/3)] + λ³[(13/12)s + (5/8)] .
                                                                     (3.2.14)
    From Example 2 of Section 2.2, we find the exact solution to be

    g(s) = [12(2 − λ)s + 8λ]/(12 − 12λ − λ²) ,                       (3.2.15)

and the comparison of Equations (3.2.14) and (3.2.15) is left to the reader.
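That comparison can be sketched numerically; at a small illustrative value such as λ = 0.1, the third iterate (3.2.14) and the exact solution (3.2.15) should agree closely:

```python
# Comparison of the third iterate (3.2.14) with the exact solution (3.2.15)
# at the illustrative value lam = 0.1, over a grid of points in [0, 1].
lam = 0.1
g3 = lambda s: (2 * s + lam * (s + 2 / 3)
                + lam ** 2 * ((7 / 6) * s + 2 / 3)
                + lam ** 3 * ((13 / 12) * s + 5 / 8))
exact = lambda s: (12 * (2 - lam) * s + 8 * lam) / (12 - 12 * lam - lam ** 2)

diff = max(abs(g3(s) - exact(s)) for s in [i / 50 for i in range(51)])
print("max difference on [0, 1]:", diff)
```

The residual difference reflects the as-yet-unconverged λ³ term (recall g₀ = 1 rather than f) together with the neglected O(λ⁴) terms.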


3.3. Volterra Integral Equation

The same iterative scheme is applicable to the Volterra integral equation of the
second kind. In fact, the formulas corresponding to Equations (3.1.13) and
(3.1.25) are, respectively,

    g(s) = f(s) + Σ_{m=1}^{∞} λ^m ∫₀^s K_m(s, t) f(t) dt ,           (3.3.1)

    g(s) = f(s) + λ ∫₀^s Γ(s, t; λ) f(t) dt ,                        (3.3.2)

where the iterated kernel K_m(s, t) satisfies the recurrence formula

    K_m(s, t) = ∫_t^s K(s, x) K_{m−1}(x, t) dx                       (3.3.3)

with K₁(s, t) = K(s, t), as before. The resolvent kernel Γ(s, t; λ) is given by
the same formula as (3.1.26), and it is an entire function of λ for any given (s, t)
(see Exercise 8).
     We illustrate this concept by the following examples.



3.4. Examples

Example 1. Find the Neumann series for the solution of the integral equation

                      ges) = e1 + s) +A. loses - t)get)dt .                               (3.4.1)

     From the formula (3.3.3), we have

    K₁(s, t) = (s − t) ,

    K₂(s, t) = ∫_t^s (s − x)(x − t) dx = (s − t)³/3! ,

    K₃(s, t) = ∫_t^s (s − x)(x − t)³/3! dx = (s − t)⁵/5! ,

and so on. Thus,

    g(s) = 1 + s + λ(s²/2! + s³/3!) + λ²(s⁴/4! + s⁵/5!) + ··· .      (3.4.2)

For λ = 1, g(s) = e^s.
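This limit can be observed directly by iterating (3.4.1) numerically at λ = 1; a minimal sketch with midpoint quadrature:

```python
import math

# Iterative solution of the Volterra equation (3.4.1) at lam = 1, which
# should reproduce g(s) = e^s; midpoint nodes on [0, 1], cumulative sums.
N = 200
h = 1.0 / N
nodes = [(i + 0.5) * h for i in range(N)]
lam = 1.0

g = [1.0 + s for s in nodes]                  # g0(s) = f(s) = 1 + s
for _ in range(25):
    g = [1.0 + s + lam * h * sum((s - nodes[j]) * g[j] for j in range(i))
         for i, s in enumerate(nodes)]

err = max(abs(gi - math.exp(s)) for gi, s in zip(g, nodes))
print("max deviation from e^s:", err)
```

Volterra iterations converge for every λ (each sweep adds one more term of the everywhere-convergent series), so the loop count only needs to outrun the factorial decay of the terms.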


Example 2. Solve the integral equation

    g(s) = f(s) + λ ∫₀^s e^{s−t} g(t) dt                             (3.4.3)

and evaluate the resolvent kernel.
    For this case,

    K_m(s, t) = [(s − t)^{m−1}/(m − 1)!] e^{s−t} .

The resolvent kernel is

    Γ(s, t; λ) = { e^{s−t} Σ_{m=1}^{∞} λ^{m−1}(s − t)^{m−1}/(m − 1)! = e^{(λ+1)(s−t)} ,   t ≤ s ,
                 { 0 ,                                                                    t > s .
                                                                     (3.4.4)

Hence, the solution is

    g(s) = f(s) + λ ∫₀^s e^{(λ+1)(s−t)} f(t) dt .                    (3.4.5)

Example 3. Solve the Volterra equation

    g(s) = 1 + ∫₀^s st g(t) dt .                                     (3.4.6)

    For this example, K₁(s, t) = K(s, t) = st,

    K₂(s, t) = ∫_t^s (sx)(xt) dx = (s⁴t − st⁴)/3 ,

    K₃(s, t) = ∫_t^s (sx)(x⁴t − xt⁴)/3 dx = (s⁷t − 2s⁴t⁴ + st⁷)/18 ,

    K₄(s, t) = ∫_t^s (sx)(x⁷t − 2x⁴t⁴ + xt⁷)/18 dx
             = (s¹⁰t − 3s⁷t⁴ + 3s⁴t⁷ − st¹⁰)/162 ,

and so on. Thus,

    g(s) = 1 + s³/2 + s⁶/(2·5) + s⁹/(2·5·8) + s¹²/(2·5·8·11) + ··· . (3.4.7)
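The first few terms of (3.4.7) can be checked against direct iteration of (3.4.6); a sketch with midpoint quadrature:

```python
# Direct iteration of (3.4.6) compared with the partial sum of (3.4.7).
N = 400
h = 1.0 / N
nodes = [(i + 0.5) * h for i in range(N)]

g = [1.0] * N
for _ in range(20):
    g = [1.0 + h * sum(s * nodes[j] * g[j] for j in range(i))
         for i, s in enumerate(nodes)]

partial = lambda s: (1 + s ** 3 / 2 + s ** 6 / (2 * 5)
                     + s ** 9 / (2 * 5 * 8) + s ** 12 / (2 * 5 * 8 * 11))
err = max(abs(gi - partial(s)) for gi, s in zip(g, nodes))
print("max deviation from partial sum:", err)
```

The deviation combines the series truncation after the s¹² term with the O(h) tail of the cumulative quadrature, both small on [0, 1].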


3.5. Some Results about the Resolvent Kernel

The series for the resolvent kernel Γ(s, t; λ),

    Γ(s, t; λ) = Σ_{m=1}^{∞} λ^{m−1} K_m(s, t) ,                     (3.5.1)

can be proved to be absolutely and uniformly convergent for all values of s and
t in the circle |λ| < 1/B. In addition to the assumptions of Section 3.1, we need
the additional inequality

    ∫ |K(x, t)|² dx ≤ E² ,   E = const.                              (3.5.2)

Recall that this is one of the conditions for the kernel K to be an ℒ₂-kernel.
Applying the Schwarz inequality to the recurrence formula

    K_m(s, t) = ∫ K_{m−1}(s, x) K(x, t) dx                           (3.5.3)

yields

    |K_m(s, t)|² ≤ ( ∫ |K_{m−1}(s, x)|² dx ) ∫ |K(x, t)|² dx ,

which, with the help of Equation (3.1.19), becomes

    |K_m(s, t)| ≤ C₁ E B^{m−2} .                                     (3.5.4)

Thus, the series (3.5.1) is dominated by the geometric series with the general
term C₁E(|λ|^{m−1} B^{m−2}), whose common ratio is |λ|B, and that completes the proof.
    Next, we prove that the resolvent kernel satisfies the integral equation

    Γ(s, t; λ) = K(s, t) + λ ∫ Γ(s, x; λ) K(x, t) dx .               (3.5.5)

This follows by replacing K_m(s, t) in the series (3.5.1) by the integral relation
(3.5.3). Then,

    Γ(s, t; λ) = K₁(s, t) + Σ_{m=2}^{∞} λ^{m−1} ∫ K_{m−1}(s, x) K(x, t) dx

               = K(s, t) + λ Σ_{m=1}^{∞} λ^{m−1} ∫ K_m(s, x) K(x, t) dx

               = K(s, t) + λ ∫ [ Σ_{m=1}^{∞} λ^{m−1} K_m(s, x) ] K(x, t) dx ,


and the integral Equation (3.5.5) follows immediately. The change of order of
integration and summation is legitimate in view of the uniform convergence of
the series involved.
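The identity (3.5.5) can be checked numerically for a concrete resolvent, e.g. that of Example 1 of Section 3.2, Γ(s, t; λ) = e^{s−t}/(1 − λ); a sketch with the illustrative value λ = 1/2:

```python
import math

# Check of the Fredholm identity (3.5.5) for the resolvent of Example 1
# of Section 3.2, Gamma(s,t;lam) = e^{s-t}/(1-lam), at lam = 1/2.
N = 100
h = 1.0 / N
nodes = [(i + 0.5) * h for i in range(N)]
lam = 0.5

K = lambda s, t: math.exp(s - t)
G = lambda s, t: math.exp(s - t) / (1.0 - lam)

# Gamma should equal K + lam * int Gamma(s,x) K(x,t) dx on (0,1)
resid = max(abs(G(s, t) - K(s, t)
                - lam * h * sum(G(s, x) * K(x, t) for x in nodes))
            for s in nodes for t in nodes)
print("max residual in (3.5.5):", resid)
```

Here again the separable kernel makes the quadrature exact, so the residual sits at machine precision.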
     By proceeding in a similar manner we can prove that

    Γ(s, t; λ) = K(s, t) + λ ∫ K(s, x) Γ(x, t; λ) dx .               (3.5.6)

The two Equations (3.5.5) and (3.5.6) are called the Fredholm identities. From
the preceding analysis it follows that they are valid, at least, within the circle
|λ|B < 1. Actually, they hold through the whole domain of existence of the
resolvent kernel in the complex λ-plane.
     Another interesting result for the resolvent kernel is that it satisfies the
integro-differential equation

    ∂Γ(s, t; λ)/∂λ = ∫ Γ(s, x; λ) Γ(x, t; λ) dx .                    (3.5.7)

In fact,

    ∫ Γ(s, x; λ) Γ(x, t; λ) dx = ∫ Σ_{m=1}^{∞} λ^{m−1} K_m(s, x) Σ_{n=1}^{∞} λ^{n−1} K_n(x, t) dx .

On account of the absolute and uniform convergence of the series (3.5.1), we
can multiply the series under the integral sign and integrate it term by term.
Therefore,

    ∫ Γ(s, x; λ) Γ(x, t; λ) dx = Σ_{m=1}^{∞} Σ_{n=1}^{∞} λ^{m+n−2} K_{m+n}(s, t) .
                                                                     (3.5.8)

Now, set m + n = p and change the order of summation; there results the relation

    Σ_{m=1}^{∞} Σ_{n=1}^{∞} λ^{m+n−2} K_{m+n}(s, t) = Σ_{p=2}^{∞} Σ_{n=1}^{p−1} λ^{p−2} K_p(s, t)

                                                    = Σ_{p=2}^{∞} (p − 1) λ^{p−2} K_p(s, t) = ∂Γ(s, t; λ)/∂λ .
                                                                     (3.5.9)

Combining Equations (3.5.8) and (3.5.9), we have the result (3.5.7).
    By a similar process we can obtain the integral equation

    Γ(s, t; λ) − Γ(s, t; μ) = (λ − μ) ∫ Γ(s, x; μ) Γ(x, t; λ) dx ,   (3.5.10)

which is called the resolvent equation. Then Equation (3.5.7) can be derived
from (3.5.10) by dividing it by (λ − μ) and letting μ → λ. Equation (3.5.10)


can also be derived with the help of the Fredholm identities (3.5.5) and (3.5.6) (see
Exercise 9).
    A Volterra integral equation of the second kind satisfies the identities

    Γ(s, t; λ) = K(s, t) + λ ∫_t^s Γ(s, x; λ) K(x, t) dx
               = K(s, t) + λ ∫_t^s K(s, x) Γ(x, t; λ) dx ,           (3.5.11)

which are proved in a similar fashion [4].
      In Section 1.6 we defined the orthogonal kernels L(s, t) and M(s, t). If we
let Γ_l and Γ_m denote the resolvent kernels for L and M, then we have

    Γ_l(s, t; λ) = Σ_{n=1}^{∞} λ^{n−1} L_n(s, t) = Σ_{n=0}^{∞} λ^n L_{n+1}(s, t) ,   (3.5.12)

and

    Γ_m(s, t; λ) = Σ_{n=1}^{∞} λ^{n−1} M_n(s, t) = Σ_{n=0}^{∞} λ^n M_{n+1}(s, t) ,   (3.5.13)

where L_n and M_n denote the iterated kernels. For the case that L and M are
orthogonal, it follows that

    ∫ L(s, x) Γ_m(x, t; λ) dx = 0 ,                                  (3.5.14)

and

    ∫ M(s, x) Γ_l(x, t; λ) dx = 0 .                                  (3.5.15)

    Furthermore, by applying the Fredholm identity (3.5.6) to the resolvent
kernels Γ_l and Γ_m, it follows that

    Γ_l(s, t; λ) + Γ_m(s, t; λ) = L(s, t) + M(s, t)
        + λ ∫ [L(s, x) Γ_l(x, t; λ) + M(s, x) Γ_m(x, t; λ)] dx ,     (3.5.16)

which, in view of Equations (3.5.14) and (3.5.15), can be written as

    Γ_l(s, t; λ) + Γ_m(s, t; λ) = L(s, t) + M(s, t)
        + λ ∫ [L(s, x) + M(s, x)][Γ_l(x, t; λ) + Γ_m(x, t; λ)] dx .  (3.5.17)

This shows that the resolvent kernel of the sum L(s, t) + M(s, t) of two orthog-
onal kernels is the sum of their resolvent kernels, that is,

    Γ_{l+m}(s, t; λ) = Γ_l(s, t; λ) + Γ_m(s, t; λ) .                 (3.5.18)
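The relation (3.5.18) can be checked numerically on an illustrative orthogonal pair, say L(s, t) = sin s sin t and M(s, t) = sin 2s sin 2t on (0, 2π), whose iterates give Γ_l = sin s sin t/(1 − λπ) and Γ_m = sin 2s sin 2t/(1 − λπ); the sum Γ_l + Γ_m should then satisfy the Fredholm identity (3.5.6) for the kernel L + M:

```python
import math

# Check of (3.5.18) for the illustrative orthogonal pair
# L(s,t) = sin s sin t, M(s,t) = sin 2s sin 2t on (0, 2*pi), at lam = 0.2.
N = 200
h = 2.0 * math.pi / N
xs = [(i + 0.5) * h for i in range(N)]
lam = 0.2

L = lambda s, t: math.sin(s) * math.sin(t)
M = lambda s, t: math.sin(2 * s) * math.sin(2 * t)
G = lambda s, t: (L(s, t) + M(s, t)) / (1.0 - lam * math.pi)  # Gamma_l + Gamma_m

# Gamma_l + Gamma_m must satisfy (3.5.6) for the kernel L + M
samples = xs[::10]
resid = max(abs(G(s, t) - L(s, t) - M(s, t)
                - lam * h * sum((L(s, x) + M(s, x)) * G(x, t) for x in xs))
            for s in samples for t in samples)
print("max residual in (3.5.6) for L + M:", resid)
```

Equispaced midpoint sums of these trigonometric products are exact, so the residual is essentially at machine precision.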


Exercises

1. Solve the following Fredholm integral equations by the method of successive
approximations:

    g(s) = e^s − (1/2)e + 1/2 + (1/2) ∫₀¹ g(t) dt .

    g(s) = (sin s) − (1/4)s + (1/4) ∫₀^{π/2} st g(t) dt .

2. Consider the integral equation

    g(s) = 1 + λ ∫₀¹ st g(t) dt .

     (a) Make use of the relation |λ| < B⁻¹ to show that the iterative procedure
is valid for |λ| < 3.
     (b) Show that the iterative procedure leads formally to the solution

    g(s) = 1 + s[(λ/2) + (λ²/6) + (λ³/18) + ···] .

    (c) Use the method of the previous chapter to obtain the exact solution

    g(s) = 1 + [3λs/2(3 − λ)] .


3. Solve the integral equation

    g(s) = 1 + λ ∫₀¹ (s + t) g(t) dt ,

by the method of successive approximations and show that the estimate afforded
by the relation |λ| < B⁻¹ is conservative in this case.

4. Find the resolvent kernel associated with the following kernels: (i) |s − t|,
in the interval (0, 1); (ii) exp(−|s − t|), in the interval (0, 1); (iii) cos(s + t),
in the interval (0, 2π).

5. Solve the following Volterra integral equations by the method of this chapter:

    g(s) = 1 + ∫₀^s (s − t) g(t) dt ,                                (i)


    g(s) = 29 + 6s + ∫₀^s (6s − 6t + 5) g(t) dt .                    (ii)
40                     3. Method of Successive Approximations


6. Find an approximate solution of the integral equation

    g(s) = (sinh s) + ∫₀^s e^{t−s} g(t) dt ,

by the method of iteration.

7. Obtain the radius of convergence of the Neumann series when the function
f(s) and the kernel K(s, t) are continuous in the interval (a, b).

8. Prove that the resolvent kernel for a Volterra integral equation of the second
kind is an entire function of λ for any given (s, t).

9. Derive the resolvent equation (3.5.10) by appealing to the Fredholm identities
(3.5.5) and (3.5.6).

R.P. Kanwal, Linear Integral Equations: Theory & Technique, Modern Birkhäuser Classics, DOI 10.1007/978-1-4614-6012-1_3, © Springer Science+Business Media New York 2013