2. Contents
• Introduction
• Noise models
• Restoration in the presence of noise using spatial domain and
frequency domain filtering
• Linear, position invariant degradation
• Estimating the degradation functions
• Inverse filtering
• Minimum mean square error filtering
• Constrained least squares filtering.
3. Introduction
• Image restoration and image enhancement share a common
goal: to improve image quality for human perception.
• Image enhancement is subjective, whereas image restoration is
objective.
• For image restoration, one should have prior knowledge of the
degradation phenomenon in order to recover the image.
• Some image restoration methods are best suited to the spatial
domain and some to the frequency domain.
5. • If H is a linear, position-invariant process, then the degraded image is given
by
g(x,y)=h(x,y)*f(x,y)+n(x,y)
• Where h(x,y) is the spatial domain representation of the degradation
function.
• The equivalent frequency domain representation of the above equation is
given by
G(u,v)=H(u,v)F(u,v)+N(u,v)
• These two equations are the basis of many of the image restoration
processes discussed in this module.
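The equivalence of the two forms above can be checked numerically. The sketch below (toy 8x8 image, 3x3 box blur, and noise values of my own choosing, not from the source) computes g = h*f + n by direct circular convolution in the spatial domain, and again as G = HF + N in the frequency domain, and confirms the results agree:

```python
import numpy as np

# Sketch of the degradation model computed two ways; the image, kernel,
# and noise are illustrative values.
rng = np.random.default_rng(0)
M = 8
f = rng.uniform(0, 255, (M, M))          # original image (toy values)
h = np.zeros((M, M))
h[:3, :3] = 1.0 / 9.0                    # 3x3 box blur, zero-padded to image size
n = rng.normal(0.0, 1.0, (M, M))         # additive Gaussian noise

# Spatial domain: circular convolution h * f, plus noise
g_spatial = np.zeros((M, M))
for x in range(M):
    for y in range(M):
        for a in range(M):
            for b in range(M):
                g_spatial[x, y] += h[a, b] * f[(x - a) % M, (y - b) % M]
g_spatial += n

# Frequency domain: G = H F + N, then inverse transform
G = np.fft.fft2(h) * np.fft.fft2(f) + np.fft.fft2(n)
g_freq = np.real(np.fft.ifft2(G))

print(np.allclose(g_spatial, g_freq))    # True: the two forms agree
```

The agreement holds because the DFT convolution theorem applies to circular (periodic) convolution, which is why the direct sum above wraps the indices modulo M.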
6. Noise Models
• The principal sources of noise in digital images arise during
image acquisition and transmission.
• The performance of imaging sensors is affected by
environmental factors.
• Images are corrupted during transmission principally due
to interference in the channel used for transmission.
• For example, an image transmitted over a wireless network might
be corrupted as a result of lightning or other atmospheric
disturbances.
7. Spatial and Frequency Properties of noise
• Spatial Properties:
1. We assume in this module that noise is independent of
spatial coordinates, and that it is uncorrelated with respect
to the image itself.
• Frequency Properties:
1. If the Fourier spectrum of the noise is constant, the noise is
called white noise.
14. • Data-drop noise and spike noise are also terms used to
refer to this type of noise.
• Noise impulses can be negative or positive.
• Scaling usually is part of the image digitizing process. Because impulse
corruption usually is large compared with the strength of the
image signal, impulse noise generally is digitized as extreme
values in an image.
• Thus the assumption usually is that a and b are saturated values, in
the sense that they are equal to the minimum and maximum values in
the digitized image.
15. • As a result, negative impulses appear as black points,
while positive impulses appear as white points.
• For an 8-bit image this typically means that a=0 (black) and
b=255 (white).
18. Periodic Noise and Estimation of Noise
Parameters
• Periodic noise in an image arises typically from electrical and
electromechanical interference during image acquisition.
• Periodic noise can be reduced significantly using frequency
domain filtering.
• The parameters of periodic noise typically are estimated by
inspection of the Fourier spectrum of the image.
• Periodic noise tends to produce frequency spikes that often
can be detected by visual analysis.
• The parameters of the noise PDFs may be known partially
from sensor specifications, but it is often necessary to estimate
them for a particular imaging arrangement.
19. Contd..
• If the imaging system is available, one simple way to study the
system noise characteristics is to capture a set of images of
flat environments.
• When only images already generated by the sensors are available,
it is often possible to estimate the parameters of the PDF from small
patches of reasonably constant background intensity.
• For example, vertical strips were cropped from the
Gaussian, Rayleigh, and uniform images in the previously displayed
figures.
• The histograms shown were calculated using the image data from
these small strips.
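A minimal sketch of this estimation step, with a simulated flat patch (the true parameter values and patch size are my own example, not from the source): the sample mean and standard deviation of a roughly constant-background strip estimate the parameters of a Gaussian noise PDF.

```python
import numpy as np

# Simulated flat strip corrupted by Gaussian noise; in practice the strip
# would be cropped from a region of roughly constant background intensity.
rng = np.random.default_rng(1)
true_mean, true_sigma = 120.0, 15.0
strip = rng.normal(true_mean, true_sigma, (200, 40))

# The sample statistics estimate the noise PDF parameters
est_mean = strip.mean()
est_sigma = strip.std()
print(est_mean, est_sigma)   # estimates close to 120 and 15
```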
22. Restoration in the presence of noise
only- Spatial Filtering
• When the only degradation present in an image is noise, then
the standard equation will reduce to the following form
g(x,y)=f(x,y)+n(x,y)
And G(u,v)=F(u,v)+N(u,v)
• Here the noise terms are unknown, so subtracting them from
g(x,y) or G(u,v) is unrealistic.
• If the noise is periodic, we can estimate the noise
terms from the spectrum of G(u,v) and subtract them
from the image.
• Spatial filtering is the method of choice in situations when
additive random noise is present.
27. Order Statistics Filters
• Order statistics filters are spatial filters whose response is
based on ordering (ranking) the values of the pixels contained in
the image area encompassed by the filter.
• The ranking result determines the response of the filter.
• Median filters, max and min filters, midpoint filters, and alpha-
trimmed mean filters are some examples of order statistics filters.
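The responses of these filters on a single neighborhood can be sketched as follows (the 3x3 window values are a made-up example containing two impulses; `d`, the number of trimmed values, is an illustrative choice):

```python
import numpy as np

# Order-statistics filter responses for one 3x3 neighborhood: the values
# are ranked, and each filter's response is read from the ordering.
window = np.array([12, 15, 200, 14, 13, 11, 16, 14, 0])  # impulses: 200 and 0
z = np.sort(window)

median_resp = z[len(z) // 2]              # median filter response
max_resp, min_resp = z[-1], z[0]          # max / min filter responses
midpoint_resp = (z[0] + z[-1]) / 2.0      # midpoint filter response
d = 2                                     # alpha-trimmed: drop d/2 lowest, d/2 highest
alpha_trimmed = z[d // 2: len(z) - d // 2].mean()

print(median_resp)    # 14 -- the impulses do not affect the median
```

Note how the median and alpha-trimmed mean are robust to the two impulses, while the max, min, and midpoint responses are dominated by them.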
31. Adaptive Filters
• The filters discussed so far are applied to an image without
regard for how image characteristics vary from one point to
another.
• Here we will study two adaptive filters whose behavior
changes based on statistical characteristics of the image
inside the filter region.
34. Adaptive Median Filtering
• The adaptive median filter also works in a rectangular filter window
Sxy.
• The adaptive median filter changes the size of Sxy during operation.
• The output of the filter is a single value used to replace the value of the pixel at
(x,y), the point on which the window Sxy is centered at a given time.
• Consider the following notation:
• Zmin = minimum intensity value in Sxy
• Zmax = maximum intensity value in Sxy
• Zmed = median of the intensity values in Sxy
• Zxy = intensity value at coordinates (x,y)
• Smax = maximum allowed size of Sxy
• The adaptive median filtering algorithm works in two stages, Stage A and
Stage B.
35. Contd..
• Stage A: A1 = Zmed - Zmin
A2 = Zmed - Zmax
If A1 > 0 AND A2 < 0, go to Stage B; else increase the
window size.
If window size <= Smax, repeat Stage A;
else output Zmed.
• Stage B: B1 = Zxy - Zmin
B2 = Zxy - Zmax
If B1 > 0 AND B2 < 0, output Zxy;
else output Zmed.
36. Contd..
• The key to understanding the mechanics of this algorithm is to keep in mind
that it has three main purposes:
1. To remove salt-and-pepper (impulse) noise
2. To provide smoothing of other noise that may not be impulsive
3. To reduce distortion such as excessive thickening of object boundaries
• With these observations in mind, we see that the purpose of Stage A is to
determine whether the median filter output Zmed is an impulse or not.
• If the condition Zmin < Zmed < Zmax holds, then Zmed cannot be an impulse.
• In this case we go to Stage B and test whether the point in the center of
the window, Zxy, is itself an impulse.
• If B1 > 0 AND B2 < 0 is true, then Zmin < Zxy < Zmax, and Zxy cannot be an
impulse.
• In this case the algorithm outputs the unchanged pixel value, Zxy.
• By not changing these intermediate-level points, distortion is reduced in the
image.
37. Contd..
• If the condition B1 > 0 AND B2 < 0 is false, then either Zxy = Zmin or
Zxy = Zmax. In either case the value of the pixel is an extreme
value, and the algorithm outputs the median value Zmed.
• A standard median filter, by contrast, replaces every pixel with the
median value of its neighborhood, which causes unnecessary loss of detail.
• Suppose that Stage A finds that Zmed may be an impulse. Then the
algorithm increases the size of the window and repeats Stage A
until it either finds a median value that is not an impulse
or the maximum window size is reached.
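The two-stage algorithm above can be sketched as a short function (the function name and toy image are my own; boundary pixels here simply use a clipped window, a detail the source does not specify):

```python
import numpy as np

# Sketch of the adaptive median filter: Stage A grows the window until the
# median is not an impulse or Smax is reached; Stage B decides whether the
# center pixel itself is an impulse.
def adaptive_median(img, s_max=7):
    M, N = img.shape
    out = img.copy()
    for x in range(M):
        for y in range(N):
            size = 3
            while True:
                r = size // 2
                win = img[max(0, x - r):x + r + 1, max(0, y - r):y + r + 1]
                z_min, z_max = win.min(), win.max()
                z_med = np.median(win)
                z_xy = img[x, y]
                # Stage A: is the median itself an impulse?
                if z_min < z_med < z_max:
                    # Stage B: is the center pixel an impulse?
                    out[x, y] = z_xy if z_min < z_xy < z_max else z_med
                    break
                size += 2
                if size > s_max:
                    out[x, y] = z_med   # window limit reached
                    break
    return out

# Usage: a flat gray image corrupted by one salt and one pepper impulse
img = np.full((9, 9), 100.0)
img[4, 4] = 255.0   # salt
img[2, 6] = 0.0     # pepper
restored = adaptive_median(img)
print(restored[4, 4], restored[2, 6])   # both impulses replaced by 100.0
```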
38. Periodic Noise Reduction by Frequency
Domain Filtering
• Periodic noise can be effectively filtered using frequency
domain filtering.
• The periodic noise appears as bursts of energy in the Fourier
spectrum.
• Here we have to use selective filtering methods to remove the
noise.
• The types of selective filters are
• Bandreject filters
• Bandpass filters
• Notch filters
39. Bandreject filters
• One of the applications of bandreject filtering is noise removal in applications
where the general location of the noise component in the frequency domain is
approximately known.
42. Notch Filters
• A notch filter rejects the frequencies in predefined
neighborhoods about a center frequency.
• Due to the symmetry of the Fourier transform, notches must appear in
symmetric pairs about the origin in order to obtain meaningful results.
• Notch filters that pass, rather than suppress, the frequencies
are obtained by the following relationship:
• HNP(u,v) = 1 - HNR(u,v)
• where HNP is the transfer function of the notch pass filter
corresponding to the notch reject filter HNR.
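An ideal notch reject filter with one symmetric pair of notches, and its companion notch pass filter HNP = 1 - HNR, can be built as follows (the grid size, notch center (u0, v0), and radius D0 are illustrative choices):

```python
import numpy as np

# Ideal notch reject filter: zero inside a radius-D0 disk around (u0, v0)
# and around the symmetric point (-u0, -v0), one elsewhere.
M = N = 64
u0, v0, D0 = 10, 16, 4
u = np.arange(M).reshape(-1, 1) - M // 2     # centered frequency coordinates
v = np.arange(N).reshape(1, -1) - N // 2

D1 = np.sqrt((u - u0) ** 2 + (v - v0) ** 2)  # distance to (+u0, +v0)
D2 = np.sqrt((u + u0) ** 2 + (v + v0) ** 2)  # distance to the symmetric notch
HNR = np.where((D1 <= D0) | (D2 <= D0), 0.0, 1.0)  # notch reject
HNP = 1.0 - HNR                                    # notch pass

print(HNR[M // 2 + u0, N // 2 + v0], HNP[M // 2 + u0, N // 2 + v0])  # 0.0 1.0
```

Multiplying HNR against the centered spectrum G(u,v) removes the frequency bursts at both notch locations, while HNP isolates them (useful for estimating the noise pattern itself).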
44. Optimum Notch filtering
• When several interference components are present, the
methods discussed previously are not acceptable, because they
may remove much image information in the filtering process.
• Interference components generally are not single-frequency
bursts; they tend to have broad skirts that carry
information about the interference pattern.
• These are not always easily detectable from the normal transform
background.
• So we need an optimum method, one that minimizes the local
variance of the restored image f'(x,y).
50. Contd..
• For w(x,y) the result is
w(x,y) = [ mean(g·n) - mean(g)·mean(n) ] / [ mean(n²) - mean(n)² ]
where mean(·) denotes the local average over a neighborhood centered at
(x,y), g is the corrupted image, and n is the interference noise pattern.
51. Linear position invariant degradation
• The input-output relationship in fig 5.1 before the restoration stage is
expressed as
• g(x,y)=H(f(x,y))+n(x,y)………..(1)
• Let us assume that n(x,y) is zero, so that g(x,y)=H[f(x,y)].
• The operation H is linear if
• H[af1(x,y)+bf2(x,y)]=aH[f1(x,y)]+bH[f2(x,y)]…………(2)
• Where a and b are scalars and f1(x,y) and f2(x,y) are two input images.
• If a=b=1, Eq. 2 becomes
H[f1(x,y)+f2(x,y)]=H[f1(x,y)]+H[f2(x,y)] …..(3), which is called the
property of additivity.
• When f2(x,y)=0, Eq. 2 becomes H[af1(x,y)]=aH[f1(x,y)],
• which is called the property of homogeneity.
52. Contd..
• An operator having the input-output relationship g(x,y)=H[f(x,y)] is said to
be position invariant if
H[f(x-α, y-β)] = g(x-α, y-β) …..(4)
• This definition indicates that the response at any point in the image
depends only on the value of the input at that point, not on its position.
• With a slight change in notation, the definition of the impulse allows f(x,y)
to be expressed as
f(x,y) = ∫∫ f(α,β) δ(x-α, y-β) dα dβ ……..(5)
where both integrals run from -∞ to ∞.
• Assume again for a moment that n(x,y)=0. Then substitution of equation 5
into equation 1 yields the following expression:
g(x,y) = H[f(x,y)] = H[ ∫∫ f(α,β) δ(x-α, y-β) dα dβ ] …..(6)
• If H is a linear operator and we extend the additivity property to integrals,
then
53. Contd..
• g(x,y) = ∫∫ H[f(α,β) δ(x-α, y-β)] dα dβ ……(7)
• Because f(α,β) is independent of (x,y), using the homogeneity property
it follows that
• g(x,y) = ∫∫ f(α,β) H[δ(x-α, y-β)] dα dβ …….(8)
• The term h(x,α,y,β) = H[δ(x-α, y-β)] ………(9)
is known as the impulse response of H. In other words, if
n(x,y)=0, then h(x,α,y,β) is the response of H to an impulse at coordinates
(α,β).
• Substituting equation 9 into equation 8 yields the following result:
g(x,y) = ∫∫ f(α,β) h(x,α,y,β) dα dβ ……(10)
• This expression is known as the superposition integral of the first kind.
54. Contd..
• If H is position invariant, then
H[δ(x-α, y-β)] = h(x-α, y-β) …….(11)
• Equation 10 reduces in this case to
g(x,y) = ∫∫ f(α,β) h(x-α, y-β) dα dβ ……….(12)
• In the presence of additive noise, equation 10 [the equation of the linear
degradation model] becomes
g(x,y) = ∫∫ f(α,β) h(x,α,y,β) dα dβ + n(x,y) ……..(13)
• If H is position invariant, then the above equation becomes
g(x,y) = ∫∫ f(α,β) h(x-α, y-β) dα dβ + n(x,y) ……….(14)
• The values of the noise term n(x,y) are random and are assumed to be
independent of position. Using the familiar notation of convolution, we can
write g(x,y) = h(x,y)*f(x,y) + n(x,y).
56. Estimating the Degradation function
• There are 3 principal ways to estimate the degradation function in image
restoration
1. Observation
2. Experimentation
3. Mathematical Modeling
• The process of restoring an image by using a degradation function that has
been estimated in some way is sometimes called “deconvolution”.
57. Estimation by Image Observation
• Suppose that we are given a degraded image without any knowledge
of the degradation function H.
• Let us assume that the image was degraded by a linear, position-invariant
process. Then one way to estimate H is to gather information from the image
itself.
• For example, if the image is blurred, we can look at a small rectangular
section of the image containing sample structures, like part of an object and
the background.
• In order to reduce the effect of noise, we would look for an area in
which the signal content is strong.
• The next step is to process this subimage to arrive at a result that is as
unblurred as possible.
58. Contd..
• Let the observed subimage be gs(x,y) and the processed subimage be
f's(x,y). Then, assuming that the effect of noise is negligible, we can say that
Hs(u,v) = Gs(u,v)/F's(u,v)
• From the characteristics of this function, we then deduce the
complete degradation function H(u,v) based on our assumption of position
invariance.
• Clearly this is a tedious method, used only in certain circumstances
such as restoring an old photograph of historical value.
59. Estimation by experimentation
• If equipment similar to the equipment used to acquire the degraded
image is available, it is possible in principle to obtain an accurate
estimation of the degradation.
• Images similar to the degraded image can be acquired with various system
settings until they are degraded as closely as possible to the image we
wish to restore.
• Then the idea is to obtain the impulse response of the degradation by
imaging an impulse using the same system settings.
• An impulse is simulated by a bright dot of light, as bright as possible to
reduce the effect of noise to negligible values.
• Then, recalling that the Fourier transform of an impulse is a constant, we
can say that H(u,v) = G(u,v)/A, where A is a constant describing the
strength of the impulse.
60. Estimation by mathematical modeling
• Degradation modeling has been used for many years because of the
insight it affords into the image restoration problem.
• In some cases the model can take into account the environmental conditions
that cause degradations.
• For example, a degradation model proposed by Hufnagel and Stanley is
based on the physical characteristics of atmospheric turbulence. This model
has the form
H(u,v) = e^(-k(u² + v²)^(5/6))
• Where k is a constant that depends on the nature of the turbulence.
• With the exception of the 5/6 power on the exponent, this equation has the
same form as the Gaussian LPF, and the Gaussian LPF is
sometimes used to model mild, uniform blurring.
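The turbulence transfer function is straightforward to evaluate on a frequency grid. A minimal sketch, with an assumed grid size and an example value of k (the severity of the modeled turbulence depends on this constant):

```python
import numpy as np

# Hufnagel-Stanley atmospheric turbulence model H(u,v) = exp(-k(u^2+v^2)^(5/6)),
# evaluated on a centered frequency grid; k = 0.0025 is an example value.
M = N = 64
k = 0.0025
u = np.arange(M).reshape(-1, 1) - M // 2     # centered frequencies
v = np.arange(N).reshape(1, -1) - N // 2
H = np.exp(-k * (u ** 2 + v ** 2) ** (5.0 / 6.0))

print(H[M // 2, N // 2])                     # 1.0 -- no attenuation at the origin
```

As with a Gaussian lowpass filter, H equals 1 at the origin and decays monotonically with distance from it, which is why the two are sometimes interchanged for mild blurring.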
62. Contd..
• Another major approach to modeling is to derive a mathematical model
starting from basic principles.
• Consider an image that has been blurred by uniform linear motion between
the image and the sensor during image acquisition.
• Suppose that an image f(x,y) undergoes planar motion and that x0(t) and
y0(t) are the time-varying components of the motion in the x and y
directions, respectively.
• The total exposure at any point of the recording medium is obtained by
integrating the instantaneous exposure over the time interval during
which the imaging system shutter is open.
• Assuming that shutter opening and closing take place instantaneously,
and that the optical imaging process is perfect, isolates the effect of image
motion. Let T be the duration of the exposure.
63. Contd..
• From the previous discussion it follows that
g(x,y) = ∫₀ᵀ f[x - x0(t), y - y0(t)] dt ……(1)
• Where g(x,y) is the blurred image.
• Taking the Fourier transform of equation 1 yields the following:
G(u,v) = ∫∫ g(x,y) e^(-j2π(ux+vy)) dx dy
= ∫∫ [ ∫₀ᵀ f(x - x0(t), y - y0(t)) dt ] e^(-j2π(ux+vy)) dx dy
where the outer integrals run from -∞ to ∞. Reversing the order of
integration moves the Fourier transform inside the time integral.
64. Contd..
• The term inside the outer brackets is then the Fourier transform of the
displaced function f[x - x0(t), y - y0(t)], which equals
F(u,v) e^(-j2π(ux0(t)+vy0(t))).
• G(u,v) = ∫₀ᵀ F(u,v) e^(-j2π(ux0(t)+vy0(t))) dt
= F(u,v) ∫₀ᵀ e^(-j2π(ux0(t)+vy0(t))) dt ………(4)
• Where the last step follows from the fact that F(u,v) is independent of t.
• Defining
H(u,v) = ∫₀ᵀ e^(-j2π(ux0(t)+vy0(t))) dt ………..(5)
• Equation 4 can be expressed in the familiar form
• G(u,v) = H(u,v) F(u,v)
• If the motion variables x0(t) and y0(t) are known, the transfer function
H(u,v) can be obtained directly from equation (5).
65. Contd..
• For illustration, suppose that the image in question undergoes
uniform linear motion in the x-direction at the rate given by x0(t) = at/T.
• When t=T, the image has been displaced by a total distance a. With y0(t)=0,
equation 5 yields
H(u,v) = ∫₀ᵀ e^(-j2πux0(t)) dt
= ∫₀ᵀ e^(-j2πuat/T) dt
= [T/(πua)] sin(πua) e^(-jπua) ………..(6)
• From the above equation we can observe that H vanishes at values of
u = n/a, where n is an integer. If we allow the y component to vary as well,
with the motion given by y0(t) = bt/T, then the degradation function becomes
H(u,v) = [T/(π(ua+vb))] sin(π(ua+vb)) e^(-jπ(ua+vb)) ……(7)
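Equation 7 can be evaluated directly on a frequency grid. A sketch with example motion parameters (a, b, T are illustrative; the removable singularity at ua+vb = 0 is handled by its limit, H = T):

```python
import numpy as np

# Uniform linear motion blur H(u,v) = [T/(pi(ua+vb))] sin(pi(ua+vb)) e^(-j pi(ua+vb)):
# a sinc-shaped magnitude times a linear phase term.
def motion_blur_H(M, N, a=0.1, b=0.1, T=1.0):
    u = np.arange(M).reshape(-1, 1) - M // 2
    v = np.arange(N).reshape(1, -1) - N // 2
    s = np.pi * (u * a + v * b)
    # where s == 0 the limit is H = T; the inner where guards the division
    H = np.where(s == 0, T,
                 T * np.sin(s) / np.where(s == 0, 1.0, s) * np.exp(-1j * s))
    return H

H = motion_blur_H(64, 64)
print(H[32, 32])                 # (1+0j): no attenuation at the origin
```

The zeros of H at u = n/a (visible below as near-zero magnitudes) are exactly what makes direct inverse filtering of motion blur ill-conditioned.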
66. Inverse Filtering
• It is the simplest approach to the restoration of images.
• Here we compute an estimate F'(u,v) of the transform of the original
image simply by dividing the transform of the degraded image by the
degradation function:
• F'(u,v) = G(u,v)/H(u,v)
• We know that G(u,v)=H(u,v)F(u,v)+N(u,v), so substituting in the above
equation yields
F'(u,v) = F(u,v) + [N(u,v)/H(u,v)]
• The above equation indicates that the reconstructed image will not be the
same as the original image even if we know the degradation function,
because N(u,v) is unknown; worse, where H(u,v) has zero or very small
values, the ratio N(u,v)/H(u,v) can dominate the estimate.
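A small numerical sketch of both points (the image, the assumed Gaussian-shaped H, and the noise level are my own toy choices): without noise the division G/H recovers f exactly, while with noise the term N/H explodes at frequencies where H is tiny.

```python
import numpy as np

# Direct inverse filtering F' = G/H on a toy image with an assumed
# Gaussian-like blur transfer function.
rng = np.random.default_rng(2)
M = 32
f = rng.uniform(0, 255, (M, M))
u = np.arange(M).reshape(-1, 1) - M // 2
v = np.arange(M).reshape(1, -1) - M // 2
H = np.fft.ifftshift(np.exp(-0.05 * (u ** 2 + v ** 2)))   # very small at high freq

G_clean = H * np.fft.fft2(f)
f_rec = np.real(np.fft.ifft2(G_clean / H))                # exact when n = 0
err_clean = np.abs(f_rec - f).max()

G_noisy = G_clean + np.fft.fft2(rng.normal(0, 1, (M, M)))
f_noisy = np.real(np.fft.ifft2(G_noisy / H))              # N/H dominates
err_noisy = np.abs(f_noisy - f).max()

print(err_clean < 1e-6, err_noisy > 1e3)   # True True
```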
67. Minimum Mean Square Error Filtering (Wiener Filtering)
• The inverse filtering approach discussed in the previous section makes no
explicit provision for handling noise.
• Here we discuss an approach that incorporates both the
degradation function and the statistical characteristics of noise into the
restoration process.
• Here we consider the image and noise as random variables, and the
objective is to find an estimate f' of the uncorrupted image f such that the
mean square error between them is minimized.
• The error measure is given by e² = E{(f - f')²} ………..(1)
• Where E{·} is the expected value of the argument.
• Here we assume that the noise and the image are uncorrelated.
68. Contd..
• Based on these conditions, the minimum of the error function is given in
the frequency domain by the expression
• F'(u,v) = [ H*(u,v) Sf(u,v) / ( Sf(u,v)|H(u,v)|² + Sn(u,v) ) ] G(u,v)
= [ H*(u,v) / ( |H(u,v)|² + Sn(u,v)/Sf(u,v) ) ] G(u,v)
= [ (1/H(u,v)) · |H(u,v)|² / ( |H(u,v)|² + Sn(u,v)/Sf(u,v) ) ] G(u,v) ……….(2)
• Where we used the fact that the product of a complex quantity with
its conjugate is equal to the magnitude of the complex quantity squared.
• This result is known as the Wiener filter.
• The filter, which consists of the terms inside the brackets, is also commonly
referred to as the minimum mean square error filter or the least square
error filter.
69. Contd..
• The terms in equation 2 are as follows:
1. H(u,v) = degradation function
2. H*(u,v) = complex conjugate of H(u,v)
3. |H(u,v)|² = H*(u,v)H(u,v)
4. Sn(u,v) = |N(u,v)|² = power spectrum of the noise
5. Sf(u,v) = |F(u,v)|² = power spectrum of the undegraded image
• H(u,v) is the transform of the degradation function and G(u,v) is the
transform of the degraded image.
• The restored image in the spatial domain is given by the inverse Fourier
transform of the frequency domain estimate F'(u,v).
• If the noise is zero, the noise power spectrum vanishes and the Wiener
filter reduces to the inverse filter.
70. Contd..
• A number of useful measures are based on the power spectra of the
noise and of the undegraded image.
• One of the most important is the signal-to-noise ratio, approximated using
frequency domain quantities:
• SNR = Σᵤ₌₀^(M-1) Σᵥ₌₀^(N-1) |F(u,v)|² / Σᵤ₌₀^(M-1) Σᵥ₌₀^(N-1) |N(u,v)|² …..(3)
• This ratio gives the level of information-bearing signal power relative to the
level of noise power.
• Images with low noise tend to have a high SNR, and conversely the same
image with high noise tends to have a lower SNR.
• It is one of the important metrics used in characterizing the performance of
restoration algorithms.
71. Contd..
• The mean square error given in statistical form in equation 1 can also be
approximated in terms of a summation involving the original and
restored images:
MSE = (1/MN) Σₓ₌₀^(M-1) Σᵧ₌₀^(N-1) [f(x,y) - f'(x,y)]²
• We can define the signal-to-noise ratio in the spatial domain as
SNR = Σₓ₌₀^(M-1) Σᵧ₌₀^(N-1) f'(x,y)² / Σₓ₌₀^(M-1) Σᵧ₌₀^(N-1) [f(x,y) - f'(x,y)]²
• The closer f and f' are, the larger this ratio will be. Sometimes the square
root of this measure is used, in which case it is known as the RMS SNR.
• When we are dealing with white noise, the spectrum |N(u,v)|² is
constant.
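The two spatial-domain metrics above are direct summations. A minimal sketch on a made-up 2x2 image pair:

```python
import numpy as np

# MSE between an image f and its estimate f', and the spatial-domain SNR
# of the restored image (toy values).
f = np.array([[10.0, 20.0], [30.0, 40.0]])
f_est = np.array([[11.0, 19.0], [30.0, 42.0]])

M, N = f.shape
mse = ((f - f_est) ** 2).sum() / (M * N)       # (1/MN) sum of squared errors
snr = (f_est ** 2).sum() / ((f - f_est) ** 2).sum()

print(mse)   # 1.5
```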
72. Contd..
• When we don't know the power spectrum of the undegraded image, or it
can't be estimated, then equation 2 can be approximated as
F'(u,v) = [ (1/H(u,v)) · |H(u,v)|² / ( |H(u,v)|² + K ) ] G(u,v)
• Where K is a specified constant that is added to all terms of |H(u,v)|².
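The constant-K Wiener filter above can be sketched in a few lines (toy image, an assumed Gaussian-shaped H, and an illustrative K; in practice K is varied interactively for the best visual result). Note that (1/H)|H|²/(|H|²+K) simplifies to H*/(|H|²+K), which is the form used below:

```python
import numpy as np

# Parametric Wiener filter with constant K, compared against direct
# inverse filtering on the same noisy degraded image.
rng = np.random.default_rng(3)
M = 32
f = rng.uniform(0, 255, (M, M))
u = np.arange(M).reshape(-1, 1) - M // 2
v = np.arange(M).reshape(1, -1) - M // 2
H = np.fft.ifftshift(np.exp(-0.05 * (u ** 2 + v ** 2)))   # assumed blur

G = H * np.fft.fft2(f) + np.fft.fft2(rng.normal(0, 1, (M, M)))  # degraded + noise

K = 0.01
F_wiener = (np.conj(H) / (np.abs(H) ** 2 + K)) * G        # equation 2 with Sn/Sf ~ K
F_inverse = G / H
err_wiener = np.abs(np.real(np.fft.ifft2(F_wiener)) - f).mean()
err_inverse = np.abs(np.real(np.fft.ifft2(F_inverse)) - f).mean()

print(err_wiener < err_inverse)   # True: K suppresses the N/H blow-up
```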
73. Constrained Least Squares Filtering
• The advantage of this filtering method is that it is sufficient to know only
the mean and variance of the noise.
• These parameters can easily be calculated from a degraded image.
• This algorithm yields an optimal result for each image to which it is applied.
• Disadvantages of the previous filtering methods:
• Need to know the power spectrum of the original image,
• or need to use a constant estimate of the ratio of the power spectra, which
is not always reasonable.
• In CLSF, g(x,y)=f(x,y)*h(x,y)+n(x,y) is represented in vector-matrix form as
follows:
• g = Hf + n
• Due to its large dimensionality, this formulation is very difficult to solve
numerically.
• This method reduces the sensitivity of the restoration to noise.
74. Contd..
• Let us define the criterion function C to be minimized as
C = Σₓ₌₀^(M-1) Σᵧ₌₀^(N-1) [∇²f(x,y)]²
• Subject to the constraint
||g - Hf'||² = ||n||²
• The frequency domain solution to this optimization problem is given by
the expression
• F'(u,v) = [ H*(u,v) / ( |H(u,v)|² + γ|P(u,v)|² ) ] G(u,v)
• Where γ is a parameter that must be adjusted so that the constraint is
satisfied,
• and P(u,v) is the Fourier transform of p(x,y), the Laplacian
operator kernel.
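A sketch of the frequency-domain solution (toy image, assumed blur, and a fixed illustrative γ; in practice γ would be adjusted iteratively until the constraint ||g - Hf'||² = ||n||² is met):

```python
import numpy as np

# Constrained least squares filter F' = [H* / (|H|^2 + gamma |P|^2)] G,
# where P is the transform of the Laplacian kernel padded to image size.
rng = np.random.default_rng(4)
M = 32
f = rng.uniform(0, 255, (M, M))
u = np.arange(M).reshape(-1, 1) - M // 2
v = np.arange(M).reshape(1, -1) - M // 2
H = np.fft.ifftshift(np.exp(-0.05 * (u ** 2 + v ** 2)))   # assumed blur

G = H * np.fft.fft2(f) + np.fft.fft2(rng.normal(0, 1, (M, M)))

p = np.zeros((M, M))
p[:3, :3] = [[0, -1, 0], [-1, 4, -1], [0, -1, 0]]          # Laplacian kernel
P = np.fft.fft2(p)

gamma = 0.001
F_cls = (np.conj(H) / (np.abs(H) ** 2 + gamma * np.abs(P) ** 2)) * G
f_cls = np.real(np.fft.ifft2(F_cls))

err_cls = np.abs(f_cls - f).mean()
err_inv = np.abs(np.real(np.fft.ifft2(G / H)) - f).mean()
print(err_cls < err_inv)   # True: the Laplacian term regularizes the inverse
```

Because |P(u,v)| grows with frequency while |H(u,v)| shrinks, the γ|P|² term takes over exactly where plain inverse filtering would amplify noise.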
75. Geometric Mean Filter
• It is a generalized form of the Wiener filter:
• F'(u,v) = [ H*(u,v)/|H(u,v)|² ]^α [ H*(u,v) / ( |H(u,v)|² + β Sn(u,v)/Sf(u,v) ) ]^(1-α) G(u,v)
• With α and β being positive, real constants. The geometric mean filter
consists of the two expressions in brackets raised to the powers α and 1-α,
respectively.
• When α=1 this filter reduces to the inverse filter, and with α=0 the filter
becomes the so-called parametric Wiener filter.
• With α=0 and β=1 the filter reduces to the standard Wiener filter.
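The limiting cases above can be verified numerically. A sketch with a few made-up sample values of H and of the spectra ratio Sn/Sf:

```python
import numpy as np

# Geometric mean filter: [H*/|H|^2]^alpha * [H*/(|H|^2 + beta*Sn/Sf)]^(1-alpha).
# Returns the transfer function to be multiplied against G(u,v).
def geo_mean_filter(H, snr_ratio, alpha, beta):
    inv = np.conj(H) / np.abs(H) ** 2                         # inverse-filter term
    wien = np.conj(H) / (np.abs(H) ** 2 + beta * snr_ratio)   # parametric Wiener term
    return inv ** alpha * wien ** (1.0 - alpha)

H = np.array([0.9, 0.5, 0.1])        # toy real transfer-function samples
ratio = np.array([0.01, 0.02, 0.05]) # Sn/Sf at those frequencies

inv_case = geo_mean_filter(H, ratio, alpha=1.0, beta=1.0)
wiener_case = geo_mean_filter(H, ratio, alpha=0.0, beta=1.0)

print(np.allclose(inv_case, 1.0 / H))                  # True: inverse filter
print(np.allclose(wiener_case, H / (H ** 2 + ratio)))  # True: standard Wiener
```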