2. Introduction
Image Enhancement:
Accentuation and sharpening of image features
(edges, boundaries, contrast) to improve the image's
visual appearance and to aid analysis.
Does not increase the information content, but
expands the dynamic range of features so they are
easier to detect.
Major challenge in image enhancement:
quantifying the criterion for the enhanced features.
4. Point Operation
Point operation: a zero-memory operation that maps
graylevel u ∈ [0, L] to graylevel v ∈ [0, L] through a
transform relation v = f(u).
Examples: contrast stretching, clipping, thresholding.
Contrast stretching transform (piecewise linear):
v = α·u,             0 ≤ u < a
  = β·(u − a) + v_a, a ≤ u < b
  = γ·(u − b) + v_b, b ≤ u < L
where v_a = α·a and v_b = β·(b − a) + v_a.
The slopes α, β, γ and the breakpoints a, b are determined from the image histogram.
[Figure: piecewise-linear transfer curve with breakpoints a and b]
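The piecewise-linear stretch above can be sketched as follows (a minimal NumPy version; the breakpoints and slopes in the example are illustrative, not taken from the slides):

```python
import numpy as np

def contrast_stretch(u, a, b, alpha, beta, gamma, L=255):
    """Piecewise-linear contrast stretch of graylevel u in [0, L].

    v = alpha*u             for 0 <= u < a
      = beta*(u - a) + v_a  for a <= u < b
      = gamma*(u - b) + v_b for b <= u <= L
    with v_a = alpha*a and v_b = beta*(b - a) + v_a.
    """
    u = np.asarray(u, dtype=float)
    v_a = alpha * a
    v_b = beta * (b - a) + v_a
    v = np.where(u < a, alpha * u,
        np.where(u < b, beta * (u - a) + v_a,
                 gamma * (u - b) + v_b))
    return np.clip(v, 0, L)

# Stretch the mid range [50, 200) with slope 1.5; compress the tails.
img = np.array([[10, 60], [150, 220]], dtype=np.uint8)
out = contrast_stretch(img, a=50, b=200, alpha=0.5, beta=1.5, gamma=0.5)
```

Note that a stretched segment has slope > 1 (here β = 1.5), while the tails are compressed (α = γ = 0.5).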
5. Point Operation
Contrast stretching
Bad contrast: inadequate illumination, sensor non-linearity.
In the contrast-stretched region, the transform gradient is > 1.
Special contrast-stretching cases:
Clipping: α = γ = 0.
Thresholding: α = γ = 0, a = b.
[Figure: transfer curves for contrast stretching, clipping, and thresholding]
9. Intensity Level Slicing
Segment a certain intensity level from the rest of the
image.
Without background:
v = L, a ≤ u ≤ b
  = 0, otherwise
With background:
v = L, a ≤ u ≤ b
  = u, otherwise
[Figure: slicing transfer curves, without and with background]
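Both slicing variants can be sketched with a single function (a NumPy sketch; the `keep_background` flag toggles between the two cases):

```python
import numpy as np

def slice_levels(u, a, b, L=255, keep_background=False):
    """Highlight graylevels in [a, b] at full intensity L.

    Without background: everything outside [a, b] goes to 0.
    With background: pixels outside [a, b] keep their value u.
    """
    u = np.asarray(u)
    inside = (u >= a) & (u <= b)
    background = u if keep_background else np.zeros_like(u)
    return np.where(inside, L, background)

img = np.array([90, 120, 200])
no_bg = slice_levels(img, a=100, b=150)
with_bg = slice_levels(img, a=100, b=150, keep_background=True)
```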
11. Range Compression and Digital Subtraction
Compress the image intensity range:
v = c·log10(1 + |u|), c = scaling constant
Digital Subtraction:
Detect differences/gradual intensity changes between
images.
Example: Digital Subtraction Angiography (DSA).
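A sketch of the log compression; choosing c so that the maximum input maps to 255 is one common convention, assumed here rather than taken from the slides:

```python
import numpy as np

def range_compress(u, c=1.0):
    """Logarithmic range compression: v = c * log10(1 + |u|)."""
    return c * np.log10(1.0 + np.abs(np.asarray(u, dtype=float)))

# Scaling constant that maps the maximum input magnitude to 255.
u = np.array([0.0, 9.0, 99.0, 9999.0])
c = 255.0 / np.log10(1.0 + np.abs(u).max())
v = range_compress(u, c)
```

Large dynamic ranges (e.g. Fourier spectra) are compressed to a displayable range this way.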
15. Histogram Equalization
Histogram equalization is not appropriate for images
with a narrow intensity distribution.
[Figure: original image and histogram-equalization result]
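For reference, standard histogram equalization via the normalized cumulative histogram can be sketched as (assuming 8-bit graylevels):

```python
import numpy as np

def histeq(img, L=256):
    """Histogram equalization: map graylevel u to round((L-1) * CDF(u))."""
    img = np.asarray(img)
    hist = np.bincount(img.ravel(), minlength=L)
    cdf = np.cumsum(hist) / img.size
    lut = np.round((L - 1) * cdf).astype(np.uint8)
    return lut[img]

# With a narrow distribution, the few occupied levels end up far
# apart -- which is why such images can look harsh after equalization.
img = np.array([[100, 100], [101, 102]], dtype=np.uint8)
out = histeq(img)
```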
16. Spatial Operation
Spatial Averaging
Each pixel's intensity is replaced by a weighted average of
the intensities of its neighborhood pixels.
v(m, n) = Σ_{(k,l)∈W} a(k, l)·y(m − k, n − l)
Spatial averaging: a(k, l) = constant:
v(m, n) = (1/N_W) Σ_{(k,l)∈W} y(m − k, n − l)
i.e. a(k, l) = 1/N_W, where N_W = number of pixels within the filtering window W.
17. Spatial Operation
Alternative method: each pixel is replaced by the average of
its own value and those of its 4 closest neighbors:
v(m,n) = 0.5·[ y(m,n) + 0.25·{ y(m−1,n)
+ y(m+1,n) + y(m,n−1) + y(m,n+1) } ]
Averaging ‘masks’:
2×2 window:    3×3 window:        5-point weighted averaging:
¼ ¼            1/9 1/9 1/9        0   1/8  0
¼ ¼            1/9 1/9 1/9        1/8 1/2  1/8
               1/9 1/9 1/9        0   1/8  0
(the 5-point mask follows from the formula above: center weight ½, edge-neighbor weights ⅛)
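Any of these masks can be applied through the generic form v(m,n) = Σ a(k,l)·y(m−k,n−l). A sketch using the 5-point mask implied by the 4-neighbor formula (center ½, edge neighbors ⅛; border replication is an assumption, and the symmetric mask makes correlation and convolution identical):

```python
import numpy as np

MASK_5PT = np.array([[0,   1/8, 0  ],
                     [1/8, 1/2, 1/8],
                     [0,   1/8, 0  ]])

def mask_average(y, a):
    """Weighted neighborhood average with mask a; edge-replicated borders."""
    y = np.asarray(y, dtype=float)
    pad = a.shape[0] // 2
    yp = np.pad(y, pad, mode='edge')
    out = np.zeros_like(y)
    for k in range(a.shape[0]):
        for l in range(a.shape[1]):
            out += a[k, l] * yp[k:k + y.shape[0], l:l + y.shape[1]]
    return out

img = np.zeros((5, 5)); img[2, 2] = 8.0   # a single bright spike
smoothed = mask_average(img, MASK_5PT)    # spike spread to its 4 neighbors
```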
18. Spatial Operation
Spatial averaging:
Smoothing
Low-pass filtering
Subsampling
Noisy image:
y(m,n) = u(m,n) + η(m,n)
η(m,n) = white noise with variance σ_η²
Output image:
v(m, n) = (1/N_W) Σ_{(k,l)∈W} u(m − k, n − l) + η̄(m, n)
η̄(m, n) = spatial average of the noise over the window
• If mean(η(m,n)) = 0, the noise power suppression is proportional
to the number of pixels in the filtering window: σ_η̄² = σ_η²/N_W.
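The σ²/N_W suppression can be checked numerically (a sketch; a 3×3 window gives N_W = 9):

```python
import numpy as np

# Averaging N_W iid zero-mean noise samples divides the variance by N_W:
# var(mean of N_W samples) = sigma^2 / N_W.
rng = np.random.default_rng(0)
sigma = 2.0
noise = rng.normal(0.0, sigma, size=(100_000, 9))  # 9 = pixels per window
averaged = noise.mean(axis=1)                      # one average per window
ratio = noise.var() / averaged.var()               # empirically close to 9
```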
27. Image Sharpening
Enhance delicate structures that are lost to blurring
effects.
Sharpen graylevel difference between neighbouring
pixels in an image.
High-pass filtering
Shift-invariant operator
Convolution mask contains a positive center coefficient
surrounded by negative coefficients, e.g.:
(1/9) · ⎡ −1 −1 −1 ⎤
        ⎢ −1  8 −1 ⎥
        ⎣ −1 −1 −1 ⎦
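Applying the mask above can be sketched as follows (zero padding at the borders is an assumption; since the mask is symmetric, the correlation computed here equals convolution). Its coefficients sum to 0, so flat regions map to 0 and only graylevel differences survive:

```python
import numpy as np

HP_MASK = np.array([[-1, -1, -1],
                    [-1,  8, -1],
                    [-1, -1, -1]]) / 9.0

def apply_mask(f, h):
    """Slide the mask over f (zero-padded borders) and accumulate."""
    f = np.asarray(f, dtype=float)
    pad = h.shape[0] // 2
    fp = np.pad(f, pad)
    out = np.zeros_like(f)
    for k in range(h.shape[0]):
        for l in range(h.shape[1]):
            out += h[k, l] * fp[k:k + f.shape[0], l:l + f.shape[1]]
    return out

flat = np.full((5, 5), 7.0)
edge = apply_mask(flat, HP_MASK)   # interior response is 0 on a flat image
```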
28. High Pass Filtering
Relation between high-pass filtered image g, original
image f, and its low-pass filtered version:
g(m, n) = f(m, n) – lowpass(f(m, n))
29. High-boost Filtering (Unsharp Masking)
Subtraction of the low-pass filtered original image from an
amplified original image:
g(m, n) = A·f(m, n) − lowpass(f(m, n))
        = (A − 1)·f(m, n) + [f(m, n) − lowpass(f(m, n))]
        = (A − 1)·f(m, n) + highpass(f(m, n))
Produces an edge-enhanced version of the original image.
[Figure: original image, high-pass result, high-boost result]
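A sketch of high-boost filtering with a 3×3 box filter standing in for the low-pass stage (both the box filter and the edge-replicated borders are assumptions, not from the slides):

```python
import numpy as np

def boxblur3(f):
    """3x3 box low-pass filter with edge replication (stand-in lowpass)."""
    f = np.asarray(f, dtype=float)
    fp = np.pad(f, 1, mode='edge')
    out = np.zeros_like(f)
    for k in range(3):
        for l in range(3):
            out += fp[k:k + f.shape[0], l:l + f.shape[1]]
    return out / 9.0

def high_boost(f, A=1.5):
    """g = A*f - lowpass(f) = (A-1)*f + highpass(f)."""
    f = np.asarray(f, dtype=float)
    return (A - 1.0) * f + (f - boxblur3(f))

flat = np.full((4, 4), 7.0)
boosted = high_boost(flat, A=1.5)   # flat regions keep only the (A-1)*f term
```

With A = 1 this reduces to plain high-pass filtering; larger A retains more of the original image alongside the enhanced edges.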
30. Derivative Filter
Sharpen image boundaries/edges based on a discrete
spatial gradient operator.
Implemented with a 2-D convolution approach:
g(m, n) = h_n(m, n) ∗ f(m, n),
h_n = edge-detector derivative-filter convolution mask.
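As one example of a derivative mask pair h_n, the Sobel masks can be applied as follows (a sketch; correlation over the interior only, no border padding):

```python
import numpy as np

# Sobel masks: horizontal and vertical derivative convolution masks.
HX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])  # horizontal gradient
HY = HX.T                                            # vertical gradient

def correlate(f, h):
    """2-D correlation over the interior (no padding)."""
    f = np.asarray(f, dtype=float)
    out = np.zeros((f.shape[0] - 2, f.shape[1] - 2))
    for k in range(3):
        for l in range(3):
            out += h[k, l] * f[k:k + out.shape[0], l:l + out.shape[1]]
    return out

img = np.zeros((5, 5)); img[:, 3:] = 1.0   # step edge between cols 2 and 3
mag = np.hypot(correlate(img, HX), correlate(img, HY))  # gradient magnitude
```

The gradient magnitude peaks at the step edge and is zero in flat regions.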
33. Frequency-domain Method
Enhancement conducted in the transform domain, followed by
an inverse transform to obtain the spatial domain enhanced
image representation.
For original image U = {u(m,n)} being transformed into V =
{v(k,l)} : V = AUAT.
The enhancement operation produces v’(k,l) = f(v(k,l)).
Spatial domain enhanced image: U’ = A⁻¹V’(Aᵀ)⁻¹.
Generalized Filtering:
zero-memory transformation
Pixel-to-pixel multiplication: v’(k, l) = g(k, l) v(k, l).
g(k, l) = zonal mask.
u(m,n) → [Unitary Transform: AUAᵀ] → v(k,l) → [Point Operation: f(·)] → v’(k,l) → [Inverse Transform: A⁻¹V’(Aᵀ)⁻¹] → u’(m,n)
34. Frequency-domain Method
Frequency-domain processing is utilized to accelerate
spatial image filtering.
Spatial-domain convolution is equivalent to pixel-to-pixel
multiplication in the Fourier (FFT) domain:
g(m,n) = h(m,n) ∗ f(m,n)  ⇔  G(u,v) = H(u,v)·F(u,v)
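The equivalence can be verified directly; with zero padding to the full linear-convolution size, the identity is exact (a sketch with arbitrary random signals):

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.random((8, 8))   # image
h = rng.random((3, 3))   # filter mask

# Zero-pad both to the linear-convolution size and multiply spectra.
M, N = 8 + 3 - 1, 8 + 3 - 1
G = np.fft.fft2(h, (M, N)) * np.fft.fft2(f, (M, N))
g_fft = np.fft.ifft2(G).real

# Reference: direct spatial convolution of the same signals.
g_direct = np.zeros((M, N))
for k in range(3):
    for l in range(3):
        g_direct[k:k + 8, l:l + 8] += h[k, l] * f
```

The FFT route costs O(MN log MN) regardless of mask size, which is where the speedup over direct convolution comes from for large masks.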
35. Ideal Low-pass Filter
Ideal low-pass filter:
H(u, v) = 1, if √(u² + v²) ≤ r₀
        = 0, if √(u² + v²) > r₀
[Figure: original image and LPF results for r₀ = 57, 36, 26]
• Ringing effect: property of ideal filter.
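The transfer function above, built on a centered frequency grid, can be sketched as follows (the ideal high-pass filter of slide 38 is simply 1 − H):

```python
import numpy as np

def ideal_lowpass(shape, r0):
    """Ideal LPF: 1 inside radius r0 of the (shifted) spectrum center."""
    rows, cols = shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)   # distance from DC
    return (D <= r0).astype(float)

def apply_filter(img, H):
    """Multiply the centered spectrum by H, transform back."""
    F = np.fft.fftshift(np.fft.fft2(img))
    return np.fft.ifft2(np.fft.ifftshift(H * F)).real

img = np.ones((64, 64))                  # constant image: DC component only
H = ideal_lowpass(img.shape, r0=26)
out = apply_filter(img, H)               # DC passes, so the image is unchanged
```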
37. Butterworth Low-pass Filter
False contouring removal and noise suppression.
[Figure: original image with false contours due to inadequate
quantization; noisy image; Butterworth-filtered image with the
false contours removed]
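The Butterworth low-pass transfer function (its standard form; order n = 2 is an assumed example) falls off smoothly instead of cutting off sharply, which is why it avoids the ideal filter's ringing while still suppressing noise and false contours:

```python
import numpy as np

def butterworth_lowpass(shape, r0, n=2):
    """Butterworth LPF of order n: H = 1 / (1 + (D/r0)^(2n))."""
    rows, cols = shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)   # distance from DC
    return 1.0 / (1.0 + (D / r0) ** (2 * n))

H = butterworth_lowpass((64, 64), r0=20)
# H = 1 at DC, falls to 0.5 exactly at the cutoff radius r0.
```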
38. Ideal High-pass Filter
Ideal high-pass filter:
H(u, v) = 0, if √(u² + v²) ≤ r₀
        = 1, if √(u² + v²) > r₀
[Figure: original image and HPF results for r₀ = 18, 36, 26]
• Ringing effect: property of ideal filter.
41. Pyramid Edge Detection
There may be a number of strong edges in the image that are not significant,
because they are short or unconnected.
Pyramid edge detection is used to enhance substantial (strong & long) edges,
but to ignore the weak or short edges.
Repetitive shrinkage, then edge tracking:
1. Cut the image down to quarter size by averaging each
group of 4 corresponding pixels.
2. Repeat x times, keeping each generated image.
3. On the smallest image, perform edge detection
(e.g. Sobel).
4. Edges found? (a threshold is needed)
5. If yes, perform edge detection on the group of 4
corresponding pixels in the next larger image.
6. Continue to the next larger image until the largest
image is reached.
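The steps above can be sketched in two pyramid levels (a sketch: simple neighbor differences stand in for the Sobel detector of step 3, and the image and threshold are illustrative):

```python
import numpy as np

def shrink(img):
    """Step 1: quarter the image by averaging each 2x2 pixel block."""
    f = np.asarray(img, dtype=float)
    r, c = f.shape[0] // 2 * 2, f.shape[1] // 2 * 2
    f = f[:r, :c]
    return (f[0::2, 0::2] + f[1::2, 0::2] +
            f[0::2, 1::2] + f[1::2, 1::2]) / 4.0

def grad_mag(f):
    """Edge strength from horizontal/vertical differences
    (a stand-in for the Sobel detector of step 3)."""
    gx = np.abs(np.diff(f, axis=1))[:-1, :]
    gy = np.abs(np.diff(f, axis=0))[:, :-1]
    return gx + gy

img = np.zeros((16, 16)); img[:, 8:] = 100.0   # a long vertical step edge
pyramid = [np.asarray(img, dtype=float)]
for _ in range(2):                             # step 2: repeated shrinkage
    pyramid.append(shrink(pyramid[-1]))

# Steps 3-4: detect on the smallest level, with a threshold.
coarse_edges = grad_mag(pyramid[-1]) > 10
# Steps 5-6: a coarse edge at (i, j) selects the 2x2 block
# (2i:2i+2, 2j:2j+2) of the next larger level for refinement,
# repeated up to the full-size image.
```

Short or weak edges are averaged away during shrinkage, so only substantial edges survive to the coarse level and get tracked back down.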