Advanced Computing: An International Journal (ACIJ), Vol.7, No.3, May 2016
DOI: 10.5121/acij.2016.7302
COLOUR IMAGE REPRESENTATION OF MULTISPECTRAL IMAGE FUSION
Preema Mole¹ and M. Mathurakani²
¹ M.Tech Student, ECE Department, Toc H Institute of Science and Technology, Ernakulam, India
² Professor, ECE Department, Toc H Institute of Science and Technology; formerly Scientist, DRDO, NPOL, Kochi, Ernakulam, India
ABSTRACT
The availability of imaging sensors operating in multiple spectral bands has created the need for image fusion algorithms that combine the images from these sensors efficiently into a single image that is more perceptible to the human eye. Multispectral image fusion is the process of combining images optically acquired in more than one spectral band. In this paper, we present a pixel-level image fusion method that combines four images from four different spectral bands, namely near infrared (0.76-0.90 µm), mid infrared 1 (1.55-1.75 µm), thermal infrared (10.4-12.5 µm) and mid infrared 2 (2.08-2.35 µm), to give a composite colour image. The work employs a fusion technique based on a linear transformation derived from the Cholesky decomposition of the covariance matrix of the source data, which converts the grayscale multispectral source images into a colour image. The work is composed of different stages that include estimation of the covariance matrix of the images, Cholesky decomposition, and the transformation itself. Finally, the fused colour image is compared with the fused image obtained by the PCA transformation.
KEYWORDS
Multispectral image fusion, Cholesky decomposition, principal component analysis, grayscale image.
1. INTRODUCTION
The deployment of multiple sensors operating in different spectral bands has led to the availability of large amounts of data. To extract information from these data, there is a need to combine the data from the different sensors. This can be accomplished by image fusion algorithms. Image fusion has been studied for the past twenty years. The main objective of image fusion is to generate, from a number of source images, a fused image that is more informative and perceptible to human vision. The concept of data fusion goes back to the 1950s and 1960s, with the search for methods of merging images from various sensors to provide a composite image that could be used to better identify natural and man-made objects. Burt [2] was one of the first to report the use of Laplacian pyramid techniques in binocular image fusion.
Image fusion is a category of data fusion where the data appear as arrays of numbers representing brightness, intensity, resolution and other image properties. These data can be two-dimensional, three-dimensional or multi-dimensional. The aim of image fusion is to reduce uncertainty and minimize redundancy in the final fused image while maximizing the relevant information. Some of the major benefits of image fusion are: wider spatial and temporal coverage, decreased uncertainty, improved reliability, and increased robustness of the system.
Image fusion algorithms are broadly classified into three levels: low, mid and high. These levels are sometimes also referred to as the pixel, feature and symbolic levels. At the pixel level, images are fused in a pixel-by-pixel manner, and this level is used to merge the physical parameters. The main advantage of pixel-level fusion is that the original measured quantities are directly involved in the fusion process. It is also simpler, linear, efficient with respect to time, and easy to
implement. This fusion technique can also detect undesirable noise, which makes it more effective for image fusion. Feature-level fusion needs to recognise objects from the various input data sources. This level therefore first requires the extraction of the features contained in the various input images. These features can be identified by characteristics such as size, contrast, shape or texture. Fusion at this level enables the detection of useful features with higher confidence. Fusion at the symbol level allows information to be efficiently merged at a higher level of abstraction. Here the information is extracted separately from each input data source, a decision is made based on the input images, and then fusion is done. This work mainly deals with the pixel-level fusion technique because of its linearity and simplicity.
In remote sensing applications, fusion of multiband images lying in different spectral bands of the electromagnetic spectrum is one of the key areas of research. A convenient approach is to reduce the dimensionality of the image set to three, so that the result can be displayed in the red, green and blue channels. Human vision functions similarly: the visible spectrum of light entering each point of the eye is converted into a three-component perceived colour, as captured by the long, medium and short cones, which roughly correspond to RGB. Also, the human eye can distinguish only a few levels of gray but can identify thousands of colours on the RGB colour scale. This core idea is used to yield a final colour image with maximum information from the data set and enhanced visual features compared to the source multispectral band images. In this work a different approach, called VTVA (Vector valued Total Variation Algorithm), is tried; it controls the correlation between the colour components of the final image by means of the covariance matrix.
1.1. RGB COLOUR SPACE
All colour spaces are three-dimensional orthogonal coordinate systems, meaning that there are
three axes (in this case the red, green, and blue colour intensities) that are perpendicular to one
another. This colour space is illustrated in figure 1. The red intensity starts at zero at the origin
and increases along one of the axes. Similarly, green and blue intensities start at the origin and
increase along their axes. Because each colour can only have values between zero and some maximum intensity (255 for 8-bit depth), the resulting structure is a cube. We can define any colour simply by giving its red, green, and blue values, or coordinates, within the colour cube. These coordinates are usually represented as an ordered triplet: the red, green, and blue intensity values enclosed within parentheses as (red, green, blue).
Figure 1. Data distribution before transformation
1.2. COLOUR PERCEPTION
This work produces a fused colour image that provides the user not only with the information but also with an image that is pleasant to the human eye. There are two types of receptors in the retina of the human eye, namely rods and cones. In terms of the RGB colour space, the cones are classified into short, medium and long cones. For the same wavelength, the medium and long cones have similar sensitivity, while the short cones have a different level of sensitivity.
Figure 2. Spectral sensitivity of S-cone, M-cone and L-cone
The response of each cone is obtained as the sum of the products of its sensitivity and the spectral radiance of the light. Figure 2 shows the spectral sensitivity curves for the three cones. The spectral sensitivity of the S-cones peaks at approximately 440 nm, of the M-cones at 545 nm and of the L-cones at 565 nm after correction for pre-retinal light loss, although the various measuring techniques result in slightly different maximum sensitivity values.
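To make the cone-response computation above concrete, the following minimal sketch (our own illustration, not part of the original work) approximates the response as a discrete sum of sensitivity times spectral radiance; the wavelength grid, the toy spectrum and the Gaussian sensitivity curves are made-up values used only for demonstration.

```python
import numpy as np

# Hypothetical wavelength grid (nm) and a made-up light spectrum, for illustration only.
wavelengths = np.arange(400, 701, 10)
spectral_radiance = np.exp(-((wavelengths - 550.0) / 80.0) ** 2)

# Toy cone sensitivity curves modelled as Gaussians peaking near 440/545/565 nm.
def sensitivity(peak_nm, width_nm=60.0):
    return np.exp(-((wavelengths - peak_nm) / width_nm) ** 2)

# Cone response = sum over wavelengths of (sensitivity x spectral radiance).
responses = {name: float(np.sum(sensitivity(peak) * spectral_radiance))
             for name, peak in [("S", 440.0), ("M", 545.0), ("L", 565.0)]}
print(responses)
```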
1.3. CHOLESKY DECOMPOSITION
Every symmetric positive definite matrix A can be decomposed into the product of a unique lower triangular matrix R and its transpose, i.e.,

A = R R^T (1)

where R is called the Cholesky factor of A and can be interpreted as a generalised square root of A. The Cholesky decomposition is unique when A is positive definite.
2. SURVEY OF RELATED RESEARCH
We examine some of the salient features of the related research that has been reported. Work on image fusion can be traced back to the mid-eighties. Burt [2] was one of the first to report the use of Laplacian pyramid techniques. In this method, several copies of the image were constructed at increasing scales, and each copy was convolved with the original image. The advantages of this method were in terms of both computational cost and complexity. In 1985, P.J. Burt and E.H. Adelson [3] observed that the essential problem in image merging is pattern conservation: the important patterns must be preserved in the composite image. In that paper, the authors proposed an approach called merging images through pattern decomposition. In 1987, E.H. Adelson patented a depth-of-focus imaging process method [4]. R.D. Lillquist in [5] presented a composite visible/thermal-infrared imaging apparatus (1988). Alexander Toet (1989) introduced the ROLP (Ratio Of Low Pass) pyramid method, which fits models of the human visual system; in this approach, judgments on the relative importance of pattern segments were based on their local luminance contrast values [6]. Alexander Toet in [7] introduced a new approach to image fusion based on hierarchical image decomposition. This approach produced images that appeared crisper than the images produced by other linear fusion schemes. H. Li et al. in [8] presented an image fusion scheme based on wavelet transforms. In that paper, the fusion
took place at different resolution levels, and the more dominant features at each scale were preserved in the new multiresolution representation. I. Koren et al. in 1995 proposed a method of image fusion using the steerable dyadic wavelet transform [9], which executed low-level fusion on registered images. Shutao Li et al. in [10] proposed a pixel-level image fusion algorithm for merging Landsat Thematic Mapper (TM) images and SPOT panchromatic images. V.S. Petrovic et al. in [11] introduced a novel approach to multiresolution signal-level image fusion for accurately transferring visual information from any number of input image signals into a single fused image without the loss of information or the introduction of distortion. This new gradient fusion reduced the amount of distortion, artifacts and the loss of contrast information. V. Tsagaris et al. in 2005 came up with a method based on partitioning the hyperspectral data into subgroups of bands [12]. V. Tsagaris and V. Anastassopoulos proposed multispectral image fusion for improved RGB representation based on perceptual attributes in [13]. Q. Du, N. Raksuntorn, S. Cai, and R. J. Moorhead in 2008 investigated RGB colour composition schemes for hyperspectral imagery. In their paper, they proposed to display the useful information as distinctively as possible for high class separability. The work also demonstrated that the use of a data processing step can significantly improve the quality of the colour display, and that data classification generally outperforms data transformation, although its implementation is more complicated [14].
3. IMAGE FUSION METHOD
There are various methods that have been developed to perform image fusion. The methods focused on in this paper are principal component analysis (PCA) and VTVA. In this section, the necessary background for the VTVA method is provided. An introduction to PCA and the drawbacks of using PCA to obtain a colour image are also discussed.
3.1. Principal Component Analysis for Grayscale
Principal Component Analysis is a mathematical procedure that transforms a number of correlated variables into a number of uncorrelated variables called principal components. These components capture as much of the variance in the data as possible. The first principal component is taken along the direction of maximum variance. The second principal component is orthogonal to the first. The third principal component is taken in the direction of maximum variance in the subspace perpendicular to the first two, and so on. The principal component weights are then multiplied with the source images and added together to give the fused image. PCA is also known as the Karhunen-Loeve transform or the Hotelling transform.
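The following sketch illustrates one common way to realise the PCA fusion described above (it is an assumption about the implementation, not the authors' code): the fusion weights are taken from the eigenvector of the band covariance matrix that corresponds to the largest eigenvalue, and the weighted sum of the co-registered grayscale bands gives the fused image.

```python
import numpy as np

def pca_fuse(images):
    """Fuse co-registered grayscale images using the first principal component.

    images: list of 2-D float arrays of identical shape.
    Returns a fused 2-D array scaled to [0, 255].
    """
    # Stack each image as one row of observations (bands x pixels).
    data = np.stack([img.ravel().astype(float) for img in images])
    cov = np.cov(data)                       # band-to-band covariance matrix

    # Eigenvector of the largest eigenvalue gives the fusion weights.
    eigvals, eigvecs = np.linalg.eigh(cov)
    weights = np.abs(eigvecs[:, -1])
    weights = weights / weights.sum()        # normalise so the weights sum to 1

    fused = sum(w * img.astype(float) for w, img in zip(weights, images))

    # Scale to [0, 255] for display.
    fused = 255.0 * (fused - fused.min()) / (fused.max() - fused.min() + 1e-12)
    return fused

# Usage (hypothetical band arrays of equal size):
# fused = pca_fuse([band4, band5, band6, band7])
```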
3.2. Principal Component Analysis for Colour
Most natural colour images are captured in RGB; RGB does not separate colour from intensity, which results in highly correlated channels. However, when PCA is used to obtain a colour image the results are very poor, because PCA totally decorrelates the three components. The entropy and standard deviation of the final fused image also decrease. Thus, in this paper, another method (VTVA) is used to obtain a colour image by fusing the grayscale images.
3.3. Vector valued Total Variation Algorithm
The idea of this method of image fusion is not to totally decorrelate the data but to control the correlation between the colour components of the final image. This is achieved by means of the covariance matrix. The method distributes the energy of the source multispectral bands so that the correlation between the RGB components of the final image can be selected by the user and adjusted to be similar to that of natural colour images. For example, one could use the mean correlations between the red-green, red-blue and green-blue channels computed over a database of a large number of natural images (such as the Corel or Imagine Macmillan database), as in [6]. In this way, no additional transformation is needed.
This can be achieved using a linear transformation of the form
Y = A^T X (2)

where X and Y are the population vectors of the source and the final images, respectively. The relation between the covariance matrices is

Cy = E[Y Y^T] (3)
   = E[A^T X X^T A]
   = A^T E[X X^T] A
Cy = A^T Cx A (4)
where Cx is the covariance matrix of the vector population X and Cy is the covariance matrix of the resulting vector population Y. Choosing the covariance matrix according to the statistical properties of natural colour images ensures that the final colour image will be pleasing to the human eye. Both matrices Cx and Cy are of the same dimension. If both are known, the transformation matrix A can be evaluated using the Cholesky factorization method. Accordingly, a symmetric positive definite matrix S can be decomposed by means of an upper triangular matrix Q so that

S = Q^T Q (5)
Using the above-mentioned factorization, Cx and Cy can also be written as

Cx = Qx^T Qx,    Cy = Qy^T Qy (6)

Substituting (6) into (4) gives

Qy^T Qy = A^T Qx^T Qx A = (Qx A)^T (Qx A) (7)

Thus,

Qy = Qx A (8)

and the transformation matrix A is

A = Qx^-1 Qy (9)
This expression for the transformation matrix A implies that the transformation depends on the statistical properties of the original multispectral data set. In addition, in the design of the transformation, the statistical properties of natural colour images are taken into account. The resulting population vector Y is of the same order as the original population vector X, but only three of the components of vector Y will be used for the colour representation. The evaluation of the desired covariance matrix Cy for the transformed vector is based on the statistical properties of natural colour images and on requirements imposed by the user.
The relation between the covariance matrix and the correlation coefficient matrix Ry is given by

Cy = Σ Ry Σ^T (10)
where Σ = diag(σy1, σy2, ..., σyK) is the diagonal matrix with the standard deviations of the new vectors on the main diagonal, and Ry is the desired correlation coefficient matrix, with ones on its main diagonal and the desired correlation coefficients between the components of Y as its off-diagonal entries.
3.4. VTVA Algorithm
The necessary steps for the method implementation are summarized as follows.
1) Estimate the covariance matrix Cx of population vectors X.
2) Compute the covariance matrix Cy of population vectors Y , using the correlation coefficient
matrix Ry and the diagonal matrix Σ.
3) Decompose the covariance matrices Cx and Cy using the Cholesky factorization method in (6)
by means of the upper triangular matrices Qx and Qy, respectively.
4) Compute the inverse of the upper triangular matrix Qx, namely, Qx^-1.
5) Compute the transformation matrix A in (9).
6) Compute the transformed population vectors Y using (2).
7) Scale the mapped images to the range of [0, 255] in order to produce RGB representation.
For high visual quality, the final color image produced by the transformation must have a high
degree of contrast. In other words, the energy of the original data must be sustained and equally
distributed in the RGB components of the final color image. This requirement is expressed as
follows:
σy1² + σy2² + σy3² ≈ σx1² + σx2² + ... + σxK² (11)

with σy1 ≈ σy2 ≈ σy3. The remaining bands should have negligible energy (contrast) and will not be used in forming the final colour image. Their variance can be adjusted to small values, for example, σyi = 10^-4 σy1, for i = 4, ..., K.
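To make the seven steps concrete, a minimal NumPy sketch of the VTVA procedure is given below. The function name, the example correlation matrix Ry and the target standard deviations are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def vtva_fuse(bands, Ry, sigma_y):
    """Fuse K co-registered grayscale bands into an RGB image (VTVA sketch).

    bands   : list of K 2-D float arrays of identical shape.
    Ry      : K x K desired correlation coefficient matrix (ones on the diagonal).
    sigma_y : length-K vector of desired standard deviations for Y.
    """
    h, w = bands[0].shape
    X = np.stack([b.ravel().astype(float) for b in bands])     # K x N population vectors

    # Step 1: covariance matrix of the source population vectors.
    Cx = np.cov(X)

    # Step 2: desired covariance matrix Cy = S Ry S^T with S = diag(sigma_y), as in (10).
    S = np.diag(sigma_y)
    Cy = S @ Ry @ S.T

    # Step 3: Cholesky factors written as upper triangular matrices (Cx = Qx^T Qx, Cy = Qy^T Qy).
    Qx = np.linalg.cholesky(Cx).T
    Qy = np.linalg.cholesky(Cy).T

    # Steps 4-5: transformation matrix A = Qx^-1 Qy, as in (9).
    A = np.linalg.solve(Qx, Qy)

    # Step 6: transformed population vectors Y = A^T X (applied to mean-centred data).
    Y = A.T @ (X - X.mean(axis=1, keepdims=True))

    # Step 7: keep the first three components and scale each to [0, 255] for RGB display.
    rgb = np.zeros((h, w, 3))
    for c in range(3):
        ch = Y[c].reshape(h, w)
        rgb[..., c] = 255.0 * (ch - ch.min()) / (ch.max() - ch.min() + 1e-12)
    return rgb.astype(np.uint8)

# Usage (hypothetical): equal energy in the RGB components, natural-image-like correlations,
# and negligible variance for the remaining band.
# Ry = np.array([[1.0, 0.7, 0.5, 0.0],
#                [0.7, 1.0, 0.7, 0.0],
#                [0.5, 0.7, 1.0, 0.0],
#                [0.0, 0.0, 0.0, 1.0]])
# sigma_y = np.array([60.0, 60.0, 60.0, 60.0e-4])
# rgb = vtva_fuse([band4, band5, band6, band7], Ry, sigma_y)
```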
4. APPLICATIONS
Remote sensing techniques have proven to be a powerful tool for monitoring the earth's surface and atmosphere. The applications of image fusion can be divided into military and non-military applications. Military applications include detection, location tracking, identification of military entities, ocean surveillance, etc. Image fusion has also been extensively used in non-military applications, which include the interpretation and classification of aerial and satellite images.
5. EXPERIMENTAL PROCEDURES AND RESULTS
The multispectral data set used in this work consists of seven multispectral band images acquired by the Landsat Thematic Mapper (TM) sensor. The size of each image is 850 x 1100 pixels. The average orbital height for these images is 700 km, and the spatial resolution is 30 meters, except for band 6, which is 90 meters. The spectral range of the sensor is given in table 1.
Table 1. Spectral range of data

Band               Spectral range (µm)
Near infrared      0.76-0.85
Mid infrared 1     1.55-1.75
Thermal infrared   10.4-12.5
Mid infrared 2     2.08-2.35
The experiments conducted in this work aim to compare the performance of PCA (grayscale), PCA (colour) and VTVA. Fusion was performed using four grayscale images. The parameters examined for PCA and VTVA were entropy, standard deviation, number of levels and mean.
Figure 3. Detailed images: (a) band 4, near infrared; (b) band 5, mid infrared 1; (c) band 6, thermal infrared; (d) band 7, mid infrared 2
The input grayscale source images are shown in figure 3. The image in figure 4 is derived from the PCA analysis. This fused image is clearer and contains more information than the input source images.

Figure 4. Fused Grayscale image using PCA

Figure 5 shows the fused image obtained when PCA is used to produce a colour image.

Figure 5. Fused Colour image using PCA

In this figure it can be seen that the fused image has some colour components, but the amount of information is too low. This happens because the PCA technique totally decorrelates the three colour components, whereas a good natural colour image has a lot of correlation between its channels.

Finally, the image in figure 6 shows the fused colour image obtained using the VTVA method. This method converts the input grayscale images into a colour image with more information and contrast than PCA.
Figure 6. Fused Colour image using VTVA
This colour image has a high degree of correlation between the three colour components, and thus the image has more information and contrast.

Table 2 compares the parameters, namely entropy, standard deviation, mean and number of levels, for the input images and the final fused images.
Table 2. Parameter Comparison
Images            Entropy    Standard deviation   Mean       No. of levels
Image 1           6.9310     65.7602              119.1191   256
Image 2           7.0911     67.1405              94.8459    256
Image 3           7.2474     70.4155              142.1532   256
Image 4           7.7525     76.0397              119.8504   256
PCA (grayscale)   7.5036     62.4518              119.1782   253
PCA (coloured)    17.5470    46.4458              84.4458    145
VTVA              17.9930    74.7853              139.54     573169
Comparing these values shows that the fused image obtained using VTVA has more information, more contrast and more levels than the one obtained with PCA. The comparison also shows that the fused colour image obtained using VTVA has more levels than the fused grayscale image obtained using PCA. Thus, of the methods compared, VTVA is the best for fusing the grayscale source images and transforming them into colour.
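For reference, the parameters reported in table 2 can be reproduced with simple definitions such as those sketched below; these definitions (Shannon entropy of the level histogram, and counting distinct gray values or RGB triplets as levels) are our assumptions about how the figures were computed, not taken from the paper.

```python
import numpy as np

def image_stats(img):
    """Entropy (bits), standard deviation, mean and number of distinct levels of an image.

    For a colour image (H x W x 3) a 'level' is a distinct RGB triplet;
    for a grayscale image it is a distinct gray value.
    """
    if img.ndim == 3:
        samples = img.reshape(-1, img.shape[-1])
        _, counts = np.unique(samples, axis=0, return_counts=True)
    else:
        _, counts = np.unique(img.ravel(), return_counts=True)
    p = counts / counts.sum()
    entropy = float(-np.sum(p * np.log2(p)))   # Shannon entropy of the level histogram
    return entropy, float(img.std()), float(img.mean()), int(len(counts))

# Usage (hypothetical arrays): image_stats(fused_gray); image_stats(fused_rgb)
```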
6. CONCLUSIONS
In this paper, four grayscale images from different spectral bands were fused to obtain more information, first using the Principal Component Analysis method. It is concluded that PCA cannot be used to obtain the final colour image. When these images are fused using VTVA, however, the final fused image is found to be more pleasant to the human eye, and it has more information and contrast than the image obtained with the other method. The work also shows that the fused colour image has a larger number of levels, which means more combinations of colours are present, whereas a grayscale image has at most 256 gray levels.
REFERENCES
[1] Dimitrios Besiris, Vassilis Tsagaris, Nikolaos Fragoulis, and Christos Theoharatos, An FPGA-based hardware implementation of configurable pixel-level colour image fusion, IEEE Trans. Geosci. Remote Sens., vol. 50, no. 2, Feb. 2012.
[2] P.J. Burt, The pyramid as a structure for efficient computation, in: A. Rosenfeld (Ed.), Multiresolution Image Processing and Analysis, Springer-Verlag, Berlin, 1984, pp. 6-35.
[3] P.J. Burt and E.H. Adelson, Merging images through pattern decomposition, Proc. SPIE 575 (1985) 173-181.
[4] E.H. Adelson, Depth-of-Focus Imaging Process Method, United States Patent 4,661,986 (1987).
[5] R.D. Lillquist, Composite visible/thermal-infrared imaging apparatus, United States Patent 4,751,571 (1988).
[6] A. Toet, Image fusion by a ratio of low-pass pyramid, Pattern Recognition Letters 9 (4) (1989) 245-253.
[7] A. Toet, Hierarchical image fusion, Machine Vision and Applications 3 (1990) 1-11.
[8] H. Li, S. Manjunath, and S. Mitra, Multisensor image fusion using the wavelet transform, Graphical Models and Image Processing 57 (3) (1995) 235-245.
[9] I. Koren, A. Laine, and F. Taylor, Image fusion using steerable dyadic wavelet transform, in: Proceedings of the International Conference on Image Processing, Washington, DC, USA, 1995, pp. 232-235.
[10] S. Li, J.T. Kwok, and Y. Wang, Using the discrete wavelet frame transform to merge Landsat TM and SPOT panchromatic images, Information Fusion 3 (2002) 17-23.
[11] V.S. Petrovic and C.S. Xydeas, Gradient-based multiresolution image fusion, IEEE Transactions on Image Processing 13 (2) (2004) 228-237.
[12] V. Tsagaris, V. Anastassopoulos, and G. Lampropoulos, Fusion of hyperspectral data using segmented PCT for enhanced color representation, IEEE Trans. Geosci. Remote Sens., vol. 43, no. 10, pp. 2365-2375, Oct. 2005.
[13] V. Tsagaris and V. Anastassopoulos, Multispectral image fusion for improved RGB representation based on perceptual attributes, Int. J. Remote Sens., vol. 26, no. 15, pp. 3241-3254, Aug. 2005.
[14] Q. Du, N. Raksuntorn, S. Cai, and R. J. Moorhead, Color display for hyperspectral imagery, IEEE Trans. Geosci. Remote Sens., vol. 46, no. 6, pp. 1858-1866, Jun. 2008.
Authors
Preema Mole has graduated from Sree Narayana Gurukulam College of
Engineering of Mahatma Gandhi University in Electronics & Communication
Engineering in 2013. She is currently pursuing her M.Tech Degree in
Wireless Technology from Toc H Institute of Science & Technology,
Arakunnam. Her research interests include Signal Processing and Image Processing.
M. Mathurakani graduated from Alagappa Chettiar College of Engineering and Technology of Madurai University and completed his masters at PSG College of Technology of Madras University. He has worked as a Scientist in the Defence Research and Development Organisation (DRDO) in the area of signal processing and embedded system design and implementation. He was honoured with the DRDO Scientist of the Year award in 2003. Currently he is a professor at Toc H Institute of Science and Technology, Arakunnam. His research interests include signal processing algorithms, embedded system modeling and synthesis, reusable software architectures, and MIMO and OFDM based communication systems.
Versatile Engineering Construction Firms
 
TEST CASE GENERATION GENERATION BLOCK BOX APPROACH
TEST CASE GENERATION GENERATION BLOCK BOX APPROACHTEST CASE GENERATION GENERATION BLOCK BOX APPROACH
TEST CASE GENERATION GENERATION BLOCK BOX APPROACH
 
Comprehensive energy systems.pdf Comprehensive energy systems.pdf
Comprehensive energy systems.pdf Comprehensive energy systems.pdfComprehensive energy systems.pdf Comprehensive energy systems.pdf
Comprehensive energy systems.pdf Comprehensive energy systems.pdf
 
ADM100 Running Book for sap basis domain study
ADM100 Running Book for sap basis domain studyADM100 Running Book for sap basis domain study
ADM100 Running Book for sap basis domain study
 
AntColonyOptimizationManetNetworkAODV.pptx
AntColonyOptimizationManetNetworkAODV.pptxAntColonyOptimizationManetNetworkAODV.pptx
AntColonyOptimizationManetNetworkAODV.pptx
 
Artificial Intelligence in Power System overview
Artificial Intelligence in Power System overviewArtificial Intelligence in Power System overview
Artificial Intelligence in Power System overview
 
Analysis and Evaluation of Dal Lake Biomass for Conversion to Fuel/Green fert...
Analysis and Evaluation of Dal Lake Biomass for Conversion to Fuel/Green fert...Analysis and Evaluation of Dal Lake Biomass for Conversion to Fuel/Green fert...
Analysis and Evaluation of Dal Lake Biomass for Conversion to Fuel/Green fert...
 

COLOUR IMAGE REPRESENTATION OF MULTISPECTRAL IMAGE FUSION

Pixel-level fusion can also detect undesirable noise, which makes it more effective for image fusion. Feature-level fusion needs to recognise objects from the various input data sources. This level therefore first requires the extraction of the features contained in the input images; these features can be identified by characteristics such as size, contrast, shape or texture. Fusion at this level enables the detection of useful features with higher confidence. Fusion at the symbol level allows information to be merged efficiently at a higher level of abstraction: information is extracted separately from each input data source, a decision is made on the basis of the input images, and fusion is then performed. This work mainly deals with the pixel-level fusion technique because of its linearity and simplicity.

In remote sensing applications, fusion of multiband images lying in different spectral bands of the electromagnetic spectrum is one of the key areas of research. A convenient method is to reduce the dimensionality of the image set to three, which can then be displayed as the red, green and blue channels. Human vision functions similarly: the visible spectrum of light entering each point of the eye is converted into a three-component perceived colour, as captured by the long, medium and short cones, which roughly correspond to RGB. Moreover, the human eye can distinguish only a few levels of grayscale but can identify thousands of colours on the RGB colour scale. This core idea is used to yield a final colour image with maximum information from the data set and enhanced visual features compared to the source multispectral band images. In this work a different approach called VTVA (Vector Valued Total Variation Algorithm) is tried, which controls the correlation between the colour components of the final image by means of the covariance matrix.

1.1. RGB COLOUR SPACE

The RGB colour space is a three-dimensional orthogonal coordinate system, meaning that there are three axes (the red, green and blue colour intensities) that are perpendicular to one another. This colour space is illustrated in figure 1. The red intensity starts at zero at the origin and increases along one of the axes. Similarly, the green and blue intensities start at the origin and increase along their axes. Because each colour can only have values between zero and some maximum intensity (255 for 8-bit depth), the resulting structure is a cube. We can define any colour simply by giving its red, green and blue values, or coordinates, within the colour cube. These coordinates are usually represented as an ordered triplet: the red, green and blue intensity values enclosed within parentheses as (red, green, blue).

Figure 1. Data distribution before transformation
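As an illustration of this reduction, the K co-registered grayscale bands can be stacked into a matrix of per-pixel population vectors, and the three components chosen for display can be read directly as (red, green, blue) coordinates inside the colour cube. The following is a minimal numpy sketch; the function name and the synthetic data are purely illustrative and not part of the original work.

```python
import numpy as np

def build_population_matrix(bands):
    """Stack K co-registered grayscale bands (each H x W) into a K x N
    matrix X whose columns are the per-pixel population vectors."""
    X = np.stack([np.asarray(b).reshape(-1) for b in bands]).astype(np.float64)
    return X

# Four synthetic 8-bit bands; the first three components of any column of X
# can be displayed directly as an (R, G, B) coordinate in the colour cube.
bands = [np.random.randint(0, 256, (4, 4), dtype=np.uint8) for _ in range(4)]
X = build_population_matrix(bands)
r, g, b = X[:3, 0]          # colour triplet assigned to the first pixel
print(X.shape, (r, g, b))
```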
1.2. COLOUR PERCEPTION

This work displays a fused colour image, giving the user not only the information content but also an image that is pleasant to the human eye. There are two types of receptor in the retina of the human eye, namely rods and cones. In terms of the RGB colour space, the cones are classified into short (S), medium (M) and long (L) cones. For the same wavelength, the medium and long cones have similar sensitivity, while the short cones have a different level of sensitivity.

Figure 2. Spectral sensitivity of the S-cone, M-cone and L-cone

The response is obtained as the sum of the products of the cone sensitivity and the spectral radiance of the light. Figure 2 shows the spectral sensitivity curves for the three cones. The spectral sensitivity of the S-cones peaks at approximately 440 nm, that of the M-cones at 545 nm and that of the L-cones at 565 nm after correction for pre-retinal light loss, although the various measuring techniques result in slightly different maximum sensitivity values.

1.3. CHOLESKY DECOMPOSITION

Every symmetric positive definite matrix A can be decomposed into a product of a unique lower triangular matrix R and its transpose, i.e.

A = R R^T (1)

where R is called the Cholesky factor of A and can be interpreted as a generalised square root of A. The Cholesky decomposition is unique when A is positive definite.

2. SURVEY OF RELATED RESEARCH

We examine some of the salient features of related research that has been reported. Work on image fusion can be traced back to the mid eighties. Burt [2] was one of the first to report the use of Laplacian pyramid techniques. In this method, several copies of the image were constructed at increasing scales, and each copy was convolved with the original image. The advantages of this method were in terms of both computational cost and complexity. In 1985, P. J. Burt and E. H. Adelson [3] analysed the essential problem in image merging as one of pattern conservation: patterns must be preserved in the composite image. They proposed an approach called merging images through pattern decomposition. In 1987, E. H. Adelson patented a depth-of-focus imaging process method [4], and in 1988 R. D. Lillquist presented a composite visible/thermal-infrared imaging apparatus [5]. Alexander Toet (1989) introduced the ROLP (Ratio Of Low-Pass) pyramid method, which fits models of the human visual system; in this approach, judgments on the relative importance of pattern segments were based on their local luminance contrast values [6]. Toet [7] later introduced a new approach to image fusion based on hierarchical image decomposition, which produced images that appeared crisper than those produced by other linear fusion schemes. H. Li et al. [8] presented an image fusion scheme based on wavelet transforms. In this scheme, the fusion
took place at different resolution levels, and the more dominant features at each scale were preserved in the new multiresolution representation. In 1995, I. Koren et al. [9] proposed a method of image fusion using the steerable dyadic wavelet transform, which performed low-level fusion on registered images. Shutao Li et al. [10] proposed a pixel-level image fusion algorithm for merging Landsat Thematic Mapper (TM) images and SPOT panchromatic images. V. S. Petrovic et al. [11] introduced a novel approach to multiresolution signal-level image fusion for accurately transferring visual information from any number of input image signals into a single fused image without loss of information or the introduction of distortion; this gradient fusion reduced the amount of distortion, artefacts and loss of contrast information. In 2005, V. Tsagaris et al. came up with a method based on partitioning the hyperspectral data into subgroups of bands [12]. V. Tsagaris and V. Anastassopoulos proposed multispectral image fusion for improved RGB representation based on perceptual attributes in [13]. In 2008, Q. Du, N. Raksuntorn, S. Cai and R. J. Moorhead investigated RGB colour composition schemes for hyperspectral imagery; they proposed to display the useful information as distinctively as possible for high class separability, and demonstrated that a data processing step can significantly improve the quality of the colour display, with data classification generally outperforming data transformation, although the implementation is more complicated [14].

3. IMAGE FUSION METHOD

Various methods have been developed to perform image fusion. The methods considered in this paper are principal component analysis (PCA) and VTVA. In this section, the necessary background for the VTVA method is provided, together with an introduction to PCA and the drawbacks of using PCA to obtain a colour image.

3.1. Principal Component Analysis for Grayscale

Principal component analysis is a mathematical procedure that transforms a number of correlated variables into a number of uncorrelated variables called principal components. These components capture as much of the variance in the data as possible. The first principal component is taken along the direction with the maximum variance. The second principal component is orthogonal to the first. The third principal component is taken in the direction of maximum variance in the subspace perpendicular to the first two, and so on. These principal components are then multiplied with the source images and added together to give the fused image. PCA is also known as the Karhunen-Loeve transform or the Hotelling transform.

3.2. Principal Component Analysis for Colour

Most natural colour images are captured in RGB. The main reason is that RGB does not separate colour from intensity, which results in highly correlated channels. When PCA is used to obtain a colour image, however, the results are very poor, because PCA totally decorrelates the three components; the entropy and standard deviation of the final fused image also decrease. Thus, in this paper another method (VTVA) is used to obtain a colour image by fusing grayscale images.
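The PCA-based grayscale fusion of Section 3.1 can be sketched in a few lines of numpy. The weighting scheme below (weights taken from the first eigenvector of the band covariance matrix and normalised to sum to one) is a common formulation assumed here for illustration; it is not necessarily identical to the authors' implementation.

```python
import numpy as np

def pca_fuse(bands):
    """Pixel-level fusion of co-registered grayscale bands using weights from
    the first principal component of the band covariance matrix."""
    stack = np.stack([np.asarray(b) for b in bands]).astype(np.float64)   # K x H x W
    X = stack.reshape(len(bands), -1)                                     # K x N
    Xc = X - X.mean(axis=1, keepdims=True)
    cov = np.cov(Xc)                        # K x K band covariance
    _, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    w = eigvecs[:, -1]                      # eigenvector of the largest eigenvalue
    w = np.abs(w) / np.abs(w).sum()         # normalise weights to sum to one
    fused = np.tensordot(w, stack, axes=1)  # weighted sum of the source bands
    return np.clip(fused, 0, 255).astype(np.uint8)

# Toy usage with four synthetic 8-bit bands
bands = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(4)]
print(pca_fuse(bands).shape)  # (64, 64)
```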
3.3. Vector Valued Total Variation Algorithm

The idea of this method of image fusion is not to totally decorrelate the data but to control the correlation between the colour components of the final image. This is achieved by means of the covariance matrix. The method distributes the energy of the source multispectral bands so that the correlation between the RGB components of the final image may be selected by the user and adjusted to be similar to that of natural colour images. For example, one could calculate the mean correlation between the red-green, red-blue and green-blue channels over a database of a large number of natural images (such as the Corel or Imagine Macmillan database) as in [6]. In this way, no additional transformation is needed. This can be achieved using a linear transformation of the form

Y = A^T X (2)

where X and Y are the population vectors of the source and final images, respectively. The relation between the covariance matrices is

C_y = E[Y Y^T] = E[A^T X X^T A] = A^T E[X X^T] A (3)

C_y = A^T C_x A (4)

where C_x is the covariance of the vector population X and C_y is the covariance of the resulting vector population Y. Choosing the covariance matrix according to the statistical properties of natural colour images ensures that the final colour image will be pleasing to the human eye. The matrices C_x and C_y have the same dimension. If both are known, the transformation matrix A can be evaluated using the Cholesky factorisation method. Accordingly, a symmetric positive definite matrix S can be decomposed by means of an upper triangular matrix Q so that

S = Q^T Q (5)

Using this factorisation, C_x and C_y can be written as

C_x = Q_x^T Q_x, C_y = Q_y^T Q_y (6)

Substituting (6) into (4) gives

Q_y^T Q_y = A^T Q_x^T Q_x A = (Q_x A)^T (Q_x A) (7)

Thus

Q_y = Q_x A (8)

and the transformation matrix A is

A = Q_x^{-1} Q_y (9)

This expression shows that the transformation depends on the statistical properties of the original multispectral data set, while the statistical properties of natural colour images enter the design of the transformation through C_y. The resulting population vector Y is of the same order as the original population vector X, but only three of the components of Y are used for the colour representation. The evaluation of the desired covariance matrix C_y for the transformed vector is based on the statistical properties of natural colour images and on requirements imposed by the user. The relation between the covariance matrix and the correlation coefficient matrix R_y is

C_y = Σ R_y Σ^T (10)
Here

Σ = diag(σ_y1, σ_y2, ..., σ_yK)

is the diagonal matrix with the standard deviations of the new vector components on the main diagonal, and R_y is the desired correlation coefficient matrix, with ones on its main diagonal and the desired correlation coefficients between pairs of components off the diagonal.

3.4. VTVA Algorithm

The necessary steps for implementing the method are summarised as follows.

1) Estimate the covariance matrix C_x of the population vectors X.
2) Compute the covariance matrix C_y of the population vectors Y, using the correlation coefficient matrix R_y and the diagonal matrix Σ.
3) Decompose the covariance matrices C_x and C_y using the Cholesky factorisation in (6), obtaining the upper triangular matrices Q_x and Q_y, respectively.
4) Compute the inverse of the upper triangular matrix Q_x, namely Q_x^{-1}.
5) Compute the transformation matrix A in (9).
6) Compute the transformed population vectors Y using (2).
7) Scale the mapped images to the range [0, 255] in order to produce the RGB representation.

For high visual quality, the final colour image produced by the transformation must have a high degree of contrast. In other words, the energy of the original data must be sustained and distributed equally among the RGB components of the final colour image. This requirement is expressed as

σ_y1^2 + σ_y2^2 + σ_y3^2 = σ_x1^2 + σ_x2^2 + ... + σ_xK^2 (11)

with σ_y1 ≈ σ_y2 ≈ σ_y3. The remaining bands should have negligible energy (contrast) and are not used in forming the final colour image; their variances can be adjusted to small values, for example σ_yi = 10^-4 σ_y1 for i = 4, ..., K.
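The following numpy sketch walks through steps 1)-7). The choice of target statistics (equal standard deviations for the first three components derived from the total source variance, a single user-selected correlation coefficient rho for the colour channels, and 10^-4 of that value for the remaining bands) is an illustrative assumption consistent with the description above, not the authors' exact settings, and the function name is hypothetical.

```python
import numpy as np

def vtva_fuse(bands, rho=0.7):
    """Cholesky-based colour fusion: map K co-registered grayscale bands to an
    RGB image whose channel correlations follow a user-chosen target rho."""
    K = len(bands)
    H, W = bands[0].shape
    # Step 1: population vectors and their covariance matrix Cx
    X = np.stack([np.asarray(b).reshape(-1) for b in bands]).astype(np.float64)  # K x N
    Cx = np.cov(X)

    # Step 2: target covariance Cy = Sigma * Ry * Sigma^T with equal energy in
    # the first three components and negligible energy in the remaining ones
    sigma1 = np.sqrt(np.trace(Cx) / 3.0)
    sigmas = np.full(K, 1e-4 * sigma1)
    sigmas[:3] = sigma1
    Ry = np.eye(K)
    Ry[:3, :3] = rho
    np.fill_diagonal(Ry, 1.0)
    Sigma = np.diag(sigmas)
    Cy = Sigma @ Ry @ Sigma.T

    # Step 3: upper-triangular Cholesky factors (numpy returns lower L with
    # C = L L^T, so the paper's Q with C = Q^T Q is Q = L^T)
    Qx = np.linalg.cholesky(Cx).T
    Qy = np.linalg.cholesky(Cy).T

    # Steps 4-5: transformation matrix A = Qx^-1 Qy (solved without an explicit inverse)
    A = np.linalg.solve(Qx, Qy)

    # Step 6: transform the mean-removed population vectors, Y = A^T X
    Y = A.T @ (X - X.mean(axis=1, keepdims=True))

    # Step 7: keep the first three components and scale each to [0, 255]
    rgb = np.empty((H, W, 3), dtype=np.uint8)
    for c in range(3):
        ch = Y[c].reshape(H, W)
        lo, hi = ch.min(), ch.max()
        rgb[..., c] = (255.0 * (ch - lo) / (hi - lo + 1e-12)).astype(np.uint8)
    return rgb

# Toy usage with four synthetic bands
bands = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(4)]
print(vtva_fuse(bands).shape)  # (64, 64, 3)
```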
4. APPLICATIONS

Remote sensing techniques have proven to be a powerful tool for monitoring the earth's surface and atmosphere. The applications of image fusion can be divided into military and non-military ones. Military applications include detection, location tracking, identification of military entities, ocean surveillance, etc. Image fusion has also been used extensively in non-military applications that include the interpretation and classification of aerial and satellite images.

5. EXPERIMENTAL PROCEDURES AND RESULTS

The multispectral data set used in this work consists of seven multispectral band images acquired from the Landsat Thematic Mapper (TM) sensor. The size of each image is 850 x 1100 pixels. The average orbital height for these images is 700 km and the spatial resolution is 30 meters, except for band 6, which is 90 meters. The spectral ranges of the bands used are listed in table 1.

Table 1. Spectral range of data

  BAND               SPECTRAL RANGE (µm)
  Near infrared      0.76-0.85
  Mid infrared 1     1.55-1.75
  Thermal infrared   10.4-12.5
  Mid infrared 2     2.08-2.35

The experiments conducted in this work aim to compare the performance of PCA (grayscale), PCA (colour) and VTVA. Fusion was performed using four grayscale images. The parameters examined for PCA and VTVA were entropy, standard deviation, number of levels and mean.

Figure 3. Detailed image: (a) band 4, near infrared; (b) band 5, mid infrared 1; (c) band 6, thermal infrared; (d) band 7, mid infrared 2
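The quality parameters examined here and reported later in Table 2 (entropy, standard deviation, mean and number of levels) can be computed as in the following sketch. The 256-bin histogram entropy and the counting of distinct (R, G, B) triplets for the colour image are assumptions made for illustration; the authors' exact settings are not stated.

```python
import numpy as np

def image_metrics(img):
    """Entropy (bits), standard deviation, mean and number of distinct
    intensity levels of a single-channel 8-bit image."""
    x = np.asarray(img, dtype=np.float64).ravel()
    counts, _ = np.histogram(x, bins=256, range=(0, 256))
    p = counts[counts > 0] / counts.sum()
    return {
        "entropy": float(-np.sum(p * np.log2(p))),
        "std": float(x.std()),
        "mean": float(x.mean()),
        "levels": int(np.unique(np.asarray(img).ravel()).size),
    }

def colour_levels(rgb):
    """Number of distinct (R, G, B) triplets in a colour image."""
    arr = np.asarray(rgb)
    flat = arr.reshape(-1, arr.shape[-1])
    return int(np.unique(flat, axis=0).shape[0])

# Toy usage on a synthetic 8-bit band and a synthetic colour image
band = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
rgb = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
print(image_metrics(band), colour_levels(rgb))
```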
The input grayscale source images are shown in figure 3. The image in figure 4 is derived from the PCA analysis. This fused image is clearer and contains more information than the input source images.

Figure 4. Fused grayscale image using PCA

Figure 5 shows the fused image obtained when PCA is used to produce a colour image.

Figure 5. Fused colour image using PCA

In this figure it can be seen that the fused image has some colour components, but the amount of information is too low. This happens because the PCA technique totally decorrelates the three colour components, whereas a good natural colour image has a high degree of correlation between them. Finally, the image in figure 6 shows the fused colour image obtained using the VTVA method. This method converts the input grayscale images into a colour image with more information and contrast than PCA.
Figure 6. Fused colour image using VTVA

This colour image has a high degree of correlation between its three colour components, and thus the image has more information and contrast. Table 2 compares the entropy, standard deviation, mean and number of levels of the input images and the final fused images.

Table 2. Parameter Comparison

  IMAGES            ENTROPY   STANDARD DEVIATION   MEAN       NO. OF LEVELS
  Image 1           6.9310    65.7602              119.1191   256
  Image 2           7.0911    67.1405              94.8459    256
  Image 3           7.2474    70.4155              142.1532   256
  Image 4           7.7525    76.0397              119.8504   256
  PCA (grayscale)   7.5036    62.4518              119.1782   253
  PCA (coloured)    17.5470   46.4458              84.4458    145
  VTVA              17.9930   74.7853              139.54     573169

The comparison shows that the fused image obtained using VTVA has more information, more contrast and a larger number of levels than the one obtained with PCA. It also shows that the fused colour image obtained using VTVA has more levels than the fused grayscale image obtained using PCA. Thus, to fuse the grayscale source images and transform them into colour, VTVA is the better method.

6. CONCLUSIONS

In this paper we fused four grayscale images from different spectral bands using the principal component analysis method, and we conclude that PCA cannot be used to obtain a good final colour image. When the same images are fused using VTVA, the final fused image is found to be more pleasant to the human eye; this fused image also has more information and contrast than the one produced by the other method. The work also shows that the fused colour image has a larger number of levels, which means that more combinations of colours are present, whereas a grayscale image has at most 256 gray levels.
REFERENCES

[1] D. Besiris, V. Tsagaris, N. Fragoulis and C. Theoharatos, An FPGA-based hardware implementation of configurable pixel-level colour image fusion, IEEE Trans. Geosci. Remote Sens., vol. 50, no. 2, Feb. 2012.
[2] P. J. Burt, The pyramid as a structure for efficient computation, in: A. Rosenfeld (Ed.), Multiresolution Image Processing and Analysis, Springer-Verlag, Berlin, 1984, pp. 6-35.
[3] P. J. Burt and E. H. Adelson, Merging images through pattern decomposition, Proc. SPIE 575 (1985) 173-181.
[4] E. H. Adelson, Depth-of-Focus Imaging Process Method, United States Patent 4,661,986 (1987).
[5] R. D. Lillquist, Composite visible/thermal-infrared imaging apparatus, United States Patent 4,751,571 (1988).
[6] A. Toet, Image fusion by a ratio of low-pass pyramid, Pattern Recognition Letters 9 (4) (1989) 245-253.
[7] A. Toet, Hierarchical image fusion, Machine Vision and Applications 3 (1990) 1-11.
[8] H. Li, S. Manjunath and S. Mitra, Multisensor image fusion using the wavelet transform, Graphical Models and Image Processing 57 (3) (1995) 235-245.
[9] I. Koren, A. Laine and F. Taylor, Image fusion using steerable dyadic wavelet transform, in: Proceedings of the International Conference on Image Processing, Washington, DC, USA, 1995, pp. 232-235.
[10] S. Li, J. T. Kwok and Y. Wang, Using the discrete wavelet frame transform to merge Landsat TM and SPOT panchromatic images, Information Fusion 3 (2002) 17-23.
[11] V. S. Petrovic and C. S. Xydeas, Gradient-based multiresolution image fusion, IEEE Transactions on Image Processing 13 (2) (2004) 228-237.
[12] V. Tsagaris, V. Anastassopoulos and G. Lampropoulos, Fusion of hyperspectral data using segmented PCT for enhanced color representation, IEEE Trans. Geosci. Remote Sens., vol. 43, no. 10, pp. 2365-2375, Oct. 2005.
[13] V. Tsagaris and V. Anastassopoulos, Multispectral image fusion for improved RGB representation based on perceptual attributes, Int. J. Remote Sens., vol. 26, no. 15, pp. 3241-3254, Aug. 2005.
[14] Q. Du, N. Raksuntorn, S. Cai and R. J. Moorhead, Color display for hyperspectral imagery, IEEE Trans. Geosci. Remote Sens., vol. 46, no. 6, pp. 1858-1866, Jun. 2008.

Authors

Preema Mole graduated from Sree Narayana Gurukulam College of Engineering of Mahatma Gandhi University in Electronics & Communication Engineering in 2013. She is currently pursuing her M.Tech degree in Wireless Technology at Toc H Institute of Science & Technology, Arakunnam. Her research interests include signal processing and image processing.

M. Mathurakani graduated from Alagappa Chettiar College of Engineering and Technology of Madurai University and completed his masters at PSG College of Technology of Madras University. He has worked as a scientist in the Defence Research and Development Organisation (DRDO) in the area of signal processing and embedded system design and implementation. He was honoured with the DRDO Scientist of the Year award in 2003. Currently he is a professor at Toc H Institute of Science and Technology, Arakunnam. His research interests include signal processing algorithms, embedded system modelling and synthesis, reusable software architectures, and MIMO- and OFDM-based communication systems.