Computer Engineering and Intelligent Systems                                                                   www.iiste.org
ISSN 2222-1719 (Paper) ISSN 2222-2863 (Online)
Vol 3, No.7, 2012




            Quality Assessment of Resultant Images after Processing
                                                 Anitha S, Dr Nagabhushana B S
             Research Scholar, Dr. MGR Educational and Research Institute University, Chennai 600 085, India
                                                   E-mail: anithasd@yahoo.com


                                                        Lead Consultant

                                     KPIT Cummins Infosystems Limited Bangalore, India
                                                   E-mail: bsnaga@gmail.com
Abstract
Image quality is a characteristic of an image that measures the perceived image degradation, typically compared to
an ideal or perfect reference image. Imaging systems may introduce distortion or artifacts into the signal, so quality
assessment is an important problem. Processing of images involves complicated steps, including image restoration,
enhancement, compression and many more, and the aim of any processing is to obtain a result that is as close as
possible to the original. Whether a reconstructed image has lost fidelity after compression is determined by assessing
the quality of the image. Traditional perceptual image quality assessment approaches are based on measuring the
errors (signal differences) between the distorted and the reference images, and attempt to quantify the errors in a way
that simulates human visual error sensitivity. A discussion is presented here in order to assess the quality of the
compressed image and to find the relevant information in the processed image.


Keywords: Reference methods, Quality Assessment, Lateral chromatic aberration, Root Mean Squared Error, Peak
Signal to Noise Ratio, Signal to Noise Ratio, Human Visual System.


1. Introduction
Interest in image processing methods stems from two principal application areas: improvement of pictorial
information for human interpretation, and processing of scene data for autonomous machine perception. One of the
first applications of image processing techniques in the first category was in improving digitized newspaper pictures
sent by submarine cable between London and New York. An image may be defined as a two-dimensional function
f(x, y), where x and y are spatial coordinates and the amplitude of f at any pair of coordinates (x, y) is called the
intensity or gray level of the image at that point. When x, y and the intensity values of f are all finite, discrete
quantities, the image is called a digital image; that is, a digital image is an image f(x, y) that has been discretized
both in spatial coordinates and in brightness. A digital image can be considered a matrix whose row and column
indices identify a point in the image and whose corresponding matrix element value identifies the gray level at that
point. The elements of such a digital array are called image elements, picture elements, pixels or pels.
Digital image processing encompasses a broad range of hardware, software, and theoretical underpinnings, and there
are fundamental steps required to perform an image processing task. The overall objective is to produce a result from
a problem domain by means of image processing. Take a piece of mail as an example: the objective is to read the
address on each piece, so the desired output is a stream of alphanumeric characters. The first step in the process is
image acquisition, that is, acquiring a digital image. Acquiring the image requires an imaging sensor and the
capability to digitize the signal produced by the sensor; the imaging sensor could be a monochrome or color TV
camera that produces a two-dimensional image at a time. After the digital image has been obtained, the next step is
preprocessing that image. The key function of preprocessing is to improve the image in ways that increase the
chances of success for the other processes, like enhancing the contrast, removing

noise and isolating regions whose texture indicates a likelihood of alphanumeric information or that may be images
with graphics.
The next stage deals with segmentation. Segmentation partitions an input image into its constituent parts or objects.
In general, autonomous segmentation is one of the most difficult tasks in digital image processing. On the other
hand, a rugged segmentation procedure brings the process a long way toward successful solution of an imaging
problem. The output of the segmentation stage usually is raw pixel data, constituting either the boundary of a region
or all the points in the region itself. In either case, converting the data to a form suitable for computer processing is
necessary. The first decision that must be made is whether the data should be represented as a boundary or as a
complete region. Boundary representation is appropriate when the focus is on external shape characteristics, such as
corners and inflections. Regional representation is appropriate when the focus is on internal properties, such as
texture or skeletal shape. Choosing a representation is only part of the solution for transforming raw data into a form
suitable for subsequent computer processing. A method must also be specified for describing the data so that features
of interest are highlighted. Description, also called feature selection, deals with extracting features that result in some
quantitative information of interest.
The last stage involves recognition and interpretation. Recognition is the process that assigns a label to an object
based on the information provided by its descriptors. Interpretation involves assigning meaning to an ensemble of
recognized objects. Finally the result is obtained. If the problem domain requires image compression then this
addresses the problem of reducing the amount of data required to represent a digital image. It is a process intended to
yield a compact representation of an image, thereby reducing the image storage/transmission requirements.


2. Assessment of quality of image
Traditional perceptual image quality assessment approaches are based on measuring the errors (signal differences)
between the distorted and the reference images, and attempt to quantify the errors in a way that simulates human
visual error sensitivity features [1]. These methods usually involve:
1. A channel decomposition process that transforms the image signals into different spatial frequency as well as
orientation selective subbands.
2. An error normalization process that weights the error signal in each subband by incorporating the variation of
visual sensitivity in different subbands (often related to a contrast sensitivity function), and the variation of visual
error sensitivity caused by intra- or inter-channel neighboring transform coefficients (often related to certain visual
masking effects).
3. An error pooling process that combines the error signals in different subbands into a single quality/distortion
value.
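As a concrete illustration of the last of these three stages, error pooling is commonly a Minkowski summation of the weighted subband error signals. The sketch below uses made-up subband error maps and weights; `minkowski_pool` is an illustrative helper, not a function from any particular toolbox.

```python
import numpy as np

def minkowski_pool(subband_errors, weights, beta=2.0):
    """Combine per-subband error maps into one distortion value.

    subband_errors: 2-D arrays of normalized error signals, one per
                    spatial-frequency/orientation subband.
    weights: per-subband visual sensitivity weights (stand-ins for a
             contrast sensitivity function).
    beta: Minkowski pooling exponent (beta=2 gives an L2-style pool).
    """
    total = 0.0
    for err, w in zip(subband_errors, weights):
        total += np.sum(np.abs(w * err) ** beta)
    return total ** (1.0 / beta)

# Two illustrative subbands of a reference/distorted difference signal.
rng = np.random.default_rng(0)
errors = [rng.normal(0, 1.0, (8, 8)), rng.normal(0, 0.5, (8, 8))]
distortion = minkowski_pool(errors, weights=[1.0, 0.5])
```

As beta grows, the pool is increasingly dominated by the largest subband error; beta near 2 is a common choice in error-sensitivity models.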
An image source communicates to a receiver through a channel that limits the amount of information that could flow
through it, thereby introducing distortions [2]. The output of the image source is the reference image, the output of
the channel is the test image, and the goal is to relate the visual quality of the test image to the amount of information
shared between the test and the reference signals, or more precisely, the mutual information between them [3].
Although mutual information is a statistical measure of information fidelity, and may only be loosely related with
what humans regard as image information, it places fundamental limits on the amount of cognitive information that
could be extracted from an image. For example, in cases where the channel is distorting images severely,
corresponding to low mutual information between the test and the reference, the ability of human viewers to obtain
semantic information by discriminating and identifying objects in images is also hampered. Thus, information
fidelity methods exploit the relationship between statistical image information and visual quality. While these
approaches can conveniently make use of many known psychophysical features of the human visual systems, they
are based on some strong assumptions, which are difficult to validate [4].


3. Information Theoretic Approaches to Image Quality Assessment
Information-theoretic approaches view the quality assessment problem as an information-fidelity problem rather
than a signal-fidelity problem. An image source communicates to a receiver through a channel that limits the amount
of information that can flow through it, thereby introducing distortions; the output of the source is the reference
image and the output of the channel is the test image. As discussed above, the goal is to relate the visual quality of
the test image to the amount of information shared between the test and the reference signals, that is, the mutual
information between them, since a severely distorting channel also hampers a viewer's ability to extract semantic
information from the image. Information fidelity methods that exploit this relationship between statistical image
information and visual quality have been discussed in [5], [6], [7].
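A crude numerical illustration of this mutual-information view is a histogram-based estimate; this is a sketch only, since practical information-fidelity measures such as those in [6], [7] use natural scene statistics models rather than raw intensity histograms.

```python
import numpy as np

def mutual_information(ref, test, bins=32):
    """Histogram estimate (in bits) of the mutual information between
    the pixel intensities of a reference and a test image."""
    joint, _, _ = np.histogram2d(ref.ravel(), test.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of the reference
    py = pxy.sum(axis=0, keepdims=True)   # marginal of the test image
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(1)
ref = rng.integers(0, 256, (64, 64)).astype(float)
mild = ref + rng.normal(0, 5, ref.shape)               # mildly distorting channel
severe = rng.integers(0, 256, (64, 64)).astype(float)  # severely distorting channel
```

As the text predicts, the severely distorting channel leaves far less shared information: `mutual_information(ref, mild)` is much larger than `mutual_information(ref, severe)`.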


4. Structural Similarity Based Image and Video Quality Assessment
Different from traditional error-sensitivity based approach, structural similarity based image quality assessment is
based on the following philosophy: The main function of the human visual system is to extract structural information
from the viewing field, and the human visual system is highly adapted for this purpose. Therefore, a measurement of
structural information loss can provide a good approximation to perceived image distortion. Here, we regard the
structural information in an image as those attributes that reflect the structure of objects in the scene, independent of
the average luminance and contrast. A universal image quality index that separates the comparison of luminance,
contrast and structure was introduced. This approach was generalized and improved, leading to the Structural
SIMilarity (SSIM) index, which has shown clear advantages over traditional mean squared error (MSE) and peak
signal to noise ratio (PSNR) measures when tested on a database of JPEG and JPEG2000 compressed images. The
SSIM index has also been applied to video quality assessment and extended to a multi-scale approach [8],[9],[10].
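The SSIM philosophy can be sketched with a single-window simplification; the published index computes these luminance, contrast and structure statistics in small local windows and averages the resulting map, and the stabilizing constants below follow the commonly used K1 = 0.01, K2 = 0.03.

```python
import numpy as np

def global_ssim(x, y, data_range=255.0):
    """Single-window SSIM between two grayscale images.

    Compares mean luminance, contrast (variance) and structure
    (covariance) over the whole image; the full SSIM index does this
    per local window and averages.
    """
    c1 = (0.01 * data_range) ** 2   # stabilizing constants
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images score 1, and any change in luminance, contrast or structure pulls the score below 1; unlike MSE, the decomposition keeps the three comparisons separate.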

In digital or film-based photography, an image is formed on the image plane of the camera and then measured
electronically or chemically to produce the photograph. The image formation process may be described by the ideal
pinhole camera model, where only light rays from the depicted scene that pass through the camera aperture can fall
on the image plane. In reality, this ideal model is only an approximation of the image formation process, and image
quality may be described in terms of how well the camera approximates the pinhole model.
An ideal model of how a camera measures light is that the resulting photograph should represent the amount of light
that falls on each point of the image plane over a fixed exposure time. This model is only an approximate description
of the light measurement process of a camera, and image quality is also related to the deviation from this model.
In some cases, the image for which quality should be determined is not primarily the result of a photographic process
in a camera, but the result of storing or transmitting the image. A typical example is a digital image that has been
compressed, stored or transmitted, and then decompressed again. Unless a lossless compression method has been
used, the resulting image is normally not identical to the original image and the deviation from the (ideal) original
image is then a measure of quality. By considering a large set of images, and determining a quality measure for each
of them, statistical methods can be used to determine an overall quality measure of the compression method.
In a typical digital camera, the resulting image quality depends on all three factors mentioned above: how much the
image formation process of the camera deviates from the pinhole model, the quality of the image measurement
process, and the coding artifacts that are introduced in the image produced by the camera, typically by the JPEG
coding method.
By defining image quality in terms of a deviation from the ideal situation, quality measures become technical in the
sense that they can be objectively determined in terms of deviations from the ideal models. Image quality can,
however, also be related to the subjective perception of an image, e.g., a human looking at a photograph. Examples
are how colors are represented in a black-and-white image, as well as in color images, or that the reduction of image
quality from noise depends on how the noise correlates with the information the viewer seeks in the image rather
than its overall strength. Another example of this type of quality measure is Johnson's criteria for determining the
necessary quality of an image in order to detect targets in night vision systems.
Subjective measures of quality also relate to the fact that, although the camera's deviation from the ideal models of


image formation and measurement in general is undesirable and corresponds to reduced objective image quality,
these deviations can also be used for artistic effects in image production, corresponding to high subjective quality.


4.1 Image quality assessment categories
There are several image quality assessment techniques and metrics that can be measured objectively and
automatically evaluated by a computer program. These methods can be classified as full-reference (FR) methods and
no-reference (NR) methods. In FR image quality assessment methods, the quality of a test image is evaluated by
comparing it with a reference image that is assumed to have perfect quality. NR metrics try to assess the quality of an
image without any reference to the original one. For example, comparing an original image to the output of JPEG
compression of that image is full-reference, since it uses the original as the reference.


5. Image quality factors
5.1 Sharpness determines the amount of detail an image can convey. System sharpness is affected by the lens (design
and manufacturing quality, focal length, aperture, and distance from the image center) and sensor (pixel count and
anti-aliasing filter). In the field, sharpness is affected by camera shake (a good tripod can be helpful), focus accuracy,
and atmospheric disturbances (thermal effects and aerosols). Lost sharpness can be restored by sharpening, but
sharpening has limits. Oversharpening can degrade image quality by causing "halos" to appear near contrast
boundaries. Images from many compact digital cameras are oversharpened.
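The halo effect of oversharpening can be demonstrated with a simple unsharp mask; this is a sketch using a 3x3 box blur as the smoothing kernel.

```python
import numpy as np

def unsharp_mask(img, amount=1.0):
    """Sharpen by adding back the difference between the image and a
    blurred copy (here a simple 3x3 box blur)."""
    h, w = img.shape
    pad = np.pad(img, 1, mode='edge')
    blurred = sum(pad[i:i + h, j:j + w]
                  for i in range(3) for j in range(3)) / 9.0
    return np.clip(img + amount * (img - blurred), 0, 255)

# A step edge from gray level 50 to 200.
img = np.full((8, 8), 50.0)
img[:, 4:] = 200.0
sharp = unsharp_mask(img, amount=2.0)
```

With a large `amount`, the result overshoots on both sides of the edge: `sharp` dips below 50 on the dark side and exceeds 200 on the bright side, which is exactly the "halo" artifact near contrast boundaries described above.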


5.2 Noise is a random variation of image density, visible as grain in film and as pixel-level variations in digital
images. It arises from the effects of basic physics (the photon nature of light and the thermal energy of heat) inside
image sensors. Typical noise reduction (NR) software reduces the visibility of noise by smoothing the image,
excluding areas near contrast boundaries. This technique works well, but it can obscure fine, low-contrast detail.
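A minimal version of this edge-avoiding smoothing might look like the following sketch; real NR software uses far more sophisticated edge detection and filtering.

```python
import numpy as np

def smooth_flat_regions(img, grad_thresh=20.0):
    """Box-blur the image only where the local gradient is small,
    leaving pixels near contrast boundaries untouched."""
    h, w = img.shape
    pad = np.pad(img, 1, mode='edge')
    blurred = sum(pad[i:i + h, j:j + w]
                  for i in range(3) for j in range(3)) / 9.0
    gy, gx = np.gradient(img)
    flat = np.hypot(gx, gy) < grad_thresh   # True away from edges
    return np.where(flat, blurred, img)
```

Smoothing the flat regions lowers the noise variance there while passing edge pixels through unchanged; as the text notes, fine low-contrast detail also falls below the threshold and is smoothed away with the noise.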


5.3 Dynamic range (or exposure range) is the range of light levels a camera can capture, usually measured in f-stops,
Exposure Value (EV), or zones (all factors of two in exposure). It is closely related to noise: high noise implies low
dynamic range.
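The link between these quantities can be made concrete: expressed in stops (factors of two), dynamic range is the base-2 logarithm of the ratio between the largest capturable signal and the noise floor. The sensor numbers below are illustrative assumptions, not measurements of any real camera.

```python
import numpy as np

# Dynamic range in f-stops: each stop is a factor of two in exposure.
full_well = 40000.0   # electrons at sensor saturation (assumed)
read_noise = 10.0     # electrons of noise floor (assumed)
stops = np.log2(full_well / read_noise)   # about 12 stops
```

Raising the noise floor (more noise) shrinks the usable ratio, which is why high noise implies low dynamic range.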


5.4 Tone reproduction is the relationship between scene luminance and the reproduced image brightness.


5.5 Contrast, also known as gamma, is the slope of the tone reproduction curve in a log-log space. High contrast
usually involves loss of dynamic range — loss of detail, or clipping, in highlights or shadows.
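This definition can be checked numerically: for a display following a pure power law B = L^gamma, the slope of the tone reproduction curve in log-log space recovers gamma. The tone-curve data below are synthetic, for illustration only.

```python
import numpy as np

# Relative scene luminance and reproduced brightness for a display
# following a power law B = L**gamma (synthetic data).
luminance = np.array([0.01, 0.1, 0.25, 0.5, 1.0])
gamma_true = 2.2
brightness = luminance ** gamma_true

# Contrast (gamma) is the slope of the tone curve in log-log space.
slope, intercept = np.polyfit(np.log10(luminance), np.log10(brightness), 1)
```

For real measured tone curves the log-log plot is only locally linear, and gamma is usually quoted for the mid-tone region.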


5.6 Color accuracy is an important but ambiguous image quality factor. Many viewers prefer enhanced color
saturation; the most accurate color isn't necessarily the most pleasing. Nevertheless it is important to measure a
camera's color response: its color shifts, saturation, and the effectiveness of its white balance algorithms.


5.7 Distortion is an aberration that causes straight lines to curve near the edges of images. It can be troublesome for
architectural photography and metrology (photographic applications involving measurement). Distortion is worst in
wide angle, telephoto, and zoom lenses. It is often worse for close-up images than for images at a distance. It can be
easily corrected in software.


5.8 Vignetting, or light falloff, darkens images near the corners. It can be significant with wide angle lenses.



5.9 Exposure accuracy can be an issue with fully automatic cameras and with video cameras where there is little or
no opportunity for post-exposure tonal adjustment. Some even have exposure memory: exposure may change after
very bright or dark objects appear in a scene.


5.10 Lateral chromatic aberration (LCA), also called "color fringing", including purple fringing, is a lens aberration
that causes colors to focus at different distances from the image center. It is most visible near corners of images. LCA
is worst with asymmetrical lenses, including ultrawides, true telephotos and zooms. It is strongly affected by
demosaicing.


5.11 Lens flare, including "veiling glare", is stray light in lenses and optical systems caused by reflections between
lens elements and the inside barrel of the lens. It can cause image fogging (loss of shadow detail and color) as well as
"ghost" images that can occur in the presence of bright light sources in or near the field of view.


5.12 Color moiré is artificial color banding that can appear in images with repetitive patterns of high spatial
frequencies, like fabrics or picket fences. It is affected by lens sharpness, the anti-aliasing (lowpass) filter (which
softens the image), and demosaicing software. It tends to be worst with the sharpest lenses.


5.13 Artifacts – software (especially operations performed during RAW conversion) can cause significant visual
artifacts, including data compression and transmission losses (e.g. low-quality JPEG), oversharpening "halos", and
loss of fine, low-contrast detail.


6. From analog to digital video
Since the time when the world's first video sequence was recorded, many video processing systems have been
designed. In the age of analog video systems, it was possible to evaluate the quality of a video processing system by
calculating the system's frequency response using a traditional test signal (for example, a collection of color bars
and circles).
Nowadays, digital video systems are replacing analog ones, and evaluation methods have changed. Performance of a
digital video processing system can vary significantly and depends on dynamic characteristics of input video signal
(e.g. amount of motion or spatial details). That's why digital video quality should be evaluated on diverse video
sequences, often from the user's database.


6.1 Objective video quality
Objective video evaluation techniques are mathematical models that approximate results of subjective quality
assessment, but are based on criteria and metrics that can be measured objectively and automatically evaluated by a
computer program. Objective methods are classified based on the availability of the original video signal, which is
considered to be of high quality (generally not compressed). Therefore, they can be classified as FR, Reduced
Reference Methods (RR) and NR. FR metrics compute the quality difference by comparing every pixel in each
image of the distorted video to its corresponding pixel in the original video. RR metrics extract some features of both
videos and compare them to give a quality score. They are used when all the original video is not available, e.g. in a
transmission with a limited bandwidth. NR metrics try to assess the quality of a distorted video without any reference
to the original video. These metrics are usually used when the video coding method is known.
The most traditional ways of evaluating the quality of a digital video processing system (e.g. a video codec like DivX
or Xvid) are calculation of the signal-to-noise ratio (SNR) and peak signal-to-noise ratio (PSNR) between the
original video signal and the signal passed through the system. PSNR is the most widely used objective video quality
metric. However, PSNR values do not perfectly correlate with perceived visual quality due to the non-linear behavior
of the human visual system.
Recently a number of more complicated and precise metrics were developed, for example UQI, VQM, PEVQ, SSIM,
VQuad-HD and CZD. Based on a benchmark by the Video Quality Experts Group (VQEG) in the course of the
Multimedia Test Phase 2007-2008, some metrics were standardized as ITU-T Rec. J.246 (RR) and J.247 (FR) in
2008 and J.341 (FR HD) in 2011.
The performance of an objective video quality metric is evaluated by computing the correlation between the
objective scores and the subjective test results. The latter is called mean opinion score (MOS). The most frequently
used correlation coefficients are: linear correlation coefficient, Spearman's rank correlation coefficient, kurtosis,
kappa coefficient and outliers ratio.
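Computing the first two of these coefficients between objective scores and MOS values is straightforward; the scores below are hypothetical, and the simple rank transform used here ignores ties, which the full Spearman definition handles.

```python
import numpy as np

def pearson(a, b):
    """Linear (Pearson) correlation coefficient."""
    return float(np.corrcoef(np.asarray(a, float), np.asarray(b, float))[0, 1])

def spearman(a, b):
    """Spearman's rank correlation: the Pearson correlation of the
    ranks (ties not handled in this simple rank transform)."""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return pearson(rank(np.asarray(a)), rank(np.asarray(b)))

# Hypothetical objective scores (e.g. PSNR in dB) and MOS values
# for five test videos -- illustrative numbers only.
objective = [32.1, 28.4, 35.0, 30.2, 26.8]
mos = [3.8, 3.1, 4.5, 3.5, 2.6]
```

In this example the two orderings agree exactly, so `spearman(objective, mos)` is 1, while `pearson(objective, mos)` additionally measures how linear the relationship is.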
When estimating quality of a video codec, all the mentioned objective methods may require repeating post-encoding
tests in order to determine the encoding parameters that satisfy a required level of visual quality, making them time
consuming, complex and impractical for implementation in real commercial applications. For this reason, much
research has been focused on developing novel objective evaluation methods which enable prediction of the
perceived quality level of the encoded video before the actual encoding is performed [1].


6.2 Subjective video quality
The main goal of many objective video quality metrics is to automatically estimate the average user (viewer) opinion
on the quality of video processed by the system. Sometimes, however, measurement of subjective video quality can
also be challenging because it may require a trained expert to judge it. Many subjective video quality measurements
are described in ITU-R recommendation BT.500. Their main idea is the same as for the Mean Opinion Score for
audio: video sequences are shown to a group of viewers, and their opinions are recorded and averaged to evaluate
the quality of each video sequence. However, the details of testing may vary greatly.


6.3 Measurement of quality
PSNR is an engineering term for the ratio between the maximum possible power of a signal and the power of
corrupting noise that affects the fidelity of its representation. Because many signals have a very wide dynamic range,
PSNR is usually expressed on the logarithmic decibel scale. PSNR is most commonly used as a measure of the
quality of reconstruction of lossy compression codecs (e.g., for image compression). The signal in this case is the
original data, and the noise is the error introduced by compression. When comparing compression codecs, PSNR is
used as an approximation to human perception of reconstruction quality; therefore, in some cases one reconstruction
may appear to be closer to the original than another even though it has a lower PSNR (a higher PSNR would
normally indicate that the reconstruction is of higher quality). One has to be extremely careful with the range of
validity of this metric; it is only conclusively valid when used to compare results from the same codec (or codec
type) and the same content. PSNR is most easily defined via the MSE.
Given a noise-free m×n image I and its noisy approximation K, MSE = (1/(m·n)) Σ_i Σ_j [I(i, j) − K(i, j)]², and
PSNR = 10 · log10(MAX_I² / MSE), where MAX_I is the maximum possible pixel value of the image. When the
pixels are represented using 8 bits per sample, MAX_I is 255. More generally, when samples are represented using
linear PCM with B bits per sample, MAX_I is 2^B − 1. For color images with three RGB values per pixel, the
definition of PSNR is the same except that the MSE is the sum over all squared value differences divided by image
size and by three. Alternately, for color images the image is converted to a different color space and PSNR is
reported against each channel of that color space, e.g., YCbCr or HSL [11],[12],[13].
Typical values for the PSNR in lossy image and video compression are between 30 and 50 dB, where higher is
better [14],[15]. Acceptable values for wireless transmission quality loss are considered to be about 20 dB to
25 dB [16],[17]. When the two images are identical, the MSE is zero, and for this value the PSNR is undefined.
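A direct implementation of the PSNR just defined (MSE-based, with MAX_I = 255 for 8-bit samples) might look like the following sketch:

```python
import numpy as np

def psnr(original, reconstructed, max_value=255.0):
    """Peak signal-to-noise ratio in dB between two images.

    max_value is MAX_I, the maximum possible pixel value (255 for
    8-bit samples, 2**B - 1 for B-bit linear PCM).  Returns inf for
    identical images, where the MSE is zero and PSNR is undefined.
    """
    a = np.asarray(original, dtype=np.float64)
    b = np.asarray(reconstructed, dtype=np.float64)
    mse = np.mean((a - b) ** 2)
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10(max_value ** 2 / mse)

ref = np.zeros((4, 4))
off_by_one = ref + 1.0   # every pixel wrong by one gray level: MSE = 1
```

With MSE = 1 and MAX_I = 255 this gives 10 · log10(255²) ≈ 48.13 dB, comfortably inside the 30-50 dB range quoted above for lossy compression.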
Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors
(differences) between a distorted image and a reference image using a variety of known properties of the human
visual system.
PSNR is computed as the peak signal-to-noise ratio, in decibels, between two images. This ratio is often used as a
quality measurement between the original and a compressed image: the higher the PSNR, the better the quality of
the compressed or reconstructed image. MSE and PSNR are the two error metrics used to compare image
compression quality. The MSE represents the cumulative squared error between the compressed and the original
image, whereas PSNR represents a measure of the peak error. The lower the value of MSE, the lower the error.


6.4 Recommendation for Computing PSNR for Color Images
Different approaches exist for computing the PSNR of a color image. Because the human eye is most sensitive to
luma information, compute the PSNR for color images by converting the image to a color space that separates the
intensity (luma) channel, such as YCbCr. The Y (luma) in YCbCr represents a weighted average of R, G, and B;
G is given the most weight because the human eye perceives it most easily. With this consideration, compute the
PSNR only on the luma channel.
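This recommendation can be sketched as follows; the BT.601 luma weights used here are one common convention (the text above does not specify which weighting it assumes).

```python
import numpy as np

def luma(rgb):
    """Luma as a weighted average of R, G and B, with G weighted most
    heavily (BT.601 weights; other standards use slightly different
    coefficients)."""
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def psnr_luma(rgb_ref, rgb_test, max_value=255.0):
    """PSNR computed only on the luma channel of two RGB images."""
    mse = np.mean((luma(np.asarray(rgb_ref, float)) -
                   luma(np.asarray(rgb_test, float))) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(max_value ** 2 / mse)
```

Because the weights sum to one, shifting all three channels by one gray level shifts the luma by one, reproducing the 48.13 dB figure from the grayscale example.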



PSNR is generally used to analyze the quality of image, sound and video files in dB (decibels). The PSNR of two
images, one original and one altered, describes how close the two images are. Equation 1 shows the standard
formula, defined via the MSE:

    PSNR = 10 · log10( MAX² / MSE ),   MSE = (1 / (x·y)) · Σ_i Σ_j [I(i, j) − K(i, j)]²         Equation 1.

where
               x: width of image.
               y: height.
             x·y: number of pixels (or quantities).
and I is the original image, K the altered image, and MAX the maximum possible pixel value.


7. Conclusion
Various processing techniques are involved in taking an image from the problem domain to a result. The processed
image is reconstructed to check its quality, and there are various reference methods, subjective or objective, to find
the quality of the image. FR or NR methods can be adopted for assessing the quality of a processed image, which is
ultimately perceived by the Human Visual System. Various assessment categories are available for assessing the
image; a practical way to assess the quality is by finding the RMSE and PSNR, from which the quality of the
processed image can be determined. Various categories and quality assessments with respect to a processed image,
in this case a compressed image, have therefore been considered and discussed.


References
[1] Z. Wang and A.C. Bovik, The Handbook of Image and Video Processing, 2nd ed. New York: Academic Press,
June 2005, ch. Structural approaches to image quality assessment, pp. 961–974.
[2] Z. Wang, A.C. Bovik, H.R. Sheikh and E.P. Simoncelli, "Image quality assessment: from error visibility to
structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, April 2004.
[3] Z. Wang and A.C. Bovik, The Handbook of Video Databases: Design and Applications. Boca Raton, Florida:
CRC Press, September 2003, ch. Objective video quality assessment, pp. 1041–1078.
[4] Z. Wang, A.C. Bovik and L. Lu, "Why is image quality assessment so difficult?," IEEE International Conference
on Acoustics, Speech, and Signal Processing, 2002.
[5] H.R. Sheikh, M.F. Sabir and A.C. Bovik, "A statistical evaluation of recent full reference image quality
assessment algorithms," IEEE Transactions on Image Processing, vol. 15, no. 11, pp. 3440-3451, Nov. 2006.
[6] H.R. Sheikh and A.C. Bovik, "Image information and visual quality," IEEE Transactions on Image Processing,
vol. 15, no. 2, pp. 430-444, Feb. 2006.
[7] H.R. Sheikh, A.C. Bovik and G. de Veciana, "An information fidelity criterion for image quality assessment
using natural scene statistics," IEEE Transactions on Image Processing, vol. 14, no. 12, pp. 2117-2128, Dec. 2005.
[8] Z. Wang and A.C. Bovik, The Handbook of Image and Video Processing, 2nd ed. New York: Academic Press,
June 2005, ch. Structural approaches to image quality assessment, pp. 961–974.
[9] Z. Wang, A.C. Bovik, H.R. Sheikh and E.P. Simoncelli, "Image quality assessment: from error visibility to
structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, April 2004.
[10] Z. Wang, L. Lu and A.C. Bovik, "Video quality assessment based on structural distortion measurement," Signal
Processing: Image Communication, special issue on objective video quality metrics, vol. 19, no. 2, February 2004.
[11] Q. Huynh-Thu and M. Ghanbari, "Scope of validity of PSNR in image/video quality assessment," Electronics
Letters, vol. 44, no. 13, pp. 800-801, 2008. doi:10.1049/el:20080522.
[12] E. Oriani, "qpsnr: A quick PSNR/SSIM analyzer for Linux," http://qpsnr.youlink.org/. Retrieved 6 April 2011.
[13] "Image processing science: calculating RMSE and PSNR for color images,"
http://www.varioustopics.com/image-processing/1008462-calculating-rmse-and-psnr-for-color-images.html.
Retrieved 6 April 2011.
[14] "pnmpsnr User Manual," http://netpbm.sourceforge.net/doc/pnmpsnr.html. Retrieved 6 April 2011.
[15] S.T. Welstead, Fractal and Wavelet Image Compression Techniques. SPIE Publications, 1999, pp. 155–156.
ISBN 978-0819435033.
[16] http://books.google.com/books?id=evGyv-mogukC&lpg=PA155&dq=image%20compression%20acceptable%2
0PSNR&pg=PA156#v=onepage&q=image%20compression%20acceptable%20PSNR&f=false.
[17] M. Barni, ed., "Fractal Image Compression," in Document and Image Compression. CRC Press, May 2006,
pp. 168–169. ISBN 9780849335563. http://books.google.com/books?id=FmqPOsEYRsEC&lpg=PA229&dq=i
mage%20compression%20acceptable%20PSNR&pg=PA168#v=onepage&q=PSNR&f=false. Retrieved 5 April 2011.

Ensuring Technical Readiness For Copilot in Microsoft 365Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project Setup
 
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
 
Moving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdfMoving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdf
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding Club
 
How to write a Business Continuity Plan
How to write a Business Continuity PlanHow to write a Business Continuity Plan
How to write a Business Continuity Plan
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck Presentation
 
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
 

Quality assessment of resultant images after processing

Signal to Noise Ratio, Signal to Noise Ratio, Human Visual System.

1. Introduction
Interest in image processing methods stems from two principal application areas: improvement of pictorial information for human interpretation, and processing of scene data for autonomous machine perception.
One of the first applications of image processing techniques in the first category was in improving digitized newspaper pictures sent by submarine cable between London and New York. An image may be defined as a two-dimensional function f(x, y), where x and y are spatial coordinates and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y and the intensity values of f are all finite, discrete quantities, the image is called a digital image. A digital image is thus an image f(x, y) that has been discretized both in spatial coordinates and in brightness. A digital image can be considered a matrix whose row and column indices identify a point in the image and whose corresponding matrix element value identifies the gray level at that point. The elements of such a digital array are called image elements, picture elements, pixels or pels. Digital image processing encompasses a broad range of hardware, software and theoretical underpinnings. There are basic fundamental steps required to perform an image processing task; the overall objective is to produce a result from a problem domain by means of image processing. A piece of mail is taken as an example: the objective is to read the address on each piece, so the desired output in this case is a stream of alphanumeric characters. The first step in the process is image acquisition, that is, to acquire a digital image. Acquiring the image requires an imaging sensor and the capability to digitize the signal produced by the sensor. The imaging sensor could be a monochrome or color TV camera that produces a two-dimensional image at a time. After the digital image has been obtained, the next step deals with preprocessing that image. The key function of preprocessing is to improve the image in ways that increase the chances for success of the other processes, like enhancing the contrast, removing
noise and isolating regions whose texture indicates a likelihood of alphanumeric information, or images with graphics. The next stage deals with segmentation. Segmentation partitions an input image into its constituent parts or objects. In general, autonomous segmentation is one of the most difficult tasks in digital image processing; on the other hand, a rugged segmentation procedure brings the process a long way toward successful solution of an imaging problem. The output of the segmentation stage usually is raw pixel data, constituting either the boundary of a region or all the points in the region itself. In either case, converting the data to a form suitable for computer processing is necessary. The first decision that must be made is whether the data should be represented as a boundary or as a complete region. Boundary representation is appropriate when the focus is on external shape characteristics, such as corners and inflections. Regional representation is appropriate when the focus is on internal properties, such as texture or skeletal shape. Choosing a representation is only part of the solution for transforming raw data into a form suitable for subsequent computer processing. A method must also be specified for describing the data so that features of interest are highlighted. Description, also called feature selection, deals with extracting features that result in some quantitative information of interest. The last stage involves recognition and interpretation. Recognition is the process that assigns a label to an object based on the information provided by its descriptors. Interpretation involves assigning meaning to an ensemble of recognized objects. Finally, the result is obtained. If the problem domain requires image compression, then it addresses the problem of reducing the amount of data required to represent a digital image.
It is a process intended to yield a compact representation of an image, thereby reducing the image storage/transmission requirements.

2. Assessment of Quality of Image
Traditional perceptual image quality assessment approaches are based on measuring the errors (signal differences) between the distorted and the reference images, and attempt to quantify the errors in a way that simulates human visual error sensitivity features [1]. These methods usually involve:
1. A channel decomposition process that transforms the image signals into different spatial frequency as well as orientation selective subbands.
2. An error normalization process that weights the error signal in each subband by incorporating the variation of visual sensitivity in different subbands (often related to a contrast sensitivity function), and the variation of visual error sensitivity caused by intra- or inter-channel neighboring transform coefficients (often related to certain visual masking effects).
3. An error pooling process that combines the error signals in different subbands into a single quality/distortion value.
An image source communicates to a receiver through a channel that limits the amount of information that could flow through it, thereby introducing distortions [2]. The output of the image source is the reference image, the output of the channel is the test image, and the goal is to relate the visual quality of the test image to the amount of information shared between the test and the reference signals, or more precisely, the mutual information between them [3]. Although mutual information is a statistical measure of information fidelity, and may only be loosely related to what humans regard as image information, it places fundamental limits on the amount of cognitive information that could be extracted from an image.
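As an illustration of this information-fidelity view, the sketch below (an illustration constructed for this discussion, not the method of any cited paper) estimates the mutual information between a reference and a test image from their joint gray-level histogram; the synthetic images, noise level and bin count are arbitrary assumptions:

```python
import numpy as np

def mutual_information(ref, test, bins=16):
    """Estimate mutual information (in bits) between a reference and a
    test image from their joint gray-level histogram."""
    joint, _, _ = np.histogram2d(ref.ravel(), test.ravel(), bins=bins)
    pxy = joint / joint.sum()              # joint probability
    px = pxy.sum(axis=1, keepdims=True)    # marginal of reference
    py = pxy.sum(axis=0, keepdims=True)    # marginal of test
    nz = pxy > 0                           # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64)).astype(float)
noisy = ref + rng.normal(0, 25, size=ref.shape)            # mildly distorted
heavy = rng.integers(0, 256, size=(64, 64)).astype(float)  # unrelated image
print(mutual_information(ref, ref) >= mutual_information(ref, noisy))   # True
print(mutual_information(ref, noisy) > mutual_information(ref, heavy))  # True
```

A severely distorting channel drives the shared information toward zero, which is exactly the behavior the histogram estimate reproduces for the unrelated image.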
For example, in cases where the channel is distorting images severely, corresponding to low mutual information between the test and the reference, the ability of human viewers to obtain semantic information by discriminating and identifying objects in images is also hampered. Thus, information fidelity methods exploit the relationship between statistical image information and visual quality. While these approaches can conveniently make use of many known psychophysical features of the human visual system, they are based on some strong assumptions which are difficult to validate [4].

3. Information Theoretic Approaches to Image Quality Assessment
Information-theoretic approaches to the quality assessment problem have been explored, in which quality assessment is viewed as an information-fidelity problem rather than a signal-fidelity problem. An image source communicates to a receiver through a channel that limits the amount of information that could flow through it, thereby introducing distortions. The output of the image source is the reference image, the output of the channel is
the test image, and the goal is to relate the visual quality of the test image to the amount of information shared between the test and the reference signals, or more precisely, the mutual information between them. Although mutual information is a statistical measure of information fidelity, and may only be loosely related to what humans regard as image information, it places fundamental limits on the amount of cognitive information that could be extracted from an image. For example, in cases where the channel is distorting images severely, corresponding to low mutual information between the test and the reference, the ability of human viewers to obtain semantic information by discriminating and identifying objects in images is also hampered. Thus, information fidelity methods exploit the relationship between statistical image information and visual quality, as discussed in [5], [6], [7].

4. Structural Similarity Based Image and Video Quality Assessment
Different from the traditional error-sensitivity based approach, structural similarity based image quality assessment is based on the following philosophy: the main function of the human visual system is to extract structural information from the viewing field, and the human visual system is highly adapted for this purpose. Therefore, a measurement of structural information loss can provide a good approximation to perceived image distortion. Here, we regard the structural information in an image as those attributes that reflect the structure of objects in the scene, independent of the average luminance and contrast. A universal image quality index that separates the comparison of luminance, contrast and structure was introduced.
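A quality index of this kind can be sketched as follows. This is a simplified, whole-image version of the structural similarity computation (the published SSIM operates on local windows and averages the results), using the usual stabilizing constants; the random test images are assumptions for the example:

```python
import numpy as np

def global_ssim(x, y, data_range=255.0):
    """Simplified single-scale SSIM computed over the whole image
    (the standard index uses local windows; this is a global sketch)."""
    C1 = (0.01 * data_range) ** 2   # stabilizing constants from the SSIM paper
    C2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()             # luminance terms
    vx, vy = x.var(), y.var()               # contrast terms
    cov = ((x - mx) * (y - my)).mean()      # structure term
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx**2 + my**2 + C1) * (vx + vy + C2))

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
print(round(global_ssim(img, img), 4))                             # 1.0
print(global_ssim(img, img + rng.normal(0, 20, img.shape)) < 1.0)  # True
```

Identical images score 1.0, and any distortion that changes luminance, contrast or structure pulls the index below 1.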
This approach was generalized and improved, leading to the Structural SIMilarity index (SSIM), which has shown clear advantages over the traditional mean squared error (MSE) and peak signal to noise ratio (PSNR) measures when tested on a database of JPEG and JPEG2000 compressed images. The SSIM index has also been applied to video quality assessment and extended to a multi-scale approach [8], [9], [10]. In digital or film-based photography, an image is formed on the image plane of the camera and then measured electronically or chemically to produce the photograph. The image formation process may be described by the ideal pinhole camera model, where only light rays from the depicted scene that pass through the camera aperture can fall on the image plane. In reality, this ideal model is only an approximation of the image formation process, and image quality may be described in terms of how well the camera approximates the pinhole model. An ideal model of how a camera measures light is that the resulting photograph should represent the amount of light that falls on each point at a certain point in time. This model is only an approximate description of the light measurement process of a camera, and image quality is also related to the deviation from this model. In some cases, the image for which quality should be determined is primarily not the result of a photographic process in a camera, but the result of storing or transmitting the image. A typical example is a digital image that has been compressed, stored or transmitted, and then decompressed again. Unless a lossless compression method has been used, the resulting image is normally not identical to the original image, and the deviation from the (ideal) original image is then a measure of quality. By considering a large set of images and determining a quality measure for each of them, statistical methods can be used to determine an overall quality measure of the compression method.
In a typical digital camera, the resulting image quality depends on all three factors mentioned above: how much the image formation process of the camera deviates from the pinhole model, the quality of the image measurement process, and the coding artifacts that are introduced in the image produced by the camera, typically by the JPEG coding method. By defining image quality in terms of a deviation from the ideal situation, quality measures become technical in the sense that they can be objectively determined in terms of deviations from the ideal models. Image quality can, however, also be related to the subjective perception of an image, e.g., a human looking at a photograph. Examples are how colors are represented in a black-and-white image, as well as in color images, or the fact that the reduction of image quality from noise depends on how the noise correlates with the information the viewer seeks in the image rather than on its overall strength. Another example of this type of quality measure is Johnson's criteria for determining the necessary quality of an image in order to detect targets in night vision systems. Subjective measures of quality also relate to the fact that, although the camera's deviation from the ideal models of
image formation and measurement is in general undesirable and corresponds to reduced objective image quality, these deviations can also be used for artistic effects in image production, corresponding to high subjective quality.

4.1 Image quality assessment categories
There are several techniques and metrics that can be measured objectively and automatically evaluated by a computer program. They can be classified as full-reference (FR) methods and no-reference (NR) methods. In FR image quality assessment methods, the quality of a test image is evaluated by comparing it with a reference image that is assumed to have perfect quality. NR metrics try to assess the quality of an image without any reference to the original one. For example, comparing an original image to the output of JPEG compression of that image is full-reference: it uses the original as reference.

5. Image quality factors
5.1 Sharpness determines the amount of detail an image can convey. System sharpness is affected by the lens (design and manufacturing quality, focal length, aperture, and distance from the image center) and the sensor (pixel count and anti-aliasing filter). In the field, sharpness is affected by camera shake (a good tripod can be helpful), focus accuracy, and atmospheric disturbances (thermal effects and aerosols). Lost sharpness can be restored by sharpening, but sharpening has limits. Oversharpening can degrade image quality by causing "halos" to appear near contrast boundaries. Images from many compact digital cameras are oversharpened.
5.2 Noise is a random variation of image density, visible as grain in film and as pixel-level variations in digital images. It arises from the effects of basic physics (the photon nature of light and the thermal energy of heat) inside image sensors.
Typical noise reduction (NR) software reduces the visibility of noise by smoothing the image while excluding areas near contrast boundaries. This technique works well, but it can obscure fine, low-contrast detail.
5.3 Dynamic range (or exposure range) is the range of light levels a camera can capture, usually measured in f-stops, Exposure Value (EV), or zones (all factors of two in exposure). It is closely related to noise: high noise implies low dynamic range.
5.4 Tone reproduction is the relationship between scene luminance and the reproduced image brightness.
5.5 Contrast, also known as gamma, is the slope of the tone reproduction curve in a log-log space. High contrast usually involves loss of dynamic range: loss of detail, or clipping, in highlights or shadows.
5.6 Color accuracy is an important but ambiguous image quality factor. Many viewers prefer enhanced color saturation; the most accurate color isn't necessarily the most pleasing. Nevertheless, it is important to measure a camera's color response: its color shifts, saturation, and the effectiveness of its white balance algorithms.
5.7 Distortion is an aberration that causes straight lines to curve near the edges of images. It can be troublesome for architectural photography and metrology (photographic applications involving measurement). Distortion is worst in wide angle, telephoto, and zoom lenses. It is often worse for close-up images than for images at a distance. It can be easily corrected in software.
5.8 Vignetting, or light falloff, darkens images near the corners. It can be significant with wide angle lenses.
5.9 Exposure accuracy can be an issue with fully automatic cameras and with video cameras where there is little or no opportunity for post-exposure tonal adjustment. Some cameras even have exposure memory: exposure may change after very bright or dark objects appear in a scene.
5.10 Lateral chromatic aberration (LCA), also called "color fringing", including purple fringing, is a lens aberration that causes colors to focus at different distances from the image center. It is most visible near the corners of images. LCA is worst with asymmetrical lenses, including ultrawides, true telephotos and zooms. It is strongly affected by demosaicing.
5.11 Lens flare, including "veiling glare", is stray light in lenses and optical systems caused by reflections between lens elements and the inside barrel of the lens. It can cause image fogging (loss of shadow detail and color) as well as "ghost" images that can occur in the presence of bright light sources in or near the field of view.
5.12 Color moiré is artificial color banding that can appear in images with repetitive patterns of high spatial frequencies, like fabrics or picket fences. It is affected by lens sharpness, the anti-aliasing (lowpass) filter (which softens the image), and demosaicing software. It tends to be worst with the sharpest lenses.
5.13 Artifacts: software (especially operations performed during RAW conversion) can cause significant visual artifacts, including data compression and transmission losses (e.g. low-quality JPEG), oversharpening "halos", and loss of fine, low-contrast detail.

6. From analog to digital video
Since the time when the world's first video sequence was recorded, many video processing systems have been designed.
In the age of analog video systems, it was possible to evaluate the quality of a video processing system by calculating the system's frequency response using some traditional test signal (for example, a collection of color bars and circles). Nowadays, digital video systems are replacing analog ones, and evaluation methods have changed. The performance of a digital video processing system can vary significantly and depends on dynamic characteristics of the input video signal (e.g. amount of motion or spatial detail). That is why digital video quality should be evaluated on diverse video sequences, often from the user's database.

6.1 Objective video quality
Objective video evaluation techniques are mathematical models that approximate the results of subjective quality assessment, but are based on criteria and metrics that can be measured objectively and automatically evaluated by a computer program. Objective methods are classified based on the availability of the original video signal, which is considered to be of high quality (generally not compressed). They can thus be classified as FR, reduced-reference (RR) and NR methods. FR metrics compute the quality difference by comparing every pixel in each image of the distorted video to its corresponding pixel in the original video. RR metrics extract some features of both videos and compare them to give a quality score; they are used when the complete original video is not available, e.g. in a transmission with limited bandwidth. NR metrics try to assess the quality of a distorted video without any reference to the original video; these metrics are usually used when the video coding method is known. The most traditional ways of evaluating the quality of a digital video processing system (e.g. a video codec like DivX or Xvid) are calculation of the signal-to-noise ratio (SNR) and peak signal-to-noise ratio (PSNR) between the original video signal and the signal passed through the system. PSNR is the most widely used objective video quality metric.
However, PSNR values do not perfectly correlate with perceived visual quality, due to the non-linear behavior of the human visual system. Recently a number of more complicated and precise metrics have been developed, for example UQI, VQM, PEVQ, SSIM,
VQuad-HD and CZD. Based on a benchmark by the Video Quality Experts Group (VQEG) in the course of the Multimedia Test Phase 2007-2008, some metrics were standardized as ITU-T Rec. J.246 (RR) and J.247 (FR) in 2008, and J.341 (FR HD) in 2011. The performance of an objective video quality metric is evaluated by computing the correlation between the objective scores and the subjective test results. The latter is called the mean opinion score (MOS). The most frequently used correlation coefficients are: the linear correlation coefficient, Spearman's rank correlation coefficient, kurtosis, the kappa coefficient and the outliers ratio. When estimating the quality of a video codec, all the mentioned objective methods may require repeating post-encoding tests in order to determine the encoding parameters that satisfy a required level of visual quality, making them time consuming, complex and impractical for implementation in real commercial applications. For this reason, much research has been focused on developing novel objective evaluation methods which enable prediction of the perceived quality level of the encoded video before the actual encoding is performed [1].

6.2 Subjective video quality
The main goal of many objective video quality metrics is to automatically estimate the average user (viewer) opinion on the quality of video processed by the system. Sometimes, however, measurement of subjective video quality can also be challenging because it may require a trained expert to judge it. Many subjective video quality measurements are described in ITU-R Recommendation BT.500. Their main idea is the same as in the Mean Opinion Score for audio: video sequences are shown to a group of viewers, and their opinions are recorded and averaged to evaluate the quality of each video sequence. However, the details of testing may vary greatly.
6.3 Measurement of quality
PSNR is an engineering term for the ratio between the maximum possible power of a signal and the power of corrupting noise that affects the fidelity of its representation. Because many signals have a very wide dynamic range, PSNR is usually expressed in terms of the logarithmic decibel scale. PSNR is most commonly used as a measure of the quality of reconstruction of lossy compression codecs (e.g., for image compression). The signal in this case is the original data, and the noise is the error introduced by compression. When comparing compression codecs it is used as an approximation to human perception of reconstruction quality; therefore in some cases one reconstruction may appear to be closer to the original than another, even though it has a lower PSNR (a higher PSNR would normally indicate that the reconstruction is of higher quality). One has to be extremely careful with the range of validity of this metric; it is only conclusively valid when it is used to compare results from the same codec (or codec type) and the same content. PSNR is most easily defined via the MSE: PSNR = 10 log10(MAX_I^2 / MSE), where MAX_I is the maximum possible pixel value of the image. When the pixels are represented using 8 bits per sample, this is 255. More generally, when samples are represented using linear PCM with B bits per sample, MAX_I is 2^B - 1. For color images with three RGB values per pixel, the definition of PSNR is the same except that the MSE is the sum over all squared value differences divided by the image size and by three. Alternatively, for color images the image is converted to a different color space and PSNR is reported against each channel of that color space, e.g., YCbCr or HSL [11], [12], [13]. Typical values for the PSNR in lossy image and video compression are between 30 and 50 dB, where higher is better [14], [15]. Acceptable values for wireless transmission quality loss are considered to be about 20 dB to 25 dB [16], [17]. When the two images are identical, the MSE will be zero.
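These definitions translate directly into code. The sketch below (with arbitrary 2×2 example data) follows the MSE-based PSNR formula and returns infinity when the MSE is zero:

```python
import math

def mse(ref, test):
    """Mean squared error between two equal-sized grayscale images
    given as 2-D lists of pixel values."""
    h, w = len(ref), len(ref[0])
    return sum((ref[i][j] - test[i][j]) ** 2
               for i in range(h) for j in range(w)) / (h * w)

def psnr(ref, test, max_i=255.0):
    """PSNR in dB; max_i = 2**B - 1 for B bits per sample."""
    err = mse(ref, test)
    if err == 0:
        return float("inf")   # identical images: zero MSE, unbounded PSNR
    return 10 * math.log10(max_i ** 2 / err)

ref  = [[52, 60], [61, 49]]
test = [[54, 58], [60, 48]]       # slightly distorted copy
print(psnr(ref, ref))             # inf
print(round(psnr(ref, test), 2))  # 44.15
```

Here the MSE of the distorted copy is 2.5, giving a PSNR of about 44 dB, comfortably within the 30 to 50 dB range quoted above for lossy compression.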
For this value the PSNR is undefined. Objective methods for assessing perceptual image quality have traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. The PSNR block computes the peak signal-to-noise ratio, in decibels, between two images. This ratio is often used as a quality measurement between the original and a compressed image: the higher the PSNR, the better the quality of the compressed or reconstructed image. The MSE and the PSNR are the two error metrics used to compare image
compression quality. The MSE represents the cumulative squared error between the compressed and the original image, whereas the PSNR represents a measure of the peak error. The lower the value of the MSE, the lower the error.

6.4 Recommendation for Computing PSNR for Color Images
Different approaches exist for computing the PSNR of a color image. Because the human eye is most sensitive to luma information, compute the PSNR for color images by converting the image to a color space that separates the intensity (luma) channel, such as YCbCr. The Y (luma) in YCbCr represents a weighted average of R, G and B; G is given the most weight, again because the human eye perceives it most easily. With this consideration, compute the PSNR only on the luma channel. PSNR is generally used to analyze the quality of image, sound and video files in dB (decibels). The PSNR of two images, one original and one altered, describes how close the two images are. Equation 1 shows the formula:

PSNR = 10 log10(MAX^2 / MSE), where MSE = (1 / (x * y)) * sum over all pixels of (I(i, j) - K(i, j))^2,     (Equation 1)

with x the width of the image, y its height, x*y the number of pixels (or quantities), I the original image and K the altered image.

7. Conclusion
Various processing techniques are involved in processing the image from the problem domain to the obtaining of results. The processed image is reconstructed to check the quality of the image. There are various reference methods to find the quality of the image; assessment may be subjective or objective, and FR or NR methods can be adopted for assessing the quality of a processed image. The processed image can be perceived by the Human Visual System. Various assessment categories are available for assessing the image. The best way to assess the quality is by finding the RMSE and PSNR; by finding these, the quality of the processed image can be found.
Various categories of assessment of quality with respect to a processed image, in this case a compressed image, have therefore been considered and discussed.

References
[1] Z. Wang and A.C. Bovik, The Handbook of Image and Video Processing, 2nd ed. New York: Academic Press, June 2005, ch. Structural approaches to image quality assessment, pp. 961–974.
[2] Z. Wang, A.C. Bovik, H.R. Sheikh and E.P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, April 2004.
[3] Z. Wang and A.C. Bovik, The Handbook of Video Databases: Design and Applications. Boca Raton, Florida: CRC Press, September 2003, ch. Objective video quality assessment, pp. 1041–1078.
[4] Z. Wang, A.C. Bovik and L. Lu, "Why is image quality assessment so difficult?," IEEE International Conference on Acoustics, Speech, and Signal Processing, 2002.
[5] H.R. Sheikh, M.F. Sabir and A.C. Bovik, "A statistical evaluation of recent full reference image quality
assessment algorithms," IEEE Transactions on Image Processing, vol. 15, no. 11, pp. 3440–3451, Nov. 2006.
[6] H.R. Sheikh and A.C. Bovik, "Image information and visual quality," IEEE Transactions on Image Processing, vol. 15, no. 2, pp. 430–444, Feb. 2006.
[7] H.R. Sheikh, A.C. Bovik and G. de Veciana, "An information fidelity criterion for image quality assessment using natural scene statistics," IEEE Transactions on Image Processing, vol. 14, no. 12, pp. 2117–2128, Dec. 2005.
[8] Z. Wang and A.C. Bovik, The Handbook of Image and Video Processing, 2nd ed. New York: Academic Press, June 2005, ch. Structural Approaches to Image Quality Assessment, pp. 961–974.
[9] Z. Wang, A.C. Bovik, H.R. Sheikh and E.P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, April 2004.
[10] Z. Wang, L. Lu and A.C. Bovik, "Video quality assessment based on structural distortion measurement," Signal Processing: Image Communication, special issue on objective video quality metrics, vol. 19, no. 2, February 2004.
[11] Q. Huynh-Thu and M. Ghanbari, "Scope of validity of PSNR in image/video quality assessment," Electronics Letters, vol. 44, no. 13, pp. 800–801, 2008. doi:10.1049/el:20080522.
[12] E. Oriani, "qpsnr: A quick PSNR/SSIM analyzer for Linux," http://qpsnr.youlink.org/. Retrieved 6 April 2011.
[13] "Image Processing Science: calculating RMSE and PSNR for color images," http://www.varioustopics.com/image-processing/1008462-calculating-rmse-and-psnr-for-color-images.html. Retrieved 6 April 2011.
[14] "pnmpsnr User Manual," http://netpbm.sourceforge.net/doc/pnmpsnr.html. Retrieved 6 April 2011.
[15] S.T. Welstead, Fractal and Wavelet Image Compression Techniques. SPIE Publications, 1999, pp. 155–156. ISBN 978-0819435033.
[16] http://books.google.com/books?id=evGyv-mogukC&lpg=PA155&dq=image%20compression%20acceptable%20PSNR&pg=PA156#v=onepage&q=image%20compression%20acceptable%20PSNR&f=false.
[17] M. Barni, ed., "Fractal Image Compression," in Document and Image Compression. CRC Press, May 2006, pp. 168–169. ISBN 9780849335563. http://books.google.com/books?id=FmqPOsEYRsEC&lpg=PA229&dq=image%20compression%20acceptable%20PSNR&pg=PA168#v=onepage&q=PSNR&f=false. Retrieved 5 April 2011.
This academic article was published by The International Institute for Science, Technology and Education (IISTE).