Medical Images Compression: JPEG variations for DICOM standard
Author: Jose Pablo Pinilla Gomez
DICOM (Digital Imaging and Communications in Medicine) is a standard managed by the Medical
Imaging & Technology Alliance to ensure the interoperability of medical systems, images, documents
and workflows. It was released in 1993 (and updated in 2011) to solve the problem of CT (Computed
Tomography) and MRI (Magnetic Resonance Imaging) images that could not be decoded by any means
other than those provided by the original manufacturers. This discordance between devices created
unfair competition among manufacturers of medical machines, because device incompatibility kept
health care facilities from acquiring alternative machines. Although there are data
“translators” to interconnect systems that follow different standards, their implementation increases the
total cost and delay of the system. A standardized format is therefore necessary for the implementation
of a PACS (Picture Archiving and Communication System) that compiles all kinds of acquirable images
for medical records. The image file format specified in the current DICOM standard supports the RLE
(Run-Length Encoding), JPEG, JPEG-LS and JPEG 2000 compression and decompression schemes.


RLE is a lossless compression scheme that converts a repeating byte sequence into a two-byte code
containing the negative representation of the number of bytes in the run and the value of the repeating
byte; this is called a Replica Run. When a sequence of non-repeating bytes is found, the resulting code
has one byte with the positive representation of the number of bytes in the sequence, followed by the
literal sequence (a Literal Run)1. RLE can yield good compression ratios for monochrome images, but
its behaviour is highly variable for RGB and YCbCr images. For this reason, RLE is often preceded by
other algorithms that improve its performance.
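The Replica/Literal Run scheme described above can be sketched in Python. This is a minimal PackBits-style codec in the spirit of DICOM PS3.5 Annex G; the function names are illustrative, not part of any library API:

```python
def rle_encode(data: bytes) -> bytes:
    """PackBits-style RLE: Replica Runs and Literal Runs (sketch)."""
    out = bytearray()
    i = 0
    while i < len(data):
        # Measure the run of identical bytes starting at i (max 128).
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 128:
            run += 1
        if run >= 2:
            # Replica Run: negative count (two's complement), then the byte.
            out.append(257 - run)   # e.g. a run of 3 -> 0xFE (signed -2)
            out.append(data[i])
            i += run
        else:
            # Literal Run: collect bytes until the next run of >= 3 (max 128).
            start = i
            while i < len(data) and i - start < 128:
                if i + 2 < len(data) and data[i] == data[i + 1] == data[i + 2]:
                    break
                i += 1
            out.append(i - start - 1)           # positive count: n-1 means n literals
            out.extend(data[start:i])
    return bytes(out)

def rle_decode(data: bytes) -> bytes:
    out = bytearray()
    i = 0
    while i < len(data):
        n = data[i]
        if n < 128:                 # Literal Run: copy next n+1 bytes verbatim
            out.extend(data[i + 1 : i + 2 + n])
            i += 2 + n
        elif n > 128:               # Replica Run: repeat next byte 257-n times
            out.extend(bytes([data[i + 1]]) * (257 - n))
            i += 2
        else:                       # 128 is a no-op in PackBits-style coding
            i += 1
    return bytes(out)
```

A run of four zero bytes followed by two distinct bytes thus compresses into a two-byte Replica Run plus a three-byte Literal Run.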


The Joint Photographic Experts Group created the JPEG standard in 1992. It relies on a five-step
process to compress pixel data with a configurable quality-percentage parameter. The processing
begins by converting each pixel from the RGB (Red, Green and Blue) color space to YCbCr
(Luminance, Blue-Chrominance and Red-Chrominance) values. Like many other image compression
schemes, JPEG uses the YCbCr color space to keep the luminance values, which are more relevant to
the human eye, while discarding some of the chrominance information. After decompression, files
compressed with “chroma subsampling” still display images with almost unnoticeable changes. A
configurable amount of chrominance (hue and saturation) information is discarded by a subsequent
downsampling step in JPEG.
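The conversion and downsampling steps can be sketched as follows. This is a minimal illustration assuming the common full-range BT.601 coefficients and 2x2 averaging for 4:2:0 subsampling (with even image dimensions); the names are illustrative:

```python
def rgb_to_ycbcr(r: int, g: int, b: int) -> tuple[float, float, float]:
    """Full-range BT.601 RGB -> YCbCr conversion, as used by JPEG/JFIF."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

def subsample_420(chroma: list[list[float]]) -> list[list[float]]:
    """4:2:0 chroma subsampling: average each 2x2 block (even dims assumed)."""
    h, w = len(chroma), len(chroma[0])
    return [[(chroma[y][x] + chroma[y][x + 1]
              + chroma[y + 1][x] + chroma[y + 1][x + 1]) / 4
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]
```

With 4:2:0 subsampling each chroma plane shrinks to a quarter of its size, while the luminance plane is kept intact.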


After subsampling, the next step is to divide the whole image into 8x8 matrices per channel
(Y, Cb and Cr). Every 8x8 pixel block then goes through a Discrete Cosine Transform (DCT) to
compute its coefficients. The DCT returns 64 values (coefficients), each related to the
“concentration” of one of 64 predefined 8x8 patterns that, when put together, can represent the
original block or give a close approximation of it. Each of the predefined patterns is called a
“basis function” and is a two-dimensional set of curves created using the cosine function.
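The per-block transform can be illustrated with a naive (unoptimized) 2-D DCT-II; real encoders use fast factorizations, but the coefficients are the same:

```python
import math

def dct_8x8(block):
    """Naive 2-D DCT-II of an 8x8 block (samples are usually
    level-shifted by -128 before this step in JPEG)."""
    def c(k):  # normalization factor for the DC basis function
        return math.sqrt(0.5) if k == 0 else 1.0
    out = [[0.0] * 8 for _ in range(8)]
    for u in range(8):          # horizontal frequency
        for v in range(8):      # vertical frequency
            s = sum(block[y][x]
                    * math.cos((2 * x + 1) * u * math.pi / 16)
                    * math.cos((2 * y + 1) * v * math.pi / 16)
                    for x in range(8) for y in range(8))
            out[v][u] = 0.25 * c(u) * c(v) * s
    return out
```

A constant block produces a single non-zero DC coefficient and 63 (numerically negligible) AC coefficients, which is exactly the energy concentration the compressor exploits.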


The resulting 8x8 matrix with the coefficients is now run through a step of “Quantization”, where every
value in the input matrix is divided by a corresponding constant derived from predefined quantization
tables and the user-configured quality; the resulting values are rounded to the nearest integers. As
stated in the JPEG standard2, “The purpose of quantization is to achieve further compression by
representing DCT coefficients with no greater precision than is necessary to achieve the desired
image quality. Stated another way, the goal of this processing step is to discard information which is
not visually significant.” The outcome of quantizing the DCT coefficients is that smaller, unimportant
coefficients disappear and larger coefficients lose unnecessary precision.

1   DICOM standard Part 5: Data Structures and Encoding. 2011. Annex G, p. 101.
    http://medical.nema.org/Dicom/2011/11_05pu.pdf
2   The JPEG Still Picture Compression Standard. 1991. p. 5. http://white.stanford.edu/~brian/psy221/reader/Wallace.JPEG.pdf
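A minimal sketch of the quantization step, assuming the widely reproduced example luminance table from Annex K of the JPEG standard (real encoders scale such tables according to the quality setting):

```python
# Example luminance quantization table (Annex K of the JPEG standard).
LUMA_QT = [
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99],
]

def quantize(coeffs, qtable):
    """Divide each DCT coefficient by its table entry and round to the
    nearest integer; small high-frequency coefficients round to zero."""
    return [[round(coeffs[i][j] / qtable[i][j]) for j in range(8)]
            for i in range(8)]

def dequantize(q, qtable):
    """The decoder multiplies back; the rounding error is irrecoverable,
    which is why quantization is the lossy core of JPEG."""
    return [[q[i][j] * qtable[i][j] for j in range(8)] for i in range(8)]
```

Note that `dequantize(quantize(c, t), t)` only approximates the original coefficients: the rounding makes the mapping many-to-one.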


The final step in JPEG compression is called “Coding”, which involves three sub-processes: DC
Coding, the Zig-Zag Sequence, and Entropy Coding. The “DC” coefficient, the top-left value in the
matrix, is typically the largest and is a measure of the average value of the 64 image samples. DC
coding replaces every block's DC coefficient with the difference between it and that of the previous
block; these values are often correlated and therefore produce small differences. The Zig-Zag
Sequence is a literal zig-zag reordering of the matrix that takes advantage of the distinction between
“low-frequency” and “high-frequency” coefficients; the result of this process is that non-zero
(low-frequency) and zero (high-frequency) values are grouped so that the following compression steps
are more effective. Entropy coding uses the Huffman method, which can replace the run of zeroes at
the end of a matrix with a single code (byte) and uses prefix coding* for other symbols according to
their frequency of appearance; more frequent symbols are coded into shorter values.
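The zig-zag reordering can be sketched as a permutation of the 8x8 matrix: coefficients are visited along anti-diagonals of increasing frequency, alternating direction on each diagonal (an illustrative implementation, not taken from the standard text):

```python
def zigzag(block):
    """Flatten an 8x8 matrix in JPEG zig-zag order: anti-diagonals of
    increasing i+j, alternating direction so the path never jumps."""
    order = sorted(((i, j) for i in range(8) for j in range(8)),
                   key=lambda p: (p[0] + p[1],                      # which diagonal
                                  p[0] if (p[0] + p[1]) % 2 else p[1]))  # direction
    return [block[i][j] for i, j in order]
```

After this reordering, the trailing zeroes produced by quantization sit together at the end of the vector, where a single end-of-block code can replace them.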


JPEG's algorithm is very fast, with an average decompression time of 0.9 seconds3, but it is overall a
lossy standard, and this is reflected in most of its steps. Color space transformation is an irreversible
process because computer calculations with decimal numbers are always approximations of the real
values. Subsampling is a straightforward elimination of data, but this step is optional, unlike block
splitting, in which certain blocks may require “filling” algorithms to complete the 8x8 matrices,
thereby altering the decompressed result. Next is the DCT, which is not by itself a lossy process, but
forward quantization always is. Quantization is a many-to-one mapping, and therefore is
fundamentally lossy. It is the principal source of lossiness in DCT-based encoders.4


Medical imaging applications often require high precision so that digital alterations cannot affect a
physician's judgment based on radiology results. This does not mean that the compression schemes in
medical PACS have to be lossless, but they need an appropriate level of fidelity. Fidelity, however, is a
very subjective qualification, which is why DICOM also introduced the lossless JPEG scheme, as well
as the lossless/near-lossless JPEG-LS and, later, JPEG 2000, as alternatives to the “baseline” JPEG
standard5.


The lossless JPEG standard was added as a JPEG mode of operation in 1993. It replaces the DCT and
the other lossy steps (quantization, chroma subsampling) of the original format with a coding scheme
called Differential Pulse Code Modulation (DPCM). DPCM is based on predicting each sample from
its surroundings; these predictions are generally accurate, which is why the differences between the
predicted and actual values tend to be small. This variation of the standard therefore stores these
differences and compresses them further using the Huffman method. The downside of lossless JPEG
is that it cannot achieve compression ratios comparable to those of baseline JPEG compression.
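The prediction-and-difference idea can be sketched with the simplest of the predictors lossless JPEG defines (each sample predicted by its left neighbour); the residuals are small for smooth data and reconstruct the row exactly:

```python
def dpcm_residuals(row):
    """DPCM with the left-neighbour predictor: store only the
    differences between each sample and its prediction."""
    residuals = [row[0]]            # first sample has no neighbour, stored as-is
    for prev, cur in zip(row, row[1:]):
        residuals.append(cur - prev)
    return residuals

def dpcm_reconstruct(residuals):
    """Invert DPCM: each sample is the previous sample plus its residual."""
    row = [residuals[0]]
    for d in residuals[1:]:
        row.append(row[-1] + d)
    return row
```

The residuals cluster around zero, so the subsequent Huffman stage assigns them short codes, and the round trip is exact, hence "lossless".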



*   Prefix property states that there is no valid code word in the system that is a prefix (start) of any other valid code word
    in the set.
3   JPEG 2000 still image coding versus other standards. 2000. http://www.jpeg.org/public/wg1n1816.pdf
4   Schroeder, Mark D. JPEG Compression Algorithm and Associated Data Structures. 1997.
    http://akbar.marlboro.edu/~mahoney/courses/Fall01/computation/compression/jpeg/jpeg.html
5   The JPEG Committee. Medical Imaging. 2007. http://www.jpeg.org/apps/medical.html
In 1999 the Joint Photographic Experts Group released the JPEG-LS standard as a solution for better
lossless compression. It uses the prediction, RLE, and context-modelling (surrounding-sample
dependency) coding techniques of the LOCO-I algorithm created by Hewlett-Packard (HP). This
scheme achieves better compression while keeping the integrity of the information according to the
user's configuration6. The standard is therefore classified as lossless/near-lossless, where
near-lossless refers to a maximum difference of 1 to 5 between the original RGB values and the
decompressed versions7. Decompressing a JPEG-LS file takes approximately the same time as a
lossless JPEG file, but the compression ratio achieved is better.
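The LOCO-I prediction step can be sketched with its median edge detector (MED), which picks the left neighbour, the upper neighbour, or a planar estimate depending on the local gradient:

```python
def med_predict(a, b, c):
    """LOCO-I / JPEG-LS median edge detector.
    a = left neighbour, b = neighbour above, c = upper-left neighbour."""
    if c >= max(a, b):      # edge above: predict from the left
        return min(a, b)
    if c <= min(a, b):      # edge to the left: predict from above
        return max(a, b)
    return a + b - c        # smooth region: planar prediction
```

As in lossless JPEG, only the prediction residuals are coded; JPEG-LS additionally models them per context and, in near-lossless mode, quantizes them within the user-set error bound.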


The latest standard included in DICOM is JPEG 2000, which, in addition to having a lossless mode,
includes a set of features that are advantageous in medical imaging applications, such as regions of
interest, scalability (low-resolution previewing) and extensive metadata. The differences between the
baseline JPEG and JPEG 2000 algorithms reflect a new approach in which the objective of higher
compression is always accompanied by a lossless alternative. That is, the color space transformation
can be either irreversible or reversible, removing quantization errors by using integers; instead of
block splitting, JPEG 2000 uses variable-size “tiles” to remove undesired block filling; the DCT is
replaced by a Wavelet Transform (WT) with lossy and lossless variations, again by removing
quantization; and a coding algorithm runs afterwards. The WT differs from the DCT in that, instead of
cosine functions, it uses another kind of oscillating function called a wavelet. A wavelet's amplitude
starts from zero, reaches its maximum value and then fades back to zero, creating a “frequency
pulse”. The effect of this change is that the transform's coefficients are centred around zero (and can
be easily coded), with only a few large coefficients. Nevertheless, JPEG 2000's algorithms make image
compression and decompression relatively slow, taking an average of 4.3 s to decompress an image.
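The wavelet idea can be illustrated with a toy single-level Haar transform. JPEG 2000 actually uses the 5/3 (reversible) and 9/7 (irreversible) wavelets, but the key property, detail coefficients clustering around zero for smooth signals, is the same:

```python
def haar_1d(signal):
    """One level of a Haar wavelet transform: pairwise averages
    (low-pass half) followed by pairwise differences (high-pass half).
    Assumes an even-length input."""
    avgs  = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    diffs = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avgs + diffs
```

On a locally constant signal the high-pass half is exactly zero; in a 2-D image the transform is applied along rows and columns and repeated on the low-pass band, which is what enables low-resolution previewing.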


Although the DICOM standard is limited to JPEG-family file formats, a wide range of possibilities
arises from the different JPEG versions and their quality-configuration capabilities; in addition, JPEG
is now working in collaboration with DICOM to further improve this integration so that medical
imaging necessities are taken into consideration8. Still, when it comes to image quality there is a great
deal of subjectivity in defining how lossy an image may be, taking into account that lossless
algorithms cannot yield compression results as good as those of lossy schemes. The decision to
implement a lossy or lossless scheme, and how lossy it can be, is therefore turning into a decision
about which features and what performance the application requires.


Lossless and near-lossless standards with good speed, compression ratio, and new features have the
medical imaging industry as one of their targets. However, improving any of these features usually
sacrifices performance in another. For example, the average compression ratios for the lossless
configurations of JPEG (2.09), JPEG-LS (2.98) and JPEG 2000 (2.50) differ, making JPEG-LS the
best choice for lossless compression, also thanks to its fast performance. If a certain amount of quality
can be compromised (near-lossless), the JPEG algorithm will give better results in considerably less
time. When compared against JPEG 2000, which offers the medical-imaging-oriented features,
JPEG 2000 will outperform JPEG by giving the same compression ratios with more quality settings,
although this superiority is toned down by the amount of time that it takes to decompress.



6   The JPEG Committee. Lossless JPEG. JPEG-LS. 2007. http://www.jpeg.org/jpeg/jpegls.html
7   Lossless, Near-lossless and Lossy Compression of PTM Images. HP. 2001.
    http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.22.1635&rep=rep1&type=pdf
8   The JPEG Committee. Medical Imaging. 2007. http://www.jpeg.org/apps/medical.html

Contenu connexe

Tendances

3 d discrete cosine transform for image compression
3 d discrete cosine transform for image compression3 d discrete cosine transform for image compression
3 d discrete cosine transform for image compression
Alexander Decker
 
Halftoning-based BTC image reconstruction using patch processing with border ...
Halftoning-based BTC image reconstruction using patch processing with border ...Halftoning-based BTC image reconstruction using patch processing with border ...
Halftoning-based BTC image reconstruction using patch processing with border ...
TELKOMNIKA JOURNAL
 
B140715
B140715B140715
B140715
irjes
 
Compressed Medical Image Transfer in Frequency Domain
Compressed Medical Image Transfer in Frequency DomainCompressed Medical Image Transfer in Frequency Domain
Compressed Medical Image Transfer in Frequency Domain
CSCJournals
 
Performance analysis of Hybrid Transform, Hybrid Wavelet and Multi-Resolutio...
Performance analysis of Hybrid Transform, Hybrid Wavelet and  Multi-Resolutio...Performance analysis of Hybrid Transform, Hybrid Wavelet and  Multi-Resolutio...
Performance analysis of Hybrid Transform, Hybrid Wavelet and Multi-Resolutio...
IJMER
 

Tendances (20)

Tile Boundary Artifacts Reduction of JPEG2000 Compressed Images
Tile Boundary Artifacts Reduction of JPEG2000 Compressed Images Tile Boundary Artifacts Reduction of JPEG2000 Compressed Images
Tile Boundary Artifacts Reduction of JPEG2000 Compressed Images
 
O0342085098
O0342085098O0342085098
O0342085098
 
3 d discrete cosine transform for image compression
3 d discrete cosine transform for image compression3 d discrete cosine transform for image compression
3 d discrete cosine transform for image compression
 
A COMPARATIVE STUDY ON IMAGE COMPRESSION USING HALFTONING BASED BLOCK TRUNCAT...
A COMPARATIVE STUDY ON IMAGE COMPRESSION USING HALFTONING BASED BLOCK TRUNCAT...A COMPARATIVE STUDY ON IMAGE COMPRESSION USING HALFTONING BASED BLOCK TRUNCAT...
A COMPARATIVE STUDY ON IMAGE COMPRESSION USING HALFTONING BASED BLOCK TRUNCAT...
 
Halftoning-based BTC image reconstruction using patch processing with border ...
Halftoning-based BTC image reconstruction using patch processing with border ...Halftoning-based BTC image reconstruction using patch processing with border ...
Halftoning-based BTC image reconstruction using patch processing with border ...
 
H010315356
H010315356H010315356
H010315356
 
Bg044357364
Bg044357364Bg044357364
Bg044357364
 
B140715
B140715B140715
B140715
 
ROI Based Image Compression in Baseline JPEG
ROI Based Image Compression in Baseline JPEGROI Based Image Compression in Baseline JPEG
ROI Based Image Compression in Baseline JPEG
 
Compressed Medical Image Transfer in Frequency Domain
Compressed Medical Image Transfer in Frequency DomainCompressed Medical Image Transfer in Frequency Domain
Compressed Medical Image Transfer in Frequency Domain
 
EFFICIENT IMAGE COMPRESSION USING LAPLACIAN PYRAMIDAL FILTERS FOR EDGE IMAGES
EFFICIENT IMAGE COMPRESSION USING LAPLACIAN PYRAMIDAL FILTERS FOR EDGE IMAGESEFFICIENT IMAGE COMPRESSION USING LAPLACIAN PYRAMIDAL FILTERS FOR EDGE IMAGES
EFFICIENT IMAGE COMPRESSION USING LAPLACIAN PYRAMIDAL FILTERS FOR EDGE IMAGES
 
THE REDUCTION IN COMPUTATION TIME IN THE COLOR IMAGE DECOMPOSITION WITH THE B...
THE REDUCTION IN COMPUTATION TIME IN THE COLOR IMAGE DECOMPOSITION WITH THE B...THE REDUCTION IN COMPUTATION TIME IN THE COLOR IMAGE DECOMPOSITION WITH THE B...
THE REDUCTION IN COMPUTATION TIME IN THE COLOR IMAGE DECOMPOSITION WITH THE B...
 
Review of Diverse Techniques Used for Effective Fractal Image Compression
Review of Diverse Techniques Used for Effective Fractal Image CompressionReview of Diverse Techniques Used for Effective Fractal Image Compression
Review of Diverse Techniques Used for Effective Fractal Image Compression
 
Color image compression based on spatial and magnitude signal decomposition
Color image compression based on spatial and magnitude signal decomposition Color image compression based on spatial and magnitude signal decomposition
Color image compression based on spatial and magnitude signal decomposition
 
Efficient Image Compression Technique using JPEG2000 with Adaptive Threshold
Efficient Image Compression Technique using JPEG2000 with Adaptive ThresholdEfficient Image Compression Technique using JPEG2000 with Adaptive Threshold
Efficient Image Compression Technique using JPEG2000 with Adaptive Threshold
 
International Journal on Soft Computing ( IJSC )
International Journal on Soft Computing ( IJSC )International Journal on Soft Computing ( IJSC )
International Journal on Soft Computing ( IJSC )
 
Probabilistic model based image segmentation
Probabilistic model based image segmentationProbabilistic model based image segmentation
Probabilistic model based image segmentation
 
10
1010
10
 
Performance analysis of Hybrid Transform, Hybrid Wavelet and Multi-Resolutio...
Performance analysis of Hybrid Transform, Hybrid Wavelet and  Multi-Resolutio...Performance analysis of Hybrid Transform, Hybrid Wavelet and  Multi-Resolutio...
Performance analysis of Hybrid Transform, Hybrid Wavelet and Multi-Resolutio...
 
Brain image segmentation based on FCM Clustering Algorithm and Rough Set
Brain image segmentation based on FCM Clustering Algorithm and Rough SetBrain image segmentation based on FCM Clustering Algorithm and Rough Set
Brain image segmentation based on FCM Clustering Algorithm and Rough Set
 

En vedette

Compression using JPEG
Compression using JPEGCompression using JPEG
Compression using JPEG
Sabih Hasan
 
IJEST12-04-09-150
IJEST12-04-09-150IJEST12-04-09-150
IJEST12-04-09-150
Jigar Jain
 

En vedette (12)

DICOM structure
DICOM structureDICOM structure
DICOM structure
 
DICOM Structure Basics
DICOM Structure BasicsDICOM Structure Basics
DICOM Structure Basics
 
Compression using JPEG
Compression using JPEGCompression using JPEG
Compression using JPEG
 
Introduction to hl7 v3
Introduction to hl7 v3Introduction to hl7 v3
Introduction to hl7 v3
 
Medical image compression
Medical image compressionMedical image compression
Medical image compression
 
Medical Image Compression
Medical Image CompressionMedical Image Compression
Medical Image Compression
 
Hl7 reference information model
Hl7 reference information modelHl7 reference information model
Hl7 reference information model
 
Dicom
DicomDicom
Dicom
 
Seminar Report on image compression
Seminar Report on image compressionSeminar Report on image compression
Seminar Report on image compression
 
Picture Archiving and Communication System (PACS)
Picture Archiving and Communication System (PACS)Picture Archiving and Communication System (PACS)
Picture Archiving and Communication System (PACS)
 
IJEST12-04-09-150
IJEST12-04-09-150IJEST12-04-09-150
IJEST12-04-09-150
 
Medical Image Compression
Medical Image CompressionMedical Image Compression
Medical Image Compression
 

Similaire à Medical images compression: JPEG variations for DICOM standard

Similaire à Medical images compression: JPEG variations for DICOM standard (20)

JPEG
JPEGJPEG
JPEG
 
Comparison of different Fingerprint Compression Techniques
Comparison of different Fingerprint Compression TechniquesComparison of different Fingerprint Compression Techniques
Comparison of different Fingerprint Compression Techniques
 
Digital Image Processing - Image Compression
Digital Image Processing - Image CompressionDigital Image Processing - Image Compression
Digital Image Processing - Image Compression
 
Jv2517361741
Jv2517361741Jv2517361741
Jv2517361741
 
Jv2517361741
Jv2517361741Jv2517361741
Jv2517361741
 
Evaluation of Huffman and Arithmetic Algorithms for Multimedia Compression St...
Evaluation of Huffman and Arithmetic Algorithms for Multimedia Compression St...Evaluation of Huffman and Arithmetic Algorithms for Multimedia Compression St...
Evaluation of Huffman and Arithmetic Algorithms for Multimedia Compression St...
 
Pipelined Architecture of 2D-DCT, Quantization and ZigZag Process for JPEG Im...
Pipelined Architecture of 2D-DCT, Quantization and ZigZag Process for JPEG Im...Pipelined Architecture of 2D-DCT, Quantization and ZigZag Process for JPEG Im...
Pipelined Architecture of 2D-DCT, Quantization and ZigZag Process for JPEG Im...
 
Jpeg
JpegJpeg
Jpeg
 
Squashed JPEG Image Compression via Sparse Matrix
Squashed JPEG Image Compression via Sparse MatrixSquashed JPEG Image Compression via Sparse Matrix
Squashed JPEG Image Compression via Sparse Matrix
 
SQUASHED JPEG IMAGE COMPRESSION VIA SPARSE MATRIX
SQUASHED JPEG IMAGE COMPRESSION VIA SPARSE MATRIXSQUASHED JPEG IMAGE COMPRESSION VIA SPARSE MATRIX
SQUASHED JPEG IMAGE COMPRESSION VIA SPARSE MATRIX
 
Squashed JPEG Image Compression via Sparse Matrix
Squashed JPEG Image Compression via Sparse MatrixSquashed JPEG Image Compression via Sparse Matrix
Squashed JPEG Image Compression via Sparse Matrix
 
Wavelet based Image Coding Schemes: A Recent Survey
Wavelet based Image Coding Schemes: A Recent Survey  Wavelet based Image Coding Schemes: A Recent Survey
Wavelet based Image Coding Schemes: A Recent Survey
 
PERFORMANCE EVALUATION OF JPEG IMAGE COMPRESSION USING SYMBOL REDUCTION TECHN...
PERFORMANCE EVALUATION OF JPEG IMAGE COMPRESSION USING SYMBOL REDUCTION TECHN...PERFORMANCE EVALUATION OF JPEG IMAGE COMPRESSION USING SYMBOL REDUCTION TECHN...
PERFORMANCE EVALUATION OF JPEG IMAGE COMPRESSION USING SYMBOL REDUCTION TECHN...
 
IRJET- RGB Image Compression using Multi-Level Block Trunction Code Algor...
IRJET-  	  RGB Image Compression using Multi-Level Block Trunction Code Algor...IRJET-  	  RGB Image Compression using Multi-Level Block Trunction Code Algor...
IRJET- RGB Image Compression using Multi-Level Block Trunction Code Algor...
 
Jl2516751681
Jl2516751681Jl2516751681
Jl2516751681
 
Design of Image Compression Algorithm using MATLAB
Design of Image Compression Algorithm using MATLABDesign of Image Compression Algorithm using MATLAB
Design of Image Compression Algorithm using MATLAB
 
Himadeep
HimadeepHimadeep
Himadeep
 
NEW LOCAL BINARY PATTERN FEATURE EXTRACTOR WITH ADAPTIVE THRESHOLD FOR FACE R...
NEW LOCAL BINARY PATTERN FEATURE EXTRACTOR WITH ADAPTIVE THRESHOLD FOR FACE R...NEW LOCAL BINARY PATTERN FEATURE EXTRACTOR WITH ADAPTIVE THRESHOLD FOR FACE R...
NEW LOCAL BINARY PATTERN FEATURE EXTRACTOR WITH ADAPTIVE THRESHOLD FOR FACE R...
 
FPGA based JPEG Encoder
FPGA based JPEG EncoderFPGA based JPEG Encoder
FPGA based JPEG Encoder
 
Ec36783787
Ec36783787Ec36783787
Ec36783787
 

Plus de Jose Pinilla

Black wednesday SOPA/PIPA Report
Black wednesday SOPA/PIPA ReportBlack wednesday SOPA/PIPA Report
Black wednesday SOPA/PIPA Report
Jose Pinilla
 
The internet success factors
The internet success factorsThe internet success factors
The internet success factors
Jose Pinilla
 
FPGA como alternativa
FPGA como alternativaFPGA como alternativa
FPGA como alternativa
Jose Pinilla
 
"Basta de historias" de Andrés Oppenheimer
"Basta de historias" de Andrés Oppenheimer"Basta de historias" de Andrés Oppenheimer
"Basta de historias" de Andrés Oppenheimer
Jose Pinilla
 

Plus de Jose Pinilla (11)

Summary - Adaptive Insertion Policies for High Performance Caching. Qureshi, ...
Summary - Adaptive Insertion Policies for High Performance Caching. Qureshi, ...Summary - Adaptive Insertion Policies for High Performance Caching. Qureshi, ...
Summary - Adaptive Insertion Policies for High Performance Caching. Qureshi, ...
 
Instruction Level Parallelism (ILP) Limitations
Instruction Level Parallelism (ILP) LimitationsInstruction Level Parallelism (ILP) Limitations
Instruction Level Parallelism (ILP) Limitations
 
X-ISCKER
X-ISCKERX-ISCKER
X-ISCKER
 
CWCAS X-ISCKER Poster
CWCAS X-ISCKER PosterCWCAS X-ISCKER Poster
CWCAS X-ISCKER Poster
 
Presentación Proyecto de Grado: X-ISCKER
Presentación Proyecto de Grado: X-ISCKERPresentación Proyecto de Grado: X-ISCKER
Presentación Proyecto de Grado: X-ISCKER
 
Black wednesday SOPA/PIPA Report
Black wednesday SOPA/PIPA ReportBlack wednesday SOPA/PIPA Report
Black wednesday SOPA/PIPA Report
 
Telemedicine and telecardiology report
Telemedicine and telecardiology reportTelemedicine and telecardiology report
Telemedicine and telecardiology report
 
The internet success factors
The internet success factorsThe internet success factors
The internet success factors
 
FPGA como alternativa
FPGA como alternativaFPGA como alternativa
FPGA como alternativa
 
FPGA @ UPB-BGA
FPGA @ UPB-BGAFPGA @ UPB-BGA
FPGA @ UPB-BGA
 
"Basta de historias" de Andrés Oppenheimer
"Basta de historias" de Andrés Oppenheimer"Basta de historias" de Andrés Oppenheimer
"Basta de historias" de Andrés Oppenheimer
 

Dernier

The basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxThe basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptx
heathfieldcps1
 
Activity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfActivity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdf
ciinovamais
 
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhi
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in DelhiRussian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhi
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhi
kauryashika82
 
1029 - Danh muc Sach Giao Khoa 10 . pdf
1029 -  Danh muc Sach Giao Khoa 10 . pdf1029 -  Danh muc Sach Giao Khoa 10 . pdf
1029 - Danh muc Sach Giao Khoa 10 . pdf
QucHHunhnh
 

Dernier (20)

The basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxThe basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptx
 
A Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy ReformA Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy Reform
 
Activity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfActivity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdf
 
BAG TECHNIQUE Bag technique-a tool making use of public health bag through wh...
BAG TECHNIQUE Bag technique-a tool making use of public health bag through wh...BAG TECHNIQUE Bag technique-a tool making use of public health bag through wh...
BAG TECHNIQUE Bag technique-a tool making use of public health bag through wh...
 
APM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across SectorsAPM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across Sectors
 
9548086042 for call girls in Indira Nagar with room service
9548086042  for call girls in Indira Nagar  with room service9548086042  for call girls in Indira Nagar  with room service
9548086042 for call girls in Indira Nagar with room service
 
Measures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SDMeasures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SD
 
Class 11th Physics NEET formula sheet pdf
Class 11th Physics NEET formula sheet pdfClass 11th Physics NEET formula sheet pdf
Class 11th Physics NEET formula sheet pdf
 
Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)
 
Disha NEET Physics Guide for classes 11 and 12.pdf
Disha NEET Physics Guide for classes 11 and 12.pdfDisha NEET Physics Guide for classes 11 and 12.pdf
Disha NEET Physics Guide for classes 11 and 12.pdf
 
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhi
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in DelhiRussian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhi
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhi
 
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
 
1029 - Danh muc Sach Giao Khoa 10 . pdf
1029 -  Danh muc Sach Giao Khoa 10 . pdf1029 -  Danh muc Sach Giao Khoa 10 . pdf
1029 - Danh muc Sach Giao Khoa 10 . pdf
 
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
 
Student login on Anyboli platform.helpin
Student login on Anyboli platform.helpinStudent login on Anyboli platform.helpin
Student login on Anyboli platform.helpin
 
Web & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdfWeb & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdf
 
Sanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfSanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdf
 
Arihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdfArihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdf
 
Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3
 
Key note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdfKey note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdf
 

Medical images compression: JPEG variations for DICOM standard

  • 1. Medical Images Compression: JPEG variations for DICOM standard Author: Jose Pablo Pinilla Gomez The DICOM (Digital Imaging and Communication in Medicine) is a standard managed by the Medical Imaging & Technology Alliance to ensure the interoperability of medical systems, images, documents and workflow. It was released in 1993 (updated in 2011), to solve the problem with CT (Computed Tomography) or MRI (Magnetic Resonance Imaging) images which cannot be decoded by any means other than those provided by the same manufacturers. This discordance between devices causes unfair competition among medical machines manufacturers by keeping health care facilities from acquiring alternate machines that can lead to device incompatibility. Although there are data “translators” to interconnect different standard systems, their implementation increases the total cost and delay of the system. A standardized system is then necessary for the implementation of a PACS (Picture Archiving and Communication System) that compiles all kinds of acquirable images for medical records. The image file format specified in the current DICOM standard supports RLE (Run Length Encoding), JPEG, JPEG-LS and JPEG-2000 compression and decompression schemes. RLE is a lossless compression scheme that converts repeating byte sequences into two-byte codes with the value of the repeating byte and the negative representation of the number of bytes in the run, this is called a Replica Run; In case a sequence of non-repetitive bytes is found, the resulting code will have one byte with the positive representation of the number of bytes in the sequence followed by the literal sequence (Literal Run)1. RLE can yield good compression ratios for monochrome images but it has a very variable behaviour for RGB and YCbCr images. The use of RLE is often preceded by other algorithms to improve its performance. The Joint Photographic Experts Group created the JPEG standard in 1992. 
JPEG relies on a five-step process to compress pixel data with a configurable quality-percentage parameter. The processing begins by changing the color space from the RGB (Red, Green and Blue) pixel triplet to YCbCr (Luminance, Blue-Chrominance and Red-Chrominance) values. Like many other image compression schemes, JPEG uses the YCbCr color space to keep the luminance values, which are more relevant to the human eye, while discarding some of the chrominance information. After decompression, files with "chroma subsampling" can still display images with almost unnoticeable changes. A configurable amount of chrominance (hue and saturation) information is discarded by a subsequent downsampling step in JPEG.

After subsampling, the next step is to divide the whole image into 8x8 matrices of the same channel (Y, Cb or Cr). Then, every 8x8 pixel block goes through a Discrete Cosine Transform (DCT) computation of coefficients. The DCT gives back 64 values (coefficients), each related to the "concentration" of one of 64 different 8x8 patterns that, when put together, represent the original image or a close approximation of it. Each of the predefined patterns is called a "basis function" and is a two-dimensional set of curves created using the cosine function.

The resulting 8x8 matrix of coefficients is then run through a "Quantization" step, where every value in the input matrix is divided by a corresponding constant derived from predefined quantization tables and the user's quality configuration; the resulting values are rounded to the nearest integers. As stated in the JPEG standard [2], "The purpose of quantization is to achieve further compression by representing DCT coefficients with no greater precision than is necessary to achieve the desired image quality. Stated another way, the goal of this processing step is to discard information which is not visually significant." The outcome of quantizing the DCT coefficients is that smaller, unimportant coefficients disappear and larger coefficients lose unnecessary precision.

The final step in JPEG compression is called "Coding", which involves three sub-processes: DC Coding, Zig-Zag Sequencing and Entropy Coding. The "DC" coefficient in the matrix, which is the top left-most value, is a larger integer that measures the average value of the 64 image samples. DC Coding replaces every block's DC coefficient with the difference between it and that of the previous block; these values are often correlated and therefore produce a small difference. The Zig-Zag Sequence is a literal zig-zag reorganization of the matrix that takes advantage of the differentiation between "high-frequency" and "low-frequency" coefficients; its result is that non-zero (low-frequency) and zero (high-frequency) values are grouped so that the following compression steps are more effective. The Entropy Coding uses the Huffman method, which can replace the repeating zeroes at the end of a matrix with a single character (byte) and uses prefix coding* for the other characters according to their frequency of appearance; more frequent characters are coded into shorter values.

JPEG's algorithm is very fast, with an average decompression time of 0.9 seconds [3], but it is overall a lossy standard, and this shows in most of its steps. Color space transformation is an irreversible process because computer calculations with decimal-point numbers are always approximations of the real values. Subsampling is a straightforward elimination of data, but this step is optional, unlike block splitting, in which certain blocks may require "filling" algorithms to complete the 8x8 matrices, thereby altering the decompressed result.

[1] DICOM Standard Part 5: Data Structures and Encoding. 2011. Annex G, p. 101. http://medical.nema.org/Dicom/2011/11_05pu.pdf
[2] JPEG Still Picture Compression Standard. 1991. p. 5. http://white.stanford.edu/~brian/psy221/reader/Wallace.JPEG.pdf
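The color-space conversion and subsampling steps can be sketched with the standard JFIF/BT.601 full-range formulas. This is a minimal illustration, not a full JPEG encoder; the 2x2-averaging function below assumes 4:2:0 subsampling on even-sized channels:

```python
def rgb_to_ycbcr(r, g, b):
    """BT.601 full-range conversion used by baseline JPEG (JFIF).
    Rounding to integers already makes the round trip slightly lossy,
    as noted in the text for the color-space step."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return tuple(round(v) for v in (y, cb, cr))

def subsample_420(chan):
    """4:2:0 chroma subsampling: average each 2x2 block of a
    chrominance channel, keeping one value per four pixels."""
    h, w = len(chan), len(chan[0])
    return [[(chan[y][x] + chan[y][x + 1] +
              chan[y + 1][x] + chan[y + 1][x + 1]) // 4
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]
```

Pure white and pure black both map to neutral chrominance (Cb = Cr = 128), which is why discarding chrominance detail costs so little visually.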
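The DCT and quantization steps for a single 8x8 block can be illustrated as follows. This uses the textbook O(n^4) DCT-II formula rather than the fast factorizations real codecs use, and the uniform all-16 quantization table is a placeholder for illustration, not one of the standard's example tables:

```python
import math

def dct_8x8(block):
    """Forward 2-D DCT-II on an 8x8 block of (level-shifted) samples,
    per the JPEG definition: F(u,v) = 1/4 C(u) C(v) sum f(x,y) cos(...) cos(...)."""
    def c(k):
        return 1 / math.sqrt(2) if k == 0 else 1.0
    out = [[0.0] * 8 for _ in range(8)]
    for u in range(8):
        for v in range(8):
            s = sum(block[y][x]
                    * math.cos((2 * x + 1) * u * math.pi / 16)
                    * math.cos((2 * y + 1) * v * math.pi / 16)
                    for x in range(8) for y in range(8))
            out[v][u] = 0.25 * c(u) * c(v) * s
    return out

def quantize(coeffs, qtable):
    """Divide each coefficient by its quantization-table entry and round
    to the nearest integer -- the many-to-one, fundamentally lossy step."""
    return [[round(coeffs[y][x] / qtable[y][x]) for x in range(8)]
            for y in range(8)]
```

A constant block has all its energy in the DC coefficient: for a block of all 16s the DCT yields DC = 128 with every AC coefficient zero, and quantization then collapses the result to a single small integer.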
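The DC Coding and Zig-Zag Sequencing sub-processes described above can be sketched directly; the Huffman stage is omitted, so this shows only the reordering and differencing that prepare the data for entropy coding:

```python
# Zig-zag order for an 8x8 block: (row, col) pairs sorted by
# anti-diagonal, alternating the traversal direction per diagonal.
ZIGZAG = sorted(((r, c) for r in range(8) for c in range(8)),
                key=lambda rc: (rc[0] + rc[1],
                                rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def zigzag_scan(block):
    """Reorder an 8x8 coefficient matrix into the 64-entry zig-zag
    sequence, so low-frequency values come first and the trailing
    high-frequency zeros cluster at the end."""
    return [block[r][c] for r, c in ZIGZAG]

def dc_differences(dc_values):
    """DC Coding: replace each block's DC coefficient with its difference
    from the previous block's DC (the first is predicted from 0)."""
    prev, out = 0, []
    for dc in dc_values:
        out.append(dc - prev)
        prev = dc
    return out
```

Because neighboring blocks have similar averages, a DC sequence like 100, 102, 101 becomes the small residuals 100, 2, -1, which Huffman coding then represents in very few bits.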
Next is the DCT, which is not by itself a lossy process; forward quantization, however, always is. Quantization is a many-to-one mapping and therefore fundamentally lossy. It is the principal source of lossiness in DCT-based encoders [4].

Medical Imaging applications often require high precision so that digital alterations cannot affect a physician's criteria based on radiology results. This does not mean that the compression schemes in medical PACS have to be lossless, but they need an appropriate level of fidelity. On the other hand, fidelity is a very subjective qualification, which is why DICOM also introduced the lossless JPEG scheme, as well as the lossless/near-lossless JPEG-LS and JPEG 2000 later on, as alternatives to the "baseline" JPEG standard [5].

The lossless JPEG standard was added as a JPEG mode of operation in 1993. It replaces the DCT and the other lossy steps (quantization, chroma subsampling) of the original format with a coding called Differential Pulse Code Modulation (DPCM). DPCM is based on predicting each image sample from its surroundings; these predictions are generally accurate, which is why their differences from the actual values tend to be small. This variation of the standard therefore saves these differences and then compresses them even further using the Huffman method. The downside of lossless JPEG is that it cannot achieve compression ratios comparable to those of baseline JPEG.

* The prefix property states that no valid code word in the system is a prefix (start) of any other valid code word in the set.

[3] JPEG 2000 still image coding versus other standards. 2000. http://www.jpeg.org/public/wg1n1816.pdf
[4] SCHROEDER, Mark D. JPEG Compression Algorithm and Associated Data Structures. 1997. http://akbar.marlboro.edu/~mahoney/courses/Fall01/computation/compression/jpeg/jpeg.html
[5] The JPEG Committee. Medical Imaging. 2007. http://www.jpeg.org/apps/medical.html
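The DPCM idea can be sketched with one of the seven selectable predictors of the lossless JPEG mode (a + b - c, from the left, above, and above-left neighbors). Out-of-image neighbors are treated as 0 here for simplicity; the real standard has specific first-row/first-column rules:

```python
def dpcm_encode(img):
    """Predict each sample from causal neighbors a (left), b (above),
    c (above-left) and store only the residual. The predictor a + b - c
    corresponds to selection value 4 in the lossless JPEG mode."""
    h, w = len(img), len(img[0])
    residuals = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            a = img[y][x - 1] if x else 0
            b = img[y - 1][x] if y else 0
            c = img[y - 1][x - 1] if x and y else 0
            residuals[y][x] = img[y][x] - (a + b - c)
    return residuals

def dpcm_decode(residuals):
    """Invert the prediction exactly -- the scheme is lossless."""
    h, w = len(residuals), len(residuals[0])
    img = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            a = img[y][x - 1] if x else 0
            b = img[y - 1][x] if y else 0
            c = img[y - 1][x - 1] if x and y else 0
            img[y][x] = residuals[y][x] + a + b - c
    return img
```

On smooth image regions the residuals are small integers clustered around zero, which is exactly what the subsequent Huffman stage compresses well.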
In 1999 the Joint Photographic Experts Group released the JPEG-LS standard as a solution to achieve better lossless compression. It uses the prediction, RLE, and context-modelling (surrounding-samples dependency) coding techniques included in the LOCO-I algorithm created by Hewlett-Packard (HP). This scheme achieves better compression while keeping the integrity of the information in accordance with the user's configuration [6]. The standard is therefore classified as lossless/near-lossless, where near-lossless refers to a change of 1 to 5 between the original RGB values and the decompressed versions [7]. Decompressing a JPEG-LS file takes approximately the same time as a lossless JPEG file, but the compression ratio achieved is better.

The latest standard included for DICOM compatibility was JPEG 2000, which, in addition to having a lossless mode, includes a set of features that are advantageous in Medical Imaging applications, such as regions of interest, scalability (low-resolution previewing) and extensive metadata. The differences between the baseline JPEG and JPEG 2000 algorithms reflect a new approach in which the objective of higher compression is always accompanied by a lossless alternative. That is: the color space transformation can be either irreversible or reversible, removing quantization errors by using integers; instead of block splitting, JPEG 2000 uses variable-size "tiles" to remove undesired block filling; the DCT is replaced by a Wavelet Transform (WT) with lossy and lossless variations, again by removing quantization; and a coding algorithm runs afterwards. The WT differs from the DCT in that, instead of cosine functions, it uses another kind of oscillating function called a wavelet. A wavelet's amplitude starts from zero, reaches its maximum value and then fades back to zero, creating a "frequency pulse".
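The reversible-by-using-integers idea can be illustrated with one level of an integer Haar (S-) lifting transform. This is a deliberately minimal stand-in for JPEG 2000's 5/3 lifting wavelet, chosen because it shows the key property in a few lines: integer arithmetic throughout, so the inverse reconstructs the input exactly:

```python
def haar_lift_forward(signal):
    """One level of a reversible integer Haar transform via lifting:
    split an even-length signal into low-pass (integer averages) and
    high-pass (differences) halves with no rounding loss."""
    low, high = [], []
    for i in range(0, len(signal), 2):
        d = signal[i + 1] - signal[i]   # detail (high-pass)
        s = signal[i] + (d >> 1)        # integer average (low-pass)
        low.append(s)
        high.append(d)
    return low, high

def haar_lift_inverse(low, high):
    """Undo the lifting steps in reverse order -- exactly lossless."""
    out = []
    for s, d in zip(low, high):
        a = s - (d >> 1)
        out.extend([a, a + d])
    return out
```

Dropping or coarsely quantizing the high-pass values instead would give the lossy variation; keeping them gives perfect reconstruction, which is what makes a lossless mode possible in the same framework.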
The effect of this change in the compression algorithm is that the transform's coefficients are centred around zero (and so can be easily coded), with very few large coefficients. Nevertheless, JPEG 2000's algorithms make image compression and decompression relatively slow, taking an average of 4.3 seconds to decompress an image.

Although the DICOM standard limits the choice of JPEG file formats, a wide range of possibilities arises from the different JPEG versions and their quality-configuration capabilities; in addition, JPEG is now working in collaboration with DICOM to further improve this integration so that Medical Imaging necessities are taken into consideration [8]. Still, when it comes to image quality there is a lot of subjectivity in defining how lossy an image can be, given that lossless algorithms cannot yield compression results as good as those of lossy schemes. The decision to implement a lossy or lossless scheme, and how lossy it can be, is therefore turning into a decision about which features and what performance the application requires. Lossless and near-lossless standards with good speed, compression ratios, and new features have the Medical Imaging industry as one of their targets. However, improving any of those features means sacrificing performance in another: for example, the average compression ratios for the lossless configurations of JPEG (2.09), JPEG-LS (2.98) and JPEG 2000 (2.50) differ, making JPEG-LS the best choice for lossless compression, also thanks to its fast performance. But if a certain amount of quality can be compromised (near-lossless), the JPEG algorithm will give better results in considerably less time.
When the Medical-Imaging-oriented features are required, however, JPEG 2000 outperforms JPEG by delivering the same compression ratios with more quality settings, keeping in mind that this superiority is toned down by the amount of time it takes to decompress.

[6] The JPEG Committee. Lossless JPEG, JPEG-LS. 2007. http://www.jpeg.org/jpeg/jpegls.html
[7] Lossless, Near-lossless and Lossy Compression of PTM Images. HP. 2001. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.22.1635&rep=rep1&type=pdf
[8] The JPEG Committee. Medical Imaging. 2007. http://www.jpeg.org/apps/medical.html