International Journal of Computer Engineering and Technology (IJCET), ISSN 0976-6367 (Print), ISSN 0976-6375 (Online), Volume 4, Issue 3, May-June (2013), pp. 291-296, © IAEME
COMPARISON OF EZW AND H.264
Sangeeta Mishra, Sudhir Sawarkar
Research Scholar, Amravati, DMCE, Airoli
ABSTRACT
Motion compensation techniques are an important part of almost all video codecs, since they provide an effective way of exploiting the temporal redundancy between frames in an image sequence. EZW (Embedded Zerotrees of Wavelet transforms) is a lossy image compression algorithm. At low bit rates (i.e. high compression ratios) most of the coefficients produced by a subband transform (such as the wavelet transform) are zero, or very close to zero, because "real world" images tend to contain mostly low-frequency (highly correlated) information. H.264/MPEG-4 Part 10, or AVC (Advanced Video Coding), is a standard for video compression. This paper presents a comparison of two video compression techniques: the first based on the Embedded Zerotree Wavelet (EZW) algorithm and the second on the H.264 codec.
Index Terms — Motion estimation, AVC, video compression, MPEG, H.264, EZW.
1. INTRODUCTION
Reducing the transmission bit-rate while concomitantly retaining image quality is the
most daunting challenge to overcome in the area of very low bit-rate video coding, e.g.,
H.26X standards. The MPEG-4 [2] video standard introduced the concept of content-based
coding, by dividing video frames into separate segments comprising a background and one or
more moving objects. This idea has been exploited in several low bit-rate macroblock-based
video coding algorithms [3] using a simplified segmentation process which avoids handling
arbitrary shaped objects, and therefore can employ popular macroblock-based motion
estimation techniques. Such algorithms focus on moving regions through the use of regular pattern templates, drawn from a pattern codebook, applied to non-overlapping rectangular blocks of 16×16 pixels called macroblocks (MBs).
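The macroblock partitioning these algorithms rely on can be sketched as follows (an illustrative Python sketch with our own function name and toy frame size, not the implementation used in the paper):

```python
import numpy as np

def macroblocks(frame, size=16):
    """Split a frame into non-overlapping size x size macroblocks.
    Assumes the frame dimensions are exact multiples of `size`."""
    h, w = frame.shape
    blocks = []
    for r in range(0, h, size):
        for c in range(0, w, size):
            blocks.append(((r, c), frame[r:r + size, c:c + size]))
    return blocks

frame = np.zeros((48, 64), dtype=np.uint8)   # toy 48x64 luma frame
mbs = macroblocks(frame)
print(len(mbs))  # (48/16) * (64/16) = 12 macroblocks
```

Each macroblock is then matched and coded independently, which is what makes block-based motion estimation tractable.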
2. VIDEO COMPRESSION
With the advent of the multimedia age and the spread of the Internet, video storage on CD/DVD and streaming video have gained a lot of popularity. The ISO Moving Picture Experts Group (MPEG) video coding standards pertain to compressed video storage on physical media such as CD/DVD, whereas the International Telecommunication Union (ITU) standards address real-time point-to-point or multi-point communication over a network. The former has the advantage of higher bandwidth for data transmission. In either family of standards the basic flow of the compression-decompression process is largely the same, as depicted in Fig. 1. The most computationally expensive part of the compression process is motion estimation, which examines the movement of objects in a sequence in order to obtain vectors representing the estimated motion. The encoder estimates the motion of the current frame with respect to the previous frame, creates a motion-compensated image of the current frame, and transmits the motion vectors to the decoder. The decoder reverses the process and reconstructs the full frame. In this way motion compensation uses knowledge of object motion to achieve data compression.
Fig. 1: Block diagram of the video compression process flow.
The encoder estimates the motion in the current frame with respect to a previous frame and builds a motion-compensated image for the current frame out of blocks taken from the previous frame. The motion vectors of the blocks used for motion estimation are transmitted, and the difference between the compensated image and the current frame is EZW encoded and sent as well. The encoded image is then decoded at the encoder and used as a reference frame for subsequent frames, while the decoder reverses the process to reconstruct the full frame. The idea behind motion-estimation-based video compression is to save bits by sending EZW-encoded difference images, which inherently have less energy and can be compressed far more than an EZW-encoded full frame.
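The block-matching step described above can be illustrated with an exhaustive full search over a small window (a minimal sketch under assumed names and toy sizes, not the paper's implementation; practical codecs use faster search patterns):

```python
import numpy as np

def full_search(cur_blk, ref, top, left, radius=4):
    """Exhaustively search the reference frame within +/- radius pixels
    of (top, left) for the block minimizing the sum of absolute
    differences (SAD); returns the motion vector (dy, dx) and the SAD."""
    size = cur_blk.shape[0]
    h, w = ref.shape
    best, best_sad = (0, 0), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            r, c = top + dy, left + dx
            if r < 0 or c < 0 or r + size > h or c + size > w:
                continue  # candidate block falls outside the frame
            cand = ref[r:r + size, c:c + size]
            sad = np.abs(cur_blk.astype(int) - cand.astype(int)).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best, best_sad

# Toy example: the same bright square, shifted right by 2 pixels.
ref = np.zeros((32, 32), dtype=np.uint8)
ref[8:16, 8:16] = 200                     # bright square in reference
cur = np.roll(ref, 2, axis=1)             # content moved 2 px right
mv, sad = full_search(cur[8:16, 10:18], ref, 8, 10)
print(mv, sad)  # (0, -2) 0 : the vector undoes the shift, residual is zero
```

A zero (or low-energy) residual like this is exactly what makes the EZW-encoded difference image so cheap to transmit.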
3. EZW
The embedded zerotree wavelet algorithm (EZW) is a simple, yet remarkably
effective, image compression algorithm, having the property that the bits in the bit stream are
generated in order of importance, yielding a fully embedded code. The embedded code represents a sequence of binary decisions that distinguish an image from the "null" image. J. M. Shapiro proposed the EZW image coding method in 1993 [1]. It has led to a new generation of powerful wavelet image coders that exploit the dependency between wavelet coefficients in scale and space. Zerotree coding reduces the cost of encoding the significance map by exploiting the interscale dependency of wavelet coefficients [1]. Using an embedded coding algorithm, an encoder can terminate the encoding at any point, thereby allowing a target rate or target distortion metric to be met exactly. Also, given a bit stream, the decoder can cease decoding at any point and still produce exactly the same image that would have been encoded at the bit rate corresponding to the truncated bit stream. In addition to producing a fully embedded bit stream, EZW consistently produces compression results that are competitive with virtually all known compression algorithms on standard test images. This performance is achieved with a technique that requires no training, no pre-stored tables or codebooks, and no prior knowledge of the image source.
Fig. 2: EZW flowchart for encoding DWT coefficients.
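As an illustration of the zerotree idea, the following simplified dominant pass labels coefficients against a threshold. The quadtree layout, raster scan order and function names are our own simplifications of Shapiro's scheme, which scans subband by subband and interleaves refinement passes:

```python
import numpy as np

def children(r, c, n):
    """Quadtree children of coefficient (r, c) in an n x n wavelet
    layout (simplified: the 1x1 LL root at (0, 0) parents the three
    coarsest subband coefficients)."""
    if r == 0 and c == 0:
        kids = [(0, 1), (1, 0), (1, 1)]
    else:
        kids = [(2 * r, 2 * c), (2 * r, 2 * c + 1),
                (2 * r + 1, 2 * c), (2 * r + 1, 2 * c + 1)]
    return [(a, b) for a, b in kids if a < n and b < n]

def tree_insignificant(w, r, c, T):
    """True if (r, c) and all of its descendants are below threshold T."""
    if abs(w[r, c]) >= T:
        return False
    return all(tree_insignificant(w, a, b, T)
               for a, b in children(r, c, w.shape[0]))

def dominant_pass(w, T):
    """One simplified dominant pass: emit POS/NEG for significant
    coefficients, ZTR for zerotree roots (whose descendants are then
    skipped), and IZ for isolated zeros."""
    n = w.shape[0]
    symbols, skip = [], set()
    for r in range(n):
        for c in range(n):
            if (r, c) in skip:
                continue
            if abs(w[r, c]) >= T:
                symbols.append('POS' if w[r, c] > 0 else 'NEG')
            elif tree_insignificant(w, r, c, T):
                symbols.append('ZTR')
                stack = children(r, c, n)   # suppress the whole subtree
                while stack:
                    node = stack.pop()
                    skip.add(node)
                    stack.extend(children(node[0], node[1], n))
            else:
                symbols.append('IZ')
    return symbols

# 4x4 toy coefficient array; initial threshold T = 32 (the largest
# power of two not exceeding the maximum magnitude, 34)
w = np.array([[34, -10, 3, 2],
              [ 8,  12, 1, 0],
              [ 2,   1, 0, 1],
              [ 1,   0, 1, 0]])
print(dominant_pass(w, 32))  # ['POS', 'ZTR', 'ZTR', 'ZTR']
```

Thirteen of the sixteen coefficients are covered by just three ZTR symbols, which is precisely how zerotree coding cheapens the significance map.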
4. H.264 CODEC
The intent of the H.264/AVC project was to create a standard capable of providing
good video quality at substantially lower bit rates than previous standards (i.e., half or less the
bit rate of MPEG-2, H.263, or MPEG-4 Part 2), without increasing the complexity of design
so much that it would be impractical or excessively expensive to implement. An additional
goal was to provide enough flexibility to allow the standard to be applied to a wide variety of
applications on a wide variety of networks and systems, including low and high bit rates, low
and high resolution video, broadcast, DVD storage, RTP/IP packet networks, and ITU-T
multimedia telephony systems [4], [5].
A general block diagram is shown in Fig. 3. The input video is supplied to the processing block, which compresses it using either the EZW algorithm or the H.264 codec. After processing, the compressed video is reconstructed for comparison with the original.
Fig 3: Data Flow Diagram
5. RESULTS AND DISCUSSION
We have implemented the embedded zerotree wavelet (EZW) algorithm and the H.264 codec on different videos, and have calculated the MSE, PSNR, compression ratio and compression factor for four of them.
Comparing all the parameters listed above across these videos, we obtained compression ratios in the range of 60% to 70%; the compression ratio obtained by EZW is lower than that of the H.264 codec.
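The quality and compression metrics compared above can be computed as follows (a straightforward sketch; the paper's experiments used MATLAB demo videos, while this illustration uses a synthetic 8×8 frame):

```python
import numpy as np

def mse(orig, recon):
    """Mean squared error between two 8-bit frames."""
    return np.mean((orig.astype(float) - recon.astype(float)) ** 2)

def psnr(orig, recon, peak=255.0):
    """Peak signal-to-noise ratio in dB (infinite for identical frames)."""
    e = mse(orig, recon)
    return float('inf') if e == 0 else 10 * np.log10(peak ** 2 / e)

def compression_ratio(raw_bytes, coded_bytes):
    """Ratio of uncompressed to compressed size; 10.0 means 10:1."""
    return raw_bytes / coded_bytes

orig = np.full((8, 8), 100, dtype=np.uint8)
recon = orig.copy()
recon[0, 0] = 110                     # one pixel off by 10
print(round(mse(orig, recon), 4))     # 10**2 / 64 = 1.5625
print(round(psnr(orig, recon), 2))    # 46.19 dB
```

Higher PSNR at the same compression ratio indicates the better codec, which is the basis of the comparison in Table 1.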
Demonstration of the frames generated for "vipconcentricity.avi":
a) Frames of "vipconcentricity.avi"
1. Original Frames:
2. Reconstructed Frames by EZW:
3. Frames Generated by H.264:
Frames of “casio.avi”
1. Original Frames
2. Reconstructed Frames by EZW:
3. Frames Generated by H.264:
Table 1 below shows the comparison of different parameters between EZW and H.264 on
four different videos from the MATLAB demos.
Table 1: Comparison of EZW and H.264
6. FUTURE SCOPE
We have seen that intra prediction and motion compensation are just two parts of the whole H.264 decoder. The decoder also contains other parts such as deblocking filtering, entropy decoding (CAVLC or CABAC), and the inverse transform (IDCT). These processing units can be employed to accelerate the processing speed and reduce the computational load of the microprocessor. For a complete H.264 decoding system, these processing units and a microprocessor have to be integrated together, so appropriate design of the interfaces and data communication becomes an important issue when integrating the whole system from the top level.
The EZW algorithm achieves excellent compression performance, usually higher than that of arithmetic coding and Huffman coding. The algorithm also offers additional advantages such as spatial random access and ease of geometric manipulation.
7. REFERENCES
1. J. M. Shapiro, "Embedded Image Coding Using Zerotrees of Wavelet Coefficients", IEEE Trans. Signal Processing, vol. 41, pp. 3445-3462, 1993.
2. M. Ghanbari, Video Coding: An Introduction to Standard Codecs.
3. I. Richardson, Video Coding for Next Generation Multimedia.
4. G. J. Sullivan, P. Topiwala and A. Luthra, "The H.264/AVC Advanced Video Coding Standard: Overview and Introduction to the Fidelity Range Extensions", presented at the SPIE Conference on Applications of Digital Image Processing XXVII, Special Session on Advances in the New Emerging Standard: H.264/AVC, August 2004.
5. T. Wiegand and G. J. Sullivan, "Overview of the H.264/AVC Video Coding Standard", IEEE Transactions on Circuits and Systems for Video Technology, vol. 13, no. 7, July 2003.
6. D. R. Bhojani and V. V. Dwivedi, "Novel Idea for Improving Video Codecs", International Journal of Electronics and Communication Engineering & Technology (IJECET), vol. 4, issue 2, 2013, pp. 301-307, ISSN Print: 0976-6464, ISSN Online: 0976-6472.
7. P. Singh, Nivedita and S. Sharma, "A Comparative Study: Block Truncation Coding, Wavelet, Embedded Zerotree and Fractal Image Compression on Color Image", International Journal of Electronics and Communication Engineering & Technology (IJECET), vol. 3, issue 2, 2012, pp. 10-21, ISSN Print: 0976-6464, ISSN Online: 0976-6472.