Lecture Notes on Quantization
for
Open Educational Resource
on
Data Compression (CA209)
by
Dr. Piyush Charan
Assistant Professor
Department of Electronics and Communication Engg.
Integral University, Lucknow
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Unit 5-Syllabus
• Quantization
– Vector Quantization,
– Advantages of Vector Quantization over Scalar
Quantization,
– The Linde-Buzo-Gray Algorithm,
– Tree-structured Vector Quantizers,
– Structured Vector Quantizers
02 February 2021 Dr. Piyush Charan, Dept. of ECE, Integral University, Lucknow 3
Introduction
• Quantization is one of the most efficient tools for lossy
compression.
• It reduces the number of bits required to represent the source.
• In lossy compression applications, we represent each source
output using one of a small number of codewords.
• The number of distinct source output values is generally
much larger than the number of codewords available to
represent them.
• The process of mapping this large set of distinct output
values onto a much smaller set is called quantization.
Introduction contd…
• The inputs and outputs of a quantizer can
be scalars or vectors.
Types of Quantization
• Scalar Quantization: The most common type of
quantization is scalar quantization. Scalar quantization,
typically denoted y = Q(x), is the process of using a
quantization function Q(x) to map an input value x to a scalar
output value y.
• Vector Quantization: A vector quantizer maps k-
dimensional vectors in the vector space R^k into a finite set of
vectors Y = {Yi : i = 1, 2, …, N}. Each vector Yi is called a code
vector or a codeword, and the set of all the codewords is called a
codebook.
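The two mappings above can be sketched in Python; this is a minimal illustration (the step size, input vector, and codebook are made-up example values):

```python
import numpy as np

def scalar_quantize(x, step):
    """Uniform scalar quantizer y = Q(x): round every sample to the
    nearest multiple of the step size, independently of the others."""
    return step * np.round(x / step)

def vector_quantize(x, codebook):
    """Vector quantizer: map the whole k-dimensional vector x to the
    nearest codeword Yi in the codebook (squared-error distortion)."""
    dists = np.sum((codebook - x) ** 2, axis=1)
    return codebook[np.argmin(dists)]

x = np.array([0.9, 2.4])
codebook = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 3.0]])
scalar_quantize(x, 1.0)        # each component quantized on its own
vector_quantize(x, codebook)   # one codeword represents the whole vector
```

Note how the scalar quantizer treats each component separately, while the vector quantizer picks a single codeword for the whole block.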
Vector Quantization
• VQ is a lossy data compression method based on the principle of
block coding: it quantizes blocks of data instead of individual
signal samples.
• VQ exploits the correlation between neighboring signal
samples by quantizing them together.
• VQ is one of the most widely used and efficient techniques for image
compression.
• Over the last few decades, VQ has received great attention in the field
of multimedia data compression because it has a simple decoding
structure and can provide high compression ratios.
Vector Quantization contd…
• VQ-based image compression has three major steps, namely:
1. Codebook Design
2. VQ Encoding Process
3. VQ Decoding Process
• In VQ-based image compression, the image is first decomposed into
non-overlapping sub-blocks, and each sub-block is converted into a one-
dimensional vector termed a training vector.
• From the training vectors, a set of representative vectors is selected to
represent the entire set of training vectors.
• The set of representative vectors is called a codebook, and each
representative vector is called a codeword or code vector.
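The block-to-training-vector decomposition above can be sketched as follows; the function name and block size are illustrative, and the image dimensions are assumed divisible by the block size:

```python
import numpy as np

def image_to_training_vectors(image, block_h, block_w):
    """Split an image into non-overlapping block_h x block_w sub-blocks
    and flatten each sub-block into a one-dimensional training vector."""
    h, w = image.shape
    vectors = []
    for r in range(0, h - h % block_h, block_h):
        for c in range(0, w - w % block_w, block_w):
            vectors.append(image[r:r + block_h, c:c + block_w].ravel())
    return np.array(vectors)

image = np.arange(16).reshape(4, 4)          # toy 4x4 "image"
tv = image_to_training_vectors(image, 2, 2)  # four 2x2 blocks
tv.shape  # (4, 4): four training vectors, each of dimension 4
```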
Vector Quantization contd…
• The goal of VQ codebook generation is to find an optimal codebook that yields the
lowest possible distortion compared with all other codebooks of the same size.
• The performance of a VQ-based image compression technique depends upon the
constructed codebook.
• The search complexity increases with the number of vectors in the codebook; to
minimize the search complexity, tree-structured vector quantization schemes were
introduced.
• The number of code vectors N depends on two parameters: the rate R and the dimension L.
• The number of code vectors is calculated using the following formula:
Number of code vectors (N) = 2^(R×L)
where
R → rate in bits/pixel,
L → dimension
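The formula can be checked with a quick computation (the rates and dimensions below are made-up examples):

```python
def codebook_size(rate_bits_per_pixel, dimension):
    """N = 2**(R * L) code vectors for rate R (bits/pixel) and block
    dimension L."""
    return 2 ** (rate_bits_per_pixel * dimension)

codebook_size(2, 4)   # 2x2 blocks (L = 4) at R = 2 bits/pixel: 256 codewords
codebook_size(1, 16)  # 4x4 blocks (L = 16) at R = 1 bit/pixel: 65536 codewords
```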
Vector Quantization Process
Difference between Vector and Scalar
Quantization
• ⇢ 1: Vector Quantization can lower the average distortion while the
number of reconstruction levels is held constant, whereas Scalar Quantization
cannot.
• ⇢ 2: Vector Quantization can reduce the number of reconstruction levels
while the distortion is held constant, whereas Scalar Quantization cannot.
• ⇢ 3: The most significant way Vector Quantization can improve
performance over Scalar Quantization is by exploiting the statistical
dependence among the scalars in a block.
• ⇢ 4: Vector Quantization is also more effective than Scalar Quantization
even when the source output values are not correlated.
Difference between Vector and Scalar
Quantization contd…
• ⇢ 5: In Scalar Quantization, in one dimension, the quantization regions
are restricted to be intervals (in higher dimensions the output points are
restricted to rectangular grids), and the only parameter we can manipulate is
the size of the interval. In Vector Quantization, when we divide the input into
vectors of some length n, the quantization regions are no longer restricted to
be rectangles or squares; we have the freedom to divide the range of the
inputs in an infinite number of ways.
• ⇢ 6: In Scalar Quantization, the granular error is affected only by the size of
the quantization interval, while in Vector Quantization the granular error is
affected by both the shape and the size of the quantization region.
• ⇢ 7: Vector Quantization provides more flexibility towards modifications
than Scalar Quantization, and this flexibility increases with increasing
dimension.
Difference between Vector and Scalar
Quantization contd…
• ⇢ 8: Vector Quantization gives improved performance when there is
sample-to-sample dependence in the input, while Scalar Quantization
does not.
• ⇢ 9: Vector Quantization gives improved performance even when there is
no sample-to-sample dependence in the input, while Scalar Quantization
does not.
• ⇢ 10: Describing the decision boundaries between reconstruction
levels is easier in Scalar Quantization than in Vector Quantization.
Advantages of Vector Quantization
over Scalar Quantization
• Vector Quantization provides flexibility in choosing a
multidimensional quantizer cell shape and in choosing a
desired codebook size.
• An advantage of VQ over SQ is that fractional values of
resolution can be achieved, which is very important for low-bit-rate
applications where low resolution is sufficient.
• For a given rate, VQ results in lower distortion than SQ.
• VQ can utilize the memory of the source better than SQ.
Linde-Buzo-Gray Algorithm
• The need for multi-dimensional integration in the
design of a vector quantizer was a challenging problem
in the early days.
• The main concept is to divide the training vectors into
groups, find the most representative vector for each group,
and then gather these representative vectors to form a
codebook. The inputs are no longer scalars in the LBG algorithm.
LBG Algorithm
1. Divide the image into blocks; each block can then be viewed as a k-dimensional
vector.
2. Arbitrarily choose an initial codebook and set these initial codewords as
centroids. The remaining vectors are then grouped: vectors are in the same group
when they have the same nearest centroid.
3. Find a new centroid for every group to obtain a new codebook.
Repeat steps 2 and 3 until the centroids of all the groups converge.
• Thus at every iteration the codebook becomes progressively better. This
process is continued until there is no change in the overall distortion.
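The iteration above can be sketched compactly, assuming squared-error distortion; the training data, initial codebook, and stopping constants are illustrative:

```python
import numpy as np

def lbg(training, codebook, n_iter=20, tol=1e-6):
    """Sketch of the LBG loop: assign each training vector to its nearest
    codeword, then replace every codeword by the centroid of the vectors
    assigned to it, until the overall distortion stops changing."""
    codebook = np.array(codebook, dtype=float)  # work on a copy
    prev_dist = np.inf
    for _ in range(n_iter):
        # Step 2: nearest-centroid assignment
        d = ((training[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        dist = d[np.arange(len(training)), labels].mean()
        # Step 3: recompute the centroid of every group
        for i in range(len(codebook)):
            members = training[labels == i]
            if len(members):
                codebook[i] = members.mean(axis=0)
        if prev_dist - dist < tol:  # overall distortion stopped improving
            break
        prev_dist = dist
    return codebook

training = np.array([[0.0], [0.2], [3.8], [4.0]])
lbg(training, [[0.0], [1.0]])  # converges to the group centroids [0.1] and [3.9]
```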
Initializing the LBG Algorithm
• The important thing to consider is a good set of initial
quantization points: the LBG algorithm only guarantees that the
distortion from one iteration to the next will not increase, not that it
converges to a good codebook.
• The performance of the LBG algorithm depends heavily on the initial
codebook.
• We will use the splitting technique to design the initial codebook.
Other initialization approaches include:
1. Random selection technique
2. Pairwise Nearest Neighbor (PNN) method
Empty Cell Problem
• What do we do if one of the reconstruction (quantization) regions in
some iteration is empty?
• There might be no points that are closer to a given reconstruction
point than to any other reconstruction point.
• This is a problem because, in order to update an output point, we need to
take the average of the input vectors assigned to it.
• In this case we would end up with an output point that is never used.
• A common solution to the empty cell problem is to remove an output
point that has no inputs associated with it and replace it with a point
from the quantization region with the most training points.
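One possible sketch of this remedy follows. Placing the replacement at the busiest region's centroid plus a tiny perturbation is an illustrative choice; the slide only requires some point from that region:

```python
import numpy as np

def fix_empty_cell(codebook, training, labels):
    """Move every unused codeword into the most populated quantization
    region, as a perturbed copy of that region's centroid."""
    codebook = np.array(codebook, dtype=float)
    counts = np.bincount(labels, minlength=len(codebook))
    for i in np.where(counts == 0)[0]:
        busiest = counts.argmax()
        donor = training[labels == busiest]
        codebook[i] = donor.mean(axis=0) + 1e-3  # perturbed copy
    return codebook

training = np.array([[0.0], [0.1], [0.2], [0.3]])
labels = np.array([0, 0, 0, 0])            # codeword 1 is never used
fix_empty_cell([[0.15], [9.0]], training, labels)
```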
Tree Structure Vector Quantization
• Another fast codebook design technique, tree-structured VQ,
was presented by Buzo.
• The number of operations can be reduced by enforcing a certain
structure on the codebook.
• One such possibility is using a tree structure, which turns the
codebook into a tree codebook; the method is called binary search
clustering.
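The resulting binary search can be sketched as follows, assuming a hypothetical nested-tuple tree layout (test vectors and leaf indices below are made-up):

```python
import numpy as np

def tsvq_encode(x, tree):
    """Binary tree search sketch: at each internal node compare x with two
    test vectors and descend toward the closer one; a leaf holds the final
    codeword index. `tree` is a nested tuple
    (test_vec_left, subtree_left, test_vec_right, subtree_right),
    with integer leaves."""
    while not isinstance(tree, int):
        left_vec, left_sub, right_vec, right_sub = tree
        if np.sum((x - left_vec) ** 2) <= np.sum((x - right_vec) ** 2):
            tree = left_sub
        else:
            tree = right_sub
    return tree

# Depth-2 tree over the scalar codewords {0: -3, 1: -1, 2: 1, 3: 3}
tree = (np.array([-2.0]), (np.array([-3.0]), 0, np.array([-1.0]), 1),
        np.array([2.0]),  (np.array([1.0]), 2, np.array([3.0]), 3))
tsvq_encode(np.array([0.8]), tree)  # two comparisons instead of four
```

With N codewords, a full search needs N distance computations, while this tree search needs only about 2·log2(N).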
Tree Structure Vector Quantization
• The disadvantage of tree search is that we might not end up with the reconstruction
point that is closest to the input, so the distortion will be a little higher compared to a
full-search quantizer.
• The storage requirement will also be larger, since we have to store all the test vectors too.
How to design TSVQ
1. Obtain the average of all the training vectors, perturb it to obtain a
second vector, and use these vectors to form a two-level VQ.
2. Call these vectors v0 and v1, and call the groups of training-set vectors that would
be quantized to each of them g0 and g1.
3. Perturb v0 and v1 to get the initial vectors for a four-level VQ.
4. Use g0 to design a two-level VQ and g1 to design another two-
level VQ.
5. Label the resulting vectors v00, v01, v10, v11.
6. Split g0 using v00 and v01 into two groups g00 and g01.
7. Split g1 using v10 and v11 into two groups g10 and g11.
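The repeated perturbation in steps 1 and 3 can be sketched as follows (the perturbation value eps and the training data are illustrative choices):

```python
import numpy as np

def split(codebook, eps=0.01):
    """Perturbation step sketch: each codeword v spawns the pair
    (v - eps, v + eps), doubling the codebook size."""
    codebook = np.asarray(codebook, dtype=float)
    return np.concatenate([codebook - eps, codebook + eps])

training = np.array([[0.0], [1.0], [4.0], [5.0]])
v0 = training.mean(axis=0, keepdims=True)  # step 1: average of all vectors
two_level = split(v0)          # initial pair for the two-level VQ
four_level = split(two_level)  # step 3: initial vectors for the four-level VQ
```

In the full design, the LBG iteration is run on each group after every split before splitting again.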
Pruned Tree- structured Vector Quantizer
• Once we have developed a tree-structured codebook, we can
improve its rate-distortion performance by pruning: removing
carefully selected subgroups reduces the size of the
codebook and thus the rate.
• Pruning may increase the distortion, so the main objective is
to remove those groups that result in the best
trade-off between rate and distortion.
• Prune the tree by finding the subtree T that minimizes λT:
λT = (change in distortion if subtree T is pruned) / (change in rate if subtree T is pruned)
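A toy computation of this criterion (the subtree names and the distortion/rate changes below are made up): the subtree whose pruning costs the least distortion per bit saved is pruned first.

```python
# Candidate subtrees with the distortion increase and rate decrease
# that pruning each of them would cause (placeholder numbers).
candidates = {
    "T1": {"delta_distortion": 0.8, "delta_rate": 0.50},
    "T2": {"delta_distortion": 0.3, "delta_rate": 0.25},
}
lam = {name: c["delta_distortion"] / c["delta_rate"]
       for name, c in candidates.items()}
best = min(lam, key=lam.get)  # prune the subtree with the smallest lambda
```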
Structured Vector Quantization
• Several structured codes impose a structure that allows for reduced implementation
complexity and also constrains the codewords or the codeword search.
• Let L be the dimension of the VQ. If R is the bit rate, then L·2^(RL) scalars need to be stored,
and L·2^(RL) scalar distortion calculations are required.
• The solution is to introduce some form of structure into the codebook and also into the quantization
process.
• The disadvantage of structured VQ is an inevitable loss in rate-distortion performance.
• Different types of structured vector quantizers are:
1. Lattice quantizers
2. Tree-structured codes
3. Multistage codes
4. Product codes: gain/shape codes
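The storage and search figures quoted above grow exponentially with the dimension, which is easy to verify (the rates and dimensions below are illustrative):

```python
def vq_cost(rate, dim):
    """Unconstrained VQ cost: L * 2**(R*L) scalars stored, and the same
    number of scalar distortion computations per input vector."""
    return dim * 2 ** (rate * dim)

vq_cost(1, 8)   # 8-dimensional VQ at 1 bit/sample: 2048 scalars
vq_cost(1, 16)  # doubling the dimension: 1048576 scalars
```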
Lattice Vector Quantizer
• VQ codebooks designed using the LBG
algorithm complicate the
quantization process and have no
visible structure.
• An alternative is lattice-point
quantization, since we can use it with a
fast encoding algorithm.
• For a bit rate of n bits/sample and
spatial dimension v, the number of
codebook vectors, or equivalently of
lattice points used, is 2^(n·v).
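As a sketch of why lattice encoding is fast: for the simplest lattice, the integer lattice Z^v, no codebook search is needed at all; encoding is just componentwise rounding (the step size and input below are illustrative):

```python
import numpy as np

def lattice_quantize(x, step=1.0):
    """Quantize x to the scaled integer lattice step * Z^v: encoding is
    componentwise rounding, with no codeword search."""
    return step * np.round(x / step)

lattice_quantize(np.array([0.4, 1.7, -2.2]))  # nearest Z^3 lattice point
```

Other lattices (e.g. the D and A families) have similarly fast, search-free encoding rules.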
How are tree structured vector quantizers
better?
• Tree-structured vector quantization (TSVQ) reduces the complexity
by imposing a hierarchical structure on the partitioning. We study the
design of optimal tree-structured vector quantizers that minimize the
expected distortion subject to cost functions related to storage cost,
encoding rate, or quantization time.
Thanks!!
Dr. Piyush Charan
Assistant Professor,
Department of ECE,
Integral University, Lucknow
Email: er.piyush.charan@gmail.com, piyush@iul.ac.in
Contenu connexe

Tendances

Tendances (20)

Unit 3 Arithmetic Coding
Unit 3 Arithmetic CodingUnit 3 Arithmetic Coding
Unit 3 Arithmetic Coding
 
Classification using back propagation algorithm
Classification using back propagation algorithmClassification using back propagation algorithm
Classification using back propagation algorithm
 
Data compression techniques
Data compression techniquesData compression techniques
Data compression techniques
 
DCDR Unit-2 Mathematical Preliminaries for Lossless Compression Models
DCDR Unit-2 Mathematical Preliminaries for Lossless Compression ModelsDCDR Unit-2 Mathematical Preliminaries for Lossless Compression Models
DCDR Unit-2 Mathematical Preliminaries for Lossless Compression Models
 
Introduction Data Compression/ Data compression, modelling and coding,Image C...
Introduction Data Compression/ Data compression, modelling and coding,Image C...Introduction Data Compression/ Data compression, modelling and coding,Image C...
Introduction Data Compression/ Data compression, modelling and coding,Image C...
 
04 Multi-layer Feedforward Networks
04 Multi-layer Feedforward Networks04 Multi-layer Feedforward Networks
04 Multi-layer Feedforward Networks
 
Data compression
Data compression Data compression
Data compression
 
Solution(1)
Solution(1)Solution(1)
Solution(1)
 
Presentation on K-Means Clustering
Presentation on K-Means ClusteringPresentation on K-Means Clustering
Presentation on K-Means Clustering
 
Routing algorithm
Routing algorithmRouting algorithm
Routing algorithm
 
Line clipping
Line clippingLine clipping
Line clipping
 
Distributed design alternatives
Distributed design alternativesDistributed design alternatives
Distributed design alternatives
 
Parallel Processing Concepts
Parallel Processing Concepts Parallel Processing Concepts
Parallel Processing Concepts
 
Csc446: Pattern Recognition
Csc446: Pattern Recognition Csc446: Pattern Recognition
Csc446: Pattern Recognition
 
Tree pruning
 Tree pruning Tree pruning
Tree pruning
 
Genetic algorithms in Data Mining
Genetic algorithms in Data MiningGenetic algorithms in Data Mining
Genetic algorithms in Data Mining
 
DCDR Unit-3 Huffman Coding
DCDR Unit-3 Huffman CodingDCDR Unit-3 Huffman Coding
DCDR Unit-3 Huffman Coding
 
05 Clustering in Data Mining
05 Clustering in Data Mining05 Clustering in Data Mining
05 Clustering in Data Mining
 
Digital Image Processing - Image Compression
Digital Image Processing - Image CompressionDigital Image Processing - Image Compression
Digital Image Processing - Image Compression
 
UNIT-4.pptx
UNIT-4.pptxUNIT-4.pptx
UNIT-4.pptx
 

Similaire à Unit 5 Quantization

final year ieee pojects in pondicherry,bulk ieee projects ,bulk 2015-16 i...
  final  year ieee pojects in pondicherry,bulk ieee projects ,bulk  2015-16 i...  final  year ieee pojects in pondicherry,bulk ieee projects ,bulk  2015-16 i...
final year ieee pojects in pondicherry,bulk ieee projects ,bulk 2015-16 i...
nexgentech
 
Neural Networks for Machine Learning and Deep Learning
Neural Networks for Machine Learning and Deep LearningNeural Networks for Machine Learning and Deep Learning
Neural Networks for Machine Learning and Deep Learning
comifa7406
 

Similaire à Unit 5 Quantization (20)

ResNeSt: Split-Attention Networks
ResNeSt: Split-Attention NetworksResNeSt: Split-Attention Networks
ResNeSt: Split-Attention Networks
 
AN EFFICIENT CODEBOOK INITIALIZATION APPROACH FOR LBG ALGORITHM
AN EFFICIENT CODEBOOK INITIALIZATION APPROACH FOR LBG ALGORITHMAN EFFICIENT CODEBOOK INITIALIZATION APPROACH FOR LBG ALGORITHM
AN EFFICIENT CODEBOOK INITIALIZATION APPROACH FOR LBG ALGORITHM
 
A systematic image compression in the combination of linear vector quantisati...
A systematic image compression in the combination of linear vector quantisati...A systematic image compression in the combination of linear vector quantisati...
A systematic image compression in the combination of linear vector quantisati...
 
Performance Comparison of K-means Codebook Optimization using different Clust...
Performance Comparison of K-means Codebook Optimization using different Clust...Performance Comparison of K-means Codebook Optimization using different Clust...
Performance Comparison of K-means Codebook Optimization using different Clust...
 
Towards better analysis of deep convolutional neural networks
Towards better analysis of deep convolutional neural networksTowards better analysis of deep convolutional neural networks
Towards better analysis of deep convolutional neural networks
 
Automatic Grading of Handwritten Answers
Automatic Grading of Handwritten AnswersAutomatic Grading of Handwritten Answers
Automatic Grading of Handwritten Answers
 
BULK IEEE PROJECTS IN MATLAB ,BULK IEEE PROJECTS, IEEE 2015-16 MATLAB PROJEC...
 BULK IEEE PROJECTS IN MATLAB ,BULK IEEE PROJECTS, IEEE 2015-16 MATLAB PROJEC... BULK IEEE PROJECTS IN MATLAB ,BULK IEEE PROJECTS, IEEE 2015-16 MATLAB PROJEC...
BULK IEEE PROJECTS IN MATLAB ,BULK IEEE PROJECTS, IEEE 2015-16 MATLAB PROJEC...
 
final year ieee pojects in pondicherry,bulk ieee projects ,bulk 2015-16 i...
  final  year ieee pojects in pondicherry,bulk ieee projects ,bulk  2015-16 i...  final  year ieee pojects in pondicherry,bulk ieee projects ,bulk  2015-16 i...
final year ieee pojects in pondicherry,bulk ieee projects ,bulk 2015-16 i...
 
Week 11: Programming for Data Analysis
Week 11: Programming for Data AnalysisWeek 11: Programming for Data Analysis
Week 11: Programming for Data Analysis
 
Neural Networks for Machine Learning and Deep Learning
Neural Networks for Machine Learning and Deep LearningNeural Networks for Machine Learning and Deep Learning
Neural Networks for Machine Learning and Deep Learning
 
Unit 1 Introduction to Data Compression
Unit 1 Introduction to Data CompressionUnit 1 Introduction to Data Compression
Unit 1 Introduction to Data Compression
 
Unit 1 Introduction to Data Compression
Unit 1 Introduction to Data CompressionUnit 1 Introduction to Data Compression
Unit 1 Introduction to Data Compression
 
Unit 2 Lecture notes on Huffman coding
Unit 2 Lecture notes on Huffman codingUnit 2 Lecture notes on Huffman coding
Unit 2 Lecture notes on Huffman coding
 
Researc-paper_Project Work Phase-1 PPT (21CS09).pptx
Researc-paper_Project Work Phase-1 PPT (21CS09).pptxResearc-paper_Project Work Phase-1 PPT (21CS09).pptx
Researc-paper_Project Work Phase-1 PPT (21CS09).pptx
 
Types of Machine Learnig Algorithms(CART, ID3)
Types of Machine Learnig Algorithms(CART, ID3)Types of Machine Learnig Algorithms(CART, ID3)
Types of Machine Learnig Algorithms(CART, ID3)
 
Generalization of linear and non-linear support vector machine in multiple fi...
Generalization of linear and non-linear support vector machine in multiple fi...Generalization of linear and non-linear support vector machine in multiple fi...
Generalization of linear and non-linear support vector machine in multiple fi...
 
Machine learning for Data Science
Machine learning for Data ScienceMachine learning for Data Science
Machine learning for Data Science
 
Deep learning Tutorial - Part II
Deep learning Tutorial - Part IIDeep learning Tutorial - Part II
Deep learning Tutorial - Part II
 
Comparison of Learning Algorithms for Handwritten Digit Recognition
Comparison of Learning Algorithms for Handwritten Digit RecognitionComparison of Learning Algorithms for Handwritten Digit Recognition
Comparison of Learning Algorithms for Handwritten Digit Recognition
 
“Introduction to DNN Model Compression Techniques,” a Presentation from Xailient
“Introduction to DNN Model Compression Techniques,” a Presentation from Xailient“Introduction to DNN Model Compression Techniques,” a Presentation from Xailient
“Introduction to DNN Model Compression Techniques,” a Presentation from Xailient
 

Plus de Dr Piyush Charan

Plus de Dr Piyush Charan (20)

Unit 1- Intro to Wireless Standards.pdf
Unit 1- Intro to Wireless Standards.pdfUnit 1- Intro to Wireless Standards.pdf
Unit 1- Intro to Wireless Standards.pdf
 
Unit 1 Solar Collectors
Unit 1 Solar CollectorsUnit 1 Solar Collectors
Unit 1 Solar Collectors
 
Unit 4 Lossy Coding Preliminaries
Unit 4 Lossy Coding PreliminariesUnit 4 Lossy Coding Preliminaries
Unit 4 Lossy Coding Preliminaries
 
Unit 3 Geothermal Energy
Unit 3 Geothermal EnergyUnit 3 Geothermal Energy
Unit 3 Geothermal Energy
 
Unit 2: Programming Language Tools
Unit 2:  Programming Language ToolsUnit 2:  Programming Language Tools
Unit 2: Programming Language Tools
 
Unit 4 Arrays
Unit 4 ArraysUnit 4 Arrays
Unit 4 Arrays
 
Unit 3 Lecture Notes on Programming
Unit 3 Lecture Notes on ProgrammingUnit 3 Lecture Notes on Programming
Unit 3 Lecture Notes on Programming
 
Unit 3 introduction to programming
Unit 3 introduction to programmingUnit 3 introduction to programming
Unit 3 introduction to programming
 
Forensics and wireless body area networks
Forensics and wireless body area networksForensics and wireless body area networks
Forensics and wireless body area networks
 
Final PhD Defense Presentation
Final PhD Defense PresentationFinal PhD Defense Presentation
Final PhD Defense Presentation
 
Unit 3 Dictionary based Compression Techniques
Unit 3 Dictionary based Compression TechniquesUnit 3 Dictionary based Compression Techniques
Unit 3 Dictionary based Compression Techniques
 
Unit 1 Introduction to Non-Conventional Energy Resources
Unit 1 Introduction to Non-Conventional Energy ResourcesUnit 1 Introduction to Non-Conventional Energy Resources
Unit 1 Introduction to Non-Conventional Energy Resources
 
Unit 5-Operational Amplifiers and Electronic Measurement Devices
Unit 5-Operational Amplifiers and Electronic Measurement DevicesUnit 5-Operational Amplifiers and Electronic Measurement Devices
Unit 5-Operational Amplifiers and Electronic Measurement Devices
 
Unit 4 Switching Theory and Logic Gates
Unit 4 Switching Theory and Logic GatesUnit 4 Switching Theory and Logic Gates
Unit 4 Switching Theory and Logic Gates
 
Unit 1 Numerical Problems on PN Junction Diode
Unit 1 Numerical Problems on PN Junction DiodeUnit 1 Numerical Problems on PN Junction Diode
Unit 1 Numerical Problems on PN Junction Diode
 
Unit 4_Part 1_Number System
Unit 4_Part 1_Number SystemUnit 4_Part 1_Number System
Unit 4_Part 1_Number System
 
Unit 5 Global Issues- Early life of Prophet Muhammad
Unit 5 Global Issues- Early life of Prophet MuhammadUnit 5 Global Issues- Early life of Prophet Muhammad
Unit 5 Global Issues- Early life of Prophet Muhammad
 
Unit 4 Engineering Ethics
Unit 4 Engineering EthicsUnit 4 Engineering Ethics
Unit 4 Engineering Ethics
 
Unit 3 Professional Responsibility
Unit 3 Professional ResponsibilityUnit 3 Professional Responsibility
Unit 3 Professional Responsibility
 
Unit 5 oscillators and voltage regulators
Unit 5 oscillators and voltage regulatorsUnit 5 oscillators and voltage regulators
Unit 5 oscillators and voltage regulators
 

Dernier

Top Rated Call Girls In chittoor 📱 {7001035870} VIP Escorts chittoor
Top Rated Call Girls In chittoor 📱 {7001035870} VIP Escorts chittoorTop Rated Call Girls In chittoor 📱 {7001035870} VIP Escorts chittoor
Top Rated Call Girls In chittoor 📱 {7001035870} VIP Escorts chittoor
dharasingh5698
 
Integrated Test Rig For HTFE-25 - Neometrix
Integrated Test Rig For HTFE-25 - NeometrixIntegrated Test Rig For HTFE-25 - Neometrix
Integrated Test Rig For HTFE-25 - Neometrix
Neometrix_Engineering_Pvt_Ltd
 
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
ssuser89054b
 
VIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 BookingVIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 Booking
dharasingh5698
 
Cara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak Hamil
Cara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak HamilCara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak Hamil
Cara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak Hamil
Cara Menggugurkan Kandungan 087776558899
 
Call Girls in Ramesh Nagar Delhi 💯 Call Us 🔝9953056974 🔝 Escort Service
Call Girls in Ramesh Nagar Delhi 💯 Call Us 🔝9953056974 🔝 Escort ServiceCall Girls in Ramesh Nagar Delhi 💯 Call Us 🔝9953056974 🔝 Escort Service
Call Girls in Ramesh Nagar Delhi 💯 Call Us 🔝9953056974 🔝 Escort Service
9953056974 Low Rate Call Girls In Saket, Delhi NCR
 

Dernier (20)

Employee leave management system project.
Employee leave management system project.Employee leave management system project.
Employee leave management system project.
 
Top Rated Call Girls In chittoor 📱 {7001035870} VIP Escorts chittoor
Top Rated Call Girls In chittoor 📱 {7001035870} VIP Escorts chittoorTop Rated Call Girls In chittoor 📱 {7001035870} VIP Escorts chittoor
Top Rated Call Girls In chittoor 📱 {7001035870} VIP Escorts chittoor
 
Integrated Test Rig For HTFE-25 - Neometrix
Integrated Test Rig For HTFE-25 - NeometrixIntegrated Test Rig For HTFE-25 - Neometrix
Integrated Test Rig For HTFE-25 - Neometrix
 
(INDIRA) Call Girl Bhosari Call Now 8617697112 Bhosari Escorts 24x7
(INDIRA) Call Girl Bhosari Call Now 8617697112 Bhosari Escorts 24x7(INDIRA) Call Girl Bhosari Call Now 8617697112 Bhosari Escorts 24x7
(INDIRA) Call Girl Bhosari Call Now 8617697112 Bhosari Escorts 24x7
 
Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...
Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...
Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...
 
data_management_and _data_science_cheat_sheet.pdf
data_management_and _data_science_cheat_sheet.pdfdata_management_and _data_science_cheat_sheet.pdf
data_management_and _data_science_cheat_sheet.pdf
 
Work-Permit-Receiver-in-Saudi-Aramco.pptx
Work-Permit-Receiver-in-Saudi-Aramco.pptxWork-Permit-Receiver-in-Saudi-Aramco.pptx
Work-Permit-Receiver-in-Saudi-Aramco.pptx
 
Bhosari ( Call Girls ) Pune 6297143586 Hot Model With Sexy Bhabi Ready For ...
Bhosari ( Call Girls ) Pune  6297143586  Hot Model With Sexy Bhabi Ready For ...Bhosari ( Call Girls ) Pune  6297143586  Hot Model With Sexy Bhabi Ready For ...
Bhosari ( Call Girls ) Pune 6297143586 Hot Model With Sexy Bhabi Ready For ...
 
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
 
A Study of Urban Area Plan for Pabna Municipality
A Study of Urban Area Plan for Pabna MunicipalityA Study of Urban Area Plan for Pabna Municipality
A Study of Urban Area Plan for Pabna Municipality
 
Unit 1 - Soil Classification and Compaction.pdf
Unit 1 - Soil Classification and Compaction.pdfUnit 1 - Soil Classification and Compaction.pdf
Unit 1 - Soil Classification and Compaction.pdf
 
VIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 BookingVIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 Booking
 
Navigating Complexity: The Role of Trusted Partners and VIAS3D in Dassault Sy...
Navigating Complexity: The Role of Trusted Partners and VIAS3D in Dassault Sy...Navigating Complexity: The Role of Trusted Partners and VIAS3D in Dassault Sy...
Navigating Complexity: The Role of Trusted Partners and VIAS3D in Dassault Sy...
 
Cara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak Hamil
Cara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak HamilCara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak Hamil
Cara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak Hamil
 
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdf
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdfONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdf
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdf
 
DC MACHINE-Motoring and generation, Armature circuit equation
DC MACHINE-Motoring and generation, Armature circuit equationDC MACHINE-Motoring and generation, Armature circuit equation
DC MACHINE-Motoring and generation, Armature circuit equation
 
Double Revolving field theory-how the rotor develops torque
Double Revolving field theory-how the rotor develops torqueDouble Revolving field theory-how the rotor develops torque
Double Revolving field theory-how the rotor develops torque
 
Minimum and Maximum Modes of microprocessor 8086
Minimum and Maximum Modes of microprocessor 8086Minimum and Maximum Modes of microprocessor 8086
Minimum and Maximum Modes of microprocessor 8086
 
Call Girls in Ramesh Nagar Delhi 💯 Call Us 🔝9953056974 🔝 Escort Service
Call Girls in Ramesh Nagar Delhi 💯 Call Us 🔝9953056974 🔝 Escort ServiceCall Girls in Ramesh Nagar Delhi 💯 Call Us 🔝9953056974 🔝 Escort Service
Call Girls in Ramesh Nagar Delhi 💯 Call Us 🔝9953056974 🔝 Escort Service
 
Water Industry Process Automation & Control Monthly - April 2024
Water Industry Process Automation & Control Monthly - April 2024Water Industry Process Automation & Control Monthly - April 2024
Water Industry Process Automation & Control Monthly - April 2024
 

Unit 5 Quantization

  • 1. Lecture Notes on Quantization for Open Educational Resource on Data Compression(CA209) by Dr. Piyush Charan Assistant Professor Department of Electronics and Communication Engg. Integral University, Lucknow This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
  • 2. Unit 5-Syllabus • Quantization – Vector Quantization, – Advantages of Vector Quantization over Scalar Quantization, – The Linde-BuzoGray Algorithm, – Tree-structured Vector Quantizers, – Structured Vector Quantizers 02 February 2021 Dr. Piyush Charan, Dept. of ECE, Integral University, Lucknow 3
  • 3. Introduction • Quantization is one of the efficient tool for lossy compression. • It can reduce the bits required to represent the source. • In lossy compression application, we represent each source output using one of a small number of codewords. • The number of distinct source output values is generally much larger than the number of codewords available to represent them. • The process of representing the number of distinct output values to a much smaller set is called quantization. 02 February 2021 Dr. Piyush Charan, Dept. of ECE, Integral University, Lucknow 4
  • 4. Introduction contd… • The set of input and output of a quantizer can be scalars or vectors. 02 February 2021 Dr. Piyush Charan, Dept. of ECE, Integral University, Lucknow 5
  • 5. Types of Quantization • Scalar Quantization: The most common type of quantization is scalar quantization. Scalar quantization, typically denoted as y = Q(x), is the process of using a quantization function Q(x) to map an input value x to a scalar output value y. • Vector Quantization: A vector quantizer maps k-dimensional vectors in the vector space R^k into a finite set of vectors Y = {Y_i : i = 1, 2, …, N}. Each vector Y_i is called a code vector or a codeword, and the set of all codewords is called a codebook. 02 February 2021 Dr. Piyush Charan, Dept. of ECE, Integral University, Lucknow 6
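To make the scalar case concrete, a minimal Python sketch of a uniform scalar quantizer y = Q(x); the step size `delta` and the mid-rise reconstruction rule are illustrative assumptions, not taken from the slides:

```python
import math

def scalar_quantize(x, delta=0.5):
    """Uniform scalar quantizer y = Q(x): map x to the midpoint
    of the interval of width delta into which it falls."""
    index = math.floor(x / delta)      # which interval x falls in
    return index * delta + delta / 2   # reconstruction value (midpoint)

# Each input scalar maps to one of a discrete set of output scalars.
print(scalar_quantize(0.74))   # -> 0.75
print(scalar_quantize(-0.1))   # -> -0.25
```

Every input in [0.5, 1.0) maps to the same output 0.75, which is exactly the many-to-few mapping described above.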
  • 6. Vector Quantization • VQ is a lossy data compression method based on the principle of block coding: it quantizes blocks of data instead of individual signal samples. • VQ exploits the correlation existing between neighboring signal samples by quantizing them together. • VQ is one of the most widely used and efficient techniques for image compression. • Over the last few decades in the field of multimedia data compression, VQ has received great attention because it has a simple decoding structure and can provide a high compression ratio. 02 February 2021 Dr. Piyush Charan, Dept. of ECE, Integral University, Lucknow 7
  • 7. Vector Quantization contd… • A VQ-based image compression technique has three major steps, namely: 1. Codebook design 2. VQ encoding process 3. VQ decoding process • In VQ-based image compression, the image is first decomposed into non-overlapping sub-blocks, and each sub-block is converted into a one-dimensional vector termed a training vector. • From the training vectors, a set of representative vectors is selected to represent the entire set of training vectors. • The set of representative training vectors is called a codebook, and each representative training vector is called a codeword or code vector. 02 February 2021 Dr. Piyush Charan, Dept. of ECE, Integral University, Lucknow 8
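The VQ encoding step described above — finding, for each training vector, the nearest codeword in the codebook — can be sketched as follows; the toy codebook and the squared-Euclidean distance measure are illustrative assumptions:

```python
def vq_encode(vector, codebook):
    """Return the index of the nearest codeword (squared Euclidean distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: dist2(vector, codebook[i]))

# A toy codebook of 2-dimensional code vectors.
codebook = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
print(vq_encode((0.9, 0.8), codebook))  # -> 1 (closest to (1.0, 1.0))
```

The decoder only needs the transmitted index and a copy of the codebook, which is why the VQ decoding structure is so simple.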
  • 8. Vector Quantization contd… • The goal of VQ codebook generation is to find an optimal codebook that yields the lowest possible distortion when compared with all other codebooks of the same size. • The performance of a VQ-based image compression technique depends upon the constructed codebook. • The search complexity increases with the number of vectors in the codebook; to minimize the search complexity, tree-search vector quantization schemes were introduced. • The number of code vectors N depends on two parameters, the rate R and the dimension L. • The number of code vectors is calculated using the following formula: N = 2^(R×L), where R → rate in bits/pixel and L → dimension. 02 February 2021 Dr. Piyush Charan, Dept. of ECE, Integral University, Lucknow 9
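As a quick check of the formula N = 2^(R×L), a one-line Python sketch; the example block size and rate are illustrative assumptions:

```python
def num_code_vectors(rate_bits_per_pixel, dimension):
    """N = 2**(R * L): codebook size for rate R (bits/pixel) and dimension L."""
    return 2 ** (rate_bits_per_pixel * dimension)

# e.g. 4x4 image blocks (L = 16) coded at R = 0.5 bit/pixel need 2**8 codewords.
print(num_code_vectors(0.5, 16))  # -> 256.0
```

This also shows why full-search VQ gets expensive quickly: N grows exponentially in both rate and dimension, which motivates the tree-search schemes mentioned above.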
  • 9. Vector Quantization Process 02 February 2021 Dr. Piyush Charan, Dept. of ECE, Integral University, Lucknow 10
  • 10. Difference between Vector and Scalar Quantization 02 February 2021 Dr. Piyush Charan, Dept. of ECE, Integral University, Lucknow 11 • ⇢ 1: Vector Quantization can lower the average distortion with the number of reconstruction levels held constant, while Scalar Quantization cannot. • ⇢ 2: Vector Quantization can reduce the number of reconstruction levels when distortion is held constant, while Scalar Quantization cannot. • ⇢ 3: The most significant way Vector Quantization improves performance over Scalar Quantization is by exploiting the statistical dependence among the scalars in a block. • ⇢ 4: Vector Quantization is also more effective than Scalar Quantization even when the source output values are not correlated.
  • 11. Difference between Vector and Scalar Quantization contd… 02 February 2021 Dr. Piyush Charan, Dept. of ECE, Integral University, Lucknow 12 • ⇢ 5: In Scalar Quantization (one dimension), the quantization regions are restricted to be intervals (i.e., the output points are restricted to rectangular grids), and the only parameter we can manipulate is the size of the interval. In Vector Quantization, when we divide the input into vectors of some length n, the quantization regions are no longer restricted to be rectangles or squares; we have the freedom to divide the range of the inputs in an infinite number of ways. • ⇢ 6: In Scalar Quantization, the granular error is affected by the size of the quantization interval only, while in Vector Quantization the granular error is affected by both the shape and the size of the quantization region. • ⇢ 7: Vector Quantization provides more flexibility towards modifications than Scalar Quantization, and this flexibility increases with increasing dimension.
  • 12. Difference between Vector and Scalar Quantization contd… 02 February 2021 Dr. Piyush Charan, Dept. of ECE, Integral University, Lucknow 13 • ⇢ 8: Vector Quantization gives improved performance when there is sample-to-sample dependence in the input, while Scalar Quantization does not. • ⇢ 9: Vector Quantization gives improved performance even when there is no sample-to-sample dependence in the input, while Scalar Quantization does not. • ⇢ 10: Describing the decision boundaries between reconstruction levels is easier in Scalar Quantization than in Vector Quantization.
  • 13. Advantages of Vector Quantization over Scalar Quantization • Vector Quantization provides flexibility in choosing the multidimensional quantizer cell shape and in choosing a desired codebook size. • An advantage of VQ over SQ is that fractional values of resolution can be achieved, which is very important for low-bit-rate applications where a low resolution is sufficient. • For a given rate, VQ results in a lower distortion than SQ. • VQ can utilize the memory of the source better than SQ. 02 February 2021 Dr. Piyush Charan, Dept. of ECE, Integral University, Lucknow 14
  • 14. Linde-Buzo-Gray Algorithm • The need for multi-dimensional integration in the design of a vector quantizer was a challenging problem in the early days. • The main concept is to divide the training vectors into groups, find the most representative vector of each group, and then gather these representative vectors to form a codebook. In the LBG algorithm, the inputs are no longer scalars. 02 February 2021 Dr. Piyush Charan, Dept. of ECE, Integral University, Lucknow 15
  • 15. LBG Algorithm 1. Divide the image into blocks; each block can then be viewed as a k-dimensional vector. 2. Arbitrarily choose an initial codebook and set its code vectors as centroids; group all the other vectors, where vectors are in the same group when they have the same nearest centroid. 3. Find a new centroid for every group to get a new codebook. Repeat steps 2 and 3 until the centroids of all groups converge. • Thus at every iteration the codebook becomes progressively better. This process is continued until there is no change in the overall distortion. 02 February 2021 Dr. Piyush Charan, Dept. of ECE, Integral University, Lucknow 16
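The iteration above (nearest-centroid grouping, then centroid update, until the distortion stops changing) can be sketched compactly in Python; the toy data, convergence tolerance, and iteration cap are illustrative assumptions:

```python
def lbg(vectors, codebook, tol=1e-6, max_iter=100):
    """LBG loop: assign each vector to its nearest codeword, recompute
    centroids, and repeat until the overall distortion stops changing."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    prev = float("inf")
    for _ in range(max_iter):
        # Step 2: group each vector with its nearest centroid.
        groups = [[] for _ in codebook]
        distortion = 0.0
        for v in vectors:
            i = min(range(len(codebook)), key=lambda j: dist2(v, codebook[j]))
            groups[i].append(v)
            distortion += dist2(v, codebook[i])
        # Step 3: the centroid of each non-empty group becomes the new codeword.
        codebook = [tuple(sum(c) / len(g) for c in zip(*g)) if g else cw
                    for g, cw in zip(groups, codebook)]
        if abs(prev - distortion) < tol:   # no change in overall distortion
            break
        prev = distortion
    return codebook

data = [(0.0, 0.1), (0.1, 0.0), (0.9, 1.0), (1.0, 0.9)]
print(lbg(data, [(0.0, 0.0), (1.0, 1.0)]))  # -> [(0.05, 0.05), (0.95, 0.95)]
```

Note the `if g else cw` guard: it keeps the old codeword when a group comes up empty, which is exactly the empty cell problem discussed in a later slide.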
  • 16. Initializing the LBG Algorithm • The important thing we need to consider is a good set of initial quantization points, since the LBG algorithm only guarantees that the distortion from one iteration to the next will not increase. • The performance of the LBG algorithm depends heavily on the initial codebook. • We will use the splitting technique to design the initial codebook. Other initialization approaches include: 1. Random selection (Hilbert technique) 2. Pairwise Nearest Neighbor (PNN) method 02 February 2021 Dr. Piyush Charan, Dept. of ECE, Integral University, Lucknow 17
  • 17. Empty Cell Problem • What do we do if one of the reconstruction or quantization regions in some iteration is empty? • There might be no points which are closer to a given reconstruction point than to any other reconstruction point. • This is a problem because, in order to update an output point, we need to take the average of the input vectors assigned to it. • But in this case we would end up with an output point that is never used. • A common solution to the empty cell problem is to remove an output point that has no inputs associated with it and replace it with a point from the quantization region with the most training points. 02 February 2021 Dr. Piyush Charan, Dept. of ECE, Integral University, Lucknow 18
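A minimal sketch of this fix in Python; the data layout (a list of groups parallel to the codebook) and the choice of which point to take from the busiest region are illustrative assumptions:

```python
def fix_empty_cell(codebook, groups):
    """groups[i] holds the training vectors currently assigned to codebook[i].
    Replace each codeword whose group is empty with a vector taken from the
    quantization region that has the most training points."""
    for i, g in enumerate(groups):
        if not g:
            busiest = max(range(len(groups)), key=lambda j: len(groups[j]))
            codebook[i] = groups[busiest].pop()  # reuse a point from the busiest cell
    return codebook

cb = [(0.0, 0.0), (5.0, 5.0)]
grp = [[(0.1, 0.0), (0.0, 0.2), (0.2, 0.1)], []]  # second cell is empty
print(fix_empty_cell(cb, grp))  # -> [(0.0, 0.0), (0.2, 0.1)]
```

After the fix, the next LBG iteration can re-partition the crowded region between two nearby codewords instead of wasting one that is never used.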
  • 18. Tree-Structured Vector Quantization • Another fast codebook design technique, tree-structured VQ, was presented by Buzo. • The number of operations can be reduced by enforcing a certain structure on the codebook. • One such possibility is using a tree structure, which turns the codebook into a tree codebook; the method is called binary search clustering. 02 February 2021 Dr. Piyush Charan, Dept. of ECE, Integral University, Lucknow 19
  • 19. Tree-Structured Vector Quantization • The disadvantage of tree search is that we might not end up with the reconstruction point closest to the input, so the distortion will be a little higher compared to a full-search quantizer. • The storage requirement will also be larger, since we have to store all the test vectors too. 02 February 2021 Dr. Piyush Charan, Dept. of ECE, Integral University, Lucknow 20
  • 20. How to design a TSVQ 1. Obtain the average of all the training vectors, perturb it to obtain a second vector, and use these vectors to form a two-level VQ. 2. Call the vectors v0 and v1, and call the groups of training-set vectors that would be quantized to each g0 and g1. 3. Perturb v0 and v1 to get the initial vectors for a four-level VQ. 4. Use g0 to design a two-level VQ and g1 to design another two-level VQ. 5. Label the vectors v00, v01, v10, v11. 6. Split g0 using v00 and v01 into two groups g00 and g01. 7. Split g1 using v10 and v11 into two groups g10 and g11. 02 February 2021 Dr. Piyush Charan, Dept. of ECE, Integral University, Lucknow 21
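The splitting steps above can be sketched recursively: perturb the centroid of the current group to get two seed vectors, design a two-level VQ on the group, then recurse on the two resulting subgroups. The perturbation size, the fixed iteration count, and the toy data are illustrative assumptions:

```python
def two_level_vq(vectors, eps=0.01, iters=20):
    """Design a two-level VQ on `vectors`: perturb the centroid to get two
    seeds, then alternate nearest-seed assignment and centroid update."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    centroid = tuple(sum(c) / len(vectors) for c in zip(*vectors))
    v0 = tuple(c - eps for c in centroid)   # perturbed copies of the centroid
    v1 = tuple(c + eps for c in centroid)
    for _ in range(iters):
        g0 = [v for v in vectors if dist2(v, v0) <= dist2(v, v1)]
        g1 = [v for v in vectors if dist2(v, v0) > dist2(v, v1)]
        if g0:
            v0 = tuple(sum(c) / len(g0) for c in zip(*g0))
        if g1:
            v1 = tuple(sum(c) / len(g1) for c in zip(*g1))
    return (v0, g0), (v1, g1)

def build_tsvq(vectors, depth):
    """Recursively split groups to build a tree-structured codebook."""
    if depth == 0 or len(vectors) < 2:
        return tuple(sum(c) / len(vectors) for c in zip(*vectors))  # leaf codeword
    (v0, g0), (v1, g1) = two_level_vq(vectors)
    return {"v0": build_tsvq(g0, depth - 1), "v1": build_tsvq(g1, depth - 1)}

tree = build_tsvq([(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)], depth=1)
print(tree)  # -> {'v0': (0.0, 0.5), 'v1': (10.0, 10.5)}
```

Encoding then walks the tree, comparing the input against only two test vectors per level, which is the source of the speed-up (and of the extra test-vector storage) noted on the previous slide.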
  • 21. Pruned Tree-Structured Vector Quantizer • Now that we have developed a tree-structured codebook, we can improve its rate-distortion performance by pruning: removing carefully selected subgroups reduces the size of the codebook and thus the rate. • But pruning may increase the distortion, so the main objective is to remove those groups that result in the best trade-off between rate and distortion. • Prune the tree by finding the subtree T that minimizes λ_T: • λ_T = (change in distortion if subtree T is pruned) / (change in rate if subtree T is pruned) 02 February 2021 Dr. Piyush Charan, Dept. of ECE, Integral University, Lucknow 22
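The pruning rule above picks the subtree with the smallest ratio of distortion increase to rate decrease. A minimal sketch, where the candidate-subtree tuples and their numbers are illustrative assumptions:

```python
def best_subtree_to_prune(candidates):
    """candidates: list of (name, delta_distortion, delta_rate) tuples, where
    delta_distortion is the increase in distortion and delta_rate the decrease
    in rate if that subtree is pruned.
    Return the candidate minimizing lambda_T = delta_distortion / delta_rate."""
    return min(candidates, key=lambda c: c[1] / c[2])

# lambda_T values: T1 -> 2.0, T2 -> 1.0, T3 -> 5.0, so T2 is pruned first.
candidates = [("T1", 4.0, 2.0), ("T2", 3.0, 3.0), ("T3", 5.0, 1.0)]
print(best_subtree_to_prune(candidates)[0])  # -> T2
```

Repeatedly pruning the minimum-λ_T subtree traces out a sequence of codebooks trading rate against distortion.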
  • 22. Structured Vector Quantization • Several structured codes impose a structure that allows for reduced implementation complexity by constraining the codewords or the codeword search. • Let L be the dimension of the VQ. If R is the bit rate, then L·2^(R×L) scalars need to be stored, and L·2^(R×L) scalar distortion calculations are required. • The solution is to introduce some form of structure into the codebook and into the quantization process. • The disadvantage of structured VQ is an inevitable loss in rate-distortion performance. • Different types of structured vector quantizers are: 1. Lattice quantizers 2. Tree-structured codes 3. Multistage codes 4. Product codes: gain/shape codes 02 February 2021 Dr. Piyush Charan, Dept. of ECE, Integral University, Lucknow 23
  • 23. Lattice Vector Quantizer • VQ codebooks designed using the LBG algorithm complicate the quantization process and have no visible structure. • An alternative is lattice-point quantization, since it admits a fast encoding algorithm. • For a bit rate of n bits/sample and spatial dimension v, the number of codebook vectors, or equivalently of lattice points, used is 2^(n×v). 02 February 2021 Dr. Piyush Charan, Dept. of ECE, Integral University, Lucknow 24
  • 24. How are tree-structured vector quantizers better? • Tree-structured vector quantization (TSVQ) reduces the complexity by imposing a hierarchical structure on the partitioning. We study the design of optimal tree-structured vector quantizers that minimize the expected distortion subject to cost functions related to storage cost, encoding rate, or quantization time. 02 February 2021 Dr. Piyush Charan, Dept. of ECE, Integral University, Lucknow 25
  • 25. Thanks!! 02 February 2021 Dr. Piyush Charan, Dept. of ECE, Integral University, Lucknow 26 Dr. Piyush Charan Assistant Professor, Department of ECE, Integral University, Lucknow Email: er.piyush.charan@gmail.com, piyush@iul.ac.in