Performance Analysis for Imaging Systems
CSIT Dept's SGBAU Amravati
Practical No. 1
Aim : Study of GNU Octave 4.2.1
Tool : Octave 4.2.1
Theory :
GNU Octave is software featuring a high-level programming language, primarily
intended for numerical computations. Octave helps in solving linear and nonlinear problems
numerically, and for performing other numerical experiments using a language that is mostly
compatible with MATLAB. It may also be used as a batch-oriented language. Since it is part of
the GNU Project, it is free software under the terms of the GNU General Public License.
Octave is one of the major free alternatives to MATLAB, others
being Scilab and FreeMat. Scilab, however, puts less emphasis on (bidirectional) syntactic
compatibility with MATLAB than Octave does.
GNU Octave is a high-level interpreted language, primarily intended for numerical
computations. It provides capabilities for the numerical solution of linear and nonlinear
problems, and for performing other numerical experiments. It also provides extensive graphics
capabilities for data visualization and manipulation. Octave is normally used through its
interactive command line interface, but it can also be used to write non-interactive programs. The
Octave language is quite similar to Matlab so that most programs are easily portable.
Octave is written in C++ using the C++ standard library.
Octave uses an interpreter to execute the Octave scripting language.
Octave is extensible using dynamically loadable modules.
Octave interpreter has an OpenGL-based graphics engine to create plots, graphs and
charts and to save or print them. Alternatively, gnuplot can be used for the same purpose.
Octave includes a Graphical User Interface (GUI) in addition to the traditional Command
Line Interface (CLI).
Follow these instructions to install Octave on your computer.
1. Make sure you are connected to the network file storage, specifically the I: drive. For
information on connecting, see Network File Storage at St. Olaf.
2. Navigate to I:\Octave.
3. Right-click on install-octave.bat and select Run as administrator.
4. The installer will copy the Octave files to C:\Octave on your computer and place a shortcut on
your desktop. Double-click on the desktop shortcut to start Octave.
Important information for Windows 10 and 8 users
Octave is not completely compatible with Windows 10 and 8. If you install it on Windows 10 or
8, you will not see the typical “octave>” prompt, and certain functions, such as plotting, will not
work. There is a work-around:
1. Right-click on the desktop shortcut for Octave and select Properties.
2. In the Target: field, add the following text to the end of the line: “-i --line-editing”.
3. Click OK to save the change.
Conclusion: We have studied GNU Octave 4.2.1
Practical No. 2
Aim: To convert an RGB image to a grayscale image using Octave.
Tool : Octave 4.2.1
Theory:
In this practical we use Octave to read a JPG image file, which is an RGB image, and convert it
to a grayscale image.
In general Octave supports four different kinds of images, gray-scale images, RGB images,
binary images, and indexed images. A gray-scale image is represented with an M-by-N matrix in
which each element corresponds to the intensity of a pixel. An RGB image is represented with an
M-by-N-by-3 array where each 3-vector corresponds to the red, green, and blue intensities of
each pixel.
The actual meaning of the value of a pixel in a gray-scale or RGB image depends on the class of
the matrix. If the matrix is of class double pixel intensities are between 0 and 1, if it is of
class uint8 intensities are between 0 and 255, and if it is of class uint16 intensities are between 0
and 65535.
A binary image is an M-by-N matrix of class logical. A pixel in a binary image is black if it
is false and white if it is true.
An indexed image consists of an M-by-N matrix of integers and a C-by-3 color map. Each
integer corresponds to an index in the color map, and each row in the color map corresponds to
an RGB color. The color map must be of class double with values between 0 and 1.
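The table lookup that turns an indexed image into RGB colors can be sketched in plain Python. The 2×2 image and three-row color map below are made-up values for illustration (note that Python indexes from 0, whereas Octave color maps are indexed from 1):

```python
# Hypothetical 2x2 indexed image: each entry is a row index into the color map.
indexed = [[0, 1],
           [2, 1]]

# C-by-3 color map of class double: each row is an (R, G, B) triple in [0, 1].
colormap = [
    [0.0, 0.0, 0.0],  # index 0 -> black
    [1.0, 0.0, 0.0],  # index 1 -> red
    [0.0, 0.0, 1.0],  # index 2 -> blue
]

# Expanding the indexed image to RGB is a per-pixel table lookup.
rgb = [[colormap[idx] for idx in row] for row in indexed]
```

Each pixel of the result is the color-map row selected by its index, so the pixel at row 0, column 1 becomes red.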
-- Function File: [img, map] = gray2ind (I, n)
Convert a gray-scale intensity image to an Octave indexed image. The indexed image will consist
of n different intensity values. If not given, n defaults to 64.
This procedure is performed by the following steps:
1. Read the JPG image into an RGB matrix.
2. Convert the RGB image to a color-indexed image.
3. Convert the color-indexed image to a gray-intensity matrix.
4. Scale the gray matrix so that its values fit within the uint8 range [0, 255].
Program :
% Read the JPG image into an RGB matrix
[img, map, alpha] = imread('Desert.jpg');
% Convert the RGB image to a color-indexed image
[x, map] = rgb2ind(img);
% Convert the color-indexed image to a gray-intensity matrix
y = ind2gray(x, map);
% Scale the values into the uint8 range [0, 255]
y = uint8((255 * y) / max(max(y)));
% Write the grayscale result back to disk
imwrite(y, 'Desert.jpg', 'jpg', 'Quality', 75);
Below is an example: on the left the original picture, and on the right the processed image.
Output :
Conclusion : We have converted an RGB image to a grayscale image using Octave.
Practical no. 3
Aim: Write a code to separate the RGB components of an image and visualize the result using
Octave.
Tool : Octave 4.2.1
Theory:
In general Octave supports four different kinds of images, grayscale images, RGB images, binary
images, and indexed images. A grayscale image is represented with an M-by-N matrix in which
each element corresponds to the intensity of a pixel. An RGB image is represented with an M-
by-N-by-3 array where each 3-vector corresponds to the red, green, and blue intensities of each
pixel.
The actual meaning of the value of a pixel in a grayscale or RGB image depends on the class of
the matrix. If the matrix is of class double pixel intensities are between 0 and 1, if it is of
class uint8 intensities are between 0 and 255, and if it is of class uint16 intensities are between 0
and 65535.
A binary image is an M-by-N matrix of class logical. A pixel in a binary image is black if it
is false and white if it is true.
An indexed image consists of an M-by-N matrix of integers and a C-by-3 color map. Each
integer corresponds to an index in the color map, and each row in the color map corresponds to
an RGB color. The color map must be of class double with values between 0 and 1.
Program:
n1 = imread('Jellyfish.jpg');
imshow(n1);
% Red component: zero out the green and blue channels
red_n1 = n1; red_n1(:,:,2) = 0; red_n1(:,:,3) = 0;
% Green component: zero out the red and blue channels
green_n1 = n1; green_n1(:,:,1) = 0; green_n1(:,:,3) = 0;
% Blue component: zero out the red and green channels
blue_n1 = n1; blue_n1(:,:,1) = 0; blue_n1(:,:,2) = 0;
figure; imshow(red_n1); figure; imshow(green_n1); figure; imshow(blue_n1);
Conclusion: We have written code to separate the RGB components of an image and visualize
the result using Octave.
Practical No. 4
Aim: Write a code to convert between the RGB and the HSV color spaces and visualize its
corresponding results.
Tool : Octave 4.2.1
Theory:
The color information can be encoded using different color spaces, with each color space having
its own characteristics. This assignment demonstrates how a relatively simple conversion
between the RGB and the HSV color spaces helps us achieve interesting results. Read an image
from the file trucks.jpg. Display the image on screen, both as a color image in the RGB color
space and each of its channels as a separate grayscale image. Convert the image from the RGB
color space to the HSV color space, using the built-in function rgb2hsv, and display each channel
as a separate grayscale image. When working with the resulting matrices, take note of their type:
the original image in the RGB color space is stored in a matrix of type uint8 (unsigned integers
in the range 0 to 255), while the converted image is stored in a matrix of type double (real values
in the range 0 to 1).
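The same conversion can be tried outside Octave with Python's standard colorsys module, which, like rgb2hsv, works on double values in [0, 1]. For a pure red pixel the result is hue 0, full saturation and full value, and the conversion round-trips:

```python
import colorsys

# A uint8 RGB pixel (values 0-255); pure red as an example.
r, g, b = 255, 0, 0

# colorsys works on values in [0, 1], mirroring the double-typed HSV result.
h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)

# The conversion round-trips back to the original RGB values.
r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v)
```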
Program:
I = imread('C:\Users\Achal\Documents\Apple.jpg');
figure (1);
imshow (I);
I1 = rgb2hsv (I);
figure (2);
imshow (I1);
Input:
Output :
Conclusion: We have written code to convert between the RGB and the HSV color spaces and
visualized the corresponding results.
Practical no. 5
Aim: To calculate image histogram and visualize the result of the highest and lowest values
using Octave
Tool : Octave 4.2.1
Theory:
Histograms are a very useful tool in image analysis; as we will be using them extensively in the
later exercises, it is recommended that you pay extra attention to how they are built.
Steps :
1. Create a script file myhist.m and use it to implement a function myhist that is given a
2-D grayscale image and a number of bins, and returns a 1-D histogram as well as the
bottom reference value for each bin.
2. Histogram calculation is also implemented in Matlab/Octave in the form of the function hist;
however, this function works in a slightly different way. To try it out, write the code below
to the script file exercise1_assignment2b.m. Read an image from the file umbrellas.jpg and
convert it to grayscale. Since the hist function does not work on images but on sequences of
points, we first have to reshape the image matrix of size (N × M) into a 1-D vector of size
NM × 1 and compute a histogram on this sequence. Plot a histogram for different numbers
of bins and explain why the shape of the histogram changes with that number.
3. The maximum and the minimum grayscale values in the input image, v_max and v_min,
can be determined using the commands max(I(:)) and min(I(:)). The new pixel value can be
computed using a formula similar to the one we used in the function myhist.
Test the function by writing a script that reads an image from the file phone.jpg (note that it is
already a grayscale image), computes the histogram with 255 bins, and displays it using
imshow. As you can observe from the histogram, the lowest grayscale value in the image
is not 0 and the highest value is not 255. Perform the histogram stretching operation and
visualize the results (display the image using imshow and plot its 255-bin histogram).
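The binning of myhist and the histogram stretching from step 3 can be sketched in plain Python. The pixel list is a made-up stand-in for a flattened grayscale image:

```python
# Hypothetical grayscale values (uint8 range) standing in for a flattened image.
pixels = [30, 30, 60, 90, 90, 90, 120, 200]

def myhist(values, bins, lo=0, hi=256):
    """1-D histogram: count of values per bin plus each bin's bottom edge."""
    width = (hi - lo) / bins
    counts = [0] * bins
    for v in values:
        i = min(int((v - lo) / width), bins - 1)  # clamp the top value into the last bin
        counts[i] += 1
    edges = [lo + i * width for i in range(bins)]
    return counts, edges

counts, edges = myhist(pixels, bins=4)   # bins cover [0,64), [64,128), ...

# Histogram stretching: map [v_min, v_max] linearly onto the full [0, 255] range.
v_min, v_max = min(pixels), max(pixels)
stretched = [round((v - v_min) * 255 / (v_max - v_min)) for v in pixels]
```

After stretching, the lowest value maps to 0 and the highest to 255, so the histogram spans the full grayscale range.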
Program:
I = double(rgb2hsv(imread('Jellyfish.jpg')));
P = I(:); % A handy way to turn 2D matrix into a vector of numbers
figure(1); clf;
bins = 10 ;
H = hist(P, bins);
subplot(1,3,1); bar(H, 'b');
bins = 20 ;
H = hist(P, bins);
subplot(1,3,2); bar(H, 'b');
bins = 40 ;
H = hist(P, bins);
subplot(1,3,3); bar(H, 'b');
Input Image:
Output :
Conclusion: We have calculated the image histogram and visualized the results for the highest
and lowest values using Octave.
Practical no. 6
Aim: To investigate how changing the values of individual channels affects the image and
results for different color spaces in octave.
Tool : Octave 4.2.1
Theory:
The color information can be encoded using different color spaces, with each color space having
its own characteristics.
Steps :
Read an image. Display the image on screen, both as a color image in the RGB color space
and each of its channels as a separate grayscale image. Convert the image from the RGB
color space to the HSV color space, using the built-in function rgb2hsv, and display each
channel as a separate grayscale image. When working with the resulting matrices, take note
of their type: the original image in the RGB color space is stored in a matrix of type uint8
(unsigned integers in the range 0 to 255), while the converted image is stored in a matrix of
type double (real values in the range 0 to 1).
Let us investigate how changing the values of individual channels affects the image. Create
a file called exercise1_assignment3b.m and write a script that modifies the image from the
file trucks.jpg by scaling each channel by values from 0 to 1 (do this in 10 steps). Perform
this modification on both the RGB and the HSV image. (Note: before we can display the
HSV image, we need to convert it back to the RGB color space. This can be done using
the built-in function hsv2rgb.)
Different color spaces are also useful when we wish to threshold the image. For example,
in the RGB color space it is difficult to determine regions that belong to a certain shade of
a color. To demonstrate this, create a file exercise1_assignment3c.m and write a script
that loads the image from the file trucks.jpg and thresholds its blue channel with the threshold
value of 200. Display the original and the thresholded image next to each other.
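The channel scaling and the blue-channel thresholding described above can be sketched in plain Python. The 2×3 grid of blue-channel values is made up for illustration:

```python
# Hypothetical blue-channel values for a 2x3 image (uint8 range).
blue = [[10, 250, 199],
        [201, 0, 255]]

THRESHOLD = 200

# Thresholding yields a binary image: True where blue exceeds the threshold.
mask = [[v > THRESHOLD for v in row] for row in blue]

# Scaling a channel by a factor in [0, 1] dims that color component;
# the assignment repeats this for factors 0.0, 0.1, ..., 1.0.
half = [[v * 0.5 for v in row] for row in blue]
```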
Program:
I1 = imread('Jellyfish.jpg');
figure(1);
subplot(2, 4, 1); imshow(I1);           % original RGB image
I2 = rgb2hsv(I1);
subplot(2, 4, 2); imshow(hsv2rgb(I2));  % HSV image converted back for display
Output :
Conclusion : We have investigated how the values of individual channels affect the image and
the results for different color spaces.
Practical no. 7
Aim: Introduction to image processing tool Scilab 6.0.1.
Tool: Scilab 6.0.1
Theory:
Scilab is a programming language associated with a rich collection of numerical
algorithms covering many aspects of scientific computing problems.
From the software point of view, Scilab is an interpreted language. This generally allows
getting faster development processes, because the user directly accesses a high-level language,
with a rich set of features provided by the library. The Scilab language is meant to be extended
so that user-defined data types can be defined with possibly overloaded operations. Scilab users
can develop their own modules so that they can solve their particular problems. The Scilab
language allows other languages such as Fortran and C to be dynamically compiled and linked:
this way, external libraries can be used as if they were part of Scilab's built-in features. Scilab also
interfaces LabVIEW, a platform and development environment for a visual programming
language from National Instruments.
From the license point of view, Scilab is free software in the sense that the user does
not pay for it, and it is open-source software, provided under the CeCILL license. The
software is distributed with source code, so that the user has access to Scilab's most internal
aspects. Most of the time, the user downloads and installs a binary version of Scilab, since the
Scilab consortium provides Windows, Linux and Mac OS executable versions. Online help is
provided in many local languages.
From the scientific point of view, Scilab comes with many features. At the very
beginning of Scilab, features were focused on linear algebra. But, rapidly, the number of features
extended to cover many areas of scientific computing. The following is a short list of its
capabilities:
• Linear algebra, sparse matrices,
• Polynomials and rational functions,
• Interpolation, approximation,
• Linear, quadratic and nonlinear optimization,
• Ordinary Differential Equation solver and Differential Algebraic Equations solver,
• Classic and robust control, Linear Matrix Inequality optimization,
• Differentiable and non-differentiable optimization,
• Signal processing,
• Statistics.
Scilab provides many graphics features, including a set of plotting functions, which allow
creating 2D and 3D plots as well as user interfaces. The Xcos environment provides a hybrid
dynamic systems modeller and simulator.
Working with Scilab:
There are several ways of using Scilab and the following paragraphs present three
methods:
• using the console in the interactive mode,
• using the exec function against a file,
• using batch processing.
The Console
The first way is to use Scilab interactively, by typing commands in the console, analyzing
the results and continuing this process until the final result is computed. This document is
designed so that the Scilab examples printed here can be copied into the console. The
goal is that readers can experiment with Scilab's behavior for themselves. This is indeed a good
way of understanding the behavior of the program and, most of the time, it allows a quick and
smooth way of performing the desired computation. In the following example, the function disp
is used in the interactive mode to print out the string "Hello World!".
-->disp("Hello World!")
Hello World!
The Editor
Scilab version 5.2 provides a new editor which makes it easy to edit scripts. This editor can
manage several files at the same time, and there are many features worth mentioning. The most
commonly used features are under the Execute menu.
• Load into Scilab executes the statements in the current file, as if we did a copy and paste.
This implies that statements which do not end with the semicolon ";" character will produce
output in the console.
• Evaluate Selection executes the statements which are currently selected.
• Execute File Into Scilab executes the file, as if we used the exec function. The only results
produced in the console are those associated with printing functions, such as disp.
The Edit menu provides a very interesting feature, commonly known as a "pretty printer"
in most languages. This is the Edit > Correct Indentation feature, which automatically indents the
current selection. This feature is extremely convenient, as it allows algorithms to be formatted so
that if, for and other structured blocks are easy to analyze. The editor also provides fast access
to the inline help. Indeed, assume that we have selected the disp statement, as presented in
figure 7. When we right-click in the editor, we get the context menu, where the Help about "disp"
entry opens the help page associated with the disp function.
Basic Elements of the language:
Scilab is an interpreted language, which means that it allows variables to be manipulated in a
very dynamic way. If Scilab provided only these features, it would only be a super desktop
calculator. Fortunately, it is a lot more, and this is the subject of the remaining sections, where we
will show how to manage other types of variables, that is, booleans, complex numbers, integers
and strings. It may seem strange at first, but it is worth stating right from the start: in Scilab,
everything is a matrix. To be more accurate, we should write: all real, complex, boolean, integer,
string and polynomial variables are matrices. Lists and other complex data structures (such as
tlists and mlists) are not matrices (but can contain matrices). These complex data structures will
not be presented in this document. This is why we could begin by presenting matrices. Still, we
choose to present basic data types first, because Scilab matrices are in fact a special organization
of these basic building blocks.
In Scilab, we can manage real and complex numbers. This always leads to some
confusion if the context is not clear enough. In the following, when we write real variable, we
will refer to a variable whose content is not complex. In most cases, real variables and complex
variables behave in a very similar way, although some extra care must be taken when complex
data is to be processed. Because it would make the presentation cumbersome, we simplify most
of the discussions by considering only real variables, taking extra care with complex variables
only when needed.
Conclusion:
We have studied Scilab, a programming language that supports faster development
processes, along with its basic elements.
Practical no. 8
Title: To apply an affine transformation or perspective transformation to an image using Scilab.
Tool: Scilab 6.0.1
Theory:
This example shows how to use phase correlation as a preliminary step for automatic image
registration. In this process, you perform phase correlation, using imregcorr, and then pass the
result of that registration as the initial condition of an optimization-based registration, using
imregister. Phase correlation and optimization-based registration are complementary algorithms.
Phase correlation is good for finding gross alignment, even for severely misaligned images.
Optimization-based registration is good for finding precise alignment, given a good initial
condition.
Steps :
Read an image that will be the reference image in the registration.
Create an unregistered image by deliberately distorting this image using rotation,
isotropic scaling, and shearing in the y direction.
Add noise to the image, and display the result.
Apply the estimated geometric transform to the misaligned image. Specify
'OutputView' to make sure the registered image is the same size as the reference image.
Display the original image and the registered image side-by-side. You can see that
imregcorr has done a good job handling the rotation and scaling differences between
the images. The registered image, movingReg, is very close to being aligned with the
original image, fixed. But some misalignment remains. imregcorr can handle rotation
and scale distortions well, but not shear distortion.
Program:
I = imread('C:\Users\COM SCI\Documents\flower.jpg');
imshow (I);
I = I(10+[1:256],222+[1:256],:);
figure;imshow(I);title('Original Image');
// create PSF
LEN = 31;
THETA = 11;
PSF = fspecial('motion',LEN,THETA);
//blur the image
Blurred = imfilter(I,PSF,'circular','conv');
figure; imshow(Blurred);title('Blurred Image');
// deblur the image
wnr1 = deconvwnr(Blurred,PSF);
figure;imshow(wnr1);
title('Restored, True PSF')
Input Image:
Output :
Conclusion:
An affine transformation or perspective transformation of an image using Scilab has been
performed.
Practical no. 9
Title: Edge Detection using Canny and Sobel method in Scilab 6.0.1
Tool: Scilab 6.0.1
Theory:
In an image, an edge is a curve that follows a path of rapid change in image intensity.
Edges are often associated with the boundaries of objects in a scene. Edge detection is used to
identify the edges in an image.
To find edges, you can use the edge function. This function looks for places in the image
where the intensity changes rapidly, using one of these two criteria:
• Places where the first derivative of the intensity is larger in magnitude than some
threshold.
• Places where the second derivative of the intensity has a zero crossing.
The edge function provides a number of derivative estimators, each of which implements one of
the definitions above. For some of these estimators, you can specify whether the operation should
be sensitive to horizontal edges, vertical edges, or both. edge returns a binary image containing 1's
where edges are found and 0's elsewhere.
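The first criterion, thresholding the magnitude of the first derivative, can be illustrated in plain Python using the Sobel kernels on a hypothetical 3×3 patch that contains a vertical edge (dark on the left, bright on the right):

```python
# Hypothetical 3x3 grayscale patch with a vertical edge (dark left, bright right).
patch = [[0, 0, 255],
         [0, 0, 255],
         [0, 0, 255]]

# Sobel kernels estimate the first derivative along x and y.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def correlate3(patch, kernel):
    """Element-wise product and sum over the 3x3 neighbourhood."""
    return sum(patch[i][j] * kernel[i][j] for i in range(3) for j in range(3))

gx = correlate3(patch, SOBEL_X)   # strong response across the vertical edge
gy = correlate3(patch, SOBEL_Y)   # no intensity change top-to-bottom
magnitude = (gx * gx + gy * gy) ** 0.5

# The centre pixel is marked as an edge when the gradient magnitude
# exceeds some threshold, which is exactly the first criterion above.
is_edge = magnitude > 100
```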
The most powerful edge-detection method that edge provides is the Canny method. The
Canny method differs from the other edge-detection methods in that it uses two different
thresholds (to detect strong and weak edges), and includes the weak edges in the output only if
they are connected to strong edges. This method is therefore less likely than the others to be
fooled by noise, and more likely to detect true weak edges.
The following illustrates the power of the Canny edge detector by showing the results of
applying the Sobel and Canny edge detectors to the same image.
Output :
Conclusion: Edge detection using the Canny and Sobel methods has been performed.
Practical No. 10
Aim: Implementing Image segmentation - color-based segmentation using K-Means Clustering
Tool : Matlab R2018a
Theory:
Clustering is a classification technique. Given a vector of N measurements describing
each pixel or group of pixels (i.e., region) in an image, similarity of the measurement vectors,
and therefore their clustering in the N-dimensional measurement space, implies similarity of the
corresponding pixels or pixel groups. Therefore, clustering in measurement space may be an
indicator of similarity of image regions, and may be used for segmentation purposes.
K-Means Clustering Overview
K-Means Clustering generates a specific number of disjoint, flat (non-hierarchical)
clusters. It is well suited to generating globular clusters. The K-Means method is numerical,
unsupervised, non-deterministic and iterative.
Your goal is to segment colors in an automated fashion using the L*a*b* color space and K-
means clustering.
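The K-Means iteration can be sketched in plain Python on hypothetical 1-D data; the real segmentation clusters the 2-D a*b* values of each pixel in the same way, just with a 2-D distance:

```python
# Hypothetical 1-D "colour" samples forming two obvious clusters.
points = [1.0, 1.2, 0.8, 8.0, 8.2, 7.8]

def kmeans_1d(points, centers, iters=10):
    """Plain K-Means: assign each point to its nearest centre, then
    move each centre to the mean of its assigned points."""
    for _ in range(iters):
        groups = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            groups[nearest].append(p)
        centers = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centers)]
    return centers

centers = kmeans_1d(points, centers=[0.0, 10.0])  # converges near 1.0 and 8.0
```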
Steps:
Step 1: Read Image
Step 2: Convert Image from RGB Color Space to L*a*b* Color Space
Step 3: Classify the Colors in 'a*b*' Space Using K-Means Clustering
Step 4: Label Every Pixel in the Image Using the Results from KMEANS
Step 5: Create Images that Segment the H&E Image by Color.
Step 6: Segment the Nuclei into a Separate Image
Program :
he = imread('C:\Users\Achal\Documents\dog.jpg');
figure (1);
imshow(he), title('H&E image');
text(size(he,2),size(he,1)+15,...
'Image courtesy of Alan Partin, Johns Hopkins University', ...
'FontSize',7,'HorizontalAlignment','right');
lab_he = rgb2lab(he);
ab = lab_he(:,:,2:3);
nrows = size(ab,1);
ncols = size(ab,2);
ab = reshape(ab,nrows*ncols,2);
nColors = 3;
[cluster_idx, cluster_center] = kmeans(ab,nColors,'distance','sqEuclidean', ...
'Replicates',3);
pixel_labels = reshape(cluster_idx,nrows,ncols);
figure(2);
imshow(pixel_labels,[]), title('image labeled by cluster index');
segmented_images = cell(1,3);
rgb_label = repmat(pixel_labels,[1 1 3]);
for k = 1:nColors
color = he;
color(rgb_label ~= k) = 0;
segmented_images{k} = color;
end
figure (3);
imshow(segmented_images{1}), title('objects in cluster 1');
figure (4);
imshow(segmented_images{2}), title('objects in cluster 2');
figure (5);
imshow(segmented_images{3}), title('objects in cluster 3');
mean_cluster_value = mean(cluster_center,2);
[tmp, idx] = sort(mean_cluster_value);
blue_cluster_num = idx(1);
L = lab_he(:,:,1);
blue_idx = find(pixel_labels == blue_cluster_num);
L_blue = L(blue_idx);
is_light_blue = imbinarize(rescale(L_blue));
nuclei_labels = repmat(uint8(0),[nrows ncols]);
nuclei_labels(blue_idx(is_light_blue==false)) = 1;
nuclei_labels = repmat(nuclei_labels,[1 1 3]);
blue_nuclei = he;
blue_nuclei(nuclei_labels ~= 1) = 0;
figure (6);
imshow(blue_nuclei), title('blue nuclei');
Output :
Conclusion:
Hence we used MATLAB functions to segment colors in an automated fashion using the
L*a*b* color space and K-means clustering.
Practical No. 11
Aim: Implementing Image deblurring using Matlab.
Tool : Matlab R2018a
Theory:
Use the deconvreg function to deblur an image using a regularized filter. A regularized filter can
be used effectively when limited information is known about the additive noise.
To illustrate, this example simulates a blurred image by convolving a Gaussian filter PSF with an
image (using imfilter). Additive noise in the image is simulated by adding Gaussian noise of
variance V to the blurred image (using imnoise):
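The blur-plus-noise simulation can be sketched in one dimension in plain Python; the row of values and the 3-tap PSF are made up for illustration:

```python
import random

# Hypothetical 1-D "image" row and a small blur kernel (a 1-D stand-in for a PSF).
row = [0, 0, 12, 12, 12, 0, 0]
psf = [0.25, 0.5, 0.25]

# Convolving with the PSF simulates the blur (the role of imfilter).
pad = [row[0]] + row + [row[-1]]  # replicate the borders
blurred = [sum(psf[k] * pad[i + k] for k in range(3)) for i in range(len(row))]

# Additive Gaussian noise of variance V simulates the noise (the role of imnoise).
random.seed(0)
V = 0.02
noisy = [b + random.gauss(0, V ** 0.5) for b in blurred]
```

The sharp step in the row is smeared out by the PSF; deconvreg then tries to undo this blur given an estimate of the noise power.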
steps :
1. Read an image into the MATLAB workspace. The example uses cropping to reduce the size
of the image to be deblurred. This is not a required step in deblurring operations.
2. Create the PSF.
3. Create a simulated blur in the image and add noise.
4. Use deconvreg to deblur the image, specifying the PSF used to create the blur and the noise
power, NP.
Refining the Result
You can affect the deconvolution results by providing values for the optional arguments
supported by the deconvreg function. Using these arguments you can specify the noise power
value, the range over which deconvreg should iterate as it converges on the optimal solution, and
the regularization operator to constrain the deconvolution. To see the impact of these optional
arguments, view the Image Processing Toolbox™ deblurring examples.
Program:
I = imread('C:\Users\Achal\Documents\Apple.jpg');
I = I(125+[1:256],1:256,:);
figure, imshow(I)
title('Original Image');
PSF = fspecial('gaussian',11,5);
Blurred = imfilter(I,PSF,'conv');
V = .02;
BlurredNoisy = imnoise(Blurred,'gaussian',0,V);
figure, imshow(BlurredNoisy)
title('Blurred and Noisy Image');
NP = V*prod(size(I));
[reg1 LAGRA] = deconvreg(BlurredNoisy,PSF,NP);
figure,imshow(reg1)
title('Restored Image');
Output :
Conclusion: We have performed image deblurring using a regularized filter (deconvreg) in
Matlab.
Practical No. 12
Aim : Basic gray-level transformation in Matlab R2017a
Tool : Matlab R2018a
Theory :
We begin the study of image enhancement techniques by discussing gray-level transformation
functions. These are among the simplest of all image enhancement techniques. The values of
pixels, before and after processing, will be denoted by r and s, respectively. As indicated in the
previous section, these values are related by an expression of the form s = T(r), where T is a
transformation that maps a pixel value r into a pixel value s. Since we are dealing with digital
quantities, values of the transformation function typically are stored in a one-dimensional array
and the mappings from r to s are implemented via table lookups. For an 8-bit environment, a
lookup table containing the values of T will have 256 entries. As an introduction to gray-level
transformations, consider Fig. 3.3, which shows three basic types of functions used frequently
for image enhancement: linear (negative and identity transformations), logarithmic (log and
inverse-log transformations), and power-law transformations. The identity function is the trivial
case in which output intensities are identical to input intensities. It is included in the graph only
for completeness.
The negative of an image with gray levels in the range [0, L-1] is obtained by using the
negative transformation shown in Figure. 3.3, which is given by the expression
s = L - 1 - r.
Reversing the intensity levels of an image in this manner produces the equivalent of a
photographic negative. This type of processing is particularly suited for enhancing white or gray
detail embedded in dark regions of an image, especially when the black areas are dominant in
size.
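The table-lookup implementation of the negative transformation s = L - 1 - r can be sketched in plain Python (the 2×2 image is a made-up example):

```python
L = 256  # number of gray levels in an 8-bit image

# The negative transformation as a 256-entry lookup table: s = L - 1 - r.
lut = [L - 1 - r for r in range(L)]

# Hypothetical 2x2 grayscale image and its photographic negative,
# obtained by mapping every pixel through the table.
img = [[0, 64], [128, 255]]
negative = [[lut[r] for r in row] for row in img]
```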
Basic transformations performed are:
1. Rotate an image
2. Get negative of black n white image
3. Get negative of color image
Program :
% 1. To read an image %
i = imread('C:\Users\Achal\Documents\A.jpg');
imshow(i)
% 2. To Rotate an image %
k = imread('C:\Users\Achal\Documents\A.jpg');
j = imrotate(k,35,'bilinear');
imshow(j);
% 3. Get the negative of Black n White image %
k = imread('C:\Users\Achal\Documents\A.jpg');
imshow(k);
neg = 255 - k;
imshow(neg);
% 4. Get the negative of color image %
a = imread('C:\Users\Achal\Documents\B.jpg');
imshow(a);
a_rgb = rgb2gray(a);
imshow(a_rgb);
neg = 255-a_rgb;
imshow(neg);
Output:
Conclusion:
In this experiment we used Matlab functions to rotate an image, to obtain a grayscale image,
and to obtain the negative of an image.
Practical No. 13
Aim : Implementing image enhancement, image adjustment and thresholding in MATLAB R2018a
Tool : Matlab R2018a
Theory:
Contrast enhancements improve the perceptibility of objects in the scene by enhancing
the brightness difference between objects and their backgrounds. Contrast enhancements are
typically performed as a contrast stretch followed by a tonal enhancement, although these could
both be performed in one step. A contrast stretch improves the brightness differences uniformly
across the dynamic range of the image, whereas tonal enhancements improve the brightness
differences in the shadow (dark), midtone (grays), or highlight (bright) regions at the expense of
the brightness differences in the other regions.
Contrast enhancement processes adjust the relative brightness and darkness of objects in
the scene to improve their visibility. The contrast and tone of the image can be changed by
mapping the gray levels in the image to new values through a gray-level transformation.
Steps for image enhancement:
1. Load images
2. Resize images
3. Enhance grayscale images
4. Enhance color images
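The contrast-stretch idea behind these steps can be sketched numerically. The following Python snippet (illustrative pixel values; MATLAB's imadjust with stretchlim does this with percentile clipping) maps a narrow input range onto the full [0, 255] output range:

```python
# Linear contrast stretch on a tiny grayscale "image" (illustrative values).
# Maps [r_min, r_max] onto the full 8-bit range [0, 255].
img = [[60, 80], [100, 120]]                      # narrow dynamic range
r_min = min(min(row) for row in img)              # 60
r_max = max(max(row) for row in img)              # 120
stretched = [[round((r - r_min) * 255 / (r_max - r_min)) for r in row]
             for row in img]
print(stretched)  # [[0, 85], [170, 255]]
```

After the stretch, the brightness differences between pixels are spread uniformly across the whole dynamic range, which is the effect imadjust produces on the real image.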
Program :
% -------------------- Image Enhancement -------------------- %
i = imread('C:\Users\Achal\Documents\A.jpg');
j = imadjust(i,stretchlim(i),[]);
imshow(i);
imshow(j);
RGB1 = imread('C:\Users\Achal\Documents\A.jpg');
imshow(RGB1);
RGB2 = imadjust(RGB1,[.2 .3 0; .6 .7 1],[]);
imshow(RGB2);
RGB2 = imadjust(RGB1,[.5 .6 0; .9 1 1],[]);
imshow(RGB2);
RGB2 = imadjust(RGB1,[.1 .2 0; .9 1 1],[]);
imshow(RGB2);
% -------------------- Thresholding -------------------- %
f = imread('C:\Users\Achal\Documents\A.jpg');
f = rgb2gray(f);          % work on a single-channel image
T = input('Enter a threshold value, T = ');
% e.g. T = 100
[M,N] = size(f);
g = zeros(M,N,'uint8');   % preallocate the output image
for x = 1:M
    for y = 1:N
        if (f(x,y) < T)
            g(x,y) = 0;
        else
            g(x,y) = 255;
        end
    end
end
imshow(g);
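The per-pixel rule in the loop above can be mirrored in a short Python sketch (illustrative pixel values; T = 100 as in the sample run): pixels below the threshold become 0, all others become 255.

```python
# Global thresholding: f(x,y) < T -> 0, otherwise 255 (illustrative values).
T = 100
f = [[12, 99, 100], [150, 45, 230]]               # tiny sample "image"
g = [[0 if p < T else 255 for p in row] for row in f]
print(g)  # [[0, 0, 255], [255, 0, 255]]
```

Note that 100 itself maps to 255, because the MATLAB loop only zeroes pixels strictly below T.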
Output :
Conclusion :
We have performed image enhancement using contrast stretching (imadjust with stretchlim) and thresholding.
Practical No. 14
Aim : Morphological operations (dilation, erosion and skeletonization)
Tool : Matlab R2018a
Theory:
Morphology is a broad set of image processing operations that process images based on
shapes. Morphological operations apply a structuring element to an input image, creating an
output image of the same size. In a morphological operation, the value of each pixel in the output
image is based on a comparison of the corresponding pixel in the input image with its neighbors.
By choosing the size and shape of the neighborhood, you can construct a morphological
operation that is sensitive to specific shapes in the input image.
The most basic morphological operations are dilation and erosion. Dilation adds pixels to the
boundaries of objects in an image, while erosion removes pixels on object boundaries. The
number of pixels added or removed from the objects in an image depends on the size and shape
of the structuring element used to process the image. In the morphological dilation and erosion
operations, the state of any given pixel in the output image is determined by applying a rule to
the corresponding pixel and its neighbors in the input image. The rule used to process the pixels
defines the operation as a dilation or an erosion. This table lists the rules for both dilation and
erosion.
Rules for Dilation and Erosion
Dilation: The value of the output pixel is the maximum value of all the pixels in the input
pixel's neighborhood. In a binary image, if any of the pixels is set to the value 1, the output
pixel is set to 1.
Erosion: The value of the output pixel is the minimum value of all the pixels in the input
pixel's neighborhood. In a binary image, if any of the pixels is set to 0, the output pixel is
set to 0.
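These rules amount to taking the maximum (dilation) or minimum (erosion) over the neighborhood. A one-neighborhood Python sketch (illustrative binary values) makes this concrete:

```python
# Dilation/erosion rule applied to a single 3x3 binary neighborhood.
neighborhood = [0, 1, 0,
                0, 0, 0,
                0, 0, 1]
dilated_pixel = max(neighborhood)  # 1: at least one neighbor is 1
eroded_pixel = min(neighborhood)   # 0: at least one neighbor is 0
print(dilated_pixel, eroded_pixel)  # 1 0
```

Applying this rule at every pixel is what grows object boundaries under dilation and shrinks them under erosion.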
Program:
% Dilation %
i = imread('C:\Users\Achal\Documents\B.jpg');
imshow(i);
SE = strel('rectangle',[40 30]);   % rectangular structuring element
op = imdilate(i,SE);
imshow(op)
% Erosion %
i = imread('C:\Users\Achal\Documents\B.jpg');
imshow(i)
SE = strel('rectangle',[40 30]);
op = imerode(i,SE);
imshow(op)
% Skeletonization %
BW = imread('C:\Users\Achal\Documents\C.bmp');   % C.bmp is assumed to be a binary image
figure(1);
imshow(BW)
BW3 = bwmorph(BW,'skel',Inf);
figure(2);
imshow(BW3)
Output :
Conclusion:
Morphological operations (dilation and erosion) have been performed successfully using
MATLAB.
Practical No. 15
Aim : To classify objects based on their roundness using bwboundaries, a boundary tracing
routine
Tool : Matlab R2018a
Theory:
Round objects in an image can be identified and classified by their roundness using
bwboundaries, a boundary-tracing routine.
The various steps in identifying the round objects are:
1. Read Image
2. Threshold the image, converting it to black and white in preparation for boundary
tracing with bwboundaries.
3. Remove the noise: using morphology functions, remove pixels which do not belong to
the objects of interest.
4. Find the boundaries, concentrating only on the exterior boundaries. The 'noholes'
option accelerates processing by preventing bwboundaries from searching for inner
contours.
5. Determine which objects are round: estimate each object's area and perimeter, and use
these results to form a simple metric indicating the roundness of an object:
metric = 4*pi*area/perimeter^2
This metric is equal to one only for a circle and is less than one for any other shape, so
the discrimination process can be controlled by setting an appropriate threshold. Metrics
closer to 1 indicate that the object is approximately round.
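The metric can be verified for ideal shapes. This short Python check (hypothetical radius and side length) shows a circle scores exactly 1 while a square scores pi/4, about 0.785, which falls below the 0.94 threshold used in the program:

```python
import math

# Roundness metric = 4*pi*area/perimeter^2 for two ideal shapes.
r, s = 5.0, 4.0   # hypothetical circle radius and square side length
circle = 4 * math.pi * (math.pi * r**2) / (2 * math.pi * r)**2  # area pi*r^2, perimeter 2*pi*r
square = 4 * math.pi * s**2 / (4 * s)**2                        # area s^2, perimeter 4*s
print(round(circle, 3), round(square, 3))  # 1.0 0.785
```

The algebra cancels the radius and side length entirely, so the metric is scale-invariant: any circle scores 1 and any square scores pi/4 regardless of size.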
Program:
RGB = imread('C:\Users\Achal\Documents\pillsetc.png');
figure(1);
imshow(RGB);
I = rgb2gray(RGB);
bw = imbinarize(I);
figure(2);
imshow(bw);
bw = bwareaopen(bw,30);    % remove objects smaller than 30 pixels
se = strel('disk',2);
bw = imclose(bw,se);       % close small gaps
bw = imfill(bw,'holes');   % fill interior holes
figure(3);
imshow(bw);
[B,L] = bwboundaries(bw,'noholes');
figure(4);
imshow(label2rgb(L, @jet, [.5 .5 .5]))
hold on
for k = 1:length(B)
    boundary = B{k};
    plot(boundary(:,2), boundary(:,1), 'w', 'LineWidth', 2)
end
stats = regionprops(L,'Area','Centroid');
threshold = 0.94;
% loop over the boundaries
for k = 1:length(B)
    % obtain (X,Y) boundary coordinates corresponding to label 'k'
    boundary = B{k};
    % compute a simple estimate of the object's perimeter
    delta_sq = diff(boundary).^2;
    perimeter = sum(sqrt(sum(delta_sq,2)));
    % obtain the area calculation corresponding to label 'k'
    area = stats(k).Area;
    % compute the roundness metric
    metric = 4*pi*area/perimeter^2;
    % format the result for display
    metric_string = sprintf('%2.2f',metric);
    % mark objects above the threshold with a black circle
    if metric > threshold
        centroid = stats(k).Centroid;
        plot(centroid(1),centroid(2),'ko');
    end
    text(boundary(1,2)-35,boundary(1,1)+13,metric_string,'Color','y',...
        'FontSize',14,'FontWeight','bold');
end
title(['Metrics closer to 1 indicate that ',...
'the object is approximately round']);
Output :
Conclusion : In this practical, we used MATLAB functions for feature extraction and identified
the round objects in the image.