Computer Graphics
By: Smruti Smaraki Sarangi
Assistant Professor
IMS Unison University,
Dehradun
C O N T E N T S
Sl. No. — Chapter — Page
1. Introduction to graphics and its applications — 2-6
2. Video display devices: CRT, flat panel display, raster scan system, random scan system, input and output devices, graphics software and functions, GUI — 7-29
3. Line drawing algorithms; circle- and ellipse-generating algorithms; filled area primitives: flood-fill, boundary-fill and scan-line polygon fill algorithms; antialiasing — 30-59
4. Line attributes, curve attributes, area-fill attributes, character attributes, bundled and marker attributes — 60-75
5. 2D basic transformations, 2D composite transformations, matrix representation and homogeneous coordinates, transformations between coordinate systems, affine transformations — 76-102
6. Viewing pipeline, view coordinate reference frame, window-to-viewport transformation, clipping and types of clipping — 103-125
7. Representation of a point, 3D transformations and their types — 126-149
8. 3D viewing, projection and its types, viewing pipeline, 3D clipping and viewport clipping — 150-171
9. Visible surface detection algorithms (back-face detection, z-buffer, a-buffer, scan-line method, depth sorting method); ray-tracing algorithm and its surface intersection calculation; ray casting method — 172-197
10. Curved lines and surfaces, BSP trees and octrees, spline representation and specifications, B-spline curves and surfaces, Bezier curves and surfaces — 198-232
11. Basic illumination models, halftoning and dithering techniques, polygon rendering methods, animation techniques and morphing, hierarchical modeling structures, displaying light intensities and continuous-tone images — 233-258
CHAPTER -1
COMPUTER GRAPHICS:
Computer graphics are graphics created using computers and, more generally, the
representation and manipulation of image data by a computer with help from
specialized software and hardware.
The development of computer graphics has made computers easier to interact
with, and better for understanding and interpreting many types of data.
Developments in computer graphics have a profound impact on many types of
media and have revolutionized animation, movies and the video game industry.
APPLICATION OF COMPUTER GRAPHICS:
Computers have become a powerful tool for the rapid and economical
production of pictures. Advances in computer technology have made interactive
computer graphics a practical tool. Today, computer graphics is used in areas
such as science, engineering, medicine, business, industry, government, art,
entertainment, advertising, education and training.
I. COMPUTER AIDED DESIGN (CAD):
A major use of computer graphics is for design purposes, generally
referred to as CAD. Computer-aided design methods are now routinely
used in the design of buildings, automobiles, aircraft, watercraft,
spacecraft, computers, textiles and many other products.
For some design applications, objects are first displayed in a wireframe
outline form that shows the overall shape. Software packages for CAD
applications typically provide the designer with a multi-window
environment.
Animations are often used in CAD applications. Realistic displays of
architectural designs permit both architects and their clients to study the
appearance of a single building or a group of buildings. With virtual
reality systems, designers can even go for a simulated "walk" through the
rooms or around the outsides of buildings to better appreciate the overall
effect of a particular design.
In addition to realistic exterior building displays, architectural CAD
packages also provide facilities for experimenting with three-dimensional
interior layouts and lighting.
II. PRESENTATION GRAPHICS:
Presentation graphics is used to produce illustrations for reports or to
generate 35 mm slides or transparencies for use with projectors. It is
commonly used to summarize financial, statistical, mathematical, scientific
and economic data for research reports, managerial reports, consumer
information bulletins and other types of reports, e.g. bar charts, line
graphs, surface graphs and pie charts. Three-dimensional graphs are used
chiefly for the effect they provide: a more dramatic or more attractive
presentation of data relationships.
III. COMPUTER ART:
Computer Graphics methods are widely used in both fine art and
commercial art applications.
Artists use a variety of computer methods, including special purpose
hardware, artist’s paint brush programs, other paint packages, specially
developed software, symbolic mathematics packages, CAD packages,
desktop publishing software and animation packages that provide
facilities for designing object shapes and specifying object motions.
The basic idea behind a "paintbrush" program is that it allows artists
to paint pictures on the screen of a video monitor. The picture is usually
painted electronically on a graphics tablet using a stylus, which can
simulate different brush strokes, brush widths and colors.
Electronic art created with the aid of mathematical relationships can
be designed in relation to frequency variations and other parameters in a
musical composition, to produce a video that integrates visual and aural
patterns.
IV. ENTERTAINMENT:
Computer graphics methods are now commonly used in making
motion pictures, music videos and television shows. A graphics scene
generated for the movie "Star Trek II: The Wrath of Khan" is one example
of their use in the entertainment field.
Many TV series regularly employ computer graphics methods, e.g.
Deep Space Nine and Stay Tuned. Music videos use graphics in many
ways; image processing techniques can be used to produce a
transformation of one person or object into another (morphing).
V. EDUCATION AND TRAINING:
Computer-generated models of physical, financial and economic
systems are often used as educational aids. Models of physical systems,
physiological systems, population trends or equipment, such as a
color-coded diagram, can help trainees understand the operation of
the system.
For some training applications, special systems are designed, e.g.
simulators for practice sessions or training of ship captains, aircraft
pilots, heavy-equipment operators and air-traffic control personnel.
VI. VISUALIZATION:
Visualization is an application of computer graphics. Producing
graphical representations for scientific, engineering and medical data sets
and processes is generally referred to as scientific visualization, and the
term business visualization is used in connection with data sets related to
commerce, industry and other non-scientific areas.
There are many different kinds of data sets, and effective visualization
schemes depend on the characteristics of the data. A collection of data
can contain scalar values, vectors, higher-order tensors or any
combination of these data types.
Color coding is just one way to visualize the data sets. Additional
techniques include contour plots, graphs and charts, surface renderings
and visualization of volume interiors.
VII. IMAGE PROCESSING:
Image processing applies techniques to modify or interpret existing
pictures such as photographs and TV scans. Two principal applications of
image processing are: i) improving picture quality and ii) machine
perception of visual information.
To apply image processing methods, we first digitize a photograph or
other picture into an image file. Digital methods can then be applied to
rearrange picture parts, to enhance color separations or to improve the
quality of shading.
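As a toy illustration of applying a digital method to a digitized picture, the sketch below stores a tiny grayscale image as a 2D array and applies a simple contrast stretch. The image data and function names are invented purely for illustration; real image processing packages work on much larger arrays.

```python
# Minimal sketch: a digitized picture as a 2D array of gray levels,
# with a simple contrast stretch as one "digital method" applied to it.

def contrast_stretch(image, out_min=0, out_max=255):
    """Linearly rescale gray levels so the darkest pixel maps to
    out_min and the brightest to out_max."""
    flat = [p for row in image for p in row]
    lo, hi = min(flat), max(flat)
    if lo == hi:                      # flat image: nothing to stretch
        return [row[:] for row in image]
    scale = (out_max - out_min) / (hi - lo)
    return [[round(out_min + (p - lo) * scale) for p in row] for row in image]

digitized = [[100, 120], [140, 160]]   # a tiny 2x2 "photograph"
enhanced = contrast_stretch(digitized)
print(enhanced)                        # gray levels now span 0..255
```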
An example of the application of image processing methods is
enhancing the quality of a picture. Image processing and computer graphics
are typically combined in many applications; in medicine, one such
combined application, used to model and plan surgical procedures, is
generally referred to as computer-aided surgery.
VIII. GRAPHICAL USER INTERFACE(GUI):
It is common now for software packages to provide a graphical
interface. A major component of a graphical interface is a window
manager that allows a user to display multiple window areas. Each
window can contain a different process, which in turn can contain
graphical and non-graphical displays.
An "icon" is a graphical symbol designed to look like the
processing option it represents. The advantages of icons are that they take
up less screen space than the corresponding textual descriptions and, if
well designed, they can be understood more quickly.
CHAPTER -2
VIDEO DISPLAY DEVICES:
Display devices are output devices. The most commonly used
output device in a graphics system is the video monitor. The operation of
most video monitors is based on the standard cathode-ray tube (CRT) design.
How the Interactive Graphics display works:
The modern graphics display is extremely simple in construction. It consists of
three components:
1) A digital memory, or frame buffer, in which the displayed image is stored
as a matrix of intensity values.
2) A monitor
3) A display controller, which is a simple interface that passes the contents of
the frame buffer to the monitor.
Inside the frame buffer the image is stored as a pattern of binary digital
numbers, which represent a rectangular array of picture elements, or pixels. A
pixel is the smallest addressable screen element.
In the simplest case, where we wish to store only black and white images, we
can represent black pixels by 0's in the frame buffer and white pixels by 1's. The
display controller simply reads each successive byte of data from the frame buffer
and converts each 0 and 1 to the corresponding video signal. This signal is then fed
to the monitor. If we wish to change the displayed picture, all we need to do is
change or modify the frame buffer contents to represent the new pattern of pixels.
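The black-and-white frame-buffer scheme described above can be sketched as follows. The buffer size and function names are illustrative only, not taken from any particular system.

```python
# Sketch of a 1-bit frame buffer: 0 = black pixel, 1 = white pixel.
# The "display controller" below just walks the buffer and converts
# each bit to an on/off video level.

WIDTH, HEIGHT = 8, 4
frame_buffer = [[0] * WIDTH for _ in range(HEIGHT)]

def set_pixel(x, y, value):
    frame_buffer[y][x] = value          # modify the stored image

def scan_out(buffer):
    """Read each successive pixel and emit the corresponding signal."""
    return [["ON" if bit else "OFF" for bit in row] for row in buffer]

set_pixel(3, 1, 1)                      # turn one pixel white
signal = scan_out(frame_buffer)
print(signal[1][3])                     # -> ON
```

Changing the displayed picture is then just a matter of writing new values into `frame_buffer` before the next scan-out.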
CATHODE-RAY-TUBES:
A cathode-ray tube (CRT) is the video display device on which the operation of
most video monitors is based.
BASIC DESIGN OF A MAGNETIC-DEFLECTION CRT:
A beam of electrons emitted by an electron gun passes through focusing and
deflection systems that direct the beam towards specified positions on the phosphor-
coated screen. The phosphor then emits a small spot of light at each position
contacted by the electron beam. Because the light emitted by the phosphor fades
very rapidly, some method is needed for maintaining the screen picture.
One way to keep the phosphor glowing is to redraw the picture repeatedly by
quickly directing the electron beam back over the same points. This type of display
is called a "refresh CRT".
[Figure: Basic design of a magnetic-deflection CRT — the electron gun (cathode) with its base and connector pins, the focusing system, the magnetic deflection coils with their deflection amplifiers, and the deflected electron beam striking the phosphor-coated screen.]
OPERATION OF AN ELECTRON GUN WITH AN ACCELERATING
MODE:
The primary components of an electron gun in a CRT are the heated metal
cathode and a control grid. Heat is supplied to the cathode by directing a current
through a coil of wire, called the "filament", inside the cylindrical cathode
structure. This causes electrons to be "boiled off" the hot cathode surface.
In a vacuum, inside the CRT envelope, the free negatively charged electrons are
then accelerated towards the phosphor coating by a high positive voltage. The
accelerating voltage can be generated with a positively charged metal coating on
the inside of the CRT envelope near the phosphor screen or an accelerating anode
can be used.
The intensity of the electron beam is controlled by setting voltage levels on the
control grid, a metal cylinder that fits over the cathode. A high negative
voltage applied to the control grid will shut off the beam by repelling electrons and
stopping them from passing through the small hole at the end of the control grid
structure. A smaller negative voltage on the control grid simply decreases the
number of electrons passing through.
The focusing system in a CRT is needed to force the electron beam to converge
into a small spot as it strikes the phosphor. Otherwise, the electrons would repel
each other and the beam would spread out as it approaches the screen.
The electron beam will be focused properly only at the center of the screen. As
the beam moves to the outer edges of the screen, displayed images become blurred.
To compensate for this, the system can adjust the focusing according to the screen
position of the beam. When the picture presentation rate is greater than the refresh
rate, the result is called blurring/overlapping.
Cathode ray tubes are now commonly constructed with magnetic deflection
coils mounted on the outside of the CRT envelope. Two pairs of coils are used
with the coils in each pair mounted on opposite sides of the neck of the CRT
envelope.
The magnetic field produced by each pair of coils results in a transverse deflection
force that is perpendicular both to the direction of the magnetic field and to the
direction of travel of the electron beam. Horizontal deflection is accomplished with
one pair of coils and vertical deflection with the other pair.
When electrostatic deflection is used, two pairs of parallel plates are mounted
inside the CRT envelope: one pair is mounted horizontally to control the vertical
deflection, and the other is mounted vertically to control the horizontal deflection.
Different kinds of phosphors are available for use in a CRT. Besides color, a
major difference between phosphors is their "persistence": how long they continue
to emit light after the CRT beam is removed. Persistence is defined as the time it
takes the emitted light from the screen to decay to one-tenth of its original
intensity, i.e.: Persistence ∝ 1/Refresh rate.
The intensity is greatest at the center of the spot and decreases with a Gaussian
distribution out to the edges of the spot. The distribution corresponds to the cross
sectional electron density distribution of CRT beam.
The maximum number of points that can be displayed without overlap on a
CRT is referred to as the "resolution". Resolution is the number of points per
centimeter that can be plotted horizontally and vertically. Resolution of a CRT
depends on the type of phosphor, the intensity to be displayed, and the focusing
and deflection systems.
[Figure: Intensity distribution of an illuminated phosphor spot on a CRT screen.]
"Aspect ratio" is a property of video monitors. This number gives the ratio
of vertical points to horizontal points necessary to produce equal-length lines in
both directions on the screen. An aspect ratio of 3/4 means that a vertical line
plotted with 3 points has the same length as a horizontal line plotted with 4 points.
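The arithmetic behind this can be sketched directly: an aspect ratio of 3/4 means the vertical dot spacing must be 1/(3/4) = 4/3 times the horizontal spacing so that the two lines come out equally long. The physical unit below is arbitrary and chosen only for illustration.

```python
# Sketch: with an aspect ratio of 3/4, a 3-point vertical line and a
# 4-point horizontal line span the same physical length.

aspect_ratio = 3 / 4                  # vertical points : horizontal points
h_spacing = 1.0                       # arbitrary physical unit per point
v_spacing = h_spacing / aspect_ratio  # spacing needed for equal lengths

vertical_line = 3 * v_spacing         # length of a 3-point vertical line
horizontal_line = 4 * h_spacing       # length of a 4-point horizontal line
print(vertical_line, horizontal_line) # both 4.0
```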
COLOR CRT MONITORS:
A color CRT monitor displays color pictures by using a combination of phosphors
that emit different-colored light. By combining the emitted light from the different
phosphors, a range of colors can be generated.
The 2 basic techniques for producing color display with a CRT are the beam-
penetration method and the shadow-mask method.
I. BEAM PENETRATION METHOD:
The beam penetration method for displaying color pictures has been
used with random-scan monitors. Two layers of phosphor, usually red
and green, are coated onto the inside of the CRT screen, and the displayed
color depends on how far the electron beam penetrates into the phosphor
layers.
A beam of slow electrons excites only the outer red layer. A beam of
very fast electrons penetrates through the red layer and excites the inner
green layer. At intermediate beam speeds, combinations of red and green
light are emitted to show two additional colors, orange and yellow. The
speed of the electrons, and hence the screen color at any point, is
controlled by the beam accelerating voltage.
II. SHADOW MASK METHOD:
The shadow mask method is commonly used in raster-scan systems
because it produces a much wider range of colors than the beam
penetration method. One phosphor dot emits red light, another emits
green light and the third emits blue light. This type of CRT has three
electron guns, one for each color dot, and a shadow mask grid just behind
the phosphor-coated screen.
The three electron beams are deflected and focused as a group onto the
shadow mask, which contains a series of holes aligned with the
phosphor-dot patterns. When the three beams pass through a hole in the
shadow mask, they activate a dot triangle, which appears as a small
color spot on the screen.
"Composite monitors" are adaptations of TV sets that allow bypass of
the broadcast circuitry. These display devices still require that the picture
information be combined, but no carrier signal is needed.
Color CRTs in graphics systems are designed as RGB monitors. These
monitors use shadow mask methods and take the intensity level for each
electron gun directly from the computer system, without any intermediate
processing. An RGB color system with 24 bits of storage per pixel is
generally referred to as a full-color system or a true-color system.
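One common way (assumed here for illustration; the text does not specify a layout) to store the three 8-bit gun intensities of a 24-bit true-color pixel is to pack them into a single integer in 0xRRGGBB order:

```python
# A full-color system stores 24 bits per pixel: 8 bits of intensity
# for each of the red, green and blue guns, packed as 0xRRGGBB here.

def pack_rgb(r, g, b):
    """Pack three 8-bit gun intensities into one 24-bit pixel value."""
    return (r << 16) | (g << 8) | b

def unpack_rgb(pixel):
    """Recover the three gun intensities from a packed pixel value."""
    return (pixel >> 16) & 0xFF, (pixel >> 8) & 0xFF, pixel & 0xFF

white = pack_rgb(255, 255, 255)
print(hex(white))            # 0xffffff - all three guns at full intensity
print(unpack_rgb(0xFF8000))  # (255, 128, 0)
```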
FLAT PANEL DISPLAY:
This refers to a class of video devices that have reduced volume, weight and
power requirements compared to a CRT. A significant feature of flat panel displays
is that they are thinner than CRTs.
Current uses for flat panel displays include small TV monitors and calculators,
pocket video games, laptop computers and as graphics display in applications
requiring rugged portable monitors. Flat panel display is of 2 types i.e.: emissive
display and non-emissive display.
Liquid crystal flat panel displays commonly use nematic (threadlike) liquid
crystal compounds, which tend to keep the long axes of the rod-shaped molecules
aligned. Such a display can be constructed with the nematic liquid crystal
sandwiched between two glass plates, each containing a light polarizer at right
angles to the other plate.
Rows of horizontal transparent conductors are built into one glass plate, and
columns of vertical conductors are put into the other plate. The intersection of two
conductors defines a pixel position.
Polarized light passing through the material is twisted so that it will pass
through the opposite polarizer. The light is then reflected back to the viewer. To
turn off a pixel, we apply a voltage to the two intersecting conductors to align the
molecules so that the light is not twisted. This type of flat panel device is
called a "passive matrix LCD".
Another method for constructing LCDs is to place a transistor at each pixel
location, using thin-film transistor technology. The transistors are used to control
the voltage at pixel locations and to prevent charge from gradually leaking out of
the liquid-crystal cells. These devices are called active matrix displays.
I. EMISSIVE DISPLAY:
The emissive displays (emitters) are devices that convert
electrical energy into light. Plasma panels, thin-film
electroluminescent displays and light-emitting diodes are examples of
emissive displays.
a. PLASMA PANEL:
Plasma panels, also called gas-discharge displays, are
constructed by filling the region between two glass plates with a
mixture of gases that usually includes neon.
A series of vertical conducting ribbons is placed on one glass panel,
and a set of horizontal ribbons is built into the other glass panel.
Firing voltages applied to a pair of horizontal and vertical conductors
cause the gas at the intersection of the two conductors to break down
into a glowing plasma of electrons and ions.
One disadvantage of plasma panels has been that they were strictly
monochromatic devices, but systems have been developed that are
now capable of displaying color and gray scale.
b. THIN-FILM ELECTROLUMINISCENT DISPLAYS:
Thin-Film Electroluminescent displays are similar in construction
to plasma panel. The difference is that the region between the glass
plates is filled with a phosphor, such as zinc sulphide dopped with
manganese, instead of a gas.
When a high voltage is applied to a pair of crossing electrodes, the
phosphor becomes a conductor in the area of the intersection of the two
electrodes. Electrical energy is then absorbed by the manganese, which
releases the energy as a spot of light, similar to the glowing plasma
effect in a plasma panel.
Electroluminescent displays require more power than plasma
panels and good color and gray scale displays are hard to achieve.
c. LIGHT-EMITTING DIODE(LED):
In a light-emitting diode (LED) display, a matrix of diodes is arranged to
form the pixel positions in the display, and picture definition is stored
in a refresh buffer.
II. NON-EMISSIVE DISPLAY:
The non-emissive displays (non-emitters) use optical effects to
convert sunlight or light from some other source into graphics patterns,
e.g. liquid crystal displays.
Non-emissive devices produce a picture by passing polarized light,
from the surroundings or from an internal light source, through a liquid
crystal material that can be aligned to either block or transmit the light.
"Liquid crystal" refers to the fact that these compounds have a crystalline
arrangement of molecules, yet they flow like a liquid.
a. LIQUID CRYSTAL DISPLAY (LCD):
Liquid crystal displays (LCDs) are commonly used in small
systems such as calculators and laptop computers.
RASTER-SCAN SYSTEM:
Raster-scan systems typically employ several processing units. In addition to
the central processing unit, or CPU, a special-purpose processor called the "video
controller" or "display controller" is used to control the operation of the display
device. Here the frame buffer can be anywhere in system memory, and it is
accessed by the video controller. More sophisticated raster systems employ other
processors as coprocessors and accelerators to implement various graphics
operations.
VIDEO CONTROLLER:
A fixed area of the system memory is reserved for the frame buffer and the
video controller is given direct access to the frame buffer memory.
Frame buffer locations and the corresponding screen positions are referenced in
Cartesian coordinates. The coordinate origin is defined at the lower left screen
corner. The screen surface is then represented as the first quadrant of a 2D system,
with positive 'x' values increasing to the right and positive 'y' values increasing
from bottom to top.
[Figure: Two organizations of a simple raster system — CPU, system memory and video controller connected by the system bus to the I/O devices and monitor; in the second organization a fixed portion of system memory serves as the frame buffer, accessed directly by the video controller.]
Here two registers are used to store the coordinates of the screen pixels. Initially
the x-register is set to 0 and the y-register is set to 'ymax'. The value stored in the
frame buffer for this pixel position is then retrieved and used to set the intensity of
the CRT beam. The x-register is then incremented by 1 and the process is repeated
for the next pixel on the top scan line.
This procedure is repeated for each pixel along the scan line. After the last pixel
on the top scan line has been processed, the x-register is reset to 0 and the
y-register is decremented by 1. This procedure is repeated for each successive scan
line. After cycling through all pixels along the bottom scan line (y = 0), the video
controller resets the registers to the first pixel position on the top scan line and the
refresh process starts over.
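The register-stepping procedure above can be sketched in Python; the function and variable names are illustrative, and a tiny 2 x 2 buffer stands in for a full screen.

```python
# Sketch of the video controller's refresh pass: an x-register and a
# y-register step through every pixel, from the top scan line
# (y = ymax) down to the bottom one (y = 0).

def refresh_pass(frame_buffer):
    """One full refresh cycle; returns pixel values in scan order."""
    ymax = len(frame_buffer) - 1
    xmax = len(frame_buffer[0]) - 1
    scanned = []
    y = ymax                          # y-register starts at ymax
    while y >= 0:
        x = 0                         # x-register reset for each line
        while x <= xmax:
            scanned.append(frame_buffer[y][x])  # set CRT beam intensity
            x += 1                    # next pixel on this scan line
        y -= 1                        # next scan line down
    return scanned

buffer = [[1, 0], [0, 1]]             # row index = y, so buffer[1] is the top line
print(refresh_pass(buffer))           # [0, 1, 1, 0]
```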
When the frame presentation rate is less than the refresh rate, the result is called
flickering. Flickering is a problem that occurs in raster-scan systems and is solved
by the interlacing technique.
RASTER-SCAN DISPLAY:
Raster-scan display is based on TV technology. Here the electron beam is swept
across the screen, one row at a time from top to bottom. The picture definition is
stored in a memory area called the refresh buffer or frame buffer. Each screen
point is referred to as a pixel or pel (picture element).
A raster-scan system can store intensity information for each screen point,
allowing realistic display of scenes containing shading and color patterns. In a
black-and-white system, a bit value of '1' indicates that the electron beam intensity
is to be turned on, and a value of '0' indicates that the beam intensity is to be off.
A system with 24 bits per pixel requires 2 megabytes of storage for the frame
buffer. On a black-and-white system with one bit per pixel, the frame buffer is
called a bitmap. For multiple bits per pixel, the frame buffer is called a pixmap.
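These storage figures follow from a simple calculation: width x height x bits per pixel / 8 bytes. A sketch, assuming a 1024 x 768 screen purely for illustration (the text does not state which resolution its 2-megabyte figure refers to):

```python
# Frame-buffer storage: bits per pixel times the number of pixels,
# divided by 8 to get bytes.

def frame_buffer_bytes(width, height, bits_per_pixel):
    """Bytes of frame-buffer storage for the given screen format."""
    return width * height * bits_per_pixel // 8

bitmap = frame_buffer_bytes(1024, 768, 1)    # 1 bit/pixel  -> bitmap
pixmap = frame_buffer_bytes(1024, 768, 24)   # 24 bits/pixel -> pixmap
print(bitmap, "bytes")       # 98304 bytes (96 KB)
print(pixmap / 2**20, "MB")  # 2.25 MB
```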
Raster-scan display is carried out at the rate of 60 to 80 frames per second;
refresh rates are expressed in cycles per second, or hertz. A raster system may
contain a separate display processor, called the "graphics controller" or "display
coprocessor", to free the CPU. A major task of the display processor is digitizing a
picture definition given in an application program into a set of pixel intensity
values for storage in the frame buffer. This is called "scan conversion".
RANDOM-SCAN DISPLAY:
In this system, an application program is input and stored in the system memory
along with a graphics package. The display file is accessed by the display
processor to refresh the screen. The display processor is called the display
processing unit or graphics controller.
[Figure: Raster-scan system with a display processor — the display processor, with its own display processor memory and the frame buffer, sits between the CPU/system memory and the video controller driving the monitor, alongside the I/O devices.]
RANDOM-SCAN SYSTEMS:
When operated as a random-scan display unit, a CRT has the electron beam
directed only to the parts of the screen where a picture is to be drawn. Random-
scan monitors draw a picture one line at a time and for this reason are also referred
to as vector displays, stroke-writing displays or calligraphic displays.
The refresh rate on a random-scan system depends on the number of lines to be
displayed. Picture definition is stored as a set of line-drawing commands in an
area of memory referred to as the refresh display file, display list, display
program or refresh buffer. Random-scan systems are designed for line-drawing
applications and cannot display realistic shaded scenes.
INPUT DEVICES:
The various input devices are: Keyboard, Mouse, Trackball and Space-ball,
Joysticks, Data glove, Digitizers, Image Scanner, Light Pens, Touch Panels and
Voice System.
I. KEYBOARD:
The keyboard is used primarily as a device for entering text strings. It is an
efficient device for inputting non-graphic data, such as picture labels associated
with a graphics display. Keyboards are provided with features to facilitate entry of
screen coordinates, menu selections or graphics functions.
[Figure: A random-scan system — CPU, system memory and display processor connected by the system bus to the I/O devices and monitor.]
Cursor-control keys and function keys are common features on general-purpose
keyboards. Function keys allow users to enter frequently used operations in a
single keystroke, and cursor-control keys can be used to select displayed objects or
coordinate positions by positioning the screen cursor.
II. MOUSE:
A mouse is a small hand-held box used to position the screen cursor.
Wheels or rollers on the bottom of the mouse can be used to record the
amount and direction of movement.
III. TRACKBALL AND SPACEBALL:
A trackball is a ball that can be rotated with the fingers or palm of the hand
to produce screen-cursor movement. It is a two-dimensional positioning device. A
spaceball provides six degrees of freedom; it does not actually move and is
used for three-dimensional positioning.
IV. JOYSTICKS:
A joystick consists of a small vertical lever mounted on a base that is used to
steer the screen cursor around. Most joysticks select screen positions with
actual stick movement; others respond to pressure on the stick. In a movable
joystick, the stick is used to activate switches that cause the screen cursor to
move at a constant rate in the direction selected.
V. DATA GLOVE:
A data glove can be used to grasp a ‘virtual object’. It is constructed with
a series of sensors that detect hand and finger motion.
VI. DIGITIZERS:
A digitizer is a common device used for drawing, painting or interactively
selecting coordinate positions on an object. These devices can be used to
input coordinate values in either a 2D or 3D space. A digitizer is used to scan over
a drawing or object and to input a set of discrete coordinate positions, which
can be joined with straight line segments to approximate curve or surface
shapes.
One type of digitizer is the "graphics tablet", which is used to input 2D
coordinates by activating a hand cursor or stylus at selected positions on a
flat surface. A hand cursor contains crosshairs for sighting positions, while
a stylus is a pencil-shaped device that is pointed at positions on the tablet.
VII. IMAGE SCANNER:
Drawings, graphs, color and black and white photos or text can be stored
for computer processing with an image scanner by passing an optical
scanning mechanism over the information to be stored.
VIII. LIGHT PENS:
A light pen is a pencil-shaped device used to select screen positions by
detecting the light coming from points on the CRT screen. An activated light
pen, pointed at a spot on the screen as the electron beam lights up that spot,
generates an electrical pulse that causes the coordinate position of the
electron beam to be recorded.
IX. TOUCH PANELS:
A touch panel allows displayed objects or screen positions to be selected with
the touch of a finger. A typical application of touch panels is the selection of
processing options that are represented by graphical icons.
Optical touch panels employ a line of infrared light-emitting diodes
(LEDs) along one vertical edge and along one horizontal edge of the panel,
with light detectors along the opposite edges. The detectors are used to
record which beams are interrupted when the panel is touched.
An electrical touch panel is constructed with two transparent plates
separated by a small distance. One plate is coated with a conducting material
and the other with a resistive material.
X. VOICE SYSTEM:
A voice system can be used to initiate graphics operations or to enter data.
These systems operate by matching an input against a predefined dictionary of
words and phrases.
HARD COPY OUTPUT DEVICES:
Hard-copy output devices give images in several formats. For presentations or
archiving, we can send files to devices or service bureaus that will produce 35 mm
slides or overhead transparencies. The quality of the pictures obtained from a
device depends on dot size and the number of dots per inch, or lines per inch, that
can be displayed.
Printers produce output by either impact or non-impact methods. Impact
printers press formed character faces against an inked ribbon onto the paper; a line
printer is an example of an impact device. Non-impact printers and plotters use
laser techniques, ink-jet sprays, electrostatic methods and electrothermal methods
to get images onto paper.
Character impact printers have a dot-matrix print head containing a rectangular
array of protruding wire pins, with the number of pins depending on the quality of
the printer.
In a laser device, a laser beam creates a charge distribution on a rotating drum
coated with a photoelectric material, such as selenium. Toner is applied to the
drum and then transferred to paper. Ink-jet methods produce output by squirting
ink in horizontal rows across a roll of paper wrapped on a drum. The electrically
charged ink stream is deflected by an electric field to produce dot-matrix patterns.
An electrostatic device places a negative charge on the paper, one complete row
at a time along the length of the paper. The paper is then exposed to a positively
charged toner, which is attracted to the negatively charged areas, where it adheres
to produce the specified output. Electrothermal methods use heat in a dot-matrix
print head to output patterns on heat-sensitive paper.
GRAPHICS SOFTWARE:
Graphics software is of two types:
i) General programming packages
ii) Special-purpose application packages.
A general programming package provides an extensive set of graphics
functions that can be used in a high-level programming language. Application
graphics packages are designed for non-programmers, so that users can generate
displays without worrying about how the underlying graphics operations work.
CO-ORDINATE REPRESENTATIONS:
With few exceptions, general graphics packages are designed to be used with
Cartesian coordinate specifications. If coordinate values for a picture are specified
in some other reference frame, they must be converted to Cartesian coordinates
before they can be input to the graphics package.
Special purpose packages may allow using of other co-ordinate frames that are
appropriate to the application. We can construct the shape of individual objects, in
a scene within separate co-ordinate reference frames, called “modeling co-
ordinates” or “local co-ordinates” or “master co-ordinates”.
Once individual object shapes have been specified, we can place the objects into
appropriate positions within the scene using a reference frame called world co-
ordinates. The world co-ordinates description of the scene is transferred to one or
more output device reference frames for display. The display co-ordinate systems
are referred to as device co-ordinates/screen co-ordinates in the case of video
monitors.
Generally, a graphics system first converts world co-ordinates to normalized
device co-ordinates, in the range 0 to 1, before the final conversion to specific
device co-ordinates. An initial modeling co-ordinate position (xmc, ymc) is
transferred to a device co-ordinate position (xdc, ydc) with the sequence
(xmc, ymc) → (xwc, ywc) → (xnc, ync) → (xdc, ydc). The normalized co-ordinates
satisfy the inequalities 0 ≤ xnc ≤ 1, 0 ≤ ync ≤ 1, and the device co-ordinates
xdc and ydc are integers within the range (0, 0) to (xmax, ymax) for a particular
device.
GRAPHICS FUNCTIONS:
A general purpose graphics package provides users with a variety of functions
for creating and manipulating pictures. The basic building blocks for pictures are
referred to as output primitives. They include character strings and geometric
entities such as points, straight lines, curved lines, filled areas and shapes defined
with arrays of color points.
Attributes are the properties of the output primitives; that is, an attribute
describes how a particular primitive is to be displayed. We can change the size,
position or orientation of an object within a scene using geometric
transformations. Similar modeling transformations are used to construct a scene
using object descriptions given in modeling co-ordinates.
Viewing transformations are used to specify the view that is to be presented and
the portion of the output display area that is to be used. Pictures can be subdivided
into component parts called structures/segments/objects, depending on the software
package in use.
Interactive graphics applications use various kinds of input devices, such as a
mouse, a tablet or a joystick. Input functions are used to control and process the
data flow from those interactive devices. A graphics package also performs a
number of housekeeping tasks, such as clearing a display screen and initializing
parameters. The functions for carrying out these chores fall under the heading of
control operations.
SOFTWARE STANDARDS:
The primary goal of standardized graphics software is portability. When
packages are designed with standard graphics functions, software can be moved
easily from one hardware system to another and used in different implementations
and applications.
International and National standards planning organization have co-operated in
an effort to develop a generally accepted standard for computer graphics. After
considerable effort, this work on standards led to the development of the Graphical
Kernel System (GKS). This system was adopted as the first graphics software
standard by International Standard Organization (ISO), and by others.
The second software standard to be developed and approved by the standards
organizations was the Programmer's Hierarchical Interactive Graphics Standard
(PHIGS), which is an extension of GKS.
Standard Graphics functions are defined as a set of specifications that is
independent of any programming language. A language binding is then defined for
a particular high level programming language. Standardization for device interface
methods is given in Computer Graphics Interface (CGI) system and the Computer
Graphics Metafile (CGM) system specifies standards for archiving and
transporting pictures.
PHIGS WORKSTATION:
A workstation is a computer system with a combination of input and output
devices configured for a single user. In PHIGS and GKS, the term workstation is
used to identify various combinations of graphics hardware and software.
A PHIGS workstation can be a single output device, single input device, a
combination of input and output devices, a file or even a window displayed on a
video monitor. To define and use various “workstations” within an application
program, we need to specify a workstation identifier and the workstation type.
COMPONENTS OF GUI:
A GUI uses a combination of technologies and devices to provide a platform
the user can interact with for the tasks of gathering and producing information.
A series of elements conforming to a visual language have evolved to represent
information stored in computers. This makes it easier for people with little
computer skill to work with and use computer software.
The most common combination of such elements in GUIs is the WIMP
(windows, icons, menus, pointer) paradigm, especially in personal computers.
A window manager facilitates the interactions between windows, applications
and the windowing system. The windowing system handles hardware devices, such
as pointing devices and graphics hardware as well as the positioning of the cursor.
In personal computers all these elements are modeled through a desktop
metaphor, to produce a simulation called a desktop environment in which the
display represents a desktop, upon which documents and folders of documents can
be placed.
USER INTERFACE AND INTERACTION DESIGN:
Designing the visual composition and temporal behavior of GUI is an important
part of software application programming. Its goal is to enhance the efficiency and
ease of use for the underlying logical design of a stored program, a design
discipline known as usability. Techniques of user-centered design are used to
ensure that the visual language introduced in the design is well tailored to the tasks
it must perform.
The widgets of a well-designed interface are selected to support the actions
necessary to achieve the goals of the user. A model-view-controller design allows for a
flexible structure in which the interface is independent from and indirectly linked
to application functionality, so the GUI can be easily customized. This allows the
user to select or design a different skin at will and eases the designer’s work to
change the interface as the user needs evolve.
The visible graphical interface features of an application are sometimes referred
to as “chrome”. Larger widgets such as windows usually provide a frame or
container for the main presentation content, such as a web page, email message or
drawing.
A GUI may be designed for the rigorous requirements of a vertical market. This
is known as “Application Specific Graphical User Interface”. Examples of an
application specific GUI are:
i) Touch screen point of sale software used by wait staff in a busy restaurant.
ii) Self-service checkouts used in a retail store.
iii)Automated Teller Machines (ATM)
iv) Information kiosks in a public space, like a train station or a museum.
v) Monitors or control screens in an embedded industrial application which
employ a real-time operating system (RTOS).
COMMAND LINE INTERFACES:
GUIs were introduced in reaction to the steep learning curve of command line
interfaces (CLI), which require commands to be typed on the keyboard. The
commands available in command line interfaces can be numerous, so complicated
operations can be completed using a short sequence of words and symbols. This
allows for greater efficiency and productivity once many commands are learnt, but
reaching this level takes some time because the command words are not easily
discoverable and not mnemonic.
Command line interfaces use modes only in limited forms, such as the current
directory and environment variables. Most modern operating systems provide both
a GUI and some level of a CLI, although the GUIs usually receive more attention.
THREE-DIMENSIONAL USER INTERFACES:
Three-dimensional images are projected onto two-dimensional screens. Since this
technique has been in use for many years, the recent use of the term three-
dimensional must be considered a declaration by equipment marketers that the
speed of three-dimension to two-dimension projection is adequate for use in
standard GUIs.
CHAPTER – 3
LINE DRAWING ALGORITHMS:
1. SIMPLE LINE DRAWING ALGORITHM:
The Cartesian slope-intercept equation for a straight line is:
y = mx + b => b = y – mx,
where m is the slope of the line and b is the y-intercept.
If the two end points of a line segment are specified at positions (x1, y1) and
(x2, y2), the values of the slope m and the y-intercept b are:
m = ∆y/∆x = (y2 – y1)/(x2 – x1)
=> ∆y = m∆x ------- (1) and
∆x = ∆y/m ------ (2)
b = y1 – mx1
Here ∆y is the y-interval computed for a given x-interval ∆x along the
line, ∆y = m∆x. Similarly, the x-interval ∆x corresponding to a specified
∆y is ∆x = ∆y/m.
For lines with slope magnitude |m| < 1, we increment ∆x and calculate ∆y
as ∆y = m∆x. For |m| > 1, we increment ∆y and calculate ∆x as ∆x = ∆y/m.
For |m| = 1, ∆x and ∆y are incremented together and the horizontal and
vertical deflections are equal.
[Figure: a line segment between end points (x1, y1) and (x2, y2), with x-interval ∆x and y-interval ∆y]
Example: Draw a line from (3, 2) to (7, 8).
Here, (x1, y1) = (3, 2) and (x2, y2) = (7, 8).
So m = (y2 – y1)/(x2 – x1) = (8 – 2)/(7 – 3) = 6/4 = 3/2 > 1. Since m > 1, ∆y
is incremented in unit steps and ∆x = ∆y/m.
∆x ∆y x = x1 + ∆x y = y1 + ∆y
2/3 1 11/3 3
4/3 2 13/3 4
2 3 5 5
8/3 4 17/3 6
10/3 5 19/3 7
4 6 7 (x2) 8 (y2)
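The x values in the table can be checked directly from ∆x = ∆y/m. The helper below is only an illustrative sketch of that arithmetic (the function name is hypothetical):

```c
/* For |m| > 1 we step y in unit intervals and compute the matching
   x = x1 + dy/m, exactly as in the table above. */
double line_x_at(double x1, double y1, double x2, double y2, int dy)
{
    double m = (y2 - y1) / (x2 - x1);  /* slope; assumed |m| > 1 here */
    return x1 + dy / m;                /* dx = dy / m                 */
}
```

For the example line, ∆y = 3 gives x = 3 + 3/(3/2) = 5, matching the table row (5, 5).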
2. DIGITAL DIFFERENTIAL ANALYSER ALGORITHM (DDA):
This is a scan conversion line algorithm based on calculating either ∆y or
∆x as: ∆y = m∆x and ∆x = ∆y/m. We sample the line at unit intervals in one
co-ordinate and determine corresponding integer values nearest the line path
for the other co-ordinate.
Consider first a line with positive slope. If m ≤ 1, we sample at unit x-
intervals (∆x = 1) and calculate successive ‘y’ values as: yk+1 = yk + m. (∆x
increases by 1 always, so ∆x = 1).
Here the subscript k takes integer values starting from 1 for the first point
and increases by 1 until the final end point is reached. Since m can be any
real number between 0 and 1, the calculated y values must be rounded to the
nearest integer.
If m > 1, we sample at unit y-intervals (∆y = 1) and calculate each
succeeding x value as: xk+1 = xk + 1/m. (∆y increases by 1 always, so ∆y =
1.) Here the lines are processed from the left end point to the right end point.
If a line is processed from the right end point to the left, then ∆x = –1 and
yk+1 = yk – m (for m ≤ 1); when ∆y = –1 and m > 1, then xk+1 = xk – 1/m.
Example: Draw a line from (3, 2) to (7, 8).
Here, xk = 3, yk = 2 and m = (y2 – y1)/(x2 – x1) = (8 – 2)/(7 – 3) = 6/4 = 3/2 > 1.
Since m > 1, yk increases by 1 at each step.
xk+1 = xk + 1/m yk+1 = yk + m
11/3 3
13/3 4
15/3 = 5 5
17/3 6
19/3 7
7 (x2) 8 (y2)
3. BRESENHAM’S LINE DRAWING ALGORITHM:
An accurate and efficient raster line-generating algorithm was developed by
Bresenham. In this approach, for slope |m| < 1, pixel positions along a line
path are determined by sampling at unit x-intervals. Starting from the left
end point (x0, y0) of a given line, we step to each successive column (x-
position) and plot the pixel whose scan-line y-value is closest to the line
path.
Assuming the pixel at (xk, yk) has been displayed at the kth step, we next
need to decide which pixel to plot in column xk+1; the choices are
(xk+1, yk) and (xk+1, yk+1). At xk+1, we label the vertical pixel separations
from the mathematical line path as d1 and d2.
So at xk+1, the ‘y’ co-ordinate is:
y = m(xk + 1) + b ------- (1)
d1 = y – yk = m(xk +1) + b – yk
d2 = (yk + 1) – y = yk + 1 – m(xk + 1) – b
The difference between these 2 separations is:
d1 – d2 = m(xk + 1) + b – yk – (yk +1) + m(xk + 1) + b
= 2m(xk + 1) + 2b - yk – (yk +1) = 2m(xk + 1) + 2b - yk – yk – 1
=> d1 – d2 = 2m(xk + 1) + 2b - 2yk – 1 ------- (2)
The decision parameter Pk for the kth step in the line algorithm is calculated
with the substitution m = ∆y/∆x.
So, Pk = ∆x(d1 – d2)
= ∆x(2m(xk + 1) + 2b - 2yk – 1)
[Figure: pixel grid columns xk to xk+3 and scan lines yk to yk+3, with the line y = mx + b passing between candidate pixels]
= ∆x(2∆y/∆x (xk + 1) + 2b - 2yk – 1)
= 2∆y(xk + 1) + 2b∆x - 2yk ∆x - ∆x
= 2∆yxk + 2∆y + 2b∆x - 2yk ∆x - ∆x
= 2∆yxk - 2∆xyk + 2∆y + 2b∆x - ∆x
= 2∆yxk - 2∆xyk + 2∆y + ∆x(2b -1)
= 2∆yxk - 2∆xyk + C
=> Pk = 2∆yxk - 2∆xyk + C ------- (3)
The sign of ‘Pk’ is same as the sign of d1 – d2, since ∆x > 0, C = constant
and value of C = 2∆y + ∆x(2b -1), independent of pixel position. If the pixel
at ‘yk’ is closer to the line path than the pixel at yk+1 (i.e. d1 < d2), then the
decision parameter ‘Pk’ is negative.
At step k + 1, the decision parameter is evaluated as:
Pk+1 = 2∆y·xk+1 – 2∆x·yk+1 + C ------- (4)
Now, subtract equation – (3) from equation – (4), we get:
Pk+1 – Pk = 2∆y(xk+1 – xk) - 2∆x(yk+1 – yk), but xk+1 = xk + 1, so that
Pk+1 = Pk + 2∆y(xk+1 – xk) - 2∆x(yk+1 – yk)
=> Pk+1 = Pk + 2∆y - 2∆x(yk+1 – yk) ------- (5)
Where, yk+1 – yk is either 0 or 1, depending on the sign of parameter Pk.
The recursive calculation of decision parameters is performed at each
integer x-position, starting at the left co-ordinate end point of the line. The
first parameter P0 is evaluated from equation (3) at the starting pixel
position (x0, y0), with m evaluated as ∆y/∆x. So, P0 = 2∆y – ∆x --- (6).
ALGORITHM:
STEP 1: Input the two line end points and store the left end point in
(x0, y0)
STEP 2: Load the (x0, y0) or the initial point of the line into the frame
buffer then plot the first point.
STEP 3: Calculate the constants ∆x, ∆y, 2∆y and 2∆y – 2∆x, and obtain the
starting value for the decision parameter: P0 = 2∆y – ∆x.
STEP 4: At each xk along the line, starting at k = 0, perform the
following test:
If Pk < 0, plot (xk + 1, yk) and Pk+1 = Pk + 2∆y;
otherwise, plot (xk + 1, yk + 1) and Pk+1 = Pk + 2∆y – 2∆x.
STEP 5: Repeat STEP – 4, ∆x times.
Example: To illustrate the algorithm, we digitize the line with end points
(20, 10) and (30, 18).
Here the line has slope:
m = (y2 – y1)/(x2 – x1) = (18 – 10)/(30 – 20) = 8/10 = 0.8 < 1
∆x = 30 – 20 = 10 and ∆y = 18 – 10 = 8.
The initial decision parameter is: P0 = 2∆y - ∆x = 2 x 8 – 10 = 16 – 10 = 6
and the increments for calculating successive decision parameters are:
∆x = 10, ∆y = 8, 2∆y = 16, 2∆y - 2∆x = 16 – 20 = - 4. At k = 0, we plot the
initial point (xk, yk) = (x0, y0) = (20, 10) and determine successive pixel
positions along the line path from the decision parameter as:
k Pk (xk +1, yk +1) Decision parameter calculation
0 6 (21, 11) P0 = 2∆y - ∆x = 2 x 8 – 10 = 16 – 10 = 6
1 2 (22, 12) Pk+1 = Pk + 2∆y - 2∆x = 6 + 2 x 8 – 2 x 10 = 2
2 -2 (23, 12) Pk+1 = Pk + 2∆y - 2∆x = 2 + 2 x 8 – 2 x 10 = -2
3 14 (24, 13) Pk+1 = Pk + 2∆y = -2 + 2 x 8 = 14
4 10 (25, 14) Pk+1 = Pk + 2∆y - 2∆x = 14 + 2 x 8 – 2 x 10 = 10
5 6 (26, 15) Pk+1 = Pk + 2∆y - 2∆x = 10 + 2 x 8 – 2 x 10 = 6
6 2 (27, 16) Pk+1 = Pk + 2∆y - 2∆x = 6 + 2 x 8 – 2 x 10 = 2
7 -2 (28, 16) Pk+1 = Pk + 2∆y - 2∆x = 2 + 2 x 8 – 2 x 10 = -2
8 14 (29, 17) Pk+1 = Pk + 2∆y = -2 + 2 x 8 = 14
9 10 (30, 18) Pk+1 = Pk + 2∆y - 2∆x = 14 + 2 x 8 – 2 x 10 = 10
It will continue upto 10 times, starting from 0 to 9, because ∆x = 10.
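The steps above, for a line with 0 < m < 1 processed left to right, can be sketched as follows (pixels are again recorded in arrays for inspection; that interface is illustrative only):

```c
/* Bresenham sketch for slopes between 0 and 1. The decision parameter
   p starts at 2*dy - dx and is updated with integer increments only.
   Assumes x1 > x0 and 0 <= dy <= dx. Returns the pixel count. */
int bresenhamLine(int x0, int y0, int x1, int y1, int xs[], int ys[])
{
    int dx = x1 - x0, dy = y1 - y0;
    int p = 2 * dy - dx;                 /* P0 = 2*dy - dx */
    int twoDy = 2 * dy, twoDyMinusDx = 2 * (dy - dx);
    int x = x0, y = y0, n = 0;
    xs[n] = x; ys[n] = y; n++;           /* plot the left end point */
    while (x < x1) {
        x++;
        if (p < 0) {
            p += twoDy;                  /* keep y:  Pk+1 = Pk + 2*dy        */
        } else {
            y++;
            p += twoDyMinusDx;           /* step y:  Pk+1 = Pk + 2*dy - 2*dx */
        }
        xs[n] = x; ys[n] = y; n++;
    }
    return n;
}
```

Run on the example end points (20, 10) and (30, 18), this reproduces the pixel positions listed in the table.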
CIRCLE GENERATING ALGORITHMS:
1. PROPERTIES OF THE CIRCLE:
A circle is defined as the set of points that are all at a given distance r
from a center position (xc, yc). This distance relationship is expressed in
Cartesian co-ordinates as:
(x – xc)² + (y – yc)² = r² ------ (1)
With the center at the co-ordinate origin, this reduces to:
x² + y² = r² ------- (2)
We could calculate the position of points on the circle circumference by stepping
along the x-axis in unit steps from xc – r to xc + r and calculating the
corresponding y values at each position as:
y = yc ± √(r² – (x – xc)²) ------ (3)
To calculate points along the circular boundary, polar co-ordinates r
and θ can also be used. Expressing the circle equation in parametric polar form
yields the pair of equations:
x = xc + r·cosθ and y = yc + r·sinθ
Computation can be reduced by considering the symmetry of circles. The
shape of the circle is similar in each quadrant. We can generate the circle
section in the second quadrant of the xy-plane by noting that the two circle
sections are symmetric with respect to the y-axis, and the circle sections in
the third and fourth quadrants can be obtained from the sections in the first
and second quadrants by considering symmetry about the x-axis.
Circle sections in adjacent octants within one quadrant are symmetric
with respect to the 45-degree line dividing the two octants. These symmetry
conditions are illustrated in the figure below, where a point at position (x, y) on
a one-eighth circle sector is mapped onto the seven circle points in the other
octants of the xy-plane. Taking advantage of this circle symmetry, we can
generate all pixel positions around a circle by calculating only the points
within the sector from x = 0 to x = y.
2. MID-POINT CIRCLE/BRESENHAM’S CIRCLE GENERATING
ALGORITHM:
We set up the algorithm to calculate pixel positions around a circle path
centered at the co-ordinate origin (0, 0). Each calculated position (x, y)
is then moved to its proper screen position by adding xc to x and yc to y,
where (xc, yc) is the center position of the circle with radius r.
Along the circle section from x = 0 to x = y in the first quadrant, the slope of
the curve varies from 0 to –1. To apply the midpoint method, we define the
circle function with (0, 0) as its center:
Fcircle(x, y) = x² + y² – r² --- (1)
[Figure: a point (x, y) on a circle octant mapped by symmetry onto the seven points (y, x), (y, –x), (x, –y), (–x, –y), (–y, –x), (–y, x) and (–x, y) across the 45° lines]
The circle function has three properties at any point (x, y):
Fcircle(x, y) < 0, if (x, y) is inside the circle boundary
             = 0, if (x, y) is on the circle boundary --- (2)
             > 0, if (x, y) is outside the circle boundary
Suppose we have just plotted the pixel at (xk, yk); we next determine whether the
pixel at position (xk + 1, yk) or the one at position (xk + 1, yk – 1) is closer to
the circle. The decision parameter is the circle function of equation (1)
evaluated at the midpoint between these two pixels:
Pk = Fcircle(xk + 1, yk – 1/2) = (xk + 1)² + (yk – 1/2)² – r² --- (3)
[since the midpoint between yk and yk – 1 is (yk + yk – 1)/2 = yk – 1/2]
If Pk < 0, this midpoint is inside the circle and the pixel on scan line yk is
closer to the circle boundary; otherwise the midpoint is outside or on the
circle boundary and we select the pixel on scan line yk – 1.
The recursive expression for the next decision parameter is obtained by
evaluating the circle function at the sampling position xk+1 + 1 = xk + 2:
Pk+1 = Fcircle(xk+1 + 1, yk+1 – 1/2) = [(xk + 1) + 1]² + (yk+1 – 1/2)² – r²
=> Pk+1 = Pk + 2(xk + 1) + (yk+1² – yk²) – (yk+1 – yk) + 1 --- (4),
where yk+1 is either yk or yk – 1, depending on the sign of Pk.
Increments for obtaining Pk+1 are either 2xk+1 + 1 (if Pk is negative) or
2xk+1 + 1 – 2yk+1. Evaluation of the terms 2xk+1 and 2yk+1 can also be done
incrementally as: 2xk+1 = 2xk + 2 and 2yk+1 = 2yk – 2. At the start position
(0, r), the initial decision parameter is obtained by evaluating the circle
function at (x0, y0) = (0, r). So,
P0 = Fcircle(1, r – 1/2) = 1 + (r – 1/2)² – r²
   = 1 + r² – r + 1/4 – r² = 5/4 – r --- (5)
If the radius r is specified as an integer, we can simply round P0 to:
P0 = 1 – r (for r an integer).
ALGORITHM:
STEP 1: Input radius ‘r’ and circle center (xc, yc) and obtain the first point
on the circumference of a circle centered on the origin as: (x0, y0) = (0, r).
STEP 2: Calculate the initial value of decision parameter as P0 = 5/4 – r.
STEP 3: At each xk position, starting at k = 0, perform the following test:
i) If Pk < 0, the next point along the circle centered on (0, 0) is
(xk + 1, yk) and Pk+1 = Pk + 2xk+1 + 1; otherwise,
ii) the next point along the circle is (xk + 1, yk – 1) and
Pk+1 = Pk + 2xk+1 + 1 – 2yk+1,
where 2xk+1 = 2xk + 2 and 2yk+1 = 2yk – 2.
STEP 4: Determine symmetry points in the other seven octants.
STEP 5: Move each calculated pixel position (x, y) onto the circular path
centered on (xc, yc) and plot the co-ordinate values x = x + xc and y = y + yc.
STEP 6: Repeat STEP 3 through STEP 5 until x ≥ y.
Example: Given a circle radius r = 10, we demonstrate the midpoint circle
algorithm by determining positions along the circle octant in the first
quadrant from x = 0 to x = y.
The initial value of the decision parameter is P0 = 1 – r = –9. For the circle
centered on the co-ordinate origin, the initial point is (x0, y0) = (0, 10) and
the initial increment terms are: 2x0 = 0, 2y0 = 20.
Successive decision parameter values and positions along the circle path
are calculated using the midpoint method as:
k Pk (xk+1, yk+1) 2xk+1 2yk+1
0 - 9 (1, 10) 2 20
1 - 6 (2, 10) 4 20
2 - 1 (3, 10) 6 20
3 6 (4, 9) 8 18
4 - 3 (5, 9) 10 18
5 8 (6, 8) 12 18
6 5 (7, 7) 14 16
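The algorithm for the first octant can be sketched as below; the array interface is illustrative only, and the remaining seven octants follow from the symmetry discussed earlier:

```c
/* Midpoint circle sketch for the octant from (0, r) to the x = y line,
   using the rounded initial parameter P0 = 1 - r. Returns pixel count. */
int midpointCircleOctant(int r, int xs[], int ys[])
{
    int x = 0, y = r;
    int p = 1 - r;                  /* rounded form of P0 = 5/4 - r */
    int n = 0;
    xs[n] = x; ys[n] = y; n++;
    while (x < y) {
        x++;
        if (p < 0) {
            p += 2 * x + 1;         /* Pk+1 = Pk + 2x(k+1) + 1            */
        } else {
            y--;
            p += 2 * x + 1 - 2 * y; /* Pk+1 = Pk + 2x(k+1) + 1 - 2y(k+1)  */
        }
        xs[n] = x; ys[n] = y; n++;
    }
    return n;
}
```

For r = 10 this generates the positions (0, 10) through (7, 7) listed in the table.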
ELLIPSE GENERATING ALGORITHM:
1. PROPERTIES OF THE ELLIPSE:
An ellipse is the set of points such that the sum of the distances from two
fixed positions (the foci) is the same for all points on the ellipse.
If the distances to the two foci from any point P = (x, y) on the ellipse
are labeled d1 and d2, then the general equation of an ellipse can be stated as:
d1 + d2 = Constant --- (1)
Expressing the distances d1 and d2 in terms of the focal co-ordinates
F1 = (x1, y1) and F2 = (x2, y2), we have:
√((x – x1)² + (y – y1)²) + √((x – x2)² + (y – y2)²) = Constant --- (2)
By squaring this equation, isolating the remaining radical and then
squaring again, we can write the general ellipse equation in the form:
Ax² + By² + Cxy + Dx + Ey + F = 0 --- (3)
Here the coefficients A, B, C, D, E and F are evaluated in terms of the
focal co-ordinates and the dimensions of the major and minor axes of the ellipse.
[Figure: an ellipse with foci F1 and F2 showing the distances d1 and d2 from a point P = (x, y), and an ellipse in standard position with semi-axes rx and ry centered at (xc, yc)]
Consider an ellipse in standard position, with its major and minor axes oriented
parallel to the x and y axes. Parameter rx is the semi-major axis and
parameter ry is the semi-minor axis. The equation of the ellipse can then be
written in terms of the ellipse center co-ordinates and the parameters rx and
ry as:
((x – xc)/rx)² + ((y – yc)/ry)² = 1 --- (4)
Using polar co-ordinates r and θ, the parametric equations of the
ellipse are:
x = xc + rx·cosθ and y = yc + ry·sinθ --- (5)
Symmetry considerations can be used to further reduce computation.
2. MID-POINT ELLIPSE/BRESENHAM’S ELLIPSE GENERATING
ALGORITHM:
The midpoint method is applied throughout the first quadrant in two parts,
divided according to the slope of an ellipse with rx < ry. We process this
quadrant by taking unit steps in the x-direction where the slope of the curve
has a magnitude less than 1 (region 1), and unit steps in the y-direction where
the slope has a magnitude greater than 1 (region 2).
For a sequential implementation of the midpoint algorithm, we take the start
position at (0, ry) and step clockwise throughout the first quadrant. The
ellipse function from equation (4), with (xc, yc) = (0, 0), is:
Fellipse(x, y) = ry²x² + rx²y² – rx²ry² --- (6)
[Figure: left, a point (x, y) on an ellipse mapped to its symmetric points (–x, y), (x, –y) and (–x, –y) in the other quadrants; right, the first quadrant divided into region 1 and region 2 at the boundary point where the slope m = –1]
The ellipse function has three properties at any point (x, y), given in
equation (7) below; thus Fellipse(x, y) serves as the decision parameter in
the midpoint algorithm.
Starting at (0, ry), we take unit steps in the x-direction until we reach the
boundary between region 1 and region 2. The slope is calculated from
equation (6) as:
m = dy/dx = –2ry²x / 2rx²y --- (8)
At the boundary between region 1 and region 2, dy/dx = –1 and 2ry²x =
2rx²y. So we move out of region 1 when 2ry²x ≥ 2rx²y.
In region 1, the midpoint between the two candidate pixels is taken at
sampling position xk + 1. Assuming position (xk, yk) has been selected at the
previous step, we determine the next position along the ellipse path by
evaluating the decision parameter at this midpoint:
P1k = Fellipse(xk + 1, yk – 1/2) = ry²(xk + 1)² + rx²(yk – 1/2)² – rx²ry² --- (9)
If P1k < 0, the midpoint is inside the ellipse and the pixel on scan line yk
is closer to the ellipse boundary; otherwise the midpoint is outside or on the
ellipse boundary and we select the pixel on scan line yk – 1.
Fellipse(x, y) < 0, if (x, y) is inside the ellipse boundary
              = 0, if (x, y) is on the ellipse boundary --- (7)
              > 0, if (x, y) is outside the ellipse boundary
At the next sampling position (xk+1 + 1 = xk + 2), the decision parameter
for region 1 is evaluated as:
P1k+1 = Fellipse(xk+1 + 1, yk+1 – 1/2) = ry²[(xk + 1) + 1]² + rx²(yk+1 – 1/2)² – rx²ry²
=> P1k+1 = P1k + 2ry²(xk + 1) + ry² + rx²[(yk+1 – 1/2)² – (yk – 1/2)²] --- (10),
where yk+1 is either yk or yk – 1, depending on the sign of P1k.
The decision parameters are incremented by the amounts given below. At the
initial position (0, ry), the two terms evaluate to:
2ry²x = 0 --- (11)
2rx²y = 2rx²ry --- (12)
As x and y are incremented, the updated values are obtained by adding 2ry²
to equation (11) and subtracting 2rx² from equation (12). The updated
values are compared at each step, and we move from region 1 to region 2
when the condition 2ry²x ≥ 2rx²y is satisfied.
In region 1, the initial value of the decision parameter is obtained by
evaluating the ellipse function at the start position (x0, y0) = (0, ry):
P10 = Fellipse(1, ry – 1/2) = ry² + rx²(ry – 1/2)² – rx²ry²
=> P10 = ry² – rx²ry + (1/4)rx² --- (13)
Increment = 2ry²xk+1 + ry², if P1k < 0
          = 2ry²xk+1 + ry² – 2rx²yk+1, if P1k ≥ 0
Over region 2, we sample at unit steps in the negative y-direction, and the
midpoint is now taken between horizontal pixels at each step. For this region,
the decision parameter is evaluated as:
P2k = Fellipse(xk + 1/2, yk – 1) = ry²(xk + 1/2)² + rx²(yk – 1)² – rx²ry² --- (14)
If P2k > 0, the midpoint is outside the ellipse and we select the pixel at
column xk. If P2k ≤ 0, the midpoint is inside or on the ellipse boundary
and we select the pixel position at xk + 1.
To determine the relationship between successive decision parameters in
region 2, we evaluate the ellipse function at the next sampling step,
yk+1 – 1 = yk – 2:
P2k+1 = Fellipse(xk+1 + 1/2, yk+1 – 1) = ry²(xk+1 + 1/2)² + rx²[(yk – 1) – 1]² – rx²ry²
=> P2k+1 = P2k – 2rx²(yk – 1) + rx² + ry²[(xk+1 + 1/2)² – (xk + 1/2)²] --- (15),
where xk+1 is set either to xk or to xk + 1, depending on the sign of P2k.
When we enter region 2, the initial position (x0, y0) is taken as the last
position selected in region 1, and the initial decision parameter in region 2 is:
P20 = Fellipse(x0 + 1/2, y0 – 1) = ry²(x0 + 1/2)² + rx²(y0 – 1)² – rx²ry² --- (16)
To simplify the calculation of P20, we could instead select pixel positions in
counterclockwise order, starting at (rx, 0); unit steps would then be taken in
the positive y-direction up to the last position selected in region 1.
ALGORITHM:
STEP 1: Input rx, ry and the ellipse center (xc, yc), and obtain the first point
on an ellipse centered on the origin as: (x0, y0) = (0, ry).
STEP 2: Calculate the initial value of the decision parameter in region 1 as:
P10 = ry² – rx²ry + (1/4)rx²
STEP 3: At each xk position in region 1, starting at k = 0, perform the
following test:
i) If P1k < 0, the next point along the ellipse centered on (0, 0) is
(xk + 1, yk) and P1k+1 = P1k + 2ry²xk+1 + ry²; otherwise,
ii) the next point along the ellipse is (xk + 1, yk – 1) and
P1k+1 = P1k + 2ry²xk+1 – 2rx²yk+1 + ry²,
with 2ry²xk+1 = 2ry²xk + 2ry² and 2rx²yk+1 = 2rx²yk – 2rx²,
and continue until 2ry²x ≥ 2rx²y.
STEP 4: Calculate the initial value of the decision parameter in region 2,
using the last point (x0, y0) calculated in region 1, as:
P20 = ry²(x0 + 1/2)² + rx²(y0 – 1)² – rx²ry²
STEP 5: At each yk position in region 2, starting at k = 0, perform the
following test:
i) If P2k > 0, the next point along the ellipse centered on (0, 0) is
(xk, yk – 1) and P2k+1 = P2k – 2rx²yk+1 + rx²; otherwise,
ii) the next point along the ellipse is (xk + 1, yk – 1) and
P2k+1 = P2k + 2ry²xk+1 – 2rx²yk+1 + rx²,
using the same incremental calculations for x and y as in region 1, and
continue until y = 0.
STEP 6: Determine symmetry points in the other three quadrants.
STEP 7: Move each calculated pixel position (x, y) onto the elliptical path
centered on (xc, yc) and plot the co-ordinate values: x = x + xc, y = y + yc.
STEP 8: Repeat the steps for region 1 until 2ry²x ≥ 2rx²y.
Example: Given the input ellipse parameters rx = 8 and ry = 6, we illustrate the
steps in the midpoint ellipse algorithm by determining raster positions along the
ellipse path in the first quadrant. The initial values and increments for the
decision parameter calculations are:
2ry²x = 0 (with increment 2ry² = 72) and
2rx²y = 2rx²ry (with increment –2rx² = –128)
For Region 1:
The initial point for the ellipse centered on the origin is (x0, y0) = (0, 6)
and the initial decision parameter value is: P10 = ry² – rx²ry + (1/4)rx² = –332.
Successive decision parameter values and positions along the ellipse path
are calculated using the midpoint method as:
k P1k (xk+1, yk+1) 2ry²xk+1 2rx²yk+1
0 –332 (1, 6) 72 768
1 –224 (2, 6) 144 768
2 –44 (3, 6) 216 768
3 208 (4, 5) 288 640
4 –108 (5, 5) 360 640
5 288 (6, 4) 432 512
6 244 (7, 3) 504 384
We now move out of region 1, since 2ry²x > 2rx²y.
For Region 2:
The initial point is (x0, y0) = (7, 3) and the initial decision parameter
value is: P20 = Fellipse(7 + 1/2, 2) = 36(7.5)² + 64(2)² – (64)(36) = –23.
The remaining positions along the ellipse path in the first quadrant are then
calculated as:
k P2k (xk+1, yk+1) 2ry²xk+1 2rx²yk+1
0 –23 (8, 2) 576 256
1 361 (8, 1) 576 128
2 297 (8, 0) – –
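Both regions can be sketched in one routine following the structure of the algorithm above. The array interface is illustrative only; double arithmetic keeps the 1/4 term of P10 exact:

```c
/* Midpoint ellipse sketch for the first quadrant: region 1 steps x,
   region 2 steps y. px and py track the running terms 2*ry^2*x and
   2*rx^2*y used for the region test and the parameter updates.
   Returns the number of pixels generated. */
int midpointEllipseQuadrant(int rx, int ry, int xs[], int ys[])
{
    double rx2 = (double)rx * rx, ry2 = (double)ry * ry;
    int x = 0, y = ry, n = 0;
    double px = 0, py = 2 * rx2 * y;
    double p = ry2 - rx2 * ry + 0.25 * rx2;       /* P1_0, eq. (13) */
    xs[n] = x; ys[n] = y; n++;
    while (px < py) {                             /* region 1 */
        x++;
        px += 2 * ry2;
        if (p < 0) {
            p += ry2 + px;
        } else {
            y--;
            py -= 2 * rx2;
            p += ry2 + px - py;
        }
        xs[n] = x; ys[n] = y; n++;
    }
    p = ry2 * (x + 0.5) * (x + 0.5)               /* P2_0, eq. (16) */
      + rx2 * (y - 1.0) * (y - 1.0) - rx2 * ry2;
    while (y > 0) {                               /* region 2 */
        y--;
        py -= 2 * rx2;
        if (p > 0) {
            p += rx2 - py;
        } else {
            x++;
            px += 2 * ry2;
            p += rx2 - py + px;
        }
        xs[n] = x; ys[n] = y; n++;
    }
    return n;
}
```

For rx = 8, ry = 6 this produces the positions (0, 6) through (7, 3) in region 1, then (8, 2), (8, 1) and (8, 0) in region 2.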
FILLED AREA PRIMITIVES:
A standard output primitive is a solid-color or patterned polygon area. Other
kinds of area primitives are sometimes available, but polygons are easier to
process since they have linear boundaries.
POLYGON FILLING:
It is the process of coloring in a defined area or region. A region can be
classified as either a boundary-defined region or an interior-defined region.
When a region is defined in terms of the pixels it comprises, it is known as an
interior-defined region. The algorithm used for filling an interior-defined
region is known as a flood-fill algorithm.
When a region is defined in terms of the bounding pixels that outline it, it is
known as a boundary-defined region. The algorithm used for filling a boundary-
defined region is known as a boundary-fill algorithm.
FLOOD-FILL ALGORITHM:
While using this algorithm, the user generally provides an initial pixel, known
as the seed pixel. Starting from the seed pixel, the algorithm inspects each of
the surrounding eight pixels to determine whether the extent of the region has
been reached.
In a 4-connected region, only four surrounding pixels are inspected: left, right,
top and bottom. The process is repeated until all pixels inside the region have
been inspected.
[Figure: an interior-defined region and a boundary-defined region]
The flood-fill procedure for a 4-connected region is:
void floodfill4(int x, int y, int fillcolor, int oldcolor)
{
if(getPixel(x, y) == oldcolor)
{
setColor(fillcolor);
setPixel(x, y);
floodfill4(x + 1, y, fillcolor, oldcolor);
floodfill4(x - 1, y, fillcolor, oldcolor);
floodfill4(x, y + 1, fillcolor, oldcolor);
floodfill4(x, y - 1, fillcolor, oldcolor);
}
}
[Figure: the eight neighbouring pixels P of a seed pixel S (8-connected), and the four neighbours left, right, top and bottom (4-connected)]
The flood-fill procedure for an 8-connected region is:
void floodfill8(int x, int y, int fillcolor, int oldcolor)
{
if(getPixel(x, y) == oldcolor)
{
setColor(fillcolor);
setPixel(x, y);
floodfill8(x + 1, y, fillcolor, oldcolor);
floodfill8(x - 1, y, fillcolor, oldcolor);
floodfill8(x, y + 1, fillcolor, oldcolor);
floodfill8(x, y - 1, fillcolor, oldcolor);
floodfill8(x + 1, y + 1, fillcolor, oldcolor);
floodfill8(x + 1, y - 1, fillcolor, oldcolor);
floodfill8(x - 1, y + 1, fillcolor, oldcolor);
floodfill8(x - 1, y - 1, fillcolor, oldcolor);
}
}
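The recursive procedures above can overflow the call stack on large regions; an explicit stack avoids that. The grid, its dimensions and the interface in this sketch are illustrative only:

```c
#include <stdlib.h>

#define W 8
#define H 8

static int grid[H][W];        /* a small in-memory stand-in for the frame buffer */

/* Iterative 4-connected flood fill using an explicit stack of (x, y)
   pairs instead of recursion. Out-of-bounds and already-filled
   positions are skipped when popped. */
void floodFill4Iter(int x, int y, int fillcolor, int oldcolor)
{
    if (oldcolor == fillcolor) return;           /* nothing to do */
    int *stack = malloc(sizeof(int) * (8 * W * H + 2));
    int top = 0;
    stack[top++] = x; stack[top++] = y;
    while (top > 0) {
        y = stack[--top]; x = stack[--top];
        if (x < 0 || x >= W || y < 0 || y >= H) continue;
        if (grid[y][x] != oldcolor) continue;
        grid[y][x] = fillcolor;
        stack[top++] = x + 1; stack[top++] = y;  /* right */
        stack[top++] = x - 1; stack[top++] = y;  /* left  */
        stack[top++] = x; stack[top++] = y + 1;  /* below */
        stack[top++] = x; stack[top++] = y - 1;  /* above */
    }
    free(stack);
}
```

Filling the zero-initialized grid from any seed reaches every connected cell without deep recursion.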
BOUNDARY-FILL ALGORITHM:
A Boundary-Fill Algorithm accepts as input the co-ordinates of an
interior point (x, y), a fill color and a boundary color. Starting from (x, y), the
procedure tests neighboring positions to determine whether they are of the boundary
color; if not, they are painted with the fill color and their neighbors are tested.
Two methods are used for area filling: 4-connected and 8-connected.
In the 4-connected form, four neighboring points of the current position are tested:
right, left, above and below the current pixel. In the 8-connected form, the set of
neighboring positions to be tested also includes the four diagonal pixels. An
8-connected boundary-fill algorithm can correctly fill interiors that the 4-connected
version misses where regions meet only at diagonals.
The procedure below is a recursive method for filling a 4-connected area with the
intensity specified in parameter fill, up to a boundary color specified with parameter
boundary:
[Figure: seed pixel S in a 4-connected and an 8-connected region]

void boundaryfill4(int x, int y, int fill, int boundary)
{
    int current;
    current = getPixel(x, y);
    if ((current != boundary) && (current != fill))
    {
        setColor(fill);
        setPixel(x, y);
        boundaryfill4(x + 1, y, fill, boundary);
        boundaryfill4(x - 1, y, fill, boundary);
        boundaryfill4(x, y + 1, fill, boundary);
        boundaryfill4(x, y - 1, fill, boundary);
    }
}
We can extend this procedure to fill an 8-connected region by including four
additional statements to test the diagonal positions, such as (x + 1, y - 1). The
procedure for an 8-connected area is:

void boundaryfill8(int x, int y, int fill, int boundary)
{
    int current;
    current = getPixel(x, y);
    if ((current != boundary) && (current != fill))
    {
        setColor(fill);
        setPixel(x, y);
        boundaryfill8(x + 1, y, fill, boundary);
        boundaryfill8(x - 1, y, fill, boundary);
        boundaryfill8(x, y + 1, fill, boundary);
        boundaryfill8(x, y - 1, fill, boundary);
        boundaryfill8(x + 1, y + 1, fill, boundary);
        boundaryfill8(x + 1, y - 1, fill, boundary);
        boundaryfill8(x - 1, y + 1, fill, boundary);
        boundaryfill8(x - 1, y - 1, fill, boundary);
    }
}
SCANLINE POLYGON-FILL ALGORITHM:
For each scan line crossing a polygon, the area-fill algorithm locates the
intersection points of the scan line with the polygon edges. These intersection
points are then sorted from left to right, and the corresponding frame-buffer
positions between each intersection pair are set to the specified fill color.
Here, the four pixel intersection positions with the polygon boundaries define
two stretches of interior pixels: from x = 10 to x = 14 and from x = 18 to x = 24.
A scan line passing through a vertex intersects two polygon edges at that
position, adding two points to the list of intersections for the scan line.
Here, two scan lines at positions 'y' and 'yꞌ' intersect edge endpoints. Scan-line
'y' intersects five polygon edges. Scan-line 'yꞌ' intersects an even number of edges
although it also passes through a vertex. The intersection points along scan-line
'yꞌ' correctly identify the interior pixel spans, but with scan-line 'y' we need to do
some additional processing to determine the correct interior points.
The topological difference between scan-line ‘y’ and scan-line ‘yꞌ’ is identified
by noting the position of the intersecting edge relative to the scan-line. For scan-
line ‘y’, the two intersecting edges sharing a vertex are on opposite sides of the
scan-line. But for scan-line ‘yꞌ’, the two intersecting edges are both above the scan-
line.
[Figures: interior pixel spans between x = 10, 14 and x = 18, 24; scan lines y and
yꞌ crossing polygon vertices, with edge-intersection counts marked]
Calculations performed in scan-conversion and other graphics algorithms
typically take advantage of various coherence properties of the scene to be
displayed. Coherence means simply that the properties of one part of a scene are
related in some way to other parts, so that the relationship can be used to reduce
processing.
Consider two successive scan lines crossing a left edge of a polygon. The slope
of this polygon boundary line can be expressed in terms of the scan-line
intersection co-ordinates as:

m = (yk+1 - yk)/(xk+1 - xk) --- (1)

Since the change in 'y' co-ordinates between the two scan lines is:

yk+1 - yk = 1 --- (2)

the x-intersection value xk+1 on the upper scan line can be determined from the
x-intersection value xk on the preceding scan line as:

xk+1 = xk + 1/m --- (3)
Along an edge with slope 'm', the intersection value xk for scan line 'k' above
the initial scan line can be calculated as:

xk = x0 + k/m --- (4)

In a sequential fill algorithm, the increment of 'x' values by the amount 1/m
along an edge can be accomplished with integer operations by recalling that the
slope 'm' is the ratio of two integers, m = ∆y/∆x, where ∆x and ∆y are the
differences between the edge endpoints' 'x' and 'y' co-ordinate values. So the
incremental calculation of x-intercepts along an edge for successive scan lines
can be expressed as:

xk+1 = xk + ∆x/∆y --- (5)
[Figure: successive scan lines yk and yk+1 intersecting a polygon edge at
(xk, yk) and (xk+1, yk+1)]
Using this equation, we can perform the integer evaluation of the x-intercepts by
initializing a counter to 0, then incrementing the counter by the value ∆x each
time we move up to a new scan line. Whenever the counter value becomes equal to
or greater than ∆y, we increment the current x-intersection value by 1 and decrease
the counter by the value ∆y.
ANTIALIASING:
The distortion of information due to low-frequency sampling is called aliasing.
We can improve the appearance of displayed raster lines by applying antialiasing
methods that compensate for the under-sampling process.
To avoid losing information from periodic objects, we need to set the
sampling frequency to at least twice the highest frequency occurring in the
object, referred to as the Nyquist sampling frequency (Nyquist sampling rate):

fs = 2fmax

Another way to state this is that the sampling interval should be no longer than
one-half the cycle interval, called the Nyquist sampling interval. For x-interval
sampling, the Nyquist sampling interval ∆xs is:

∆xs = ∆xcycle/2, where ∆xcycle = 1/fmax
[Figure: sampling positions (*) along a periodic shape, showing the effects of
under-sampling]
In the figure, the sampling interval is one and one-half times the cycle
interval, so the sampling interval is three times the Nyquist sampling interval,
i.e. three times too big.
A straightforward antialiasing method is to increase the sampling rate by
treating the screen as if it were covered with a finer grid than is actually available.
We can then use multiple sample points across this finer grid to determine an
appropriate intensity level for each screen pixel. This technique of sampling object
characteristics at a higher resolution and displaying the results at a lower resolution
is called super-sampling (post-filtering).
An alternative to super-sampling is to determine pixel intensity by calculating
the areas of overlap of each pixel with the objects to be displayed. Antialiasing by
computing overlap areas is referred to as area sampling (pre-filtering). Pixel
overlap areas are obtained by determining where object boundaries intersect
individual pixel boundaries. Raster objects can also be antialiased by shifting the
display location of pixel areas. This technique, called pixel phasing, is applied by
"micro-positioning" the electron beam in relation to the object geometry.
CHAPTER -4
LINE ATTRIBUTES:
The basic attributes of a straight line segment are its type, its width and its
color. Lines can also be displayed using selected pen or brush options.
i. LINE TYPE:
This attribute includes solid lines, dashed lines and dotted lines. The line-
drawing algorithm is modified to generate such lines by setting the length and
spacing of displayed solid sections along the line path.
A dashed line can be displayed by generating an inter-dash spacing that
is equal to the length of the solid sections. Both the length of the dashes and
the inter-dash spacing are often specified as user options.
A dotted line can be displayed by generating very short dashes with the
spacing equal to or greater than the dash size.
The line-type attribute is set with the command: set linetype(lt); where
parameter 'lt' is assigned a positive integer value of 1, 2, 3 or 4 to
generate lines that are, respectively, solid, dashed, dotted or dash-dotted. The
line-type parameter 'lt' could also be used to display variations in dot-dash
patterns.
ii. LINE WIDTH:
Implementation of line-width options depends on the capabilities of the
output device. A line-width command is used to set the current line-width
value in the attribute set. The command for this is:
set linewidthscalefactor(lw)
Here line-width parameter ‘lw’ is assigned to a positive number to
indicate the relative width of the line to be displayed.
A value of 1 specifies a standard-width line. A user can set 'lw' to a value of
0.5 to plot a line whose width is half the standard; values greater than
1 produce lines thicker than the standard.
Line-caps are used to adjust the shape of the line ends and give better
appearance. One kind of line cap is butt cap, obtained by adjusting the end
positions of the component parallel lines, so that the thick line is displayed
with square ends that are perpendicular to the line path. If the specified line
has slope 'm', the square end of the thick line has slope -1/m. Another line-
cap is the round cap, obtained by adding a filled semicircle to each butt cap.
The circular arcs are centered on the line endpoints and have a diameter
equal to the line thickness. A third type of line cap is the projecting square
cap, where we simply extend the line and add butt caps that are positioned
one-half of the line width beyond the specified endpoints.
[BUTTCAP] [ROUNDCAP] [PROJECTING SQUARE CAP]
We can generate thick polylines that are smoothly joined at the cost of
additional processing at the segment endpoints. A miter join is
accomplished by extending the outer boundaries of each of the two lines
until they meet. A round join is produced by capping the connection
between the two segments with a circular boundary whose diameter is equal
to the line width. A bevel join is generated by displaying the line segments
with butt caps and filling in the triangular gap where the segments meet.
iii. PEN AND BRUSH OPTIONS:
Lines can be displayed with pen and brush selections; options in this
category include shape, size and pattern.
These shapes can be stored in a pixel mask, which identifies the array of
pixel positions that are to be set along the line path. Lines generated with
pen or brush shapes can be displayed in various widths by changing the size
of the mask.
iv. LINE COLOR:
When a system provides color or intensity options, a parameter giving the
current color index is included in the list of system attribute values.
[MITER JOIN] [ROUND JOIN] [BEVEL JOIN]
A polyline routine displays a line in the current color by setting this color
value in the frame buffer at pixel locations along the line path using the set
pixel procedure. The number of color choices depends on the number of bits
available per pixel in the frame buffer. The function of line color is:
Set PolylineColorIndex(lc), where lc = line color parameter
A line drawn in the background color is invisible, and a user can erase a
previously displayed line by respecifying it in the background color.
E.g.: set linetype (2);
set linewidthscalefactor (2);
set PolylineColorIndex (5);
Polyline (n1, wcpoints1);
set PolylineColorIndex (6);
Polyline (n2, wcpoints2);
CURVE ATTRIBUTES:
Parameters for curve attributes are the same as those for line attributes: we can
display curves with varying colors, widths, dot-dash patterns and available pen or
brush options, and the methods are also the same as for line attributes. We can
generate the dashes in the various octants using circle symmetry, but we must shift
the pixel positions to maintain the correct sequence of dashes and spaces as we
move from one octant to the next.
Raster curves of various widths can be displayed using the method of horizontal
or vertical pixel spans: where the magnitude of the curve slope is less than 1, we
plot vertical spans; where the slope magnitude is greater than 1, we plot
horizontal spans.
Using circle symmetry, we generate the circle path with vertical spans in the
octant from x = 0 to x = y and then reflect pixel positions about the line y = x to
obtain the remainder of the curve. One way to display thick curves is to fill the
area between two parallel curve paths whose separation distance is equal to the
desired width.
AREA FILL ATTRIBUTES:
i. FILL STYLES:
Areas are displayed with three basic fill styles: hollow with a color
border, filled with a solid color, or filled with a specified pattern or design. The
function for the basic fill style is: set InteriorStyle(fs);
where fs = fill-style parameter, whose values include hollow, solid and
pattern. Another value for fill style is hatch, which is used to fill an area with
selected hatching patterns, i.e. parallel lines or crossed lines. Fill-style
parameter values are normally applied to polygon areas, but they can also be
implemented to fill regions with curved boundaries.
Hollow areas are displayed using only the boundary outline, with the
interior color the same as the background color. A solid fill is displayed in a
single color up to and including the borders of the region. The color for a
solid interior or for a hollow area outline is chosen with:

set InteriorColorIndex(fc);

where fc = fill color parameter, set to the desired color code.
ii. PATTERN FILL:
We select fill patterns with set InteriorStyleIndex(Pi), where
Pi = pattern index parameter, which specifies a table position.
Example: The following set of statements would fill the area defined in
the fill-area command with the second pattern type stored in the pattern
table:
set InteriorStyle(Pattern);
set InteriorStyleIndex(2);
fill Area(n, points);
Separate tables are set up for hatch patterns. If we had selected hatch fill
for the interior style in this program segment, then the value assigned to
parameter ‘Pi’ is an index to the stored patterns in the hatch table. For fill
style pattern, table entries can be created on individual output devices with:
set PatternRepresentation (ws, Pi, nx, ny, cp);
Where, parameter ‘Pi’ sets the pattern index number for workstation ws.
cp = two dimensional array of color codes with ‘nx’ columns and ‘ny’ rows.
[Figure: hollow, solid and pattern fill styles; diagonal hatch fill and diagonal
cross-hatch fill]
Example: The first entry in the pattern table for workstation 1 is:

cp[1, 1] = 4; cp[2, 2] = 4;
cp[1, 2] = 0; cp[2, 1] = 0;
set PatternRepresentation(1, 1, 2, 2, cp);

Index (Pi)    Pattern (cp)

1             | 4  0 |
              | 0  4 |

2             | 2  1  2 |
              | 1  2  1 |
              | 2  1  2 |

Here, the first entry in the pattern table for color array 'cp' specifies a pattern
that produces alternate red and black diagonal pixels.
When a color array ‘cp’ is to be applied to fill a region, we specify the
size of the area that is to be converted by each element of the array. We do
this by setting the rectangular co-ordinate extents of the pattern:
set Patternsize(dx, dy);
Here dx and dy is the co-ordinate width and height of the array mapping.
A reference position for starting a pattern fill is assigned with the
statement: set PatternReferencepoint(position);
Here, position = a pointer to co-ordinates (xp, yp) that fix the lower-left
corner of the rectangular pattern. From this starting position, the pattern is
then replicated in the 'x' and 'y' directions until the defined area is covered
by non-overlapping copies of the pattern array. The process of filling an area
with a rectangular pattern is called tiling, and rectangular fill patterns are
sometimes referred to as tiling patterns.
If the row positions in the pattern array are referenced in reverse (i.e. from
bottom to top, starting at 1), a pattern value is assigned to pixel position
(x, y) in screen or window co-ordinates as:

set Pixel(x, y, cp(y mod ny + 1, x mod nx + 1))

where 'ny' and 'nx' = number of rows and columns in the pattern array.
iii. SOFT FILL:
Modified boundary fill and flood fill procedures that are applied to
repaint areas, so that the fill color is combined with background colors are
referred to as Soft-fill/Tint-fill algorithm.
A linear soft fill algorithm repaints an area that was originally painted by
merging a foreground color ‘F’ with a single background color ‘B’. Assume,
we know the values for ‘F’ and ‘B’, we can determine how these colors were
originally combined by checking the current color contents of the frame
buffer. The current RGB color ‘P’ of each pixel within the area to be refilled
is some linear combination of ‘F’ and ‘B’.
P = tF + (1 – t)B --- (1)
where the transparency factor 't' has a value between 0 and 1 for each
pixel. For values of 't' less than 0.5, the background color contributes more to
the interior color of the region than does the fill color.
The vector equation (1) holds for each RGB component of

P = (PR, PG, PB), F = (FR, FG, FB), B = (BR, BG, BB) --- (2)

So we can calculate the value of parameter 't' using one of the RGB color
components as:

t = (PK - BK)/(FK - BK) --- (3)

where K = R, G or B, and FK ≠ BK. The parameter 't' has the same value
for each RGB component, but round-off to integer codes can result in
different values of 't' for different components.
We can minimize this round off error by selecting the component with
the largest difference between ‘F’ and ‘B’. This value of ꞌtꞌ is then used to
mix the new fill color ‘NF’ with the background color, using either a
modified flood-fill or boundary-fill procedure.
Soft-fill procedures can be applied to an area whose foreground color is
to be merged with multiple background color areas, e.g. a checkerboard
pattern. When two background colors B1 and B2 are mixed with foreground
color 'F', the resulting pixel color 'P' is:

P = t0F + t1B1 + (1 - t0 - t1)B2 --- (4)

where the sum of the coefficients 't0', 't1' and (1 - t0 - t1) on the color
terms must equal 1. These parameters are then used to mix the new fill
color with the two background colors to obtain the new pixel color.
FILLED AREA ATTRIBUTES WITH IRREGULAR BOUNDARY:
i. CHARACTER ATTRIBUTES:
Here, we control character attributes such as font, size, color
and orientation. Attributes can be set both for entire character strings (text)
and for individual characters defined as marker symbols.
a. Text Attributes:
It includes the font (typeface), which is a set of characters with a
particular design style, such as Courier or Times Roman. Text is also
displayed with assorted underlining styles (solid, dotted, double), and
may be boldface, italic, or in outline or shadow style.
A particular font and associated style is selected by setting an integer
code for the text-font parameter 'tf' in the function: set Textfont(tf);
Color setting for displayed text is done by the function:
set TextColorIndex(tc);
Here ‘tc’ = text color parameter specifies an allowable color code.
Text size can be adjusted without changing the width to height ratio of
characters with: set CharacterHeight(ch);
Here ‘ch’ is assigned a real value greater than 0 to set the coordinate
height of capital letters.
The width of text can be set with the function:
set CharacterExpansionFactor(cw);
Here, cw = character-width parameter, set to a positive real value that
scales the body width of characters. Text height is unaffected by this
attribute setting.
Spacing between characters is controlled separately with:
set CharacterSpacing(cs);
Here cs = character-spacing parameter, which can be assigned any real value.
The value assigned to 'cs' determines the spacing between character bodies
along print lines. Negative values for 'cs' overlap character bodies; positive
values insert space to spread out the displayed characters.
The orientation for a displayed character string is set according to the
direction of the character up vector: set CharacterUpVector(upvect);
Parameter 'upvect' in this function is assigned two values that specify the
'x' and 'y' vector components. Text is then displayed so that the orientation
of characters from baseline to cap line is in the direction of the up-vector.
A procedure for orienting text rotates characters so that the sides of
character bodies, from baseline to cap line are aligned with the up-vector.
The rotated character shapes are then scan converted into the frame buffer.
An attributes parameter for this option is set with the statement:
set TextPath(tp);
Here tp = text path can be assigned with the value right, left, up, down,
horizontal, vertical etc.
For text alignment, the attribute specifies how text is to be positioned
with respect to the start coordinates. Alignment attributes are set with:
set TextAlignment(h, v);
Here ‘h’ and ‘v’ control horizontal and vertical alignment respectively.
Horizontal alignment is set by assigning ‘h’, a value of left, center and right.
Vertical alignment is set by assigning ‘v’, a value of top, cap, half, base or
bottom.
A precision specification for text display is given with:
set TextPrecision(tpr);
Here tpr = text precision parameter is assigned one of the values: string,
char or stroke.
The highest quality ‘text’ is displayed when the precision parameter is set
to the value stroke. For this precision setting, greater detail would be used in
defining the character shapes and the processing of attributes selection and
other string manipulation procedures would be carried out to the highest
possible accuracy. The lowest quality precision setting, string is used for
faster display of character string.
b. Marker Attributes:
A marker symbol is a single character that can be displayed in different
colors and in different sizes. Marker attributes are implemented by
procedures that load the chosen character into the raster at the defined
positions with the specified color and size. We select a particular character
to be the marker symbol with: set MarkerType(mt);
Here, mt = marker type parameter is set to an integer code. Typical codes
for marker type are integers 1 through 5, specifying respectively, a dot (.), a
vertical cross (+), an asterisk (*), a circle (O) and a diagonal cross (x).
Displayed marker types are centered on the marker coordinates we set the
marker size with: set MarkerSizeScaleFactor(ms);
Here, ms = marker size parameter, assigned a positive number.
It is applied to the normal size for the particular marker symbol chosen,
values greater than 1 produce character enlargement; values less than 1
reduce the marker size.
Marker color is specified with: set PolymarkerColorIndex(mc);
Here, mc = selected color code, stored in the current attribute list and used to
display subsequently specified marker primitives.
ii. BUNDLED ATTRIBUTES:
When each function references a single attribute that specifies
exactly how a primitive is to be displayed with that attribute setting, the
specifications are called individual (unbundled) attributes, and they are used
with an output device that is capable of displaying primitives in the way
specified.
A particular set of attribute values for a primitive on each output device is
then chosen by specifying the appropriate table index. Attributes specified in
this manner are called Bundled Attributes. The table for each primitive that
defined groups of attributes values to be used when displaying that primitive
on particular output device is called a Bundle table.
Attributes that may be bundled into the workstation table entries are those
that don’t involve co-ordinate specifications, such as color and line type. The
choice between a bundled and an unbundled specification is made by setting
a switch called the aspect source flag for each of these attributes:
set IndividualASF(attribute ptr, flag ptr);
Where ‘attribute ptr’ parameter points to a list of attributes and
parameter ‘flag ptr’ points to the corresponding list of aspect source flags.
Each aspect source flag can be assigned a value of individual or bundled.
a. Bundled Line Attributes:
Entries in the bundle table for line attributes on a specified workstation
are set with function: set PolylineRepresentation(ws, li, lt, lw, lc);
Here, ws = workstation identifier, li = line index parameter, defined the
bundle table position. Parameter lt, lw and lc are then bundled and assigned
values to set the line type, line width and line color specifications
respectively for the designated table index.
E.g.: set PolylineRepresentation(1, 3, 2, 0.5, 1);
set PolylineRepresentation(4, 3, 1, 1, 7);
Here, a polyline that is assigned a table index value of 3 would then be
displayed using dashed lines at half thickness in a blue color on workstation
‘1’ while on workstation 4, this same index generates solid standard sized
white lines. Once the bundle tables have been set up a group of bundled line
attributes is chosen for each workstation by specifying the table index value.
set PolylineIndex(li);
b. Bundled Area Fill Attributes:
Table entries for bundled area-fill attributes are set with:
set InteriorRepresentation(ws, fi, fs, Pi, fc);
This defines the attribute list corresponding to fill index ‘fi’ on
workstation ws. Parameter ‘fs’, ‘Pi’, and ‘fc’ are assigned values for the fill
style, pattern index and fill color respectively on the designated workstation.
Similar bundle tables can also be set up for edge attributes of polygon fill
areas. A particular attribute bundle is then selected from the table with the
function: set InteriorIndex(fi);
Subsequently defined fill areas are then displayed on each active
workstation according to the table entry specified by the fill index parameter
‘fi’.
c. Bundled Text Attributes:
The function is: set TextRepresentation(ws, ti, tf, tp, te, ts, tc);
Which bundles value for text font, precision, expansion factor, size, and
color in a table position for workstation ‘ws’ that is specified by the value
assigned to text index parameter ‘ti’. Other text attributes, including
character up vector, text, path, character height and text alignment are set
individually. A particular text index value is then chosen with the function:
set TextIndex(ti);
Each text function that is then invoked is displayed on each workstation
with the set of attributes referenced by this table position.
d. Bundled Marker Attributes:
Table entries for bundled marker attributes are set up with:
set PolymarkerRepresentation(ws, mi, mt, ms, mc);
This defined the marker type, marker scale factor and marker color for
index ‘mi’ on workstation ws.
Bundle table sections are then made with the function:
set PolymarkerIndex(mi);
CHAPTER -5
2D TRANSFORMATION:
A fundamental objective of 2D transformation is to simulate the movement and
manipulation of objects in the plane. Two points of view are used for
describing object movement:
i. The object itself is moved relative to a stationary co-ordinate system or
background. The mathematical statement of this viewpoint is described by
geometric transformations applied to each point of the object.
ii. The second view holds that the object is held stationary, while the co-
ordinate system is moved relative to the object. This effect is attained
through the application of co-ordinate transformations.
The transformations are used directly by application programs and within many
graphic sub-routines.
BASIC TRANSFORMATION IN 2D:
In 2D transformation, the basic transformations used to reposition and resize
2D objects are translation, rotation and scaling.
i. TRANSLATION:
A translation is applied to an object by repositioning it along a straight
line path from one-coordinate location to another. We translate a 2D point
by adding translation distance ‘tx’ and ‘ty’ to the original co-ordinate position
(x, y) to move the point to a new position (xꞌ, yꞌ). So,
xꞌ = x + tx, yꞌ = y + ty --- (1)
The translation distance pair (tx, ty) is called a translation vector or shift
vector. The equation can be express as a single matrix equation by using the
column vector to represent co-ordinate positions and the translation vector
are:
ii. ROTATION:
A 2D rotation is applied to an object by repositioning it along a circular
path in the xy-plane. To generate a rotation, we specify a rotation angle θ and
the position (xr, yr) of the rotation (pivot) point about which the object is to be
rotated.
Positive values for the rotation angle define counter clockwise rotations
about the pivot point and negative values rotate objects in clockwise
direction. This transformation can also be described as a rotation about a
rotation axis, which is perpendicular to xy-plane and passes through pivot
point.
The transformation equations below give the rotation of a point position 'P'
when the pivot point is at the co-ordinate origin, where:
r = constant distance of the point from the origin
θ = rotation angle
Φ = original angular position of the point from the horizontal
[Column-vector definitions for the translation equation (2) above:]

P = | x1 |    Pꞌ = | xꞌ1 |    T = | tx |
    | x2 |         | xꞌ2 |        | ty |

so that Pꞌ = P + T --- (2)
So, the transformed co-ordinates in terms of the angles θ and Φ are:

xꞌ = rcos(θ + Φ) = rcosΦcosθ - rsinΦsinθ
yꞌ = rsin(θ + Φ) = rcosΦsinθ + rsinΦcosθ   --- (1)

The original co-ordinates of the point in polar co-ordinates are:

x = rcosΦ, y = rsinΦ   --- (2)

Substituting equation (2) into equation (1), we get the transformation equations
for rotating a point at position (x, y) through an angle 'θ' about the origin:

xꞌ = xcosθ - ysinθ
yꞌ = xsinθ + ycosθ   --- (3)

We can write the rotation equations in matrix form as Pꞌ = R.P --- (4), where
the rotation matrix (as applied to row vectors, Xꞌ = X.R, in the examples
below) is:

R = | cosθ  -sinθ |   --- (5) (clockwise direction)
    | sinθ   cosθ |

R = | cosθ   sinθ |   --- (6) (anticlockwise direction)
    | -sinθ  cosθ |

When co-ordinate positions are represented as row vectors instead of
column vectors, the matrix product in rotation equation (4) is transposed,
so that the transformed row co-ordinate vector [xꞌ, yꞌ] is calculated as:
PꞌT = (R.P)T = PT.RT --- (7)

where PT = [x  y] and RT is the transpose of 'R', obtained by interchanging
its rows and columns.
The transformation equations for rotation of a point about any specified
rotation (pivot) position (xr, yr) are:

xꞌ = xr + (x - xr)cosθ - (y - yr)sinθ
yꞌ = yr + (x - xr)sinθ + (y - yr)cosθ   --- (8)

Example 1: Consider an object 'ABC' with co-ordinates A(1, 1), B(10, 1),
C(5, 5). Rotate the object by 90 degrees in the anticlockwise direction and give
the co-ordinates of the transformed object.

Example 2: Perform a 45 degree rotation of the object A(2, 1), B(5, 1) and
C(5, 6) in the clockwise direction and give the co-ordinates of the transformed
object.
Solution to Example 1:

X = | 1   1 |  (A)      R = | cos90   sin90 |   |  0  1 |
    | 10  1 |  (B)          | -sin90  cos90 | = | -1  0 |
    | 5   5 |  (C)

Xꞌ = X.R = | -1  1  |  (Aꞌ)
           | -1  10 |  (Bꞌ)
           | -5  5  |  (Cꞌ)
iii. SCALING:
A scaling transformation alters the size of the object. This operation can
be carried out for polygons by multiplying the co-ordinate values (x, y) of
each vertex by scaling factors ‘Sx’ and ‘Sy’ to produce the transformed co-
ordinates (xꞌ, yꞌ) as:
xꞌ = x . Sx and yꞌ = y . Sy --- (1)
Scaling factors ‘Sx’ scales objects in the x-direction, while ‘Sy’ scales in
the y-direction. The transformation equation in matrix form is:
| xꞌ |   | Sx  0  |   | x |
| yꞌ | = | 0   Sy | . | y |   --- (2)

Or, Pꞌ = S.P --- (3)

Scaling factors 'Sx' and 'Sy' less than 1 reduce the size of objects; values
greater than 1 produce an enlargement. Specifying a value of 1 for both 'Sx'
and 'Sy' leaves the size of objects unchanged. When 'Sx' and 'Sy' are
assigned the same value, a uniform scaling is produced that maintains
relative object proportions.

Solution to rotation Example 2 above (45 degree clockwise rotation of
A(2, 1), B(5, 1), C(5, 6)):

X = | 2  1 |  (A)      R = | cos45  -sin45 |   | 1/√2  -1/√2 |
    | 5  1 |  (B)          | sin45   cos45 | = | 1/√2   1/√2 |
    | 5  6 |  (C)

Xꞌ = X.R = |  3/√2  -1/√2 |  (Aꞌ)
           |  6/√2  -4/√2 |  (Bꞌ)
           | 11/√2   1/√2 |  (Cꞌ)
We can control the location of a scaled object by choosing a position,
called the fixed point that is to remain unchanged after the scaling
transformation. Co-ordinates for the fixed point (xf, yf) can be chosen as one
of the vertices, the object centroid or any other position.
For a vertex, with co-ordinates (x, y), the scaled co-ordinates (xꞌ, yꞌ) are
calculated as:
xꞌ = xf + (x – xf)Sx and yꞌ = yf + (y – yf)Sy --- (4)
We can rewrite the scaling transformation to separate the multiplicative and
additive terms:

xꞌ = x . Sx + xf(1 - Sx) and yꞌ = y . Sy + yf(1 - Sy) --- (5)
Where, the additive terms xf(1 – Sx) and yf(1 – Sy) are constant for all
points in the object.
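Equations (1) and (4) differ only in the choice of fixed point: scaling about the origin is the special case (xf, yf) = (0, 0). The following Python sketch (ours, not from the notes) covers both cases:

```python
def scale(points, sx, sy, fixed=(0.0, 0.0)):
    """Scale points by factors (sx, sy) about a fixed point, per equation (4).

    With the default fixed point (0, 0) this reduces to equation (1).
    """
    xf, yf = fixed
    return [(xf + (x - xf) * sx, yf + (y - yf) * sy) for x, y in points]

quad = [(2, 1), (2, 3), (4, 2), (4, 4)]

# Uniform scaling about the origin doubles every co-ordinate:
print(scale(quad, 2, 2))   # -> [(4.0, 2.0), (4.0, 6.0), (8.0, 4.0), (8.0, 8.0)]

# Choosing a vertex as the fixed point leaves that vertex unchanged:
print(scale(quad, 2, 2, fixed=(2, 1)))
```

Because the fixed point is subtracted before and added back after multiplication, it is the one position guaranteed to be unmoved by the transformation.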
Example 1: Scale the object with co-ordinates A(2, 1), B(2, 3), C(4, 2) and
D(4, 4) with scale factors Sx = Sy = 2.

S = [ Sx  0  ] = [ 2  0 ]        A [ 2  1 ]
    [ 0   Sy ]   [ 0  2 ]    X = B [ 2  3 ]
                                 C [ 4  2 ]
                                 D [ 4  4 ]

               [ 2  1 ] [ 2  0 ]   [ 4  2 ] A
Xꞌ = [X].[S] = [ 2  3 ] [ 0  2 ] = [ 4  6 ] B
               [ 4  2 ]            [ 8  4 ] C
               [ 4  4 ]            [ 8  8 ] D
Example 2: What will be the effect of scaling factors Sx = 1/2 and Sy = 1/3
on a triangle ABC whose co-ordinates are A(4, 1), B(5, 2), C(4, 3)?

S = [ Sx  0  ] = [ 1/2  0   ]        A [ 4  1 ]
    [ 0   Sy ]   [ 0    1/3 ]    X = B [ 5  2 ]
                                     C [ 4  3 ]

               [ 4  1 ] [ 1/2  0   ]   [ 2    1/3 ] A
Xꞌ = [X].[S] = [ 5  2 ] [ 0    1/3 ] = [ 5/2  2/3 ] B
               [ 4  3 ]                [ 2    1   ] C

The triangle is compressed to half its width and one-third its height.
OTHER TRANSFORMATIONS:
i. REFLECTION:
A reflection is a transformation that produces a mirror image of an
object. The mirror image for a 2D reflection is generated relative to an axis
of reflection by rotating the object 180 degrees about the reflection axis.
Reflection about the x-axis (the line y = 0) is accomplished with
the transformation matrix given below. This transformation keeps ‘x’ values
the same, but flips the ‘y’ values of co-ordinate positions.
[ xꞌ ]   [ 1   0 ] [ x ]
[ yꞌ ] = [ 0  -1 ] [ y ]
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes
Computer graphics notes

More Related Content

What's hot

applications of computer graphics
applications of computer graphicsapplications of computer graphics
applications of computer graphics
Aaina Katyal
 
Raster scan system
Raster scan systemRaster scan system
Raster scan system
Mohd Arif
 
Line drawing algo.
Line drawing algo.Line drawing algo.
Line drawing algo.
Mohd Arif
 
Graphics software
Graphics softwareGraphics software
Graphics software
Mohd Arif
 
Introduction to computer graphics
Introduction to computer graphics Introduction to computer graphics
Introduction to computer graphics
Priyodarshini Dhar
 
Introduction to computer graphics
Introduction to computer graphicsIntroduction to computer graphics
Introduction to computer graphics
Amandeep Kaur
 
Circle generation algorithm
Circle generation algorithmCircle generation algorithm
Circle generation algorithm
Ankit Garg
 

What's hot (20)

Character generation techniques
Character generation techniquesCharacter generation techniques
Character generation techniques
 
applications of computer graphics
applications of computer graphicsapplications of computer graphics
applications of computer graphics
 
Raster scan system
Raster scan systemRaster scan system
Raster scan system
 
Spline representations
Spline representationsSpline representations
Spline representations
 
Line drawing algo.
Line drawing algo.Line drawing algo.
Line drawing algo.
 
Overview of the graphics system
Overview of the graphics systemOverview of the graphics system
Overview of the graphics system
 
Video display devices
Video display devicesVideo display devices
Video display devices
 
Mid point circle algorithm
Mid point circle algorithmMid point circle algorithm
Mid point circle algorithm
 
Graphics software
Graphics softwareGraphics software
Graphics software
 
Introduction to computer graphics
Introduction to computer graphics Introduction to computer graphics
Introduction to computer graphics
 
Computer graphics chapter 4
Computer graphics chapter 4Computer graphics chapter 4
Computer graphics chapter 4
 
2 d viewing computer graphics
2 d viewing computer graphics2 d viewing computer graphics
2 d viewing computer graphics
 
Random scan displays and raster scan displays
Random scan displays and raster scan displaysRandom scan displays and raster scan displays
Random scan displays and raster scan displays
 
Unit 3
Unit 3Unit 3
Unit 3
 
Computer Graphics Notes
Computer Graphics NotesComputer Graphics Notes
Computer Graphics Notes
 
Graphics software and standards
Graphics software and standardsGraphics software and standards
Graphics software and standards
 
3D transformation in computer graphics
3D transformation in computer graphics3D transformation in computer graphics
3D transformation in computer graphics
 
Introduction to computer graphics
Introduction to computer graphicsIntroduction to computer graphics
Introduction to computer graphics
 
3 d display methods
3 d display methods3 d display methods
3 d display methods
 
Circle generation algorithm
Circle generation algorithmCircle generation algorithm
Circle generation algorithm
 

Similar to Computer graphics notes

Computer Graphics Practical
Computer Graphics PracticalComputer Graphics Practical
Computer Graphics Practical
Neha Sharma
 
Lecture applications of cg
Lecture   applications of cgLecture   applications of cg
Lecture applications of cg
avelraj
 
Applications of cg
Applications of cgApplications of cg
Applications of cg
Ankit Garg
 
Graphics pdf
Graphics pdfGraphics pdf
Graphics pdf
aa11bb11
 

Similar to Computer graphics notes (20)

Graphics file
Graphics fileGraphics file
Graphics file
 
Cg
CgCg
Cg
 
Computer Graphics Practical
Computer Graphics PracticalComputer Graphics Practical
Computer Graphics Practical
 
computer graphics unit 1.ppt
computer graphics unit 1.pptcomputer graphics unit 1.ppt
computer graphics unit 1.ppt
 
topic_- introduction of computer graphics.
   topic_- introduction of computer graphics.   topic_- introduction of computer graphics.
topic_- introduction of computer graphics.
 
Computer graphics by bahadar sher
Computer graphics by bahadar sherComputer graphics by bahadar sher
Computer graphics by bahadar sher
 
Lecture applications of cg
Lecture   applications of cgLecture   applications of cg
Lecture applications of cg
 
applications.ppt
applications.pptapplications.ppt
applications.ppt
 
CG_1.pdf
CG_1.pdfCG_1.pdf
CG_1.pdf
 
computer graphics unit 1-I.pptx
computer graphics unit 1-I.pptxcomputer graphics unit 1-I.pptx
computer graphics unit 1-I.pptx
 
Computer graphics Applications and System Overview
Computer graphics Applications and System OverviewComputer graphics Applications and System Overview
Computer graphics Applications and System Overview
 
Applications of cg
Applications of cgApplications of cg
Applications of cg
 
Graphics pdf
Graphics pdfGraphics pdf
Graphics pdf
 
Cg applications
Cg applicationsCg applications
Cg applications
 
Compute graphics
Compute graphicsCompute graphics
Compute graphics
 
Digital design
Digital designDigital design
Digital design
 
COMPUTER GRAPHICS DAY1
COMPUTER GRAPHICS DAY1COMPUTER GRAPHICS DAY1
COMPUTER GRAPHICS DAY1
 
Reviewer in com graphics
Reviewer in com graphicsReviewer in com graphics
Reviewer in com graphics
 
Color based image processing , tracking and automation using matlab
Color based image processing , tracking and automation using matlabColor based image processing , tracking and automation using matlab
Color based image processing , tracking and automation using matlab
 
Computer Graphics Introduction, Open GL, Line and Circle drawing algorithm
Computer Graphics Introduction, Open GL, Line and Circle drawing algorithmComputer Graphics Introduction, Open GL, Line and Circle drawing algorithm
Computer Graphics Introduction, Open GL, Line and Circle drawing algorithm
 

More from smruti sarangi

More from smruti sarangi (7)

Daa notes 3
Daa notes 3Daa notes 3
Daa notes 3
 
Daa notes 2
Daa notes 2Daa notes 2
Daa notes 2
 
Daa notes 1
Daa notes 1Daa notes 1
Daa notes 1
 
Software engineering study materials
Software engineering study materialsSoftware engineering study materials
Software engineering study materials
 
Data structure using c module 1
Data structure using c module 1Data structure using c module 1
Data structure using c module 1
 
Data structure using c module 2
Data structure using c module 2Data structure using c module 2
Data structure using c module 2
 
Data structure using c module 3
Data structure using c module 3Data structure using c module 3
Data structure using c module 3
 

Recently uploaded

Seal of Good Local Governance (SGLG) 2024Final.pptx
Seal of Good Local Governance (SGLG) 2024Final.pptxSeal of Good Local Governance (SGLG) 2024Final.pptx
Seal of Good Local Governance (SGLG) 2024Final.pptx
negromaestrong
 
Gardella_Mateo_IntellectualProperty.pdf.
Gardella_Mateo_IntellectualProperty.pdf.Gardella_Mateo_IntellectualProperty.pdf.
Gardella_Mateo_IntellectualProperty.pdf.
MateoGardella
 
Making and Justifying Mathematical Decisions.pdf
Making and Justifying Mathematical Decisions.pdfMaking and Justifying Mathematical Decisions.pdf
Making and Justifying Mathematical Decisions.pdf
Chris Hunter
 

Recently uploaded (20)

Mehran University Newsletter Vol-X, Issue-I, 2024
Mehran University Newsletter Vol-X, Issue-I, 2024Mehran University Newsletter Vol-X, Issue-I, 2024
Mehran University Newsletter Vol-X, Issue-I, 2024
 
microwave assisted reaction. General introduction
microwave assisted reaction. General introductionmicrowave assisted reaction. General introduction
microwave assisted reaction. General introduction
 
APM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across SectorsAPM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across Sectors
 
fourth grading exam for kindergarten in writing
fourth grading exam for kindergarten in writingfourth grading exam for kindergarten in writing
fourth grading exam for kindergarten in writing
 
Seal of Good Local Governance (SGLG) 2024Final.pptx
Seal of Good Local Governance (SGLG) 2024Final.pptxSeal of Good Local Governance (SGLG) 2024Final.pptx
Seal of Good Local Governance (SGLG) 2024Final.pptx
 
Mixin Classes in Odoo 17 How to Extend Models Using Mixin Classes
Mixin Classes in Odoo 17  How to Extend Models Using Mixin ClassesMixin Classes in Odoo 17  How to Extend Models Using Mixin Classes
Mixin Classes in Odoo 17 How to Extend Models Using Mixin Classes
 
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
 
Unit-V; Pricing (Pharma Marketing Management).pptx
Unit-V; Pricing (Pharma Marketing Management).pptxUnit-V; Pricing (Pharma Marketing Management).pptx
Unit-V; Pricing (Pharma Marketing Management).pptx
 
How to Give a Domain for a Field in Odoo 17
How to Give a Domain for a Field in Odoo 17How to Give a Domain for a Field in Odoo 17
How to Give a Domain for a Field in Odoo 17
 
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptxINDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
 
Introduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The BasicsIntroduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The Basics
 
Measures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SDMeasures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SD
 
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
 
Unit-IV; Professional Sales Representative (PSR).pptx
Unit-IV; Professional Sales Representative (PSR).pptxUnit-IV; Professional Sales Representative (PSR).pptx
Unit-IV; Professional Sales Representative (PSR).pptx
 
Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17
 
Advance Mobile Application Development class 07
Advance Mobile Application Development class 07Advance Mobile Application Development class 07
Advance Mobile Application Development class 07
 
Gardella_Mateo_IntellectualProperty.pdf.
Gardella_Mateo_IntellectualProperty.pdf.Gardella_Mateo_IntellectualProperty.pdf.
Gardella_Mateo_IntellectualProperty.pdf.
 
Key note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdfKey note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdf
 
SECOND SEMESTER TOPIC COVERAGE SY 2023-2024 Trends, Networks, and Critical Th...
SECOND SEMESTER TOPIC COVERAGE SY 2023-2024 Trends, Networks, and Critical Th...SECOND SEMESTER TOPIC COVERAGE SY 2023-2024 Trends, Networks, and Critical Th...
SECOND SEMESTER TOPIC COVERAGE SY 2023-2024 Trends, Networks, and Critical Th...
 
Making and Justifying Mathematical Decisions.pdf
Making and Justifying Mathematical Decisions.pdfMaking and Justifying Mathematical Decisions.pdf
Making and Justifying Mathematical Decisions.pdf
 

Computer graphics notes

  • 1. Computer Graphics By: Smruti Smaraki Sarangi Assistant Professor IMS Unison University, Dehradun
  • 2. 1 C O N T E N T S Sl.no Chapter Page 1 Introduction to graphics, and its application 2-6 2 Video display devices: CRT, Flat panel display, Raster Scan system, Random scan system, Input and Output devices, Graphics software and Functions, GUI. 7-29 3 Line drawing algorithms, Circle generating and ellipse generating algorithms, Filled area primitives: flood-fill, boundary-fill and scan-line polygon fill algorithm and antialiasing. 30-59 4 Line attributes, Curve attributes, Area-fill attributes, Character attributes, Bundled and Marker Attributes 60-75 5 2D basic transformation, 2D composite transformation, Matrix representation and homogeneous co-ordinate, Transformation between co- ordinate system and Affine transformation 76-102 6 Viewing Pipeline, View Co-ordinate reference frame, Window to viewport Transformation, Clipping and Types of Clipping 103-125 7 Representation of point, 3D Transformation and its types 126-149 8 3D Viewing, Projection and its types, Viewing Pipeline. 3D Clipping and Viewport Clipping 150-171 9 Visible surface detection algorithms (back-face detection, z-buffer, a-buffer, scan-line method, depths sorting method) Ray-tracing algorithm and its surface intersection calculation and ray casting method. 172-197 10 Curved line and surfaces, BSP Trees and Octree, Spline Representation and specifications, B-Spline curves and surfaces, Bezier curve and surfaces 198-232 11 Basic Illumination Models, Halftoning and Dithering Techniques, Polygon Rendering Methods, Animation Techniques and Morphing, Hierarchical modeling structures, Displaying light Intensities and continuous tone image 233-258
  • 3. 2 CHAPTER -1 COMPUTER GRAPHICS: Computer graphics are graphics created using computers and more generally, the representation and manipulation of image data by a computer with help from specialized software and hardware. The development of computer graphics has made computers easier to interact with, and better for understanding and interpreting many types of data. Developments in computer graphics have a profound impact on many types of media and have revolutionized animation, movies and the video game industry. APPLICATION OF COMPUTER GRAPHICS: Computers have become a powerful tool for the rapid and economical production of pictures. Advances in computer technology have made interactive computer graphics a practical tool. Today, computer graphics is used in the areas as science, engineering, medicine, business, industry, government, art, entertainment, advertising, education and training. I. COMPUTER AIDED DESIGN (CAD): A major use of computer graphics is in design purposes, generally referred to as CAD. Computer Aided Design methods are now routinely used in the design of buildings, automobiles, aircraft, watercraft, spacecraft, computers, text-tiles and many other products. For some design applications, objects are first displayed in wireframe outline from that shows the overall shapes. Software packages for CAD
  • 4. 3 applications typically provide the designer with the multi-window environment. Animations are often used in CAD applications. Realistic displays of architectural designs permit both architect and their clients to study the appearance of a single building or a group of buildings. With virtual reality systems, designers can even go for a simulated “walk” through the room or around the outsides of buildings to better appreciate to overall effect of a particular design. In addition to realistic exterior building displays architecture, CAD packages also provide facilities for experimenting with 3-dimensional interior layouts and lighting. II. PRESENTATION GRAPHICS: It is used to produce illustrations for reports or to generate 35 mm slides or transparencies for use with projectors. It is commonly used in summarize financial, statistical, mathematics, scientific and economic data for research reports, managerial reports, consumer information, bulletins and other types of reports. E.g.: Bar Charts, Line Graphs, Surface Graphs and Pie Charts. Three dimensional graphs are used simply for effect they can provide a more dramatic or more attractive presentation of data relationships. III. COMPUTER ART: Computer Graphics methods are widely used in both fine art and commercial art applications.
  • 5. 4 Artists use a variety of computer methods, including special purpose hardware, artist’s paint brush programs, other paint packages, specially developed software, symbolic mathematics packages, CAD packages, desktop publishing software and animation packages that provide facilities for designing object shapes and specifying object motions. The basic idea behind a “paint brush” program is that, it allows artists to paint pictures on the screen of video monitor. The picture is usually painted electronically on a graphics tablet using a stylus, which can simulate different brush strokes, brush widths and colors. The art work of electronic art created with the aid of mathematical relationships is designed in relation to frequency variations and other parameters in a musical composition to produce a video that integrates visual and aural patterns. IV. ENTERTAINMENT: Computer Graphics methods are now commonly used in making motion pictures, music videos and television shows. A graphics scene generated for the movie “Star-Trek – The wrath of Khan” is one of the uses, in entertainment field. Many TV series regularly employ compute graphics method. E.g.: Deep Space Nine and Stay Tuned. Music videos use graphics and image processing technique can be used to produce a transformation of one person or object into another or morphing.
  • 6. 5 V. EDUCATION AND TRAINING: Computer generated models of physical, financial and economic systems are often used as educational aids, models of physical systems, physiological systems and population trends or equipment such as the color-coded diagram can help trainings, to understand the operation of the system. For some training applications, special systems are designed. E.g.: Simulators for practice sessions for training of ship, contains pilots, heavy equipment operators and air-traffic control personnel. VI. VISUALIZATION: It is an application for computer graphics. Producing graphical representations for scientific engineering and medical data sets and processes is generally referred to as scientific visualization and the term business visualization is used in connection with data sets related to commerce, industry and other non-scientific areas. There are many different kinds of data sets and effective visualization schemes depend on the characteristics of the data. A collection of data can contain scalar values vectors, higher order tensors or any combination of these data types. Color coding is just one way to visualize the data sets. Additional techniques include contour plots, graphs and charts, surface renderings and visualization of volume interiors.
  • 7. 6 VII. IMAGE PROCESSING: Image Processing applies techniques to modify or interact existing pictures such as photographs and TV scans. Two principal applications of image processing are: i) improving picture quality and ii) machine perception of visual information. To apply image processing methods, we first digitize a photograph or other picture into an image file. Then digital methods can be applied to rearrange picture parts to enhance color separations or to improve the quality of shading. An example of the application of image processing methods is to enhance the quality of a picture. Image processing and computer graphics are typically combined in many applications. The last application is generally referred to as computer-aided surgery. VIII. GRAPHICAL USER INTERFACE(GUI): It is common now for software packages to provide a graphical interface. A major component of a graphical interface is a window manager that allows a user to display multiple-window areas. Each window can contain a different process that can contain graphical and non-graphical displays. An “icon” is a graphical symbol that is designed to look like the processing option it represents. The advantages of icons are that they take up less screen space than corresponding textual description and they can be understood more quickly, if well designed.
  • 8. 7 CHAPTER -2 VIDEO DISPLAY DEVICES: The display devices are known as output devices. The most commonly used output device in a graphics video monitor. The operations of most video monitors are based on the standard cathode-ray-tube (CRT) design. How the Interactive Graphics display works: The modern graphics display is extremely simple in construction. It consists of three components: 1) A digital memory or frame buffer, in which the displayed Image is stored as a matrix of intensity values. 2) A monitor 3) A display controller, which is a simple interface that passes the contents of the frame buffer to the monitor. Inside the frame buffer the image is stored as a pattern of binary digital numbers, which represent a rectangular array of picture elements, or pixel. The pixel is the smallest addressable screen element. In the Simplest case where we wish to store only black and white images, we can represent black pixels by 0's in the frame buffer and white Pixels by 1's. The display controller simply reads each successive byte of data from the frame buffer and converts each 0 and 1 to the corresponding video signal. This signal is then fed to the monitor. If we wish to change the displayed picture all we need to do is to change of modify the frame buffer contents to represent the new pattern of pixels.
  • 9. 8 CATHODE-RAY-TUBES: A Cathode-Ray Tube (CRT) is a video display device, whose design depends on the operation of most video. BASIC DESIGN OF A MAGNETIC DEFLECTION: A beam of electronic emitted by an electron gun, passes through focusing and deflection systems that direct the beam towards specified position on the phosphor- coated screen. The phosphor then emits a small spot of light at each position contacted by the electron beam. Because the light emitted by the phosphor fades very rapidly. Some method is needed for maintaining the screen picture. One way to keep phosphor glowing is to redraw the picture repeatedly by quickly directing the electron beam back over the same points. This type of display is called a “refresh rate”. Phosphor coated screen Deflected Electron beam Electron beam Connector Pins Electron Gun (Cathode) Base Focusing System Magnetic Deflection Coils Horizontal Deflection Amplifier
  • 10. 9 OPERATION OF AN ELECTRON GUN WITH AN ACCELERATING MODE: The primary components of an electron gun in a CRT are the heated metal cathode and a central grid. Heat is supplied to the cathode by directing a current through a coil of wire called the “filament”, inside the cylindrical cathode structure. This causes electrons to be ‘boiled off”, the hot cathode surface. In a vacuum, inside the CRT envelope, the free negatively charged electrons are then accelerated towards the phosphor coating by a high positive voltage. The accelerating voltage can be generated with a positively charged metal coating on the inside of the CRT envelope near the phosphor screen or an accelerating anode can be used. Intensity of the electron beam is controlled by setting voltage levels on the control grid, which is a metal cylinder that fits over the cathode. A high negative voltage applied to the central grid will shut off the beam by repelling electrons and stopping them from passing through the small hole at the end of the control grid structure. A smaller negative voltage on the control grid simply decreases the number of electrons passing through. The focusing system in CRT is needed to force the electron beam to coverage into a small spot as it strikes the phosphor. Otherwise, the electrons would repel each other and the beam would spread out as it approaches the screen. The electron beam will be focused properly only at the center of the screen. As the beam moves to the outer edges of the screen, displayed images become blurred. To compensate for this, the system can adjust the focusing according to the screen position of the beam, when the picture presentation is more than the refresh rate, then it is called “blurred/overlapping”.
  • 11. 10 Cathode ray tubes are now commonly constructed with magnetic deflection coils mounted on the outside of the CRT envelope. Two pairs of coils are used with the coils in each pair mounted on opposite sides of the neck of the CRT envelope. The magnetic field produced by each pair of coils results in a traverse deflection force that is perpendicular both to the direction of the magnetic field and to the direction of travel of the electron beam. Horizontal deflection is accomplished with one pair of coils and vertical deflection is accomplished with another pair of coils. When electrostatic deflection is used 2 pairs of parallel plates is mounted horizontally to control the vertical deflection and other is mounted vertically to control horizontal deflection. Different kinds of phosphors are available for use in a CRT. Besides color, a major difference between phosphors in their “persistence”, how long they continue to emit light after the CRT beam is removed. “Persistence” is defined as the time; it takes the emitted light from the screen to delay to one-tenth of its original intensity. i.e.: Persistence α 1/Refreshment The intensity is greatest at the center of the spot and decreases with a Gaussian distribution out to the edges of the spot. The distribution corresponds to the cross sectional electron density distribution of CRT beam. The maximum number of points that can be displayed without overlap on a CRT is referred to as “resolution”. “Resolution” is the number of points/centimeter that can be plotted horizontally and vertically. Resolution of : Intensity distribution of an illuminated phosphor spot on a CRT screen.
  • 12. 11 CRT is dependent on the type of phosphor, the intensity to be displayed and the focusing and the deflection system. “Aspect Ratio” is the property of video monitors. This number gives the ratio of vertical points to horizontal points necessary to produce equal length lines in both directions on the screen. An aspect ratio of 3/4 means that a vertical line plotted with 3 points has the same length as a horizontal line plotted with 4 points. COLOR CRT MONITORS: The CRT monitor displays color pictures by using a combination of phosphors that emit different colored light. By combining the emitted light from the different phosphor a range of colors can be generated. The 2 basic techniques for producing color display with a CRT are the beam- penetration method and the shadow-mask method. I. BEAM PENETRATION METHOD: The beam penetration method for displaying color pictures has been used with random scan monitors. Two layers of phosphor, usually red and green are coated onto the inside of the CRT screen and the displayed color depends on how for the electron beam penetrates into the phosphor layers. A beam of slow elements excites only the outer red layer. A beam of very fast electrons penetrates through the red layer and excites the inner green layer. An intermediate beam speeds, combinations of red and green lights are emitted to show two additional colors, orange and yellow. The speed of the electrons and hence the screen color at any point is controlled by the beam accelerating voltage.
• 13. 12 II. SHADOW MASK METHOD: It is commonly used in raster scan systems because it produces a much wider range of colors than the beam penetration method. One phosphor dot emits a red light, another emits a green light and the 3rd emits a blue light. This type of CRT has 3 electron guns, one for each color dot, and a shadow mask grid just behind the phosphor-coated screen. The 3 electron beams are deflected and focused as a group onto the shadow mask, which contains a series of holes aligned with the phosphor-dot patterns. When the 3 beams pass through a hole in the shadow mask, they activate a dot triangle, which appears as a small color spot on the screen. "Composite Monitors" are adaptations of TV sets that allow bypass of the broadcast circuitry. These display devices still require that the picture information be combined, but no carrier signal is needed. Color CRTs in graphics systems are designed as RGB monitors. These monitors use shadow mask methods and take the intensity level for each electron gun directly from the computer system without any intermediate processing. An RGB color system with 24 bits of storage per pixel is generally referred to as a full-color system or a true-color system. FLAT PANEL DISPLAY: This refers to a class of video devices that have reduced volume, weight and power requirements compared to a CRT. A significant feature of flat panel displays is that they are thinner than CRTs.
• 14. 13 Current uses for flat panel displays include small TV monitors, calculators, pocket video games, laptop computers and graphics displays in applications requiring rugged portable monitors. Flat panel displays are of 2 types, i.e.: emissive displays and non-emissive displays. Liquid crystal flat panel displays commonly use nematic (threadlike) liquid crystal compounds, which tend to keep the long axes of the rod-shaped molecules aligned. A flat panel display can be constructed with a nematic liquid crystal. Two glass plates, each containing a light polarizer at right angles to the other plate, sandwich the liquid crystal material. Rows of horizontal transparent conductors are built into one glass plate, and columns of vertical conductors are put into the other plate. The intersection of 2 conductors defines a pixel position. Polarized light passing through the material is twisted so that it will pass through the opposite polarizer. The light is then reflected back to the viewer. To turn off the pixel, we apply a voltage to the 2 intersecting conductors to align the molecules so that the light is not twisted. This type of flat panel device is called a "Passive Matrix LCD". Another method for constructing LCDs is to place a transistor at each pixel location, using thin-film transistor technology. The transistors are used to control the voltage at pixel locations and to prevent charge from gradually leaking out of the liquid-crystal cells. These devices are called active matrix displays.
• 15. 14 I. EMISSIVE DISPLAY: The Emissive Displays (Emitters) are devices that convert electrical energy into light. E.g.: Plasma Panels, Thin-Film Electroluminescent Displays and Light Emitting Diodes are examples of emissive displays. a. PLASMA PANEL: Plasma panels, also called gas-discharge displays, are constructed by filling the region between two glass plates with a mixture of gases that usually includes neon. A series of vertical conducting ribbons is placed on one glass panel and a set of horizontal ribbons is built into the other glass panel. Firing voltages applied to a pair of horizontal and vertical conductors cause the gas at the intersection of the two conductors to break down into a glowing plasma of electrons and ions. One disadvantage of plasma panels has been that they were strictly monochromatic devices, but systems have been developed that are now capable of displaying color and gray scale. b. THIN-FILM ELECTROLUMINESCENT DISPLAYS: Thin-film electroluminescent displays are similar in construction to plasma panels. The difference is that the region between the glass plates is filled with a phosphor, such as zinc sulphide doped with manganese, instead of a gas.
• 16. 15 When a high voltage is applied to a pair of crossing electrodes, the phosphor becomes a conductor in the area of the intersection of the two electrodes. Electrical energy is then absorbed by the manganese, which releases the energy as a spot of light similar to the glowing plasma effect in a plasma panel. Electroluminescent displays require more power than plasma panels, and good color and gray scale displays are hard to achieve. c. LIGHT-EMITTING DIODE (LED): In a Light Emitting Diode (LED) display, a matrix of diodes is arranged to form the pixel positions in the display, and picture definition is stored in a refresh buffer. II. NON-EMISSIVE DISPLAY: The Non-Emissive Displays (Non-Emitters) use optical effects to convert sunlight or light from some other source into graphics patterns. E.g.: Liquid Crystal Display. The non-emissive devices produce a picture by passing polarized light from the surroundings or from an internal light source through a liquid crystal material that can be aligned to either block or transmit the light. The term liquid crystal refers to the fact that these compounds have a crystalline arrangement of molecules, yet they flow like a liquid. a. LIQUID CRYSTAL DISPLAY (LCD): Liquid Crystal Displays (LCDs) are commonly used in small systems such as calculators and laptop computers.
• 17. 16 RASTER-SCAN SYSTEM: Raster-scan systems typically employ several processing units. In addition to the central processing unit or CPU, a special-purpose processor called the "video controller" or "display controller" is used to control the operation of the display device. Here, the frame buffer can be anywhere in the system memory, and the video controller accesses it to refresh the screen. More sophisticated raster systems employ other processors as co-processors and accelerators to implement various graphics operations. VIDEO CONTROLLER: A fixed area of the system memory is reserved for the frame buffer, and the video controller is given direct access to the frame buffer memory. Frame buffer locations and the corresponding screen positions are referenced in Cartesian co-ordinates, with the origin defined at the lower left screen corner. The screen surface is then represented as the first quadrant of a 2D system, with positive 'x' values increasing to the right and positive 'y' values increasing from bottom to top. [Figure: Architecture of a simple raster graphics system, and of a raster system with a fixed portion of the system memory reserved for the frame buffer.]
• 18. 17 Here, 2 registers are used to store the co-ordinates of the screen pixels. Initially the x-register is set to 0 and the y-register is set to 'ymax'. The value stored in the frame buffer for this pixel position is then retrieved and used to set the intensity of the CRT beam. The x-register is then incremented by 1 and the process repeated for the next pixel on the top scan line. This procedure is repeated for each pixel along the scan line. After the last pixel on the top scan line has been processed, the x-register is reset to 0 and the y-register is decremented by 1. This procedure is repeated for each successive scan line. After cycling through all pixels along the bottom scan line y = 0, the video controller resets the registers to the first pixel position on the top scan line and the refresh process starts over. When the frame presentation rate is less than the required refresh rate, the display appears to flicker. Flickering is a problem that occurs in raster-scan systems, and it is reduced by the interlacing technique. RASTER-SCAN DISPLAY: It is based on TV technology. Here the electron beam is swept across the screen, one row at a time from top to bottom. The picture definition is stored in a memory area called the refresh buffer or frame buffer. Here each screen point is referred to as a pixel or pel (picture element). A capability of raster-scan systems is storing intensity information for each screen point, allowing realistic display of scenes containing shading and color patterns. In a black and white system, a bit value of '1' indicates that the electron beam intensity is to be turned on and a value of '0' indicates that the beam intensity is to be off.
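The register-driven refresh cycle described above can be sketched in code. This is an illustrative simulation only (the function and parameter names are made up, not from the text): it scans each line left to right, from the top scan line down to y = 0.

```python
# Illustrative sketch of the video-controller refresh cycle: two
# "registers" (x, y) step through every pixel, fetching its stored
# value from the frame buffer and driving the CRT beam with it.
def refresh(frame_buffer, xmax, ymax, set_beam_intensity):
    y = ymax                      # y-register starts at the top scan line
    while y >= 0:
        x = 0                     # x-register is reset for each scan line
        while x <= xmax:
            # retrieve the stored value and set the beam intensity
            set_beam_intensity(x, y, frame_buffer[y][x])
            x += 1
        y -= 1                    # step down to the next scan line
```

In a real controller this entire cycle repeats 60 or more times per second; falling below that rate produces the flicker mentioned above.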
• 19. 18 A system with 24 bits/pixel requires 2 megabytes of storage for the frame buffer. On a black and white system with one bit/pixel, the frame buffer is called a bitmap. For multiple bits/pixel, the frame buffer is called a pixmap. Refreshing on a raster-scan display is carried out at the rate of 60 to 80 frames/second, described in units of cycles/second or hertz. A raster system may contain a separate display processor, called a "graphics controller" or "display co-processor", to free the CPU from graphics chores. A major task of the display processor is digitizing a picture definition given in an application program into a set of pixel intensity values for storage in the frame buffer. This is called "scan conversion". RANDOM-SCAN DISPLAY: In this system, an application program is input and stored in the system memory along with a graphics package. The display file is accessed by the display processor to refresh the screen. The display processor is called a display-processing unit or graphics controller. [Figure: Raster-scan system with a display processor.]
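The storage figures quoted above follow directly from resolution times bits per pixel; a quick check (an illustrative helper, not from the text):

```python
def frame_buffer_bytes(width, height, bits_per_pixel):
    # total pixels times depth in bits, converted to bytes
    return width * height * bits_per_pixel // 8

# e.g. a 640 x 480 screen: a 1 bit/pixel bitmap versus a 24-bit pixmap
bitmap_size = frame_buffer_bytes(640, 480, 1)    # 38400 bytes
pixmap_size = frame_buffer_bytes(640, 480, 24)   # 921600 bytes
```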
• 20. 19 RANDOM-SCAN SYSTEMS: When a random-scan display unit is operated, the CRT has the electron beam directed only to the parts of the screen where a picture is to be drawn. Random-scan monitors draw a picture one line at a time and for this reason are also referred to as vector displays or stroke-writing or calligraphic displays. The refresh rate on a random-scan system depends on the number of lines to be displayed. Picture definition is stored as a set of line-drawing commands in an area of memory referred to as the refresh display file or display list or display program or refresh buffer. Random-scan systems are designed for line-drawing applications and can't display realistic shaded scenes. INPUT DEVICES: The various input devices are: Keyboard, Mouse, Trackball and Spaceball, Joysticks, Data glove, Digitizers, Image Scanner, Light Pens, Touch Panels and Voice Systems. I. KEYBOARD: It is used primarily as a device for entering text strings. It is an efficient device for inputting non-graphics data, such as picture labels associated with a graphics display. Keyboards are provided with features to facilitate entry of screen co-ordinates, menu selections or graphics functions. [Figure: Architecture of a simple random-scan system.]
• 21. 20 Cursor-control keys and function keys are common features on general-purpose keyboards. Function keys allow users to enter frequently used operations in a single keystroke, and cursor-control keys can be used to select displayed objects or co-ordinate positions by positioning the screen cursor. II. MOUSE: A mouse is a small hand-held box used to position the screen cursor. Wheels or rollers on the bottom of the mouse can be used to record the amount and direction of movement. III. TRACKBALL AND SPACEBALL: A trackball is a ball that can be rotated with the fingers or palm of the hand to produce screen-cursor movement. It is a 2-dimensional positioning device. A spaceball provides 6 degrees of freedom. It does not actually move. It is used for 3-dimensional positioning. IV. JOYSTICKS: A joystick consists of a small, vertical lever mounted on a base that is used to steer the screen cursor around. Most joysticks select screen positions with actual stick movement; others respond to pressure on the stick. In a movable joystick, the stick is used to activate switches that cause the screen cursor to move at a constant rate in the direction selected. V. DATA GLOVE: A data glove can be used to grasp a 'virtual object'. It is constructed with a series of sensors that detect hand and finger motion.
• 22. 21 VI. DIGITIZERS: A digitizer is a common device used for drawing, painting or interactively selecting co-ordinate positions on an object. These devices can be used to input co-ordinate values in either a 2D or 3D space. It is used to scan over a drawing or object and to input a set of discrete co-ordinate positions, which can be joined with straight line segments to approximate the curve or surface shapes. One type of digitizer is the "Graphics Tablet", which is used to input 2D co-ordinates by activating a hand cursor or stylus at selected positions on a flat surface. A hand cursor contains cross hairs for sighting positions, while a stylus is a pencil-shaped device that is pointed at positions on the tablet. VII. IMAGE SCANNER: Drawings, graphs, color and black-and-white photos or text can be stored for computer processing with an image scanner by passing an optical scanning mechanism over the information to be stored. VIII. LIGHT PENS: These are pencil-shaped devices used to select screen positions by detecting the light coming from points on the CRT screen. An activated light pen, pointed at a spot on the screen as the electron beam lights up that spot, generates an electrical pulse that causes the co-ordinate position of the electron beam to be recorded.
• 23. 22 IX. TOUCH PANELS: Touch panels allow displayed objects or screen positions to be selected with the touch of a finger. A typical application of touch panels is for the selection of processing options that are represented with graphical icons. Optical touch panels employ a line of infrared light-emitting diodes (LEDs) along one vertical edge and along one horizontal edge; the opposite edges contain light detectors. These detectors are used to record which beams are interrupted when the panel is touched. An electrical touch panel is constructed with 2 transparent plates separated by a small distance. One plate is coated with a conducting material and the other with a resistive material. X. VOICE SYSTEM: A voice system can be used to initiate graphics operations or to enter data. These systems operate by matching an input against a predefined dictionary of words and phrases. HARD COPY OUTPUT DEVICES: Hard-copy output devices give images in several formats. For presentations or archiving, we can send files to devices or service bureaus that will produce 35mm slides or overhead transparencies. The quality of pictures obtained from a device depends on dot size and the number of dots per inch or lines per inch that can be displayed. Printers produce output by either impact or non-impact methods. Impact printers press formed character faces against an inked ribbon onto the paper. A line
printer is an example of an impact device. Non-impact printers and plotters use laser techniques, ink-jet sprays, electrostatic and electro-thermal methods to get images onto paper. Character impact printers have a dot-matrix print head containing a rectangular array of protruding wire pins, with the number of pins depending on the quality of the printer. In a laser device, a laser beam creates a charge distribution on a rotating drum coated with a photoelectric material, such as selenium. Toner is applied to the drum and then transferred to paper. Ink-jet methods produce output by squirting ink in horizontal rows across a roll of paper wrapped on a drum. The electrically charged ink stream is deflected by an electric field to produce dot-matrix patterns. An electrostatic device places a negative charge on the paper, one complete row at a time along the length of the paper. Then the paper is exposed to a toner. The toner is positively charged and so is attracted to the negatively charged areas, where it adheres to produce the specified output. Electro-thermal methods use heat in a dot-matrix print head to output patterns on heat-sensitive paper. GRAPHICS SOFTWARE: Graphics software is of 2 types, that is: i) General Programming Packages and ii) Special-Purpose Application Packages. A general programming package provides an extensive set of graphics functions that can be used in a high-level programming language. Application graphics packages are designed for non-programmers, so that users can generate displays without worrying about how the underlying graphics operations work.
• 25. 24 CO-ORDINATE REPRESENTATIONS: With few exceptions, general graphics packages are designed to be used with Cartesian co-ordinate specifications. If co-ordinate values for a picture are specified in some other reference frame, they must be converted to Cartesian co-ordinates before they can be input to the graphics package. Special-purpose packages may allow use of other co-ordinate frames that are appropriate to the application. We can construct the shape of individual objects in a scene within separate co-ordinate reference frames, called "modeling co-ordinates" or "local co-ordinates" or "master co-ordinates". Once individual object shapes have been specified, we can place the objects into appropriate positions within the scene using a reference frame called world co-ordinates. The world co-ordinate description of the scene is then transferred to one or more output-device reference frames for display. The display co-ordinate systems are referred to as device co-ordinates, or screen co-ordinates in the case of a video monitor. Generally, a graphics system first converts world co-ordinates to normalized device co-ordinates before the final conversion to specific device co-ordinates. An initial modeling co-ordinate position (xmc, ymc) is transferred to a device co-ordinate position (xdc, ydc) with the sequence (xmc, ymc) → (xwc, ywc) → (xnc, ync) → (xdc, ydc). The normalized co-ordinates satisfy the inequalities 0 ≤ xnc ≤ 1, 0 ≤ ync ≤ 1, and the device co-ordinates 'xdc' and 'ydc' are integers within the range (0, 0) to (xmax, ymax) for a particular device.
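The sequence (xmc, ymc) → (xwc, ywc) → (xnc, ync) → (xdc, ydc) can be sketched as three small mapping functions. The transform parameters below (a simple translation for the modeling step, a rectangular world window) are illustrative assumptions, not from the text:

```python
# Sketch of the co-ordinate pipeline: modeling -> world -> normalized -> device.
def modeling_to_world(x_mc, y_mc, tx=0.0, ty=0.0):
    # place the object into the scene (a simple translation here)
    return x_mc + tx, y_mc + ty

def world_to_normalized(x_wc, y_wc, wxmin, wymin, wxmax, wymax):
    # map the world window onto the unit square: 0 <= xnc, ync <= 1
    return ((x_wc - wxmin) / (wxmax - wxmin),
            (y_wc - wymin) / (wymax - wymin))

def normalized_to_device(x_nc, y_nc, xmax, ymax):
    # integer device co-ordinates in the range (0, 0) to (xmax, ymax)
    return round(x_nc * xmax), round(y_nc * ymax)
```

Passing through the unit square first is what makes the world-to-device step independent of any particular device's (xmax, ymax).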
• 26. 25 GRAPHICS FUNCTIONS: A general-purpose graphics package provides users with a variety of functions for creating and manipulating pictures. The basic building blocks for pictures are referred to as output primitives. They include character strings and geometric entities such as points, straight lines, curved lines, filled areas and shapes defined with arrays of color points. Attributes are the properties of the output primitives; that is, an attribute describes how a particular primitive is to be displayed. We can change the size, position or orientation of an object within a scene using geometric transformations. Similar modeling transformations are used to construct a scene using object descriptions given in modeling co-ordinates. Viewing transformations are used to specify the view that is to be presented and the portion of the output display area that is to be used. Pictures can be subdivided into component parts called structures or segments or objects, depending on the software package in use. Interactive graphics applications use various kinds of input devices, such as a mouse, a tablet or a joystick. Input functions are used to control and process the data flow from those interactive devices. A graphics package contains a number of housekeeping tasks, such as clearing a display screen and initializing parameters. The functions for carrying out these chores fall under the heading of control operations. SOFTWARE STANDARDS: The primary goal of standardized graphics software is portability. When packages are designed with standard graphics functions, software can be moved
easily from one hardware system to another and used in different implementations and applications. International and national standards planning organizations have co-operated in an effort to develop a generally accepted standard for computer graphics. After considerable effort, this work on standards led to the development of the Graphical Kernel System (GKS). This system was adopted as the first graphics software standard by the International Standards Organization (ISO) and by others. The 2nd software standard to be developed and approved by the standards organizations was the Programmer's Hierarchical Interactive Graphics Standard (PHIGS), which is an extension of GKS. Standard graphics functions are defined as a set of specifications that is independent of any programming language. A language binding is then defined for a particular high-level programming language. Standardization for device interface methods is given in the Computer Graphics Interface (CGI) system, and the Computer Graphics Metafile (CGM) system specifies standards for archiving and transporting pictures. PHIGS WORKSTATION: A workstation is a computer system with a combination of input and output devices defined for a single user. In PHIGS and GKS, the term workstation is used to identify various combinations of graphics hardware and software. A PHIGS workstation can be a single output device, a single input device, a combination of input and output devices, a file, or even a window displayed on a video monitor. To define and use various "workstations" within an application program, we need to specify a workstation identifier and the workstation type.
• 28. 27 COMPONENTS OF GUI: A GUI uses a combination of technologies and devices to provide a platform the user can interact with for the tasks of gathering and producing information. A series of elements conforming to a visual language has evolved to represent information stored in computers. This makes it easier for people with little computer skill to work with and use computer software. The most common combination of such elements in GUIs is the WIMP (windows, icons, menus, pointer) paradigm, especially in personal computers. A window manager facilitates the interactions between windows, applications and the windowing system. The windowing system handles hardware devices such as pointing devices and graphics hardware, as well as the positioning of the cursor. In personal computers, all these elements are modeled through a desktop metaphor to produce a simulation called a desktop environment, in which the display represents a desktop upon which documents and folders of documents can be placed. USER INTERFACE AND INTERACTION DESIGN: Designing the visual composition and temporal behavior of a GUI is an important part of software application programming. Its goal is to enhance the efficiency and ease of use for the underlying logical design of a stored program, a design discipline known as usability. Techniques of user-centered design are used to ensure that the visual language introduced in the design is well tailored to the tasks it must perform.
• 29. 28 The widgets of a well-designed interface are selected to support the actions necessary to achieve the goals of the user. A model-view-controller allows for a flexible structure in which the interface is independent from, and indirectly linked to, application functionality, so the GUI can be easily customized. This allows the user to select or design a different skin at will and eases the designer's work to change the interface as user needs evolve. The visible graphical interface features of an application are sometimes referred to as "chrome". Larger widgets, such as windows, usually provide a frame or container for the main presentation content, such as a web page, email message or drawing. A GUI may be designed for the rigorous requirements of a vertical market. This is known as an "Application-Specific Graphical User Interface". Examples of application-specific GUIs are: i) Touch screen point-of-sale software used by wait staff in a busy restaurant. ii) Self-service checkouts used in a retail store. iii) Automated Teller Machines (ATMs). iv) Information kiosks in a public space, like a train station or a museum. v) Monitors or control screens in an embedded industrial application which employs a real-time operating system (RTOS). COMMAND LINE INTERFACES: GUIs were introduced in reaction to the steep learning curve of command line interfaces (CLIs), which require commands to be typed on the keyboard. Since the commands available in command line interfaces can be numerous, complicated operations can be completed using a short sequence of words and symbols. This allows for greater efficiency and productivity once many commands are learnt, but
reaching this level takes some time because the command words are not easily discoverable and are not mnemonic. Command line interfaces use modes only in limited forms, such as the current directory and environment variables. Most modern operating systems provide both a GUI and some level of a CLI, although the GUIs usually receive more attention. THREE-DIMENSIONAL USER INTERFACES: Three-dimensional images are projected onto screens in two dimensions. Since this technique has been in use for many years, the recent use of the term three-dimensional must be considered a declaration by equipment marketers that the speed of three-dimensional to two-dimensional projection is adequate for use in standard GUIs.
• 31. 30 CHAPTER – 3 LINE DRAWING ALGORITHMS: 1. SIMPLE LINE DRAWING ALGORITHM: The Cartesian slope-intercept equation for a straight line is: y = mx + b => b = y – mx, where m = slope of the line and b = y-intercept. If the 2 end points of a line segment are specified at positions (x1, y1) and (x2, y2), the values for the slope 'm' and y-intercept 'b' are: m = ∆y/∆x = (y2 – y1)/(x2 – x1) => ∆y = m∆x ------- (1) and ∆x = ∆y/m ------ (2) b = y1 – mx1. Here, '∆y' is the y-interval computed for a given x-interval ∆x along the line: ∆y = m∆x. Similarly, the x-interval ∆x corresponding to a specified ∆y is: ∆x = ∆y/m. For lines with slope magnitude |m| < 1, '∆x' increases in unit steps, so calculate '∆y' as: ∆y = m∆x. For |m| > 1, '∆y' increases in unit steps, so calculate '∆x' as: ∆x = ∆y/m. For |m| = 1, ∆x and ∆y are both incremented by the same amount. [Figure: Line path between end point positions (x1, y1) and (x2, y2), showing the intervals ∆x and ∆y.]
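The sampling rule above (step in unit intervals along the co-ordinate of greater change and solve y = mx + b for the other) can be sketched as follows. This is a minimal illustration for lines running left to right and bottom to top, not the book's code:

```python
# Slope-intercept line sampling: step along the axis of greater change
# and round the computed co-ordinate to the nearest pixel.
def line_points(x1, y1, x2, y2):
    points = [(x1, y1)]
    m = (y2 - y1) / (x2 - x1)          # slope; assumes x1 != x2
    if abs(m) <= 1:                     # |m| <= 1: step x, compute y
        x = x1
        while x < x2:
            x += 1
            points.append((x, round(y1 + m * (x - x1))))
    else:                               # |m| > 1: step y, compute x
        y = y1
        while y < y2:
            y += 1
            points.append((round(x1 + (y - y1) / m), y))
    return points
```

Running it on the end points (3, 2) and (7, 8) used in the example below yields 7 pixel positions from (3, 2) to (7, 8).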
• 32. 31 Example: Draw a line from point (3, 2) to (7, 8). Here, (x1, y1) = (3, 2) and (x2, y2) = (7, 8). So m = (y2 – y1)/(x2 – x1) = (8 – 2)/(7 – 3) = 6/4 = 3/2 > 1. That is, for m > 1, ∆y increases in unit steps and ∆x = ∆y/m.
∆x | ∆y | x = x1 + ∆x | y = y1 + ∆y
2/3 | 1 | 11/3 | 3
4/3 | 2 | 13/3 | 4
2 | 3 | 5 | 5
8/3 | 4 | 17/3 | 6
10/3 | 5 | 19/3 | 7
4 | 6 | 7 (x2) | 8 (y2)
2. DIGITAL DIFFERENTIAL ANALYSER ALGORITHM (DDA): This is a scan-conversion line algorithm based on calculating either ∆y or ∆x as: ∆y = m∆x and ∆x = ∆y/m. We sample the line at unit intervals in one co-ordinate and determine the corresponding integer values nearest the line path for the other co-ordinate. Consider first a line with positive slope. If m ≤ 1, we sample at unit x-intervals (∆x = 1) and calculate successive 'y' values as: yk+1 = yk + m, where k takes integer values starting from 1 for the first point and increases by 1 until the final end point is reached. Since 'm' can be any real number between 0 and
• 33. 32 1, the calculated 'y' values must be rounded to the nearest integer. If m > 1, we sample at unit y-intervals (∆y = 1) and calculate each succeeding 'x' value as: xk+1 = xk + 1/m. Here, the lines are processed from the left end point to the right end point. If a line is processed from the right end point to the left end point, then ∆x = -1 and yk+1 = yk – m (for m ≤ 1); when ∆y = -1 and m > 1, then xk+1 = xk – 1/m. Example: Draw a line from point (3, 2) to (7, 8). Here, xk = 3, yk = 2 and m = (y2 – y1)/(x2 – x1) = (8 – 2)/(7 – 3) = 6/4 = 3/2 > 1. Since m > 1, yk increases by 1 at each step.
xk+1 = xk + 1/m | yk+1 = yk + 1
11/3 | 3
13/3 | 4
15/3 = 5 | 5
17/3 | 6
19/3 | 7
7 (x2) | 8 (y2)
3. BRESENHAM'S LINE DRAWING ALGORITHM: An accurate and efficient raster line-generating algorithm was developed by Bresenham. In this approach, for slope |m| < 1, pixel positions along a line path are determined by sampling at unit x-intervals. Starting from the left end point (x0, y0) of a given line, we step to each successive column (x –
• 34. 33 position) and plot the pixel whose scan-line 'y' value is closest to the line path. For the kth step, assuming the pixel at (xk, yk) is to be displayed, we need to decide which pixel to plot in column xk+1; the choices of pixel position are: (xk+1, yk) and (xk+1, yk+1). [Figure: A section of the screen grid showing candidate pixel rows yk to yk+3 near the line y = mx + b.] At xk+1, we label the vertical pixel separations from the mathematical line path as 'd1' and 'd2'. So at xk+1, the 'y' co-ordinate is: y = m(xk + 1) + b ------- (1) d1 = y – yk = m(xk + 1) + b – yk d2 = (yk + 1) – y = yk + 1 – m(xk + 1) – b The difference between these 2 separations is: d1 – d2 = m(xk + 1) + b – yk – (yk + 1) + m(xk + 1) + b = 2m(xk + 1) + 2b – yk – (yk + 1) = 2m(xk + 1) + 2b – yk – yk – 1 => d1 – d2 = 2m(xk + 1) + 2b – 2yk – 1 ------- (2) The decision parameter 'Pk' for the kth step in the line algorithm is obtained by substituting m = ∆y/∆x. So, Pk = ∆x(d1 – d2) = ∆x(2m(xk + 1) + 2b – 2yk – 1)
• 35. 34 = ∆x(2∆y/∆x (xk + 1) + 2b – 2yk – 1) = 2∆y(xk + 1) + 2b∆x – 2yk∆x – ∆x = 2∆y·xk + 2∆y + 2b∆x – 2yk∆x – ∆x = 2∆y·xk – 2∆x·yk + 2∆y + 2b∆x – ∆x = 2∆y·xk – 2∆x·yk + 2∆y + ∆x(2b – 1) = 2∆y·xk – 2∆x·yk + C => Pk = 2∆y·xk – 2∆x·yk + C ------- (3) The sign of 'Pk' is the same as the sign of d1 – d2, since ∆x > 0. Here C is a constant with value C = 2∆y + ∆x(2b – 1), independent of the pixel position. If the pixel at 'yk' is closer to the line path than the pixel at yk+1 (i.e. d1 < d2), then the decision parameter 'Pk' is negative. At step k + 1, the decision parameter is evaluated as: Pk+1 = 2∆y·xk+1 – 2∆x·yk+1 + C ------- (4) Now, subtracting equation (3) from equation (4), we get: Pk+1 – Pk = 2∆y(xk+1 – xk) – 2∆x(yk+1 – yk), but xk+1 = xk + 1, so that => Pk+1 = Pk + 2∆y – 2∆x(yk+1 – yk) ------- (5) where yk+1 – yk is either 0 or 1, depending on the sign of parameter Pk.
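Putting the incremental rule of equation (5) together with the starting value P0 = 2∆y – ∆x gives the following sketch for slopes 0 ≤ m ≤ 1 (an illustrative version, not the book's code):

```python
# Bresenham's line algorithm for 0 <= m <= 1: at each unit x-step,
# update the decision parameter by 2*dy (Pk < 0) or 2*dy - 2*dx (Pk >= 0).
def bresenham(x0, y0, x1, y1):
    dx, dy = x1 - x0, y1 - y0
    p = 2 * dy - dx               # initial decision parameter P0
    x, y = x0, y0
    points = [(x, y)]
    for _ in range(dx):           # one step per unit x-interval
        x += 1
        if p < 0:                 # stay on the same scan line
            p += 2 * dy
        else:                     # step up to the next scan line
            y += 1
            p += 2 * dy - 2 * dx
        points.append((x, y))
    return points
```

Running this on the end points (20, 10) and (30, 18) used in the worked example in this chapter reproduces the tabulated pixel positions, since only integer additions are needed per step.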
• 36. 35 The recursive calculation of decision parameters is performed at each integer x-position, starting at the left co-ordinate end point of the line. The first parameter 'P0' is evaluated from equation (3) at the starting pixel position (x0, y0), with 'm' evaluated as ∆y/∆x. So, P0 = 2∆y – ∆x --- (6). ALGORITHM: STEP 1: Input the two line end points and store the left end point in (x0, y0). STEP 2: Load (x0, y0), the initial point of the line, into the frame buffer; then plot the first point. STEP 3: Calculate the constants ∆x, ∆y, 2∆y and 2∆y – 2∆x, and obtain the starting value for the decision parameter: P0 = 2∆y – ∆x. STEP 4: At each 'xk' along the line, starting at k = 0, perform the following test: If Pk < 0, plot (xk + 1, yk) and Pk+1 = Pk + 2∆y. Otherwise, plot (xk + 1, yk + 1) and Pk+1 = Pk + 2∆y – 2∆x. STEP 5: Repeat STEP 4, ∆x times. Example: To illustrate the algorithm, we digitize the line with end points (20, 10) and (30, 18). Here the line has a slope: m = (y2 – y1)/(x2 – x1) = (18 – 10)/(30 – 20) = 8/10 = 0.8 < 1
• 37. 36 ∆x = 30 – 20 = 10 and ∆y = 18 – 10 = 8. The initial decision parameter is: P0 = 2∆y – ∆x = 2 x 8 – 10 = 16 – 10 = 6, and the increments for calculating successive decision parameters are: ∆x = 10, ∆y = 8, 2∆y = 16, 2∆y – 2∆x = 16 – 20 = –4. At k = 0, we plot the initial point (xk, yk) = (x0, y0) = (20, 10) and determine successive pixel positions along the line path from the decision parameter as:
k | Pk | (xk+1, yk+1) | Decision parameter calculation
0 | 6 | (21, 11) | P0 = 2∆y – ∆x = 2 x 8 – 10 = 16 – 10 = 6
1 | 2 | (22, 12) | Pk+1 = Pk + 2∆y – 2∆x = 6 + 2 x 8 – 2 x 10 = 2
2 | –2 | (23, 12) | Pk+1 = Pk + 2∆y – 2∆x = 2 + 2 x 8 – 2 x 10 = –2
3 | 14 | (24, 13) | Pk+1 = Pk + 2∆y = –2 + 2 x 8 = 14
4 | 10 | (25, 14) | Pk+1 = Pk + 2∆y – 2∆x = 14 + 2 x 8 – 2 x 10 = 10
5 | 6 | (26, 15) | Pk+1 = Pk + 2∆y – 2∆x = 10 + 2 x 8 – 2 x 10 = 6
6 | 2 | (27, 16) | Pk+1 = Pk + 2∆y – 2∆x = 6 + 2 x 8 – 2 x 10 = 2
7 | –2 | (28, 16) | Pk+1 = Pk + 2∆y – 2∆x = 2 + 2 x 8 – 2 x 10 = –2
8 | 14 | (29, 17) | Pk+1 = Pk + 2∆y = –2 + 2 x 8 = 14
9 | 10 | (30, 18) | Pk+1 = Pk + 2∆y – 2∆x = 14 + 2 x 8 – 2 x 10 = 10
The process continues for ∆x = 10 steps, with k running from 0 to 9. CIRCLE GENERATING ALGORITHMS: 1. PROPERTIES OF THE CIRCLE: A circle is defined as the set of points that are all at a given distance 'r' from a center position (xc, yc). This distance relationship is expressed in Cartesian co-ordinates as:
  • 38. 37 (x − xc)² + (y − yc)² = r² ------ (1). The general equation of a circle centered at the origin is: x² + y² = r² ------- (2). We could calculate the positions of points on the circle circumference by stepping along the x-axis in unit steps from xc − r to xc + r and calculating the corresponding y-values at each position as: y = yc ± √(r² − (x − xc)²) ------ (3). To calculate points along the circular boundary, polar co-ordinates r and θ can also be used. Expressing the circle equation in parametric polar form yields the pair of equations: x = xc + r·cosθ and y = yc + r·sinθ. Computation can be reduced by considering the symmetry of circles. The shape of the circle is similar in each quadrant. We can generate the circle section in the second quadrant of the xy-plane by noting that the two circle sections are symmetric with respect to the y-axis, and the circle sections in the 3rd and 4th quadrants can be obtained from sections in the first and second quadrants by considering symmetry about the x-axis. Circle sections in adjacent octants within one quadrant are symmetric with respect to the 45-degree line dividing the two octants. These symmetry conditions are illustrated in the given figure, where a point at position (x, y) on a one-eighth circle sector is mapped onto the seven circle points in the other
  • 39. 38 octants of the xy-plane. Taking advantage of this symmetry, we can generate all pixel positions around a circle by calculating only the points within the sector from x = 0 to x = y: each point (x, y) maps to (y, x), (−x, y), (−y, x), (−x, −y), (−y, −x), (x, −y) and (y, −x), reflected about the 45° line.
2. MID-POINT CIRCLE/BRESENHAM'S CIRCLE GENERATING ALGORITHM: We set up the algorithm to calculate pixel positions around a circle path centered at the co-ordinate origin (0, 0). Each calculated position (x, y) is then moved to its proper screen position by adding xc to x and yc to y, where (xc, yc) is the center position of the circle with radius r. Along the circle section from x = 0 to x = y in the 1st quadrant, the magnitude of the slope of the curve varies from 0 to 1. To apply the midpoint method, we define the circle function with (0, 0) as its center: Fcircle(x, y) = x² + y² − r² --- (1). At any point (x, y) it has three properties:
Fcircle(x, y) < 0, if (x, y) is inside the circle boundary
Fcircle(x, y) = 0, if (x, y) is on the circle boundary --- (2)
Fcircle(x, y) > 0, if (x, y) is outside the circle boundary
  • 40. 39 Having plotted the pixel at (xk, yk), we next determine whether the pixel at position (xk + 1, yk) or the one at position (xk + 1, yk − 1) is closer to the circle. The decision parameter is the circle function of equation (1) evaluated at the midpoint between these two pixels: Pk = Fcircle(xk + 1, yk − 1/2) = (xk + 1)² + (yk − 1/2)² − r² --- (3) [since the midpoint of yk and yk − 1 is (2yk − 1)/2 = yk − 1/2]. If Pk < 0, the midpoint is inside the circle and the pixel on scan line yk is closer to the circle boundary; otherwise the midpoint is outside or on the circle boundary and we select the pixel on scan line yk − 1. The recursive expression for the next decision parameter is obtained by evaluating the circle function at sampling position xk+1 + 1 = xk + 2:
Pk+1 = Fcircle(xk+1 + 1, yk+1 − 1/2) = [(xk + 1) + 1]² + (yk+1 − 1/2)² − r²
=> Pk+1 = Pk + 2(xk + 1) + (yk+1² − yk²) − (yk+1 − yk) + 1 --- (4),
where yk+1 is either yk or yk − 1, depending on the sign of Pk. The increment for obtaining Pk+1 is therefore either 2xk+1 + 1 (if Pk is negative) or 2xk+1 + 1 − 2yk+1. Evaluation of the terms 2xk+1 and 2yk+1 can also be done incrementally as: 2xk+1 = 2xk + 2 and 2yk+1 = 2yk − 2. At the start position (0, r), the initial decision parameter is obtained by evaluating the circle function at (x0, y0) = (0, r). So,
  • 41. 40 P0 = Fcircle(1, r − 1/2) = 1 + (r − 1/2)² − r² = 1 + r² − r + 1/4 − r² = 5/4 − r --- (5). If the radius r is specified as an integer, we can simply round P0 to: P0 = 1 − r (for r an integer).
ALGORITHM:
STEP 1: Input radius r and circle center (xc, yc), and obtain the first point on the circumference of a circle centered on the origin as (x0, y0) = (0, r).
STEP 2: Calculate the initial value of the decision parameter as P0 = 5/4 − r.
STEP 3: At each xk position, starting at k = 0, perform the following test:
i) If Pk < 0, the next point along the circle centered on (0, 0) is (xk + 1, yk) and Pk+1 = Pk + 2xk+1 + 1; otherwise,
ii) the next point along the circle is (xk + 1, yk − 1) and Pk+1 = Pk + 2xk+1 + 1 − 2yk+1,
where 2xk+1 = 2xk + 2 and 2yk+1 = 2yk − 2. We continue this while xk < yk.
STEP 4: Determine symmetry points in the other seven octants.
STEP 5: Move each calculated pixel position (x, y) onto the circular path centered on (xc, yc) and plot the co-ordinate values x = x + xc and y = y + yc.
STEP 6: Repeat STEPs 3 through 5 until x ≥ y.
  • 42. 41 Example: Given a circle radius r = 10, we demonstrate the midpoint circle algorithm by determining positions along the circle octant in the first quadrant from x = 0 to x = y. The initial value of the decision parameter is P0 = 1 − r = −9. For the circle centered on the co-ordinate origin, the initial point is (x0, y0) = (0, 10) and the initial increment terms are 2x0 = 0, 2y0 = 20. Successive decision parameter values and positions along the circle path are calculated using the midpoint method as:
k Pk (xk+1, yk+1) 2xk+1 2yk+1
0 −9 (1, 10) 2 20
1 −6 (2, 10) 4 20
2 −1 (3, 10) 6 20
3 6 (4, 9) 8 18
4 −3 (5, 9) 10 18
5 8 (6, 8) 12 16
6 5 (7, 7) 14 14
ELLIPSE GENERATING ALGORITHM:
1. PROPERTIES OF THE ELLIPSE: An ellipse is the set of points such that the sum of the distances from two fixed positions (the foci) is the same for all points. If the distances to the two foci from any point P = (x, y) on the ellipse are labeled d1 and d2, then the general equation of the ellipse is stated as:
  • 43. 42 d1 + d2 = Constant --- (1). Expressing the distances d1 and d2 in terms of the focal co-ordinates F1 = (x1, y1) and F2 = (x2, y2), we have: √[(x − x1)² + (y − y1)²] + √[(x − x2)² + (y − y2)²] = Constant --- (2). By squaring this equation, isolating the remaining radical and then squaring again, we can write the general ellipse equation in the form: Ax² + By² + Cxy + Dx + Ey + F = 0 --- (3), where the coefficients A, B, C, D, E and F are evaluated in terms of the focal co-ordinates and the dimensions of the major and minor axes of the ellipse. For an ellipse in standard position, the major and minor axes are oriented parallel to the x and y axes. Parameter rx is the semi-major axis and parameter ry is the semi-minor axis. The equation of the ellipse can then be written in terms of the ellipse center co-ordinates (xc, yc) and the parameters rx and ry as: [(x − xc)/rx]² + [(y − yc)/ry]² = 1 --- (4). Using polar co-ordinates r and θ, the parametric equations of the ellipse are:
  • 44. 43 x = xc + rx·cosθ and y = yc + ry·sinθ --- (5). Symmetry considerations can be used to further reduce computation: a point (x, y) computed in the first quadrant maps to (−x, y), (x, −y) and (−x, −y) in the other three quadrants.
2. MID-POINT ELLIPSE/BRESENHAM'S ELLIPSE GENERATING ALGORITHM: The algorithm is applied throughout the 1st quadrant in two parts. The division of the 1st quadrant is according to the slope of an ellipse with rx < ry: we take unit steps in the x-direction where the slope of the curve has a magnitude less than 1 (region 1), and unit steps in the y-direction where the slope has a magnitude greater than 1 (region 2). The boundary between the two regions is where the slope m = −1. For a sequential implementation of the midpoint algorithm, we take the start position at (0, ry) and process clockwise throughout the 1st quadrant. The ellipse function from equation (4) with (xc, yc) = (0, 0) is: Fellipse(x, y) = ry²x² + rx²y² − rx²ry² --- (6)
  • 45. 44 At any point (x, y) it has three properties:
Fellipse(x, y) < 0, if (x, y) is inside the ellipse boundary
Fellipse(x, y) = 0, if (x, y) is on the ellipse boundary --- (7)
Fellipse(x, y) > 0, if (x, y) is outside the ellipse boundary
Thus the ellipse function Fellipse(x, y) serves as the decision parameter in the midpoint algorithm. Starting at (0, ry), we take unit steps in the x-direction until we reach the boundary between region 1 and region 2. The slope is calculated from equation (6) as: m = dy/dx = −2ry²x/2rx²y --- (8). At the boundary between region 1 and region 2, dy/dx = −1 and 2ry²x = 2rx²y; so we move out of region 1 and into region 2 when 2ry²x ≥ 2rx²y. In region 1, we test the midpoint between the two candidate pixels at sampling position xk + 1. Assuming position (xk, yk) has been selected at the previous step, we determine the next position along the ellipse path by evaluating the region-1 decision parameter (written here P1k) at this midpoint: P1k = Fellipse(xk + 1, yk − 1/2) = ry²(xk + 1)² + rx²(yk − 1/2)² − rx²ry² --- (9). If P1k < 0, the midpoint is inside the ellipse and the pixel on scan line yk is closer to the ellipse boundary; otherwise the midpoint is outside or on the ellipse boundary and we select the pixel on scan line yk − 1.
  • 46. 45 At the next sampling position (xk+1 + 1 = xk + 2), the decision parameter for region 1 is evaluated as:
P1k+1 = Fellipse(xk+1 + 1, yk+1 − 1/2) = ry²[(xk + 1) + 1]² + rx²(yk+1 − 1/2)² − rx²ry²
=> P1k+1 = P1k + 2ry²(xk + 1) + ry² + rx²[(yk+1 − 1/2)² − (yk − 1/2)²] --- (10),
where yk+1 is either yk or yk − 1, depending on the sign of P1k. The decision parameters are incremented by the following amounts:
increment = 2ry²xk+1 + ry², if P1k < 0
increment = 2ry²xk+1 + ry² − 2rx²yk+1, if P1k ≥ 0
At the initial position (0, ry), the two terms evaluate to:
2ry²x = 0 --- (11)
2rx²y = 2rx²ry --- (12)
As x and y are incremented, updated values are obtained by adding 2ry² to equation (11) and subtracting 2rx² from equation (12). The updated values are compared at each step, and we move from region 1 to region 2 when the condition 2ry²x ≥ 2rx²y is satisfied. In region 1, the initial value of the decision parameter is obtained by evaluating the ellipse function at the start position (x0, y0) = (0, ry):
P10 = Fellipse(1, ry − 1/2) = ry² + rx²(ry − 1/2)² − rx²ry²
=> P10 = ry² − rx²ry + 1/4·rx² --- (13)
  • 47. 46 Over region 2, we sample at unit steps in the negative y-direction, and the midpoint is now taken between horizontal pixels at each step. For this region, the decision parameter (written here P2k) is evaluated as: P2k = Fellipse(xk + 1/2, yk − 1) = ry²(xk + 1/2)² + rx²(yk − 1)² − rx²ry² --- (14). If P2k > 0, the midpoint is outside the ellipse and we select the pixel at column xk. If P2k ≤ 0, the midpoint is inside or on the ellipse boundary and we select the pixel position xk + 1. To determine the relationship between successive decision parameters in region 2, we evaluate the ellipse function at the next sampling step yk+1 − 1 = yk − 2:
P2k+1 = Fellipse(xk+1 + 1/2, yk+1 − 1) = ry²(xk+1 + 1/2)² + rx²[(yk − 1) − 1]² − rx²ry²
=> P2k+1 = P2k − 2rx²(yk − 1) + rx² + ry²[(xk+1 + 1/2)² − (xk + 1/2)²] --- (15),
where xk+1 is set either to xk or to xk + 1, depending on the sign of P2k. When we enter region 2, the initial position (x0, y0) is taken as the last position selected in region 1, and the initial decision parameter in region 2 is: P20 = Fellipse(x0 + 1/2, y0 − 1) = ry²(x0 + 1/2)² + rx²(y0 − 1)² − rx²ry² --- (16). To simplify the calculation of P20, we could instead select pixel positions in counter-clockwise order starting at (rx, 0); unit steps would then be taken in the positive y-direction up to the last position selected in region 1.
  • 48. 47 ALGORITHM:
STEP 1: Input rx, ry and ellipse center (xc, yc), and obtain the first point on an ellipse centered on the origin as (x0, y0) = (0, ry).
STEP 2: Calculate the initial value of the decision parameter in region 1 as: P10 = ry² − rx²ry + 1/4·rx²
STEP 3: At each xk position in region 1, starting at k = 0, perform the following test:
i) If P1k < 0, the next point along the ellipse centered on (0, 0) is (xk + 1, yk) and P1k+1 = P1k + 2ry²xk+1 + ry²; otherwise,
ii) the next point is (xk + 1, yk − 1) and P1k+1 = P1k + 2ry²xk+1 − 2rx²yk+1 + ry²,
with 2ry²xk+1 = 2ry²xk + 2ry² and 2rx²yk+1 = 2rx²yk − 2rx², and continue until 2ry²x ≥ 2rx²y.
STEP 4: Calculate the initial value of the decision parameter in region 2, using the last point (x0, y0) calculated in region 1: P20 = ry²(x0 + 1/2)² + rx²(y0 − 1)² − rx²ry²
STEP 5: At each yk position in region 2, starting at k = 0, perform the following test:
i) If P2k > 0, the next point along the ellipse centered on (0, 0) is (xk, yk − 1) and P2k+1 = P2k − 2rx²yk+1 + rx²; otherwise,
ii) the next point is (xk + 1, yk − 1) and
  • 49. 48 P2k+1 = P2k + 2ry²xk+1 − 2rx²yk+1 + rx²,
using the same incremental calculations for x and y as in region 1, and continue until y = 0.
STEP 6: Determine symmetry points in the other three quadrants.
STEP 7: Move each calculated pixel position (x, y) onto the elliptical path centered on (xc, yc) and plot the co-ordinate values x = x + xc, y = y + yc.
Example: Given input ellipse parameters rx = 8 and ry = 6, we illustrate the steps in the midpoint ellipse algorithm by determining raster positions along the ellipse path in the first quadrant. The initial values and increments for the decision parameter calculations are:
2ry²x = 0 (with increment 2ry² = 72) and 2rx²y = 2rx²ry = 768 (with increment −2rx² = −128)
For Region 1: The initial point for the ellipse centered on the origin is (x0, y0) = (0, 6), and the initial decision parameter value is: P10 = ry² − rx²ry + 1/4·rx² = 36 − 384 + 16 = −332. Successive decision parameter values and positions along the ellipse path are calculated using the midpoint method as:
  • 50. 49 k P1k (xk+1, yk+1) 2ry²xk+1 2rx²yk+1
0 −332 (1, 6) 72 768
1 −224 (2, 6) 144 768
2 −44 (3, 6) 216 768
3 208 (4, 5) 288 640
4 −108 (5, 5) 360 640
5 288 (6, 4) 432 512
6 244 (7, 3) 504 384
We now move out of region 1, since 2ry²x > 2rx²y.
For Region 2: The initial point is (x0, y0) = (7, 3) and the initial decision parameter value is: P20 = Fellipse(7 + 1/2, 2) = 36(7.5)² + 64(2)² − 2304 = −23. The remaining positions along the ellipse path in the 1st quadrant are then calculated as:
k P2k (xk+1, yk+1) 2ry²xk+1 2rx²yk+1
0 −23 (8, 2) 576 256
1 361 (8, 1) 576 128
2 297 (8, 0) − −
FILLED AREA PRIMITIVES: A standard output primitive is a solid-color or patterned polygon area. Other kinds of area primitives are sometimes available, but polygons are easier to process since they have linear boundaries.
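Returning to the midpoint ellipse algorithm: the complete two-region procedure can be sketched as below (floating-point decision parameters are used so the initial values match the derivation; the function name and returned point list are illustrative):

```python
def midpoint_ellipse_quadrant(rx, ry):
    """First-quadrant pixel positions for an ellipse centered at (0, 0), rx < ry or rx > ry."""
    rx2, ry2 = rx * rx, ry * ry
    x, y = 0, ry
    points = [(x, y)]
    # Region 1: unit steps in x while 2*ry2*x < 2*rx2*y (slope magnitude < 1)
    p = ry2 - rx2 * ry + rx2 / 4.0            # P1_0 = ry^2 - rx^2*ry + rx^2/4
    while 2 * ry2 * x < 2 * rx2 * y:
        x += 1
        if p < 0:
            p += 2 * ry2 * x + ry2
        else:
            y -= 1
            p += 2 * ry2 * x + ry2 - 2 * rx2 * y
        points.append((x, y))
    # Region 2: unit steps in -y until y = 0
    p = ry2 * (x + 0.5) ** 2 + rx2 * (y - 1) ** 2 - rx2 * ry2   # P2_0
    while y > 0:
        y -= 1
        if p > 0:
            p += rx2 - 2 * rx2 * y            # x unchanged
        else:
            x += 1
            p += rx2 - 2 * rx2 * y + 2 * ry2 * x
        points.append((x, y))
    return points
```

For rx = 8, ry = 6 this reproduces the pixel positions of both region tables above; each point would then be reflected into the other three quadrants and translated by (xc, yc).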
  • 51. 50 POLYGON FILLING: It is the process of coloring in a defined area or region. A region can be classified as a boundary-defined region or an interior-defined region. When a region is defined in terms of the pixels it comprises, it is known as an interior-defined region; the algorithm used for filling interior-defined regions is known as the Flood-Fill Algorithm. When a region is defined in terms of the bounding pixels that outline it, it is known as a boundary-defined region; the algorithm used for filling a boundary-defined region is known as the Boundary-Fill Algorithm.
[INTERIOR-DEFINED REGION] [BOUNDARY-DEFINED REGION]
FLOOD-FILL ALGORITHM: The user generally provides an initial pixel, known as the seed pixel. Starting from the seed pixel, the algorithm inspects each of the surrounding eight pixels to determine whether the extent has been reached. In a 4-connected region, only four surrounding pixels are inspected: left, right, top and bottom. The process is repeated until all pixels inside the region have been inspected.
  • 52. 51 The flood-fill procedure for a 4-connected region is:
void floodfill4(int x, int y, int fillcolor, int oldcolor)
{
   if (getPixel(x, y) == oldcolor)
   {
      setColor(fillcolor);
      setPixel(x, y);
      floodfill4(x + 1, y, fillcolor, oldcolor);
      floodfill4(x - 1, y, fillcolor, oldcolor);
      floodfill4(x, y + 1, fillcolor, oldcolor);
      floodfill4(x, y - 1, fillcolor, oldcolor);
   }
}
[Figures: seed pixel S with its eight 8-connected and its four 4-connected neighboring pixels P]
  • 53. 52 The flood-fill procedure for an 8-connected region is:
void floodfill8(int x, int y, int fillcolor, int oldcolor)
{
   if (getPixel(x, y) == oldcolor)
   {
      setColor(fillcolor);
      setPixel(x, y);
      floodfill8(x + 1, y, fillcolor, oldcolor);
      floodfill8(x - 1, y, fillcolor, oldcolor);
      floodfill8(x, y + 1, fillcolor, oldcolor);
      floodfill8(x, y - 1, fillcolor, oldcolor);
      floodfill8(x + 1, y + 1, fillcolor, oldcolor);
      floodfill8(x + 1, y - 1, fillcolor, oldcolor);
      floodfill8(x - 1, y + 1, fillcolor, oldcolor);
      floodfill8(x - 1, y - 1, fillcolor, oldcolor);
   }
}
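The procedures above assume getPixel/setPixel primitives. A self-contained sketch can model the frame buffer as a grid of color codes indexed grid[y][x] (an assumption of this sketch, not of the original pseudocode); note that deep regions can exceed the recursion limit, so production code would typically use an explicit stack:

```python
def flood_fill4(grid, x, y, fillcolor, oldcolor):
    """Recursive 4-connected flood fill on a grid of color codes (grid[y][x])."""
    if fillcolor == oldcolor:
        return                      # nothing to do; avoids infinite recursion
    inside = 0 <= y < len(grid) and 0 <= x < len(grid[0])
    if inside and grid[y][x] == oldcolor:
        grid[y][x] = fillcolor      # repaint, then recurse on the four neighbors
        flood_fill4(grid, x + 1, y, fillcolor, oldcolor)
        flood_fill4(grid, x - 1, y, fillcolor, oldcolor)
        flood_fill4(grid, x, y + 1, fillcolor, oldcolor)
        flood_fill4(grid, x, y - 1, fillcolor, oldcolor)
```

The 8-connected variant would simply add recursive calls on the four diagonal neighbors.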
  • 54. 53 BOUNDARY-FILL ALGORITHM: A boundary-fill algorithm accepts as input the co-ordinates of an interior point (x, y), a fill color and a boundary color. Starting from (x, y), the procedure tests neighboring positions to determine whether they are of the boundary color; if not, they are painted with the fill color and their neighbors are tested in turn. Two methods are used for area filling: 4-connected and 8-connected. In the 4-connected form, four positions neighboring the current position are tested: right, left, above and below the current pixel. In the 8-connected form, the set of neighboring positions to be tested also includes the four diagonal pixels. An 8-connected boundary-fill algorithm would correctly fill the interior of the area shown.
[Figures: seed pixel S with 4-connected and 8-connected neighborhoods]
The procedure below is a recursive method for filling a 4-connected area with an intensity specified in parameter fill, up to a boundary color specified with parameter boundary:
void boundaryfill4(int x, int y, int fill, int boundary)
{
   int current;
  • 55. 54    current = getPixel(x, y);
   if ((current != boundary) && (current != fill))
   {
      setColor(fill);
      setPixel(x, y);
      boundaryfill4(x + 1, y, fill, boundary);
      boundaryfill4(x - 1, y, fill, boundary);
      boundaryfill4(x, y + 1, fill, boundary);
      boundaryfill4(x, y - 1, fill, boundary);
   }
}
We can extend this procedure to fill an 8-connected region by including four additional statements to test the diagonal positions, such as (x + 1, y − 1). The procedure for an 8-connected area is:
void boundaryfill8(int x, int y, int fill, int boundary)
{
   int current;
   current = getPixel(x, y);
  • 56. 55    if ((current != boundary) && (current != fill))
   {
      setColor(fill);
      setPixel(x, y);
      boundaryfill8(x + 1, y, fill, boundary);
      boundaryfill8(x - 1, y, fill, boundary);
      boundaryfill8(x, y + 1, fill, boundary);
      boundaryfill8(x, y - 1, fill, boundary);
      boundaryfill8(x + 1, y + 1, fill, boundary);
      boundaryfill8(x + 1, y - 1, fill, boundary);
      boundaryfill8(x - 1, y + 1, fill, boundary);
      boundaryfill8(x - 1, y - 1, fill, boundary);
   }
}
SCANLINE POLYGON-FILL ALGORITHM: For each scan line crossing a polygon, the area-fill algorithm locates the intersection points of the scan line with the polygon edges. These intersection
  • 57. 56 points are then sorted from left to right, and the corresponding frame buffer positions between each intersection pair are set to the specified fill color. In the figure, the four pixel intersection positions with the polygon boundaries define two stretches of interior pixels, from x = 10 to x = 14 and from x = 18 to x = 24. A scan line passing through a vertex intersects two polygon edges at that position, adding two points to the list of intersections for the scan line. Consider two scan lines at positions y and yꞌ that intersect edge endpoints. Scan line y intersects five polygon edges. Scan line yꞌ intersects an even number of edges, although it also passes through a vertex. The intersection points along scan line yꞌ correctly identify the interior pixel spans, but with scan line y we need to do some additional processing to determine the correct interior points. The topological difference between scan line y and scan line yꞌ is identified by noting the position of the intersecting edges relative to the scan line. For scan line y, the two intersecting edges sharing a vertex are on opposite sides of the scan line. But for scan line yꞌ, the two intersecting edges sharing a vertex are both above the scan line.
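The vertex problem just described is commonly resolved with a half-open span rule: each edge contributes an intersection only for y1 ≤ y < y2, so a vertex shared by edges on opposite sides of the scan line counts once, while one shared by edges on the same side counts twice or not at all. A sketch under that assumption (polygon given as a vertex list; names are illustrative):

```python
def scanline_spans(vertices, y):
    """Interior x-spans of a polygon on scan line y, using a half-open edge rule."""
    xs = []
    n = len(vertices)
    for i in range(n):
        (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % n]
        # the edge crosses the scan line if y lies in the half-open y-interval
        if (y1 <= y < y2) or (y2 <= y < y1):
            # x-intersection of the scan line with this edge
            xs.append(x1 + (y - y1) * (x2 - x1) / (y2 - y1))
    xs.sort()                                   # sort intersections left to right
    return [(xs[i], xs[i + 1]) for i in range(0, len(xs), 2)]
```

Each returned pair (xL, xR) would then be filled in the frame buffer along scan line y.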
  • 58. 57 Calculations performed in scan-conversion and other graphics algorithms typically take advantage of various coherence properties of the scene that is to be displayed. Coherence means simply that the properties of one part of a scene are related in some way to other parts of the scene, so that the relationship can be used to reduce processing. Consider two successive scan lines crossing a left edge of a polygon. The slope of this polygon boundary line can be expressed in terms of the scan-line intersection co-ordinates as: m = (yk+1 − yk)/(xk+1 − xk) --- (1). Since the change in y co-ordinates between the two scan lines is: yk+1 − yk = 1 --- (2), the x-intersection value xk+1 on the upper scan line can be determined from the x-intersection value xk on the preceding scan line as: xk+1 = xk + 1/m --- (3). Along an edge with slope m, the intersection value for scan line k above the initial scan line can thus be calculated as: xk = x0 + k/m --- (4). In a sequential fill algorithm, the increment of the x values by the amount 1/m along an edge can be accomplished with integer operations by recalling that the slope m is the ratio of two integers: m = ∆y/∆x, where ∆x and ∆y are the differences between the edge endpoint x and y co-ordinate values. So incremental calculations of the x-intercepts along an edge for successive scan lines can be expressed as: xk+1 = xk + ∆x/∆y --- (5)
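Equation (5), and the integer counter scheme it leads to, can be sketched as follows (an edge spanning ∆y scan lines with |m| ≥ 1 is assumed; names are illustrative):

```python
def edge_x_intercepts(x0, dx, dy):
    """Integer-only x-intercepts of an edge over dy successive scan lines (slope m = dy/dx, dy >= dx)."""
    xs = [x0]
    x, counter = x0, 0
    for _ in range(dy):
        counter += dx            # add the numerator of 1/m = dx/dy at each scan line
        if counter >= dy:
            x += 1               # crossed into the next pixel column
            counter -= dy
        xs.append(x)
    return xs
```

No floating-point division is needed: the counter carries the fractional part of x exactly.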
  • 59. 58 Using this equation, we can perform the integer evaluation of the x-intercepts by initializing a counter to 0, then incrementing the counter by the value ∆x each time we move up to a new scan line. Whenever the counter value becomes equal to or greater than ∆y, we increment the current x-intersection value by 1 and decrease the counter by the value ∆y.
ANTIALIASING: The distortion of information due to low-frequency sampling is called aliasing. We can improve the appearance of displayed raster lines by applying antialiasing methods that compensate for the under-sampling process. To avoid losing information from periodic objects, we need to set the sampling frequency to at least twice the highest frequency occurring in the object, referred to as the Nyquist sampling frequency (Nyquist sampling rate): fs = 2fmax. Another way to state this is that the sampling interval should be no larger than one-half the cycle interval, called the Nyquist sampling interval. For x-interval sampling, the Nyquist sampling interval ∆xs is: ∆xs = ∆xcycle/2, where ∆xcycle = 1/fmax.
[Figure: sampling positions (*) illustrating the effects of under-sampling]
  • 60. 59 In the above figure, the sampling interval is one and one-half times the cycle interval, so the sampling interval is at least three times too big. A straightforward antialiasing method is to increase the sampling rate by treating the screen as if it were covered with a finer grid than is actually available. We can then use multiple sample points across this finer grid to determine an appropriate intensity level for each screen pixel. This technique of sampling object characteristics at a high resolution and displaying the results at a lower resolution is called super-sampling (post-filtering). An alternative to super-sampling is to determine pixel intensity by calculating the areas of overlap of each pixel with the objects to be displayed; antialiasing by computing overlap areas is referred to as area sampling (pre-filtering), where the overlap areas are obtained by determining where object boundaries intersect individual pixel boundaries. Raster objects can also be antialiased by shifting the display location of pixel areas. This technique, called pixel phasing, is applied by "micro-positioning" the electron beam in relation to the object geometry.
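A minimal illustration of super-sampling: subdivide each pixel into an n × n grid of sample points and take the fraction of samples covered by the object as the pixel intensity. The half-plane below the line y = x/2 used here is a hypothetical test object, not from the text:

```python
def supersample_coverage(px, py, n=3):
    """Fraction of an n x n subpixel sample grid of pixel (px, py) lying below y = 0.5*x."""
    inside = 0
    for i in range(n):
        for j in range(n):
            # sample at the center of each of the n*n subpixel cells
            sx = px + (i + 0.5) / n
            sy = py + (j + 0.5) / n
            if sy <= 0.5 * sx:
                inside += 1
    return inside / (n * n)
```

The resulting fraction (between 0 and 1) would be mapped to one of the available intensity levels for the pixel, softening the staircase appearance of the edge.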
  • 61. 60 CHAPTER -4
LINE ATTRIBUTES: The basic attributes of a straight line segment are its type, its width and its color. Lines can also be displayed using selected pen or brush options.
i. LINE TYPE: This attribute includes solid lines, dashed lines and dotted lines. The line-drawing algorithm is modified to generate such lines by setting the length and spacing of displayed solid sections along the line path. A dashed line is displayed by generating an inter-dash spacing that is equal to the length of the solid sections. Both the length of the dashes and the inter-dash spacing are often specified as user options. A dotted line can be displayed by generating very short dashes with a spacing equal to or greater than the dash size. The command to set the line-type attribute is: set linetype(lt); where the parameter lt is assigned a positive integer value of 1, 2, 3 or 4 to generate lines that are, respectively, solid, dashed, dotted or dash-dotted. The line-type parameter lt could also be used to display variations in dot-dash patterns.
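The dash/space generation described above can be sketched as a filter over the pixel positions produced by a line-drawing algorithm (the function and its parameters are illustrative, not a standard API):

```python
def apply_line_type(points, dash_len, gap_len):
    """Keep pixels for dash_len positions, skip gap_len positions, repeating along the path."""
    period = dash_len + gap_len
    return [p for i, p in enumerate(points) if i % period < dash_len]
```

With gap_len equal to dash_len this yields the dashed style; very short dashes with gap_len ≥ dash_len give the dotted style.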
  • 62. 61 ii. LINE WIDTH: Implementation of line-width options depends on the capabilities of the output device. A line-width command is used to set the current line-width value in the attribute list: set linewidthscalefactor(lw). Here the line-width parameter lw is assigned a positive number to indicate the relative width of the line to be displayed. A value of 1 specifies a standard-width line; setting lw to 0.5 plots a line whose width is half of the standard; values greater than 1 produce lines thicker than the standard. Line caps are used to adjust the shape of the line ends and give a better appearance. One kind of line cap is the butt cap, obtained by adjusting the end positions of the component parallel lines so that the thick line is displayed with square ends that are perpendicular to the line path. If the specified line has slope m, the square end of the thick line has slope −1/m. Another line cap is the round cap, obtained by adding a filled semicircle to each butt cap; the circular arcs are centered on the line endpoints and have a diameter equal to the line thickness. A third type of line cap is the projecting square cap, where we simply extend the line and add butt caps that are positioned one-half of the line width beyond the specified endpoints. [BUTT CAP] [ROUND CAP] [PROJECTING SQUARE CAP]
  • 63. 62 We can generate thick polylines that are smoothly joined at the cost of additional processing at the segment endpoints. A miter join is accomplished by extending the outer boundaries of each of the two lines until they meet. A round join is produced by capping the connection between the two segments with a circular boundary whose diameter is equal to the line width. A bevel join is generated by displaying the line segments with butt caps and filling in the triangular gap where the segments meet. [MITER JOIN] [ROUND JOIN] [BEVEL JOIN]
iii. PEN AND BRUSH OPTIONS: Lines can be displayed with pen and brush selections; options in this category include shape, size and pattern. These shapes can be stored in a pixel mask, which identifies the array of pixel positions that are to be set along the line path. Lines generated with pen or brush shapes can be displayed in various widths by changing the size of the mask.
iv. LINE COLOR: When a system provides color or intensity options, a parameter giving the current color index is included in the list of system attribute values.
  • 64. 63 A polyline routine displays a line in the current color by setting this color value in the frame buffer at pixel locations along the line path, using the setPixel procedure. The number of color choices depends on the number of bits available per pixel in the frame buffer. The function for line color is: set PolylineColorIndex(lc), where lc is the line-color parameter. A line drawn in the background color is invisible, so a user can erase a previously displayed line by respecifying it in the background color. E.g.:
set linetype(2);
set linewidthscalefactor(2);
set PolylineColorIndex(5);
polyline(n1, wcpoints1);
set PolylineColorIndex(6);
polyline(n2, wcpoints2);
CURVE ATTRIBUTES: The parameters for curve attributes are the same as the line attributes: we can display curves with varying colors, widths, dot-dash patterns and available pen and brush options, and the methods are also the same as for line attributes. We can generate the dashes in the various octants using circle symmetry, but we must shift the pixel positions to maintain the correct sequence of dashes and spaces as we move from one octant to the next. Raster curves of various widths can be displayed using the method of horizontal or vertical pixel spans: where the magnitude of the curve slope is less than 1, we
  • 65. 64 plot vertical spans; where the slope magnitude is greater than 1, we plot horizontal spans. Using circle symmetry, we generate the circle path with vertical spans in the octant from x = 0 to x = y, and then reflect pixel positions about the line y = x to obtain the remainder of the curve. Another method for displaying thick curves is to fill the area between two parallel curve paths whose separation distance is equal to the desired width.
AREA-FILL ATTRIBUTES:
i. FILL STYLES: Areas are displayed with three basic fill styles: hollow with a color border, filled with a solid color, or filled with a specified pattern or design. The function for the basic fill style is: set InteriorStyle(fs); where fs is the fill-style parameter and its values include hollow, solid and pattern. Another value for fill style is hatch, which is used to fill an area with selected hatching patterns, i.e. parallel lines or crossed lines. The fill-style parameter fs is normally applied to polygon areas, but it can also be implemented to fill regions with curved boundaries. Hollow areas are displayed using only the boundary outline, with the interior color the same as the background color. A solid fill is displayed in a single color up to and including the borders of the region. The color for a solid interior or for a hollow area outline is chosen with: set InteriorColorIndex(fc);
  • 66. 65 where the fill-color parameter fc is set to the desired color code.
[HOLLOW] [SOLID] [PATTERN] [DIAGONAL HATCH FILL] [DIAGONAL CROSS-HATCH FILL]
ii. PATTERN FILL: We select fill patterns with set InteriorStyleIndex(Pi), where the pattern-index parameter Pi specifies a table position. Example: the following set of statements would fill the area defined in the fill-area command with the second pattern type stored in the pattern table:
set InteriorStyle(pattern);
set InteriorStyleIndex(2);
fill Area(n, points);
Separate tables are set up for hatch patterns. If we had selected hatch fill for the interior style in this program segment, then the value assigned to parameter Pi is an index into the stored patterns in the hatch table. For fill-style pattern, table entries can be created on individual output devices with: set PatternRepresentation(ws, Pi, nx, ny, cp); where parameter Pi sets the pattern index number for workstation ws, and cp is a two-dimensional array of color codes with nx columns and ny rows.
  • 67. 66 Example: The 1st entry in the pattern table for workstation 1 is set with:
cp[1, 1] = 4; cp[2, 2] = 4;
cp[1, 2] = 0; cp[2, 1] = 0;
set PatternRepresentation(1, 1, 2, 2, cp);
Index (Pi) = 1, Pattern (cp): 4 0 / 0 4
Here, the entries in the color array cp specify a pattern that produces alternating red and black diagonal pixels. A second table entry might hold an alternating pattern such as: Index (Pi) = 2, Pattern (cp): 2 1 2 / 1 2 1 / 2 1 2. When a color array cp is to be applied to fill a region, we specify the size of the area that is to be covered by each element of the array. We do this by setting the rectangular co-ordinate extents of the pattern: set Patternsize(dx, dy); where dx and dy are the co-ordinate width and height of the array mapping. A reference position for starting a pattern fill is assigned with the statement: set PatternReferencepoint(position); where position is a pointer to co-ordinates (xp, yp) that fix the lower-left corner of the rectangular pattern. From this starting position, the pattern is then replicated in the x and y directions until the defined area is covered by non-overlapping copies of the pattern array. The process of filling an area
  • 68. 67 with a rectangular pattern is called tiling, and rectangular fill patterns are sometimes referred to as tiling patterns. If the row positions in the pattern array are referenced in reverse (i.e., from bottom to top, starting at 1), a pattern value is assigned to pixel position (x, y) in screen or window co-ordinates as: set Pixel(x, y, cp(y mod ny + 1, x mod nx + 1)); Where ‘ny’ and ‘nx’ are the number of rows and columns, respectively, in the pattern array. iii. SOFT FILL: Modified boundary-fill and flood-fill procedures that repaint areas so that the fill color is combined with the background colors are referred to as soft-fill (or tint-fill) algorithms. A linear soft-fill algorithm repaints an area that was originally painted by merging a foreground color ‘F’ with a single background color ‘B’. Assuming we know the values of ‘F’ and ‘B’, we can determine how these colors were originally combined by checking the current color contents of the frame buffer. The current RGB color ‘P’ of each pixel within the area to be refilled is some linear combination of ‘F’ and ‘B’: P = tF + (1 – t)B --- (1) Where the transparency factor ꞌtꞌ has a value between 0 and 1 for each pixel. For values of ꞌtꞌ less than 0.5, the background color contributes more to the interior color of the region than does the fill color.
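The pattern-replication (tiling) rule above can be sketched in a few lines. This is a hypothetical helper, not a PHIGS routine: the frame buffer is modeled as a plain 2D list of color codes, and the indexing is 0-based (the text's setPixel formula is 1-based with rows counted bottom-up).

```python
# Sketch: replicate an nx-by-ny color pattern array cp over a rectangular
# region by modular indexing, as in the setPixel mapping described above.

def pattern_fill(frame, x0, y0, width, height, cp):
    """Tile the rectangle with lower-left corner (x0, y0) with pattern cp."""
    ny, nx = len(cp), len(cp[0])          # pattern rows and columns
    for y in range(y0, y0 + height):
        for x in range(x0, x0 + width):
            # pattern element chosen by (y mod ny, x mod nx), 0-based here
            frame[y][x] = cp[(y - y0) % ny][(x - x0) % nx]

frame = [[0] * 8 for _ in range(8)]       # 8x8 frame buffer, all color 0
cp = [[4, 0],
      [0, 4]]                             # color codes 4 and 0 on the diagonal
pattern_fill(frame, 0, 0, 8, 8, cp)       # non-overlapping copies cover region
```

Because the indices are taken modulo the pattern size, the copies of the array never overlap and any region size is covered, exactly as the text describes.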
  • 69. 68 The vector equation (1) holds for each of the RGB components: P = (PR, PG, PB), F = (FR, FG, FB), B = (BR, BG, BB) --- (2). So we can calculate the value of parameter ꞌtꞌ using one of the RGB color components as: t = (PK – BK)/(FK – BK) --- (3), where K = R, G or B and FK ≠ BK. In principle, parameter ꞌtꞌ has the same value for each RGB component, but round-off to integer color codes can result in different values of 't' for different components. We can minimize this round-off error by selecting the component with the largest difference between ‘F’ and ‘B’. This value of ꞌtꞌ is then used to mix the new fill color ‘NF’ with the background color, using either a modified flood-fill or boundary-fill procedure. Soft-fill procedures can also be applied to an area whose foreground color is to be merged with multiple background color areas, e.g., a checkerboard pattern. When two background colors B1 and B2 are mixed with foreground color ‘F’, the resulting pixel color ‘P’ is: P = t0F + t1B1 + (1 – t0 – t1)B2 --- (4) Where the sum of the coefficients ‘t0’, ‘t1’ and (1 – t0 – t1) on the color terms must equal 1. These parameters are then used to mix the new fill color with the two background colors to obtain the new pixel color.
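The single-background soft-fill computation, equations (1) and (3) above, can be sketched as follows. This is a minimal illustration, not part of the text: colors are assumed to be (R, G, B) tuples of integers in 0–255, and the component choice follows the round-off-minimizing rule just described.

```python
# Sketch of linear soft fill: recover the mixing factor t from the current
# pixel color P, then re-mix a new fill color NF in the same proportion.

def soft_fill_color(P, F, B, NF):
    """Return the repainted pixel color t*NF + (1 - t)*B."""
    # Pick the component with the largest |F_K - B_K| to minimize round-off
    k = max(range(3), key=lambda i: abs(F[i] - B[i]))
    t = (P[k] - B[k]) / (F[k] - B[k])     # t = (P_K - B_K) / (F_K - B_K)
    # New pixel color: P' = t*NF + (1 - t)*B, rounded back to integer codes
    return tuple(round(t * NF[i] + (1 - t) * B[i]) for i in range(3))

# A pixel originally painted ~50% red over a black background, refilled blue:
P_new = soft_fill_color(P=(128, 0, 0), F=(255, 0, 0), B=(0, 0, 0),
                        NF=(0, 0, 255))
```

Here t ≈ 0.5, so the new pixel keeps the same foreground/background proportion with the blue fill color.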
  • 70. 69 FILLED AREA ATTRIBUTES WITH IRREGULAR BOUNDARY: i. CHARACTER ATTRIBUTES: Here we have to control character attributes such as font, size, color and orientation. Attributes can be set both for entire character strings (text) and for individual characters defined as marker symbols. a. Text Attributes: These include the font (typeface), which is a set of characters with a particular design style, such as Courier, Times Roman, etc. Text can also be displayed with assorted underlining styles (solid, dotted, double), and may be boldface, italic, or in outline or shadow style. A particular font and associated style is selected by setting an integer code for the text font parameter ‘tf’ in the function: set Textfont(tf); Color setting for displayed text is done with the function: set TextColorIndex(tc); Here tc = text color parameter, which specifies an allowable color code. Text size can be adjusted without changing the width-to-height ratio of characters with: set CharacterHeight(ch); Here ‘ch’ is assigned a real value greater than 0 to set the co-ordinate height of capital letters.
  • 71. 70 The width of text can be set with the function: set CharacterExpansionFactor(cw); Here, cw = character-width parameter, set to a positive real value that scales the body width of characters. Text height is unaffected by this attribute setting. Spacing between characters is controlled separately with: set CharacterSpacing(cs); Here cs = character spacing parameter, which can be assigned any real value. The value assigned to ‘cs’ determines the spacing between character bodies along print lines. Negative values for ‘cs’ overlap character bodies; positive values insert space to spread out the displayed characters. The orientation for a displayed character string is set according to the direction of the character up vector: set CharacterUpVector(upvect); Parameter ‘upvect’ in this function is assigned two values that specify the ‘x’ and ‘y’ vector components. Text is then displayed so that the orientation of characters from baseline to cap line is in the direction of the up vector. A procedure for orienting text rotates characters so that the sides of character bodies, from baseline to cap line, are aligned with the up vector. The rotated character shapes are then scan converted into the frame buffer. An attribute parameter for the direction of the text path is set with the statement: set TextPath(tp);
  • 72. 71 Here tp = text path parameter, which can be assigned the value right, left, up or down. For text alignment, the attribute specifies how text is to be positioned with respect to the start co-ordinates. Alignment attributes are set with: set TextAlignment(h, v); Here ‘h’ and ‘v’ control horizontal and vertical alignment, respectively. Horizontal alignment is set by assigning ‘h’ a value of left, center or right. Vertical alignment is set by assigning ‘v’ a value of top, cap, half, base or bottom. A precision specification for text display is given with: set TextPrecision(tpr); Here tpr = text precision parameter, assigned one of the values string, char or stroke. The highest-quality text is displayed when the precision parameter is set to the value stroke. For this precision setting, greater detail is used in defining the character shapes, and the processing of attribute selection and other string-manipulation procedures is carried out to the highest possible accuracy. The lowest-quality precision setting, string, is used for faster display of character strings.
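The set...() functions above all follow one model: each call updates a "current attribute list" that later text-output calls consult. A hypothetical sketch of that state-machine model (the names are Python stand-ins, not a real graphics API):

```python
# Sketch of the current-attribute-list model implied by the text attribute
# functions: setting an attribute just records state for later output calls.

text_attrs = {
    "font": 1, "color": 1, "height": 0.01, "expansion": 1.0,
    "spacing": 0.0, "up_vector": (0, 1), "path": "right",
    "alignment": ("left", "base"), "precision": "string",
}

def set_text_font(tf):
    text_attrs["font"] = tf               # integer code selecting the typeface

def set_character_height(ch):
    if ch <= 0:
        raise ValueError("character height must be greater than 0")
    text_attrs["height"] = ch

def set_text_alignment(h, v):
    text_attrs["alignment"] = (h, v)      # e.g. ("center", "half")

# Subsequent text output would be drawn with these settings:
set_text_font(2)
set_character_height(0.05)
set_text_alignment("center", "half")
```

This is why attribute order matters in such packages: a primitive is displayed with whatever values are current at the moment it is output.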
  • 73. 72 b. Marker Attributes: A marker symbol is a single character that can be displayed in different colors and in different sizes. Marker attributes are implemented by procedures that load the chosen character into the raster at the defined positions with the specified color and size. We select a particular character to be the marker symbol with: set MarkerType(mt); Here, mt = marker type parameter, set to an integer code. Typical codes for marker type are the integers 1 through 5, specifying, respectively, a dot (.), a vertical cross (+), an asterisk (*), a circle (O) and a diagonal cross (x). Displayed marker types are centered on the marker co-ordinates. We set the marker size with: set MarkerSizeScaleFactor(ms); Here, ms = marker size parameter, assigned a positive number. It is applied to the normal size for the particular marker symbol chosen; values greater than 1 produce enlargement, while values less than 1 reduce the marker size. Marker color is specified with: set PolymarkerColorIndex(mc); Here, mc = selected color code, stored in the current attribute list and used to display subsequently specified marker primitives. ii. BUNDLED ATTRIBUTES: When each function references a single attribute, it specifies exactly how a primitive is to be displayed with that attribute setting. These
  • 74. 73 specifications are called Individual (or Unbundled) Attributes, and they are used with an output device that is capable of displaying primitives in the way specified. Alternatively, attribute values can be organized into tables maintained on each workstation; a particular set of attribute values for a primitive on each output device is then chosen by specifying the appropriate table index. Attributes specified in this manner are called Bundled Attributes. The table for each primitive that defines the groups of attribute values to be used when displaying that primitive on a particular output device is called a bundle table. Attributes that may be bundled into the workstation table entries are those that don’t involve co-ordinate specifications, such as color and line type. The choice between a bundled and an unbundled specification is made by setting a switch called the aspect source flag for each of these attributes: set IndividualASF(attribute ptr, flag ptr); Where the ‘attribute ptr’ parameter points to a list of attributes and parameter ‘flag ptr’ points to the corresponding list of aspect source flags. Each aspect source flag can be assigned a value of individual or bundled. a. Bundled Line Attributes: Entries in the bundle table for line attributes on a specified workstation are set with the function: set PolylineRepresentation(ws, li, lt, lw, lc); Here, ws = workstation identifier and li = line index parameter, which defines the bundle table position. Parameters lt, lw and lc are then bundled and assigned values to set the line type, line width and line color specifications, respectively, for the designated table index.
  • 75. 74 E.g.: set PolylineRepresentation(1, 3, 2, 0.5, 1); set PolylineRepresentation(4, 3, 1, 1, 7); Here, a polyline that is assigned a table index value of 3 would be displayed using dashed lines at half thickness in a blue color on workstation 1, while on workstation 4 this same index generates solid, standard-sized white lines. Once the bundle tables have been set up, a group of bundled line attributes is chosen for each workstation by specifying the table index value: set PolylineIndex(li); b. Bundled Area-Fill Attributes: Table entries for bundled area-fill attributes are set with: set InteriorRepresentation(ws, fi, fs, Pi, fc); This defines the attribute list corresponding to fill index ‘fi’ on workstation ws. Parameters ‘fs’, ‘Pi’ and ‘fc’ are assigned values for the fill style, pattern index and fill color, respectively, on the designated workstation. Similar bundle tables can also be set up for edge attributes of polygon fill areas. A particular attribute bundle is then selected from the table with the function: set InteriorIndex(fi); Subsequently defined fill areas are then displayed on each active workstation according to the table entry specified by the fill index parameter ‘fi’.
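The bundle-table idea can be sketched as a lookup keyed by workstation and index. This is a hypothetical illustration of the mechanism, mirroring setPolylineRepresentation(ws, li, lt, lw, lc), not an actual PHIGS/GKS implementation:

```python
# Sketch of a polyline bundle table: each (workstation, line index) entry
# stores a group of attribute values (line type, line width, line color).

polyline_bundle = {}

def set_polyline_representation(ws, li, lt, lw, lc):
    polyline_bundle[(ws, li)] = {"type": lt, "width": lw, "color": lc}

# The same line index 3 produces a different appearance on each workstation:
set_polyline_representation(1, 3, 2, 0.5, 1)   # dashed, half width, blue
set_polyline_representation(4, 3, 1, 1, 7)     # solid, standard width, white

# setPolylineIndex(3) then selects this bundle; each workstation looks up
# its own entry when displaying the polyline:
current_index = 3
attrs_ws1 = polyline_bundle[(1, current_index)]
attrs_ws4 = polyline_bundle[(4, current_index)]
```

The key point the example in the text makes is visible here: one index, two tables, two different displayed results.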
  • 76. 75 c. Bundled Text Attributes: The function: set TextRepresentation(ws, ti, tf, tp, te, ts, tc); bundles values for text font, precision, expansion factor, size and color in a table position for workstation ‘ws’ that is specified by the value assigned to the text index parameter ‘ti’. Other text attributes, including character up vector, text path, character height and text alignment, are set individually. A particular text index value is then chosen with the function: set TextIndex(ti); Each text function that is then invoked is displayed on each workstation with the set of attributes referenced by this table position. d. Bundled Marker Attributes: Table entries for bundled marker attributes are set up with: set PolymarkerRepresentation(ws, mi, mt, ms, mc); This defines the marker type, marker scale factor and marker color for index ‘mi’ on workstation ws. Bundle table selections are then made with the function: set PolymarkerIndex(mi);
  • 77. 76 CHAPTER -5 2D TRANSFORMATION: A fundamental objective of 2D transformation is to simulate the movement and manipulation of objects in the plane. Two points of view are used for describing object movement: i. The object itself is moved relative to a stationary co-ordinate system or background. The mathematical statement of this viewpoint is described by geometric transformations applied to each point of the object. ii. The second view holds that the object is held stationary while the co-ordinate system is moved relative to the object. This effect is attained through the application of co-ordinate transformations. The transformations are used directly by application programs and within many graphics sub-routines. BASIC TRANSFORMATIONS IN 2D: The basic 2D transformations for repositioning and resizing objects are translation, rotation and scaling. i. TRANSLATION: A translation is applied to an object by repositioning it along a straight-line path from one co-ordinate location to another. We translate a 2D point by adding translation distances ‘tx’ and ‘ty’ to the original co-ordinate position (x, y) to move the point to a new position (xꞌ, yꞌ). So,
  • 78. 77 xꞌ = x + tx, yꞌ = y + ty --- (1) The translation distance pair (tx, ty) is called a translation vector or shift vector. Equation (1) can be expressed as a single matrix equation by using column vectors to represent the co-ordinate positions and the translation vector:

P = | x1 |    Pꞌ = | xꞌ1 |    T = | tx |
    | x2 |         | xꞌ2 |        | ty |

so that Pꞌ = P + T --- (2)

ii. ROTATION: A 2D rotation is applied to an object by repositioning it along a circular path in the xy-plane. To generate a rotation, we specify a rotation angle θ and the position (xr, yr) of the rotation or pivot point, about which the object is to be rotated. Positive values of the rotation angle define counter-clockwise rotations about the pivot point, and negative values rotate objects in the clockwise direction. This transformation can also be described as a rotation about a rotation axis that is perpendicular to the xy-plane and passes through the pivot point. For the transformation equations for rotation of a point position ‘P’ when the pivot point is at the co-ordinate origin, let: r = constant distance of the point from the origin, θ = rotation angle, and Φ = original angular position of the point from the horizontal.
  • 79. 78 So, the transformed co-ordinates in terms of angles θ and Φ are: xꞌ = r cos(θ + Φ) = r cosΦ cosθ – r sinΦ sinθ, yꞌ = r sin(θ + Φ) = r cosΦ sinθ + r sinΦ cosθ --- (1) The original co-ordinates of the point in polar form are: x = r cosΦ, y = r sinΦ --- (2) Substituting equations (2) into equations (1), we get the transformation equations for rotating a point at position (x, y) through an angle θ about the origin: xꞌ = x cosθ – y sinθ, yꞌ = x sinθ + y cosθ --- (3) We can write the rotation equations in matrix form as: Pꞌ = R.P --- (4), where the rotation matrix (for an anticlockwise rotation of column vectors) is:

R = | cosθ  –sinθ |
    | sinθ   cosθ |   --- (5)

Replacing θ by –θ gives the clockwise form:

R = |  cosθ  sinθ |
    | –sinθ  cosθ |   --- (6)

When co-ordinate positions are represented as row vectors instead of column vectors, the matrix product in rotation equation (4) is transposed, so that the transformed row co-ordinate vector [xꞌ yꞌ] is calculated as:
  • 80. 79 PꞌT = (R.P)T = PT .RT --- (7) Where PT = [x y] and RT = the transpose of ‘R’, obtained by interchanging its rows and columns. The transformation equations for rotation of a point about any specified pivot position (xr, yr) are: xꞌ = xr + (x – xr)cosθ – (y – yr)sinθ, yꞌ = yr + (x – xr)sinθ + (y – yr)cosθ --- (8)

Example 1: Consider an object ABC with co-ordinates A(1, 1), B(10, 1), C(5, 5). Rotate the object by 90 degrees in the anticlockwise direction and give the co-ordinates of the transformed object.

Solution: writing the vertices as row vectors,

X = | 1  1 |  (A)      R = | cos90  sin90 | = |  0  1 |
    | 10 1 |  (B)          | -sin90 cos90 |   | -1  0 |
    | 5  5 |  (C)

Xꞌ = [X].[R] = | -1  1  |   → Aꞌ(-1, 1)
               | -1  10 |   → Bꞌ(-1, 10)
               | -5  5  |   → Cꞌ(-5, 5)

Example 2: Perform a 45-degree rotation of the object A(2, 1), B(5, 1), C(5, 6) in the clockwise direction and give the co-ordinates of the transformed object.
  • 81. 80 Solution: for a clockwise rotation of row vectors,

X = | 2 1 |      R = | cos45  -sin45 | = | 1/√2  -1/√2 |
    | 5 1 |          | sin45   cos45 |   | 1/√2   1/√2 |
    | 5 6 |

Xꞌ = [X].[R] = | 3/√2   -1/√2 |   → Aꞌ(3/√2, -1/√2)
               | 6/√2   -4/√2 |   → Bꞌ(6/√2, -4/√2)
               | 11/√2   1/√2 |   → Cꞌ(11/√2, 1/√2)

iii. SCALING: A scaling transformation alters the size of an object. This operation can be carried out for polygons by multiplying the co-ordinate values (x, y) of each vertex by scaling factors ‘Sx’ and ‘Sy’ to produce the transformed co-ordinates (xꞌ, yꞌ): xꞌ = x . Sx and yꞌ = y . Sy --- (1) The scaling factor ‘Sx’ scales objects in the x-direction, while ‘Sy’ scales in the y-direction. The transformation equations in matrix form are:

| xꞌ |   | Sx  0  | | x |
| yꞌ | = | 0   Sy | | y |   --- (2)

or, Pꞌ = S.P --- (3) Scaling factors ‘Sx’ and ‘Sy’ less than 1 reduce the size of objects; values greater than 1 produce an enlargement. Specifying a value of 1 for both ‘Sx’ and ‘Sy’ leaves the size of objects unchanged. When ‘Sx’ and ‘Sy’
  • 82. 81 are assigned the same value, a uniform scaling is produced that maintains relative object proportions. We can control the location of a scaled object by choosing a position, called the fixed point, that is to remain unchanged after the scaling transformation. Co-ordinates for the fixed point (xf, yf) can be chosen as one of the vertices, the object centroid, or any other position. For a vertex with co-ordinates (x, y), the scaled co-ordinates (xꞌ, yꞌ) are calculated as: xꞌ = xf + (x – xf)Sx and yꞌ = yf + (y – yf)Sy --- (4) We can rewrite these scaling transformations to separate the multiplicative and additive terms: xꞌ = x . Sx + xf(1 – Sx) and yꞌ = y . Sy + yf(1 – Sy) --- (5) Where the additive terms xf(1 – Sx) and yf(1 – Sy) are constant for all points in the object.

Example 1: Scale the object with co-ordinates A(2, 1), B(2, 3), C(4, 2) and D(4, 4) with scale factors Sx = Sy = 2.

Solution:

S = | Sx  0 | = | 2 0 |      X = | 2 1 |
    | 0  Sy |   | 0 2 |          | 2 3 |
                                 | 4 2 |
                                 | 4 4 |

Xꞌ = [X].[S] = | 4 2 |   → Aꞌ(4, 2)
               | 4 6 |   → Bꞌ(4, 6)
               | 8 4 |   → Cꞌ(8, 4)
               | 8 8 |   → Dꞌ(8, 8)
  • 83. 82 Example 2: What will be the effect of scaling factors Sx = 1/2 and Sy = 1/3 on a given triangle ABC whose co-ordinates are A(4, 1), B(5, 2), C(4, 3)?

Solution:

X = | 4 1 |      S = | 1/2   0  |
    | 5 2 |          |  0   1/3 |
    | 4 3 |

Xꞌ = [X].[S] = | 2    1/3 |   → Aꞌ(2, 1/3)
               | 5/2  2/3 |   → Bꞌ(5/2, 2/3)
               | 2    1   |   → Cꞌ(2, 1)

Since both scale factors are less than 1, the triangle is reduced in size.

OTHER TRANSFORMATIONS: i. REFLECTION: A reflection is a transformation that produces a mirror image of an object. The mirror image for a 2D reflection is generated relative to an axis of reflection by rotating the object 180 degrees about the reflection axis. Reflection about the x-axis (the line y = 0) is accomplished with the transformation matrix:

| 1   0 |
| 0  -1 |

This transformation keeps ‘x’ values the same but flips the y-values of co-ordinate positions.
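The rotation and scaling examples above can be checked with a short sketch. This is an illustrative helper (not from the text) that applies the general pivot-point rotation, equation (8) of the rotation section, and fixed-point scaling, equation (4) of the scaling section, to plain (x, y) tuples:

```python
import math

# Sketch: 2D rotation about a pivot and scaling about a fixed point.

def rotate(points, theta_deg, pivot=(0.0, 0.0)):
    """Anticlockwise rotation by theta_deg about pivot (xr, yr)."""
    xr, yr = pivot
    c = math.cos(math.radians(theta_deg))
    s = math.sin(math.radians(theta_deg))
    return [(xr + (x - xr) * c - (y - yr) * s,
             yr + (x - xr) * s + (y - yr) * c) for x, y in points]

def scale(points, sx, sy, fixed=(0.0, 0.0)):
    """Scaling by (sx, sy) about fixed point (xf, yf)."""
    xf, yf = fixed
    return [(xf + (x - xf) * sx, yf + (y - yf) * sy) for x, y in points]

# Rotation Example 1: 90 degrees anticlockwise about the origin
rot = rotate([(1, 1), (10, 1), (5, 5)], 90)
# -> approximately A'(-1, 1), B'(-1, 10), C'(-5, 5)

# Scaling Example 2: Sx = 1/2, Sy = 1/3 about the origin
scl = scale([(4, 1), (5, 2), (4, 3)], 1/2, 1/3)
# -> A'(2, 1/3), B'(5/2, 2/3), C'(2, 1)
```

Passing a nonzero pivot or fixed point gives the general forms of equations (8) and (4); with the default origin, the results reduce to the worked examples.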