UNIT – I
                                  INTRODUCTION

An operating system is a program that manages the computer hardware. It also provides a
basis for application programs and acts as an intermediary between a user of a computer
and the computer hardware. An operating system is an important part of almost every
computer system. A computer system can be divided roughly into four components:
    • The hardware – the central processing unit (CPU), the memory, and the
        input/output (I/O) devices – provides the basic computing resources.
    • The operating system - controls and coordinates the use of the hardware among
        the various application programs for the various users.
    • Application programs – such as word processors, spreadsheets, compilers, and
        web browsers define the way in which these resources are used to solve the
        computing problems of the users.
    • Users.

                              MAINFRAME SYSTEMS

Mainframe computer systems were the first computers used to tackle many commercial
and scientific applications.

1.Batch Systems
Users did not interact directly with the computer system. A user prepared a job, which
consisted of the program, the data, and some control information about the nature of the
job, and submitted it to the computer operator. After some time, the output appeared; it
consisted of the result of the program. The operating system's major task was to transfer
control automatically from one job to the next.
        To speed up processing, operators batched together jobs with similar needs and
ran them through the computer as a group. Programmers would leave their programs
with the operator, who would sort them into batches with similar requirements and, as
the computer became available, run each batch. The output from each job would be sent
back to the appropriate programmer.

2.Multiprogrammed Systems

Multiprogramming increases CPU utilization by organizing jobs so that the CPU
always has one to execute.
The concept of multiprogramming is that the operating system keeps several jobs in
memory simultaneously. This set of jobs is kept in the job pool. The OS picks and begins
to execute one of the jobs in memory.
   • In a non-multiprogrammed system, the CPU would sit idle whenever the running
       job had to wait.
   • In a multiprogrammed system, the OS simply switches to, and executes, another
       job. When that job needs to wait, the CPU is switched to another job, and so on.
       Eventually, the first job finishes waiting and gets the CPU back. As long as at
       least one job needs to execute, the CPU is never idle.



A multiprogrammed operating system must make decisions for the users. All the jobs
that enter the system are kept in the job pool. This pool consists of all processes residing
on disk awaiting allocation of main memory. If several jobs are ready to be brought into
memory, and if there is not enough room for all of them, then the system must choose
among them; making this decision is job scheduling. If several jobs are ready to run at
the same time, the system must choose among them; this decision is CPU scheduling.

3.Time – Sharing Systems

Time sharing or multitasking is a logical extension of multiprogramming. The CPU
executes multiple jobs by switching among them, but the switches occur so frequently
that the users can interact with each program while it is running.
An interactive (or hands-on) computer system provides direct communication between
the user and the system. The user gives instructions to the operating system or to a
program directly, using a keyboard or a mouse, and waits for immediate results.
         A time-shared operating system allows many users to share the computer
simultaneously. Each action or command in a time-shared system tends to be short, so
only a little CPU time is needed for each user. As the system switches rapidly from one
user to the next, each user is given the impression that the entire computer system is
dedicated to her use, even though it is being shared among many users.
         A time-shared operating system uses CPU scheduling and multiprogramming to
provide each user with a small portion of a time-shared computer. Each user has at least
one separate program in memory. A program loaded into memory and executing is
commonly referred to as a process. When a process executes, it typically executes for
only a short time before it either finishes or needs to perform I/O.

                                 DESKTOP SYSTEMS

Personal computers PCs appeared in the 1970s. During their first decade, the CPUs in
PCs lacked the features needed to protect an operating system from user programs. PC
operating systems therefore were neither multiuser nor multitasking. The MS-DOS
operating system from Microsoft has evolved into multiple flavors of Microsoft
Windows, and IBM has upgraded MS-DOS to the multitasking OS/2 system. UNIX is
used for its scalability, performance, and features, while retaining a rich GUI. Linux, a
UNIX-like operating system available for PCs, has also become popular recently.
        Operating systems for PCs have benefited from the development of OS for
mainframes: microcomputers were able to adopt some of the technology developed for
larger operating systems. On the other hand, the hardware costs for microcomputers are
sufficiently low that individuals have sole use of the computer, and CPU utilization is no
longer a prime concern.
        Earlier, file protection was not necessary on a personal machine. But these
computers are now often tied into other computers over local-area networks or other
Internet connections. When other computers and other users can access the files on a PC,
file protection becomes an essential feature of the operating system. The lack of such
protection has made it easy for malicious programs to destroy data on systems such as
MS-DOS and the Macintosh operating system. These programs may be self-replicating,
and may spread rapidly via worm or virus mechanisms.



MULTIPROCESSOR SYSTEMS

Multiprocessor systems have more than one processor or CPU. Multiprocessor systems
are also known as parallel systems. Such systems have more than one processor in close
communication, sharing the computer bus, the clock, and sometimes memory and
peripheral devices.

Advantages of the multiprocessor systems are:

   1. Increased throughput: By increasing the number of processors, we expect to get
        more work done in less time. When multiple processors cooperate on a task,
        however, a certain amount of overhead is incurred in keeping all the parts
        working correctly.
   2. Economy of scale: Multiprocessor systems can save money compared with
        multiple single-processor systems, because they can share peripherals, mass storage, and
       power supplies. If several programs operate on the same set of data, it is cheaper
       to store those data on one disk and to have all the processors share them, than to
       have many computers with local disks and many copies of the data.
   3. Increased reliability: If functions can be distributed properly among several
       processors, then the failure of one processor will not halt the system, only slow it
       down.
The most common multiple-processor systems now use symmetric multiprocessing
(SMP), in which each processor runs an identical copy of the OS, and these copies
communicate with one another as needed. Some systems use asymmetric
multiprocessing, in which each processor is assigned a specific task. A master processor
controls the system; the other processors either look to the master for instruction or have
predefined tasks. This scheme defines a master-slave relationship: the master processor
schedules and allocates work to the slave processors.

Figure: Symmetric multiprocessing architecture
[Several CPUs connected to a shared memory.]
SMP means that all processors are peers; no master-slave relationship exists between
processors. Each processor concurrently runs a copy of the operating system. The benefit
of this model is that many processes can run simultaneously - N processes can run if
there are N CPUs - without causing a significant performance deterioration. However,
since the CPUs are separate, one may be sitting idle while another is overloaded,
resulting in inefficiencies. A multiprocessor system also allows processors and
resources, such as memory, to be shared among the various processors.




DISTRIBUTED SYSTEMS

A network, in the simplest terms, is a communication path between two or more
systems. Distributed systems depend on networking for their functionality. Distributed
systems are able to share computational tasks, and provide a rich set of features to users.
Networks are categorized based on the distance between their nodes.
    • Local-area network (LAN): exists within a room, a floor, or a building.
    • Wide-area network (WAN): exists between buildings, cities, or countries. A
       global company may have a WAN to connect its offices worldwide.
    • Metropolitan-area network (MAN): could link buildings within a city.

1.Client – Server Systems:

[Figure: several client machines connected to a central server.]

Terminals connected to centralized systems are now being supplanted by PCs.
Centralized systems today act as server systems to satisfy requests generated by client
systems.
Server systems can be broadly categorized as compute servers and file servers.
    • Compute-server systems: provide an interface to which clients can send requests
       to perform an action, in response to which they execute the action and send back
       results to the clients.
    • File-server systems: provide a file-system interface where clients can create,
       update, read and delete files.

2.Peer-to-Peer Systems

The growth of computer networks, especially the Internet and WWW, has had a
profound influence on the recent development of operating systems. When PCs were
introduced in the 1970s, they were designed as stand-alone systems, but with the
widespread public use of the Internet in the 1990s for electronic mail and ftp, PCs
became connected to computer networks.
        Modern PCs and workstations are capable of running a web browser for accessing
hypertext documents on the web. PCs now include software that enables a computer
to access the Internet via a local-area network or telephone connection. These systems
consist of a collection of processors that do not share memory or a clock; instead, each
processor has its own local memory. The processors communicate with one another
through various communication lines, such as high-speed buses or telephone lines.
        Operating systems have adopted the concepts of networks and distributed
systems to provide network connectivity. A network operating system is an OS that
provides features such as file sharing across the network, and that includes a
communication scheme that allows different processes on different computers to
exchange messages.




CLUSTERED SYSTEMS

Clustered systems gather together multiple CPUs to accomplish computational work.
Clustered systems differ from parallel systems, however, in that they are composed of
two or more individual systems coupled together. Clustering is usually performed to
provide high availability. A layer of cluster software runs on the cluster nodes. Each
node can monitor one or more of the others. If the monitored machine fails, the
monitoring machine can take ownership of its storage and restart the applications that
were running on the failed machine. The failed machine can remain down, but the users
and clients of the applications see only a brief interruption of service.
        In asymmetric clustering, one machine is in hot-standby mode while the other
is running the applications. The hot-standby machine does nothing but monitor the
active server. If that server fails, the hot-standby host becomes the active server.
        In symmetric clustering, two or more nodes are running applications, and they
are monitoring each other. This mode is more efficient, as it uses all of the available
hardware.
        Parallel clusters allow multiple hosts to access the same data on shared
storage. Because most operating systems lack support for simultaneous data access by
multiple hosts, parallel clusters are usually accomplished by special versions of
software. For example, Oracle Parallel Server is a version of Oracle's database that has
been designed to run on a parallel cluster.

                                 REAL-TIME SYSTEMS

        A real-time system is used when rigid time requirements have been placed on
the operation of a processor or on the flow of data; thus, it is often used as a control
device in a dedicated application. Sensors bring data to the computer. The computer
must analyze the data and possibly adjust controls to modify the sensor inputs. Systems
that control
scientific experiments, medical imaging systems, industrial control systems, some
automobile-engine fuel-injection systems, home appliance controllers and weapon
systems are also real time systems.
        A real-time system has well-defined, fixed time constraints. Processing must be
done within the defined constraints, or the system will fail. A real-time system functions
correctly only if it returns the correct result within its time constraints.
        Real-time systems are of two types:
    • A hard real time system guarantees that critical tasks be completed on time. This
        goal requires that all delays in the system be bounded, from the retrieval of stored
        data to the time that it takes the operating system to finish any request made of it.
    • A soft real-time system is one where a critical real-time task gets priority over
        other tasks, and retains that priority until it completes. As in hard real-time
        systems, the OS-kernel delays need to be bounded: a real-time task cannot be
        kept waiting indefinitely for the kernel to run it. Soft real-time systems are
        useful in several areas, including multimedia, virtual reality, and advanced
        scientific projects such as undersea exploration and planetary rovers. These
        systems need advanced operating-system features.




HANDHELD SYSTEMS

Handheld systems include personal digital assistants (PDAs), such as Palm Pilots,
and cellular telephones with connectivity to a network such as the Internet. Handheld
systems are of limited size; for example, a PDA is typically about 5 inches in height
and 3 inches in width, and weighs less than one-half pound. Because of this limited
size, handheld systems have a small amount of memory (between 512 KB and 8 MB),
slow processors, and small display screens.
         Another issue for handheld devices is the speed of the processor used in the
device. Processors for handheld devices often run at a fraction of the speed of a
processor in a PC. Faster processors require more power; to include a faster
processor in a handheld device would require a larger battery that would have to be
replaced more frequently.
         The last issue confronting designers of handheld devices is the small display
screens typically available. Whereas a monitor for a home computer may measure up
to 21 inches, the display for a handheld device is often no more than 3 inches square.
Tasks such as reading e-mail or browsing web pages must be condensed onto the
smaller displays. One approach for displaying the content in web
pages is web clipping, where only a small subset of a web page is delivered and
displayed on the handheld device.
         Some handheld devices may use wireless technology, such as Bluetooth,
allowing remote access to e-mail and web browsing. Cellular telephones with
connectivity to the Internet fall into this category. For devices without wireless
access, data is typically first downloaded to a PC or workstation and then downloaded
to the PDA. Some PDAs allow data to be directly copied from one device to another
using an infrared link.

                      OPERATING SYSTEM STRUCTURES

An operating system may be viewed from several vantage points:
1. By examining the services that it provides.
2. By looking at the interface that it makes available to users and programmers.
3. By disassembling the system into its components and their interconnections.

OPERATING SYSTEM COMPONENTS:

The various system components are:
1. Process management
2. Main-memory management
3. File management
4. I/O-system management
5. Secondary-storage management
6. Networking
7. Protection system
8. Command-Interpreter system




1. Process Management
   A process can be thought of as a program in execution. A program does nothing
   unless its instructions are executed by a CPU. For example, a compiler is a process,
   a word-processing program run by an individual user on a PC is a process, and a
   system task, such as sending output to a printer, is also a process.
            A process needs certain resources including CPU time, memory, files and
   I/O devices to accomplish its task. These resources are either given to the process
   when it is created or allocated to it while it is running. A program is a passive
   entity, such as the contents of the file stored on disk, whereas a process is an
   active entity, with a program counter specifying the next instruction to be
   executed. The execution of the process must be sequential. The CPU executes
   one instruction of the process after another, until the process completes.
            A process is the unit of work in a system. Such a system consists of a
   collection of processes, some of which are operating-system processes and the rest
   of which are user processes.

   The OS is responsible for the following activities in process management:
      • Creating and deleting both user and system processes
      • Suspending and resuming processes
      • Providing mechanisms for process synchronization
      • Providing mechanisms for process communication
      • Providing mechanisms for deadlock handling

2. Main-Memory Management

   Main memory is a repository of quickly accessible data shared by the CPU and I/O
   devices. The central processor reads instructions from main memory during the
   instruction-fetch cycle, and it both reads and writes data in main memory during the
   data-fetch cycle.
            Main memory is the only large storage device that the CPU is able to
   address and access directly. For a program to be executed, it must be mapped to
   absolute addresses and loaded into memory. As the program executes, it accesses
   program instructions and data from memory by generating these absolute
   addresses. To improve both CPU utilization and the speed of the computer's
   response to its users, we must keep several programs in memory.

       The OS is responsible for the following activities in memory management:
          • Keeping track of which parts of memory are currently being used and
             by whom
          • Deciding which processes are to be loaded into memory when memory
             space becomes available.
          • Allocating and deallocating memory space as needed.




3. File Management

   A file is a collection of related information defined by its creator. Files represent
   programs and data. Data files may be numeric, alphabetic, or alphanumeric. Files
   may be free-form, or may be formatted rigidly. A file consists of a sequence of
   bits, bytes, lines, or records whose meanings are defined by their creators. Files are
   organized into directories to ease their use. Files can be opened in different access
   modes: read, write, append.

   The OS is responsible for the following activities in connection with file management:
         • Creating and deleting files
         • Creating and deleting directories
         • Supporting primitives for manipulating files and directories
         • Mapping files onto secondary storage
         • Backing up files on stable, nonvolatile storage media

4. I/O System Management

   One of the purposes of an operating system is to hide the peculiarities of specific
   hardware devices from the user.

      The I/O subsystem consists of:
         • A memory-management component that includes buffering, caching,
             and spooling
         • A general device-driver interface
         • Drivers for specific hardware devices

   Only the device driver knows the peculiarities of the specific device to which it is
   assigned.

5. Secondary-Storage Management

   The main purpose of a computer system is to execute programs. These
   programs, with the data they access, must be in main memory, or primary
   memory, during execution. Because main memory is too small to hold all the
   data and programs, and because the data it holds are lost when power is lost, the
   computer system must provide secondary storage to back up main memory.
   Programs such as compilers, assemblers, routines, editors, and formatters are
   stored on disk until loaded into main memory.

   The OS is responsible for the following activities in connection with disk management:
             • Free-space management
             • Storage allocation
             • Disk scheduling




6. Networking

A distributed system is a collection of processors that do not share memory,
peripheral devices, or a clock. Each processor has its own local memory and clock,
and the processors communicate with one another through various communication
lines, such as high-speed buses or networks. The processors in a distributed system
vary in size and function; they include small microprocessors, workstations,
minicomputers, and large, general-purpose computer systems.
        A distributed system collects physically separate, possibly heterogeneous
systems into a single coherent system, providing users with access to the various
resources that the system maintains. The shared resources allow computation
speedup, increased functionality, increased data availability, and reliability.
        Different protocols are used in networked systems, such as the file-transfer
protocol (FTP), the network file system (NFS), and the hypertext transfer protocol
(HTTP), the last used in communication between a web server and a web browser. A
web browser then just needs to send a request for information to a remote machine's
web server, and the information is returned.

7. Protection System

If a computer system has multiple users and allows the concurrent execution of
multiple processes, then the various processes must be protected from one another's
activities.
       Protection is any mechanism for controlling the access of programs,
processes, or users to the resources defined by a computer system. This mechanism
must provide means for specification of the controls to be imposed and means for
enforcement.
       Protection can improve reliability by detecting latent errors at the interfaces
between component subsystems. Early detection of interface errors can often prevent
contamination of a healthy subsystem by a malfunctioning one. An unprotected
system can be misused by unauthorized users.

8. Command-Interpreter System

Command interpreter is an interface between the user and the operating system.
Many commands are given to the operating system by control statements. When a
new job is started in a batch system, or when a user logs on to a time-shared system, a
program that reads and interprets control statements is executed automatically. This
program is sometimes called the control-card interpreter or the command interpreter,
and is often known as the shell. Its function is to get the next command statement and
execute it.




OPERATING SYSTEM SERVICES

An operating system provides an environment for the execution of programs. It
provides certain services to programs and to the users of those programs.
The various services offered by Operating system are:

   •   Program execution: The system must be able to load a program into memory
       and to run that program. The program must be able to end its execution, either
       normally or abnormally (indicating an error).
   •   I/O operations: A running program may require I/O, which may involve a
       file or an I/O device. For security and efficiency, users usually cannot control
       I/O devices directly, so the operating system must provide a means to perform I/O.
   •   File-system manipulation: Programs need to read and write files. Programs
       also need to create and delete files by name.
   •   Communications: One process needs to communicate with another process.
       Such communication may occur in two ways:
            o Communication takes place between processes that are executing on
               the same computer.
            o Takes place between processes that are executing on different
               computer systems that are tied together by a computer network.
               Communications may be implemented via shared memory, or by the
               technique of message passing, in which packets of information are
               moved between processes by the operating system.

   •   Error detection: The operating system constantly needs to be aware of
       possible errors. Errors may occur in :
          o the CPU and memory hardware-memory error or power failure.
          o I/O devices-such as parity error on tape, a connection failure on
              network, or lack of paper in the printer.
          o User program-an arithmetic overflow, an attempt to access an illegal
              memory location, or too great use of CPU time.

   •   Resource allocation: When multiple users are logged on to the system and
       multiple jobs are running at the same time, resources must be allocated to each
       of them. Many different types of resources are managed by the operating
       system: CPU cycles, main memory, and file storage may have special allocation
       code, whereas I/O devices may have general request and release code.
   •   Accounting: The OS must keep track of which users use how many and which
       kinds of computer resources.
   •   Protection: The owners of information stored in a multiuser computer
       system may want to control use of that information. When several processes
       are executed concurrently, one process should not interfere with another
       process. Security of the system from outsiders is also important; such
       security starts with each user having to authenticate himself to the system,
       usually by means of a password, to be allowed access to the resources.


SYSTEM CALLS

System calls provide the interface between a process and the operating system. These
calls are usually available as assembly-language instructions and they are listed in
manuals for programmers. Several systems allow system calls to be made directly
from a high-level-language program. Languages such as C, C++, and Perl have
replaced assembly language for systems programming; for example, UNIX system
calls may be invoked directly from a C or C++ program.
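
As a minimal illustration (assuming a POSIX system such as Linux; the message text
is arbitrary), the following C program invokes the getpid and write system calls
directly:

    #include <unistd.h>     /* write(), getpid() */
    #include <stdio.h>      /* snprintf() */

    int main(void)
    {
        char buf[64];
        /* getpid() is a system call that returns the caller's process id. */
        int n = snprintf(buf, sizeof buf, "hello from process %d\n",
                         (int)getpid());
        /* write() is a system call; file descriptor 1 is standard output. */
        write(1, buf, (size_t)n);
        return 0;
    }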

System calls can be grouped into five major categories:

1) Process Control:

A running program needs to be able to halt its execution either normally (end) or
abnormally (abort). In either case, the operating system must transfer control to the
command interpreter, which then reads the next command.
        A process or job executing one program may want to load and execute another
program. An interesting question is where to return control when the loaded program
terminates: is the existing program lost, saved, or allowed to continue execution
concurrently with the new program?
        If control returns to the existing program when the new program terminates,
we must save the memory image of the existing program. If both programs continue
concurrently, we have created a new job or process to be multiprogrammed. The system
calls for this purpose are create process or submit job.
        If we create a new job or process, we may need to wait for it to finish its
execution. We may want to wait for a certain amount of time (wait time), or we may
want to wait for a specific event to occur (wait event). The job or process should then
signal when that event has occurred (signal event).
        If we create new jobs or processes, we should be able to control their execution.
This control requires the ability to determine and reset the attributes of a job or process,
including the job's priority and its maximum allowable execution time (get process
attributes and set process attributes).
        Another set of system calls is helpful in debugging a program. Many systems
provide system calls to dump memory; this provision is useful for debugging. A
program trace, provided by fewer systems, lists each instruction as it is executed. A trap
is usually caught by a debugger, which is a system program designed to aid the
programmer in finding and correcting bugs.
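
As a small hedged sketch of the termination calls on a POSIX system: exit ends a
process normally, while abort ends it abnormally and typically produces a memory
dump (core file) that a debugger can examine. The error condition here is contrived
for illustration:

    #include <stdlib.h>   /* exit(), abort() */
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        if (argc > 1) {
            fprintf(stderr, "fatal: unexpected argument %s\n", argv[1]);
            abort();   /* abnormal termination: raises SIGABRT, may dump core */
        }
        puts("work done");
        exit(0);       /* normal termination: status 0 returned to the parent */
    }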

2) File Management

Files must be created and deleted. Either system call requires the name of the
file and some of its attributes. Once the file is created, it must be opened and used. It




may also be read, written, and repositioned (rewound or skipped to the end of the file,
for example). Finally, the file needs to be closed.
         File attributes include the file name, a file type, protection codes, accounting
information and so on. The system calls for this purpose are get file attribute and set
file attribute.
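
A minimal sketch of these file-management calls on a POSIX system (the file name
notes.txt is invented for the example):

    #include <fcntl.h>    /* open() */
    #include <unistd.h>   /* read(), write(), lseek(), close(), unlink() */

    int main(void)
    {
        char buf[128];
        int fd = open("notes.txt", O_CREAT | O_RDWR, 0644); /* create and open */
        if (fd < 0)
            return 1;
        write(fd, "unit one\n", 9);            /* write to the file */
        lseek(fd, 0, SEEK_SET);                /* reposition to the beginning */
        ssize_t n = read(fd, buf, sizeof buf); /* read the data back */
        close(fd);                             /* close the file */
        unlink("notes.txt");                   /* delete the file */
        return n < 0;
    }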

3) Device Management

A program, as it is running, may need additional resources to proceed: more memory,
tape drives, or access to files, for example. If the resources are available, they can be
granted, and control can be returned to the user program; otherwise, the program will
have to wait until sufficient resources are available.
       Once the device has been requested (and allocated to us), we can read, write,
and reposition the device.
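
On UNIX systems, devices are typically requested and used through the same calls as
files; a hedged sketch, assuming the standard Linux device /dev/urandom is present:

    #include <fcntl.h>
    #include <unistd.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned char byte;
        int fd = open("/dev/urandom", O_RDONLY);  /* request the device */
        if (fd < 0)
            return 1;
        read(fd, &byte, 1);                       /* read from the device */
        close(fd);                                /* release the device */
        printf("random byte: %u\n", byte);
        return 0;
    }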

4) Information Maintenance

Many system calls exist simply for the purpose of transferring information between
the user program and the operating system. For example, systems have a system call
to return the current time and date. Other system calls may return information about
the system, such as the number of current users, the version number of the operating
system, the amount of free memory or disk space.
        There are system calls to access this information. Two of them are get process
attributes and set process attributes.
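
A brief sketch using common POSIX information-maintenance calls (time, getpid,
getppid):

    #include <unistd.h>   /* getpid(), getppid() */
    #include <time.h>     /* time(), ctime() */
    #include <stdio.h>

    int main(void)
    {
        time_t now = time(NULL);              /* current time and date */
        printf("time: %s", ctime(&now));
        printf("pid: %d, parent pid: %d\n",   /* process attributes */
               (int)getpid(), (int)getppid());
        return 0;
    }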

5) Communication

There are two common models of communication. In the message-passing model,
information is exchanged through an interprocess-communication facility provided by
the operating system. Before communication can take place, a connection must be
opened. The name of the other communicator must be known, be it another process on
the same CPU, or a process on another computer connected by a network. Each
computer in a network has a host name, such as an IP name; similarly, each process has
a process name.
     The get hostid and get processid system calls translate these names into
identifiers, which are then passed to the specific open connection and close connection
system calls. The recipient process usually must give its permission for communication
to take place, with an accept connection call. Most processes that will be receiving
connections are special-purpose daemons: they execute a wait for connection call and
are awakened when a connection is made. The source of the communication, known as
the client, and the receiving daemon, known as the server, then exchange messages by
read message and write message system calls. The close connection call terminates the
communication.
        In the shared-memory model, processes exchange information by reading and
writing data in shared regions of memory. The form and location of the data are
determined by the processes and are not under the operating system's control.
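
As one hedged illustration of the message-passing style on a POSIX system, a pipe
gives two related processes a simple send/receive channel; the receive here is
blocking:

    #include <unistd.h>    /* pipe(), fork(), read(), write() */
    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>

    int main(void)
    {
        int fds[2];
        char buf[32];
        if (pipe(fds) < 0)                /* open the communication link */
            return 1;
        if (fork() == 0) {                /* child acts as the sender */
            const char *msg = "ping";
            write(fds[1], msg, strlen(msg) + 1);  /* send the message */
            _exit(0);
        }
        read(fds[0], buf, sizeof buf);    /* parent: blocking receive */
        printf("parent received: %s\n", buf);
        wait(NULL);
        return 0;
    }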




SYSTEM PROGRAMS

System programs provide a convenient environment for program development and
execution. They can be divided into these categories:

   •   File Management
       These programs create, delete, copy, rename, print, dump, list and generally
       manipulate files and directories.
   •   Status information
       Some programs simply ask the system for the date, time, amount of available
       memory or disk space, number of users, or similar status information. That
       information is then formatted, and printed to the terminal or other output
       device or file.

   •   File modification
       Several text editors may be available to create and modify the content of the
       files stored on disk or tape.

   •   Programming language support
        Compilers, assemblers, and interpreters for common programming languages
        such as C, C++, Java, and Visual Basic are often provided to the user with the
        operating system. Some of these programs are now priced and provided
        separately.

   •    Program loading and execution
       Once a program is assembled or compiled, it must be loaded into memory to
       be executed. Debugging systems for either higher-level languages or machine
       language are needed.

   •   Communications
       These programs provide the mechanism for creating virtual connections
       among processes, users, and different computer systems. They allow users to
       send messages to one another’s screens, to browse web pages, to send e-mail
        messages, to log in remotely, or to transfer data files from one machine to
       another.




SYSTEM DESIGN AND IMPLEMENTATION

1. Design Goals
The first problem in designing a system is to define the goals and specifications of the
system. The design of the system will be affected by the choice of hardware and type
of system: batch, time shared, single user, multiuser, distributed, real time, or general
purpose.

Requirement can be divided into two basic groups:
    • User goals
    • System goals
Users desire certain properties in a system: it should be convenient and easy to use,
easy to learn, reliable, safe, and fast. Similarly, a set of requirements can be defined by
the people who must design, create, implement, and maintain the system: it should be
flexible, reliable, error-free, and efficient.

2. Mechanisms and Policies
One important principle is the separation of policy from mechanism. Mechanisms
determine how to do something; policies determine what will be done. For example,
the timer construct is a mechanism for ensuring CPU protection, but deciding how long
the timer is to be set for a particular user is a policy decision.
       Policies are likely to change across places or over time. In the worst case, each
change in policy would require a change in the underlying mechanism. If the
mechanism is properly separated and policy independent, however, a change in policy
requires redefinition of only certain parameters of the system. For instance, if, on one
computer system, a policy decision is made that I/O-intensive programs should have
priority over CPU-intensive ones, then the opposite policy could be instituted easily on
some other computer system. A sketch of this separation follows.
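
A tiny sketch of the separation in C (all names here are invented for illustration):
the timer routine is the mechanism, and the quantum value fed to it is the policy,
which can change without touching the mechanism:

    #include <stdio.h>

    static unsigned quantum_ms = 100;    /* policy parameter, easy to change */

    static void arm_timer(unsigned ms)   /* stand-in for a hardware timer */
    {
        printf("timer armed for %u ms\n", ms);
    }

    static void set_quantum(unsigned ms) /* policy decision */
    {
        quantum_ms = ms;
    }

    static void dispatch(void)           /* mechanism: identical under any policy */
    {
        arm_timer(quantum_ms);
    }

    int main(void)
    {
        dispatch();        /* default policy */
        set_quantum(10);   /* new policy, e.g. favoring interactive jobs */
        dispatch();        /* same mechanism, different behavior */
        return 0;
    }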

3. Implementation
Once an operating system is designed, it must be implemented. Traditionally,
operating systems were written in assembly language. Now they are often written in
higher-level languages such as C and C++.
The various operating systems not written in assembly language are:

    • The Master Control Program (MCP) was written in ALGOL.
    • MULTICS, developed at MIT, was written in PL/1.
    • The Primos operating system for Prime computers is written in Fortran.
    • The UNIX operating system, OS/2, and Windows NT are written in C.
    The advantages of writing an OS in a higher-level language are that the code can be
written faster, is more compact, and is easier to understand and debug. The OS is also
far easier to port (to move to some other hardware) if it is written in a high-level
language.
        The disadvantages of writing an OS in a high-level language are reduced speed
and increased storage requirements.


Although operating systems are large, only a small amount of the code is critical to
high performance; the memory manager and the CPU scheduler are probably the most
critical routines.
                                      UNIT II
                           PROCESS MANAGEMENT

A process is a program in execution. A batch system executes jobs, whereas a
time-shared system has user programs, or tasks. The terms job and process are used
almost interchangeably. A process is more than the program code, which is sometimes
known as the text section; it also includes the current activity, as represented by the
value of the program counter and the contents of the processor's registers.
        A program is a passive entity, such as the contents of a file stored on disk. A
process is an active entity, with the program counter specifying the next instruction to
execute and a set of associated resources.

Process State

As a process executes, it changes state. The state of a process is defined in part by the
current activity of that process. Each process may be in one of the following states:

   •   New: The process is being created.
   •   Running: Instructions are being executed.
   •   Waiting: The process is waiting for some event to occur (such as an I/O
       completion or reception of a signal).
   •   Ready: The process is waiting to be assigned to a processor.
   •   Terminated: The process has finished execution.

Figure: Diagram of process state.
[States: New, Ready, Running, Waiting, Terminated. New → Ready (admitted);
Ready → Running (scheduler dispatch); Running → Ready (interrupt); Running →
Waiting (I/O or event wait); Waiting → Ready (I/O or event completion); Running →
Terminated (exit).]




PROCESS CONTROL BLOCK

Each process is represented in the operating system by a process control block (PCB),
also called a task control block. It contains many pieces of information associated with
a specific process, including the fields listed below (a C sketch of such a structure
follows the list):


[Fields: pointer | process state | process number | program counter | registers |
memory limits | list of open files | ...]
Figure: Process control block

   •   Process state: The state may be new, ready, running, waiting, halted and so on.
   •   Program counter: The counter indicates the address of the next instruction to be
       executed.
   •   CPU registers: The registers vary in number and type, depending on the
       computer architecture. They include accumulators, index registers, stack
       pointers, and general-purpose registers.
   •   CPU scheduling information: This information includes a process priority,
       pointers to scheduling queues, and any other scheduling parameters.
   •   Memory-management information: This information includes such items as
       the value of the base and limit registers, the page tables, or the segment
       tables, depending on the memory system used by the operating system.
   •   Accounting information: This information includes the amount of CPU and real
       time used, time limits, account numbers, job or process numbers, and so on.
   •   I/O status information: The information includes the list of I/O devices allocated
       to this process, a list of open files, and so on.
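
A hedged C sketch of how a kernel might declare a PCB; the field names and sizes
are illustrative, not taken from any particular operating system:

    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    struct pcb {
        struct pcb     *next;           /* pointer linking PCBs into queues */
        enum proc_state state;          /* process state */
        int             pid;            /* process number */
        unsigned long   pc;             /* program counter */
        unsigned long   regs[16];       /* CPU registers (count illustrative) */
        int             priority;       /* CPU-scheduling information */
        unsigned long   base, limit;    /* memory-management information */
        long            cpu_time_used;  /* accounting information */
        int             open_files[16]; /* I/O status: open-file descriptors */
    };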


PROCESS SCHEDULING

1. Scheduling Queues:
As processes enter the system, they are put into a job queue, which consists of all
processes in the system. The processes that are residing in main memory and are ready
and waiting to execute are kept on a list called the ready queue. This queue is generally
stored as a linked list: a ready-queue header contains pointers to the first and final
PCBs in the list.
        The OS also has other queues. When a process is allocated the CPU, it
executes for a while and eventually quits, is interrupted, or waits for the occurrence of a
particular event, such as the completion of an I/O request. Since the system has many
processes, the disk may be busy with the I/O request of some other process; the process
therefore may have to wait for the disk. The list of processes waiting for a particular
I/O device is called a device queue. Each device has its own device queue.
        A common representation of process scheduling is a queueing diagram. Each
rectangular box represents a queue. Two types of queues are present: the ready
queue and a set of device queues. The circles represent the resources that serve the
queues, and the arrows indicate the flow of processes in the system.
        A new process is put in the ready queue. It waits in the ready queue until it is
selected for execution. Once the process is assigned to the CPU and is executing, one
of several events could occur:
    • The process could issue an I/O request, and then be placed in an I/O queue.
    • The process could create a new subprocess and wait for its termination.
    • The process could be removed forcibly from the CPU, as a result of an
        interrupt, and be put back in the ready queue.



[Figure: queueing diagram. Processes in the ready queue are dispatched to the CPU;
from the CPU a process may issue an I/O request (joining an I/O queue), have its time
slice expire, fork a child and wait for it, or wait for an interrupt, eventually returning
to the ready queue.]
2. Schedulers

  A process migrates between the various scheduling queues throughout its lifetime.
  The operating system must select, for scheduling purposes, processes from these
  queues in some fashion. The selection process is carried out by the appropriate
  scheduler. There are two types of schedulers:
           • Long-term scheduler (or job scheduler): selects processes from the
               job pool and loads them into memory for execution.
           • Short-term scheduler (or CPU scheduler): selects from among the
               processes that are ready to execute, and allocates the CPU to one of them.
   The primary difference between these two schedulers is the frequency of their
   execution. The short-term scheduler must select a new process for the CPU
   frequently: a process may execute for only a few milliseconds before waiting for
   an I/O request. The short-term scheduler often executes at least once every 100
   milliseconds.
           The long-term scheduler, on the other hand, executes much less frequently:
   there may be minutes between the creation of new processes in the system. The
   long-term scheduler may need to be invoked only when a process leaves the
   system.
           Some systems, such as time-sharing systems, may introduce an additional,
   intermediate level of scheduling. This medium-term scheduler removes processes
   from memory, and thus reduces the degree of multiprogramming. At some later time,
   the process can be reintroduced into memory, and its execution can be continued
   where it left off. This scheme is called swapping: the process is swapped out, and is
   later swapped in, by the medium-term scheduler.

  3. Context Switch

  Switching the CPU to another process requires saving the state of the old process and
  loading the saved state for the new process. This task is known as a context switch.
   The context of a process is represented in the PCB of the process; it includes the value
   of the CPU registers, the process state, and memory-management information. When a
   context switch occurs, the kernel saves the context of the old process in its PCB and
   loads the saved context of the new process scheduled to run. Context-switch time is
   pure overhead, because the system does no useful work while switching. Its speed
   varies from machine to machine, depending on the memory speed, the number of
   registers that must be copied, and the existence of special instructions; typical speeds
   range from 1 to 1000 microseconds.

OPERATIONS ON PROCESSES

 The processes in the system can execute concurrently, and they must be created and
 deleted dynamically. Thus, the operating system must provide a mechanism for process
 creation and termination.


1. Process Creation
   A process may create several new processes, via a create-process system call, during
   the course of execution. The creating process is called a parent process, whereas the
   new processes are called the children of that process. Each of these new processes
   may in turn create other processes, forming a tree of processes.
           A process needs certain resources, such as CPU time, memory, files, and I/O
   devices, to accomplish any task. When a process creates a subprocess, the subprocess
   may be able to obtain its resources directly from the operating system, or the parent
   may have to partition its resources among its children.
           Example: Consider a process whose function is to display the status of a file,
   say F1, on the screen of the terminal. When it is created, it will get as an input from
   its parent process, the name of the file F1, and it will execute using that datum to
   obtain the desired information. It may also get the name of the output device.
           When a process creates a new process, two possibilities exist in terms of
   execution (a UNIX sketch follows the list):
   1. The parent continues to execute concurrently with its children.
   2. The parent waits until some or all of its children have terminated.
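
   A hedged UNIX sketch of both possibilities: fork creates the child, the child loads a
   new program with execlp, and the parent waits for it (possibility 2); omitting the
   wait call would give concurrent execution (possibility 1):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>    /* fork(), execlp() */
    #include <sys/wait.h>  /* wait() */

    int main(void)
    {
        pid_t pid = fork();                   /* create-process system call */
        if (pid < 0) {
            perror("fork");
            exit(1);
        }
        if (pid == 0) {                       /* child process */
            execlp("ls", "ls", (char *)NULL); /* load and run a new program */
            _exit(1);                         /* reached only if execlp fails */
        }
        wait(NULL);                           /* parent waits for the child */
        printf("child %d complete\n", (int)pid);
        return 0;
    }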

   2. Process Termination

   A process terminates when it finishes executing its final statement and asks the
   operating system to delete it by using the exit system call. At that point, the process
   may return data (output) to its parent process via the wait system call.
          A process can cause the termination of another process via an appropriate
   system call. When a process is newly created, the identity of the newly created
   process is passed to its parent.
          A parent may terminate the execution of one of its children for a variety of
   reasons, such as:
       • The child has exceeded its usage of some of the resources that it has been
          allocated. This requires the parent to have a mechanism to inspect the state of
          its children.
       • The task assigned to the child is no longer required.
       • The parent is exiting, and the operating system does not allow a child to
           continue if its parent terminates. On such systems, if a process terminates, then
           all its children must also be terminated. This phenomenon is called
           cascading termination.

   COOPERATING PROCESSES

The concurrent processes executing in the operating system may be either independent
processes or cooperating processes. A process is independent if it cannot affect or be
affected by the other processes executing in the system; a process that does not share
any data with any other process is independent. A process is cooperating if it can affect
or be affected by the other processes executing in the system; a process that shares data
with other processes is a cooperating process.




Process cooperation is provided for several reasons:
   • Information sharing: Since several users may be interested in the same piece of
       information, we must provide an environment to allow concurrent access to these
       types of resources.
   • Computation speedup: If we want a particular task to run faster, we must break
       it into subtasks, each of which will be executing in parallel with the others. Such
       speedup can be achieved only if the computer has multiple processing elements.
   • Modularity: We may want to construct the system in a modular fashion, dividing
       the system functions into separate processes or threads.
   • Convenience: Even an individual user may have many tasks on which to work at
       one time. For instance, a user may be editing, printing, and compiling in parallel.

   To illustrate the concept of cooperating processes, consider the producer-consumer
   problem. A producer process produces information that is consumed by a consumer
   process. For example, a print program produces characters that are consumed by the
   printer device. A compiler may produce assembly code, which is consumed by an
   assembler; the assembler, in turn, may produce object modules, which are consumed
   by the loader.
           To allow producer and consumer processes to run concurrently, we must have
   available a buffer of items that can be filled by the producer and emptied by the
   consumer. A producer can produce one item while the consumer is consuming
   another item. The producer and the consumer must be synchronized, so that the
   consumer does not try to consume an item that has not yet been produced. A sketch
   of such a buffer follows.
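
   A minimal sketch of that bounded buffer as a circular array, in the shared-memory
   style (single producer and single consumer, assumed to run concurrently;
   BUFFER_SIZE is illustrative, and the buffer can hold at most BUFFER_SIZE - 1 items):

    #define BUFFER_SIZE 8

    typedef struct { int data; } item;

    static item buffer[BUFFER_SIZE];
    static int in = 0;    /* next free slot (advanced by the producer) */
    static int out = 0;   /* next full slot (advanced by the consumer) */

    /* Producer: busy-waits while the buffer is full, then deposits an item. */
    void produce(item next)
    {
        while ((in + 1) % BUFFER_SIZE == out)
            ;                                /* buffer full: do nothing */
        buffer[in] = next;
        in = (in + 1) % BUFFER_SIZE;
    }

    /* Consumer: busy-waits while the buffer is empty, then removes an item. */
    item consume(void)
    {
        while (in == out)
            ;                                /* buffer empty: do nothing */
        item next = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        return next;
    }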

INTERPROCESS COMMUNICATION
IPC provides a mechanism to allow processes to communicate and to synchronize their
actions without sharing the same address space. IPC is particularly useful in a distributed
environment where the communicating processes may reside on different computers
connected with a network. An example is the chat program used on WWW.

1. Message Passing System

The function of a message system is to allow processes to communicate with one another
without the need to resort to shared data. Communication among the user processes is
accomplished through the passing of messages. An IPC facility provides at least two
operations: send(message) and receive(message).
        Messages sent by a process can be of either fixed or variable size. If
processes P and Q want to communicate, they must send messages to and receive
messages from each other; a communication link must exist between them. This link can
be implemented in a variety of ways. We are concerned not with the link's physical
implementation (such as shared memory, hardware bus, or network), but rather with its
logical implementation. Here are several methods for logically implementing a link and
the send / receive operations:

   •   Direct or indirect communication
   •   Symmetric or asymmetric communication


                                            20
•   Automatic or explicit buffering
   •   Send by copy or send by reference
   •   Fixed-sized or variable-sized messages.


Naming

1. Direct Communication:

   With direct communication, each process that wants to communicate must
   explicitly name the recipient or sender of the communication. In this scheme the send
   and the receive primitives are:

     • Send (P, message) – Send a message to process P.
     • Receive(Q, message) – receive a message from process Q
A communication link in this scheme has the following properties:
   • A link is established automatically between every pair of processes that want to
     communicate. The processes need to know only each other’s identity to
     communicate.
   • A link is associated with exactly two processes.
   • Exactly one link exists between each pair of processes.

This scheme exhibits symmetry in addressing; that is, both the sender and the receiver
processes must name the other to communicate. A variant of this scheme employs
asymmetry in addressing: only the sender names the recipient; the recipient is not
required to name the sender. In this scheme, the send and receive primitives are defined as
follows:
    • Send (P, message) – send a message to process P
    • Receive(id, message) – receive a message from any process; the variable id is set
       to the name of the process with which communication has taken place.

2. Indirect Communication:

        With indirect communication, the messages are sent to and received from
mailboxes, or ports. A mailbox can be viewed abstractly as an object into which
messages can be placed by processes and from which messages can be removed. Each
mailbox has a unique identification. In this scheme a process can communicate with some
other process via a number of different mailboxes. Two processes can communicate only
if they share a mailbox. The send and receive primitives are defined as follows:

   •   Send (A, message) – send a message to mailbox A
   •   Receive(A, message) – receive a message from mailbox A

In this scheme, a communication link has the following property:
    • A link may be associated with more than two processes.



•   A number of different links may exist between each pair of communicating
       processes, with each link corresponding to one mailbox.

A mailbox can be owned either by a process or by the operating system. If the mailbox
is owned by a process, then we distinguish between the owner (who can only receive
messages through this mailbox) and the user (who can only send messages to the
mailbox). Since each mailbox has a unique owner, there can be no confusion about who
should receive a message sent to the mailbox. When a process that owns a mailbox
terminates, the mailbox disappears. Any process that subsequently sends a message to
this mailbox must be notified that the mailbox no longer exists.
        On the other hand, a mailbox owned by the operating system is independent and
is not attached to any particular process. The operating system then must provide a
mechanism that allows a process to do the following:
    • Create a new mailbox
    • Send and receive messages through the mailbox
    • Delete a mailbox.
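
One real mailbox facility is POSIX message queues; a hedged sketch for Linux (the
queue name /unit2-demo is invented, and linking with -lrt may be required):

    #include <fcntl.h>     /* O_CREAT, O_RDWR */
    #include <mqueue.h>    /* mq_open(), mq_send(), mq_receive(), mq_unlink() */
    #include <stdio.h>

    int main(void)
    {
        struct mq_attr attr = { .mq_maxmsg = 4, .mq_msgsize = 64 };
        char buf[64];

        mqd_t mq = mq_open("/unit2-demo", O_CREAT | O_RDWR, 0644, &attr);
        if (mq == (mqd_t)-1)                      /* create the mailbox */
            return 1;
        mq_send(mq, "hello", 6, 0);               /* send through the mailbox */
        mq_receive(mq, buf, sizeof buf, NULL);    /* receive from the mailbox */
        printf("got: %s\n", buf);
        mq_close(mq);
        mq_unlink("/unit2-demo");                 /* delete the mailbox */
        return 0;
    }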

3. Synchronization:

Communication between processes takes place by calls to send and receive primitives.
Message passing may be either blocking or non-blocking, also known as synchronous
and asynchronous.
   • Blocking send: The sending process is blocked until the message is received by
       the receiving process or by the mailbox.
   • Non-blocking send: The sending process sends the message and resumes
       operation.
   • Blocking receive: The receiver blocks until a message is available.
   • Non-blocking receive: The receiver retrieves either a valid message or a null.

4. Buffering:

Whether the communication is direct or indirect, messages exchanged by communicating
processes reside in a temporary queue. Such queues can be implemented in three ways:

   •   Zero capacity: The queue has maximum length 0, thus the link cannot have any
       message waiting in it. In this case, the sender must block until the recipient
       receives the message.
   •   Bounded capacity: The queue has finite length n; thus, at most n messages can
       reside in it. If the queue is not full when a new message is sent, the message is
       placed in the queue, and the sender can continue execution without waiting. The
       link has finite capacity, however; if the link is full, the sender must block until
       space is available in the queue.
   •   Unbounded capacity: The queue has potentially infinite length, thus any number
       of messages can wait in it. The sender never blocks.




SCHEDULING ALGORITHMS

   1. First Come First Served Scheduling:

   The simplest CPU-scheduling algorithm is the first-come, first-served (FCFS)
   scheduling algorithm. With this scheme, the process that requests the CPU first is
   allocated the CPU first. The implementation is managed with a FIFO queue. When
   the CPU is free, it is allocated to the process at the head of the queue. The running
   process is then removed from the queue.

            The average waiting time under the FCFS policy, however, is often quite long. Consider
   the following set of processes that arrive at time 0, with the length of the CPU-burst
   time given in milliseconds:
                          Process                Burst time
                            P1                      24
                            P2                      3
                            P3                      3

   If the processes arrive in the order P1, P2, P3 and are served in the FCFS order, we
   get the result shown in the following Gantt chart:


     |                     P1                     | P2 | P3 |
     0                                            24   27   30

The waiting time is 0 milliseconds for process P1, 24 milliseconds for process P2, and 27
milliseconds for process P3. Thus the average waiting time is (0+24+27)/3 = 17
milliseconds. If the processes arrive in the order P2, P3, P1, the results will be as shown in
the following Gantt chart:

   P2: 0-3 | P3: 3-6 | P1: 6-30




The average waiting time is now (6+0+3)/3=3 milliseconds. Thus the average waiting
time under FCFS policy is generally not minimal and may vary if the process CPU-burst
times vary greatly.
        The FCFS scheduling algorithm is non preemptive. Once the CPU has been
allocated to a process, that process keeps the CPU until it releases the CPU either by
termination or by requesting I/O.
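
The waiting-time arithmetic above can be reproduced with a short sketch (burst times
taken from the example; the helper name is an illustrative assumption):

def fcfs_waits(bursts):
    # With all processes arriving at time 0, each one waits for
    # the sum of the bursts ahead of it in the FIFO queue.
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)
        elapsed += burst
    return waits

waits = fcfs_waits([24, 3, 3])               # order P1, P2, P3
print(waits, sum(waits) / len(waits))        # [0, 24, 27] 17.0
print(fcfs_waits([3, 3, 24]))                # order P2, P3, P1 -> [0, 3, 6]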

2. Shortest-Job First Scheduling:

A different approach to CPU scheduling is the shortest-job-first (SJF) scheduling
algorithm. This algorithm associates with each process the length of that process's next
CPU burst. When the CPU is available, it is assigned to the process that has the smallest
next CPU burst. If two processes have the same length next CPU burst, FCFS scheduling
is used to break the tie. Consider the following set of processes, with the length of the
CPU burst time given in milliseconds:



               Process                       Burst time
                 P1                            6
                 P2                            8
                 P3                            7
                 P4                            3

Using SJF scheduling, we would schedule the process according to the following Gantt
chart:

        P4: 0-3 | P1: 3-9 | P3: 9-16 | P2: 16-24
            The waiting time is 3 milliseconds for process P1, 16 milliseconds for process
    P2, 9 milliseconds for process P3, and 0 milliseconds for process P4. Thus the
    average waiting time is (3+16+9+0)/4=7 milliseconds. If FCFS is used the average
    waiting time is 10.25 milliseconds.
        The real difficulty with the SJF algorithm is knowing the length of the next CPU
request. For long-term or job scheduling in a batch system, the length of the process time
limit is specified by the user when the job is submitted. For short-term or CPU
scheduling, there is no way to know the length of the next CPU burst, but we can predict
it: we expect that the next CPU burst will be similar in length to the previous ones.
        The next CPU burst is generally predicted as an exponential average of the
measured lengths of previous CPU bursts.
Let tn be the length of the nth CPU burst, and let Tn+1 be our predicted value for the
next CPU burst. Then, for α, 0 ≤ α ≤ 1, define

        Tn+1 = α tn + (1 - α) Tn
This formula defines an exponential average.
The value of tn contains our most recent information;
Tn stores the past history
The parameter α controls the relative weight of recent and past history in our prediction.
If α=0 then Tn+1=Tn, recent history has no effect
If α=1 then Tn+1 = tn only the most recent CPU burst matters
If α=1/2 recent and past history are equally weighted.
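
The recurrence can be evaluated incrementally, as in this sketch; the initial guess T0 and
the default α = 1/2 are illustrative choices, not values from the text:

def predict_next_burst(bursts, alpha=0.5, t0=10.0):
    # T(n+1) = alpha * t(n) + (1 - alpha) * T(n)
    prediction = t0
    for t in bursts:
        prediction = alpha * t + (1 - alpha) * prediction
    return prediction

# Older bursts decay geometrically; the most recent burst weighs most.
print(predict_next_burst([6, 4, 6, 4, 13, 13]))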

SJF may be either preemptive or non-preemptive. The choice arises when a new process
arrives at the ready queue while a previous process is still executing. The new process
may have a shorter next CPU burst than what is left of the currently executing process. A
preemptive SJF algorithm will preempt the currently executing process, whereas a
non-preemptive SJF algorithm will allow the currently running process to finish its CPU
burst.
Consider an example, with four processes, with the length of the CPU burst time given in
milliseconds:


                   Process               Arrival time    Burst time
                     P1                         0          8
                     P2                         1          4
                     P3                         2          9
                     P4                         3          5

If the processes arrive at the ready queue at the times as shown and need the indicated
burst times, then the resulting preemptive SJF schedule is as depicted in the following
Gantt chart:


     P1: 0-1 | P2: 1-5 | P4: 5-10 | P1: 10-17 | P3: 17-26

     Process P1 is started at time 0, since it is the only process in the queue. Process P2
     arrives at time 1. The remaining time for process P1 (7 milliseconds) is larger than the
     time required by process P2 (4 milliseconds), so process P1 is preempted and process
     P2 is scheduled. The average waiting time for this example is
     ((10 - 1) + 0 + (17 - 2) + (5 - 3))/4 = 26/4 = 6.5 milliseconds.
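
This preemptive schedule (often called shortest-remaining-time-first) can be reproduced
with a compact simulation; the one-unit time steps, data shapes, and function name below
are illustrative assumptions:

def srtf_average_wait(procs):
    # procs: list of (name, arrival, burst)
    remaining = {name: burst for name, _, burst in procs}
    arrival = {name: arr for name, arr, _ in procs}
    completion, time = {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:
            time = min(arrival[n] for n in remaining)     # CPU idles until next arrival
            continue
        current = min(ready, key=lambda n: remaining[n])  # shortest remaining time wins
        remaining[current] -= 1                           # run for one time unit
        time += 1
        if remaining[current] == 0:
            completion[current] = time
            del remaining[current]
    waits = [completion[n] - arr - burst for n, arr, burst in procs]
    return sum(waits) / len(waits)

print(srtf_average_wait([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)]))  # 6.5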




3. Priority Scheduling:

The SJF is a special case of the general priority-scheduling algorithm. A priority is
associated with each process, and the CPU is allocated to the process with the highest
priority. Equal priority processes are scheduled in FCFS order.
        As an example, consider the following set of processes, assumed to have arrived
at time 0, in the order P1, P2, P3,P4,P5, with the length of the CPU burst time given in
milliseconds:

              Process             Burst time          Priority
                P1                        10           3
                P2                        1            1
                P3                        2            4
                P4                        1            5
                P5                        5            2
Using the priority scheduling, we would schedule these processes according to the
following Gantt chart:

         P2: 0-1 | P5: 1-6 | P1: 6-16 | P3: 16-18 | P4: 18-19
         The average waiting time is 8.2 milliseconds. Priorities can be defined either
internally or externally. Internally defined priorities use some measurable quantity to
compute the priority of a process. For example, time limits, memory requirements, the
number of open files etc. External priorities are set by criteria that are external to the
operating system, such as importance of the process, the type and amount of funds being
paid for computer use, the department sponsoring the work, and other often political
factors.
                Priority scheduling can be either preemptive or non-preemptive. When a
process arrives at the ready queue, its priority is compared with the priority of the
currently running process. A preemptive priority-scheduling algorithm will preempt the
CPU if the priority of the newly arrived process is higher than the priority of the currently
running process. A non-preemptive priority-scheduling algorithm will simply put the new
process at the head of the ready queue.



A major problem with priority-scheduling algorithms is indefinite blocking or
starvation. A process that is ready to run but lacking the CPU is considered blocked,
waiting for the CPU. A priority-scheduling algorithm can leave some low-priority
processes waiting indefinitely for the CPU. Generally, one of two things will happen:
either the process will eventually be run, or the computer system will eventually crash
and lose all unfinished low-priority processes.
        A solution to the problem of indefinite blockage of low-priority processes is
aging. Aging is a technique of gradually increasing the priority of processes that wait in
the system for a long time.
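
A minimal sketch of aging; the priority scale (smaller number = higher priority), the
boost amount, and the call interval are all illustrative assumptions:

def age_waiting_processes(ready_queue, boost=1):
    # Called periodically: every waiting process creeps toward priority 0,
    # so even a very low-priority process is eventually scheduled.
    for proc in ready_queue:
        proc["priority"] = max(0, proc["priority"] - boost)

ready = [{"name": "P7", "priority": 127}, {"name": "P2", "priority": 3}]
age_waiting_processes(ready)
print(ready)    # P7 has moved from 127 to 126; repeated calls keep raising it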



4. Round-Robin Scheduling:

        The round-robin (RR) scheduling algorithm is designed especially for time-
sharing systems. It is similar to FCFS scheduling, but preemption is added to switch
between processes. A small unit of time, called a time quantum, is defined; the time
quantum is generally from 10 to 100 milliseconds. The ready queue is treated as a
circular queue: the CPU scheduler goes around the ready queue, allocating the CPU
to each process for a time interval of up to 1 time quantum.
        To implement RR scheduling, we keep the ready queue as a FIFO queue of
processes. New processes are added to the tail of the ready queue. The CPU scheduler
picks the first process from the ready queue, sets a timer to interrupt after 1 time
quantum, and dispatches the process.
        One of two things will then happen. The process may have a CPU burst of less
than 1 time quantum. In this case the process itself will release the CPU voluntarily. The
scheduler will then proceed to the next process in the ready queue. Otherwise, if the CPU
burst of the currently running process is longer than 1 time quantum, the timer will go off
and will cause an interrupt to the operating system. A context switch will be executed,
and the process will be put at the tail of the ready queue. The CPU scheduler will then
select the next process in the ready queue.
        Consider the following set of processes that arrive at time 0, with the length of the
CPU burst time given in milliseconds:
                        Process        Burst Time
                          P1                24
                          P2                 3
                          P3                 3
        If the time quantum is 4 milliseconds, then process P1 gets the first 4
milliseconds. Since it requires another 20 milliseconds, it is preempted after the first time
quantum, and the CPU is given to the next process in the queue, process P2. Since
process P2 does not need 4 milliseconds, it quits before its time quantum expires. The
CPU is then given to the next process, process P3. Once each process has received 1 time
quantum, the CPU is returned to process P1 for an additional time quantum. The
resulting RR scheduling is:


   P1: 0-4 | P2: 4-7 | P3: 7-10 | P1: 10-14 | P1: 14-18 | P1: 18-22 | P1: 22-26 | P1: 26-30

The average waiting time is 17/3=5.66 milliseconds. In RR scheduling algorithm, no
process is allocated the CPU for more than 1 time quantum in a row. If a process CPU
burst exceeds 1 time quantum, that process is preempted and is put back in the ready
queue. The RR scheduling algorithm is preemptive.
        If there are n processes in the ready queue and the time quantum is q, then each
process gets 1/n of the CPU time in chunks of at most q time units. Each process must
wait no longer than (n - 1) x q time units until its next time quantum.
        The effect of context switching must also be considered in the performance of RR
scheduling. Let us assume that we have only one process of 10 time units. If the time
quantum is 12 time units, the process finishes in less than 1 time quantum, with no
overhead. If the quantum is 6 time units, the process requires 2 quanta, resulting in 1
context switch.
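
The RR example above can be checked with a short simulation (all processes arriving at
time 0; the helper name and data shapes are our own assumptions):

from collections import deque

def rr_waits(procs, quantum=4):
    # procs: list of (name, burst); waiting time = completion - burst here.
    burst_of = dict(procs)
    ready = deque(procs)
    time, completion = 0, {}
    while ready:
        name, remaining = ready.popleft()
        run = min(quantum, remaining)
        time += run
        if remaining > run:
            ready.append((name, remaining - run))   # preempted: back to the tail
        else:
            completion[name] = time                 # finished within its quantum
    return {name: completion[name] - burst_of[name] for name, _ in procs}

waits = rr_waits([("P1", 24), ("P2", 3), ("P3", 3)])
print(waits, sum(waits.values()) / 3)               # average 17/3 = 5.66 ms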

5. Multilevel Queue Scheduling:

A multilevel queue scheduling algorithm partitions the ready queue into several separate
queues. The processes are permanently assigned to one queue, based on some property of
the process, such as memory size, process priority or process type. Each queue has its
own scheduling algorithm. For example, separate queues can be used for foreground and
background processes.
    • Foreground queues: this is for interactive processes, with highest priority. This
       can be scheduled using RR scheduling algorithm.
    • Background queues: This is for batch processes, with lowest priority and uses
       FCFS scheduling algorithm for scheduling.
        Figure: Multilevel queue scheduling with five priority levels, from highest to
        lowest: system processes, interactive processes, interactive editing processes,
        batch processes, student processes.
       Let us look at an example of a multilevel queue scheduling algorithm with five
queues:
   1. System processes
   2. Interactive processes
   3. Interactive editing processes
   4. Batch processes
   5. Student processes

Each queue has absolute priority over lower-priority queues. No process in the batch
queue, for example, could run unless the queues for system processes, interactive processes, and



interactive editing processes were all empty. If an interactive editing process entered the
ready queue while a batch process was running, the batch process would be preempted.

6. Multilevel Feedback Queue Scheduling:

In a multilevel queue scheduling algorithm, processes are permanently assigned to a queue
on entry to the system. Processes do not move between queues. If there are separate
queues for foreground and background processes, for example, processes do not move
from one queue to the other, since processes do not change their foreground or
background nature.
        Multilevel feedback queue scheduling, allows a process to move between the
queues. The idea is to separate processes with different CPU burst characteristics.
     • If a process uses too much CPU time, it will be moved to a lower priority queue.
     • If a process waits too long in a lower priority queue may be moved to a higher-
         priority queue. This form of aging prevents starvation.
For example, consider a multilevel feedback queue scheduler with three queues,
numbered from 0 to 2. The scheduler first executes all processes in queue 0. Only when
queue 0 is empty will it execute processes in queue 1. Similarly, processes in queue 2
will be executed only if queues 0 and 1 are empty. A process that arrives for queue 1 will
preempt a process in queue 2. A process that arrives for queue 0 will, in turn, preempt a
process in queue 1.
        A process entering the ready queue is put in queue 0. A process in queue 0 is given
a time quantum of 8 milliseconds. If it does not finish within this time, it is moved to the
tail of queue 1. If queue 0 is empty, the process at the head of queue 1 is given a
quantum of 16 milliseconds. If it does not complete, it is preempted and is put into queue
2. Processes in queue 2 are run on an FCFS basis, only when queues 0 and 1 are empty.
(A sketch of this example follows the parameter list below.)
        A multilevel feedback queue scheduler is defined by the following parameters:

   •   The number of queues
   •   The scheduling algorithm for each queue
   •   The method used to determine when to upgrade a process to a higher priority
       queue
   •   The method used to determine when to demote a process to a lower priority
       queue
   •   The method used to determine which queue a process will enter when the process
       needs service.
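
A non-preemptive sketch of the three-queue example (quanta of 8 and 16 ms, FCFS at the
bottom); since every process here enters at time 0, preemption between levels never
arises, and the data shapes are illustrative assumptions:

from collections import deque

def mlfq_run(procs):
    # procs: list of (name, burst); every process enters queue 0.
    quanta = [8, 16]                    # queue 2 runs FCFS to completion
    queues = [deque(procs), deque(), deque()]
    time, completion = 0, {}
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)   # highest non-empty queue
        name, remaining = queues[level].popleft()
        run = min(quanta[level], remaining) if level < 2 else remaining
        time += run
        if remaining > run:
            queues[level + 1].append((name, remaining - run))  # demote one level
        else:
            completion[name] = time
    return completion

print(mlfq_run([("A", 30), ("B", 4)]))  # B finishes in queue 0; A sinks to queue 2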

MULTIPLE – PROCESSOR SCHEDULING

If multiple CPUs are available, the scheduling problem becomes more complex. In a
multiprocessor system, processors that are identical in functionality are called
homogeneous, while processors with different functionality are called heterogeneous.
    Even within a homogeneous multiprocessor, there are sometimes limitations on
scheduling. If several identical processors are available, then load sharing can occur. It
would be possible to provide a separate queue for each processor. In this case, however,
one processor could be idle, with an empty queue, while another processor was very busy. To


prevent this situation, we use a common ready queue. All processes go into one queue
and are scheduled onto any available processor.
   In such a scheduling scheme, one of two scheduling approaches may be used.
       • Self-scheduling: Each processor examines the common ready queue and
           selects a process to execute.
       • Master-slave structure: One processor is appointed as scheduler for the
           other processors, creating a master-slave structure.




REAL TIME SCHEDULING

Real time computing is divided into two types:
       • Hard real time systems
       • Soft real time systems

Hard real time systems are required to complete a critical task within a guaranteed
amount of time. A process is submitted with a statement of the amount of time in which it
needs to complete or perform I/O. The scheduler either admits the process, guaranteeing
that the process will complete on time or rejects the request as impossible. This is known
as resource reservation.
         Such a guarantee requires that the scheduler know exactly how long each type of
operating system function takes to perform, and therefore each operation must be
guaranteed to take a maximum amount of time. Therefore, hard real time systems run on
special software dedicated to their critical process, and lack the full functionality of
modern computers and operating systems.
Soft real-time computing is less restrictive. It requires only that critical processes
receive priority over less critical ones. Implementing soft real-time functionality
requires careful design of the scheduler and related aspects of the operating system.

   1. The system must have priority scheduling, and real-time processes must have the
      highest priority. The priority of real-time processes must not degrade over time,
      even though the priority of non-real-time processes may.
   2. The dispatch latency must be small. The smaller the latency, the faster a real-time
      process can start executing once it is runnable.




UNIT III

                                     DEADLOCKS

        In a multiprogramming environment, several processes may compete for a finite
number of resources. A process requests resources; if the resources are not available at
that time, the process enters a wait state. Waiting processes may never again change state,
because the resources they have requested are held by other waiting processes. This
situation is called a deadlock.
    A process must request a resource before using it, and must release the resource after
    using it. A process may request as many resources as it requires to carry out its task.
    The number of resources requested may not exceed the total number of resources
    available in the system: a process cannot request three printers if the system has only
    two.
    Under the normal mode of operation, a process may utilize a resource in only the
    following sequence:

   1. Request: If the request cannot be granted immediately, (if the resource is being
      used by another process), then the requesting process must wait until it can
      acquire the resource.
   2. Use: The process can operate on the resource, (for example if the resource is a
      printer, the process can print on the printer).
   3. Release: The process releases the resources.

   A set of processes is in a deadlock state when every process in the set is waiting for
   an event that can be caused only by another process in the set.

                          DEADLOCK CHARACTERIZATION

I. Necessary Conditions:

A deadlock situation can arise if the following four conditions hold simultaneously in a
system:
    1. Mutual exclusion: At least one resource must be held in a non-sharable mode;
       that is, only one process at a time can use the resource. If another process
       requests that resource, the requesting process must be delayed until the resource
       has been released.
    2. Hold and wait: A process must be holding at least one resource and waiting to
       acquire additional resources that are currently being held by other processes.
    3. No preemption: Resources cannot be preempted; that is, a resource can be
       released only voluntarily by the process holding it, after the process has
       completed its task.
    4. Circular wait: A set {P0, P1, P2, ..., Pn} of waiting processes must exist such
       that P0 is waiting for a resource that is held by P1,
       P1 is waiting for a resource that is held by P2,
       ...,
       Pn-1 is waiting for a resource that is held by Pn,
       and Pn is waiting for a resource that is held by P0.
II. Resource Allocation Graph:

Deadlocks can be described in terms of a directed graph called a system resource
allocation graph. This graph consists of a set of vertices V and a set of edges E. The set of
vertices V is partitioned into two different types of nodes:
    • P = {P1, P2, ..., Pn}, the set consisting of all active processes in the system
    • R = {R1, R2, ..., Rm}, the set consisting of all resource types in the system.

A directed edge from process Pi to resource type Rj, denoted Pi → Rj, signifies
that process Pi has requested an instance of resource type Rj and is currently waiting for
that resource. This edge is called a request edge.
A directed edge from resource type Rj to process Pi, denoted Rj → Pi, signifies
that an instance of resource type Rj has been allocated to process Pi. This edge
is called an assignment edge.
        Pictorially each process Pi is represented by a circle, and each resource type Rj as
a square. Since resource type Rj may have more than one instance, we represent each
such instance as a dot within the square.
        The resource allocation graph shown in figure depicts the following situation:
                        Figure: Resource allocation graph
   •   The sets P, R and E:
          o P = {P1, P2, P3}
          o R = {R1, R2, R3, R4}
          o E = {P1 → R1, P2 → R3, R1 → P2, R2 → P2, R2 → P1, R3 → P3}
   •   Resource instances:
          o One instance of resource type R1
          o Two instances of resource type R2


o One instance of resource type R3
            o Three instances of resource type R4

   •   Process states:
           o Process P1 is holding an instance of resource type R2, and is waiting for
              an instance of resource type R1.
           o Process P2 is holding an instance of R1 and R2, and is waiting for an
              instance of resource type R3.
           o Process P3 is holding an instance of R3.
    Given the definition of a resource allocation graph:
    • If the graph contains no cycle, then no process in the system is deadlocked.
    • If the graph contains a cycle, then a deadlock may exist.
    • If each resource type has exactly one instance, then a cycle implies that a
       deadlock has occurred.
    • If each resource type has several instances, then a cycle does not necessarily
       imply that a deadlock has occurred.
                Figure: Resource allocation graph with a deadlock

   Two minimal cycles exist in the system:

   P1 → R1 → P2 → R3 → P3 → R2 → P1

   P2 → R3 → P3 → R2 → P2

   Processes P1, P2 and P3 are deadlocked. Process P2 is waiting for the resource R3,
   which is held by process P3. Process P3, on the other hand, is waiting for either
   process P1 or process P2 to release resource R2. In addition, process P1 is waiting
   for process P2 to release resource R1.
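
For single-instance resource types, deadlock detection on such a graph reduces to cycle
detection. Here is a sketch using a plain adjacency dictionary built from the figure's
edges (the representation and function name are illustrative assumptions):

def has_cycle(graph):
    # DFS with a recursion stack: reaching a node already on the stack
    # means we have found a back edge, i.e. a cycle.
    visited, on_stack = set(), set()
    def dfs(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in graph.get(node, ()):
            if nxt in on_stack or (nxt not in visited and dfs(nxt)):
                return True
        on_stack.discard(node)
        return False
    return any(dfs(n) for n in list(graph) if n not in visited)

edges = {"P1": ["R1"], "P2": ["R3"], "P3": ["R2"],          # request edges
         "R1": ["P2"], "R2": ["P1", "P2"], "R3": ["P3"]}    # assignment edges
print(has_cycle(edges))   # True: P1 -> R1 -> P2 -> R3 -> P3 -> R2 -> P1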

                            Methods for Handling Deadlocks

   Deadlock problem can be dealt in one of the three ways:
   • We can use a protocol to prevent or avoid deadlocks, ensuring that the system will
      never enter a deadlock state.
   • We can allow the system to enter a deadlock state, detect it, and recover.
   • We can ignore the problem altogether, and pretend that deadlocks never occur in
      the system.

       To ensure that deadlocks never occur, the system can use either a deadlock
       prevention or a deadlock avoidance scheme.

Deadlock prevention: a set of methods for ensuring that at least one of the
necessary conditions cannot hold.
Deadlock avoidance: requires that the OS be given, in advance, additional information
concerning which resources a process will request and use during its lifetime. With this
additional knowledge, we can decide for each request whether or not the process should
wait. To decide whether a request can be satisfied or must be delayed, the system must
consider the resources currently available, the resources currently allocated to each
process, and the future requests and releases of each process.

                             DEADLOCK PREVENTION

Deadlocks can be prevented by ensuring that at least one of the four necessary conditions
cannot hold. The conditions are:
   • Mutual Exclusion
   • Hold and Wait
   • No Preemption
   • Circular Wait

1. Mutual Exclusion:

The mutual exclusion condition must hold for non sharable resources. For example, a
printer cannot be simultaneously shared by several processes. Sharable resources on the
other hand, do not require mutually exclusive access, and thus cannot be involved in a
deadlock. Read only files are a good example for sharable resources. If several processes
attempt to open a read-only file at the same time, they can be granted simultaneous access
to the file. A process never needs to wait for a sharable resource.

2. Hold and Wait:

To ensure that the hold and wait condition never occurs in the system, we must guarantee
that, whenever a process requests a resource, it does not hold any other resources.



One protocol that can be used requires each process to request and be allocated all its
resources before it begins execution.
Another protocol allows a process to request resources only when the process has none.
A process may request some resources and use them. Before it can request any additional
resources, it must release all the resources that it is currently allocated.
Examples to illustrate the two protocols:

Consider a process that copies data from a tape drive to a disk file, sorts the disk file and
then prints the results to a printer.

Protocol one - If all resources must be requested at the beginning of the process, then the
process must initially request the tape drive, disk file and printer. It will hold the printer
for its entire execution, even though it needs the printer only at the end.

Protocol two – the second method allows the process to request initially only the tape
drive and disk file. It copies from the tape drive to the disk, then releases both the tape
drive and the disk file. The process must then again request the disk file and the printer.
After copying the disk file to the printer, it releases these two resources and terminates.

Disadvantages of two protocols:

   1. Resource utilization may be low, since many of the resources may be allocated
      but unused for a long period.
   2. Starvation is possible. A process that needs several popular resources may have to
      wait indefinitely, because at least one of the resources that it needs is always
      allocated to some other process.

3. No Preemption

The third necessary condition is that there be no preemption of resources that have
already been allocated. To ensure that this condition does not hold, the following protocol
can be used.
         If a process is holding some resources and requests another resource that cannot
be immediately allocated to it, then all resources currently being held by the process are
preempted (implicitly released). The preempted resources are added to the list of
resources for which the process is waiting. The process will be restarted only when it can
regain its old resources, as well as the new ones that it is requesting.
         If a process requests some resources, we first check whether they are available. If
they are, we allocate them. If they are not available, we check whether they are allocated
to some other process that is waiting for additional resources. If so, we preempt the
desired resources from the waiting process and allocate them to the requesting process. If
the resources are neither available nor held by a waiting process, the requesting process
must wait. While it is waiting, some of its resources may be preempted, but only if
another process requests them. A process can be restarted only when it is allocated the
new resources it is requesting and recovers any resources that were preempted while it
was waiting.




4. Circular Wait

The fourth condition for deadlocks is circular wait. One way to ensure that this condition
never holds is to impose a total ordering of all resource types, and to require that each
process requests resources in increasing order of enumeration.
        Let R = {R1, R2, ..., Rm} be the set of resource types. We assign to each resource
type a unique number, which allows us to compare two resources and to determine
whether one precedes another in our ordering.

Example:       F(tape drive) = 1,
               F(disk drive) = 5
               F(printer) = 12.

Now consider the following protocol to prevent deadlocks: each process can request
resources only in an increasing order of enumeration. That is, a process can initially
request any number of instances of a resource type Ri; after that, the process can request
instances of resource type Rj if and only if F(Rj) > F(Ri). For example, using the function
defined above, a process that wants to use the tape drive and printer at the same time must
first request the tape drive and then request the printer.
         Alternatively, we can require that, whenever a process requests an instance of
resource type Rj, it has released any resources Ri such that F(Ri) >= F(Rj). If these
two protocols are used, then the circular-wait condition cannot hold.
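
A sketch of this ordered-request discipline, using the F values from the example; the
lock objects and helper names are illustrative assumptions:

import threading

F = {"tape drive": 1, "disk drive": 5, "printer": 12}       # resource ordering
locks = {name: threading.Lock() for name in F}

def acquire_in_order(*names):
    # Every process acquires resources in increasing F order,
    # so no circular chain of waits can ever form.
    for name in sorted(names, key=F.get):
        locks[name].acquire()

def release_all(*names):
    for name in names:
        locks[name].release()

acquire_in_order("printer", "tape drive")   # tape drive (1) is taken before printer (12)
release_all("printer", "tape drive")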

DEADLOCK RECOVERY

There are two approaches to recovering from a deadlock. They are

   •   Suspend/Resume a Process
   •   Kill a Process

1. Suspend/Resume a Process:

In this method a process is selected based on a variety of criteria (for example, low
priority) and it is suspended for a long time. The resources are reclaimed from that
process and then allocated to other processes that are waiting for them. When one of the
waiting processes finishes, the original suspended process is resumed.
        This strategy cannot be used in on-line or real-time systems, because the response
times of some processes would then become unpredictable.
        Suspend/Resume operations are not easy to manage. Suppose, for example, that a
tape is read halfway through when the process holding the tape drive is suspended. The
operator will have to dismount that tape and mount the new tape for the new process to
which the tape drive is now allocated. After this new process is over, when the old
process is resumed, the tape for the original process will have to be mounted again and,
more importantly, positioned exactly where it was.




2. Kill a Process:

The operating system decides to kill a process and reclaim all its resources after ensuring
that such action will solve the deadlock. This solution is simple but involves loss of at
least one process.
Choosing a process to be killed, again, depends on the scheduling policy and the process
priority. It is safest to kill a lowest-priority process which has just begun, so that the loss
is not very heavy.

DEADLOCK AVOIDANCE

Deadlock avoidance starts with an environment where a deadlock is possible, but the
operating system runs an algorithm to ensure, before allocating any resource, that the
allocation cannot lead to a deadlock. If a deadlock cannot be ruled out, the operating
system does not grant the request of the process for the resource.
        Dijkstra was the first person to propose an algorithm, in 1965, for deadlock
avoidance. This is known as the "Banker's algorithm" due to its similarity to the problem
of a banker who wishes to disburse loans to various customers within limited resources.
        This algorithm lets the OS know in advance, before a resource is allocated to a
process, whether the allocation can lead to a deadlock (an "unsafe state") or whether a
deadlock can be avoided (a "safe state").
        The Banker's algorithm maintains two matrices:
Matrix A – consists of the resources allocated to different processes at a given time.
Matrix B – maintains the resources still needed by different processes at the same time.

Matrix A - Resources assigned:

Process    Tape drives    Printers    Plotters
P0         2              0           0
P1         0              1           0
P2         1              2           1
P3         1              0           1

Matrix B - Resources still required:

Process    Tape drives    Printers    Plotters
P0         1              0           0
P1         1              1           0
P2         2              1           1
P3         1              1           1


Vectors

Total Resources (T) = 543
Held Resources (H) = 432
Free Resources (F) = 111


Matrix A shows that process P0 is holding 2 tape drives, while P1 is holding 1 printer,
and so on. The total resources held by the various processes are: 4 tape drives, 3 printers
and 2 plotters.

        This says that at a given moment, the total resources held by the various processes
are: 4 tape drives, 3 printers and 2 plotters. This should not be confused with the decimal
number 432; that is why it is called a vector. By the same logic, the figure shows that the
vector for the Total Resources (T) is 543. This means that in the whole system there are
physically 5 tape drives, 4 printers and 3 plotters. These resources are made known to the
operating system at the time of system generation. By subtracting (H) from (T) column-
wise, we get the vector (F) of free resources, which is 111. This means that the resources
available to the operating system for further allocation are: 1 tape drive, 1 printer and 1
plotter at that juncture.

         Matrix B gives, process-wise, the additional resources expected to be required
during the execution of these processes. For instance, process P2 will require 2 tape
drives, 1 printer and 1 plotter, in addition to the resources already held by it. It means
that process P2 requires in all 1+2=3 tape drives, 2+1=3 printers and 1+1=2 plotters. If
the vector of all the resources required by all the processes (the vector addition of
Matrix A and Matrix B) does not exceed the vector T for each of the resources, there will
be no contention and therefore no deadlock. However, if that is not so, a deadlock has to
be avoided.

       Having maintained these two matrices, the algorithm for the deadlock avoidance
works as follows:

         (i)     Each process declares the total required resources to the operating
                 system at the beginning. The operating system puts this figure in Matrix
                 B (resources required for completion) against each process. For a newly
                 created process, the row in Matrix A is all zeros to begin with, because
                 no resources are yet assigned to it. For instance, at the beginning of
                 process P2, the figures for the row P2 in Matrix A will be all 0's, and
                 those in Matrix B will be 3, 3 and 2 respectively.

         (ii)    When a process requests the operating system for a resource, the
                 operating system finds out whether the resource is free and whether it
                 can be allocated, by using the vector F. If it can be allocated, the
                 operating system does so, and updates Matrix A by adding 1 to the
                 appropriate slot. It simultaneously subtracts 1 from the corresponding
                 slot of Matrix B. For instance, starting from the beginning, if the
                 operating system allocates a tape drive to P2, the row for P2 in Matrix A
                 will become 1, 0 and 0. The row for P2 in Matrix B will correspondingly
                 become 2, 3 and 2. At any time, the vector sum of these two rows, i.e.
                 the addition of the corresponding numbers in the two rows, is always
                 constant and is equivalent to the total resources needed by P2, which in
                 this case is 3, 3 and 2.

         (iii)   However, before making the actual allocation, whenever a process
                 makes a request to the operating system for any resource, the operating
                 system goes through the Banker's algorithm to ensure that after the
                 imaginary allocation there need not be a deadlock, i.e. that after the
                 allocation the system will still be in a 'safe state'. The operating system
                 actually allocates the resource only after ensuring this. If it finds that
                 there can be a deadlock after the imaginary allocation at some point in
                 time, it postpones the decision to allocate that resource. It calls the state
                 of the system that would result after the possible allocation 'unsafe'.
                 Remember: the unsafe state is not actually a deadlock. It is a situation of
                 a potential deadlock.

The point is: how does the operating system conclude whether a state is safe or unsafe? It
uses an interesting method. It looks at vector F and each row of Matrix B. It compares
them on a vector-to-vector basis, i.e. within the vector it compares each digit separately,
to conclude whether all the resources that a process is going to need to complete are
available at that juncture or not. For instance, the figure shows F = 111. It means that at
that juncture, the system has 1 tape drive, 1 printer and 1 plotter free and allocable. (The
first row in Matrix B, for P0, is 100.) This means that if the operating system decides to
allocate all needed resources to P0, P0 can go to completion, because 111 > 100 on a
vector basis. Similarly, the row for P1 in Matrix B is 110. Therefore, if the operating
system decides to allocate resources to P1 instead of to P0, P1 can also complete. The
row for P2 is 211. Therefore, P2 cannot complete unless one more tape drive becomes
available, because 211 is greater than 111 on a vector basis.
        Vector comparison should not be confused with arithmetic comparison. For
instance, if F were 411 and a row in Matrix B were 322, it might appear that 411 > 322
and therefore that the process could go to completion. But that is not true. As 4 > 3, the
tape drives would be allocable; but as 1 < 2, the printer as well as the plotter would both
fall short.
        The operating system now does the following to ensure the safe state:

    (a) After a process requests a resource, the operating system allocates it on a
        'trial' basis.
    (b) After this trial allocation, it updates all the matrices and vectors, i.e. it arrives at
        the new values of F and Matrix B, as if the allocation were actually done.
        Obviously, this updating will have to be done by the operating system in a
        separate work area in the memory.



    (c) It then compares vector F with each row of Matrix B on a vector basis.
    (d) If F is smaller than each of the rows in Matrix B on a vector basis, i.e. even if all
        of F were made available to any of the processes in Matrix B, none would be
        guaranteed to complete, the operating system concludes that it is an 'unsafe state'.
        Again, this does not mean that a deadlock has resulted. However, it means that
        one can take place if the operating system actually goes ahead with the allocation.
    (e) If F is greater than or equal to any row for a process in Matrix B, the operating
        system proceeds as follows:

            •   It allocates all the needed resources for that process on a trial basis.
            •   It assumes that after this trial allocation, that process will eventually
                complete and, in fact, release all its resources on completion. These
                resources are then added to the free pool (F). It now recalculates all the
                matrices and F after this trial allocation and the imaginary completion of
                this process. It removes the row for the completed process from both the
                matrices.
            •   It repeats the procedure from step (c) above. If, in the process, all the rows
                in the matrices get eliminated, i.e. all the processes can go to completion,
                it concludes that it is a 'safe state', i.e. even after the allocation, a
                deadlock can be avoided. Otherwise, it concludes that it is an 'unsafe
                state'.
    (f) For each request for any resource by a process, the operating system goes
        through all these trial or imaginary allocations and updates, and if it finds that
        after the trial allocation the state of the system would be 'safe', it actually goes
        ahead and makes the allocation, after which it updates the various matrices and
        tables in a real sense. The operating system may need to maintain two sets of
        matrices for this purpose. Any time, before any allocation, it could copy the first
        set of matrices (the real one) into the other, carry out all trial allocations and
        updates in the other, and if the safe state results, update the former set with the
        allocations.

Banker’s Algorithm

The resource-allocation graph algorithm is not applicable to a resource-allocation system
with multiple instances of each resource type. The deadlock-avoidance algorithm that we
describe next is applicable to such a system, but is less efficient than the resource-
allocation graph scheme. This algorithm is commonly known as the banker's algorithm.
The name was chosen because the algorithm could be used in a banking system to ensure
that the bank never allocates its available cash in such a way that it can no longer satisfy
the needs of all its customers.

        When a new process enters the system, it must declare the maximum number of
instances of each resource type that it may need. This number may not exceed the total
number of resources in the system. When a user requests a set of resources, the system
must determine whether the allocation of these resources will leave the system in a safe
state. If it will, the resources are allocated; otherwise, the process must wait until some
other process releases enough resources.
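
As a sketch of the safety check at the heart of this procedure, fed with the example's data
(Matrix A as the allocation, Matrix B as the remaining need, F as the free vector); the
function name and list-of-lists layout are illustrative assumptions:

def is_safe(available, allocation, need):
    # Greedy safety test: repeatedly pick any process whose remaining
    # need fits in the free pool, pretend it completes, reclaim its
    # allocation, and repeat until everyone finishes or no one can.
    work = list(available)
    finished = [False] * len(allocation)
    while True:
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                break
        else:
            return all(finished)

allocation = [[2, 0, 0], [0, 1, 0], [1, 2, 1], [1, 0, 1]]   # Matrix A
need       = [[1, 0, 0], [1, 1, 0], [2, 1, 1], [1, 1, 1]]   # Matrix B
print(is_safe([1, 1, 1], allocation, need))                 # True: a safe sequence exists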


Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems

Contenu connexe

Tendances

OPERATING SYSTEM - SHORT NOTES
OPERATING SYSTEM - SHORT NOTESOPERATING SYSTEM - SHORT NOTES
OPERATING SYSTEM - SHORT NOTESsuthi
 
introduction to operating system
introduction to operating systemintroduction to operating system
introduction to operating systemHAMZA AHMED
 
Introduction to operating system
Introduction to operating systemIntroduction to operating system
Introduction to operating systemAviroop Mandal
 
introduction To Operating System
introduction To Operating Systemintroduction To Operating System
introduction To Operating SystemLuka M G
 
Operating system 02 os as an extended machine
Operating system 02 os as an extended machineOperating system 02 os as an extended machine
Operating system 02 os as an extended machineVaibhav Khanna
 
Operating systems
Operating systemsOperating systems
Operating systemsanishgoel
 
OS - Ch1
OS - Ch1OS - Ch1
OS - Ch1sphs
 
lecture 1 (Introduction to Operating System.)
lecture 1 (Introduction to Operating System.)lecture 1 (Introduction to Operating System.)
lecture 1 (Introduction to Operating System.)WajeehaBaig
 
Operating system
Operating systemOperating system
Operating systemyogitamore3
 
Operating system 1
Operating system 1Operating system 1
Operating system 1edudivya
 
Operating system notes
Operating system notesOperating system notes
Operating system notesSANTOSH RATH
 
operating system
operating systemoperating system
operating systemKadianAman
 
System Z operating system
System Z operating systemSystem Z operating system
System Z operating systemArpana shree
 
chapter 1 introduction to operating system
chapter 1 introduction to operating systemchapter 1 introduction to operating system
chapter 1 introduction to operating systemAisyah Rafiuddin
 

Tendances (20)

OPERATING SYSTEM
OPERATING SYSTEMOPERATING SYSTEM
OPERATING SYSTEM
 
OPERATING SYSTEM - SHORT NOTES
OPERATING SYSTEM - SHORT NOTESOPERATING SYSTEM - SHORT NOTES
OPERATING SYSTEM - SHORT NOTES
 
introduction to operating system
introduction to operating systemintroduction to operating system
introduction to operating system
 
Introduction to operating system
Introduction to operating systemIntroduction to operating system
Introduction to operating system
 
introduction To Operating System
introduction To Operating Systemintroduction To Operating System
introduction To Operating System
 
Operating system 02 os as an extended machine
Operating system 02 os as an extended machineOperating system 02 os as an extended machine
Operating system 02 os as an extended machine
 
Operating systems
Operating systemsOperating systems
Operating systems
 
OS - Ch1
OS - Ch1OS - Ch1
OS - Ch1
 
lecture 1 (Introduction to Operating System.)
lecture 1 (Introduction to Operating System.)lecture 1 (Introduction to Operating System.)
lecture 1 (Introduction to Operating System.)
 
Presentation on operating system
 Presentation on operating system Presentation on operating system
Presentation on operating system
 
Operating system
Operating systemOperating system
Operating system
 
Types Of Operating Systems
Types Of Operating SystemsTypes Of Operating Systems
Types Of Operating Systems
 
Operating system 1
Operating system 1Operating system 1
Operating system 1
 
Operating system notes
Operating system notesOperating system notes
Operating system notes
 
operating system
operating systemoperating system
operating system
 
Operating System concepts
Operating System conceptsOperating System concepts
Operating System concepts
 
operating system
operating systemoperating system
operating system
 
Introduction to OS.
Introduction to OS.Introduction to OS.
Introduction to OS.
 
System Z operating system
System Z operating systemSystem Z operating system
System Z operating system
 
chapter 1 introduction to operating system
chapter 1 introduction to operating systemchapter 1 introduction to operating system
chapter 1 introduction to operating system
 

En vedette

30326851 -operating-system-unit-1-ppt
30326851 -operating-system-unit-1-ppt30326851 -operating-system-unit-1-ppt
30326851 -operating-system-unit-1-pptraj732723
 
Operating system.ppt (1)
Operating system.ppt (1)Operating system.ppt (1)
Operating system.ppt (1)Vaibhav Bajaj
 
Unit 1 architecture of distributed systems
Unit 1 architecture of distributed systemsUnit 1 architecture of distributed systems
Unit 1 architecture of distributed systemskaran2190
 

En vedette (7)

23 deadlock
23 deadlock23 deadlock
23 deadlock
 
Unit 1 introduction to operating system
Unit 1 introduction to operating systemUnit 1 introduction to operating system
Unit 1 introduction to operating system
 
Fcfs Cpu Scheduling With Gantt Chart
Fcfs Cpu Scheduling With Gantt ChartFcfs Cpu Scheduling With Gantt Chart
Fcfs Cpu Scheduling With Gantt Chart
 
30326851 -operating-system-unit-1-ppt
30326851 -operating-system-unit-1-ppt30326851 -operating-system-unit-1-ppt
30326851 -operating-system-unit-1-ppt
 
Operating system.ppt (1)
Operating system.ppt (1)Operating system.ppt (1)
Operating system.ppt (1)
 
Unit 1 architecture of distributed systems
Unit 1 architecture of distributed systemsUnit 1 architecture of distributed systems
Unit 1 architecture of distributed systems
 
operating system
operating systemoperating system
operating system
 

Similaire à Operating Systems

Similaire à Operating Systems (20)

Operating system || Chapter 1: Introduction
Operating system || Chapter 1: IntroductionOperating system || Chapter 1: Introduction
Operating system || Chapter 1: Introduction
 
Ch1 OS
Ch1 OSCh1 OS
Ch1 OS
 
OS_Ch1
OS_Ch1OS_Ch1
OS_Ch1
 
OSCh1
OSCh1OSCh1
OSCh1
 
OS UNIT1.pptx
OS UNIT1.pptxOS UNIT1.pptx
OS UNIT1.pptx
 
Types of os
Types of osTypes of os
Types of os
 
ITM(2).ppt
ITM(2).pptITM(2).ppt
ITM(2).ppt
 
Session1 intro to_os
Session1 intro to_osSession1 intro to_os
Session1 intro to_os
 
Introduction to OS 1.ppt
Introduction to OS 1.pptIntroduction to OS 1.ppt
Introduction to OS 1.ppt
 
Operating Systems
Operating Systems Operating Systems
Operating Systems
 
Unit 1 q&amp;a
Unit  1 q&amp;aUnit  1 q&amp;a
Unit 1 q&amp;a
 
ch1(Introduction).ppt
ch1(Introduction).pptch1(Introduction).ppt
ch1(Introduction).ppt
 
Os notes
Os notesOs notes
Os notes
 
Operating System Introduction.pptx
Operating System Introduction.pptxOperating System Introduction.pptx
Operating System Introduction.pptx
 
Distributed system notes unit I
Distributed system notes unit IDistributed system notes unit I
Distributed system notes unit I
 
chapter 1 intoduction to operating system
chapter 1 intoduction to operating systemchapter 1 intoduction to operating system
chapter 1 intoduction to operating system
 
introduction to Operating system for computer science Program
introduction to Operating system for computer science Programintroduction to Operating system for computer science Program
introduction to Operating system for computer science Program
 
Os unit 1
Os unit 1Os unit 1
Os unit 1
 
Fundamental Operating System Concepts.pptx
Fundamental Operating System Concepts.pptxFundamental Operating System Concepts.pptx
Fundamental Operating System Concepts.pptx
 
MYSQL DATABASE Operating System Part2 (1).pptx
MYSQL DATABASE Operating System Part2 (1).pptxMYSQL DATABASE Operating System Part2 (1).pptx
MYSQL DATABASE Operating System Part2 (1).pptx
 

Operating Systems

  • 1. UNIT – I INTRODUCTION Operating system is a program that manages the computer hardware. It also provides a basis for application programs and acts as an intermediary between a user of a computer and the computer hardware. An operating system is an important part of almost every computer system. A computer system can be divided roughly into four components: • The hardware – the central processing unit (CPU), the memory, and the input/output (I/O) devices provides the basic computing resources. • The operating system - controls and coordinates the use of the hardware among the various application programs for the various users. • Application programs – such as word processors, spreadsheets, compilers, and web browsers define the way in which these resources are used to solve the computing problems of the users. • Users. MAINFRAME SYSTEMS Mainframe computer systems were the first computers used to tackle many commercial and scientific applications. 1.Batch Systems The user did not interact directly with the computer systems. They prepared a job which consisted of the program, the data, and some control information about the nature of the job and submitted it to the computer operator. After some days the output appeared. The output consisted of the result of the program. The major task was to transfer control automatically from one job to next. To speed up processing, operators batched together jobs with similar needs and ran them through the computer as a group. Thus the programmers leave their programs with the operator. The operator would sort programs into batches with similar requirements and, as the computer became available, would run each batch. The output from each job would be sent back to the appropriate programmer. 2.Multiprogrammed Systems Multiprogramming increases CPU utilization by organizing jobs so that the CPU always has one to execute. The concept of multiprogramming is that the operating system keeps several jobs in memory simultaneously. This set of jobs are kept in the job pool. The OS picks up and execute one of the jobs in the memory. • In non-multiprogramming system, the CPU would sit idle. • In multiprogramming system, the OS simply switches to, and executes, another job. When that job needs to wait, the CPU is switched to another job, and so on. The first finishes waiting and gets the CPU back. As long a at least one job needs to execute, the CPU is never idle. 1
  • 2. Multiprogramming operating systems are used for decisions for the users. All the jobs that enter the system are kept in the job pool. This pool consists of all processes residing on disk awaiting allocation of main memory. If several jobs are ready to be brought into memory, and if there is not enough room for all of them, then the system must choose among them. Making this decision is job scheduling. If several jobs are ready to run at the same time, the system must choose among them, this decision is CPU scheduling. 3.Time – Sharing Systems Time sharing or multitasking is a logical extension of multiprogramming. The CPU executes multiple jobs by switching among them, but the switches occur so frequently that the users can interact with each program while it is running. An interactive (or hands-on) computer system provides direct communication between the user and the system. The user gives instructions to the operating system or to a program directly, using a keyboard or a mouse, and waits for immediate results. A time-shared operating system allows many users to share the computer simultaneously. Each action or command in a time –shared system tends to be short, only a little CPU time is needed for each user. As the system switches rapidly from one user to the next, each user is given the impression that the entire computer system is dedicated to her use, even though it is being shared among many users. A time-shared operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of a time-shared computer. Each user has atleast one separate program in memory. A program loaded into memory and executing is commonly referred to as a process. When a process executes, it typically executes for only a short time before it either finishes or needs to perform I/O. DESKTOP SYSTEMS Personal computers PCs appeared in the 1970s. During their first decade, the CPUs in PCs lacked the features needed to protect an operating system from user programs. PC operating systems therefore were neither multiuser nor multitasking. The MS-DOS operating system from Microsoft has been produced multiple flavors of Microsoft Windows, and IBM has upgraded MS-DOS to OS/2 multitasking system. UNIX is used for its scalability, performance, and features but retains the same rich GUI. Linux, a UNIX operating system available for PCs, has also become popular recently. OS has developed for mainframes. Micro computers were able to adopt some of the technology developed for larger OS. On the other hand, the hardware costs for micro computers are sufficiently low that individuals have sole use of the computer, and the CPU utilization is no longer a prime concern. Earlier file protection was not necessary on a personal machine. But these computers are now often tied into other computers over local-area networks or other Internet connections. When other computers and other users can access the files on a PC, file protection becomes essential feature of the operating system. The lack of such protection has made it easy for malicious programs to destroy data on the systems such as MS-DOS and the Macintosh operating system. These programs are self-replicating, and may spread rapidly via worm or virus mechanism. 2
  • 3. MULTIPROCESSOR SYSTEMS Multiprocessor systems have more than one processor or CPU. Multiprocessor systems also known as parallel systems. Such systems have more than one processor in close communication, sharing the computer bus, the clock, and sometimes memory and peripheral devices. Advantages of the multiprocessor systems are: 1. Increased throughput: By increasing the number of processors, get more work done in less time. When multiple processors cooperate on a task, a certain amount of overhead is incurred in keeping all the parts working correctly. 2. Economy of scale: Multiprocessor systems can save more money than multiple singe-processor systems, because they can share peripherals, mass storage, and power supplies. If several programs operate on the same set of data, it is cheaper to store those data on one disk and to have all the processors share them, than to have many computers with local disks and many copies of the data. 3. Increased reliability: If functions can be distributed properly among several processors, then the failure of one processor will not halt the system, only slow it down. The most common multiple-processor systems now use symmetric multiprocessing (SMP), in which each processor runs on identical copy of the OS, and these copies communicate with one another as needed. Some systems use asymmetric multiprocessing, in which each processor is assigned a specific task. A master processor controls the system, the other processors either look to the master for instruction or have predefined tasks. This scheme defines a master-slave relationship. The master processor schedules and allocates work to the slave processors. Figure: Symmetric multiprocessing architecture CPU CPU CPU memory SMP means that all processors are peers, no master-slave relationship exists between processors. Each processor concurrently runs a copy of the operating system. The benefit of this model is that many processes can run simultaneously - N processes can run if there a N CPUs- without causing a significant performance. Since the CPU are separate, one may be sitting idle while another is overloaded, resulting in inefficiencies. A multiprocessor system will allow processors and resources-such s memory to be shared among the various processors. 3
DISTRIBUTED SYSTEMS

A network, in the simplest terms, is a communication path between two or more systems. Distributed systems depend on networking for their functionality. Distributed systems are able to share computational tasks, and provide a rich set of features to users.
Networks are categorized based on the distance between their nodes:
• Local area network (LAN): exists within a room, a floor, or a building.
• Wide area network (WAN): exists between buildings, cities, or countries. A global company may have a WAN to connect its offices worldwide.
• Metropolitan area network (MAN): could link buildings within a city.

1. Client-Server Systems:

Figure: General structure of a client-server system (several clients connected to a server)

Terminals connected to centralized systems are now being supplanted by PCs. Centralized systems today act as server systems to satisfy requests generated by client systems. Server systems can be broadly categorized as compute servers and file servers.
• Compute-server systems: provide an interface to which clients can send requests to perform an action, in response to which they execute the action and send back results to the clients.
• File-server systems: provide a file-system interface where clients can create, update, read, and delete files.

2. Peer-to-Peer Systems

The growth of computer networks, especially the Internet and WWW, has had a profound influence on the recent development of operating systems. When PCs were introduced in the 1970s, they were designed as stand-alone systems, but with the widespread public use of the Internet in the 1990s for electronic mail and ftp, PCs became connected to computer networks. Modern PCs and workstations are capable of running a web browser for accessing hypertext documents on the web. PCs now include software that enables a computer to access the Internet via a local-area network or telephone connection.
These systems consist of a collection of processors that do not share memory or a clock. Instead, each processor has its own local memory. The processors communicate with one another through various communication lines, such as high-speed buses or telephone lines.
Operating systems have taken up the concepts of networks and distributed systems for network connectivity. A network operating system is an OS that provides features such as file sharing across the network, and that includes a communication scheme that allows different processes on different computers to exchange messages.
CLUSTERED SYSTEMS

Clustered systems gather together multiple CPUs to accomplish computational work. Clustered systems differ from parallel systems in that they are composed of two or more individual systems coupled together. Clustering is usually performed to provide high availability. A layer of cluster software runs on the cluster nodes. Each node can monitor one or more of the others. If the monitored machine fails, the monitoring machine can take ownership of its storage and restart the applications that were running on the failed machine. The failed machine can remain down, but the users and clients of the applications see only a brief interruption of service.
In asymmetric clustering, one machine is in hot-standby mode while the other is running the applications. The hot-standby machine does nothing but monitor the active server. If that server fails, the hot-standby host becomes the active server. In symmetric clustering, two or more nodes are running applications, and they are monitoring each other. This mode is more efficient, as it uses all of the available hardware.
Parallel clusters allow multiple hosts to access the same data on the shared storage. Because most operating systems lack support for simultaneous data access by multiple hosts, parallel clusters are usually accomplished by special versions of software. For example, Oracle Parallel Server is a version of Oracle's database that has been designed to run on parallel clusters.

REAL-TIME SYSTEMS

A real-time system is used when rigid time requirements have been placed on the operation of a processor or the flow of data; thus, it is often used as a control device in a dedicated application. Sensors bring data to the computer. The computer must analyze the data and possibly adjust controls to modify the sensor inputs. Systems that control scientific experiments, medical imaging systems, industrial control systems, some automobile-engine fuel-injection systems, home appliance controllers, and weapon systems are real-time systems.
A real-time system has well-defined, fixed time constraints. Processing must be done within the defined constraints, or the system will fail. A real-time system functions correctly only if it returns the correct result within its time constraints. Real-time systems are of two types:
• A hard real-time system guarantees that critical tasks be completed on time. This goal requires that all delays in the system be bounded, from the retrieval of stored data to the time that it takes the operating system to finish any request made of it.
• A soft real-time system, in which a critical real-time task gets priority over other tasks, and retains that priority until it completes. As in hard real-time systems, the OS kernel delays need to be bounded: a real-time task cannot be kept waiting indefinitely for the kernel to run it. Soft real-time systems are useful in several areas, including multimedia, virtual reality, and advanced scientific projects such as undersea exploration and planetary rovers. These systems need advanced operating-system features.
HANDHELD SYSTEMS

Handheld systems include personal digital assistants (PDAs), such as Palm Pilots, and cellular telephones with connectivity to a network such as the Internet. Handheld systems are of limited size. For example, a PDA is typically about 5 inches in height and 3 inches in width, and weighs less than one-half pound. Due to their limited size, handheld systems have a small amount of memory, include slow processors, and feature small display screens. They have between 512 KB and 8 MB of memory.
The next issue for handheld devices is the speed of the processor used in the device. Processors of handheld devices often run at a fraction of the speed of a processor in a PC. Faster processors require more power; to include a faster processor in a handheld device would require a larger battery that would have to be replaced more frequently.
The last issue for designers of handheld devices is the small display screens typically available. Whereas a monitor for a home computer may measure up to 21 inches, the display for a handheld device is often no more than 3 inches square. Tasks such as reading e-mail or browsing web pages must be condensed onto the smaller displays. One approach for displaying the content in web pages is web clipping, where only a small subset of a web page is delivered and displayed on the handheld device.
Some handheld devices may use wireless technology, such as Bluetooth, allowing remote access to e-mail and web browsing. Cellular telephones with connectivity to the Internet fall into this category. To get data onto devices without wireless access, the data is first downloaded to a PC or workstation and then downloaded to the PDA. Some PDAs allow data to be directly copied from one device to another using an infrared link.

OPERATING SYSTEM STRUCTURES

An operating system may be viewed from several points:
1. By examining the services that it provides.
2. By looking at the interface that it makes available to users and programmers.
3. By disassembling the system into its components and their interconnections.

OPERATING SYSTEM COMPONENTS:
The various system components are:
1. Process management
2. Main-memory management
3. File management
4. I/O-system management
5. Secondary-storage management
6. Networking
7. Protection system
8. Command-interpreter system
1. Process Management

A process can be thought of as a program in execution. A program does nothing unless its instructions are executed by a CPU. For example, a compiler is a process, a word-processing program run by an individual user on a PC is a process, and a system task, such as sending output to a printer, is also a process.
A process needs certain resources, including CPU time, memory, files, and I/O devices, to accomplish its task. These resources are either given to the process when it is created or allocated to it while it is running.
A program is a passive entity, such as the contents of a file stored on disk, whereas a process is an active entity, with a program counter specifying the next instruction to be executed. The execution of a process must be sequential: the CPU executes one instruction of the process after another, until the process completes.
A process is the unit of work in a system. Such a system consists of a collection of processes, some of which are operating-system processes and the rest of which are user processes.
The OS is responsible for the following activities in process management:
• Creating and deleting both user and system processes
• Suspending and resuming processes
• Providing mechanisms for process synchronization
• Providing mechanisms for process communication
• Providing mechanisms for deadlock handling

2. Main-Memory Management

Main memory is a repository of quickly accessible data shared by the CPU and I/O devices. The central processor reads instructions from main memory during the instruction-fetch cycle, and it reads and writes data in main memory during the data-fetch cycle. Main memory is the only large storage device that the CPU is able to address and access directly.
For a program to be executed, it must be mapped to absolute addresses and loaded into memory. As the program executes, it accesses program instructions and data from memory by generating these absolute addresses. To improve the CPU utilization and the speed of the computer's response to its users, we must keep several programs in memory.
The OS is responsible for the following activities in memory management:
• Keeping track of which parts of memory are currently being used and by whom
• Deciding which processes are to be loaded into memory when memory space becomes available
• Allocating and deallocating memory space as needed
3. File Management

A file is a collection of related information defined by its creator. Files represent programs and data. Data files can be numeric, alphabetic, or alphanumeric. Files may be of free form, or may be formatted rigidly. A file consists of a sequence of bits, bytes, lines, or records whose meanings are defined by their creators. Files are organized into directories to ease their use. Files can be opened in different access modes: read, write, and append.
The OS is responsible for the following activities in file management:
• Creating and deleting files
• Creating and deleting directories
• Supporting primitives for manipulating files and directories
• Mapping files onto secondary storage
• Backing up files on stable, nonvolatile storage media

4. I/O System Management

One of the purposes of an operating system is to hide the peculiarities and characteristics of specific hardware devices from the user. The I/O subsystem consists of:
• A memory-management component that includes buffering, caching, and spooling
• A general device-driver interface
• Drivers for specific hardware devices
Only the device driver knows the characteristics of the specific device to which it is assigned.

5. Secondary-Storage Management

The main purpose of a computer system is to execute programs. These programs, with the data they access, must be in main memory, or primary memory, during execution. Because main memory is too small to hold all the data and programs, and because the data it holds are lost when power is lost, the computer must provide secondary storage to back up main memory. Programs such as compilers, assemblers, routines, editors, and formatters are stored on disk until loaded into main memory.
The OS is responsible for the following activities in disk management:
• Free-space management
• Storage allocation
• Disk scheduling
6. Networking

A distributed system is a collection of processors that do not share memory, peripheral devices, or a clock. Each processor has its own local memory and clock, and the processors communicate with one another through various communication lines, such as high-speed buses or networks.
The processors in a distributed system vary in size and function. They include small microprocessors, workstations, minicomputers, and large, general-purpose computer systems. A distributed system collects physically separate, possibly heterogeneous systems into a single coherent system, providing the user with access to the various resources that the system maintains. The shared resources allow computation speedup, increased functionality, increased data availability, and reliability.
Different protocols are used in network systems, such as the file-transfer protocol (FTP), the network file system (NFS), and the hypertext transfer protocol (HTTP), which is used in communication between a web server and a web browser. A web browser then just needs to send a request for information to a remote machine's web server, and the information is returned.

7. Protection System

If a computer system has multiple users and allows the concurrent execution of multiple processes, then the various processes must be protected from one another's activities. Protection is any mechanism for controlling the access of programs, processes, or users to the resources defined by a computer system. This mechanism must provide means for specification of the controls to be imposed and means for enforcement.
Protection can improve reliability by detecting latent errors at the interfaces between component subsystems. Early detection of interface errors can often prevent contamination of a healthy subsystem by another subsystem. An unprotected system can be misused by unauthorized users.

8. Command-Interpreter System

The command interpreter is an interface between the user and the operating system. Many commands are given to the operating system by control statements. When a new job is started in a batch system, or when a user logs on to a time-shared system, a program that reads and interprets control statements is executed automatically. This program is sometimes called the control-card interpreter or command interpreter, and is often known as the shell. Its function is to get the next command statement and execute it.
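To make the shell's read-and-execute cycle concrete, here is a minimal sketch in C, assuming a POSIX environment. It handles only single-word commands (no arguments, pipes, or redirection), so it illustrates the idea rather than any real shell.

/* A minimal command-interpreter loop: read a command, fork a child,
   and exec the requested program while the shell waits for it. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    char line[256];
    for (;;) {
        printf("> ");                          /* prompt */
        if (fgets(line, sizeof line, stdin) == NULL)
            break;                             /* end of input: exit */
        line[strcspn(line, "\n")] = '\0';      /* strip trailing newline */
        if (strcmp(line, "exit") == 0)
            break;
        if (line[0] == '\0')
            continue;                          /* empty command: reprompt */
        pid_t pid = fork();                    /* create a new process */
        if (pid == 0) {
            execlp(line, line, (char *)NULL);  /* child: run the command */
            perror("exec failed");             /* reached only on error */
            exit(1);
        } else if (pid > 0) {
            wait(NULL);                        /* parent: wait for the command */
        } else {
            perror("fork failed");
        }
    }
    return 0;
}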
OPERATING SYSTEM SERVICES

An operating system provides an environment for the execution of programs. It provides certain services to programs and to the users of those programs. The various services offered by an operating system are:
• Program execution: The system must be able to load a program into memory and to run that program. The program must be able to end its execution, either normally or abnormally (indicating an error).
• I/O operations: A running program may require I/O, which may involve a file or an I/O device. For efficiency and protection, users usually cannot control I/O devices directly, so the operating system must provide a means to perform I/O.
• File-system manipulation: Programs need to read and write files. Programs also need to create and delete files by name.
• Communications: One process may need to communicate with another process. Such communication may occur in two ways:
o Communication takes place between processes that are executing on the same computer.
o Communication takes place between processes that are executing on different computer systems that are tied together by a computer network.
Communications may be implemented via shared memory, or by the technique of message passing, in which packets of information are moved between processes by the operating system.
• Error detection: The operating system constantly needs to be aware of possible errors. Errors may occur in:
o the CPU and memory hardware (a memory error or a power failure)
o I/O devices (such as a parity error on tape, a connection failure on a network, or lack of paper in the printer)
o the user program (an arithmetic overflow, an attempt to access an illegal memory location, or too great a use of CPU time)
• Resource allocation: When multiple users are logged on to the system and multiple jobs are running at the same time, resources must be allocated to each of them. Many different types of resources are managed by the operating system. Some (such as CPU cycles, main memory, and file storage) may have special allocation code, and others (such as I/O devices) may have general request and release code.
• Accounting: The OS must keep track of which users use how many and which kinds of computer resources.
• Protection: The owners of information stored in a multiuser computer system may want to control use of that information. When several processes are executed concurrently, one process should not interfere with another process. Security of the system from outsiders is also important. Such security starts with each user having to authenticate himself to the system, usually by means of a password, to be allowed access to the resources.
SYSTEM CALLS

System calls provide the interface between a process and the operating system. These calls are usually available as assembly-language instructions, and they are listed in manuals for programmers. Several systems allow system calls to be made directly from a high-level language; languages such as C, C++, and Perl have replaced assembly language for system programming. For example, UNIX system calls may be invoked directly from a C or C++ program.
System calls can be grouped into five major categories:

1) Process Control:
A running program needs to be able to halt its execution either normally or abnormally; a system call may be made to terminate the currently running program abnormally. In either case, normal or abnormal, the operating system must transfer control to the command interpreter, which then reads the next command.
A process or job executing one program may want to load and execute another program. An interesting question is where to return control when the loaded program terminates: is the existing program lost, saved, or allowed to continue execution concurrently with the new program? If control returns to the existing program when the new program terminates, we must save the memory image of the existing program. If both programs continue concurrently, we have created a new job or process to be multiprogrammed. The system calls for this purpose are create process or submit job.
If we create a new job or process, we may need to wait for it to finish its execution. We may want to wait for a certain amount of time (wait time), or we may want to wait for a specific event to occur (wait event). The jobs or processes should then signal when that event has occurred (signal event).
If we create a new job or process, we should be able to control its execution. This control requires the ability to determine and reset the attributes of a job or process, including the job's priority and its maximum allowable execution time (get process attributes and set process attributes).
Another set of system calls is helpful in debugging a program. Many systems provide system calls to dump memory; this provision is useful for debugging. A program trace, which lists each instruction as it is executed, is provided by fewer systems. A trap is usually caught by a debugger, which is a system program designed to aid the programmer in finding and correcting bugs.

2) File Management
Files must be created and deleted. Either system call requires the name of the file and perhaps some of its attributes. Once the file is created, it must be opened and used. We
may also read, write, and reposition the file (rewind or skip to the end of the file, for example). Finally, the file needs to be closed. File attributes include the file name, a file type, protection codes, accounting information, and so on. The system calls for this purpose are get file attributes and set file attributes.

3) Device Management
A running program may need additional resources to proceed. The resources may be more memory, tape drives, or access to files. If the resources are available, they can be granted, and control can be returned to the user program; otherwise, the program will have to wait until sufficient resources are available. Once the device has been requested (and allocated to us), we can read, write, and reposition the device.

4) Information Maintenance
Many system calls exist simply for the purpose of transferring information between the user program and the operating system. For example, most systems have a system call to return the current time and date. Other system calls may return information about the system, such as the number of current users, the version number of the operating system, and the amount of free memory or disk space. There are also system calls to access process information: get process attributes and set process attributes.

5) Communication
There are two common models of communication. In the message-passing model, information is exchanged through an interprocess-communication facility provided by the operating system. Before communication can take place, a connection must be opened. The name of the other communicator must be known, be it another process on the same CPU, or a process on another computer connected by a network. Each computer in a network has a host name, such as an IP name, and each process has a process name. The get hostid and get processid system calls translate these names into identifiers, which are then passed to the specific open connection and close connection system calls. The recipient process usually must give its permission for communication to take place, with an accept connection call. Most processes that will be receiving connections are special-purpose daemons: they execute a wait for connection call and are awakened when a connection is made. The source of the communication, known as the client, and the receiving daemon, known as the server, then exchange messages by read message and write message system calls. The close connection call terminates the communication.
In the shared-memory model, processes exchange information by reading and writing data in shared areas of memory. The form and location of the data are determined by these processes and are not under the operating system's control.
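As a concrete illustration of the file-management calls described above, here is a hedged sketch in C of copying one file to another with the UNIX open, read, write, and close system calls. The buffer size and permission bits are arbitrary choices for the example, not requirements.

/* Copy a file using the UNIX file-management system calls:
   open, read, write, and close. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[]) {
    if (argc != 3) {
        fprintf(stderr, "usage: %s <source> <destination>\n", argv[0]);
        return 1;
    }
    int in = open(argv[1], O_RDONLY);               /* open existing file */
    if (in < 0) { perror("open source"); return 1; }

    /* create (or truncate) the destination with rw-r--r-- permissions */
    int out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (out < 0) { perror("open destination"); return 1; }

    char buf[4096];
    ssize_t n;
    while ((n = read(in, buf, sizeof buf)) > 0)     /* read a chunk */
        if (write(out, buf, n) != n) {              /* write it back out */
            perror("write");
            return 1;
        }

    close(in);                                      /* release both files */
    close(out);
    return 0;
}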
SYSTEM PROGRAMS

System programs provide a convenient environment for program development and execution. They can be divided into these categories:
• File management: These programs create, delete, copy, rename, print, dump, list, and generally manipulate files and directories.
• Status information: Some programs simply ask the system for the date, the time, the amount of available memory or disk space, the number of users, or similar status information. That information is then formatted and printed to the terminal or other output device or file.
• File modification: Several text editors may be available to create and modify the content of files stored on disk or tape.
• Programming-language support: Compilers, assemblers, and interpreters for common programming languages such as C, C++, Java, and Visual Basic are often provided to the user with the operating system. Some of these programs are now priced and provided separately.
• Program loading and execution: Once a program is assembled or compiled, it must be loaded into memory to be executed. Debugging systems for either higher-level languages or machine language are also needed.
• Communications: These programs provide the mechanism for creating virtual connections among processes, users, and different computer systems. They allow users to send messages to one another's screens, to browse web pages, to send e-mail messages, to log in remotely, or to transfer data files from one machine to another.
SYSTEM DESIGN AND IMPLEMENTATION

1. Design Goals
The first problem in designing a system is to define the goals and specifications of the system. The design of the system will be affected by the choice of hardware and the type of system: batch, time-shared, single-user, multiuser, distributed, real-time, or general purpose. Requirements can be divided into two basic groups:
• User goals
• System goals
Users desire certain properties in a system: the system should be convenient and easy to use, easy to learn, reliable, safe, and fast. Similarly, a set of requirements can be defined by the people who must design, create, implement, and maintain the system: it should be flexible, reliable, error-free, and efficient.

2. Mechanisms and Policies
One important principle is the separation of policy from mechanism. Mechanisms determine how to do something; policies determine what will be done. For example, the timer construct is a mechanism for ensuring CPU protection, but deciding how long the timer is to be set for a particular user is a policy decision.
Policies are likely to change across places or over time. In the worst case, each change in policy would require a change in the underlying mechanism. A general, policy-independent mechanism is more desirable: a change in policy would then require redefinition of only certain parameters of the system. For instance, if, on one computer system, a policy decision is made that I/O-intensive programs should have priority over CPU-intensive ones, then the opposite policy could be instituted easily on some other computer system if the mechanism were properly separated from policy.

3. Implementation
Once an operating system is designed, it must be implemented. Traditionally, operating systems were written in assembly language. Now they are often written in higher-level languages such as C and C++. Various operating systems not written in assembly language are:
• The Master Control Program (MCP), which was written in ALGOL.
• MULTICS, developed at MIT, which was written in PL/1.
• The Primos operating system for Prime computers, which is written in Fortran.
• The UNIX operating system, OS/2, and Windows NT, which are written in C.
The advantages of writing an OS in a higher-level language are that the code can be written faster, is more compact, and is easier to understand and debug. The OS is also far easier to port (to move to some other hardware) if it is written in a high-level language. The disadvantages of writing an OS in a high-level language are the reduced speed and increased storage requirements.
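Returning to the mechanism/policy principle in Section 2 above, the separation can be made concrete in code. In this illustrative sketch (every name here is invented for the example), the dispatch loop is the mechanism, and the choice of which process runs next is a policy passed in as a function pointer; swapping policies requires no change to the mechanism.

/* Mechanism vs. policy: the run() loop (mechanism) is fixed,
   while the selection rule (policy) is pluggable. */
#include <stdio.h>

struct proc { const char *name; int priority; int remaining; };

/* a policy returns the index of the next process to run, or -1 if none */
typedef int (*policy_fn)(struct proc *p, int n);

static int pick_first_ready(struct proc *p, int n) {      /* FCFS-like */
    for (int i = 0; i < n; i++)
        if (p[i].remaining > 0) return i;
    return -1;
}

static int pick_highest_priority(struct proc *p, int n) { /* priority-based */
    int best = -1;
    for (int i = 0; i < n; i++)
        if (p[i].remaining > 0 &&
            (best < 0 || p[i].priority < p[best].priority))
            best = i;
    return best;
}

/* mechanism: repeatedly dispatch whatever the policy selects */
static void run(struct proc *p, int n, policy_fn policy) {
    int i;
    while ((i = policy(p, n)) >= 0) {
        printf("dispatch %s\n", p[i].name);
        p[i].remaining--;               /* simulate one quantum of work */
    }
}

int main(void) {
    struct proc a[] = {{"P1", 3, 2}, {"P2", 1, 1}, {"P3", 2, 2}};
    struct proc b[] = {{"P1", 3, 2}, {"P2", 1, 1}, {"P3", 2, 2}};
    printf("FCFS-like policy:\n");
    run(a, 3, pick_first_ready);
    printf("priority policy:\n");       /* same mechanism, new policy */
    run(b, 3, pick_highest_priority);
    return 0;
}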
Operating systems are large, and only a small amount of the code is critical to high performance; the memory manager and the CPU scheduler are probably the most critical routines.

UNIT II
PROCESS MANAGEMENT

A process is a program in execution. A batch system executes jobs, whereas a time-shared system has user programs, or tasks. The terms job and process are used almost interchangeably. A process is more than a program code, which is sometimes known as the text section. It also includes the current activity, represented by the value of the program counter and the contents of the processor's registers. A program is a passive entity, such as the contents of a file stored on disk. A process is an active entity, with a program counter specifying the next instruction to execute and a set of associated resources.

Process State
As a process executes, it changes state. The state of a process is defined in part by the current activity of that process. Each process may be in one of the following states:
• New: The process is being created.
• Running: Instructions are being executed.
• Waiting: The process is waiting for some event to occur (such as an I/O completion or reception of a signal).
• Ready: The process is waiting to be assigned to a processor.
• Terminated: The process has finished execution.

Figure: Diagram of process state (transitions among new, ready, running, waiting, and terminated)
PROCESS CONTROL BLOCK

Each process is represented in the operating system by a process control block (PCB), also called a task control block. It contains many pieces of information associated with a specific process, including these:

Figure: Process control block (pointer, process state, process number, program counter, registers, memory limits, list of open files, ...)

• Process state: The state may be new, ready, running, waiting, halted, and so on.
• Program counter: The counter indicates the address of the next instruction to be executed.
• CPU registers: The registers vary in number and type, depending on the computer. They include accumulators, index registers, stack pointers, and general-purpose registers.
• CPU-scheduling information: This information includes a process priority, pointers to scheduling queues, and any other scheduling parameters.
• Memory-management information: This information includes such information as the value of the base and limit registers, the page tables, or the segment tables, depending on the memory system used by the operating system.
• Accounting information: This information includes the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on.
• I/O status information: This information includes the list of I/O devices allocated to this process, a list of open files, and so on.
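A PCB can be pictured as a record with one field per category above. The following C sketch is simplified and hypothetical: real PCBs are architecture- and OS-specific, and every field name here is invented to mirror the list just given.

/* A simplified, illustrative process control block. */
#include <stdint.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

#define MAX_OPEN_FILES 16

struct pcb {
    struct pcb     *next;                 /* pointer: links PCBs into queues */
    enum proc_state state;                /* process state */
    int             pid;                  /* process number */
    uintptr_t       program_counter;      /* address of next instruction */
    uintptr_t       registers[16];        /* saved CPU registers */
    int             priority;             /* CPU-scheduling information */
    uintptr_t       base, limit;          /* memory-management information */
    unsigned long   cpu_time_used;        /* accounting information */
    int             open_files[MAX_OPEN_FILES]; /* I/O status information */
};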
PROCESS SCHEDULING

1. Scheduling Queues:
As processes enter the system, they are put into a job queue. This queue consists of all processes in the system. The processes that are residing in main memory and are ready and waiting to execute are kept on a list called the ready queue. This queue is generally stored as a linked list: a ready-queue header contains pointers to the first and final PCBs in the list.
The OS also has other queues. When a process is allocated the CPU, it executes for a while and eventually quits, is interrupted, or waits for the occurrence of a particular event, such as the completion of an I/O request. Since the system has many processes, the device may be busy with the I/O request of some other process, so the process may have to wait for the disk. The list of processes waiting for a particular I/O device is called a device queue. Each device has its own device queue.
A common representation of process scheduling is a queueing diagram. Each rectangular box represents a queue. Two types of queues are present: the ready queue and a set of device queues. Circles represent the resources that serve the queues, and arrows indicate the flow of processes in the system.
A new process is initially put in the ready queue. It waits in the ready queue until it is selected for execution. Once the process is assigned to the CPU and is executing, one of several events could occur:
• The process could issue an I/O request, and then be placed in an I/O queue.
• The process could create a new subprocess and wait for its termination.
• The process could be removed forcibly from the CPU, as a result of an interrupt, and be put back in the ready queue.

Figure: Queueing-diagram representation of process scheduling (ready queue and I/O queues, with events such as I/O request, time slice expired, fork a child, and wait for an interrupt)
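The ready queue described above is commonly a linked list of PCBs with head and tail pointers. Here is a minimal, self-contained C sketch of the two basic operations (the struct pcb is pared down to what the queue itself needs; a fuller version was sketched in the previous section):

/* The ready queue as a FIFO linked list of PCBs. */
#include <stddef.h>

struct pcb { int pid; struct pcb *next; /* ...other fields as above... */ };

struct ready_queue {
    struct pcb *head;   /* first PCB: next process to dispatch */
    struct pcb *tail;   /* final PCB: where new arrivals are linked */
};

/* add a process to the tail of the ready queue */
void rq_enqueue(struct ready_queue *q, struct pcb *p) {
    p->next = NULL;
    if (q->tail)
        q->tail->next = p;
    else
        q->head = p;        /* queue was empty */
    q->tail = p;
}

/* remove and return the process at the head (NULL if empty) */
struct pcb *rq_dequeue(struct ready_queue *q) {
    struct pcb *p = q->head;
    if (p) {
        q->head = p->next;
        if (q->head == NULL)
            q->tail = NULL; /* queue became empty */
    }
    return p;
}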
2. Schedulers
A process migrates between the various scheduling queues throughout its lifetime. The operating system must select, for scheduling purposes, processes from these queues in some fashion. The selection process is carried out by the appropriate scheduler. There are two types of schedulers:
• Long-term scheduler (or job scheduler): selects processes from the job pool and loads them into memory for execution.
• Short-term scheduler (or CPU scheduler): selects from among the processes that are ready to execute and allocates the CPU to one of them.
The primary difference between these two schedulers is the frequency of their execution. The short-term scheduler must select a new process for the CPU frequently. A process may execute for only a few milliseconds before waiting for an I/O request; the short-term scheduler often executes at least once every 100 milliseconds. The long-term scheduler, on the other hand, executes much less frequently. There may be minutes between the creation of new processes in the system; the long-term scheduler may need to be invoked only when a process leaves the system.
Some systems, such as time-sharing systems, may introduce an additional, intermediate level of scheduling. This medium-term scheduler removes processes from memory, and thus reduces the degree of multiprogramming. At some later time, the process can be reintroduced into memory, and its execution can be continued where it left off. This scheme is called swapping. The process is swapped out, and is later swapped in, by the medium-term scheduler.

3. Context Switch
Switching the CPU to another process requires saving the state of the old process and loading the saved state of the new process. This task is known as a context switch. The context of a process is represented in the PCB of the process; it includes the value of the CPU registers, the process state, and memory-management information. When a context switch occurs, the kernel saves the context of the old process in its PCB and loads the saved context of the new process scheduled to run. Context-switch time is pure overhead, because the system does no useful work while switching. Its speed varies from machine to machine, depending on the memory speed, the number of registers that must be copied, and the existence of special instructions. Typical speeds range from 1 to 1000 microseconds.

OPERATIONS ON PROCESSES

The processes in the system can execute concurrently, and they must be created and deleted dynamically. Thus, the operating system must provide a mechanism for process creation and termination.
1. Process Creation
A process may create several new processes, via a create-process system call, during the course of execution. The creating process is called a parent process, whereas the new processes are called the children of that process. Each of these new processes may in turn create other processes, forming a tree structure.
A process needs certain resources, such as CPU time, memory, files, and I/O devices, to accomplish any task. When a process creates a subprocess, the subprocess may be able to obtain its resources directly from the operating system, or the parent may have to partition its resources among its children.
Example: Consider a process whose function is to display the status of a file, say F1, on the screen of a terminal. When it is created, it will get as an input from its parent process the name of the file F1, and it will execute using that datum to obtain the desired information. It may also get the name of the output device.
When a process creates a new process, two possibilities exist in terms of execution:
1. The parent continues to execute concurrently with its children.
2. The parent waits until some or all of its children have terminated.

2. Process Termination
A process terminates when it finishes executing its final statement and asks the operating system to delete it by using the exit system call. At that point, the process may return data (output) to its parent process via a wait system call. A process can also cause the termination of another process via an appropriate system call. When a process is newly created, the identity of the newly created process is passed to its parent.
A parent may terminate the execution of one of its children for a variety of reasons, such as:
• The child has exceeded its usage of some of the resources that it has been allocated. This requires the parent to have a mechanism to inspect the state of its children.
• The task assigned to the child is no longer required.
• The parent is exiting, and the operating system does not allow a child to continue if its parent terminates. On such systems, if a process terminates, then all its children must also be terminated. This phenomenon is referred to as cascading termination.

COOPERATING PROCESSES

The concurrent processes executing in the operating system may be either independent processes or cooperating processes. A process is independent if it cannot affect or be affected by the other processes executing in the system; a process that does not share any data with any other process is independent. A process is cooperating if it can affect or be affected by the other processes executing in the system.
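On UNIX systems, the creation and termination operations just described map onto the fork, exit, and wait system calls. A minimal sketch in C, showing the case where the parent waits for its child (removing the waitpid call would give the concurrent alternative):

/* Process creation and termination with fork, exit, and wait. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                  /* create a child process */

    if (pid < 0) {                       /* fork failed */
        perror("fork");
        return 1;
    } else if (pid == 0) {               /* child runs concurrently */
        printf("child %d running\n", (int)getpid());
        exit(42);                        /* terminate, returning a status */
    } else {                             /* parent chooses to wait */
        int status;
        waitpid(pid, &status, 0);
        if (WIFEXITED(status))
            printf("child exited with status %d\n", WEXITSTATUS(status));
    }
    return 0;
}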
Process cooperation is provided for several reasons:
• Information sharing: Since several users may be interested in the same piece of information, we must provide an environment to allow concurrent access to these types of resources.
• Computation speedup: If we want a particular task to run faster, we must break it into subtasks, each of which will be executing in parallel with the others. Such speedup can be achieved only if the computer has multiple processing elements.
• Modularity: We may want to construct the system in a modular fashion, dividing the system functions into separate processes or threads.
• Convenience: Even an individual user may have many tasks on which to work at one time. For instance, a user may be editing, printing, and compiling in parallel.
To illustrate the concept of cooperating processes, consider the producer-consumer problem. A producer process produces information that is consumed by a consumer process. For example, a print program produces characters that are consumed by the printer device. A compiler may produce assembly code, which is consumed by an assembler. The assembler, in turn, may produce object modules, which are consumed by the loader.
To allow producer and consumer processes to run concurrently, we must have available a buffer of items that can be filled by the producer and emptied by the consumer. A producer can produce one item while the consumer is consuming another item. The producer and the consumer must be synchronized, so that the consumer does not try to consume an item that has not yet been produced (a code sketch of such communication appears after the IPC discussion below).

INTERPROCESS COMMUNICATION

IPC provides a mechanism to allow processes to communicate and to synchronize their actions without sharing the same address space. IPC is particularly useful in a distributed environment, where the communicating processes may reside on different computers connected by a network. An example is a chat program used on the WWW.

1. Message-Passing System
The function of a message system is to allow processes to communicate with one another without the need to resort to shared data. Communication among the user processes is accomplished through the passing of messages. An IPC facility provides at least the two operations send(message) and receive(message). Messages sent by a process can be of either fixed or variable size.
If processes P and Q want to communicate, they must send messages to and receive messages from each other; a communication link must exist between them. This link can be implemented in a variety of ways. We are concerned not with the physical implementation (such as shared memory, a hardware bus, or a network) but rather with its logical implementation. Here are several methods for logically implementing a link and the send/receive operations:
• Direct or indirect communication
• Symmetric or asymmetric communication
• Automatic or explicit buffering
• Send by copy or send by reference
• Fixed-sized or variable-sized messages

Naming
1. Direct Communication:
With direct communication, each process that wants to communicate must explicitly name the recipient or sender of the communication. In this scheme, the send and receive primitives are:
• send(P, message) – send a message to process P.
• receive(Q, message) – receive a message from process Q.
A communication link in this scheme has the following properties:
• A link is established automatically between every pair of processes that want to communicate. The processes need to know only each other's identity to communicate.
• A link is associated with exactly two processes.
• Exactly one link exists between each pair of processes.
This scheme exhibits symmetry in addressing; that is, both the sender and the receiver must name the other to communicate. A variant of this scheme employs asymmetry in addressing: only the sender names the recipient; the recipient is not required to name the sender. In this scheme, the send and receive primitives are defined as follows:
• send(P, message) – send a message to process P.
• receive(id, message) – receive a message from any process; the variable id is set to the name of the process with which communication has taken place.

2. Indirect Communication:
With indirect communication, messages are sent to and received from mailboxes, or ports. A mailbox can be viewed abstractly as an object into which messages can be placed by processes and from which messages can be removed. Each mailbox has a unique identification. In this scheme, a process can communicate with some other process via a number of different mailboxes. Two processes can communicate only if they share a mailbox. The send and receive primitives are defined as follows:
• send(A, message) – send a message to mailbox A.
• receive(A, message) – receive a message from mailbox A.
In this scheme, a communication link has the following properties:
• A link may be associated with more than two processes.
• A number of different links may exist between each pair of communicating processes, with each link corresponding to one mailbox.
A mailbox can be owned either by a process or by the operating system. If the mailbox is owned by a process, then we distinguish between the owner (who can receive messages through the mailbox) and the user (who can send messages to the mailbox). Since each mailbox has a unique owner, there can be no confusion about who should receive a message sent to the mailbox. When a process that owns a mailbox terminates, the mailbox disappears. Any process that subsequently sends a message to this mailbox must be notified that the mailbox no longer exists.
On the other hand, a mailbox owned by the operating system is independent and is not attached to any particular process. The operating system then must provide a mechanism that allows a process to do the following:
• Create a new mailbox
• Send and receive messages through the mailbox
• Delete a mailbox

3. Synchronization:
Communication between processes takes place by calls to the send and receive primitives. Message passing may be either blocking or non-blocking, also known as synchronous and asynchronous.
• Blocking send: The sending process is blocked until the message is received by the receiving process or by the mailbox.
• Non-blocking send: The sending process sends the message and resumes operation.
• Blocking receive: The receiver blocks until a message is available.
• Non-blocking receive: The receiver retrieves either a valid message or a null.

4. Buffering:
Whether communication is direct or indirect, messages exchanged by communicating processes reside in a temporary queue. Such queues can be implemented in three ways:
• Zero capacity: The queue has maximum length 0; thus, the link cannot have any messages waiting in it. In this case, the sender must block until the recipient receives the message.
• Bounded capacity: The queue has finite length n; thus, at most n messages can reside in it. If the queue is not full when a new message is sent, the message is placed in the queue, and the sender can continue execution without waiting. The link has a finite capacity, however: if the link is full, the sender must block until space is available in the queue.
• Unbounded capacity: The queue has potentially infinite length; thus, any number of messages can wait in it. The sender never blocks.
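As a concrete sketch of blocking, bounded-capacity message passing (and of the producer-consumer pattern introduced earlier), here is a C example in which a parent process sends a message to its child through a UNIX pipe. A pipe behaves like a bounded-capacity link: write blocks when the pipe's buffer is full, and read blocks until data is available.

/* Message passing between parent (sender) and child (receiver)
   through a pipe, which acts as a bounded-capacity link. */
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];                       /* fd[0]: read end, fd[1]: write end */
    if (pipe(fd) < 0) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                  /* child: the receiver */
        close(fd[1]);                /* close the unused write end */
        char msg[64];
        ssize_t n = read(fd[0], msg, sizeof msg - 1);  /* blocking receive */
        if (n > 0) {
            msg[n] = '\0';
            printf("received: %s\n", msg);
        }
        close(fd[0]);
    } else {                         /* parent: the sender */
        close(fd[0]);                /* close the unused read end */
        const char *msg = "hello from producer";
        write(fd[1], msg, strlen(msg));                /* send */
        close(fd[1]);
        wait(NULL);                  /* reap the child */
    }
    return 0;
}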
SCHEDULING ALGORITHMS

1. First-Come, First-Served Scheduling:
The simplest CPU-scheduling algorithm is the first-come, first-served (FCFS) scheduling algorithm. With this scheme, the process that requests the CPU first is allocated the CPU first. The implementation is easily managed with a FIFO queue. When the CPU is free, it is allocated to the process at the head of the queue. The running process is then removed from the queue.
The average waiting time under the FCFS policy is often quite long. Consider the following set of processes that arrive at time 0, with the length of the CPU-burst time given in milliseconds:

Process   Burst time
P1        24
P2        3
P3        3

If the processes arrive in the order P1, P2, P3 and are served in FCFS order, we get the result shown in the following Gantt chart:

| P1 (0-24) | P2 (24-27) | P3 (27-30) |

The waiting time is 0 milliseconds for process P1, 24 milliseconds for process P2, and 27 milliseconds for process P3. Thus, the average waiting time is (0+24+27)/3 = 17 milliseconds. If the processes arrive in the order P2, P3, P1, the results will be as shown in the following Gantt chart:

| P2 (0-3) | P3 (3-6) | P1 (6-30) |
The average waiting time is now (6+0+3)/3 = 3 milliseconds. Thus, the average waiting time under the FCFS policy is generally not minimal, and may vary substantially if the process CPU-burst times vary greatly.
The FCFS scheduling algorithm is non-preemptive. Once the CPU has been allocated to a process, that process keeps the CPU until it releases the CPU, either by terminating or by requesting I/O.

2. Shortest-Job-First Scheduling:
A different approach to CPU scheduling is the shortest-job-first (SJF) scheduling algorithm. This algorithm associates with each process the length of the process's next CPU burst. When the CPU is available, it is assigned to the process that has the smallest next CPU burst. If two processes have the same length next CPU burst, FCFS scheduling is used to break the tie. Consider the following set of processes, with the length of the CPU-burst time given in milliseconds:

Process   Burst time
P1        6
P2        8
P3        7
P4        3

Using SJF scheduling, we would schedule these processes according to the following Gantt chart:

| P4 (0-3) | P1 (3-9) | P3 (9-16) | P2 (16-24) |

The waiting time is 3 milliseconds for process P1, 16 milliseconds for process P2, 9 milliseconds for process P3, and 0 milliseconds for process P4. Thus, the average waiting time is (3+16+9+0)/4 = 7 milliseconds. If FCFS were used, the average waiting time would be 10.25 milliseconds.
The real difficulty with the SJF algorithm is knowing the length of the next CPU request. For long-term (job) scheduling in a batch system, the length of the process time limit is specified by the user when he submits the job. For short-term (CPU) scheduling, there is no way to know the length of the next CPU burst; we may not know the length, but we can predict it. We expect that the next CPU burst will be similar in length to the previous ones, so the next CPU burst is generally predicted as an exponential average of the measured lengths of previous CPU bursts.
Let tn be the length of the nth CPU burst, and let Tn+1 be our predicted value for the next CPU burst. Then, for α, 0 ≤ α ≤ 1, define
Tn+1 = α tn + (1 - α) Tn

This formula defines an exponential average. The value of tn contains our most recent information; Tn stores the past history. The parameter α controls the relative weight of recent and past history in our prediction.
• If α = 0, then Tn+1 = Tn: recent history has no effect.
• If α = 1, then Tn+1 = tn: only the most recent CPU burst matters.
• If α = 1/2, recent and past history are equally weighted.
The SJF algorithm may be either preemptive or non-preemptive. The choice arises when a new process arrives at the ready queue while a previous process is still executing. The new process may have a shorter next CPU burst than what is left of the currently executing process. A preemptive SJF algorithm will preempt the currently executing process, whereas a non-preemptive SJF algorithm will allow the currently running process to finish its CPU burst. Consider an example with four processes, with the length of the CPU-burst time given in milliseconds:

Process   Arrival time   Burst time
P1        0              8
P2        1              4
P3        2              9
P4        3              5

If the processes arrive at the ready queue at the times shown and need the indicated burst times, then the resulting preemptive SJF schedule is as depicted in the following Gantt chart:

| P1 (0-1) | P2 (1-5) | P4 (5-10) | P1 (10-17) | P3 (17-26) |

Process P1 is started at time 0, since it is the only process in the queue. Process P2 arrives at time 1. The remaining time for process P1 (7 milliseconds) is larger than the time required by process P2 (4 milliseconds), so process P1 is preempted and process P2 is scheduled. The average waiting time for this example is 6.5 milliseconds.
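The exponential average is a one-line update that is easy to compute incrementally. Here is a small C sketch; the initial guess T0 = 10 and the burst sequence are illustrative values only, not anything mandated by the algorithm.

/* Predicting the next CPU burst with the exponential average
   Tn+1 = a*tn + (1-a)*Tn. */
#include <stdio.h>

/* one update step of the exponential average */
double predict_next(double alpha, double observed, double previous) {
    return alpha * observed + (1.0 - alpha) * previous;
}

int main(void) {
    double tau = 10.0;                       /* initial guess T0 */
    double bursts[] = {6, 4, 6, 4, 13, 13, 13};
    double alpha = 0.5;                      /* weight recent and past equally */

    for (int i = 0; i < 7; i++) {
        tau = predict_next(alpha, bursts[i], tau);
        printf("after burst %.0f ms, next prediction is %.2f ms\n",
               bursts[i], tau);
    }
    return 0;
}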
3. Priority Scheduling:
The SJF algorithm is a special case of the general priority-scheduling algorithm. A priority is associated with each process, and the CPU is allocated to the process with the highest priority. Equal-priority processes are scheduled in FCFS order. As an example, consider the following set of processes, assumed to have arrived at time 0 in the order P1, P2, P3, P4, P5, with the length of the CPU-burst time given in milliseconds (here a low number represents a high priority):

Process   Burst time   Priority
P1        10           3
P2        1            1
P3        2            4
P4        1            5
P5        5            2

Using priority scheduling, we would schedule these processes according to the following Gantt chart:

| P2 (0-1) | P5 (1-6) | P1 (6-16) | P3 (16-18) | P4 (18-19) |

The average waiting time is 8.2 milliseconds.
Priorities can be defined either internally or externally. Internally defined priorities use some measurable quantity to compute the priority of a process: for example, time limits, memory requirements, or the number of open files. External priorities are set by criteria that are external to the operating system, such as the importance of the process, the type and amount of funds being paid for computer use, the department sponsoring the work, and other, often political, factors.
Priority scheduling can be either preemptive or non-preemptive. When a process arrives at the ready queue, its priority is compared with the priority of the currently running process. A preemptive priority-scheduling algorithm will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process. A non-preemptive priority-scheduling algorithm will simply put the new process at the head of the ready queue.
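The schedule above can be reproduced mechanically: repeatedly pick the unfinished process with the smallest priority number. A short C sketch that recomputes the 8.2 ms average for this example (all processes assumed to arrive at time 0):

/* Non-preemptive priority scheduling for the example above. */
#include <stdio.h>

struct job { const char *name; int burst; int priority; };

int main(void) {
    struct job jobs[] = {
        {"P1", 10, 3}, {"P2", 1, 1}, {"P3", 2, 4},
        {"P4", 1, 5},  {"P5", 5, 2},
    };
    int n = 5, done[5] = {0}, time = 0, total_wait = 0;

    for (int scheduled = 0; scheduled < n; scheduled++) {
        int best = -1;                 /* pick the highest-priority job left */
        for (int i = 0; i < n; i++)
            if (!done[i] &&
                (best < 0 || jobs[i].priority < jobs[best].priority))
                best = i;
        total_wait += time;            /* this job waited until now */
        printf("%s runs %d-%d (waited %d ms)\n",
               jobs[best].name, time, time + jobs[best].burst, time);
        time += jobs[best].burst;
        done[best] = 1;
    }
    printf("average waiting time = %.1f ms\n", (double)total_wait / n);
    return 0;
}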
A major problem with priority-scheduling algorithms is indefinite blocking, or starvation. A process that is ready to run but lacking the CPU is considered blocked, waiting for the CPU. A priority-scheduling algorithm can leave some low-priority processes waiting indefinitely for the CPU. Generally, one of two things will happen: either the process will eventually be run, or the computer system will eventually crash and lose all unfinished low-priority processes.
A solution to the problem of indefinite blockage of low-priority processes is aging. Aging is a technique of gradually increasing the priority of processes that wait in the system for a long time.

4. Round-Robin Scheduling:
The round-robin (RR) scheduling algorithm is designed especially for time-sharing systems. It is similar to FCFS scheduling, but preemption is added to switch between processes. A small unit of time, called a time quantum, is defined. The time quantum is generally from 10 to 100 milliseconds. The ready queue is treated as a circular queue: the CPU scheduler goes around the ready queue, allocating the CPU to each process for a time interval of up to 1 time quantum.
To implement RR scheduling, we keep the ready queue as a FIFO queue of processes. New processes are added to the tail of the ready queue. The CPU scheduler picks the first process from the ready queue, sets a timer to interrupt after 1 time quantum, and dispatches the process. One of two things will then happen. The process may have a CPU burst of less than 1 time quantum; in this case, the process itself will release the CPU voluntarily, and the scheduler will then proceed to the next process in the ready queue. Otherwise, if the CPU burst of the currently running process is longer than 1 time quantum, the timer will go off and will cause an interrupt to the operating system. A context switch will be executed, and the process will be put at the tail of the ready queue. The CPU scheduler will then select the next process in the ready queue.
Consider the following set of processes that arrive at time 0, with the length of the CPU-burst time given in milliseconds:

Process   Burst time
P1        24
P2        3
P3        3

If the time quantum is 4 milliseconds, then process P1 gets the first 4 milliseconds. Since it requires another 20 milliseconds, it is preempted after the first time quantum, and the CPU is given to the next process in the queue, process P2. Since process P2 does not need 4 milliseconds, it quits before its time quantum expires. The CPU is then given to the next process, process P3. Once each process has received 1 time quantum, the CPU is returned to process P1 for an additional time quantum. The resulting RR schedule is:

| P1 (0-4) | P2 (4-7) | P3 (7-10) | P1 (10-14) | P1 (14-18) | P1 (18-22) | P1 (22-26) | P1 (26-30) |
The average waiting time is 17/3 = 5.66 milliseconds (a short simulation reproducing this figure appears at the end of this unit). In the RR scheduling algorithm, no process is allocated the CPU for more than 1 time quantum in a row. If a process's CPU burst exceeds 1 time quantum, that process is preempted and is put back in the ready queue. The RR scheduling algorithm is therefore preemptive.
If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units. Each process must wait no longer than (n-1) x q time units until its next time quantum.
The effect of context switching must also be considered in the performance of RR scheduling. Assume that we have only one process of 10 time units. If the time quantum is 12 time units, the process finishes in less than 1 time quantum, with no overhead. If the quantum is 6 time units, the process requires 2 quanta, resulting in 1 context switch.

5. Multilevel Queue Scheduling:
A multilevel queue scheduling algorithm partitions the ready queue into several separate queues. The processes are permanently assigned to one queue, based on some property of the process, such as memory size, process priority, or process type. Each queue has its own scheduling algorithm. For example, separate queues can be used for foreground and background processes:
• Foreground queue: for interactive processes, with the higher priority; this queue can be scheduled using the RR algorithm.
• Background queue: for batch processes, with the lower priority; this queue uses the FCFS scheduling algorithm.

Figure: Multilevel queue scheduling (from highest to lowest priority: system processes, interactive processes, interactive editing processes, batch processes, student processes)

Let us look at an example of a multilevel queue scheduling algorithm with five queues:
1. System processes
2. Interactive processes
3. Interactive editing processes
4. Batch processes
5. Student processes
Each queue has absolute priority over lower-priority queues. No process in the batch queue, for example, could run unless the queues for system processes, interactive processes, and
interactive editing processes were all empty. If an interactive editing process entered the ready queue while a batch process was running, the batch process would be preempted.

6. Multilevel Feedback Queue Scheduling:
In a multilevel queue scheduling algorithm, processes are permanently assigned to a queue on entry to the system. Processes do not move between queues. If there are separate queues for foreground and background processes, for example, processes do not move from one queue to the other, since processes do not change their foreground or background nature. Multilevel feedback queue scheduling, in contrast, allows a process to move between the queues. The idea is to separate processes with different CPU-burst characteristics:
• If a process uses too much CPU time, it will be moved to a lower-priority queue.
• If a process waits too long in a lower-priority queue, it may be moved to a higher-priority queue. This form of aging prevents starvation.
For example, consider a multilevel feedback queue scheduler with three queues, numbered from 0 to 2. The scheduler first executes all processes in queue 0. Only when queue 0 is empty will it execute processes in queue 1. Similarly, processes in queue 2 will be executed only if queues 0 and 1 are empty. A process that arrives for queue 1 will preempt a process in queue 2. A process that arrives for queue 0 will, in turn, preempt a process in queue 1.
A process entering the ready queue is put in queue 0. A process in queue 0 is given a time quantum of 8 milliseconds. If it does not finish within this time, it is moved to the tail of queue 1. If queue 0 is empty, the process at the head of queue 1 is given a quantum of 16 milliseconds. If it does not complete, it is preempted and is put into queue 2. Processes in queue 2 are run on an FCFS basis, but only when queues 0 and 1 are empty.
A multilevel feedback queue scheduler is defined by the following parameters:
• The number of queues
• The scheduling algorithm for each queue
• The method used to determine when to upgrade a process to a higher-priority queue
• The method used to determine when to demote a process to a lower-priority queue
• The method used to determine which queue a process will enter when the process needs service

MULTIPLE-PROCESSOR SCHEDULING

If multiple CPUs are available, the scheduling problem is correspondingly more complex. In a multiprocessor system, systems in which the processors are identical in terms of functionality are called homogeneous, and those with processors of different functionality are called heterogeneous. Even within a homogeneous multiprocessor, there are sometimes limitations on scheduling.
If several identical processors are available, then load sharing can occur. It would be possible to provide a separate queue for each processor. In this case, however, one processor could be idle, with an empty queue, while another processor was very busy. To
MULTIPLE-PROCESSOR SCHEDULING

If multiple CPUs are available, the scheduling problem becomes more complex. In a multiple-processor system, processors that are identical in terms of functionality are called homogeneous, and processors with different functionality are called heterogeneous. Even within a homogeneous multiprocessor, there are sometimes limitations on scheduling.

If several identical processors are available, then load sharing can occur. It would be possible to provide a separate queue for each processor. In this case, however, one processor could be idle, with an empty queue, while another processor was very busy. To prevent this situation, we use a common ready queue. All processes go into one queue and are scheduled onto any available processor. In such a scheme, one of two scheduling approaches may be used:
   • Self-scheduling: each processor examines the common ready queue and selects a process to execute (see the sketch below).
   • Master-slave structure: one processor is appointed as scheduler for the other processors, creating a master-slave structure.
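A minimal sketch of self-scheduling using POSIX threads: each thread plays the role of a processor and pulls the next job from a common ready queue. The queue is reduced to a shared counter here, and the mutex reflects the real requirement that two processors must not select the same process; the job count and names are illustrative.

```c
#include <pthread.h>
#include <stdio.h>

#define NJOBS 8
#define NCPUS 3

static int next_job = 0;                 /* the "common ready queue" */
static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

static void *cpu(void *arg) {
    long id = (long)arg;
    for (;;) {
        pthread_mutex_lock(&queue_lock);   /* examine the common queue */
        int job = next_job < NJOBS ? next_job++ : -1;
        pthread_mutex_unlock(&queue_lock);
        if (job < 0) break;                /* queue empty: stop */
        printf("cpu %ld runs job %d\n", id, job);
    }
    return NULL;
}

int main(void) {
    pthread_t t[NCPUS];
    for (long i = 0; i < NCPUS; i++)
        pthread_create(&t[i], NULL, cpu, (void *)i);
    for (int i = 0; i < NCPUS; i++)
        pthread_join(t[i], NULL);
    return 0;
}
```

No processor idles while work remains: whichever "CPU" finishes first simply takes the next job from the shared queue.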
REAL TIME SCHEDULING

Real-time computing is divided into two types:
   • Hard real-time systems
   • Soft real-time systems

Hard real-time systems are required to complete a critical task within a guaranteed amount of time. A process is submitted together with a statement of the amount of time in which it needs to complete or perform I/O. The scheduler either admits the process, guaranteeing that the process will complete on time, or rejects the request as impossible. This is known as resource reservation. Such a guarantee requires that the scheduler know exactly how long each type of operating-system function takes to perform, and therefore each operation must be guaranteed to take a maximum amount of time. Hence, hard real-time systems run on special software dedicated to their critical processes, and lack the full functionality of modern computers and operating systems.

Soft real-time computing is less restrictive. It requires only that critical processes receive priority over less fortunate ones. Implementing soft real-time functionality requires careful design of the scheduler and related aspects of the operating system:
   1. The system must have priority scheduling, and real-time processes must have the highest priority. The priority of real-time processes must not degrade over time, even though the priority of non-real-time processes may.
   2. The dispatch latency must be small. The smaller the latency, the faster a real-time process can start executing once it is runnable.

                                   UNIT III
                                  DEADLOCKS

In a multiprogramming environment, several processes may compete for a finite number of resources. A process requests resources; if the resources are not available at that time, the process enters a wait state. Waiting processes may never again change state, because the resources they have requested are held by other waiting processes. This situation is called a deadlock.

A process must request a resource before using it, and must release the resource after using it. A process may request as many resources as it requires to carry out its task, but the number of resources requested may not exceed the total number of resources available in the system. A process cannot request three printers if the system has only two.

Under the normal mode of operation, a process may utilize a resource only in the following sequence:
   1. Request: if the request cannot be granted immediately (for example, if the resource is being used by another process), then the requesting process must wait until it can acquire the resource.
   2. Use: the process can operate on the resource (for example, if the resource is a printer, the process can print on the printer).
   3. Release: the process releases the resource.

A set of processes is in a deadlock state when every process in the set is waiting for an event that can be caused only by another process in the set.

DEADLOCK CHARACTERIZATION

I. Necessary Conditions:
A deadlock situation can arise if the following four conditions hold simultaneously in a system:
   1. Mutual exclusion: at least one resource must be held in a non-sharable mode, that is, only one process at a time can use the resource. If another process requests that resource, the requesting process must be delayed until the resource has been released.
   2. Hold and wait: a process must be holding at least one resource and waiting to acquire additional resources that are currently being held by other processes.
   3. No preemption: resources cannot be preempted, that is, a resource can be released only after the process has completed its task.
   4. Circular wait: a set {P0, P1, P2, ..., Pn} of waiting processes must exist such that P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, ..., Pn-1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource that is held by P0.

(A minimal two-thread demonstration of these conditions follows below.)
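The four conditions can be observed directly with two POSIX threads and two mutexes. In this sketch each thread holds one "resource" and then requests the one held by the other, so mutual exclusion, hold and wait, no preemption and circular wait all hold at once; the program hangs and never prints "done". It is meant to be read (or run and interrupted), not to terminate.

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;

static void *p1(void *arg) {
    (void)arg;
    pthread_mutex_lock(&r1);   /* request and hold R1 */
    sleep(1);                  /* give p2 time to take R2 */
    pthread_mutex_lock(&r2);   /* waits forever: R2 is held by p2 */
    puts("p1 done");
    pthread_mutex_unlock(&r2);
    pthread_mutex_unlock(&r1);
    return NULL;
}

static void *p2(void *arg) {
    (void)arg;
    pthread_mutex_lock(&r2);   /* request and hold R2 */
    sleep(1);
    pthread_mutex_lock(&r1);   /* waits forever: R1 is held by p1 */
    puts("p2 done");
    pthread_mutex_unlock(&r1);
    pthread_mutex_unlock(&r2);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, p1, NULL);
    pthread_create(&b, NULL, p2, NULL);
    pthread_join(a, NULL);     /* never returns: the threads are deadlocked */
    pthread_join(b, NULL);
    return 0;
}
```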
II. Resource Allocation Graph:
Deadlocks can be described in terms of a directed graph called a system resource-allocation graph. This graph consists of a set of vertices V and a set of edges E. The set of vertices V is partitioned into two different types of nodes:
   • P = {P1, P2, ..., Pn}, the set consisting of all active processes in the system
   • R = {R1, R2, ..., Rm}, the set consisting of all resource types in the system

A directed edge from process Pi to resource type Rj, denoted Pi → Rj, signifies that process Pi has requested an instance of resource type Rj and is currently waiting for that resource. This edge is called a request edge.

A directed edge from resource type Rj to process Pi, denoted Rj → Pi, signifies that an instance of resource type Rj has been allocated to process Pi. This edge is called an assignment edge.

Pictorially, each process Pi is represented by a circle, and each resource type Rj by a square. Since resource type Rj may have more than one instance, we represent each such instance as a dot within the square.

(Figure: resource-allocation graph with processes P1, P2, P3 and resource types R1, R2, R3, R4.)

The resource-allocation graph shown in the figure depicts the following situation:
   • The sets P, R and E:
      o P = {P1, P2, P3}
      o R = {R1, R2, R3, R4}
      o E = {P1 → R1, P2 → R3, R1 → P2, R2 → P2, R2 → P1, R3 → P3}
   • Resource instances:
      o One instance of resource type R1
      o Two instances of resource type R2
      o One instance of resource type R3
      o Three instances of resource type R4
   • Process states:
      o Process P1 is holding an instance of resource type R2, and is waiting for an instance of resource type R1.
      o Process P2 is holding an instance of R1 and an instance of R2, and is waiting for an instance of resource type R3.
      o Process P3 is holding an instance of R3.

Given the definition of a resource-allocation graph:
   • If the graph contains no cycle, then no process in the system is deadlocked.
   • If the graph contains a cycle, then a deadlock may exist.
   • If each resource type has exactly one instance, then a cycle implies that a deadlock has occurred.
   • If each resource type has several instances, then a cycle does not necessarily imply that a deadlock has occurred.

(Figure: the same resource-allocation graph with an added request edge P3 → R2, producing a deadlock.)

Two minimal cycles exist in the system:
   P1 → R1 → P2 → R3 → P3 → R2 → P1
   P2 → R3 → P3 → R2 → P2

Processes P1, P2 and P3 are deadlocked. Process P2 is waiting for the resource R3, which is held by process P3. Process P3, on the other hand, is waiting for either process P1 or process P2 to release resource R2. In addition, process P1 is waiting for process P2 to release resource R1. (The DFS sketch below checks such a graph for cycles.)
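For single-instance resource types, deadlock detection therefore reduces to cycle detection. The following C sketch encodes the deadlocked graph above as an adjacency matrix over six nodes and runs a depth-first search that reports a back edge; the node numbering is an arbitrary choice for illustration.

```c
#include <stdio.h>

/* Nodes: P1=0, P2=1, P3=2, R1=3, R2=4, R3=5. A cycle in this directed
 * graph means deadlock when every resource has a single instance. */
#define N 6

static const int adj[N][N] = {
    [0] = { [3] = 1 },            /* P1 -> R1 */
    [1] = { [5] = 1 },            /* P2 -> R3 */
    [2] = { [4] = 1 },            /* P3 -> R2 (the request closing the cycle) */
    [3] = { [1] = 1 },            /* R1 -> P2 */
    [4] = { [0] = 1, [1] = 1 },   /* R2 -> P1, R2 -> P2 */
    [5] = { [2] = 1 },            /* R3 -> P3 */
};

static int state[N];  /* 0 = unvisited, 1 = on DFS stack, 2 = finished */

static int dfs(int u) {
    state[u] = 1;
    for (int v = 0; v < N; v++) {
        if (!adj[u][v]) continue;
        if (state[v] == 1) return 1;          /* back edge: cycle found */
        if (state[v] == 0 && dfs(v)) return 1;
    }
    state[u] = 2;
    return 0;
}

int main(void) {
    for (int u = 0; u < N; u++)
        if (state[u] == 0 && dfs(u)) {
            puts("cycle found: the processes are deadlocked");
            return 0;
        }
    puts("no cycle: no deadlock");
    return 0;
}
```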
Methods for Handling Deadlocks

The deadlock problem can be dealt with in one of three ways:
   • We can use a protocol to prevent or avoid deadlocks, ensuring that the system will never enter a deadlock state.
   • We can allow the system to enter a deadlock state, detect it, and recover.
   • We can ignore the problem altogether, and pretend that deadlocks never occur in the system.

To ensure that deadlocks never occur, the system can use either a deadlock-prevention or a deadlock-avoidance scheme.

Deadlock prevention is a set of methods for ensuring that at least one of the necessary conditions cannot hold.

Deadlock avoidance requires that the OS be given in advance additional information concerning which resources a process will request and use during its lifetime. With this additional knowledge, the system can decide for each request whether it can be satisfied or must be delayed; to make that decision, it must consider the resources currently available, the resources currently allocated to each process, and the future requests and releases of each process.

DEADLOCK PREVENTION

Deadlocks can be prevented by ensuring that at least one of the four necessary conditions cannot hold. The conditions are:
   • Mutual exclusion
   • Hold and wait
   • No preemption
   • Circular wait

1. Mutual Exclusion:
The mutual exclusion condition must hold for non-sharable resources. For example, a printer cannot be simultaneously shared by several processes. Sharable resources, on the other hand, do not require mutually exclusive access, and thus cannot be involved in a deadlock. Read-only files are a good example of a sharable resource: if several processes attempt to open a read-only file at the same time, they can be granted simultaneous access to the file. A process never needs to wait for a sharable resource.

2. Hold and Wait:
To ensure that the hold-and-wait condition never occurs in the system, we must guarantee that, whenever a process requests a resource, it does not hold any other resources.
One protocol that can be used requires each process to request and be allocated all its resources before it begins execution. Another protocol allows a process to request resources only when the process has none: a process may request some resources and use them, but before it can request any additional resources, it must release all the resources that it is currently allocated.

Examples to illustrate the two protocols: consider a process that copies data from a tape drive to a disk file, sorts the disk file, and then prints the results to a printer.

Protocol one - if all resources must be requested at the beginning of the process, then the process must initially request the tape drive, disk file and printer. It will hold the printer for its entire execution, even though it needs the printer only at the end.

Protocol two - the second method allows the process to request initially only the tape drive and disk file. It copies from the tape drive to the disk file, then releases both. The process must then request the disk file and the printer; after copying the disk file to the printer, it releases these two resources and terminates.

Disadvantages of the two protocols:
   1. Resource utilization may be low, since many of the resources may be allocated but unused for a long period.
   2. Starvation is possible. A process that needs several popular resources may have to wait indefinitely, because at least one of the resources that it needs is always allocated to some other process.

3. No Preemption:
The third necessary condition is that there be no preemption of resources that have already been allocated. To ensure that this condition does not hold, the following protocol is used. If a process is holding some resources and requests another resource that cannot be immediately allocated to it, then all resources it is currently holding are preempted; that is, they are implicitly released and added to the list of resources for which the process is waiting. The process will be restarted only when it can regain its old resources, as well as the new ones that it is requesting.

Alternatively, if a process requests some resources, we first check whether they are available. If they are, we allocate them. If they are not, we check whether they are allocated to some other process that is itself waiting for additional resources. If so, we preempt the desired resources from the waiting process and allocate them to the requesting process. If the resources are neither available nor held by a waiting process, the requesting process must wait. While it is waiting, some of its resources may be preempted, but only if another process requests them. A process can be restarted only when it is allocated the new resources it is requesting and recovers any resources that were preempted while it was waiting. (A trylock-based sketch of this release-on-failure idea follows below.)
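A rough C sketch of the release-on-failure idea, using pthread_mutex_trylock as a resource request that fails instead of blocking: a process holding the tape drive that cannot also get the printer releases the tape drive and retries, so it never holds while waiting. The resource names are illustrative, and a real implementation would add back-off to avoid livelock.

```c
#include <pthread.h>
#include <sched.h>

static pthread_mutex_t tape = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t printer = PTHREAD_MUTEX_INITIALIZER;

/* Acquire both resources without ever holding one while blocked on the
 * other: on failure, release everything held and start over. */
static void acquire_both(void) {
    for (;;) {
        pthread_mutex_lock(&tape);            /* hold the first resource */
        if (pthread_mutex_trylock(&printer) == 0)
            return;                           /* got both: proceed */
        pthread_mutex_unlock(&tape);          /* failed: release and retry */
        sched_yield();                        /* crude back-off point */
    }
}

int main(void) {
    acquire_both();
    /* ... use the tape drive and printer ... */
    pthread_mutex_unlock(&printer);
    pthread_mutex_unlock(&tape);
    return 0;
}
```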
4. Circular Wait:
The fourth necessary condition is circular wait. One way to ensure that this condition never holds is to impose a total ordering of all resource types, and to require that each process requests resources in increasing order of enumeration.

Let R = {R1, R2, ..., Rm} be the set of resource types. We assign to each resource type a unique number, which allows us to compare two resources and determine whether one precedes the other in our ordering. Example:
   F(tape drive) = 1, F(disk drive) = 5, F(printer) = 12.

Now consider the following protocol to prevent deadlocks: each process can request resources only in increasing order of enumeration. If a process has requested resource type Ri, it can subsequently request instances of resource type Rj if and only if F(Rj) > F(Ri). For example, using the function defined above, a process that wants to use the tape drive and printer at the same time must first request the tape drive and then request the printer.

Alternatively, we can require that, whenever a process requests an instance of resource type Rj, it must first have released any resources Ri such that F(Ri) >= F(Rj).

If either of these two protocols is used, then the circular-wait condition cannot hold. (A lock-ordering sketch follows below.)
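The ordering protocol maps directly onto lock acquisition. Below is a minimal C sketch using the F values from the example above; the lock variables and the helper function are illustrative.

```c
#include <pthread.h>
#include <stdio.h>

/* Resource ordering: F(tape) = 1, F(disk) = 5, F(printer) = 12.
 * Every process acquires resources in increasing F order, so no
 * circular wait can form. */
enum { TAPE = 1, DISK = 5, PRINTER = 12 };

static pthread_mutex_t tape_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t printer_lock = PTHREAD_MUTEX_INITIALIZER;

/* Acquire two resources in the order their F values dictate. */
static void lock_pair(int fa, pthread_mutex_t *la,
                      int fb, pthread_mutex_t *lb) {
    if (fa < fb) { pthread_mutex_lock(la); pthread_mutex_lock(lb); }
    else         { pthread_mutex_lock(lb); pthread_mutex_lock(la); }
}

int main(void) {
    /* A process wanting the tape drive and the printer must take the
     * tape drive first, because F(tape) = 1 < F(printer) = 12. */
    lock_pair(TAPE, &tape_lock, PRINTER, &printer_lock);
    puts("holding tape drive and printer in safe order");
    pthread_mutex_unlock(&printer_lock);
    pthread_mutex_unlock(&tape_lock);
    return 0;
}
```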
DEADLOCK RECOVERY

There are two approaches to recovering from a deadlock:
   • Suspend/resume a process
   • Kill a process

1. Suspend/Resume a Process:
In this method a process is selected based on a variety of criteria (for example, low priority) and is suspended for a long time. Its resources are reclaimed and allocated to the other processes that are waiting for them. When one of the waiting processes finishes, the original suspended process is resumed.

This strategy cannot be used in on-line or real-time systems, because the response time of some processes then becomes unpredictable. Suspend/resume operations are also not easy to manage. Suppose, for example, that a tape has been read halfway through when the process holding the tape drive is suspended. The operator will have to dismount that tape and mount the new tape for the process to which the tape drive is now allocated. After the new process is over, when the old process is resumed, the tape for the original process will have to be mounted again and, more importantly, exactly repositioned.

2. Kill a Process:
The operating system decides to kill a process and reclaim all its resources after ensuring that such action will resolve the deadlock. This solution is simple but involves the loss of at least one process. Choosing the process to be killed, again, depends on the scheduling policy and the process priority. It is safest to kill a low-priority process which has just begun, so that the loss is not very heavy.

DEADLOCK AVOIDANCE

Deadlock avoidance starts from an environment where a deadlock is possible, but an algorithm in the operating system ensures, before allocating any resource, that the allocation cannot lead to a deadlock. If a deadlock cannot be ruled out, the operating system does not grant the process's request for the resource.

Dijkstra was the first to propose an algorithm for deadlock avoidance, in 1965. It is known as the "Banker's algorithm", owing to its similarity to the problem of a banker wanting to disburse loans to various customers within limited resources. The algorithm lets the OS determine in advance, before a resource is allocated to a process, whether the allocation can lead to a deadlock ("unsafe state") or whether a deadlock can still be avoided ("safe state").

The Banker's algorithm maintains two matrices:
   • Matrix A - the resources allocated to the different processes at a given time.
   • Matrix B - the resources still needed by the different processes at the same time.

   Matrix A - Resources assigned
   Process   Tape drives   Printers   Plotters
   P0             2            0          0
   P1             0            1          0
   P2             1            2          1
   P3             1            0          1

   Matrix B - Resources still required
   Process   Tape drives   Printers   Plotters
   P0             1            0          0
   P1             1            1          0
   P2             2            1          1
   P3             1            1          1
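Before walking through the algorithm step by step, the bookkeeping can be made concrete. The C sketch below encodes Matrix A, Matrix B and the total vector T = 543 from the figure, derives F = T - H = 111, and runs the trial-completion loop that the numbered steps below spell out in prose; it reports whether the pictured state is safe. It is a minimal sketch of the safety test only, not a full request-handling implementation.

```c
#include <stdio.h>

#define NPROC 4
#define NRES 3   /* tape drives, printers, plotters */

int main(void) {
    int A[NPROC][NRES] = {{2,0,0},{0,1,0},{1,2,1},{1,0,1}}; /* allocated */
    int B[NPROC][NRES] = {{1,0,0},{1,1,0},{2,1,1},{1,1,1}}; /* still needed */
    int T[NRES] = {5, 4, 3}, F[NRES], done[NPROC] = {0};

    for (int r = 0; r < NRES; r++) {        /* F = T minus column sums of A */
        F[r] = T[r];
        for (int p = 0; p < NPROC; p++) F[r] -= A[p][r];
    }                                        /* yields F = {1, 1, 1} */

    for (int finished = 0, progress = 1; progress; ) {
        progress = 0;
        for (int p = 0; p < NPROC; p++) {
            if (done[p]) continue;
            int fits = 1;                    /* vector compare: B[p] <= F ? */
            for (int r = 0; r < NRES; r++)
                if (B[p][r] > F[r]) fits = 0;
            if (!fits) continue;
            for (int r = 0; r < NRES; r++)   /* trial completion: reclaim */
                F[r] += A[p][r];
            done[p] = 1;
            progress = 1;
            printf("P%d can run to completion\n", p);
            if (++finished == NPROC) puts("state is safe");
        }
    }
    for (int p = 0; p < NPROC; p++)
        if (!done[p]) { puts("state is unsafe"); return 1; }
    return 0;
}
```

On the figure's data the sketch finds the order P0, P1, P2, P3 and reports a safe state, matching the discussion that follows.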
Vectors:
   Total resources (T) = 543
   Held resources (H) = 432
   Free resources (F) = 111

Matrix A shows that process P0 is holding 2 tape drives, while at the same time P1 is holding 1 printer, and so on. The totals held by the various processes are 4 tape drives, 3 printers and 2 plotters. This is written as the vector 432, which should not be confused with the decimal number 432: within the vector, each digit is a separate count. By the same logic, the vector for the total resources (T) is 543, meaning that the whole system physically has 5 tape drives, 4 printers and 3 plotters. These resources are made known to the operating system at the time of system generation. By subtracting (H) from (T) column-wise, we get the vector (F) of free resources, which is 111. This means that the resources available to the operating system for further allocation are 1 tape drive, 1 printer and 1 plotter at that juncture.

Matrix B gives, process by process, the additional resources expected to be required during the course of execution. For instance, process P2 will require 2 tape drives, 1 printer and 1 plotter in addition to the resources it already holds. This means that process P2 requires in all 1+2 = 3 tape drives, 2+1 = 3 printers and 1+1 = 2 plotters. If the vector of all the resources required by all the processes (the vector addition of Matrix A and Matrix B) is less than the vector T for each of the resources, there will be no contention and therefore no deadlock. If that is not so, a deadlock has to be avoided.

Having maintained these two matrices, the algorithm for deadlock avoidance works as follows:

(i) Each process declares its total required resources to the operating system at the beginning. The operating system puts this figure in Matrix B (resources required for completion) against the process. For a newly created process, the row in Matrix A is all zeros to begin with, because no resources are yet assigned to it. For instance, at the beginning of process P2, the figures for the row P2 in Matrix A will be all 0s, and those in Matrix B will be 3, 3 and 2 respectively.
(ii) When a process requests a resource, the operating system finds out whether the resource is free and whether it can be allocated, by using the vector F. If it can be allocated, the operating system does so, and updates Matrix A by adding 1 to the appropriate slot. It simultaneously subtracts 1 from the corresponding slot of Matrix B. For instance, starting from the beginning, if the operating system allocates a tape drive to P2, the row for P2 in Matrix A will become 1, 0 and 0. The row for P2 in Matrix B will correspondingly become 2, 3 and 2. At any time, the vector addition of these two rows is constant, and is equivalent to the total resources needed by P2, which in this case is 3, 3 and 2.

(iii) However, before making the actual allocation, whenever a process makes a request for any resource, the operating system goes through the Banker's algorithm to ensure that after the imaginary allocation there need not be a deadlock, i.e. that after the allocation the system will still be in a 'safe state'. The operating system actually allocates the resource only after ensuring this. If it finds that there can be a deadlock after the imaginary allocation at some point in time, it postpones the decision to allocate that resource, and calls the state of the system that would result after the possible allocation 'unsafe'. Remember: the unsafe state is not actually a deadlock; it is a situation of a potential deadlock.

How does the operating system conclude whether a state is safe or unsafe? It looks at vector F and each row of Matrix B and compares them on a vector-to-vector basis, i.e. within the vectors it compares each digit separately, to conclude whether all the resources that a process is going to need to complete are available at that juncture. For instance, the figure shows F = 111: at that juncture, the system has 1 tape drive, 1 printer and 1 plotter free and allocable. The first row in Matrix B, for P0, is 100. This means that if the operating system decides to allocate all needed resources to P0, P0 can go to completion, because 111 >= 100 on a vector basis. Similarly, the row for P1 in Matrix B is 110; therefore, if the operating system decides to allocate resources to P1 instead of P0, P1 can complete. The row for P2, however, is 211; P2 cannot complete unless one more tape drive becomes available, because 211 is greater than 111 on a vector basis.

The vector comparison should not be confused with arithmetic comparison. For instance, if F were 411 and a row in Matrix B were 322, it might appear that 411 > 322 and therefore that the process could go to completion. But that is not true: as 4 > 3, the tape drives would be allocable, but as 1 < 2, the printer as well as the plotter would both fall short.

The operating system now does the following to ensure the safe state:

(a) After the process requests a resource, the operating system allocates it on a 'trial' basis.

(b) After this trial allocation, it updates all the matrices and vectors, i.e. it arrives at the new values of F and Matrix B, as if the allocation were actually done. Obviously, this updating has to be done by the operating system in a separate work area in memory.
(c) It then compares vector F with each row of Matrix B on a vector basis.

(d) If F is smaller than each of the rows in Matrix B on a vector basis, i.e. even if all of F were made available to any of the processes in Matrix B, none would be guaranteed to complete, the operating system concludes that it is an 'unsafe state'. Again, this does not mean that a deadlock has resulted; it means that one can take place if the operating system actually goes ahead with the allocation.

(e) If F is greater than or equal to some row for a process in Matrix B, the operating system proceeds as follows:
   • It allocates all the needed resources for that process on a trial basis.
   • It assumes that, after this trial allocation, the process will eventually complete and, in fact, release all its resources on completion. These resources are then added to the free pool (F). It recalculates all the matrices and F after this trial allocation and the imaginary completion of the process, and removes the row for the completed process from both matrices.
   • It repeats the procedure from step (c) above. If, in the process, all the rows in the matrices get eliminated, i.e. all the processes can go to completion, it concludes that it is a 'safe state', i.e. even after the allocation a deadlock can be avoided. Otherwise, it concludes that it is an 'unsafe state'.

(f) For each request for any resource by a process, the operating system goes through all these trial or imaginary allocations and updates, and if it finds that after the trial allocation the state of the system would be 'safe', it actually goes ahead and makes the allocation, after which it updates the various matrices and tables in the real sense. The operating system may need to maintain two sets of matrices for this purpose: at any time, before any allocation, it can copy the first set of matrices (the real one) into the other, carry out all trial allocations and updates in the second set, and, if a safe state results, update the former set with the allocations.

Banker's Algorithm

The resource-allocation graph algorithm is not applicable to a resource-allocation system with multiple instances of each resource type. The deadlock-avoidance algorithm that we describe next is applicable to such a system, but is less efficient than the resource-allocation graph scheme. This algorithm is commonly known as the banker's algorithm. The name was chosen because the algorithm could be used in a banking system to ensure that the bank never allocates its available cash in such a way that it can no longer satisfy the needs of all its customers.

When a new process enters the system, it must declare the maximum number of instances of each resource type that it may need. This number may not exceed the total number of resources in the system. When a user requests a set of resources, the system must determine whether the allocation of these resources will leave the system in a safe