Unit 2
Process Management
A process is basically a program in execution. The execution of a process must
progress in a sequential fashion.
A process is defined as an entity which represents the basic unit of work to be
implemented in the system.
To put it in simple terms, we write our computer programs in a text file and when
we execute this program, it becomes a process which performs all the tasks
mentioned in the program.
When a program is loaded into memory and becomes a process, it can be divided into four sections ─ stack, heap, text and data ─ which together form a simplified layout of a process inside main memory:
Stack
The process stack contains temporary data such as method/function parameters, return addresses, and local variables.
Heap
This is memory that is dynamically allocated to the process during its run time.
Text
This section contains the compiled program code; the current activity is represented by the value of the program counter and the contents of the processor's registers.
Data
This section contains the global and static variables.
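As an illustrative sketch (not part of the original slides), the following C fragment notes which of these sections each kind of variable typically occupies:

#include <stdlib.h>

int counter = 0;                            /* data: global and static variables      */

int square(int x) {                         /* the compiled code itself lives in text */
    int result = x * x;                     /* stack: local variable of this call     */
    return result;
}

int main(void) {
    int local = square(counter + 5);        /* stack: local variable, return address  */
    int *buffer = malloc(4 * sizeof(int));  /* heap: allocated at run time            */
    (void)local;                            /* silence unused-variable warning        */
    free(buffer);
    return 0;
}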
Program
A program is a piece of code which may be a single line or
millions of lines.
A computer program is usually written by a computer
programmer in a programming language.
For example, here is a simple program written in C programming
language −
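The original slide shows the example as an image; as a stand-in, a minimal C program (assumed, not the slide's original listing) could be:

#include <stdio.h>

int main(void) {
    printf("Hello, World!\n");   /* the single task this program performs */
    return 0;
}

Compiling this file and running the resulting executable is what turns the program into a process.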
• A computer program is a collection of instructions that performs
a specific task when executed by a computer.
• When we compare a program with a process, we can conclude
that a process is a dynamic instance of a computer program.
• A part of a computer program that performs a well-defined task
is known as an algorithm.
• A collection of computer programs, libraries, and related data is referred to as software.
Process Life Cycle (Process State)
• When a process executes, it passes through different states. These states may differ between operating systems, and their names are not standardized.
• In general, a process can have one of the following five states
at a time.
1. Start: This is the initial state, when a process is first started/created.
2. Ready: The process is waiting to be assigned to a processor. Ready processes are waiting for the operating system to allocate the processor to them so that they can run. A process may enter this state after the Start state, or while running it may be interrupted by the scheduler when the CPU is assigned to some other process.
3. Running: Once the process has been assigned to a processor by the OS scheduler, its state is set to running and the processor executes its instructions.
4. Waiting: The process moves into the waiting state if it needs to wait for a resource, such as user input or a file to become available.
5. Terminated or Exit: Once the process finishes its execution, or is terminated by the operating system, it is moved to the terminated state, where it waits to be removed from main memory.
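Purely as an illustration (state names here are generic, not taken from any particular kernel), the five states and one possible path through them can be sketched in C:

#include <stdio.h>

/* Generic state names for illustration; real kernels use their own encodings. */
enum proc_state { STATE_START, STATE_READY, STATE_RUNNING, STATE_WAITING, STATE_TERMINATED };

static const char *state_name[] = { "Start", "Ready", "Running", "Waiting", "Terminated" };

int main(void) {
    /* One possible path through the life cycle of a process. */
    enum proc_state path[] = { STATE_START, STATE_READY, STATE_RUNNING,
                               STATE_WAITING, STATE_READY, STATE_RUNNING,
                               STATE_TERMINATED };
    int steps = (int)(sizeof path / sizeof path[0]);
    for (int i = 0; i < steps; i++)
        printf("%s\n", state_name[path[i]]);
    return 0;
}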
THREADS:
• A thread is a flow of execution through the process code, with
its own program counter that keeps track of which instruction to
execute next, system registers which hold its current working
variables, and a stack which contains the execution history.
• A thread shares with its peer threads information such as the code segment, the data segment, and open files. When one thread alters a data item in a shared segment, all other threads see that change.
• A thread is also called a lightweight process.
• Threads provide a way to improve application performance
through parallelism.
• Threads represent a software approach to improving operating-system performance by reducing the overhead of creating and switching between full processes.
• Each thread belongs to exactly one process and no thread can
exist outside a process.
• Each thread represents a separate flow of control.
• The following figure shows the working of a single-threaded and
a multithreaded process.
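A minimal POSIX-threads sketch (assumes a POSIX system; compile with -pthread) showing two threads of the same process sharing a global variable:

#include <pthread.h>
#include <stdio.h>

int shared = 0;                       /* data segment: visible to every thread       */

void *worker(void *arg) {
    shared += *(int *)arg;            /* each thread has its own stack and PC,        */
    return NULL;                      /* but changes to globals are seen by its peers */
}

int main(void) {
    pthread_t t1, t2;
    int a = 1, b = 2;
    pthread_create(&t1, NULL, worker, &a);
    pthread_create(&t2, NULL, worker, &b);
    pthread_join(t1, NULL);           /* wait for both threads to finish              */
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);  /* prints 3; real code would guard the update   */
    return 0;                         /* with a mutex to avoid a data race            */
}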
Difference between Process and Thread
Advantages of Thread
• Threads minimize the context switching time.
• Use of threads provides concurrency within a process.
• Communication between threads is efficient.
• It is more economical to create and context-switch threads than processes.
• Threads allow utilization of multiprocessor architectures to a
greater scale and efficiency.
Context Switching
• A context switch is the mechanism of storing and restoring the state, or context, of a CPU in a Process Control Block (PCB) so that process execution can be resumed from the same point at a later time.
• Using this technique, a context switcher enables multiple processes
to share a single CPU.
• Context switching is an essential feature of a multitasking operating system.
• When the scheduler switches the CPU from one process to another, the state of the currently running process is stored in its process control block.
• After this, the state for the process to run next is loaded from its
own PCB and used to set the PC, registers, etc.
• At that point, the second process can start executing.
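Context switching happens in architecture-specific kernel code, so it cannot be shown faithfully in portable C; the fragment below is only a schematic sketch with hypothetical PCB fields and stand-in helper functions:

#include <stdio.h>

enum { READY, RUNNING };

struct pcb {                     /* hypothetical Process Control Block        */
    unsigned long pc;            /* saved program counter                     */
    unsigned long regs[4];       /* saved general-purpose registers           */
    int state;
};

static void save_cpu_state(struct pcb *p) {   /* stand-in for real register saves */
    printf("saving process, PC = %lu\n", p->pc);
}

static void load_cpu_state(const struct pcb *p) {
    printf("resuming process at PC = %lu\n", p->pc);
}

static void context_switch(struct pcb *current, struct pcb *next) {
    save_cpu_state(current);     /* store PC and registers into current's PCB */
    current->state = READY;      /* current goes back to the ready queue      */
    load_cpu_state(next);        /* restore PC and registers from next's PCB  */
    next->state = RUNNING;       /* next resumes exactly where it left off    */
}

int main(void) {
    struct pcb a = { 100, {0}, RUNNING }, b = { 200, {0}, READY };
    context_switch(&a, &b);
    return 0;
}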
CPU Scheduling Algorithms
• A Process Scheduler schedules different processes to be
assigned to the CPU based on particular scheduling algorithms.
• There are several popular process scheduling algorithms; the following are discussed in this chapter −
• First-Come, First-Served (FCFS) Scheduling
• Shortest-Job-First (SJF) Scheduling
• Round Robin (RR) Scheduling
• Multiple-Level Queues Scheduling
• Multi-level Feedback Scheduling
• These algorithms are either non-preemptive or preemptive.
• Non-preemptive algorithms are designed so that once a process enters the running state, it cannot be preempted until it completes its allotted time. Preemptive scheduling, in contrast, is priority based: the scheduler may preempt a low-priority running process at any time when a high-priority process enters the ready state.
CPU Scheduling in Operating Systems
Scheduling of processes/work is done to finish the work on time.
Below are the different times associated with a process.
Arrival Time: Time at which the process arrives in the ready queue.
Completion Time: Time at which process completes its execution.
Burst Time: Time required by a process for CPU execution.
Turn Around Time: Time Difference between completion time and
arrival time.
Turn Around Time = Completion Time – Arrival Time
Waiting Time (W.T): Time Difference between turn around time and
burst time.
Waiting Time = Turn Around Time – Burst Time
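A minimal sketch that applies the two formulas above (the numbers are made up for illustration):

#include <stdio.h>

int main(void) {
    int arrival = 0, burst = 5, completion = 12;   /* example values only */

    int turnaround = completion - arrival;  /* Turn Around Time = Completion - Arrival */
    int waiting    = turnaround - burst;    /* Waiting Time = Turn Around - Burst      */

    printf("Turnaround = %d, Waiting = %d\n", turnaround, waiting);  /* 12 and 7 */
    return 0;
}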
Why do we need scheduling?
• A typical process involves both I/O time and CPU time. In a uniprogramming system like MS-DOS, time spent waiting for I/O is wasted and the CPU is idle during that time.
• In multiprogramming systems, one process can use the CPU while another is waiting for I/O. This is possible only with process scheduling.
Objectives of Process Scheduling Algorithm
• Max CPU utilization [keep the CPU as busy as possible]
• Fair allocation of the CPU
• Max throughput [number of processes that complete their execution per time unit]
• Min turnaround time [time taken by a process to finish execution]
• Min waiting time [time a process waits in the ready queue]
• Min response time [time until a process produces its first response]
First-Come, First-Served (FCFS)
Scheduling
• FCFS stands for First Come, First Served. It is the simplest CPU scheduling algorithm.
• In this type of algorithm, the process that requests the CPU first is allocated the CPU first.
• This scheduling method can be managed with a FIFO queue.
As the process enters the ready queue, its PCB (Process
Control Block) is linked with the tail of the queue.
• So, when the CPU becomes free, it is assigned to the process at the head of the queue.
Characteristics of FCFS method:
• It is a non-preemptive scheduling algorithm.
• Jobs are always executed on a first-come, first-served basis.
• It is easy to implement and use.
• However, this method performs poorly, and the average wait time is quite high.
Program for FCFS CPU Scheduling
• Given n processes with their burst times, the task is to find the average waiting time and average turnaround time using the FCFS scheduling algorithm.
• First In, First Out (FIFO), also known as First Come, First Served (FCFS), is the simplest scheduling algorithm.
• FIFO simply queues processes in the order in which they arrive in the ready queue. The process that comes first is executed first, and the next process starts only after the previous one has finished completely.
• Here we assume that the arrival time of all processes is 0.
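A minimal C sketch of this calculation (the burst times below are illustrative, and all arrival times are taken as 0):

#include <stdio.h>

int main(void) {
    int burst[] = { 24, 3, 3 };              /* illustrative burst times              */
    int n = (int)(sizeof burst / sizeof burst[0]);
    int wait = 0, total_wait = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {            /* processes run in arrival order        */
        total_wait += wait;                  /* waiting time = sum of earlier bursts  */
        total_tat  += wait + burst[i];       /* turnaround = waiting + own burst      */
        wait       += burst[i];              /* next process waits this much more     */
    }
    printf("Average waiting time    = %.2f\n", (double)total_wait / n);  /* 17.00 */
    printf("Average turnaround time = %.2f\n", (double)total_tat  / n);  /* 27.00 */
    return 0;
}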
Important Points:
1. Non-preemptive
2. Average waiting time is not optimal
3. Cannot utilize resources in parallel: results in the convoy effect (consider a situation with many I/O-bound processes and one CPU-bound process; the I/O-bound processes must wait while the CPU-bound process holds the CPU, even though they could have used the CPU briefly and then moved on to their I/O devices).
Convoy Effect in OS
• The convoy effect is a phenomenon that occurs with the First Come First Serve scheduling algorithm. Consider a very long process at the front of the ready queue: because of this long process, the other processes in the queue may starve for a very long time.
• To understand this, consider the example of a very long train passing by: smaller cars and other vehicles may have to wait a very long time before they can cross.
• So, in the First Come First Serve algorithm (FCFS), jobs are executed in the order they arrive. Since FCFS is non-preemptive, jobs cannot be preempted (stopped) once they have begun execution, so until the running process finishes, no other process can get the CPU.
• This situation can be likened to a convoy passing along a road: until the convoy has passed completely, other people cannot use the road.
• Advantages –
• It is simple and easy to understand.
• Disadvantages –
• Processes with short execution times suffer, i.e., their waiting time is often quite long.
• It favors CPU-bound processes over I/O-bound processes.
• Here, the first process gets the CPU first; other processes can get the CPU only after the current process has finished its execution. Now, if the first process has a large burst time and the other processes have small burst times, those processes will have to wait unnecessarily long. This results in a higher average waiting time, i.e., the convoy effect.
• This effect results in lower CPU and device utilization.
• FCFS algorithm is particularly troublesome for time-sharing systems, where it
is important that each user get a share of the CPU at regular intervals.
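The convoy effect can also be seen numerically with the same FCFS arithmetic; the burst times below are chosen only for illustration, comparing a long CPU-bound job placed first against the short jobs being served first:

#include <stdio.h>

/* Average FCFS waiting time when every process arrives at time 0. */
static double avg_wait(const int burst[], int n) {
    int wait = 0, total = 0;
    for (int i = 0; i < n; i++) {
        total += wait;
        wait  += burst[i];
    }
    return (double)total / n;
}

int main(void) {
    int convoy[] = { 24, 3, 3 };   /* long job arrives first: others crawl behind it */
    int better[] = { 3, 3, 24 };   /* short jobs are served first                    */
    printf("long job first  : average wait = %.2f\n", avg_wait(convoy, 3));  /* 17.00 */
    printf("short jobs first: average wait = %.2f\n", avg_wait(better, 3));  /*  3.00 */
    return 0;
}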
Shortest-Job-First (SJF) Scheduling
• Shortest Job First Scheduling (Non-Preemptive Algorithm)
• Shortest Job First (SJF) is a scheduling algorithm in which processes are executed in ascending order of their burst times; that is, the process with the shortest burst time is executed first, and so on.
• The scheduler is assumed to know the burst time of each process in advance.
• It can be thought of as the shortest-next-CPU-burst algorithm, since scheduling depends on the length of the next CPU burst of a process. (The duration for which a process holds the CPU is its burst time.)
• SJF can be preemptive or non-preemptive. Under non-preemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it finishes its execution.
• Let us try to understand SJF non-preemptive scheduling with an example.
• Here, all processes are assumed to arrive at the same time.
• See the example below −
• Steps –
1. Since the burst time of P3 (burst = 1) is the lowest, it is executed first.
2. The burst time of P1 is the next lowest, so it is executed second.
3. Similarly, P4 is executed next, and then P2.
• Waiting time –
1. P1 waiting time = 1
2. P2 waiting time = 7
3. P3 waiting time = 0
4. P4 waiting time = 3
• The average waiting time is (1 + 7 + 0 + 3) / 4 = 2.75 ms
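A C sketch of this non-preemptive SJF example (the slide's burst-time table is an image; the bursts below are reconstructed from the stated waiting times, and P2's burst of 5 is an assumption, since any value larger than P4's gives the same result):

#include <stdio.h>

int main(void) {
    const char *name[] = { "P1", "P2", "P3", "P4" };
    int burst[]        = {   2,    5,    1,    4  };  /* P2's burst is assumed */
    int n = 4, done[4] = { 0 }, wait[4] = { 0 }, t = 0;

    for (int k = 0; k < n; k++) {                /* pick the shortest remaining job */
        int next = -1;
        for (int i = 0; i < n; i++)
            if (!done[i] && (next < 0 || burst[i] < burst[next]))
                next = i;
        wait[next] = t;                          /* it waited until now             */
        t += burst[next];                        /* then runs to completion         */
        done[next] = 1;
    }

    double avg = 0.0;
    for (int i = 0; i < n; i++) {
        printf("%s waiting time = %d\n", name[i], wait[i]);
        avg += wait[i];
    }
    printf("average waiting time = %.2f\n", avg / n);  /* (1 + 7 + 0 + 3) / 4 = 2.75 */
    return 0;
}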
• Features of SJF
• SJF is a greedy algorithm.
• It gives the minimum average waiting time among all scheduling algorithms.
• The difficulty of SJF is knowing the length of the next CPU request.
• It is used frequently for long-term scheduling in batch systems, where the time limit is provided by the user submitting the job. SJF cannot be implemented directly in short-term scheduling, since the exact length of the next CPU burst cannot be known in advance.
• Drawback
• In SJF scheduling, a process with a long burst time may suffer starvation. Starvation occurs when such a process is kept waiting indefinitely and is never allocated the CPU. It is prevented by aging.
• The total execution time of each process must be known beforehand.
• What is aging?
• Aging is a technique which is used to reduce starvation of the
processes.
• Aging takes into account the waiting time of the process in the
ready queue and gradually increases the priority of the process.
• Hence, as the priority of a waiting process increases, every process is guaranteed to complete eventually.
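A toy sketch of the aging idea (the fields and the one-point-per-tick rule are assumptions for illustration, not a real scheduler):

#include <stdio.h>

struct job {
    const char *name;
    int priority;      /* higher value = higher priority (assumed convention) */
    int waited;        /* ticks spent in the ready queue                      */
};

/* Assumed aging rule: each tick spent waiting raises priority by 1,
   so even a long, low-priority job cannot starve forever.            */
static void age(struct job *queue, int n) {
    for (int i = 0; i < n; i++) {
        queue[i].waited++;
        queue[i].priority++;
    }
}

int main(void) {
    struct job queue[] = { { "long_job", 1, 0 }, { "short_job", 5, 0 } };
    for (int tick = 0; tick < 10; tick++)
        age(queue, 2);
    printf("%s priority after 10 ticks: %d\n", queue[0].name, queue[0].priority);  /* 11 */
    return 0;
}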