Chapter 2
Processes and Threads

2.1 Processes
2.2 Threads
2.3 Interprocess communication
2.4 Classical IPC problems
2.5 Scheduling




                                 1
Processes
                  The Process Model




• Multiprogramming of four programs
• Conceptual model of 4 independent, sequential processes
• Only one program active at any instant

                                                       2
Process Creation

Principal events that cause process creation

1. System initialization
2. Execution of a process-creation system call by a
   running process.
3. User request to create a new process
4. Initiation of a batch job


                                                 3
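
On UNIX, the process-creation system call in item 2 is fork(), usually followed by an
exec-family call. A minimal hedged sketch follows; the choice of running "ls -l" is
only an illustration, not something the slides prescribe.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();                          /* process-creation system call */
    if (pid < 0) {                               /* fork failed */
        perror("fork");
        exit(1);
    } else if (pid == 0) {                       /* child process */
        execlp("ls", "ls", "-l", (char *)NULL);  /* child replaces its image */
        perror("execlp");                        /* reached only if exec fails */
        _exit(1);
    }
    waitpid(pid, NULL, 0);                       /* parent waits for the child */
    return 0;                                    /* normal exit (voluntary) */
}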
Process Termination

Conditions which terminate processes
1. Normal exit (voluntary)
2. Error exit (voluntary)
3. Fatal error (involuntary)
4. Killed by another process (involuntary)



                                             4
Process Hierarchies

• Parent creates a child process; child processes
  can in turn create their own processes
• Forms a hierarchy
  – UNIX calls this a "process group"
• Windows has no concept of process hierarchy
  – all processes are created equal



                                                    5
Process States (1)
                        • Process Transitions




• Possible process states
   – running
   – blocked
   – ready
• Transitions between states shown
                                                6
Process States (2)




• Lowest layer of process-structured OS
  – handles interrupts, scheduling
• Above that layer are sequential processes
                                              7
Implementation of Processes
The OS organizes the data about each process in a table naturally
called the process table. Each entry in this table is called
a process table entry or process control block (PCB).

                Characteristics of the process table.

1.One entry per process.
2.The central data structure for process management.
3.A process state transition (e.g., moving from blocked to ready)
is reflected by a change in the value of one or more fields in the
PCB.
4.We have converted an active entity (process) into a data
structure (PCB). Finkel calls this the level principle: an active entity
becomes a data structure when looked at from a lower level.
                                                                          8
Implementation of Processes
A process in an operating system is represented by a
data structure known as a Process Control Block (PCB)
or process descriptor.

The PCB contains important information about the
specific process including

1.The current state of the process, i.e., whether it is
ready, running, waiting, etc.
2.Unique identification of the process in order to track
"which is which" information.
3.A pointer to parent process.
                                                           9
Implementation of Processes
4. Similarly, a pointer to child process (if it exists).
5. The priority of process (a part of CPU scheduling
   information).
6. Pointers to locate memory of processes.
7. A register save area.
8. The processor it is running on.

The PCB is a central store that allows the operating
   system to locate key information about a process.
   Thus, the PCB is the data structure that defines a
   process to the operating system.

                                                       10
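
A hedged sketch of how the fields listed above might be grouped in a process table
entry; the exact layout is OS-specific, and the field names below are illustrative
only, not a real kernel's definition.

/* illustrative process table entry (PCB); real kernels store many more fields */
struct pcb {
    int            pid;              /* 2. unique identification of the process */
    int            state;            /* 1. ready, running, blocked, ...         */
    struct pcb    *parent;           /* 3. pointer to the parent process        */
    struct pcb    *children;         /* 4. pointer to child processes, if any   */
    int            priority;         /* 5. CPU-scheduling information           */
    void          *memory_map;       /* 6. pointer to the process's memory      */
    unsigned long  regs[16];         /* 7. register save area                   */
    int            cpu;              /* 8. processor it is running on           */
};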
Process Control Block   11
Process Control Block
                        12
Process Table
     PID                   PCB
      1                     .
      2                     .
      .                     .
      n                     .

(Diagram: each process table entry for PID 1..n points to that
process's Process Control Block.)
                                                                        13
14
Process States




                 15
Implementation of Processes (2)




Skeleton of what lowest level of OS does when an
    interrupt occurs

                                                   16
Implementation of Processes (1)




    Fields of a process table entry
                                      17
Threads
            The Thread Model (1)




(a) Three processes each with one thread
(b) One process with three threads
                                           18
The Thread Model (2)




• Items shared by all threads in a process
• Items private to each thread
                                             19
The Thread Model (3)




Each thread has its own stack
                                20
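
A small POSIX-threads sketch of the point above: a global variable is one copy
shared by every thread in the process, while each thread's local variables live on
its own stack. The thread count and values are arbitrary.

#include <pthread.h>
#include <stdio.h>

int shared_value = 42;                   /* shared by all threads in the process   */

static void *worker(void *arg)
{
    long id = (long)arg;                 /* 'id' lives on this thread's own stack  */
    printf("thread %ld sees shared_value = %d\n", id, shared_value);
    return NULL;
}

int main(void)
{
    pthread_t t[3];
    for (long i = 0; i < 3; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 3; i++)
        pthread_join(t[i], NULL);
    return 0;
}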
Thread Usage (1)




A word processor with three threads
                                      21
Thread Usage (2)




A multithreaded Web server
                             22
Thread Usage (3)




• Rough outline of code for previous slide
  (a) Dispatcher thread
  (b) Worker thread

                                             23
Thread Usage (4)




Three ways to construct a server


                                   24
Implementing Threads in User Space




        A user-level threads package
                                       25
Implementing Threads in the Kernel




  A threads package managed by the kernel
                                            26
Hybrid Implementations




Multiplexing user-level threads onto
kernel-level threads
                                       27
Scheduler Activations

• Goal – mimic functionality of kernel threads
   – gain performance of user space threads
• Avoids unnecessary user/kernel transitions
• Kernel assigns virtual processors to each process
   – lets runtime system allocate threads to processors
• Problem:
    Fundamental reliance on kernel (lower layer)
    calling procedures in user space (higher layer)


                                                          28
Pop-Up Threads




• Creation of a new thread when message arrives
  (a) before message arrives
  (b) after message arrives                   29
Making Single-Threaded Code Multithreaded (1)




Conflicts between threads over the use of a global variable

                                                          30
Making Single-Threaded Code Multithreaded (2)




   Threads can have private global variables
                                                31
Interprocess Communication (IPC)

• Processes frequently need to communicate with
  other processes. (Ex: a shell pipeline)
• Interrupts are one way to achieve IPC.

• But we require a well-structured way to
  achieve IPC.

                                                32
Interprocess Communication (IPC)
• Issues to be considered:
1.How one process can pass information to
  another process.
2.Making sure that two or more processes don't
  get in each other's way when entering their
  critical regions.
3.Proper sequencing of processes when
  dependencies are present.
     Ex: Process A produces data &
            Process B has to print this data
                                               33
Interprocess Communication
         Race Conditions
• In an OS, processes working together may
  share resources (storage).

• Shared storage
1. may be in primary memory
2. may be a shared file.


                                           34
IPC – Race conditions
1. A process that wants to print a
   file enters the file name in a
   special (shared) spooler
   directory.
2. Another process, the
   printer daemon, periodically
   checks if there are any files
   to be printed; if there
   are, it prints them and then
   removes their names from        Print Spooler
   the directory.
 Two processes want to access shared memory at the same
                           time                        35
IPC – Race conditions
                here,

                in: points to the next free
                  slot in the directory

                out: points to the next file
                  to be printed
                Both in and out are shared variables.
Print Spooler
                                            36
IPC – Race conditions
                Following might happen:
                1. Process A reads in and stores
                   the value 7 in a local variable
                   called next_free_slot.
                2. Just then a clock interrupt occurs
                   and the CPU decides that process
                   A has run long enough.
                3. It switches to process B.
                4. Process B also reads in and also
                   gets a 7.
                5. It too stores 7 in its local
                   variable next_free_slot.
Print Spooler
                                                  37
IPC – Race conditions
                6. Process B continues to run
                    and stores the name of its file
                    in slot 7 and updates in to 8.
                7. Now process B goes off and
                    does other things.
                8. Eventually process A runs
                    again, starting from the place
                    where it left off.
                9. It looks at next_free_slot.
                10. It finds 7 there.
                11. It writes its file name in slot 7,
                    erasing the name that
Print Spooler       process B just put there.
                                                    38
IPC – Race conditions
                12. Then it computes next_free_slot
                    + 1, which is 8.
                13. Now it sets in to 8.
                14. The spooler directory is now
                    internally consistent.
                15. So the printer daemon
                    process will not notice any-
                    thing wrong.
                16. But process B never gets its
                    job done.
                17. Situations like this are known as
                    RACE CONDITIONS.
Print Spooler
                                                  39
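
A hedged sketch of the race described above, using two threads instead of two
processes; directory, in and submit_job are illustrative names, and the lack of
locking is deliberate so the lost update can occur.

#include <pthread.h>
#include <stdio.h>
#include <string.h>

#define SLOTS 100
char directory[SLOTS][32];                   /* shared spooler directory            */
int in = 0;                                  /* shared: index of the next free slot */

static void *submit_job(void *name)
{
    int next_free_slot = in;                 /* read the shared variable            */
    /* a context switch here lets the other thread read the same slot number       */
    strncpy(directory[next_free_slot], (char *)name, 31);
    in = next_free_slot + 1;                 /* write back: a lost update is possible */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, submit_job, (void *)"procA.txt");
    pthread_create(&b, NULL, submit_job, (void *)"procB.txt");
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("in = %d (should be 2; may be 1 if the race strikes)\n", in);
    return 0;
}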
Mutual exclusion & Critical Regions
• We must avoid race conditions by finding some
  way to prohibit more than one process from reading
  and writing the shared data at the same time.

• We can achieve this by doing
          MUTUAL EXCLUSION.



                                              40
Mutual exclusion & Critical Regions

• MUTUAL EXCLUSION: some way of making
  sure that if one process is using a shared
  variable or file, the other processes will be
  excluded from doing the same thing.

• CRITICAL REGION: the part of the program
  where the shared memory is accessed is called
  the critical region.

                                                  41
Mutual exclusion & Critical Regions
Conditions required to avoid race condition:

1.   No two processes may be simultaneously inside their
     critical regions.
2.   No assumptions may be made about speeds or the
     number of CPUs.
3.   No process running outside its critical region may block
     other processes.
4.   No process should have to wait forever to enter its
     critical region.



                                                                42
Mutual exclusion using critical regions
• CRITICAL REGION: the part of the program where the
  shared memory is accessed is called the critical region.
                                                             43
Mutual Exclusion with Busy Waiting
BUSY WAITING : Continually testing a variable until
  some value appears is called BUSY WAITING.

 Proposals for achieving mutual exclusion:

       •   Disabling interrupts
       •   Lock variables
       •   Strict alternation
       •   Peterson's solution
       •   The TSL instruction
                                                      44
Mutual Exclusion with Busy Waiting
             Disabling Interrupts
• It is the Simplest Solution
• Each Process should disable all interrupts just after entering its
   critical region
• Each Process should re-enable all interrupts just before leaving
   its critical region
• With interrupts disabled, No clock interrupts occur
• CPU can’t switch from process to process without clock interrupts
Disadvantages:
• What happens if a user process disables interrupts and then never
   turns them on again?
• If the system is a multiprocessor system, disabling interrupts
   affects only the CPU that executed the disable instruction
                                                                   45
Mutual Exclusion with Busy Waiting
            LOCK VARIABLES
• It is the Simplest software Solution
• We can have a single shared (Lock) variable
•  Keep initially 0
• When a process wants to enter its critical region, it first tests the
  lock variable.
• If the lock is zero, the process sets it to 1 and enters the
  critical region.
• If the lock is 1, the process just waits until it becomes 0.

Disadvantages:
• Unfortunately, this idea contains exactly the same
  problem that we saw in the spooler directory example.
                                                                  46
Mutual Exclusion with Busy Waiting (1)   Strict Alternation




Notice the semicolons terminating the while statements in
Fig. above

•Busy waiting: continuously testing a variable until some value
 appears, using it as a lock.

•A lock that uses busy waiting is called a spin lock.
•It should usually be avoided, since it wastes CPU time.
                                                                 47
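
Since the figure itself is not reproduced in this export, here is a sketch of strict
alternation for two processes; critical_region() and noncritical_region() are assumed
placeholders, not routines defined by the slides.

void critical_region(void);               /* assumed placeholder */
void noncritical_region(void);            /* assumed placeholder */

volatile int turn = 0;                    /* whose turn is it to enter the critical region? */

void process_0(void)
{
    while (1) {
        while (turn != 0)
            ;                             /* busy wait (note the lone semicolon) */
        critical_region();
        turn = 1;                         /* hand the turn to process 1 */
        noncritical_region();
    }
}

void process_1(void)
{
    while (1) {
        while (turn != 1)
            ;                             /* busy wait */
        critical_region();
        turn = 0;                         /* hand the turn back to process 0 */
        noncritical_region();
    }
}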
1. The integer variable turn (keeps track of whose turn it is
   to enter the CR),
2. Initially, process 0 inspects turn, finds it to be 0, and
   enters its CR,
3. Process 1 also finds it to be 0 and therefore sits in a tight
   loop continually testing turn to see when it becomes 1,
4. When process 0 leaves the CR, it sets turn to 1, to allow
   process 1 to enter its CR,
5. Suppose that process 1 finishes its CR quickly, so both
   processes are in their nonCR (with turn set to 0)
                                                                   48
6. Process 0 executes its whole loop quickly, exiting its CR
    and setting turn to 1.
7. At this point turn is 1 and both processes are executing in
    their nonCR.
8. Process 0 finishes its nonCR and goes back to the top of its
    loop.
9. Unfortunately, it is not permitted to enter its CR: turn is 1
    and process 1 is busy with its nonCR.
10. It hangs in its while loop until process 1 sets turn to 0.
11. This algorithm does avoid all races, but it violates condition 3:
    a process running outside its CR blocks another process.          49
Mutual Exclusion with Busy Waiting       TSL Instruction
• Let's take some help from the hardware.
• Many multiprocessor systems have an instruction –


                  TSL RX, Lock        (Test and Set Lock)
• This works as follows:


1. It reads the content of the memory word Lock into register RX and
   then stores a nonzero value at the memory address Lock
   (sets the lock)
2. No other processor can access the memory word until the
   instruction is finished
3. In other words, the CPU executing the TSL instruction locks the
   memory bus to prohibit other CPUs from accessing memory
   until it is done
                                                                  50
Mutual Exclusion with Busy Waiting           TSL Instruction

 1. To use the TSL instruction, we will use a shared variable, Lock, to co-
    ordinate access to shared memory
 2. When Lock = 0, any process may set it to 1 and enter the critical region
 3. When Lock = 1, no other process can enter until the lock is set back to 0




Entering and leaving a critical region using TSL Instruction
                                                                              51
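
The textbook gives enter_region/leave_region in assembly; a hedged C approximation is
sketched below, with the GCC/Clang builtin __sync_lock_test_and_set standing in for
TSL RX,Lock. This is an illustration, not the figure's exact code.

static volatile int lock = 0;                 /* 0 = free, nonzero = taken */

void enter_region(void)
{
    /* atomically read the old value of lock and store 1 into it (like TSL RX,Lock) */
    while (__sync_lock_test_and_set(&lock, 1) != 0)
        ;                                     /* lock was already set: busy wait (spin) */
}

void leave_region(void)
{
    __sync_lock_release(&lock);               /* store 0 back into lock */
}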
Peterson's Solution to achieve Mutual Exclusion.

Peterson’s algorithm is shown in Fig. 2-21.

This algorithm consists of two procedures written in ANSI C.

Before using the shared variables (i.e., before entering its critical
region), each process calls enter_region with its own process
number, 0 or 1, as parameter.

This call will cause it to wait, if need be, until it is safe to enter.

 After it has finished with the shared variables, the process calls
leave_region to indicate that it is done and to allow the other
process to enter, if it so desires.
Peterson's Solution
Let us see how this solution works.

1.Initially neither process is in its critical region.

2.Now process 0 calls enter_region.

3.It indicates its interest by setting its array element and sets turn
to 0.

4.Since process 1 is not interested, enter_region returns
immediately.

5.If process 1 now calls enter_region, it will hang there until
interested[0] goes to FALSE, an event that only happens when
process 0 calls leave_region to exit the critical region.
Peterson's Solution
6. Now consider the case that both processes call enter_region
   almost simultaneously.

7. Both will store their process number in turn.

8. Whichever store is done last is the one that counts; the first one
   is overwritten and lost.

9. Suppose that process 1 stores last, so turn is 1.

10. When both processes come to the while statement, process 0
    executes it zero times and enters its critical region.

11. Process 1 loops and does not enter its critical region until
    process 0 exits its critical region.
Mutual Exclusion with Busy Waiting (2)




Peterson's solution for achieving mutual exclusion 55
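A sketch of the two procedures along the lines of Fig. 2-21; note that on modern
hardware with instruction reordering, a real implementation would also need memory
barriers or atomics, which are omitted here.

#define FALSE 0
#define TRUE  1
#define N     2                      /* number of processes */

volatile int turn;                   /* whose turn is it?               */
volatile int interested[N];          /* all values initially FALSE (0)  */

void enter_region(int process)       /* process is 0 or 1 */
{
    int other = 1 - process;         /* the other process              */
    interested[process] = TRUE;      /* show that we are interested    */
    turn = process;                  /* set the flag                   */
    while (turn == process && interested[other] == TRUE)
        ;                            /* busy wait until it is safe to enter */
}

void leave_region(int process)
{
    interested[process] = FALSE;     /* we are out of the critical region */
}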
PRIORITY INVERSION PROBLEM
1. In Scheduling, priority inversion is the scenario where a
   low priority Task holds a shared resource, that is required
   by a high priority task.

2. This causes the execution of the high priority task to be
   blocked until the low priority task has released the
   resource, effectively “inverting” the relative priorities of
   the two tasks.

3. If some other medium priority task, one that does not
   depend on the shared resource, attempts to run in the
   interim, it will take precedence over both the low priority
   task and the high priority task.
                                                              56
PRIORITY INVERSION PROBLEM

Priority Inversion will

1.Make problems in real time systems.

2.Reduce the performance of the system

3.May reduce the system responsiveness
which leads to the violation of response
time guarantees.

                                           57
1. Consider Three Tasks A,B,C with priorities A > B > C.
2. Assume these tasks are served by a common server (Sequential).
3. Assume A & C share a critical resource.
4. Suppose C has the Server and acquires the resource.
5. A requests the server, Preempting C.
                                        PRIORITY INVERSION EXAMPLE
6. A then Wants the Resource.
7. Now C must take the server while A blocks waiting for C to
   release the resource.
8. Meanwhile B requests the server.
9. Since B > C, B can run arbitrarily long, all the while with A being
   blocked.
10. But A > B, which is an anomaly (priority inversion).                58
Sleep & Wakeup
• Both Peterson's solution and the TSL solution have
  the defect of requiring busy waiting

• So we can have some problems like,
1. CPU time is wasted
2. Priority Inversion Problem

These problems can be solved by using Sleep &
  Wakeup primitives (System Calls).

                                                    59
Sleep & Wakeup
• Sleep: Sleep is a system call that causes the
  caller to block, that is, be suspended until
  another process wakes it up

• Wakeup : the wakeup system call awakens a
  blocked process. It has one parameter, the
  process to be awakened.



                                                  60
Producer – Consumer Problem
         (Bounded Buffer Problem)
• It consists of two processes, Producer &
  Consumer

• They share a common fixed size Buffer

• Producer puts information into Buffer

• Consumer takes information out of buffer

                                             61
Producer – Consumer Problem
         (Bounded Buffer Problem)
• Trouble:
   When the producer wants to put a new item into
   the buffer, but the buffer is already full


• Solution:
1. The producer goes to sleep,
2. to be awakened when the consumer removes an item
   or items from the buffer


                                                    62
Producer – Consumer Problem
         (Bounded Buffer Problem)
• Trouble:
  When the consumer wants to take an item
  from the buffer, but the buffer is empty.


• Solution:
1. The consumer goes to sleep,
2. to be awakened when the producer puts information
   in the buffer


                                                  63
64
65
Sleep and Wakeup
  Producer Module




  Producer-consumer problem with a fatal race condition
Reason: access to count is unconstrained (example from the book)   66
Sleep and Wakeup
  Consumer Module




  Producer-consumer problem with a fatal race condition
Reason: access to count is unconstrained (a sketch follows below)   67
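
A sketch of the kind of code the two figure slides show (following the textbook's
example); produce_item, insert_item, remove_item, consume_item, sleep and wakeup are
assumed helpers, and the unprotected count is exactly what causes the lost-wakeup
race described on the following slides.

#define N 100                               /* number of slots in the buffer      */
int count = 0;                              /* items in the buffer (unprotected!)  */

/* assumed helpers, following the textbook's figure; not defined here */
int  produce_item(void);
void insert_item(int item);
int  remove_item(void);
void consume_item(int item);
void sleep(void);
void wakeup(void (*process)(void));

void producer(void);
void consumer(void);

void producer(void)
{
    int item;
    while (1) {
        item = produce_item();
        if (count == N) sleep();            /* buffer full: go to sleep            */
        insert_item(item);
        count = count + 1;                  /* unconstrained access to count       */
        if (count == 1) wakeup(consumer);   /* buffer was empty: wake the consumer */
    }
}

void consumer(void)
{
    int item;
    while (1) {
        if (count == 0) sleep();            /* buffer empty: go to sleep           */
        item = remove_item();
        count = count - 1;
        if (count == N - 1) wakeup(producer);  /* buffer was full: wake the producer */
        consume_item(item);
    }
}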
Sleep and Wakeup
• Because count is accessed in an unconstrained
  manner, a fatal race condition occurs here,
• so some wakeup calls are lost.
• A wakeup waiting bit can be used to avoid this:
• the bit is set when a wakeup is sent to a process
  that is still awake.
• Later, when the process tries to go to sleep, if the
  wakeup bit is set, the bit is turned off and the
  process remains awake.
                                                   68
Problem With Sleep and Wakeup
The problem with this solution is that it contains a race
condition that can lead to a deadlock. Consider the following
scenario:


1.The consumer has just read the variable itemCount, noticed
it's zero and is just about to move inside the if-block.

2.Just before calling sleep, the consumer is interrupted and the
producer is resumed.

3.The producer creates an item, puts it into the buffer, and
increases itemCount.
                                                               69
Problem With Sleep and Wakeup
4.Because the buffer was empty prior to the last addition, the
producer tries to wake up the consumer.

5.Unfortunately the consumer wasn't yet sleeping, and the
wakeup call is lost. When the consumer resumes, it goes to
sleep and will never be awakened again. This is because the
consumer is only awakened by the producer when itemCount
is equal to 1.

6.The producer will loop until the buffer is full, after which it
will also go to sleep.

7.Since both processes will sleep forever, we have run into a
deadlock. This solution therefore is unsatisfactory.
                                                                70
Semaphores
• A semaphore is an integer variable.
• It is used to count the number of wakeups
  saved for future use.
• A semaphore can have –
• Value 0: no wakeups were saved
• Value positive integer: indicates pending wakeups
Semaphore operations:
                  1. Down operation
                  2. Up operation
                                                 71
Operations on Semaphores
• Down operation
1.It checks the value of the semaphore.
2.If it is greater than zero, it decrements the
  value by 1 and just continues.
3.If it is zero, the process is put to sleep
  without completing the down for the moment.
4.All these operations are done as a single,
  indivisible atomic action.

                                                  72
Operations on Semaphores
• UP operation
1. It increments the value of the semaphore addressed.
2. If one or more processes were sleeping on that semaphore,
   unable to complete an earlier down, one of them is chosen by
   the system
3. and is allowed to complete its down (decrementing the semaphore
   by 1).
4. Thus, after an up on a semaphore with processes sleeping
   on it, the semaphore will still be 0,
5. but there will be one fewer process sleeping on it.
6. The above operation is totally indivisible.
7. No process ever blocks doing an up.                       73
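
A hedged sketch of the down/up behaviour just described, emulated with a POSIX mutex
and condition variable (POSIX also provides counting semaphores directly via
sem_wait/sem_post). Initialization of the struct members is assumed to happen before use.

#include <pthread.h>

typedef struct {
    int             value;           /* number of saved wakeups                  */
    pthread_mutex_t lock;
    pthread_cond_t  nonzero;
} sema_t;                            /* a real program must initialize all three fields */

void down(sema_t *s)
{
    pthread_mutex_lock(&s->lock);    /* check-and-decrement is one atomic action */
    while (s->value == 0)
        pthread_cond_wait(&s->nonzero, &s->lock);  /* value is 0: go to sleep    */
    s->value--;                      /* value > 0: decrement and continue        */
    pthread_mutex_unlock(&s->lock);
}

void up(sema_t *s)
{
    pthread_mutex_lock(&s->lock);
    s->value++;                      /* increment the semaphore ...              */
    pthread_cond_signal(&s->nonzero);/* ... and let one sleeper finish its down  */
    pthread_mutex_unlock(&s->lock);
}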
Producer – Consumer Problem using Semaphore
• This solution uses three semaphores
           (1) full (2) empty & (3) mutex
Full : Full is used for counting the number of slots that are full

Empty: Empty is used for counting the number of slots that are
       empty

Mutex: Mutex is used to make sure that Producer & Consumer
        don’t access the buffer at the same time
Semaphores are used here in two different ways:
1. For synchronization (full and empty)
2. To guarantee mutual exclusion (mutex)
                                                                     81
Semaphores : Producer




                        82
Semaphores : Consumer




                        83
Semaphores




The producer-consumer problem using semaphores   84
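
Since the figures are not reproduced in this export, here is a hedged, self-contained
sketch using POSIX semaphores and a pthread mutex; the buffer size and item count are
arbitrary, and down/up correspond to sem_wait/sem_post.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 8                                   /* number of buffer slots */

static int buffer[N];
static int in = 0, out = 0;

static sem_t empty;                           /* counts empty slots, starts at N */
static sem_t full;                            /* counts full slots, starts at 0  */
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

static void *producer(void *arg)
{
    (void)arg;
    for (int item = 0; item < 32; item++) {
        sem_wait(&empty);                     /* down(empty) */
        pthread_mutex_lock(&mutex);           /* down(mutex) */
        buffer[in] = item;
        in = (in + 1) % N;
        pthread_mutex_unlock(&mutex);         /* up(mutex)   */
        sem_post(&full);                      /* up(full)    */
    }
    return NULL;
}

static void *consumer(void *arg)
{
    (void)arg;
    for (int i = 0; i < 32; i++) {
        sem_wait(&full);                      /* down(full)  */
        pthread_mutex_lock(&mutex);
        int item = buffer[out];
        out = (out + 1) % N;
        pthread_mutex_unlock(&mutex);
        sem_post(&empty);                     /* up(empty)   */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&empty, 0, N);
    sem_init(&full, 0, 0);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}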
Mutexes
• A mutex is a variable
• It can be in one of two states: unlocked or
  locked
• Only one bit is required to represent it
• In practice an integer value is often used, with 0
  meaning unlocked and all other values meaning
  locked
• When a process (or thread) needs access to a
  critical region, it calls mutex_lock
• If the mutex is currently unlocked, the call succeeds
  and the calling process (or thread )is free to enter
  the critical region                                     85
Mutexes
• On the other hand, if mutex is already locked,
  the calling process (or thread) is blocked until
  the process (or thread) in the critical region is
  finished and calls mutex_unlock.
• Because mutexes are so simple, they can easily
  be implemented in user space if a TSL
  instruction is available
• The code for mutex_lock and mutex_unlock for
  use with a user level threads package

                                                      86
Mutexes
The code for mutex_lock and mutex_unlock for
use with a user-level threads package is
shown below.




Implementation of mutex_lock and mutex_unlock
                                                87
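
The figure itself is not reproduced here; a hedged C approximation of the idea is
sketched below, with the GCC/Clang builtin __sync_lock_test_and_set standing in for
TSL and POSIX sched_yield() standing in for thread_yield.

#include <sched.h>

static volatile int mutex = 0;      /* 0 = unlocked, 1 = locked */

void mutex_lock(void)
{
    while (__sync_lock_test_and_set(&mutex, 1) != 0)
        sched_yield();              /* mutex busy: give up the CPU instead of spinning */
}

void mutex_unlock(void)
{
    __sync_lock_release(&mutex);    /* store 0: unlocked */
}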
Monitors (1)




Example of a monitor
                       88
Monitors (2)




• Outline of producer-consumer problem with monitors
  – only one monitor procedure active at one time
  – buffer has N slots                                 89
Monitors (3)




Solution to producer-consumer problem in Java (part 1)   90
Monitors (4)




Solution to producer-consumer problem in Java (part 2)
                                                         91
Message Passing




The producer-consumer problem with N messages
                                                92
MONITORS
• The Problem With Semaphores
• Suppose that the two downs in the producer's code
  were reversed in order...
• Both processes would stay blocked forever.
• If resources are not tightly controlled, "chaos
  will ensue"
  - race conditions
  • To make it easier to write correct programs, a
    higher-level synchronization primitive called
    a monitor was proposed.
The Solution
• Monitors provide control by allowing only one process
  to access a critical resource at a time
• A monitor is a collection of procedures, variables and
  data structures that are all grouped together in a
  special kind of module or package.
• Processes may call the procedures in a monitor
  whenever they want to, but they cannot directly access
  the monitor's internal data structures from procedures
  declared outside the monitor.
• Monitors have an important property that makes them
  useful for achieving mutual exclusion: only one process
  can be active in a monitor at any instant.
• A monitor may only access its own local variables
An Abstract Monitor
name : monitor
  … some local declarations
  … initialize local data

  procedure name(…arguments)
     … do some work

 … other procedures
Monitors




Example of a monitor
                       96
Monitors




• Outline of producer-consumer problem with monitors
  – only one monitor procedure active at one time
  – buffer has N slots                                 97
Things Needed to Enforce Monitor
• A solution lies in the introduction of condition
  variables, along with two operations on them,
  wait & signal
• “Wait” operation
  – Forces running process to sleep
• “signal” operation
  – Wakes up a sleeping process
• A condition (Condition variable)
  – Something to store who’s waiting for a particular
    reason
  – Implemented as a queue
A Running Example – Kitchen
kitchen : monitor                            Monitor
                                            Declaration

  occupied : Boolean; occupied := false;
  nonOccupied : condition;                 Declarations /
                                            Initialization
  procedure enterKitchen
      if occupied then nonOccupied.wait;
      occupied = true;
                                            Procedure

  procedure exitKitchen
      occupied = false;
                                            Procedure
      nonOccupied.signal;
Multiple Conditions
• Sometimes desirable to be able to wait on multiple
  things
• Can be implemented with multiple conditions

•   Example:
•   Two reasons to enter kitchen
-   cook (remove clean dishes)
-   clean (add clean dishes)
•   Two reasons to wait:
     – Going to cook, but no clean dishes
     – Going to clean, no dirty dishes
Emerson’s Kitchen
kitchen : monitor

  cleanDishes, dirtyDishes : condition;
  dishes, sink : stack;      dishes := stack of 10 dishes
                             sink := stack of 0 dishes

  procedure cook
       if dishes.isEmpty then cleanDishes.wait
       sink.push ( dishes.pop );
       dirtyDishes.signal;

  procedure cleanDish
       if sink.isEmpty then dirtyDishes.wait
       dishes.push (sink.pop)
       cleanDishes.signal
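
C has no built-in monitors, but the kitchen monitor above can be emulated with a
POSIX mutex (enforcing the one-process-in-the-monitor rule) and a condition variable.
This is only a sketch under that assumption; the while loop guards against spurious
or Mesa-style wakeups.

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  nonOccupied  = PTHREAD_COND_INITIALIZER;
static bool occupied = false;

void enterKitchen(void)
{
    pthread_mutex_lock(&monitor_lock);        /* only one thread "inside the monitor" */
    while (occupied)                          /* the wait() on the condition variable */
        pthread_cond_wait(&nonOccupied, &monitor_lock);
    occupied = true;
    pthread_mutex_unlock(&monitor_lock);
}

void exitKitchen(void)
{
    pthread_mutex_lock(&monitor_lock);
    occupied = false;
    pthread_cond_signal(&nonOccupied);        /* the signal() wakes one waiter */
    pthread_mutex_unlock(&monitor_lock);
}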
Condition Queue
• Checking if any process is waiting on a
  condition:
  – “condition.queue” returns true if a process is
    waiting on condition
• Example: Doing dishes only if someone is
  waiting for them
Summary
• Advantages
  – Data access synchronization simplified (vs.
    semaphores or locks)
  – Better encapsulation
• Disadvantages:
  – Deadlock still possible (in monitor code)
  – Programmer can still botch use of monitors
  – No provision for information exchange between
    machines
Interprocess Communication (IPC)
   Mechanism for processes to communicate and
    synchronize their actions.
           Via shared memory
           Via Messaging system - processes communicate without resorting
            to shared variables.
   Messaging system and shared memory not mutually
    exclusive -
               can be used simultaneously within a single OS or a single
                process.
   IPC facility provides two operations.
               send(message) - message size can be fixed or variable
               receive(message)
Producer-Consumer using IPC
     Producer
        repeat
           …
           produce an item in nextp;
          …
          send(consumer, nextp);
        until false;
     Consumer
        repeat
           receive(producer, nextc);
           …
           consume item from nextc;
          …
        until false;
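
A hedged sketch of the same producer-consumer loop using an actual message channel:
a UNIX pipe, where write() plays the role of send() and read() the role of receive().
The item count is arbitrary.

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];
    if (pipe(fd) == -1) return 1;

    if (fork() == 0) {                  /* child: consumer */
        close(fd[1]);
        int item;
        while (read(fd[0], &item, sizeof item) == sizeof item)   /* receive()  */
            printf("consumed %d\n", item);
        close(fd[0]);
        _exit(0);
    }

    close(fd[0]);                       /* parent: producer */
    for (int item = 0; item < 10; item++)
        write(fd[1], &item, sizeof item);                        /* send()     */
    close(fd[1]);
    wait(NULL);
    return 0;
}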
IPC via Message Passing

   If processes P and Q wish to communicate,
    they need to:
          establish a communication link between them
          exchange messages via send/receive
   Fixed vs. Variable size message
          Fixed message size - straightforward physical
           implementation, programming task is difficult due
           to fragmentation
          Variable message size - simpler programming,
           more complex physical implementation.
Direct Communication
   Sender and Receiver processes must name
    each other explicitly:
         send(P, message) - send a message to process P
         receive(Q, message) - receive a message from
          process Q
   Properties of communication link:
         Links are established automatically.
         A link is associated with exactly one pair of
          communicating processes.
         Exactly one link between each pair.
         Link may be unidirectional, usually bidirectional.
Indirect Communication
   Messages are directed to and received from mailboxes
    (also called ports)
          Unique ID for every mailbox.
           Processes can communicate only if they share a
            mailbox.
            Send(A, message)    /* send message to mailbox A */
            Receive(A, message) /* receive message from mailbox A */
    Properties of communication link
           Link established only if processes share a common
            mailbox.
           Link can be associated with many processes.
           Pair of processes may share several communication links.
Indirect Communication using
mailboxes
Mailboxes (cont.)
   Operations
     create a new mailbox
      send/receive messages through mailbox
      destroy a mailbox
   Issue: Mailbox sharing
      P1, P2 and P3 share mailbox A.
       P1 sends a message; P2 and P3 receive… who
       gets the message?
   Possible Solutions
        disallow links between more than 2 processes
        allow only one process at a time to execute receive
         operation
         allow the system to arbitrarily select the receiver and then
          notify the sender
Barriers
This mechanism is used for groups of processes rather than two-
process producer-consumer type of situations




    • Use of a barrier
        – processes approaching a barrier
        – all processes but one blocked at barrier
        – last process arrives, all are let through               112
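
A minimal POSIX sketch of the behaviour described above: every worker blocks in
pthread_barrier_wait until the last one arrives, then all are let through. The number
of workers and the phase names are arbitrary.

#include <pthread.h>
#include <stdio.h>

#define WORKERS 4
static pthread_barrier_t barrier;

static void *worker(void *arg)
{
    long id = (long)arg;
    printf("worker %ld: phase 1 done\n", id);
    pthread_barrier_wait(&barrier);           /* block until all WORKERS arrive */
    printf("worker %ld: phase 2 starts\n", id);
    return NULL;
}

int main(void)
{
    pthread_t t[WORKERS];
    pthread_barrier_init(&barrier, NULL, WORKERS);
    for (long i = 0; i < WORKERS; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < WORKERS; i++)
        pthread_join(t[i], NULL);
    pthread_barrier_destroy(&barrier);
    return 0;
}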
Dining Philosophers (1)


•   Philosophers eat/think
•   Eating needs 2 forks
•   Pick one fork at a time
•   How to prevent deadlock




                                     113
Dining Philosophers (2)




A nonsolution to the dining philosophers problem
                                                   114
Dining Philosophers (3)




Solution to dining philosophers problem (part 1) 115
Dining Philosophers (4)




Solution to dining philosophers problem (part   116
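
The textbook's full solution (shown in the figure slides above, not reproduced in
this export) uses a state array and one semaphore per philosopher. As an alternative
illustration of deadlock prevention, the sketch below breaks the circular wait by
always picking up the lower-numbered fork first; the names and the number of rounds
are arbitrary.

#include <pthread.h>
#include <stdio.h>

#define N 5
static pthread_mutex_t fork_mutex[N];

static void take_forks(int i)
{
    int left = i, right = (i + 1) % N;
    int first  = left < right ? left : right;     /* lower-numbered fork first */
    int second = left < right ? right : left;
    pthread_mutex_lock(&fork_mutex[first]);
    pthread_mutex_lock(&fork_mutex[second]);      /* circular wait is impossible */
}

static void put_forks(int i)
{
    pthread_mutex_unlock(&fork_mutex[i]);
    pthread_mutex_unlock(&fork_mutex[(i + 1) % N]);
}

static void *philosopher(void *arg)
{
    long i = (long)arg;
    for (int round = 0; round < 3; round++) {
        /* think */
        take_forks(i);
        printf("philosopher %ld eats\n", i);
        put_forks(i);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[N];
    for (int i = 0; i < N; i++) pthread_mutex_init(&fork_mutex[i], NULL);
    for (long i = 0; i < N; i++) pthread_create(&t[i], NULL, philosopher, (void *)i);
    for (int i = 0; i < N; i++) pthread_join(t[i], NULL);
    return 0;
}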
The Readers and Writers Problem




A solution to the readers and writers problem   117
The Sleeping Barber Problem (1)




                                  118
The Sleeping Barber Problem (2)




          Solution to sleeping barber problem.   119
Scheduling
          Introduction to Scheduling (1)




• Bursts of CPU usage alternate with periods of I/O wait
   – A CPU/Compute-bound process – Spends most of the time
     in computing. They have long CPU Bursts and infrequent I/O
     waits
   – An I/O bound process - Spends most of the time waiting for
     I/O. They have Short CPU Bursts and frequent I/O waits     120
Introduction to Scheduling
        Types of Scheduling Algorithms
• Non –Preemptive : a non-preemptive scheduling
  algorithm picks a process to run and then just
  lets it run until it blocks OR until it voluntarily
  releases CPU. It can’t be forcibly suspended
• Preemptive: a preemptive scheduling algorithm
  picks a process and lets it run for a maximum of
  some fixed time. If it is still running at the end of
  the time interval, it is suspended and scheduler
  picks another process to run.
                                                      121
Categories of Scheduling Algorithms


      •   Batch
      •   Interactive
      •   Real time
Introduction to Scheduling (2)
    Scheduling Algorithm Goals




                                 123
Scheduling in Batch Systems
• There are following methods-

1.First – Come – First – Serve
2.Shortest Job First
3.Shortest Remaining Time Next
4.Three level Scheduling




                                 124
Scheduling in Batch Systems
• First – Come – First – Serve method:
1.Simplest non-preemptive algorithm
2.Processes are assigned the CPU in the order
  they request it
3.Basically there is a single queue of ready
  processes
4.It is very easy to understand and program
5.A single linked list keeps track of all ready
  processes
                                                  125
Scheduling in Batch Systems
FCFS – Example : (With the arrival at Same Time)




            Average turnaround time is
      (20 + 30 + 55 + 70 + 75) / 5 = 250/5 = 50
                                                   126
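
A small sketch that reproduces the average above; the individual burst times are an
assumption chosen to be consistent with the completion times 20, 30, 55, 70 and 75
quoted on the slide (all jobs arriving at time 0).

#include <stdio.h>

int main(void)
{
    int burst[] = {20, 10, 25, 15, 5};   /* assumed bursts matching the slide's completions */
    int n = 5, finish = 0, total = 0;
    for (int i = 0; i < n; i++) {
        finish += burst[i];              /* FCFS: each job starts when the previous one ends */
        total  += finish;                /* arrival at time 0, so turnaround = completion    */
    }
    printf("average turnaround = %d / %d = %d\n", total, n, total / n);
    return 0;
}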
FCFS – Example : (With the arrival at Different Times)




                                                         127
Scheduling in Batch Systems
• FCFS Disadvantages:
What happens when –
1.One compute-bound process runs for one
  second at a time and then goes for a disk read (CPU
  will remain idle)
2.Many I/O-bound processes use little CPU
  time but each have to perform 1000 disk reads
  to complete (CPU will remain idle)

                                                  128
Scheduling in Batch Systems
              Shortest Job First method
Working:
       Here, when several equally important jobs are sitting
in the input queue waiting to be started, the scheduler
picks the shortest job first.




         Average Turn Around time here:
      (5 + 15 +30 + 50 + 75 ) / 5 = 175/5 = 35
    An example of shortest job first scheduling
                                                         129
Shortest Job First




Figure 2-40. An example of shortest job first scheduling.
    (a) Running four jobs in the original order. (b) Running them
    in shortest job first order.

 Tanenbaum, Modern Operating Systems 3 e, (c) 2008 Prentice-Hall, Inc. All rights reserved. 0-13-6006639
Preemptive Shortest job Scheduling




                                     131
Scheduling in Batch Systems
   • It is worth pointing out that shortest job first is
     only optimal when all the jobs are available
     simultaneously
   • See following example:
      Processes      A      B       C         D         E
      Run times      2      4       1         1         1
     Arrival times   0      0       3         3         3


Here we can run SJF in two orders like ABCDE or BCDEA

Average Turn. time (ABCDE) = [(2-0)+(6-0)+(7-3)+(8-3)+(9-3)]/5 = 23/5 = 4.6
Average Turn. time (BCDEA) = ?
                                                                 132
Three-level scheduling in Batch Systems
• The admission scheduler decides which jobs to admit to the
  system; it is used to balance compute-bound and I/O-bound jobs.
• The memory scheduler decides which jobs are to be kept in
  memory and which are to be swapped out, to handle the
  memory-space problem.
• The CPU scheduler decides which of the admitted, in-memory jobs
  is to be given the CPU first.
                                                                   133
Scheduling in Interactive Systems (1)
1. Round Robin Scheduling
2. Priority Scheduling




• Round Robin Scheduling
 – list of runnable processes (a)
 – list of runnable processes after B uses up its quantum(b)
                                                          134
Priority Scheduling
1. A priority number (integer) is associated with
   each process
2. The CPU is allocated to the process with the
   highest priority

    Normally (smallest integer = highest priority)

It can be:
• Preemptive
• Non-preemptive
Priority Scheduling Example With Same Arrival Time

     Processes   Burst time   Priority   Arrival time
        P1           10           3           00
        P2            1           1           00
        P3            2           4           00
        P4            1           5           00
        P5            5           2           00

    P2       P5           P1           P3            P4
0        1        6                  16     18            19
    The average waiting time:

    = ((16-10) + (1-1) + (18-2) + (19-1) + (6-5))/5
    = (6+0+16+18+1)/5 = 41/5 = 8.2
Priority Scheduling Example
         With Different Arrival Time
     Processes Burst time Priority Arrival time
         P1       10       3            00
         P2        1       1             1
         P3        2       4             2
         P4        1       5             3
         P5        5       2             4

The average waiting time:

=(( ? ) + ( ? ) + ( ? ) + ( ? ) + ( ? ))/5
= ( ? +?+?+?+?)/5 = ?/5 = ?
Priority Scheduling

Problem :
Starvation – low priority processes may never

            execute

Solution :
Aging – As time progresses increase the
        priority of the process
Round-Robin Scheduling

• The Round-Robin is designed especially for
  time sharing systems.
• Similar to FCFS but adds preemption concept
• Each process gets a small unit of CPU time
  (time quantum), usually 10-100 milliseconds
• After this time has elapsed, the process is
  preempted and added to the end of the ready
  queue.
Round-Robin Scheduling Example
Time Quantum : 20ms     Arrival Time : 00 (Simultaneously)




  The average turnaround time (all arrive at 0, so turnaround = completion):
  = ((134) + (37) + (162) + (121))/4 = 454/4 = 113.5
Round Robin scheduling Example
                   Time Quantum here : 04ms
                 Process        Arrival Time     Service time
                    1                0                 8
                    2                1                 4
                    3                2                 9
                    4                3                 5

        P1        P2       P3         P4    P1        P3    P4        P3



    0        4         8         12        16    20        24    25        26

The average turnaround time:
= ((20-0) + (8-1) + (26-2) + (25-3))/4 = 73/4 = 18.25
                                                                                141
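
A hedged sketch that simulates the example above (quantum 4, arrivals 0-3, service
times 8, 4, 9, 5) and prints the average turnaround time of 18.25; the tie-breaking
rule (newly arrived processes are queued before the preempted one) matches the Gantt
chart shown.

#include <stdio.h>

#define NPROC 4

int main(void)
{
    int arrival[NPROC] = {0, 1, 2, 3};
    int service[NPROC] = {8, 4, 9, 5};
    int remaining[NPROC], completion[NPROC];
    int quantum = 4, time = 0, done = 0;
    int queue[64], head = 0, tail = 0, queued[NPROC] = {0};

    for (int i = 0; i < NPROC; i++) remaining[i] = service[i];
    queue[tail++] = 0; queued[0] = 1;            /* process 1 arrives at t = 0 */

    /* with these arrivals the ready queue never empties, so no idle handling is needed */
    while (done < NPROC) {
        int p = queue[head++];
        int run = remaining[p] < quantum ? remaining[p] : quantum;
        time += run;
        remaining[p] -= run;
        /* enqueue every process that has arrived by now and is not queued yet */
        for (int i = 0; i < NPROC; i++)
            if (!queued[i] && arrival[i] <= time) { queue[tail++] = i; queued[i] = 1; }
        if (remaining[p] > 0) queue[tail++] = p; /* preempted: back of the ready queue */
        else { completion[p] = time; done++; }
    }

    double turnaround = 0;
    for (int i = 0; i < NPROC; i++)
        turnaround += completion[i] - arrival[i];
    printf("average turnaround = %.2f\n", turnaround / NPROC);
    return 0;
}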
PRIORITY BASED SCHEDULING

• Assign each process a priority. Schedule highest priority first. All
  processes within same priority are FCFS.

• Priority may be determined by user or by some default
  mechanism. The system may determine the priority based on
  memory requirements, time limits, or other resource usage.

• Starvation occurs if a low priority process never runs. Solution:
  build aging into a variable priority.

• Delicate balance between giving favorable response for
  interactive jobs, but not starving batch jobs.
                                                                     142
ROUND ROBIN
• Use a timer to cause an interrupt after a predetermined
  time. Preempts if a task exceeds its quantum.

• Train of events
       1. Dispatch
       2. Time slice occurs OR process suspends on event
       3. Put process on some queue and dispatch next

• Use numbers to find queueing and residence times. (Use
  quantum.)



                                                            143
ROUND ROBIN
• Definitions:


– Context Switch: Changing the processor from
  running one task (or process) to another. Implies
  changing memory.

– Processor Sharing : Use of a small quantum such
  that each process runs frequently at speed 1/n.

– Reschedule latency : How long it takes from when
  a process requests to run, until it finally gets
  control of the CPU.
                                                      144
ROUND ROBIN

    • Choosing a time quantum

– Too short - inordinate fraction of the time is spent in context
  switches.

–    Too long - reschedule latency is too great. If many processes
    want the CPU, then it's a long time before a particular process
    can get the CPU. This then acts like FCFS.

– Adjust so most processes won't use their slice. As processors
  have become faster, this is less of an issue.


                                                                    145
Round-Robin Scheduling




        NEXT SLIDE
Multilevel Queue
• Ready Queue partitioned into separate queues
         – Example: system processes, foreground
           (interactive), background (batch), student
           processes….
• Each queue has its own scheduling algorithm
         – Example: foreground (RR), background(FCFS)
• Processes assigned to one queue permanently.
• Scheduling must be done between the queues
         – Fixed priority - serve all from foreground, then
           from background. Possibility of starvation.
         – Time slice - Each queue gets some CPU time that it
           schedules - e.g. 80% foreground(RR), 20%
           background (FCFS)
Multilevel Queues
MULTI-LEVEL QUEUES:
• Each queue has its scheduling algorithm.
• Then some other algorithm (perhaps priority based) arbitrates
  between queues.
• Can use feedback to move between queues
• Method is complex but flexible.
• For example, could separate system processes, interactive,
  batch, favored, unfavored processes




                                                                  149
Multilevel Queue Interactive Systems




A scheduling algorithm with four priority classes
                                                    150
Scheduling in Real-Time Systems

Real Time Scheduling:

  •Hard real-time systems – required to complete a
   critical task within a guaranteed amount of time.


  •Soft real-time computing – requires that critical
   processes receive priority over less fortunate ones.



                                                          151
Scheduling in Real-Time Systems

Schedulable real-time system
• Given
  – m periodic events
  – event i occurs within period Pi and requires Ci
    seconds
• Then the load can only be handled if

                   \sum_{i=1}^{m} C_i / P_i \le 1
                                                      152
Scheduling in Real-Time Systems
Example:     Events      Periods      CPU Time
               01          100           50
               02          200           30
               03          500          100


Here,   C1/P1 + C2/P2 + C3/P3 = 50/100 + 30/200 + 100/500
                              = 0.5 + 0.15 + 0.2 = 0.85
            The system is
            schedulable because here
                   \sum_{i=1}^{m} C_i / P_i = 0.85 \le 1
                                                 153
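
A tiny sketch of the same check in C, computing the sum of Ci/Pi for the three
events above.

#include <stdio.h>

int main(void)
{
    double period[] = {100, 200, 500};   /* Pi: periods of the three events   */
    double cpu[]    = {50, 30, 100};     /* Ci: CPU time needed per period    */
    double load = 0;
    for (int i = 0; i < 3; i++)
        load += cpu[i] / period[i];      /* accumulate Ci/Pi                  */
    printf("load = %.2f -> %s\n", load, load <= 1.0 ? "schedulable" : "not schedulable");
    return 0;
}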
Policy versus Mechanism

• Separate what is allowed to be done with
  how it is done
  – a process knows which of its children threads
    are important and need priority

• Scheduling algorithm parameterized
  – mechanism in the kernel

• Parameters filled in by user processes
  – policy set by user process
                                                    154
Thread Scheduling (1)




Possible scheduling of user-level threads
• 50-msec process quantum
• threads run 5 msec/CPU burst              155
Thread Scheduling (2)




Possible scheduling of kernel-level threads
• 50-msec process quantum
• threads run 5 msec/CPU burst
                                              156
FCFS
 Process          Burst Time
      P1              3
      P2              6
      P3              4
      P4              2
Order : P1,P2,P3,P4   FCFS
Process           Compl Time
     P1                3
     P2                9
     P3               13
     P4               15

average Waiting Time = ( )/4 =
                                 157
Shortest Job First
Process           Burst Time
     P1               3
     P2               6
     P3               4
     P4               2

Process           Compl Time
     P4                   2
     P1                   5
     P3                   9
     P2                   15


Average Waiting Time = ( )/4 =
                                 158
Priority Scheduling
          Process    Burst Time    Priority
             P1          3            2
             P2          6            4
             P3          4            1
             P4          2            3

Gantt Chart :

P3              P1            P4        P2
0               4             7         9          15
Average Waiting Time: =

                                                   159
Round Robin Scheduling
Process                        Burst Time
          P1                       3
          P2                       6
          P3                       4
          P4                       2

          Time Quantum : 2ms


               Gantt Chart :    ?


      Average Waiting Time: =

                                            160

How to Create and Manage Wizard in Odoo 17Celine George
 
Fostering Friendships - Enhancing Social Bonds in the Classroom
Fostering Friendships - Enhancing Social Bonds  in the ClassroomFostering Friendships - Enhancing Social Bonds  in the Classroom
Fostering Friendships - Enhancing Social Bonds in the ClassroomPooky Knightsmith
 
Application orientated numerical on hev.ppt
Application orientated numerical on hev.pptApplication orientated numerical on hev.ppt
Application orientated numerical on hev.pptRamjanShidvankar
 

Dernier (20)

Wellbeing inclusion and digital dystopias.pptx
Wellbeing inclusion and digital dystopias.pptxWellbeing inclusion and digital dystopias.pptx
Wellbeing inclusion and digital dystopias.pptx
 
Understanding Accommodations and Modifications
Understanding  Accommodations and ModificationsUnderstanding  Accommodations and Modifications
Understanding Accommodations and Modifications
 
Graduate Outcomes Presentation Slides - English
Graduate Outcomes Presentation Slides - EnglishGraduate Outcomes Presentation Slides - English
Graduate Outcomes Presentation Slides - English
 
NO1 Top Black Magic Specialist In Lahore Black magic In Pakistan Kala Ilam Ex...
NO1 Top Black Magic Specialist In Lahore Black magic In Pakistan Kala Ilam Ex...NO1 Top Black Magic Specialist In Lahore Black magic In Pakistan Kala Ilam Ex...
NO1 Top Black Magic Specialist In Lahore Black magic In Pakistan Kala Ilam Ex...
 
How to Give a Domain for a Field in Odoo 17
How to Give a Domain for a Field in Odoo 17How to Give a Domain for a Field in Odoo 17
How to Give a Domain for a Field in Odoo 17
 
Unit 3 Emotional Intelligence and Spiritual Intelligence.pdf
Unit 3 Emotional Intelligence and Spiritual Intelligence.pdfUnit 3 Emotional Intelligence and Spiritual Intelligence.pdf
Unit 3 Emotional Intelligence and Spiritual Intelligence.pdf
 
ICT Role in 21st Century Education & its Challenges.pptx
ICT Role in 21st Century Education & its Challenges.pptxICT Role in 21st Century Education & its Challenges.pptx
ICT Role in 21st Century Education & its Challenges.pptx
 
Towards a code of practice for AI in AT.pptx
Towards a code of practice for AI in AT.pptxTowards a code of practice for AI in AT.pptx
Towards a code of practice for AI in AT.pptx
 
80 ĐỀ THI THỬ TUYỂN SINH TIẾNG ANH VÀO 10 SỞ GD – ĐT THÀNH PHỐ HỒ CHÍ MINH NĂ...
80 ĐỀ THI THỬ TUYỂN SINH TIẾNG ANH VÀO 10 SỞ GD – ĐT THÀNH PHỐ HỒ CHÍ MINH NĂ...80 ĐỀ THI THỬ TUYỂN SINH TIẾNG ANH VÀO 10 SỞ GD – ĐT THÀNH PHỐ HỒ CHÍ MINH NĂ...
80 ĐỀ THI THỬ TUYỂN SINH TIẾNG ANH VÀO 10 SỞ GD – ĐT THÀNH PHỐ HỒ CHÍ MINH NĂ...
 
How to Add New Custom Addons Path in Odoo 17
How to Add New Custom Addons Path in Odoo 17How to Add New Custom Addons Path in Odoo 17
How to Add New Custom Addons Path in Odoo 17
 
HMCS Vancouver Pre-Deployment Brief - May 2024 (Web Version).pptx
HMCS Vancouver Pre-Deployment Brief - May 2024 (Web Version).pptxHMCS Vancouver Pre-Deployment Brief - May 2024 (Web Version).pptx
HMCS Vancouver Pre-Deployment Brief - May 2024 (Web Version).pptx
 
Kodo Millet PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...
Kodo Millet  PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...Kodo Millet  PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...
Kodo Millet PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...
 
Single or Multiple melodic lines structure
Single or Multiple melodic lines structureSingle or Multiple melodic lines structure
Single or Multiple melodic lines structure
 
COMMUNICATING NEGATIVE NEWS - APPROACHES .pptx
COMMUNICATING NEGATIVE NEWS - APPROACHES .pptxCOMMUNICATING NEGATIVE NEWS - APPROACHES .pptx
COMMUNICATING NEGATIVE NEWS - APPROACHES .pptx
 
Jamworks pilot and AI at Jisc (20/03/2024)
Jamworks pilot and AI at Jisc (20/03/2024)Jamworks pilot and AI at Jisc (20/03/2024)
Jamworks pilot and AI at Jisc (20/03/2024)
 
How to Manage Global Discount in Odoo 17 POS
How to Manage Global Discount in Odoo 17 POSHow to Manage Global Discount in Odoo 17 POS
How to Manage Global Discount in Odoo 17 POS
 
General Principles of Intellectual Property: Concepts of Intellectual Proper...
General Principles of Intellectual Property: Concepts of Intellectual  Proper...General Principles of Intellectual Property: Concepts of Intellectual  Proper...
General Principles of Intellectual Property: Concepts of Intellectual Proper...
 
How to Create and Manage Wizard in Odoo 17
How to Create and Manage Wizard in Odoo 17How to Create and Manage Wizard in Odoo 17
How to Create and Manage Wizard in Odoo 17
 
Fostering Friendships - Enhancing Social Bonds in the Classroom
Fostering Friendships - Enhancing Social Bonds  in the ClassroomFostering Friendships - Enhancing Social Bonds  in the Classroom
Fostering Friendships - Enhancing Social Bonds in the Classroom
 
Application orientated numerical on hev.ppt
Application orientated numerical on hev.pptApplication orientated numerical on hev.ppt
Application orientated numerical on hev.ppt
 

Chapter 02 modified

  • 1. Chapter 2 Processes and Threads 2.1 Processes 2.2 Threads 2.3 Interprocess communication 2.4 Classical IPC problems 2.5 Scheduling 1
  • 2. Processes The Process Model • Multiprogramming of four programs • Conceptual model of 4 independent, sequential processes • Only one program active at any instant 2
  • 3. Process Creation Principal events that cause process creation 1. System initialization 2. Execution of a process creation system by a running process. 3. User request to create a new process 4. Initiation of a batch job 3
  • 4. Process Termination Conditions which terminate processes 1. Normal exit (voluntary) 2. Error exit (voluntary) 3. Fatal error (involuntary) 4. Killed by another process (involuntary) 4
  • 5. Process Hierarchies • Parent creates a child process, child processes can create its own process • Forms a hierarchy – UNIX calls this a "process group" • Windows has no concept of process hierarchy – all processes are created equal 5
  • 6. Process States (1) • Process Transitions • Possible process states – running – blocked – ready • Transitions between states shown 6
  • 7. Process States (2) • Lowest layer of process-structured OS – handles interrupts, scheduling • Above that layer are sequential processes 7
  • 8. Implementation of Processes The OS organizes the data about each process in a table naturally called the process table. Each entry in this table is called a process table entry or process control block (PCB). Characteristics of the process table. 1.One entry per process. 2.The central data structure for process management. 3.A process state transition (e.g., moving from blocked to ready) is reflected by a change in the value of one or more fields in the PCB. 4.We have converted an active entity (process) into a data structure (PCB). Finkel calls this the level principle an active entity becomes a data structure when looked at from a lower level. 8
  • 9. Implementation of Processes A process in an operating system is represented by a data structure known as a Process Control Block (PCB) or process descriptor. The PCB contains important information about the specific process including 1.The current state of the process i.e., whether it is ready, running, waiting, or whatever. 2.Unique identification of the process in order to track "which is which" information. 3.A pointer to parent process. 9
  • 10. Implementation of Processes 4. Similarly, a pointer to child process (if it exists). 5. The priority of process (a part of CPU scheduling information). 6. Pointers to locate memory of processes. 7. A register save area. 8. The processor it is running on. The PCB is a certain store that allows the operating systems to locate key information about a process. Thus, the PCB is the data structure that defines a process to the operating systems. 10
  • 13. Process Table: a table indexed by PID (1, 2, …, n) in which each entry points to the corresponding Process Control Block. 13
  • 16. Implementation of Processes (2) Skeleton of what lowest level of OS does when an interrupt occurs 16
  • 17. Implementation of Processes (1) Fields of a process table entry 17
  • 18. Threads The Thread Model (1) (a) Three processes each with one thread (b) One process with three threads 18
  • 19. The Thread Model (2) • Items shared by all threads in a process • Items private to each thread 19
  • 20. The Thread Model (3) Each thread has its own stack 20
  • 21. Thread Usage (1) A word processor with three threads 21
  • 22. Thread Usage (2) A multithreaded Web server 22
  • 23. Thread Usage (3) • Rough outline of code for previous slide (a) Dispatcher thread (b) Worker thread 23
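The slide above only sketches the dispatcher and worker threads; the following is a minimal, self-contained C/pthreads sketch of that pattern (not the figure from the book). Requests are plain integers handed over through a small queue guarded by a mutex and two condition variables; a real web server would parse and answer HTTP inside the worker, and the queue size, thread count and request count below are arbitrary choices.

#include <pthread.h>
#include <stdio.h>

#define NWORKERS  3
#define NREQUESTS 10
#define QSIZE     8
#define DONE      (-1)                     /* sentinel telling a worker to exit */

static int queue[QSIZE], head = 0, tail = 0, count = 0;
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  nonempty = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  nonfull  = PTHREAD_COND_INITIALIZER;

static void enqueue(int req)               /* dispatcher hands one request to the workers */
{
    pthread_mutex_lock(&m);
    while (count == QSIZE)
        pthread_cond_wait(&nonfull, &m);   /* queue full: wait for a worker to drain it */
    queue[tail] = req;
    tail = (tail + 1) % QSIZE;
    count++;
    pthread_cond_signal(&nonempty);        /* wake one idle worker */
    pthread_mutex_unlock(&m);
}

static void *dispatcher(void *arg)
{
    for (int req = 1; req <= NREQUESTS; req++)
        enqueue(req);                      /* "get next request", then hand off the work */
    for (int i = 0; i < NWORKERS; i++)
        enqueue(DONE);                     /* tell every worker to stop */
    return NULL;
}

static void *worker(void *arg)
{
    for (;;) {
        pthread_mutex_lock(&m);
        while (count == 0)
            pthread_cond_wait(&nonempty, &m);   /* no work: sleep */
        int req = queue[head];
        head = (head + 1) % QSIZE;
        count--;
        pthread_cond_signal(&nonfull);
        pthread_mutex_unlock(&m);
        if (req == DONE)
            return NULL;
        printf("worker %ld handles request %d\n", (long)arg, req);
    }
}

int main(void)
{
    pthread_t d, w[NWORKERS];
    pthread_create(&d, NULL, dispatcher, NULL);
    for (long i = 0; i < NWORKERS; i++)
        pthread_create(&w[i], NULL, worker, (void *)i);
    pthread_join(d, NULL);
    for (int i = 0; i < NWORKERS; i++)
        pthread_join(w[i], NULL);
    return 0;
}

The dispatcher blocks when the queue is full and each idle worker sleeps on the nonempty condition, so no CPU time is wasted on polling.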
  • 24. Thread Usage (4) Three ways to construct a server 24
  • 25. Implementing Threads in User Space A user-level threads package 25
  • 26. Implementing Threads in the Kernel A threads package managed by the kernel 26
  • 27. Hybrid Implementations Multiplexing user-level threads onto kernel-level threads 27
  • 28. Scheduler Activations • Goal – mimic functionality of kernel threads – gain performance of user space threads • Avoids unnecessary user/kernel transitions • Kernel assigns virtual processors to each process – lets runtime system allocate threads to processors • Problem: Fundamental reliance on kernel (lower layer) calling procedures in user space (higher layer) 28
  • 29. Pop-Up Threads • Creation of a new thread when message arrives (a) before message arrives (b) after message arrives 29
  • 30. Making Single-Threaded Code Multithreaded (1) Conflicts between threads over the use of a global variable 30
  • 31. Making Single-Threaded Code Multithreaded (2) Threads can have private global variables 31
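As a small illustration of the "private global variables" idea above, GCC and Clang provide the __thread storage class (C11 spells it _Thread_local): each thread gets its own copy of the variable, so the conflict shown on the previous slide cannot occur. A minimal sketch:

#include <pthread.h>
#include <stdio.h>

static __thread int my_global = 0;       /* one copy per thread (C11: _Thread_local) */

static void *worker(void *arg)
{
    my_global = (int)(long)arg;          /* each thread writes only its own copy */
    printf("thread %d sees my_global = %d\n", (int)(long)arg, my_global);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}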
  • 32. Interprocess Communication (IPC) • Processes frequently need to communicate with other processes (Ex: a shell pipeline) • Interrupts are one way to achieve IPC • But we require a well-structured way to achieve IPC 32
  • 33. Interprocess Communication (IPC) • Issues to be considered: 1. How one process can pass information to another process. 2. Making sure that two or more processes do not get in each other's way when entering a critical region. 3. Proper sequencing of processes when dependencies are present. Ex: Process A produces data & Process B has to print this data 33
  • 34. Interprocess Communication Race Conditions • In an OS, processes working together may share resources (storage). • Shared storage 1. may be in primary memory 2. may be a shared file. 34
  • 35. IPC – Race conditions 1. A process that wants to print a file enters the file name in a special spooler directory (shared). 2. Another process, the printer daemon, periodically checks if there are any files to be printed and, if there are, it prints them & then removes their names from the spooler directory. Two processes want to access shared memory at the same time 35
  • 36. IPC – Race conditions Here, in points to the next free slot in the directory and out points to the next file to be printed; both are shared variables. Print Spooler 36
  • 37. IPC – Race conditions The following might happen: 1. Process A reads in and stores the value 7 in a local variable called Next-Free-Slot. 2. Just then a clock interrupt occurs and the CPU decides that process A has run long enough. 3. It switches to process B. 4. Process B also reads in & also gets a 7. 5. It too stores 7 into its local variable Next-Free-Slot. Print Spooler 37
  • 38. IPC – Race conditions 6. Process B continues to run, stores the name of its file in slot 7 & updates in to 8. 7. Now, process B goes off & does other things. 8. Eventually, process A runs again, starting from the place it left off. 9. It looks at Next-Free-Slot. 10. It finds 7 there. 11. It writes its file name in slot 7, erasing the name that process B just put there. Print Spooler 38
  • 39. IPC – Race conditions 12. Then it computes Next-Free-Slot + 1, which is 8. 13. Now, it sets in to 8. 14. The spooler directory is now internally consistent. 15. So, the printer daemon process will not notice anything wrong. 16. But, process B never gets its job done. 17. A situation like this is known as a RACE CONDITION. Print Spooler 39
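The same lost-update race can be reproduced on a small scale with two threads sharing an unprotected counter that stands in for Next-Free-Slot; a minimal sketch (not from the slides):

#include <pthread.h>
#include <stdio.h>

static int next_free_slot = 0;          /* shared and unprotected */

static void *spool_file(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        int slot = next_free_slot;      /* read the shared variable */
        /* a context switch here lets the other thread read the same slot */
        next_free_slot = slot + 1;      /* write back: the other update may be lost */
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, spool_file, NULL);
    pthread_create(&b, NULL, spool_file, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("next_free_slot = %d (200000 expected without races)\n", next_free_slot);
    return 0;
}

Run it a few times: the printed value is usually below 200000, which is the race condition in action.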
  • 40. Mutual exclusion & Critical Regions • We must avoid race conditions by finding some way to prohibit more than one process from reading & writing the shared data at the same time. • We can achieve this by enforcing MUTUAL EXCLUSION. 40
  • 41. Mutual exclusion & Critical Regions • MUTUAL EXCLUSION: it is some way of making sure that if one process is using a shared variable or file, the other processes will be excluded from doing the same thing. • CRITICAL REGION: the part of the program where the shared memory is accessed is called the critical region. 41
  • 42. Mutual exclusion & Critical Regions Conditions required to avoid race condition: 1. No two processes may be simultaneously inside their critical regions. 2. No assumptions may be made about speeds or the number of CPUs. 3. No process running outside its critical region may block other processes. 4. No process should have to wait forever to enter its critical region. 42
  • 43. Mutual exclusion using critical regions • CRITICAL REGION: the part of the program where the shared memory is accessed is called the critical region. 43
  • 44. Mutual Exclusion with Busy Waiting BUSY WAITING : Continually testing a variable until some value appears is called BUSY WAITING. Proposals for achieving mutual exclusion: • Disabling interrupts • Lock variables • Strict alternation • Peterson's solution • The TSL instruction 44
  • 45. Mutual Exclusion with Busy Waiting Disabling Interrupts • It is the simplest solution • Each process should disable all interrupts just after entering its critical region • Each process should re-enable all interrupts just before leaving its critical region • With interrupts disabled, no clock interrupts occur • The CPU can't switch from process to process without clock interrupts Disadvantages: • What happens if one user disables interrupts and then never turns them on again? • If the system is a multiprocessor system, disabling interrupts affects only the CPU that executed the disable instruction 45
  • 46. Mutual Exclusion with Busy Waiting LOCK VARIABLES • It is the simplest software solution • We can have a single shared (lock) variable • Keep it initially 0 • Now when a process wants to enter its critical region, it first tests the lock variable • If the lock is 0, the process sets it to 1 and enters the critical region • If the lock is 1, the process just waits until it becomes 0 Disadvantages: • Unfortunately, this idea contains exactly the same problem that we saw in the spooler directory example. 46
  • 47. Mutual Exclusion with Busy Waiting (1) Strict Alternation Notice the semicolons terminating the while statements in the figure above • Busy waiting: continuously testing a variable until some value appears, using it as a lock. • A lock that uses busy waiting is called a spin lock. • It should usually be avoided, since it wastes CPU time. 47
  • 48. 1. The integer variable turn (keeps track of whose turn it is to enter the CR), 2. Initially, process 0 inspects turn, finds it to be 0, and enters its CR, 3. Process 1 also finds it to be 0 and therefore sits in a tight loop continually testing turn to see when it becomes 1, 4. When process 0 leaves the CR, it sets turn to 1, to allow process 1 to enter its CR, 5. Suppose that process 1 finishes its CR quickly, so both processes are in their nonCR (with turn set to 0) 48
  • 49. 6. Process 0 finishes its nonCR and goes back to the top of its loop. Process 0 executes its whole loop quickly, exiting its CR and setting turn to 1. 7. At this point turn is 1 and both processes are executing in their nonCR, 8. Process 0 finishes its nonCR and goes back to the top of its loop, 9. Unfortunately, it is not permitted to enter its CR, turn is 1 and process 1 is busy with its nonCR, 10. It hangs in its while loop until process 1 sets turn to 0, 11. This algorithm does avoid all races, but it violates condition 3: a process running outside its critical region (process 1) is blocking another process (process 0). 49
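A C sketch of strict alternation as described above, with turn as the shared variable; the empty while loops are the busy waiting (spin lock) the previous slide warns about, and the commented-out calls stand for arbitrary critical- and noncritical-region code:

#include <stdatomic.h>

static atomic_int turn = 0;             /* whose turn it is to enter the critical region */

void process_0(void)
{
    for (;;) {
        while (atomic_load(&turn) != 0)
            ;                           /* busy wait until it is our turn */
        /* critical_region_0(); */
        atomic_store(&turn, 1);         /* let process 1 go next */
        /* noncritical_region_0(); */
    }
}

void process_1(void)
{
    for (;;) {
        while (atomic_load(&turn) != 1)
            ;                           /* busy wait */
        /* critical_region_1(); */
        atomic_store(&turn, 0);
        /* noncritical_region_1(); */
    }
}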
  • 50. Mutual Exclusion with Busy Waiting TSL Instruction • Let's take some help from hardware • Many multiprocessor systems have an instruction – TSL RX, Lock (Test and Set Lock) • This works as follows 1. It reads the content of the memory word into register RX and then stores a nonzero value at the memory address Lock (sets a lock) 2. No other processor can access the memory word until the instruction is finished 3. In other words, the CPU executing the TSL instruction locks the memory bus to prohibit other CPUs from accessing memory until it is done 50
  • 51. Mutual Exclusion with Busy Waiting TSL Instruction 1. To use the TSL instruction, we will use a shared variable, Lock, to coordinate the access to shared memory 2. When Lock = 0, any process may set it to 1 and enter 3. When Lock = 1, no process can enter Entering and leaving a critical region using the TSL instruction 51
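The figure the slide refers to implements enter_region/leave_region in assembly around TSL; a portable C sketch of the same idea uses C11's atomic_flag_test_and_set(), which, like TSL, reads the old value and sets the lock in one indivisible step:

#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;    /* clear = 0 = free */

void enter_region(void)
{
    while (atomic_flag_test_and_set(&lock))    /* set the lock and test its old value */
        ;                                      /* it was already set: spin until freed */
}

void leave_region(void)
{
    atomic_flag_clear(&lock);                  /* store 0 back into the lock */
}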
  • 52. Peterson's Solution to achieve Mutual Exclusion. Peterson’s algorithm is shown in Fig. 2-21. This algorithm consists of two procedures written in ANSI C. Before using the shared variables (i.e., before entering its critical region), each process calls enter_region with its own process number, 0 or 1, as parameter. This call will cause it to wait, if need be, until it is safe to enter. After it has finished with the shared variables, the process calls leave_region to indicate that it is done and to allow the other process to enter, if it so desires.
  • 53. Peterson's Solution Let us see how this solution works. 1.Initially neither process is in its critical region. 2.Now process 0 calls enter_region. 3.It indicates its interest by setting its array element and sets turn to 0. 4.Since process 1 is not interested, enter_region returns immediately. 5.If process 1 now calls enter_region, it will hang there until interested[0] goes to FALSE, an event that only happens when process 0 calls leave_region to exit the critical region.
  • 54. Peterson's Solution 6. Now consider the case that both processes call enter_region almost simultaneously. 7. Both will store their process number in turn. 8. Whichever store is done last is the one that counts; the first one is overwritten and lost. 9. Suppose that process 1 stores last, so turn is 1. 10. When both processes come to the while statement, process 0 executes it zero times and enters its critical region. 11. Process 1 loops and does not enter its critical region until process 0 exits its critical region.
  • 55. Mutual Exclusion with Busy Waiting (2) Peterson's solution for achieving mutual exclusion 55
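For reference, a C rendering of Peterson's enter_region/leave_region in the style of the figure the slide shows (the caller brackets its critical region with these two calls; names follow the text):

#define FALSE 0
#define TRUE  1
#define N     2                      /* number of processes */

int turn;                            /* whose turn is it? */
int interested[N];                   /* all values initially FALSE */

void enter_region(int process)       /* process is 0 or 1 */
{
    int other = 1 - process;         /* number of the other process */
    interested[process] = TRUE;      /* show that you are interested */
    turn = process;                  /* set flag */
    while (turn == process && interested[other] == TRUE)
        ;                            /* busy wait while the other one is inside */
}

void leave_region(int process)
{
    interested[process] = FALSE;     /* indicate departure from the critical region */
}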
  • 56. PRIORITY INVERSION PROBLEM 1. In Scheduling, priority inversion is the scenario where a low priority Task holds a shared resource, that is required by a high priority task. 2. This causes the execution of the high priority task to be blocked until the low priority task has released the resource, effectively “inverting” the relative priorities of the two tasks. 3. If some other medium priority task, one that does not depend on the shared resource, attempts to run in the interim, it will take precedence over both the low priority task and the high priority task. 56
  • 57. PRIORITY INVERSION PROBLEM Priority Inversion will 1.Make problems in real time systems. 2.Reduce the performance of the system 3.May reduce the system responsiveness which leads to the violation of response time guarantees. 57
  • 58. 1. Consider Three Tasks A,B,C with priorities A > B > C. 2. Assume these tasks are served by a common server (Sequential). 3. Assume A & C share a critical resource. 4. Suppose C has the Server and acquires the resource. 5. A requests the server, Preempting C. PRIORITY INVERSION EXAMPLE 6. A then Wants the Resource. 7. Now C must take the server while A blocks waiting for C to release the resource. 8. Meanwhile B requests the server. 9. Since B > C, B can run arbitrarily long, all the while with A being blocked. 10. But A > B, Which is Anomaly. (Priority Inversion) 58
  • 59. Sleep & Wakeup • Both Peterson & TSL solution have the defect of requiring Busy Waiting • So we can have some problems like, 1. CPU time is wasted 2. Priority Inversion Problem These problems can be solved by using Sleep & Wakeup primitives (System Calls). 59
  • 60. Sleep & Wakeup • Sleep: Sleep is a system call that causes the caller to block, that is, be suspended until another process wakes it up • Wakeup: the Wakeup system call awakens a process. It has one parameter, the process to be awakened. 60
  • 61. Producer – Consumer Problem (Bounded Buffer Problem) • It consists of two processes, Producer & Consumer • They share a common fixed size Buffer • Producer puts information into Buffer • Consumer takes information out of buffer 61
  • 62. Producer – Consumer Problem (Bounded Buffer Problem) • Trouble: When the producer wants to put information into the buffer but the buffer is already full • Solution: 1. Producer goes to sleep 2. To be awakened when the consumer removes an item or items from the buffer 62
  • 63. Producer – Consumer Problem (Bounded Buffer Problem) • Trouble: When the consumer wants to take information from the buffer but buffer is empty. • Solution: 1. Consumer go to sleep 2. To be awakened when Producer put information in the buffer 63
  • 66. Sleep and Wakeup Producer Module Producer-consumer problem with fatal race condition Reason: Access to the count is unconstrained (Ex: Book) 66
  • 67. Sleep and Wakeup Consumer Module Producer-consumer problem with fatal race condition Reason: Access to the count is unconstrained (Ex: Book) 67
  • 68. Sleep and Wakeup • Because access to count is unconstrained, a fatal race condition occurs here • So some wakeup calls are wasted (lost) here • A wakeup waiting bit can be used to avoid this • The wakeup bit is set when a wakeup is sent to a process that is still awake • Later, when the process tries to go to sleep, if the wakeup bit is set, the bit is turned off and the process stays awake 68
  • 69. Problem With Sleep and Wakeup The problem with this solution is that it contains a race condition that can lead to a deadlock. Consider the following scenario: 1. The consumer has just read the variable itemCount, noticed it is zero and is just about to move inside the if-block. 2. Just before calling sleep, the consumer is interrupted and the producer is resumed. 3. The producer creates an item, puts it into the buffer, and increases itemCount. 69
  • 70. Problem With Sleep and Wakeup 1.Because the buffer was empty prior to the last addition, the producer tries to wake up the consumer. 2.Unfortunately the consumer wasn't yet sleeping, and the wakeup call is lost. When the consumer resumes, it goes to sleep and will never be awakened again. This is because the consumer is only awakened by the producer when itemCount is equal to 1. 3.The producer will loop until the buffer is full, after which it will also go to sleep. 4.Since both processes will sleep forever, we have run into a deadlock. This solution therefore is unsatisfactory. 70
  • 71. Semaphores • Semaphore is an integer variable • It is used to count the number of wakeups saved for future use • A semaphore could have – • Value 0 : No wakeups were saved • Value +ve Integer: Indicates wakeups pending Semaphore operations: 1. Down operation 2. Up operation 71
  • 72. Operations on Semaphores • Down operation 1.It checks the value of the semaphore. 2.If it is greater than zero, it decrements the value by 1 & just continues. 3.If it is zero, the process is put to sleep without completing Down for a moment. 4.All these operations are done as a single, indivisible Atomic action. 72
  • 73. Operations on Semaphores • UP operation 1. It increments the value of the semaphore addressed 2. If one or more processes were sleeping on that semaphore, unable to complete an earlier Down, one of them is chosen by the system 3. It is allowed to complete its Down (decrementing the semaphore by 1) 4. Thus, after an UP on a semaphore with processes sleeping on it, the semaphore will still be 0 5. But there will be one less process sleeping on it 6. The above operation is indivisible (atomic) 7. No process ever blocks doing an UP 73
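On POSIX systems the down and up operations described above correspond to sem_wait() and sem_post(); a minimal usage sketch, with error checking omitted:

#include <semaphore.h>

static sem_t s;

void demo(void)
{
    sem_init(&s, 0, 1);   /* pshared = 0: shared between threads; initial value 1 */
    sem_wait(&s);         /* down: decrement, or block while the value is 0 */
    /* ... guarded work / critical region ... */
    sem_post(&s);         /* up: increment and wake one sleeping waiter, if any */
    sem_destroy(&s);
}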
  • 81. Producer – Consumer Problem using Semaphore • This solution uses three semaphores (1) full (2) empty & (3) mutex Full : Full is used for counting the number of slots that are full Empty: Empty is used for counting the number of slots that are empty Mutex: Mutex is used to make sure that Producer & Consumer don’t access the buffer at the same time Semaphores used here in two different ways – 1. For synchronization ( full & empty) 2. To guarantee Mutual exclusion ( mutex) 81
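A minimal sketch of this three-semaphore solution using POSIX threads and semaphores; the buffer size N, the item count and the printf are illustrative choices, not part of the slide:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 100                       /* number of slots in the buffer */

static int buffer[N];
static int in = 0, out = 0;         /* insertion / removal positions */

static sem_t empty;                 /* counts empty slots, starts at N */
static sem_t full;                  /* counts full slots, starts at 0  */
static sem_t mutex;                 /* binary semaphore guarding the buffer */

static void *producer(void *arg)
{
    for (int item = 0; item < 1000; item++) {
        sem_wait(&empty);           /* down(empty): wait for a free slot */
        sem_wait(&mutex);           /* down(mutex): enter the critical region */
        buffer[in] = item;
        in = (in + 1) % N;
        sem_post(&mutex);           /* up(mutex): leave the critical region */
        sem_post(&full);            /* up(full): one more full slot */
    }
    return NULL;
}

static void *consumer(void *arg)
{
    for (int i = 0; i < 1000; i++) {
        sem_wait(&full);            /* down(full): wait for an item */
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty);
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void)
{
    sem_init(&empty, 0, N);
    sem_init(&full, 0, 0);
    sem_init(&mutex, 0, 1);

    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}

Note the order of the downs in each loop: doing down(mutex) before down(empty) or down(full) could leave a process sleeping while it holds the mutex, which is exactly the deadlock the monitor slides mention later.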
  • 85. Mutexes • A mutex is a variable • It can be in one out of two states : Unlocked or Locked • Only one bit is required to represent it • In practice an integer value is often used, with 0 meaning unlocked and all other values meaning locked • When a process (or thread) needs access to a critical region, it calls mutex_lock • If the mutex is currently unlocked, the call succeeds and the calling process (or thread )is free to enter the critical region 85
  • 86. Mutexes • On the other hand, if mutex is already locked, the calling process (or thread) is blocked until the process (or thread) in the critical region is finished and calls mutex_unlock. • Because mutexes are so simple, they can easily be implemented in user space if a TSL instruction is available • The code for mutex_lock and mutex_unlock for use with a user level threads package 86
  • 87. Mutexes The code for mutex_lock and mutex_unlock for use with a user level threads package is as under. Implementation of mutex_lock and mutex_unlock 87
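A sketch of mutex_lock/mutex_unlock for a user-level threads package in the spirit of the slide's TSL-based version; here the TSL instruction is replaced by a C11 atomic exchange, and the standard sched_yield() plays the role of the package's thread_yield call:

#include <sched.h>       /* sched_yield() */
#include <stdatomic.h>

static atomic_int mutex = 0;                  /* 0 = unlocked, 1 = locked */

void mutex_lock(void)
{
    while (atomic_exchange(&mutex, 1) != 0)   /* like TSL: set to 1, test the old value */
        sched_yield();                        /* mutex was busy: give the CPU away */
}

void mutex_unlock(void)
{
    atomic_store(&mutex, 0);                  /* release the lock */
}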
  • 88. Monitors (1) Example of a monitor 88
  • 89. Monitors (2) • Outline of producer-consumer problem with monitors – only one monitor procedure active at one time – buffer has N slots 89
  • 90. Monitors (3) Solution to producer-consumer problem in Java (part 1) 90
  • 91. Monitors (4) Solution to producer-consumer problem in Java (part 2) 91
  • 92. Message Passing The producer-consumer problem with N messages 92
  • 93. MONITORS • The Problem With Semaphores • Suppose that the two downs in the producer's code are reversed in order.... • Both processes would stay blocked forever (a deadlock) • If resources are not tightly controlled, "chaos will ensue" - race conditions • To make it easier to write correct programs, a higher-level synchronization primitive called a monitor was introduced.
  • 94. The Solution • Monitors provide control by allowing only one process to access a critical resource at a time • A monitor is a collection of procedures, variables and data structures that are all grouped together in a special kind of module or package. • Processes may call the procedures in a monitor whenever they want to, but they cannot directly access the monitor's internal data structures from procedures declared outside the monitor. • Monitors have an important property that makes them useful for achieving mutual exclusion: only one process can be active in a monitor at any instant. • A monitor may only access its local variables
  • 95. An Abstract Monitor name : monitor … some local declarations … initialize local data procedure name(…arguments) … do some work … other procedures
  • 96. Monitors Example of a monitor 96
  • 97. Monitors • Outline of producer-consumer problem with monitors – only one monitor procedure active at one time – buffer has N slots 97
  • 98. Things Needed to Enforce Monitor • A solution lies in the introduction of condition variables , along with two operators on them, Wait & Signal • “Wait” operation – Forces running process to sleep • “signal” operation – Wakes up a sleeping process • A condition (Condition variable) – Something to store who’s waiting for a particular reason – Implemented as a queue
  • 99. A Running Example – Kitchen. kitchen : monitor; occupied : Boolean := false; nonOccupied : condition; procedure enterKitchen: if occupied then nonOccupied.wait; occupied := true; procedure exitKitchen: occupied := false; nonOccupied.signal;
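C has no built-in monitors, but the kitchen example above can be approximated with POSIX primitives: the mutex supplies the monitor's "only one process active at a time" property and the condition variable plays the role of nonOccupied. A sketch (the while loop, rather than if, re-checks the condition after waking, which is the usual convention with pthread condition variables):

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t monitor = PTHREAD_MUTEX_INITIALIZER;      /* the monitor lock */
static pthread_cond_t  non_occupied = PTHREAD_COND_INITIALIZER;  /* the nonOccupied condition */
static bool occupied = false;

void enter_kitchen(void)
{
    pthread_mutex_lock(&monitor);                    /* enter the monitor */
    while (occupied)
        pthread_cond_wait(&non_occupied, &monitor);  /* wait; the lock is released while sleeping */
    occupied = true;
    pthread_mutex_unlock(&monitor);                  /* leave the monitor */
}

void exit_kitchen(void)
{
    pthread_mutex_lock(&monitor);
    occupied = false;
    pthread_cond_signal(&non_occupied);              /* wake one waiting thread */
    pthread_mutex_unlock(&monitor);
}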
  • 100. Multiple Conditions • Sometimes desirable to be able to wait on multiple things • Can be implemented with multiple conditions • Example: • Two reasons to enter kitchen - cook (remove clean dishes) - clean (add clean dishes) • Two reasons to wait: – Going to cook, but no clean dishes – Going to clean, no dirty dishes
  • 101. Emerson's Kitchen. kitchen : monitor; cleanDishes, dirtyDishes : condition; dishes, sink : stack; dishes := stack of 10 dishes; sink := stack of 0 dishes; procedure cook: if dishes.isEmpty then cleanDishes.wait; sink.push(dishes.pop); dirtyDishes.signal; procedure cleanDish: if sink.isEmpty then dirtyDishes.wait; dishes.push(sink.pop); cleanDishes.signal
  • 102. Condition Queue • Checking if any process is waiting on a condition: – “condition.queue” returns true if a process is waiting on condition • Example: Doing dishes only if someone is waiting for them
  • 103. Summary • Advantages – Data access synchronization simplified (vs. semaphores or locks) – Better encapsulation • Disadvantages: – Deadlock still possible (in monitor code) – Programmer can still botch use of monitors – No provision for information exchange between machines
  • 104. Interprocess Communication (IPC) • Mechanism for processes to communicate and synchronize their actions. • Via shared memory • Via messaging system - processes communicate without resorting to shared variables. • Messaging system and shared memory not mutually exclusive - • can be used simultaneously within a single OS or a single process. • IPC facility provides two operations. • send(message) - message size can be fixed or variable • receive(message)
  • 105. Producer-Consumer using IPC • Producer repeat … produce an item in nextp; … send(consumer, nextp); until false; • Consumer repeat receive(producer, nextc); … consume item from nextc; … until false;
  • 106. IPC via Message Passing • If processes P and Q wish to communicate, they need to: • establish a communication link between them • exchange messages via send/receive • Fixed vs. variable size message • Fixed message size - straightforward physical implementation, programming task is difficult due to fragmentation • Variable message size - simpler programming, more complex physical implementation.
  • 107. Producer-Consumer using Message Passing • Producer repeat … produce an item in nextp; … send(consumer, nextp); until false; • Consumer repeat receive(producer, nextc); … consume item from nextc; … until false;
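The send/receive operations above are abstract; one concrete way to sketch them is with a POSIX pipe as the communication link between a producer parent and a consumer child (error handling omitted):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    pipe(fd);                               /* fd[0]: receive end, fd[1]: send end */

    if (fork() == 0) {                      /* child acts as the consumer */
        int item;
        close(fd[1]);
        while (read(fd[0], &item, sizeof item) == (ssize_t)sizeof item)
            printf("consumed %d\n", item);  /* receive(producer, nextc) */
        return 0;
    }

    close(fd[0]);                           /* parent acts as the producer */
    for (int item = 0; item < 10; item++)
        write(fd[1], &item, sizeof item);   /* send(consumer, nextp) */
    close(fd[1]);                           /* end of stream: the consumer's read returns 0 */
    return 0;
}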
  • 108. Direct Communication • Sender and receiver processes must name each other explicitly: • send(P, message) - send a message to process P • receive(Q, message) - receive a message from process Q • Properties of communication link: • Links are established automatically. • A link is associated with exactly one pair of communicating processes. • Exactly one link between each pair. • Link may be unidirectional, usually bidirectional.
  • 109. Indirect Communication • Messages are directed to and received from mailboxes (also called ports) Unique ID for every mailbox. Processes can communicate only if they share a mailbox. Send(A, message) /* send message to mailbox A */ Receive(A, message) /* receive message from mailbox A */ • Properties of communication link Link established only if processes share a common mailbox. Link can be associated with many processes. A pair of processes may share several communication links
  • 111. Mailboxes (cont.) • Operations create a new mailbox • send/receive messages through mailbox • destroy a mailbox • Issue: Mailbox sharing • P1, P2 and P3 share mailbox A. • P1 sends message, P2 and P3 receive… who gets message?? • Possible Solutions • disallow links between more than 2 processes • allow only one process at a time to execute receive operation • allow system to arbitrarily select receiver and then notify the sender
  • 112. Barriers This mechanism is used for groups of processes rather than two- process producer-consumer type of situations • Use of a barrier – processes approaching a barrier – all processes but one blocked at barrier – last process arrives, all are let through 112
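POSIX threads provide this mechanism directly as pthread_barrier_t (optional in POSIX, but available on Linux); a short sketch in which every thread blocks in pthread_barrier_wait() until all NTHREADS threads have arrived and are then released together:

#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4

static pthread_barrier_t barrier;

static void *worker(void *arg)
{
    long id = (long)arg;
    printf("thread %ld: phase 1 done, waiting at the barrier\n", id);
    pthread_barrier_wait(&barrier);        /* block until all NTHREADS threads arrive */
    printf("thread %ld: phase 2 starts\n", id);
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];
    pthread_barrier_init(&barrier, NULL, NTHREADS);
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    pthread_barrier_destroy(&barrier);
    return 0;
}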
  • 113. Dining Philosophers (1) • Philosophers eat/think • Eating needs 2 forks • Pick one fork at a time • How to prevent deadlock 113
  • 114. Dining Philosophers (2) A nonsolution to the dining philosophers problem 114
  • 115. Dining Philosophers (3) Solution to dining philosophers problem (part 1) 115
  • 116. Dining Philosophers (4) Solution to dining philosophers problem (part 116
  • 117. The Readers and Writers Problem A solution to the readers and writers problem 117
  • 118. The Sleeping Barber Problem (1) 118
  • 119. The Sleeping Barber Problem (2) Solution to sleeping barber problem. 119
  • 120. Scheduling Introduction to Scheduling (1) • Bursts of CPU usage alternate with periods of I/O wait – A CPU/Compute-bound process – Spends most of the time in computing. They have long CPU Bursts and infrequent I/O waits – An I/O bound process - Spends most of the time waiting for I/O. They have Short CPU Bursts and frequent I/O waits 120
  • 121. Introduction to Scheduling Types of Scheduling Algorithms • Non –Preemptive : a non-preemptive scheduling algorithm picks a process to run and then just lets it run until it blocks OR until it voluntarily releases CPU. It can’t be forcibly suspended • Preemptive: a preemptive scheduling algorithm picks a process and lets it run for a maximum of some fixed time. If it is still running at the end of the time interval, it is suspended and scheduler picks another process to run. 121
  • 122. Categories of Scheduling Algorithms • Batch • Interactive • Real time
  • 123. Introduction to Scheduling (2) Scheduling Algorithm Goals 123
  • 124. Scheduling in Batch Systems • There are following methods- 1.First – Come – First – Serve 2.Shortest Job First 3.Shortest Remaining Time Next 4.Three level Scheduling 124
  • 125. Scheduling in Batch Systems • First – Come – First – Serve method: 1. Simplest non-preemptive algorithm 2. Processes are assigned the CPU in the order they request it 3. Basically there is a single queue of ready processes 4. It is very easy to understand and program 5. A single linked list keeps track of all ready processes 125
  • 126. Scheduling in Batch Systems FCFS – Example : (With the arrival at Same Time) Average turn around time is (20 + 30 + 55 + 70 + 75) / 5 = 250/5 = 50 126
  • 127. FCFS – Example : (With the arrival at Different Times) 127
  • 128. Scheduling in Batch Systems • FCFS Disadvantages: What happens when – 1. One compute-bound process runs for one second at a time and then goes for a disk read (the CPU will remain idle) 2. Many I/O-bound processes use little CPU time but each has to perform 1000 disk reads to complete (the CPU will remain idle) 128
  • 129. Scheduling in Batch Systems Shortest Job First method Working: here when several equally important jobs are sitting in the input queue waiting to be started, the scheduler picks the shortest job first. Average Turn Around time here: (5 + 15 +30 + 50 + 75 ) / 5 = 175/5 = 35 An example of shortest job first scheduling 129
  • 130. Shortest Job First Figure 2-40. An example of shortest job first scheduling. (a) Running four jobs in the original order. (b) Running them in shortest job first order. Tanenbaum, Modern Operating Systems 3 e, (c) 2008 Prentice-Hall, Inc. All rights reserved. 0-13-6006639
  • 131. Preemptive Shortest job Scheduling 131
  • 132. Scheduling in Batch Systems • It is worth pointing out that shortest job first is only optimal when all the jobs are available simultaneously • See following example: Processes A B C D E Run times 2 4 1 1 1 Arrival times 0 0 3 3 3 Here we can run SJF in two orders like ABCDE or BCDEA Average Turn. time (ABCDE) = ((2-0)+(6-0)+(7-3)+(8-3)+(9-3))/5 = 23/5 = 4.6 Average Turn. time (BCDEA) = ? 132
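For reference, turnaround time is completion time minus arrival time: in the order BCDEA, B finishes at 4, C at 5, D at 6, E at 7 and A at 9, so the average turnaround time is (4 + 2 + 3 + 4 + 9)/5 = 22/5 = 4.4, which beats the 4.6 obtained above by running the shortest available job (A) first at time 0.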
  • 133. Three level scheduling in Batch Systems The admission scheduler decides which jobs to admit to the system. The memory scheduler decides which jobs are to be kept in memory and which are to be swapped out, to handle the memory space problem. The CPU scheduler decides which job in memory is to be given the CPU first; it is used to handle compute bound and I/O bound jobs. 133
  • 134. Scheduling in Interactive Systems (1) 1. Round Robin Scheduling 2. Priority Scheduling • Round Robin Scheduling – list of runnable processes (a) – list of runnable processes after B uses up its quantum(b) 134
  • 135. Priority Scheduling 1. A priority number (integer) is associated with each process 2. The CPU is allocated to the process with the highest priority Normally (smallest integer = highest priority) It can be: • Preemptive • Non-preemptive
  • 136. Priority Scheduling Example With Same Arrival Time. Processes (burst time, priority, arrival time): P1 (10, 3, 0), P2 (1, 1, 0), P3 (2, 4, 0), P4 (1, 5, 0), P5 (5, 2, 0). Gantt chart: P2 P5 P1 P3 P4 at times 0 1 6 16 18 19. The average waiting time: = ((16-10) + (1-1) + (18-2) + (19-1) + (6-5))/5 = (6+0+16+18+1)/5 = 41/5 = 8.2
  • 137. Priority Scheduling Example With Different Arrival Time Processes Burst time Priority Arrival time P1 10 3 00 P2 1 1 1 P3 2 4 2 P4 1 5 3 P5 5 2 4 The average waiting time: =(( ? ) + ( ? ) + ( ? ) + ( ? ) + ( ? ))/5 = ( ? +?+?+?+?)/5 = ?/5 = ?
  • 138. Priority Scheduling Problem : Starvation – low priority processes may never execute Solution : Aging – As time progresses increase the priority of the process
  • 139. Round-Robin Scheduling • The Round-Robin is designed especially for time sharing systems. • Similar to FCFS but adds preemption concept • Each process gets a small unit of CPU time (time quantum), usually 10-100 milliseconds • After this time has elapsed, the process is preempted and added to the end of the ready queue.
  • 140. Round-Robin Scheduling Example Time Quantum : 20ms Arrival Time : 00 (Simultaneously) The average waiting time: =((134 ) + (37) + (162) + (121) )/4 = 113.5
  • 141. Round Robin scheduling Example Time Quantum here : 04ms Process Arrival Time Service time 1 0 8 2 1 4 3 2 9 4 3 5 P1 P2 P3 P4 P1 P3 P4 P3 0 4 8 12 16 20 24 25 26 The average turnaround time: = ((20-0) + (8-1) + (26-2) + (25-3))/4 = 73/4 = 18.25; the average waiting time (turnaround minus service) = (12 + 3 + 15 + 17)/4 = 47/4 = 11.75 141
  • 142. PRIORITY BASED SCHEDULING • Assign each process a priority. Schedule highest priority first. All processes within same priority are FCFS. • Priority may be determined by user or by some default mechanism. The system may determine the priority based on memory requirements, time limits, or other resource usage. • Starvation occurs if a low priority process never runs. Solution: build aging into a variable priority. • Delicate balance between giving favorable response for interactive jobs, but not starving batch jobs. 142
  • 143. ROUND ROBIN • Use a timer to cause an interrupt after a predetermined time. Preempts if the task exceeds its quantum. • Train of events 1. Dispatch 2. Time slice occurs OR process suspends on event 3. Put process on some queue and dispatch next • Use numbers to find queueing and residence times. (Use quantum.) 143
  • 144. ROUND ROBIN • Definitions: – Context Switch: Changing the processor from running one task (or process) to another. Implies changing memory. – Processor Sharing : Use of a small quantum such that each process runs frequently at speed 1/n. – Reschedule latency : How long it takes from when a process requests to run, until it finally gets control of the CPU. 144
  • 145. ROUND ROBIN • Choosing a time quantum – Too short - inordinate fraction of the time is spent in context switches. – Too long - reschedule latency is too great. If many processes want the CPU, then it's a long time before a particular process can get the CPU. This then acts like FCFS. – Adjust so most processes won't use their slice. As processors have become faster, this is less of an issue. 145
  • 146. Round-Robin Scheduling NEXT SLIDE
  • 147. Multilevel Queue • Ready Queue partitioned into separate queues – Example: system processes, foreground (interactive), background (batch), student processes…. • Each queue has its own scheduling algorithm – Example: foreground (RR), background(FCFS) • Processes assigned to one queue permanently. • Scheduling must be done between the queues – Fixed priority - serve all from foreground, then from background. Possibility of starvation. – Time slice - Each queue gets some CPU time that it schedules - e.g. 80% foreground(RR), 20% background (FCFS)
  • 149. MULTI-LEVEL QUEUES: • Each queue has its scheduling algorithm. • Then some other algorithm (perhaps priority based) arbitrates between queues. • Can use feedback to move between queues • Method is complex but flexible. • For example, could separate system processes, interactive, batch, favored, unfavored processes 149
  • 150. Multilevel Queue Interactive Systems A scheduling algorithm with four priority classes 150
  • 151. Scheduling in Real-Time Systems Real Time Scheduling: •Hard real-time systems – required to complete a critical task within a guaranteed amount of time. •Soft real-time computing – requires that critical processes receive priority over less fortunate ones. 151
  • 152. Scheduling in Real-Time Systems Schedulable real-time system • Given – m periodic events – event i occurs within period Pi and requires Ci seconds • Then the load can only be handled if C1/P1 + C2/P2 + … + Cm/Pm ≤ 1 152
  • 153. Scheduling in Real-Time Systems Example: Event 1: period 100, CPU time 50; Event 2: period 200, CPU time 30; Event 3: period 500, CPU time 100. Here 50/100 + 30/200 + 100/500 = 0.5 + 0.15 + 0.2 = 0.85, so the system is schedulable because the sum of Ci/Pi is ≤ 1. 153
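The test above is easy to automate; a small sketch using the numbers from the slide's example (event periods Pi and CPU demands Ci):

#include <stdio.h>

static int schedulable(const double C[], const double P[], int m)
{
    double load = 0.0;
    for (int i = 0; i < m; i++)
        load += C[i] / P[i];              /* fraction of the CPU event i consumes */
    return load <= 1.0;
}

int main(void)
{
    double C[] = { 50, 30, 100 };         /* CPU time needed per event */
    double P[] = { 100, 200, 500 };       /* period of each event */
    printf("load = %.2f, schedulable: %s\n",
           50.0/100 + 30.0/200 + 100.0/500,
           schedulable(C, P, 3) ? "yes" : "no");   /* prints load = 0.85, yes */
    return 0;
}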
  • 154. Policy versus Mechanism • Separate what is allowed to be done with how it is done – a process knows which of its children threads are important and need priority • Scheduling algorithm parameterized – mechanism in the kernel • Parameters filled in by user processes – policy set by user process 154
  • 155. Thread Scheduling (1) Possible scheduling of user-level threads • 50-msec process quantum • threads run 5 msec/CPU burst 155
  • 156. Thread Scheduling (2) Possible scheduling of kernel-level threads • 50-msec process quantum • threads run 5 msec/CPU burst 156
  • 157. FCFS Process Burst Time P1 3 P2 6 P3 4 P4 2 Order : P1,P2,P3,P4 FCFS Process Compl Time P1 3 P2 9 P3 13 P4 15 average Waiting Time = ( )/4 = 157
  • 158. Shortest Job First Process Burst Time P1 3 P2 6 P3 4 P4 2 Process Compl Time P4 2 P1 5 P3 9 P2 15 Average Waiting Time = ( )/4 = 158
  • 159. Priority Scheduling Process Burst Time Priority P1 3 2 P2 6 4 P3 4 1 P4 2 3 Gantt Chart : P3 P1 P4 P2 0 4 7 9 15 Average Waiting Time: = 159
  • 160. Round Robin Scheduling Process Burst Time P1 3 P2 6 P3 4 P4 2 Time Quantum : 2ms Gantt Chart : ? Average Waiting Time: = 160
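One way to check the blanks on the last few slides (a sketch, not from the deck): with every job arriving at time 0, each process waits for the total burst time of everything scheduled before it. The round-robin case would need a small quantum-based simulation instead, so it is not covered here.

#include <stdio.h>

static double avg_waiting_time(const int burst[], int n)
{
    int wait = 0, elapsed = 0;
    for (int i = 0; i < n; i++) {
        wait += elapsed;          /* process i waits for everything scheduled before it */
        elapsed += burst[i];
    }
    return (double)wait / n;
}

int main(void)
{
    int fcfs[] = { 3, 6, 4, 2 };  /* P1, P2, P3, P4 in request order */
    int sjf[]  = { 2, 3, 4, 6 };  /* P4, P1, P3, P2: shortest job first */
    printf("FCFS average waiting time: %.2f\n", avg_waiting_time(fcfs, 4));  /* 6.25 */
    printf("SJF  average waiting time: %.2f\n", avg_waiting_time(sjf, 4));   /* 4.00 */
    return 0;
}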

Editor's notes

  1. Notice that the condition of one person in the kitchen is now relaxed. Two new rules: there must be a dish in the sink to clean a dish; there must be a dish in dishes to cook.