2. 1. Background
• Concurrent access to shared data may result in data inconsistency
• Maintaining data consistency requires mechanisms to ensure the
orderly execution of cooperating processes
• A solution to the producer-consumer problem that fills all the
buffers can use an integer count that keeps track of the number
of full buffers. Initially, count is set to 0. It is incremented by the
producer after it produces a new buffer and is decremented by
the consumer after it consumes a buffer.
Producer:
while (true) {
    while (count == BUFFER_SIZE)
        ; // do nothing
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    count++;
}
Consumer:
while (true) {
    while (count == 0)
        ; // do nothing
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    count--;
}
Loganatahn R, CSE, HKBKCE 2
3. 1. Background Contd…
• A situation where several processes access and manipulate the same data
concurrently and the outcome of the execution depends on the particular
order in which the accesses take place is called a race condition
• To guard against race conditions, we must ensure that only one process
at a time can manipulate the data, which requires that the processes be
synchronized
• count++ could be implemented as
register1 = count
register1 = register1 + 1
count = register1
• count-- could be implemented as
register2 = count
register2 = register2 - 1
count = register2
• The concurrent execution of "count++" and "count--" is equivalent to a
sequential execution where the lower-level statements are interleaved in
some arbitrary order, but the order within each high-level statement is
preserved
• Consider this execution interleaving with “count = 5” initially:
S0: producer execute register1 = count {register1 = 5}
S1: producer execute register1 = register1 + 1 {register1 = 6}
S2: consumer execute register2 = count {register2 = 5}
S3: consumer execute register2 = register2 - 1 {register2 = 4}
S4: producer execute count = register1 {count = 6 }
S5: consumer execute count = register2 {count = 4}
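The lost update in this schedule is easy to reproduce deterministically. The sketch below (Python; the function name run_schedule is ours, not from the slides) replays steps S0–S5 exactly:

```python
def run_schedule():
    """Replay the interleaving S0..S5 with count = 5 initially."""
    count = 5
    register1 = count            # S0: producer reads count
    register1 = register1 + 1    # S1: producer increments its private copy
    register2 = count            # S2: consumer reads count (still 5)
    register2 = register2 - 1    # S3: consumer decrements its private copy
    count = register1            # S4: producer writes back 6
    count = register2            # S5: consumer writes back 4
    return count

print(run_schedule())  # 4: the producer's update is lost
```

Swapping S4 and S5 would yield 6 instead; the correct result, 5, is produced only by a schedule in which one full increment/decrement completes before the other begins.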
4. 2. The Critical-Section Problem
• Each process in a system has a segment of code, called a critical
section, in which the process may be changing common
variables, updating a table, writing a file, and so on
• The critical-section problem: when one process is executing in its
critical section, no other process is to be allowed to execute in its
critical section
• Each process must request permission to enter its critical section
and the code implementing this request is the entry section
• The critical section is followed by an exit section and the
remaining code is the remainder section
• General structure of a typical process
do{
entry section
critical section
exit section
remainder section
} while (TRUE);
5. 2. The Critical-Section Problem Contd…
• A solution to the critical-section problem must satisfy the following three
requirements
1. Mutual Exclusion - If process Pi is executing in its critical section, then no
other processes can be executing in their critical sections
2. Progress - If no process is executing in its critical section and there exist some
processes that wish to enter their critical section, then only those processes
that are not executing in their remainder sections can participate in the
decision on which will enter its critical section next, and this selection cannot
be postponed indefinitely
3. Bounded Waiting - There exists a bound, or limit, on the number of times
that other processes are allowed to enter their critical sections after a
process has made a request to enter its critical section and before that
request is granted
Assume that each process executes at a nonzero speed
No assumption concerning relative speed of the N processes
• Two approaches are used to handle critical sections in OS (1) preemptive
kernels and (2) nonpreemptive kernels
6. 3. Peterson's Solution
• Restricted to two processes only
• Two data items are shared between the processes Pi and Pj:
– int turn;
– boolean flag[2];
• The variable turn indicates whose turn it is to enter the critical
section. If turn == i, then process Pi is allowed
• The flag array is used to indicate if a process is ready to enter the
critical section. flag[i] = true implies that process Pi is ready
• Algorithm for Pi
do {
flag[ i ] = TRUE;
turn = j ;
while (flag[j] && turn == j ) ;
critical section
flag[i] = FALSE;
remainder section
} while (TRUE);
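As a sketch, the algorithm above can be exercised with two Python threads. This is an illustration only: CPython's global interpreter lock hides the compiler and hardware reordering that a real implementation must prevent with memory barriers, and all names below are ours.

```python
import threading

flag = [False, False]   # flag[i]: process i is ready to enter
turn = 0                # whose turn it is when both are ready
counter = 0             # shared data the critical section protects
N = 2000

def worker(i):
    global turn, counter
    j = 1 - i
    for _ in range(N):
        flag[i] = True              # entry section: declare intent
        turn = j                    # give the other process priority
        while flag[j] and turn == j:
            pass                    # busy-wait until it is safe to enter
        counter += 1                # critical section
        flag[i] = False             # exit section

t0 = threading.Thread(target=worker, args=(0,))
t1 = threading.Thread(target=worker, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
print(counter)  # 2 * N: mutual exclusion means no increment is lost
```

Without the entry/exit protocol, some increments of counter could be lost to the same interleaving shown in the Background section.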
7. 3. Peterson's Solution Contd…
• To Prove
1.Mutual exclusion is preserved.
– Pi enters its critical section only if either flag[j] == false or
turn == i
– If both processes were executing in their critical sections at the
same time, then flag[i] == flag[j] == true, which is not possible
2.The progress requirement is satisfied.
– Since Pj does not change the value of the variable turn while
executing its while statement, Pi will enter the critical
section
3.The bounded-waiting requirement is met.
– Once Pj exits its critical section, it will reset flag[j] to false,
allowing Pi to enter its critical section.
8. 4. Synchronization Hardware
• Race conditions are prevented by requiring that critical
regions be protected by locks i.e., a process must acquire
a lock before entering a critical section and it releases the
lock when it exits the critical section.
do {
acquire lock
critical section
release lock
remainder section
} while (TRUE);
• All these solutions are based on locking; however, the design of such
locks can be quite sophisticated
• On uniprocessors, interrupts could be disabled while a shared variable
is being modified, but this approach is not feasible on multiprocessor
systems
• Modern machines provide special atomic(non-
interruptable) hardware instructions
– Either test memory word and set value - TestAndSet()
– Or swap contents of two memory words – Swap()
9. 4. Synchronization Hardware Contd…
TestAndSet Instruction
boolean TestAndSet (boolean *target)
{
    boolean rv = *target;
    *target = TRUE;
    return rv;
}
• If two TestAndSet() instructions are executed simultaneously (each on a
different CPU), they will be executed sequentially in some arbitrary
order
Mutual-exclusion implementation with TestAndSet(), where lock is
initialized to FALSE:
while (true) {
    while (TestAndSet(&lock))
        ; // do nothing
    // critical section
    lock = FALSE;
    // remainder section
}
10. 4. Synchronization Hardware Contd…
Swap() instruction
void Swap (boolean *a, boolean *b)
{
    boolean temp = *a;
    *a = *b;
    *b = temp;
}
Mutual-exclusion implementation with Swap()
• A global Boolean variable lock is declared and initialized to false, and
each process has a local Boolean variable key
while (true) {
    key = TRUE;
    while (key == TRUE)
        Swap(&lock, &key);
    // critical section
    lock = FALSE;
    // remainder section
}
• Neither of the above two algorithms satisfies the bounded-waiting
requirement
11. 4. Synchronization Hardware Contd…
Bounded-waiting mutual exclusion with TestAndSet()
• Common data structures, boolean waiting[n]; and boolean lock;, are
initialized to false
do {
    waiting[i] = TRUE;
    key = TRUE;
    while (waiting[i] && key)
        key = TestAndSet(&lock);
    waiting[i] = FALSE;
    // critical section
    j = (i + 1) % n;
    while ((j != i) && !waiting[j])
        j = (j + 1) % n;
    if (j == i)
        lock = FALSE;
    else
        waiting[j] = FALSE;
    // remainder section
} while (TRUE);
• Mutual exclusion: process Pi can enter its critical section only if
either waiting[i] == false or key == false. The first process to execute
TestAndSet() will find key == false; all others must wait. The variable
waiting[i] can become false only if another process leaves its critical
section, and only one waiting[i] is set to false, maintaining the
mutual-exclusion requirement
• Progress: a process exiting its critical section either sets lock to
false or sets waiting[j] to false; either action allows a process that
is waiting to enter its critical section to proceed
• Bounded waiting: a process leaving its critical section scans the array
waiting in the cyclic ordering (i+1, i+2, ..., n-1, 0, ..., i-1) and
designates the first process in this ordering that is in the entry
section (waiting[j] == true) as the next one to enter the critical
section. Any process waiting to enter its critical section will do so
within n-1 turns
12. 5. Semaphores
• Synchronization tool that does not require busy waiting
• A semaphore S is an integer variable that, apart from initialization, is
accessed only by two standard atomic operations, wait() and signal(),
originally called P() and V()
• The definitions of wait() and signal():
wait (S) {
    while (S <= 0)
        ; // no-op
    S--;
}
signal (S) {
    S++;
}
• When one process modifies the semaphore, no other process can modify it
• 5.1 Usage
• Counting semaphore – integer value can range over an unrestricted domain
• Binary semaphore – integer value can range only between 0 and 1 and also
known as mutex locks, as they are locks that provide mutual exclusion
• Binary semaphores can be used to solve the critical-section problem for
multiple processes: the processes share a semaphore, mutex, initialized
to 1
do {
    wait(mutex);
    // critical section
    signal(mutex);
    // remainder section
} while (TRUE);
13. 5. Semaphores Contd…
• Counting semaphores can be used to control access to a resource consisting of
a finite number of instances.
• The semaphore is initialized to the number of resources available.
• A process that uses a resource performs a wait() operation (decrementing
the count); when it releases a resource, it performs a signal() operation
(incrementing the count)
• When the count for the semaphore goes to 0, all resources are being used.
After that, processes that wish to use a resource will block until the count
becomes greater than 0.
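This use of a counting semaphore can be sketched with Python's built-in threading.Semaphore. The bookkeeping variables (in_use, peak) are ours, added only to observe that the limit is respected.

```python
import threading, time

MAX = 3                               # number of identical resource instances
pool = threading.Semaphore(MAX)       # counting semaphore, initialized to MAX
state = threading.Lock()
in_use = 0
peak = 0

def client():
    global in_use, peak
    pool.acquire()                    # wait(): blocks once all instances are taken
    with state:
        in_use += 1
        peak = max(peak, in_use)
    time.sleep(0.01)                  # pretend to use the resource
    with state:
        in_use -= 1
    pool.release()                    # signal(): return the instance to the pool

threads = [threading.Thread(target=client) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
print(peak <= MAX)  # True: never more than MAX concurrent holders
```

Ten clients contend for three instances, yet peak can never exceed MAX, because the fourth acquire() blocks until some client releases.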
5.2 Implementation
• The disadvantage of this semaphore definition is that it requires busy
waiting, i.e. while a process is in its critical section, any other
process that tries to enter its critical section must loop continuously
(wasting CPU cycles)
• This type of semaphore is also called a spinlock because the process "spins"
while waiting for the lock(Advantage : no context switch)
• To overcome busy waiting, the definition of wait () and signal () are modified
• When the semaphore value is not positive, the process can block itself
with a block() operation, which moves the process to a waiting queue; it
is restarted by a wakeup() operation when some other process executes a
signal() operation
14. 5. Semaphores Contd…
• To implement semaphores under this definition
typedef struct {
int value;
struct process *list;
} semaphore;
• Implementation of wait():
wait(semaphore *S) {
S->value --;
if (S->value < 0) {
//add this process to S->list;
block();
}
}
• Implementation of signal:
signal(semaphore *S) {
S->value++;
if (S->value <= 0) {
remove a process P from S->list;
wakeup(P);
}
}
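A Python sketch of this blocking definition is below, with threading.Condition standing in for the S->list queue plus the block()/wakeup() pair; the class name and the _wakeups counter are ours.

```python
import threading

class BlockingSemaphore:
    """Semaphore in the style of the slide: value may go negative,
    and |value| then counts the processes blocked on the list."""
    def __init__(self, value):
        self._value = value
        self._wakeups = 0                      # signals not yet consumed
        self._cond = threading.Condition()     # plays S->list + block()/wakeup()

    def wait(self):
        with self._cond:
            self._value -= 1
            if self._value < 0:
                while self._wakeups == 0:
                    self._cond.wait()          # block(): sleep, no busy waiting
                self._wakeups -= 1

    def signal(self):
        with self._cond:
            self._value += 1
            if self._value <= 0:
                self._wakeups += 1
                self._cond.notify()            # wakeup(P): restart one process

# Tiny check: a thread blocked on wait() is resumed by signal().
s = BlockingSemaphore(0)
done = []
t = threading.Thread(target=lambda: (s.wait(), done.append(True)))
t.start()
s.signal()
t.join()
print(done)  # [True]
```

Note the while loop around the condition wait: a woken thread re-checks its wakeup count, which is the safe pattern under Mesa-style (signal-and-continue) condition variables.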
15. 5. Semaphores Contd…
5.3 Deadlock and Starvation
• Deadlock – two or more processes are waiting indefinitely for an
event that can be caused by only one of the waiting processes
• Let S and Q be two semaphores initialized to 1
P0:
wait(S);
wait(Q);
...
signal(S);
signal(Q);
P1:
wait(Q);
wait(S);
...
signal(Q);
signal(S);
• When P0 executes wait(Q), it must wait until P1 executes signal(Q);
similarly, when P1 executes wait(S), it must wait until P0 executes
signal(S)
• Since these signal() operations can never be executed, P0 and P1 are
deadlocked
• Starvation – indefinite blocking. A process may never be removed
from the semaphore queue in which it is suspended.
16. 6. Classical Problems of Synchronization
• Large class of concurrency-control problems which are used for testing nearly
every newly proposed synchronization scheme.
– Bounded-Buffer Problem
– Readers and Writers Problem
– Dining-Philosophers Problem
6.1 Bounded-Buffer Problem
• N buffers, each can hold one item
• Semaphore mutex initialized to the value 1 to provide mutual exclusion for
accesses to the buffer
• Semaphore full initialized to the value 0
• Semaphore empty initialized to the value N.
The structure of the producer process:
do {
    // produce an item
    wait (empty);
    wait (mutex);
    // add the item to the buffer
    signal (mutex);
    signal (full);
} while (true);
The structure of the consumer process:
do {
    wait (full);
    wait (mutex);
    // remove an item from buffer
    signal (mutex);
    signal (empty);
    // consume the removed item
} while (true);
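The scheme above can be run directly with Python's threading.Semaphore; the buffer is a deque, and ITEMS is an arbitrary test size of ours.

```python
import threading
from collections import deque

N = 5
buffer = deque()
mutex = threading.Semaphore(1)   # mutual exclusion for buffer accesses
empty = threading.Semaphore(N)   # counts empty slots, initialized to N
full = threading.Semaphore(0)    # counts full slots, initialized to 0
ITEMS = 100
consumed = []

def producer():
    for item in range(ITEMS):
        empty.acquire()          # wait(empty)
        mutex.acquire()          # wait(mutex)
        buffer.append(item)      # add the item to the buffer
        mutex.release()          # signal(mutex)
        full.release()           # signal(full)

def consumer():
    for _ in range(ITEMS):
        full.acquire()           # wait(full)
        mutex.acquire()          # wait(mutex)
        consumed.append(buffer.popleft())
        mutex.release()          # signal(mutex)
        empty.release()          # signal(empty)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
print(consumed == list(range(ITEMS)))  # True: all items arrive, in order
```

The empty semaphore stops the producer when the buffer holds N items, and full stops the consumer when it is empty, so neither side ever busy-waits on count as in the Background version.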
17. 6. Classical Problems of Synchronization Contd..
6.2 Readers-Writers Problem
• A database is shared among a number of concurrent processes
– Readers – only read the data set; they do not perform any updates
– Writers – can both read and write.
• Problem –
• The first readers-writers problem requires that no reader be kept
waiting unless a writer has already obtained permission to use the
shared object
• The second readers-writers problem requires that, once a writer is
ready, that writer perform its write as soon as possible
• That is, multiple readers may read at the same time, but only a
single writer can access the shared data at any one time
• Solution to the first readers-writers problem shares the following data:
– Semaphore mutex initialized to 1
– Semaphore wrt initialized to 1
– Integer readcount initialized to 0
The structure of a writer process:
while (true) {
    wait (wrt) ;
    // writing is performed
    signal (wrt) ;
}
18. 6. Classical Problems of Synchronization Contd..
• If a writer is in the critical section and n readers are waiting, then one
reader is queued on wrt, and n - 1 readers are queued on mutex.
• When a writer executes signal(wrt), it may resume the execution of
either the waiting readers or a single waiting writer
The structure of a reader process
do {
    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);
    signal(mutex);
    // reading is performed
    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);
    signal(mutex);
} while (TRUE);
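A Python sketch of the first readers-writers solution is below. The writer updates two fields that it keeps equal; a reader flags an inconsistency if it ever sees them differ. The data layout and loop counts are our test scaffolding, not part of the classical solution.

```python
import threading

mutex = threading.Semaphore(1)   # protects readcount
wrt = threading.Semaphore(1)     # writer exclusion; held by the reader group
readcount = 0
data = [0, 0]                    # invariant: the writer keeps both entries equal
inconsistent = []

def writer():
    for i in range(200):
        wrt.acquire()
        data[0] = i              # writing is performed
        data[1] = i
        wrt.release()

def reader():
    global readcount
    for _ in range(200):
        mutex.acquire()
        readcount += 1
        if readcount == 1:
            wrt.acquire()        # first reader locks out writers
        mutex.release()
        if data[0] != data[1]:   # reading is performed
            inconsistent.append(data[:])
        mutex.acquire()
        readcount -= 1
        if readcount == 0:
            wrt.release()        # last reader lets writers back in
        mutex.release()

ts = [threading.Thread(target=writer)] + \
     [threading.Thread(target=reader) for _ in range(3)]
for t in ts: t.start()
for t in ts: t.join()
print(len(inconsistent))  # 0: readers never observe a half-finished write
```

While any reader is between its entry and exit sections, readcount >= 1 and the group holds wrt, so the writer cannot run; this is exactly why the read of both fields is always consistent.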
19. 6. Classical Problems of Synchronization Contd..
6.3 The Dining-Philosophers Problem
• A classic concurrency-control problem: it is a simple representation of
the need to allocate several resources among several processes in a
deadlock-free and starvation-free manner
• Five philosophers who spend their lives thinking and eating share a
circular table surrounded by five chairs; in the center of the table is
a bowl of rice, and the table is laid with five single chopsticks
• From time to time, a philosopher gets hungry and tries to pick up the
two chopsticks that are between him and his left and right neighbors
• A philosopher may pick up only one chopstick at a time; he cannot pick
up a chopstick that is already in the hand of a neighbor
• When a hungry philosopher has both his chopsticks at the same time, he
eats without releasing his chopsticks until he has finished eating
20. 6. Classical Problems of Synchronization Contd..
• Simple solution is to represent each chopstick with a semaphore
semaphore chopstick[5]; and all are initialized to 1
• A philosopher tries to grab a chopstick by executing a wait () operation on that
semaphore
• A philosopher releases the chopsticks by executing the signal() operation on
the appropriate semaphores
The structure of philosopher Pi:
do {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);
    // eat
    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);
    // think
} while (TRUE);
• This could create a deadlock: if all five philosophers become hungry
simultaneously and each grabs his left chopstick, all the elements of
chopstick[] will equal 0, and each philosopher will wait forever for
his right chopstick
• Several possible remedies:
– Allow at most four philosophers to be sitting simultaneously at the
table
– Allow a philosopher to pick up her chopsticks only if both chopsticks
are available
– Allow an odd philosopher to pick up first the left chopstick and then
the right chopstick, whereas an even philosopher picks up the right
chopstick and then the left chopstick
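The third remedy (asymmetric pickup order) can be sketched in Python; with the acquisition orders staggered, the circular wait can never form, so the run always terminates. The meals counter is ours, added to verify completion.

```python
import threading

N, MEALS = 5, 20
chopstick = [threading.Semaphore(1) for _ in range(N)]   # all initialized to 1
meals = [0] * N

def philosopher(i):
    left, right = i, (i + 1) % N
    # Asymmetric remedy: odd philosophers pick up left then right,
    # even philosophers pick up right then left -> no circular wait.
    first, second = (left, right) if i % 2 else (right, left)
    for _ in range(MEALS):
        chopstick[first].acquire()     # wait(chopstick[first])
        chopstick[second].acquire()    # wait(chopstick[second])
        meals[i] += 1                  # eat
        chopstick[second].release()    # signal(chopstick[second])
        chopstick[first].release()     # signal(chopstick[first])
        # think

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()             # terminates: this schedule cannot deadlock
print(meals)
```

With the symmetric left-then-right order of the original code, the same program could hang forever in the all-hungry scenario described above.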
21. 7. Monitors
• Incorrect uses of semaphore operations:
1. Interchanging the order of wait() and signal():
signal(mutex);
// critical section
wait(mutex);
2. Replacing signal(mutex) with wait(mutex):
wait(mutex);
// critical section
wait(mutex);
3. Omitting the wait(mutex), or the signal(mutex), or both
To deal with such errors, researchers have developed a high-level language
synchronization construct called the monitor type
7.1 Usage
• A type, or abstract data type, encapsulates private data with public methods
to operate on that data
• Presents a set of programmer-defined operations that are provided mutual
exclusion within the monitor
• A procedure defined within a monitor can access only those variables declared
locally within the monitor and its formal parameters
• Only one process may be active within the monitor at a time
• The syntax of a monitor is shown
22. 7. Monitors Contd…
Syntax of a monitor:
monitor monitor-name
{
    // shared variable declarations
    procedure P1 (…) { …. }
    procedure P2 (…) { …. }
    …
    procedure Pn (…) { …. }
    initialization code (…) { … }
}
Schematic view of a monitor (figure not reproduced)
• Additional synchronization
mechanisms are provided in
monitor by the condition
construct which define one
or more variables of type
condition
condition x, y;
23. 7. Monitors Contd…
• The only operations that can be invoked on a condition variable are wait () and
signal()
• x.wait() means that the process invoking this operation is suspended
until another process invokes x.signal(), which resumes exactly one
suspended process (if any)
Monitor with Condition Variables
24. 7. Monitors Contd…
7.2 Dining-Philosophers Solution Using Monitors
• Philosopher i can set the variable state[i] = eating only if two neighbors are not
eating: (state[(i+4)%5] != eating) and (state[(i+1) % 5] != eating)
• Philosopher i must invoke the operations pickup() and putdown() in the
following sequence:
DP.pickup(i);
// eat
DP.putdown(i);
• This solution ensures that no deadlock will occur, but starvation may
still occur
monitor DP
{
    enum {THINKING, HUNGRY, EATING} state[5];
    condition self[5];

    void pickup (int i) {
        state[i] = HUNGRY;
        test(i);
        if (state[i] != EATING)
            self[i].wait();
    }

    void putdown (int i) {
        state[i] = THINKING;
        // test left and right neighbors
        test((i + 4) % 5);
        test((i + 1) % 5);
    }

    void test (int i) {
        if ((state[(i + 4) % 5] != EATING) &&
            (state[i] == HUNGRY) &&
            (state[(i + 1) % 5] != EATING)) {
            state[i] = EATING;
            self[i].signal();
        }
    }

    initialization_code() {
        for (int i = 0; i < 5; i++)
            state[i] = THINKING;
    }
}
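The monitor can be sketched in Python with one Lock (standing in for "only one process active in the monitor") and per-philosopher Condition objects for self[i]. Note one deliberate change: we wait in a while loop rather than the slide's single if, because Python conditions have Mesa (signal-and-continue) semantics. The class and variable names are ours.

```python
import threading

THINKING, HUNGRY, EATING = range(3)

class DP:
    """Monitor-style dining philosophers: self.lock gives 'one active
    process in the monitor'; self_[i] plays condition self[i]."""
    def __init__(self):
        self.state = [THINKING] * 5
        self.lock = threading.Lock()
        self.self_ = [threading.Condition(self.lock) for _ in range(5)]

    def pickup(self, i):
        with self.lock:
            self.state[i] = HUNGRY
            self._test(i)
            while self.state[i] != EATING:   # loop, not 'if': Mesa semantics
                self.self_[i].wait()

    def putdown(self, i):
        with self.lock:
            self.state[i] = THINKING
            self._test((i + 4) % 5)          # test left and right neighbors
            self._test((i + 1) % 5)

    def _test(self, i):
        if (self.state[(i + 4) % 5] != EATING and
                self.state[i] == HUNGRY and
                self.state[(i + 1) % 5] != EATING):
            self.state[i] = EATING
            self.self_[i].notify()

dp = DP()
violations = []

def philosopher(i):
    for _ in range(50):
        dp.pickup(i)
        # While i is EATING, _test can never promote a neighbor, so this
        # check must always pass (an unlocked read; safe under CPython's GIL).
        if dp.state[(i + 4) % 5] == EATING or dp.state[(i + 1) % 5] == EATING:
            violations.append(i)
        dp.putdown(i)

ts = [threading.Thread(target=philosopher, args=(i,)) for i in range(5)]
for t in ts: t.start()
for t in ts: t.join()
print(violations)  # []: no two neighbors ever eat at the same time
```

Each putdown() re-tests both neighbors, which is what guarantees a blocked philosopher is eventually promoted once its neighbors stop eating.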
25. 7. Monitors Contd…
7.3 Implementing a Monitor Using Semaphores
• Semaphores and variables:
semaphore mutex; // (initially = 1) processes waiting to enter the
                 // monitor wait on mutex
semaphore next;  // (initially = 0) processes that are within the monitor
                 // and ready wait on next to become the active process
int next_count = 0; // number of processes suspended on next
• Each procedure F will be replaced by
wait(mutex);
// body of F
if (next_count > 0)
    signal(next);
else
    signal(mutex);
to ensure mutual exclusion within the monitor
• For each condition variable x, the following semaphore and variable
are added:
semaphore x_sem; // initially = 0
int x_count = 0;
• The operation x.wait() can be implemented as:
x_count++;
if (next_count > 0)
    signal(next);
else
    signal(mutex);
wait(x_sem);
x_count--;
• The operation x.signal() can be implemented as:
if (x_count > 0) {
    next_count++;
    signal(x_sem);
    wait(next);
    next_count--;
}
26. 7. Monitors Contd…
7.4 Resuming Processes Within a Monitor
• If several processes are suspended on condition x, and an x.signal() is
executed by some process, how do we determine which of the suspended
processes should be resumed next?
• Using an FCFS ordering scheme may not be adequate
• The conditional-wait construct can be used, which has the form
x.wait(c), where c is a priority number (for example, the time required
to complete the process); when x.signal() is executed, the process with
the smallest priority number is resumed next
• The ResourceAllocator monitor below allocates a single resource among
competing processes:
monitor ResourceAllocator {
    boolean busy;
    condition x;
    void acquire(int time) {
        if (busy)
            x.wait(time);
        busy = TRUE;
    }
    void release() {
        busy = FALSE;
        x.signal();
    }
    initialization_code() {
        busy = FALSE;
    }
}
• For example, a process that needs to access the resource must observe
the following sequence:
R.acquire(t);
    access the resource;
R.release();
where R is an instance of type ResourceAllocator