Multitasking and Process Management in Embedded Systems
Real-Time Scheduling Algorithms
Differences Between
Multithreading and Multitasking
Processing Elements Architecture
Flynn’s Classification Of Computer Architectures
• In 1966, Michael Flynn proposed a classification of
computer architectures based on the number of
instruction streams and data streams (Flynn’s
Taxonomy).
• Flynn uses the stream concept to describe a
machine's structure.
• A stream simply means a sequence of items (data or
instructions).
Simple classification by Flynn:
(No. of instruction and data streams)
> SISD - conventional
> SIMD - data parallel, vector computing
> MISD - systolic arrays
> MIMD - very general, multiple approaches.
Current focus is on MIMD model, using
general purpose processors.
(No shared memory)
Processing Elements
Processor Organizations
Computer Architecture Classifications
> Single Instruction, Single Data Stream (SISD): Uniprocessor
> Single Instruction, Multiple Data Stream (SIMD): Vector Processor, Array Processor
> Multiple Instruction, Single Data Stream (MISD)
> Multiple Instruction, Multiple Data Stream (MIMD): Shared Memory (tightly coupled), Multicomputer (loosely coupled)
SISD : A Conventional Computer
 Speed is limited by the rate at which the computer can
transfer information internally.
[Diagram: an instruction stream drives a single processor; data input flows in, data output flows out]
Ex: PC, Macintosh, Workstations
SISD
SISD (Single-Instruction stream, Single-Data stream)
SISD corresponds to the traditional mono-processor
(von Neumann computer). A single data stream is
processed by one instruction stream; equivalently,
a single-processor computer (uni-processor) in which
a single stream of instructions is generated from the
program.
SISD
where CU= Control Unit, PE= Processing
Element, M= Memory
SIMD
SIMD (Single-Instruction stream, Multiple-Data
streams)
Each instruction is executed on a different set of
data by different processors, i.e., multiple processing
units of the same type operate on multiple data
streams.
This group covers array processing machines.
Sometimes, vector processors can also be seen as
part of this group.
SIMD
where CU= Control Unit, PE= Processing
Element, M= Memory
MISD
MISD (Multiple-Instruction streams, Single-Data
stream)
Each processor executes a different sequence of
instructions.
In MISD computers, multiple processing units
operate on one single data stream.
In practice, this kind of organization has never been
used.
MISD
where CU= Control Unit, PE= Processing
Element, M= Memory
MIMD
MIMD (Multiple-Instruction streams, Multiple-Data
streams)
Each processor has a separate program.
An instruction stream is generated from each
program.
Each instruction operates on different data.
This last machine type forms the group of
traditional multi-processors: several processing units
operate on multiple data streams.
MIMD Diagram
The MISD Architecture
 More of an intellectual exercise than a practical configuration.
Few were built, but commercially not available.
[Diagram: a single data stream passes through Processors A, B, and C, each driven by its own instruction stream]
SIMD Architecture
Ex: CRAY machine vector processing, Thinking Machines CM*
Ci <= Ai * Bi
[Diagram: one instruction stream drives Processors A, B, and C; each processor reads its own data input stream and writes its own data output stream]
MIMD Architecture
Unlike SISD and MISD, a MIMD computer works asynchronously.
> Shared memory (tightly coupled) MIMD
> Distributed memory (loosely coupled) MIMD
[Diagram: Processors A, B, and C, each with its own instruction stream, data input stream, and data output stream]
Shared Memory MIMD machine
[Diagram: Processors A, B, and C connected over memory buses to a Global Memory System]
Comm: a source PE writes data to Global Memory & the destination PE retrieves it
 Easy to build; conventional OSes for SISD can easily be ported
 Limitation : reliability & expandability. A memory component or
any processor failure affects the whole system.
 Increasing the number of processors leads to memory contention.
Ex. : Silicon Graphics supercomputers....
Distributed Memory MIMD
[Diagram: Processors A, B, and C, each with a private Memory System, connected by IPC (inter-process communication) channels]
Communication : IPC over a High Speed Network.
The network can be configured as a Tree, Mesh, Cube, etc.
Unlike Shared Memory MIMD:
 easily/readily expandable
 highly reliable (any CPU failure does not affect the whole system)
Single Core
Figure 1. Single-core systems schedule tasks on 1 CPU to multitask
• Applications that take advantage of multithreading
have numerous benefits, including the following:
1.More efficient CPU use
2.Better system reliability
3.Improved performance on multiprocessor
computers
Multicore
Figure 2. Dual-core systems enable
multitasking operating systems to
execute two tasks simultaneously
• The OS executes multiple applications more
efficiently by splitting the different
applications, or processes, between the
separate CPU cores. The computer can spread
the work - each core is managing and
switching through half as many applications as
before - and deliver better overall throughput
and performance. In effect, the applications
are running in parallel.
3. Multithreading
Threads
• Threads are lightweight processes, as the
overhead of switching between threads is low
• They can be easily spawned
• When your program is run, the Java Virtual
Machine spawns a thread called the Main
Thread
[Diagram: a multi-processor computing system. Applications and programming paradigms sit on top of a threads interface; below it, a microkernel operating system maps threads onto the hardware processing elements (processors P)]
Why do we need threads?
• To enhance parallel processing
• To increase response to the user
• To utilize the idle time of the CPU
• To prioritize work based on task priority
Example
• Consider a simple web server
• The web server listens for requests and serves them
• If the web server were not multithreaded, request
processing would happen in a queue, increasing the
response time; a bad request could even hang the server.
• In a multithreaded environment, the web server can
serve multiple requests simultaneously, improving
response time.
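As a hedged sketch of this idea (the names serveAll and handleRequest are illustrative, not a real server API, and network I/O is replaced by a sleep), the following Java program dispatches simulated requests to a pool of worker threads instead of processing them one by one:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class MultithreadedServerSketch {

    // Simulate serving one slow request (e.g., disk or network I/O).
    static void handleRequest(AtomicInteger served) {
        try {
            Thread.sleep(10);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        served.incrementAndGet();
    }

    // Dispatch n requests to a fixed pool of 4 worker threads and
    // return how many completed.
    static int serveAll(int n) throws InterruptedException {
        AtomicInteger served = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < n; i++) {
            pool.submit(() -> handleRequest(served));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return served.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("served " + serveAll(8) + " requests concurrently");
    }
}
```

Because four requests are in flight at once, the total time is roughly a quarter of the single-threaded queue.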
Synchronization
• Synchronization prevents data corruption
• Synchronization allows only one thread to
perform an operation on an object at a time.
• If multiple threads require access to an
object, synchronization helps maintain
consistency.
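A minimal sketch of this in Java, using the synchronized keyword (the class name and iteration counts are illustrative):

```java
public class SyncCounter {
    private int count = 0;

    // synchronized: only one thread at a time may run these on this object,
    // so increments are never lost.
    public synchronized void increment() { count++; }
    public synchronized int get() { return count; }

    // Two threads each add 10,000 to the same shared counter.
    static int runTwoThreads() throws InterruptedException {
        SyncCounter c = new SyncCounter();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) c.increment();
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return c.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runTwoThreads()); // always 20000 with synchronization
    }
}
```

Without the synchronized keyword the two threads could interleave their read-modify-write steps and the final count could fall short of 20000.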
Threads Concept
[Diagram: multiple threads running on multiple CPUs vs. multiple threads sharing a single CPU]
Creating Tasks and Threads

// Custom task class
public class TaskClass implements Runnable {
  ...
  public TaskClass(...) {
    ...
  }

  // Implement the run method in Runnable
  public void run() {
    // Tell system how to run custom thread
    ...
  }
  ...
}

// Client class
public class Client {
  ...
  public void someMethod() {
    ...
    // Create an instance of TaskClass
    TaskClass task = new TaskClass(...);

    // Create a thread
    Thread thread = new Thread(task);

    // Start the thread
    thread.start();
    ...
  }
  ...
}
Figure 3. Dual-core system enables
multithreading
Multithreading - Uniprocessors
Concurrency vs. Parallelism
• Concurrency: the number of simultaneous execution units > the number of CPUs
[Diagram: processes P1, P2, and P3 time-sliced on a single CPU]
Multithreading - Multiprocessors
Concurrency vs. Parallelism
• Parallelism: the number of executing processes = the number of CPUs
[Diagram: processes P1, P2, and P3 each running on its own CPU]
Multithreaded Compiler
[Diagram: source code flows through a preprocessor thread into a compiler thread, producing object code]
Thread Programming models
1. The boss/worker model
2. The peer model
3. A thread pipeline
The scheduling algorithm is one of the most important parts of an
embedded operating system. The performance of the scheduling
algorithm influences the performance of the whole system.
The boss/worker model
[Diagram: main() acts as the boss, receiving an input stream and dispatching work to worker threads taskX, taskY, and taskZ, which use program resources: files, databases, disks, and special devices]
The peer model
[Diagram: peer threads taskX, taskY, and taskZ each take their own static input and share program resources: files, databases, disks, and special devices]
A thread pipeline
[Diagram: an input stream passes through filter threads in Stage 1, Stage 2, and Stage 3; each stage uses program resources: files, databases, disks, and special devices]
Thread State Diagram
A thread can be in one of five states: New,
Ready, Running, Blocked, or Finished.
Cooperation Among Threads
The conditions can be used to facilitate communications among
threads. A thread can specify what to do under a certain condition.
Conditions are objects created by invoking the newCondition()
method on a Lock object. Once a condition is created, you can use its
await(), signal(), and signalAll() methods for thread communications.
The await() method causes the current thread to wait until the
condition is signaled. The signal() method wakes up one waiting
thread, and the signalAll() method wakes all waiting threads.
«interface» java.util.concurrent.Condition
+await(): void - causes the current thread to wait until the condition is signaled
+signal(): void - wakes up one waiting thread
+signalAll(): void - wakes up all waiting threads
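A minimal sketch of this pattern (the class name, flag, and the 50 ms sleep are illustrative choices, not part of the API): the consumer awaits on a condition created via newCondition() on a Lock, and the producer signals it.

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class ConditionDemo {
    static final Lock lock = new ReentrantLock();
    static final Condition ready = lock.newCondition(); // created on the Lock
    static boolean dataReady = false;

    static void produce() {
        lock.lock();
        try {
            dataReady = true;
            ready.signal(); // wake up one waiting thread
        } finally {
            lock.unlock();
        }
    }

    static void consume() throws InterruptedException {
        lock.lock();
        try {
            while (!dataReady) {
                ready.await(); // wait until the condition is signaled
            }
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread consumer = new Thread(() -> {
            try { consume(); } catch (InterruptedException e) { }
        });
        consumer.start();
        Thread.sleep(50);   // let the consumer block on await() first
        produce();
        consumer.join();
        System.out.println("consumer finished; dataReady=" + dataReady);
    }
}
```

Note that await() is called inside a loop: a waiting thread must re-check its condition after waking up.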
• In many applications, you make synchronous calls to resources, such as
instruments. These instrument calls often take a long time to complete.
In a single-threaded application, a synchronous call effectively blocks,
or prevents, any other task within the application from executing until
the operation completes. Multithreading prevents this blocking.
• While the synchronous call runs on one thread, other parts of the
program that do not depend on this call run on different threads.
Execution of the application progresses instead of stalling until the
synchronous call completes. In this way, a multithreaded application
maximizes the efficiency of the CPU, because the CPU does not idle
if any thread of the application is ready to run.
4. Multithreading with LabVIEW
Semaphores & Deadlock
Semaphores (Optional)
Semaphores can be used to restrict the number of threads that
access a shared resource. Before accessing the resource, a thread
must acquire a permit from the semaphore. After finishing with
the resource, the thread must return the permit back to the
semaphore, as shown:
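A small illustrative sketch (the permit count of 2, the class name, and the sleep are arbitrary choices for the example): six threads contend for a resource guarded by a two-permit semaphore, so no more than two are ever inside at once.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

public class SemaphoreDemo {
    static final Semaphore permits = new Semaphore(2); // at most 2 threads inside
    static final AtomicInteger inUse = new AtomicInteger();
    static int maxObserved = 0;

    static void useResource() throws InterruptedException {
        permits.acquire();                  // get a permit before entering
        try {
            int now = inUse.incrementAndGet();
            synchronized (SemaphoreDemo.class) {
                if (now > maxObserved) maxObserved = now;
            }
            Thread.sleep(10);               // simulate using the resource
            inUse.decrementAndGet();
        } finally {
            permits.release();              // return the permit when done
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[6];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                try { useResource(); } catch (InterruptedException e) { }
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        System.out.println("max threads inside at once: " + maxObserved);
    }
}
```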
Thread Priority
• Each thread inherits the priority of the thread
that spawned it. The priority of the main thread is
Thread.NORM_PRIORITY. You can reset the
priority using setPriority(int priority).
• Some constants for priorities include:
Thread.MIN_PRIORITY : 1
Thread.NORM_PRIORITY : 5
Thread.MAX_PRIORITY : 10
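A short illustrative sketch of these constants and setPriority() (the class name is made up for the example):

```java
public class PriorityDemo {
    public static void main(String[] args) {
        Thread t = new Thread(() -> { });

        System.out.println(Thread.MIN_PRIORITY);   // 1
        System.out.println(Thread.NORM_PRIORITY);  // 5
        System.out.println(Thread.MAX_PRIORITY);   // 10

        // A new thread inherits the priority of the spawning thread.
        System.out.println(t.getPriority() == Thread.currentThread().getPriority());

        t.setPriority(Thread.MAX_PRIORITY);        // reset the priority
        System.out.println(t.getPriority());       // 10
    }
}
```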
Deadlock
Sometimes two or more threads need to acquire the locks on several shared
objects. This can cause deadlock, in which each thread has the lock on one of
the objects and is waiting for the lock on the other object. Consider the scenario
with two threads and two objects. Thread 1 acquired a lock on object1
and Thread 2 acquired a lock on object2. Now Thread 1 is waiting for the lock on
object2 and Thread 2 for the lock on object1. The two threads each wait for the
other to release the lock they need, and neither can continue to run.

Thread 1:
synchronized (object1) {
  // do something here
  synchronized (object2) {
    // do something here
  }
}
Waits for Thread 2 to release the lock on object2.

Thread 2:
synchronized (object2) {
  // do something here
  synchronized (object1) {
    // do something here
  }
}
Waits for Thread 1 to release the lock on object1.
Deadlocks
Thread 1:        Thread 2:
lock( M1 );      lock( M2 );
lock( M2 );      lock( M1 );
Thread 1 is waiting for the resource (M2) locked by Thread 2, and
Thread 2 is waiting for the resource (M1) locked by Thread 1.
Preventing Deadlock
Deadlock can be easily avoided by using a simple technique known
as resource ordering. With this technique, you assign an order to
all the objects whose locks must be acquired and ensure that each
thread acquires the locks in that order. For the example, suppose
the objects are ordered as object1 and object2. Using the resource
ordering technique, Thread 2 must acquire a lock on object1 first,
then on object2. Once Thread 1 has acquired a lock on object1, Thread
2 has to wait for the lock on object1. So Thread 1 will be able to
acquire a lock on object2, and no deadlock will occur.
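A minimal sketch of resource ordering (field names mirror the example above; the class name and counter are illustrative): both threads acquire object1 before object2, so the circular wait cannot form and both threads always finish.

```java
public class ResourceOrdering {
    static final Object object1 = new Object();
    static final Object object2 = new Object();
    static int completed = 0;

    static void doWork() {
        synchronized (object1) {        // always lock object1 first...
            synchronized (object2) {    // ...then object2
                synchronized (ResourceOrdering.class) { completed++; }
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(ResourceOrdering::doWork);
        Thread t2 = new Thread(ResourceOrdering::doWork);
        t1.start(); t2.start();
        t1.join(); t2.join();           // both joins return: no deadlock
        System.out.println("both threads completed: " + completed);
    }
}
```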
Process scheduling in embedded
systems
Process management - Three key
requirements:
1. The timing behavior of the OS must be
predictable - for every service of the OS there must be an
upper bound on its execution time!
2. The OS must manage the timing and scheduling.
The OS possibly has to be aware of task deadlines
(unless scheduling is done off-line).
3. The OS must be fast.
Scheduling States: Simplified View of
Thread State Transitions
[Diagram: states RUNNABLE, ACTIVE, SLEEPING, and STOPPED, with transitions Preempt, Continue, Sleep, Wakeup, and Stop]
Hard and Soft Real Time Systems
(Operational Definition)
• Hard Real Time System
– Validation by provably correct procedures or extensive
simulation that the system always meets the timings
constraints
• Soft Real Time System
– Demonstration of jobs meeting some statistical constraints
suffices.
• Example – Multimedia System
– 25 frames per second on average
Real-Time Systems (continued)
• Soft RTS: meets timing constraints most of the time; it is not
necessary that every time constraint be met. Some deadline
misses are tolerated.
• Hard RTS: meets all time constraints exactly. Every resource
management system must work in the correct order to meet
time constraints. No deadline miss is allowed.
Role of an OS in Real Time Systems
1. Standalone Applications
– Often no OS involved
– Micro controller based Embedded Systems
• Some Real Time Applications are huge & complex
– Multiple threads
– Complicated Synchronization Requirements
– Filesystem / Network / Windowing support
– OS primitives reduce the software design time
Tasks Categories
• Invocation = triggered operations
– Periodic (time-triggered)
– Aperiodic (event-triggered)
• Creation
– Static
– Dynamic
• Multi-Tasking System
– Preemptive: a higher-priority task can take control
of the processor from a lower-priority one
– Non-Preemptive : Each task can control the CPU
for as long as it needs it.
Features of RTOS’s
1. Scheduling.
2. Resource Allocation.
3. Interrupt Handling.
4. Other issues like kernel size.
1. Scheduling in RTOS
• More information about the tasks is known:
– No of tasks
– Resource Requirements
– Release Time
– Execution time
– Deadlines
• Since the system is more deterministic, better scheduling
algorithms can be devised.
Why do we need scheduling?
• Each computation (task) we want to execute needs resources.
• Resources: processor, memory segments, communication, I/O
devices, etc.
• The computations must be executed in a particular order
(relative to each other and/or relative to time).
• The possible ordering is either completely or statistically
known a priori.
• Scheduling: assignment of the processor to computations.
• Allocation: assignment of other resources to computations.
Real-Time Scheduling Algorithms
1. Fixed Priority (Static) Algorithms
   – Rate Monotonic scheduling
   – Deadline Monotonic scheduling
2. Dynamic Priority Algorithms
   – Earliest Deadline First
   – Least Laxity First
3. Hybrid Algorithms
   – Maximum Urgency First
Scheduling Algorithm
Static vs. Dynamic
• Static Scheduling:
– All scheduling decisions at compile time.
• Temporal task structure fixed.
• Precedence and mutual exclusion satisfied by the
schedule (implicit synchronization).
• One solution is sufficient.
• Any solution is a sufficient schedulability test.
– Benefits
• Simplicity
Scheduling Algorithm
Static vs. Dynamic
• Dynamic Scheduling:
– All scheduling decisions at run time.
• Based upon set of ready tasks.
• Mutual exclusion and synchronization enforced by
explicit synchronization constructs.
– Benefits
• Flexibility.
• Only actually used resources are claimed.
– Disadvantages
• Guarantees difficult to support
• Computational resources required for scheduling
Scheduling Algorithm
Preemptive vs. Nonpreemptive
• Preemptive Scheduling:
– Event driven.
• Each event causes interruption of running tasks.
• Choice of running tasks reconsidered after each
interruption.
– Benefits:
• Can minimize response time to events.
– Disadvantages:
• Requires considerable computational resources for
scheduling
• Nonpreemptive Scheduling:
– Tasks remain active till completion
• Scheduling decisions only made after task completion.
– Benefits:
• Reasonable when
task execution times ~= task switching times.
• Less computational resources needed for scheduling
– Disadvantages:
• Can lead to starvation (missed deadlines),
especially for real-time tasks (or high-priority
tasks).
1.A Rate Monotonic scheduling
• Priority assignment based on the rates of tasks
• The higher-rate task is assigned the higher priority
• Schedulable utilization bound = 0.693 (Liu and Layland):
U = Σ (Ci / Ti) ≤ n(2^(1/n) - 1), which tends to ln 2 ≈ 0.693 for large n
where Ci is the computation time, and Ti is the release period
• If U < 0.693, schedulability is guaranteed
• Tasks may be schedulable even if U > 0.693
RM example

Process  Execution Time  Period
P1       1               8
P2       2               5
P3       2               10

The utilization will be:
U = 1/8 + 2/5 + 2/10 = 0.725
The theoretical limit for n = 3 processes, under which we can conclude that the
system is schedulable, is:
3(2^(1/3) - 1) ≈ 0.7798
Since 0.725 < 0.7798, the system is schedulable!
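The same numbers can be checked with a short sketch (the method and class names are illustrative): it computes the total utilization U = Σ Ci/Ti and the Liu-Layland bound n(2^(1/n) - 1).

```java
public class RmSchedulability {

    // Total utilization: sum of Ci / Ti over all tasks.
    static double utilization(double[] c, double[] t) {
        double u = 0;
        for (int i = 0; i < c.length; i++) {
            u += c[i] / t[i];
        }
        return u;
    }

    // Liu-Layland bound for n tasks: n * (2^(1/n) - 1).
    static double liuLaylandBound(int n) {
        return n * (Math.pow(2, 1.0 / n) - 1);
    }

    public static void main(String[] args) {
        double[] c = {1, 2, 2};         // execution times of P1, P2, P3
        double[] t = {8, 5, 10};        // periods
        double u = utilization(c, t);   // 1/8 + 2/5 + 2/10 = 0.725
        double b = liuLaylandBound(3);  // 3 * (2^(1/3) - 1) ≈ 0.7798
        System.out.printf("U = %.3f, bound = %.4f, schedulable = %b%n",
                u, b, u <= b);
    }
}
```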
1.B Deadline Monotonic scheduling
• Priority assignment based on relative deadlines of tasks
• Shorter the relative deadline, higher the priority
2.A Earliest Deadline First (EDF)
• Dynamic Priority Scheduling
• Priorities are assigned according to deadlines:
– Earlier deadline, higher priority
– Later deadline, lower priority
• The first and most widely used dynamic
priority-driven scheduling algorithm.
• Effective for both preemptive and non-preemptive
scheduling.
Two Periodic Tasks
• Execution profile of two periodic tasks
– Process A
• Arrives: 0, 20, 40, ...
• Execution time: 10 each period
• End by: 20, 40, 60, ...
– Process B
• Arrives: 0, 50, 100, ...
• Execution time: 25 each period
• End by: 50, 100, 150, ...
• Question: Is there enough time for the execution of the two periodic
tasks?
Five Periodic Tasks
• Execution profile of five periodic tasks

Process  Arrival Time  Execution Time  Starting Deadline
A        10            20              110
B        20            20              20
C        40            20              50
D        50            20              90
E        60            20              70
2.B Least Laxity First (LLF)
• Dynamic preemptive scheduling with dynamic priorities
• Laxity: the difference between the time until a task's
completion deadline and its remaining processing time
requirement.
• A laxity is assigned to each task in the system, and minimum-
laxity tasks are executed first.
• Larger overhead than EDF due to a higher number of context
switches caused by laxity changes at run time
– Less studied than EDF for this reason
Least Laxity First (continued)
• LLF considers the execution time of a task, which EDF does
not.
• LLF assigns higher priority to the task with the least laxity.
• A task with zero laxity must be scheduled right away and
executed without preemption, or it will fail to meet its
deadline.
• A negative laxity indicates that the task will miss its
deadline, no matter when it is picked up for execution.
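The laxity computation itself can be sketched as follows (the method and class names are illustrative); the three cases below correspond to positive, zero, and negative laxity:

```java
public class LaxityDemo {

    // laxity = deadline - current time - remaining execution time
    static double laxity(double deadline, double now, double remaining) {
        return deadline - now - remaining;
    }

    public static void main(String[] args) {
        System.out.println(laxity(10, 2, 3));  // 5.0: task still has slack
        System.out.println(laxity(10, 8, 2));  // 0.0: must run now, without preemption
        System.out.println(laxity(10, 9, 3));  // -2.0: deadline will be missed regardless
    }
}
```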
Least Laxity First Example
Maximum Urgency First Algorithm
• This algorithm is a combination of fixed and dynamic priority
scheduling, also called mixed priority scheduling.
• With this algorithm, each task is given an urgency which is
defined as a combination of two fixed priorities (criticality and
user priority) and a dynamic priority that is inversely
proportional to the laxity.
• The MUF algorithm assigns priorities in two phases
• Phase One concerns the assignment of static priorities to
tasks
• Phase Two deals with the run-time behavior of the MUF
scheduler
The first phase consists of these steps :
1) It sorts the tasks from the shortest period to the longest
period. Then it defines the critical set as the first N tasks such
that the total CPU load factor does not exceed 100%. These
tasks are guaranteed not to fail even during a transient
overload.
2) All tasks in the critical set are assigned high criticality. The
remaining tasks are considered to have low criticality.
3) Every task in the system is assigned an optional unique user
priority
3.A Maximum Urgency First Algorithm phase 1
In the second phase, the MUF scheduler follows an algorithm
to select a task for execution.
This algorithm is executed whenever a new task arrives in
the ready queue.
The algorithm is as follows:
1) If there is only one highly critical task, pick it up and execute
it.
2) If there is more than one highly critical task, select the one
with the highest dynamic priority. Here, the task with the least
laxity is considered to be the one with the highest priority.
3) If there is more than one task with the same laxity, select the
one with the highest user priority.
Maximum Urgency First Algorithm phase 2
Scheduling Algorithms in RTOS
1. Clock Driven Scheduling
2. Weighted Round Robin Scheduling
3. Priority Scheduling
(Greedy / List / Event Driven)
Scheduling Algorithms in RTOS (contd)
• Clock Driven
– All parameters about jobs (release time/
execution time/deadline) known in advance.
– Schedule can be computed offline or at some
regular time instances.
– Minimal runtime overhead.
– Not suitable for many applications.
Scheduling Algorithms in RTOS (contd)
• Weighted Round Robin
– Jobs scheduled in FIFO manner
– Time quantum given to a job is proportional to its weight
– Example use : High speed switching network
• QOS guarantee.
– Not suitable for precedence-constrained jobs.
• If Job A can run only after Job B, there is no point in giving a
time quantum to Job A before Job B has finished.
Scheduling Algorithms in RTOS (contd)
• Priority Scheduling
(Greedy/List/Event Driven)
– Processor never left idle when there are ready
tasks
– Processor allocated to processes according to
priorities
– Priorities
• static - at design time
• Dynamic - at runtime
Priority Scheduling
• Earliest Deadline First (EDF)
– Process with earliest deadline given highest priority
• Least Slack Time First (LSF)
– slack = relative deadline – execution left
• Rate Monotonic Scheduling (RMS)
– For periodic tasks
– A task's priority is inversely proportional to its period
Virtual-Orientation-on-the-Administration-of-NATG12-NATG6-and-ELLNA.pdfErwinPantujan2
 
Earth Day Presentation wow hello nice great
Earth Day Presentation wow hello nice greatEarth Day Presentation wow hello nice great
Earth Day Presentation wow hello nice greatYousafMalik24
 
Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17
Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17
Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17Celine George
 
Procuring digital preservation CAN be quick and painless with our new dynamic...
Procuring digital preservation CAN be quick and painless with our new dynamic...Procuring digital preservation CAN be quick and painless with our new dynamic...
Procuring digital preservation CAN be quick and painless with our new dynamic...Jisc
 
ECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptx
ECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptxECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptx
ECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptxiammrhaywood
 
Transaction Management in Database Management System
Transaction Management in Database Management SystemTransaction Management in Database Management System
Transaction Management in Database Management SystemChristalin Nelson
 
FILIPINO PSYCHology sikolohiyang pilipino
FILIPINO PSYCHology sikolohiyang pilipinoFILIPINO PSYCHology sikolohiyang pilipino
FILIPINO PSYCHology sikolohiyang pilipinojohnmickonozaleda
 
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdfLike-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdfMr Bounab Samir
 
Choosing the Right CBSE School A Comprehensive Guide for Parents
Choosing the Right CBSE School A Comprehensive Guide for ParentsChoosing the Right CBSE School A Comprehensive Guide for Parents
Choosing the Right CBSE School A Comprehensive Guide for Parentsnavabharathschool99
 
ACC 2024 Chronicles. Cardiology. Exam.pdf
ACC 2024 Chronicles. Cardiology. Exam.pdfACC 2024 Chronicles. Cardiology. Exam.pdf
ACC 2024 Chronicles. Cardiology. Exam.pdfSpandanaRallapalli
 
ANG SEKTOR NG agrikultura.pptx QUARTER 4
ANG SEKTOR NG agrikultura.pptx QUARTER 4ANG SEKTOR NG agrikultura.pptx QUARTER 4
ANG SEKTOR NG agrikultura.pptx QUARTER 4MiaBumagat1
 
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️9953056974 Low Rate Call Girls In Saket, Delhi NCR
 

Dernier (20)

4.16.24 21st Century Movements for Black Lives.pptx
4.16.24 21st Century Movements for Black Lives.pptx4.16.24 21st Century Movements for Black Lives.pptx
4.16.24 21st Century Movements for Black Lives.pptx
 
ENGLISH6-Q4-W3.pptxqurter our high choom
ENGLISH6-Q4-W3.pptxqurter our high choomENGLISH6-Q4-W3.pptxqurter our high choom
ENGLISH6-Q4-W3.pptxqurter our high choom
 
FINALS_OF_LEFT_ON_C'N_EL_DORADO_2024.pptx
FINALS_OF_LEFT_ON_C'N_EL_DORADO_2024.pptxFINALS_OF_LEFT_ON_C'N_EL_DORADO_2024.pptx
FINALS_OF_LEFT_ON_C'N_EL_DORADO_2024.pptx
 
Karra SKD Conference Presentation Revised.pptx
Karra SKD Conference Presentation Revised.pptxKarra SKD Conference Presentation Revised.pptx
Karra SKD Conference Presentation Revised.pptx
 
AUDIENCE THEORY -CULTIVATION THEORY - GERBNER.pptx
AUDIENCE THEORY -CULTIVATION THEORY -  GERBNER.pptxAUDIENCE THEORY -CULTIVATION THEORY -  GERBNER.pptx
AUDIENCE THEORY -CULTIVATION THEORY - GERBNER.pptx
 
Science 7 Quarter 4 Module 2: Natural Resources.pptx
Science 7 Quarter 4 Module 2: Natural Resources.pptxScience 7 Quarter 4 Module 2: Natural Resources.pptx
Science 7 Quarter 4 Module 2: Natural Resources.pptx
 
YOUVE GOT EMAIL_FINALS_EL_DORADO_2024.pptx
YOUVE GOT EMAIL_FINALS_EL_DORADO_2024.pptxYOUVE GOT EMAIL_FINALS_EL_DORADO_2024.pptx
YOUVE GOT EMAIL_FINALS_EL_DORADO_2024.pptx
 
Virtual-Orientation-on-the-Administration-of-NATG12-NATG6-and-ELLNA.pdf
Virtual-Orientation-on-the-Administration-of-NATG12-NATG6-and-ELLNA.pdfVirtual-Orientation-on-the-Administration-of-NATG12-NATG6-and-ELLNA.pdf
Virtual-Orientation-on-the-Administration-of-NATG12-NATG6-and-ELLNA.pdf
 
Earth Day Presentation wow hello nice great
Earth Day Presentation wow hello nice greatEarth Day Presentation wow hello nice great
Earth Day Presentation wow hello nice great
 
Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17
Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17
Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17
 
Procuring digital preservation CAN be quick and painless with our new dynamic...
Procuring digital preservation CAN be quick and painless with our new dynamic...Procuring digital preservation CAN be quick and painless with our new dynamic...
Procuring digital preservation CAN be quick and painless with our new dynamic...
 
ECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptx
ECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptxECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptx
ECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptx
 
Transaction Management in Database Management System
Transaction Management in Database Management SystemTransaction Management in Database Management System
Transaction Management in Database Management System
 
FILIPINO PSYCHology sikolohiyang pilipino
FILIPINO PSYCHology sikolohiyang pilipinoFILIPINO PSYCHology sikolohiyang pilipino
FILIPINO PSYCHology sikolohiyang pilipino
 
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdfLike-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
 
Choosing the Right CBSE School A Comprehensive Guide for Parents
Choosing the Right CBSE School A Comprehensive Guide for ParentsChoosing the Right CBSE School A Comprehensive Guide for Parents
Choosing the Right CBSE School A Comprehensive Guide for Parents
 
ACC 2024 Chronicles. Cardiology. Exam.pdf
ACC 2024 Chronicles. Cardiology. Exam.pdfACC 2024 Chronicles. Cardiology. Exam.pdf
ACC 2024 Chronicles. Cardiology. Exam.pdf
 
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
 
ANG SEKTOR NG agrikultura.pptx QUARTER 4
ANG SEKTOR NG agrikultura.pptx QUARTER 4ANG SEKTOR NG agrikultura.pptx QUARTER 4
ANG SEKTOR NG agrikultura.pptx QUARTER 4
 
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
 

Real-Time Scheduling Algorithms

  • 1. Multitasking and Process Management in Embedded Systems: Real-Time Scheduling Algorithms
  • 2.
  • 5. Flynn’s Classification Of Computer Architectures • In 1966, Michael Flynn proposed a classification for computer architectures based on the number of instruction streams and data streams (Flynn’s Taxonomy). • Flynn uses the stream concept for describing a machine's structure. • A stream simply means a sequence of items (data or instructions).
  • 6. Simple classification by Flynn (by number of instruction and data streams): > SISD - conventional > SIMD - data parallel, vector computing > MISD - systolic arrays > MIMD - very general, multiple approaches. Current focus is on the MIMD model, using general-purpose processors (no shared memory).
  • 7. Computer Architecture Classifications (Processor Organizations): Single Instruction, Single Data Stream (SISD): uniprocessor. Single Instruction, Multiple Data Stream (SIMD): vector processor, array processor. Multiple Instruction, Single Data Stream (MISD). Multiple Instruction, Multiple Data Stream (MIMD): shared-memory multiprocessor (tightly coupled), multicomputer (loosely coupled).
  • 8. 8
  • 9. SISD: A Conventional Computer. Speed is limited by the rate at which the computer can transfer information internally. Diagram: Data Input -> Processor -> Data Output, driven by an instruction stream. Ex: PC, Macintosh, workstations.
  • 10. SISD (Single-Instruction stream, Single-Data stream). SISD corresponds to the traditional mono-processor (von Neumann computer). A single data stream is being processed by one instruction stream; i.e., a single-processor computer (uniprocessor) in which a single stream of instructions is generated from the program.
  • 11. 11 SISD where CU= Control Unit, PE= Processing Element, M= Memory
  • 12. SIMD (Single-Instruction stream, Multiple-Data streams). Each instruction is executed on a different set of data by different processors, i.e. multiple processing units of the same type process multiple data streams. This group is dedicated to array processing machines. Sometimes, vector processors can also be seen as a part of this group.
  • 13. 13 SIMD where CU= Control Unit, PE= Processing Element, M= Memory
  • 14. MISD (Multiple-Instruction streams, Single-Data stream). Each processor executes a different sequence of instructions. In the case of MISD computers, multiple processing units operate on one single data stream. In practice, this kind of organization has never been used.
  • 15. 15 MISD where CU= Control Unit, PE= Processing Element, M= Memory
  • 16. 16 MIMD MIMD (Multiple-Instruction streams, Multiple-Data streams) Each processor has a separate program. An instruction stream is generated from each program. Each instruction operates on different data. This last machine type builds the group for the traditional multi-processors. Several processing units operate on multiple-data streams.
  • 18. The MISD Architecture. More of an intellectual exercise than a practical configuration; few were built, and none are commercially available. Diagram: a single data input stream passes through Processors A, B and C, each driven by its own instruction stream, producing a single data output stream.
  • 19. 19
  • 20. SIMD Architecture. Ex: CRAY vector processing machines, Thinking Machines CM*. Ci <= Ai * Bi. Diagram: one instruction stream drives Processors A, B and C, each with its own data input and data output stream.
  • 21. MIMD Architecture. Unlike SISD and MISD, a MIMD computer works asynchronously: shared memory (tightly coupled) MIMD and distributed memory (loosely coupled) MIMD. Diagram: Processors A, B and C each have their own instruction stream and their own data input and output streams.
  • 22. Shared Memory MIMD machine. Communication: the source PE writes data to global memory and the destination PE retrieves it. Easy to build; conventional OSes for SISD can easily be ported. Limitation: reliability and expandability; a memory component or any processor failure affects the whole system, and increasing the number of processors leads to memory contention. Ex.: Silicon Graphics supercomputers. Diagram: Processors A, B and C attached through memory buses to a global memory system.
  • 23. Distributed Memory MIMD. Communication: IPC (interprocess communication) over a high-speed network; the network can be configured as a tree, mesh, cube, etc. Unlike shared-memory MIMD, it is easily/readily expandable and highly reliable (a CPU failure does not affect the whole system). Diagram: Processors A, B and C, each with its own memory system, connected by IPC channels.
  • 24. 24
  • 26. Figure 1. Single-core systems schedule tasks on 1 CPU to multitask
  • 27. Single-core systems schedule tasks on 1 CPU to multitask • Applications that take advantage of multithreading have numerous benefits, including the following: 1.More efficient CPU use 2.Better system reliability 3.Improved performance on multiprocessor computers
  • 29. Figure 2. Dual-core systems enable multitasking operating systems to execute two tasks simultaneously
  • 30. • The OS executes multiple applications more efficiently by splitting the different applications, or processes, between the separate CPU cores. The computer can spread the work - each core is managing and switching through half as many applications as before - and deliver better overall throughput and performance. In effect, the applications are running in parallel.
  • 32. Threads • Threads are lightweight processes, as the overhead of switching between threads is low • They can be spawned easily • The Java Virtual Machine spawns a thread, called the main thread, when your program is run
  • 33. Multi-Processor Computing System. Layered view: applications and programming paradigms on top, a threads interface, the microkernel of the operating system, and the hardware (multiple processors) below. Computing elements: process, processor and thread.
  • 34. Why do we need threads? • To enhance parallel processing • To increase responsiveness to the user • To utilize the idle time of the CPU • To prioritize work according to its priority
  • 35.
  • 36.
  • 37. Example • Consider a simple web server • The web server listens for requests and serves them • If the web server were not multithreaded, requests would be processed in a queue, increasing the response time; a bad request might even hang the server. • By implementing it in a multithreaded environment, the web server can serve multiple requests simultaneously, thus improving response time
  • 38.
  • 39. Synchronization • Synchronization prevents data corruption • Synchronization allows only one thread to perform an operation on an object at a time. • If multiple threads require access to an object, synchronization helps in maintaining consistency.
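The mutual exclusion described on this slide can be sketched in a few lines of Java. This is a minimal illustration (the class name SyncCounter and the iteration count are illustrative, not from the slides): two threads increment a shared counter, and the synchronized methods guarantee the final value is exact.

```java
// Two threads increment a shared counter; synchronized methods ensure
// only one thread mutates the count at a time, so no update is lost.
public class SyncCounter {
    private int count = 0;

    // Only one thread may execute this on a given instance at a time.
    public synchronized void increment() {
        count++;
    }

    public synchronized int get() {
        return count;
    }

    public static int runDemo(int perThread) throws InterruptedException {
        SyncCounter c = new SyncCounter();
        Runnable work = () -> {
            for (int i = 0; i < perThread; i++) c.increment();
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return c.get();  // always 2 * perThread thanks to synchronization
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runDemo(10000));
    }
}
```

Without the synchronized keyword, the two unsynchronized read-modify-write sequences could interleave and drop increments.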
  • 41. Creating Tasks and Threads
// Custom task class
public class TaskClass implements Runnable {
  ...
  public TaskClass(...) {
    ...
  }
  // Implement the run method in Runnable
  public void run() {
    // Tell system how to run custom thread
    ...
  }
  ...
}
// Client class
public class Client {
  ...
  public void someMethod() {
    ...
    // Create an instance of TaskClass
    TaskClass task = new TaskClass(...);
    // Create a thread
    Thread thread = new Thread(task);
    // Start the thread
    thread.start();
    ...
  }
  ...
}
  • 42. Figure 3. Dual-core system enables multithreading
  • 43. Multithreading - Uniprocessors. Concurrency vs. Parallelism. Concurrency: the number of simultaneous execution units is greater than the number of CPUs. Diagram: processes P1, P2 and P3 share a single CPU over time.
  • 44. Multithreading - Multiprocessors. Concurrency vs. Parallelism. Parallelism: the number of executing processes equals the number of CPUs. Diagram: P1, P2 and P3 each run on their own CPU over time.
  • 46. Thread Programming models: 1. The boss/worker model 2. The peer model 3. A thread pipeline. The scheduling algorithm is one of the most important parts of an embedded operating system; its performance influences the performance of the whole system.
  • 47. The boss/worker model. Diagram: a boss thread (main) receives the input stream and hands work to worker tasks (taskX, taskY, taskZ), which use program resources: files, databases, disks and special devices.
  • 49. A thread pipeline. Diagram: the input stream is processed by filter threads in successive stages (Stage 1 -> Stage 2 -> Stage 3), each stage using program resources: files, databases, disks and special devices.
  • 50. Applications that take advantage of multithreading have numerous benefits, including the following: • More efficient CPU use • Better system reliability • Improved performance on multiprocessor computers
  • 51. Thread State Diagram 51 A thread can be in one of five states: New, Ready, Running, Blocked, or Finished.
  • 52. Cooperation Among Threads. Conditions can be used to facilitate communications among threads. A thread can specify what to do under a certain condition. Conditions are objects created by invoking the newCondition() method on a Lock object. Once a condition is created, you can use its await(), signal(), and signalAll() methods for thread communication. The await() method causes the current thread to wait until the condition is signaled. The signal() method wakes up one waiting thread, and the signalAll() method wakes up all waiting threads. «interface» java.util.concurrent.Condition: +await(): void (causes the current thread to wait until the condition is signaled); +signal(): void (wakes up one waiting thread); +signalAll(): void (wakes up all waiting threads).
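The await()/signal() pattern on this slide can be sketched as a one-value handoff. This is a minimal example (the class ConditionDemo and the value 42 are illustrative): a consumer waits on a condition until a producer stores a value and signals.

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// A consumer blocks in await() until a producer sets a value
// and wakes it with signal().
public class ConditionDemo {
    private final Lock lock = new ReentrantLock();
    private final Condition ready = lock.newCondition();
    private Integer value = null;

    public void put(int v) {
        lock.lock();
        try {
            value = v;
            ready.signal();        // wake up one waiting thread
        } finally {
            lock.unlock();
        }
    }

    public int take() throws InterruptedException {
        lock.lock();
        try {
            while (value == null)  // guard against spurious wakeups
                ready.await();     // wait until the condition is signaled
            return value;
        } finally {
            lock.unlock();
        }
    }

    public static int runDemo() throws InterruptedException {
        ConditionDemo d = new ConditionDemo();
        Thread producer = new Thread(() -> d.put(42));
        producer.start();
        int got = d.take();
        producer.join();
        return got;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runDemo());
    }
}
```

Note the while loop around await(): conditions must always be re-checked after waking, as the slide's contract for await() permits spurious wakeups.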
  • 53. • In many applications, you make synchronous calls to resources, such as instruments. These instrument calls often take a long time to complete. In a single-threaded application, a synchronous call effectively blocks, or prevents, any other task within the application from executing until the operation completes. Multithreading prevents this blocking. • While the synchronous call runs on one thread, other parts of the program that do not depend on this call run on different threads. Execution of the application progresses instead of stalling until the synchronous call completes. In this way, a multithreaded application maximizes the efficiency of the CPU, because it does not idle if any thread of the application is ready to run.
  • 56. Semaphores (Optional). Semaphores can be used to restrict the number of threads that access a shared resource. Before accessing the resource, a thread must acquire a permit from the semaphore. After finishing with the resource, the thread must return the permit to the semaphore.
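The acquire/release discipline described on this slide can be sketched with java.util.concurrent.Semaphore. This is an illustrative demo (the class SemaphoreDemo, thread count and permit count are made up): eight threads contend for two permits, and the observed peak concurrency never exceeds the permit count.

```java
import java.util.concurrent.Semaphore;

// A semaphore with N permits restricts how many threads hold the
// "resource" at once; each thread acquires before use, releases after.
public class SemaphoreDemo {
    public static int runDemo(int threads, int permits) throws InterruptedException {
        Semaphore sem = new Semaphore(permits);
        int[] inUse = {0};     // threads currently inside the resource
        int[] maxInUse = {0};  // peak concurrency ever observed
        Object gauge = new Object();
        Thread[] pool = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            pool[i] = new Thread(() -> {
                try {
                    sem.acquire();                 // take a permit
                    synchronized (gauge) {
                        inUse[0]++;
                        maxInUse[0] = Math.max(maxInUse[0], inUse[0]);
                    }
                    Thread.sleep(10);              // "use" the resource
                    synchronized (gauge) { inUse[0]--; }
                    sem.release();                 // return the permit
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            pool[i].start();
        }
        for (Thread t : pool) t.join();
        return maxInUse[0];  // never exceeds the permit count
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runDemo(8, 2));
    }
}
```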
  • 57. Thread Priority • Each thread inherits the default priority of the thread that spawned it. The priority of the main thread is Thread.NORM_PRIORITY. You can reset the priority using setPriority(int priority). • Some constants for priorities include Thread.MIN_PRIORITY: 1, Thread.NORM_PRIORITY: 5, Thread.MAX_PRIORITY: 10
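The priority constants above can be exercised directly. A small sketch (the class PriorityDemo is illustrative): a new thread starts with the priority inherited from its spawner, and setPriority() changes it.

```java
// A spawned thread inherits its parent's priority until
// setPriority() changes it; priorities range from 1 to 10.
public class PriorityDemo {
    public static int[] runDemo() {
        Thread worker = new Thread(() -> { /* task body elided */ });
        int inherited = worker.getPriority();      // same as the spawning thread
        worker.setPriority(Thread.MAX_PRIORITY);   // raise to 10
        return new int[] { inherited, worker.getPriority() };
    }

    public static void main(String[] args) {
        int[] p = runDemo();
        System.out.println("inherited=" + p[0] + " raised=" + p[1]);
    }
}
```

Note that priority is only a hint to the scheduler; the JVM maps it onto OS priorities, so higher priority does not guarantee earlier execution.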
  • 58. Deadlock. Sometimes two or more threads need to acquire the locks on several shared objects. This can cause deadlock, in which each thread has the lock on one of the objects and is waiting for the lock on the other object. Consider the scenario with two threads and two objects, as shown. Thread 1 acquires a lock on object1 and Thread 2 acquires a lock on object2. Now Thread 1 is waiting for the lock on object2 and Thread 2 for the lock on object1. The two threads wait for each other to release the locks they need, and neither can continue to run.
Thread 1:
synchronized (object1) {
  // do something here
  synchronized (object2) {
    // do something here
  }
}
Thread 2:
synchronized (object2) {
  // do something here
  synchronized (object1) {
    // do something here
  }
}
Thread 1 waits for Thread 2 to release the lock on object2; Thread 2 waits for Thread 1 to release the lock on object1.
  • 59. 59 Deadlocks lock( M1 ); lock( M2 ); lock( M2 ); lock( M1 ); Thread 1 Thread 2 Thread1 is waiting for the resource(M2) locked by Thread2 and Thread2 is waiting for the resource (M1) locked by Thread1
  • 60. Preventing Deadlock 60 Deadlock can be easily avoided by using a simple technique known as resource ordering. With this technique, you assign an order on all the objects whose locks must be acquired and ensure that each thread acquires the locks in that order. For the example, suppose the objects are ordered as object1 and object2. Using the resource ordering technique, Thread 2 must acquire a lock on object1 first, then on object2. Once Thread 1 acquired a lock on object1, Thread 2 has to wait for a lock on object1. So Thread 1 will be able to acquire a lock on object2 and no deadlock would occur.
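The resource-ordering technique above can be sketched concretely. In this minimal example (the class ResourceOrdering and lock names are illustrative), both threads acquire lockA strictly before lockB, so the circular wait from the deadlock slides cannot form and both threads always finish.

```java
// Resource ordering: every thread acquires lockA before lockB,
// so no circular wait (and hence no deadlock) is possible.
public class ResourceOrdering {
    private static final Object lockA = new Object();
    private static final Object lockB = new Object();

    public static int runDemo() throws InterruptedException {
        int[] done = {0};
        Runnable ordered = () -> {
            synchronized (lockA) {        // always acquired first
                synchronized (lockB) {    // always acquired second
                    done[0]++;            // serialized by the two locks
                }
            }
        };
        Thread t1 = new Thread(ordered);
        Thread t2 = new Thread(ordered);
        t1.start(); t2.start();
        t1.join(); t2.join();             // both joins return: no deadlock
        return done[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runDemo());
    }
}
```

If one thread instead took lockB first, the two threads could each hold one lock and wait forever for the other, exactly as in the slide-58 scenario.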
  • 61. Process scheduling in embedded systems
  • 62.
  • 63. Process management - three key requirements: 1. The timing behavior of the OS must be predictable: for the services of the OS, there must be an upper bound on the execution time! 2. The OS must manage timing and scheduling; the OS possibly has to be aware of task deadlines (unless scheduling is done off-line). 3. The OS must be fast
  • 64.
  • 65. Scheduling States: Simplified View of Thread State Transitions. States: RUNNABLE, ACTIVE, SLEEPING, STOPPED; transitions: Stop, Continue, Preempt, Sleep, Wakeup.
  • 66.
  • 67.
  • 68. Hard and Soft Real Time Systems (Operational Definition) • Hard Real Time System - Validation by provably correct procedures or extensive simulation that the system always meets the timing constraints • Soft Real Time System - Demonstration of jobs meeting some statistical constraints suffices. • Example - Multimedia System - 25 frames per second on average
  • 69. Real-Time Systems (continued) • Soft RTS: meets timing constraints most of the time; it is not necessary that every time constraint be met, and some deadline misses are tolerated. • Hard RTS: meets all time constraints exactly; every resource management decision must happen in the correct order to meet the time constraints, and no deadline miss is allowed.
  • 70. Role of an OS in Real Time Systems 1. Standalone Applications – Often no OS involved – Micro controller based Embedded Systems • Some Real Time Applications are huge & complex – Multiple threads – Complicated Synchronization Requirements – Filesystem / Network / Windowing support – OS primitives reduce the software design time
  • 71. Tasks Categories • Invocation = triggered operations – Periodic (time-triggered) – Aperiodic (event-triggered) • Creation – Static – Dynamic • Multi-Tasking System – Preemptive: higher-priority process taking control of the processor from a lower-priority – Non-Preemptive : Each task can control the CPU for as long as it needs it.
  • 72. Features of RTOS’s 1. Scheduling. 2. Resource Allocation. 3. Interrupt Handling. 4. Other issues like kernel size.
  • 73. 1. Scheduling in RTOS • More information about the tasks is known - number of tasks - resource requirements - release times - execution times - deadlines • Being a more deterministic system, better scheduling algorithms can be devised.
  • 74.
  • 75. Why do we need scheduling? • Each computation (task) we want to execute needs resources • Resources: processor, memory segments, communication, I/O devices etc. • The computations must be executed in a particular order (relative to each other and/or relative to time) • The possible ordering is either completely or statistically known (described) a priori • Scheduling: assignment of the processor to computations • Allocation: assignment of other resources to computations
  • 76. Real-Time Scheduling Algorithms: 1. Fixed (static) priority algorithms: Rate Monotonic scheduling, Deadline Monotonic scheduling. 2. Dynamic priority algorithms: Earliest Deadline First, Least Laxity First. 3. Hybrid algorithms: Maximum Urgency First.
  • 77. Scheduling Algorithm: Static vs. Dynamic. Static scheduling: all scheduling decisions are made at compile time; the temporal task structure is fixed; precedence and mutual exclusion are satisfied by the schedule (implicit synchronization); one solution is sufficient; any solution is a sufficient schedulability test. Benefits: simplicity
  • 78. Scheduling Algorithm Static vs. Dynamic • Dynamic Scheduling: – All scheduling decisions at run time. • Based upon set of ready tasks. • Mutual exclusion and synchronization enforced by explicit synchronization constructs. – Benefits • Flexibility. • Only actually used resources are claimed. – Disadvantages • Guarantees difficult to support • Computational resources required for scheduling
  • 79. Scheduling Algorithm Preemptive vs. Nonpreemptive • Preemptive Scheduling: – Event driven. • Each event causes interruption of running tasks. • Choice of running tasks reconsidered after each interruption. – Benefits: • Can minimize response time to events. – Disadvantages: • Requires considerable computational resources for scheduling
  • 80. Scheduling Algorithm Preemptive vs. Nonpreemptive • Nonpreemptive Scheduling: - Tasks remain active till completion; scheduling decisions are only made after task completion. - Benefits: reasonable when task execution times ~= task switching times; less computational resources needed for scheduling. - Disadvantages: can lead to starvation (deadlines not met), especially for real-time or high-priority tasks.
  • 81. 1.A Rate Monotonic scheduling • Priority assignment based on rates of tasks • Higher-rate task assigned higher priority • Schedulable utilization = 0.693 (Liu and Layland bound): U = Σ Ci / Ti, where Ci is the computation time and Ti is the release period of task i • If U < 0.693, schedulability is guaranteed • Tasks may be schedulable even if U > 0.693
  • 82. RM example
    Process  Execution Time  Period
    P1       1               8
    P2       2               5
    P3       2               10
The utilization will be: U = 1/8 + 2/5 + 2/10 = 0.725. The theoretical limit for 3 processes, under which we can conclude that the system is schedulable, is: 3(2^(1/3) - 1) ≈ 0.780. Since 0.725 < 0.780, the system is schedulable!
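The schedulability check on this slide can be reproduced in a few lines. A minimal sketch (the class RmCheck is illustrative): compute U = Σ Ci/Ti for the three tasks and compare it with the Liu-Layland bound n(2^(1/n) - 1).

```java
// Rate Monotonic schedulability test: total utilization U = sum(Ci/Ti)
// compared against the Liu-Layland bound n * (2^(1/n) - 1).
public class RmCheck {
    public static double utilization(double[] c, double[] t) {
        double u = 0;
        for (int i = 0; i < c.length; i++) u += c[i] / t[i];
        return u;
    }

    public static double liuLaylandBound(int n) {
        return n * (Math.pow(2.0, 1.0 / n) - 1);
    }

    public static void main(String[] args) {
        double[] c = {1, 2, 2};   // execution times of P1..P3
        double[] t = {8, 5, 10};  // periods of P1..P3
        double u = utilization(c, t);       // 0.725
        double bound = liuLaylandBound(3);  // about 0.780
        System.out.println(u <= bound);     // schedulable under RM
    }
}
```

Note the test is sufficient but not necessary: a task set with U above the bound may still be schedulable, as the previous slide states.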
  • 83. 1.B Deadline Monotonic scheduling • Priority assignment based on relative deadlines of tasks • Shorter the relative deadline, higher the priority
  • 84. 2.A Earliest Deadline First (EDF) • Dynamic priority scheduling • Priorities are assigned according to deadlines: - earlier deadline, higher priority - later deadline, lower priority • The first and most widely used dynamic priority-driven scheduling algorithm. • Effective for both preemptive and non-preemptive scheduling.
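The EDF selection rule above can be sketched in a few lines. This is an illustrative fragment (the Job class and its fields are made up, not from the slides): among the ready jobs, the dispatcher picks the one with the earliest absolute deadline.

```java
import java.util.Arrays;
import java.util.Comparator;

// EDF dispatch rule: among ready jobs, run the one whose
// absolute deadline is earliest.
public class EdfPicker {
    static final class Job {
        final String name;
        final int deadline;  // absolute deadline
        Job(String name, int deadline) { this.name = name; this.deadline = deadline; }
    }

    // Returns the name of the job EDF would run next.
    public static String pick(Job[] ready) {
        return Arrays.stream(ready)
                .min(Comparator.comparingInt(j -> j.deadline))
                .map(j -> j.name)
                .orElseThrow();
    }

    public static void main(String[] args) {
        Job[] ready = {
            new Job("A", 20),  // earliest deadline -> highest priority
            new Job("B", 50),
        };
        System.out.println(pick(ready));
    }
}
```

Because deadlines change as jobs arrive and complete, this choice is re-evaluated at every scheduling point, which is what makes EDF a dynamic-priority algorithm.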
  • 85. Two Periodic Tasks • Execution profile of two periodic tasks:
    Process A: arrives at 0, 20, 40, ...; execution time 10; must end by 20, 40, 60, ...
    Process B: arrives at 0, 50, 100, ...; execution time 25; must end by 50, 100, 150, ...
• Question: Is there enough time for the execution of the two periodic tasks?
  • 86.
  • 87. Five Periodic Tasks • Execution profile of five periodic tasks:
    Process  Arrival Time  Execution Time  Starting Deadline
    A        10            20              110
    B        20            20              20
    C        40            20              50
    D        50            20              90
    E        60            20              70
  • 88.
  • 89. 2.B Least Laxity First (LLF) • Dynamic preemptive scheduling with dynamic priorities • Laxity: the difference between the time until a task's completion deadline and its remaining processing time requirement. • A laxity is assigned to each task in the system, and minimum-laxity tasks are executed first. • Larger overhead than EDF due to the higher number of context switches caused by laxity changes at run time - less studied than EDF for this reason
  • 90. Least Laxity First (continued) • LLF considers the execution time of a task, which EDF does not. • LLF assigns higher priority to the task with the least laxity. • A task with zero laxity must be scheduled right away and executed without preemption, or it will fail to meet its deadline. • Negative laxity indicates that the task will miss its deadline no matter when it is picked up for execution.
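The laxity definition from the slides reduces to one arithmetic expression. A minimal sketch (the class Laxity and the sample numbers are illustrative): laxity = (deadline - current time) - remaining execution time, with zero meaning "run now" and negative meaning the deadline is already unreachable.

```java
// Laxity of a task: time to deadline minus remaining execution time.
// Zero laxity: must run immediately; negative: deadline will be missed.
public class Laxity {
    public static int laxity(int deadline, int now, int remaining) {
        return (deadline - now) - remaining;
    }

    public static void main(String[] args) {
        System.out.println(laxity(70, 60, 20));   // negative: miss is unavoidable
        System.out.println(laxity(80, 60, 20));   // zero: run without preemption
        System.out.println(laxity(110, 60, 20));  // positive slack remains
    }
}
```

An LLF scheduler recomputes this value for every ready task at each scheduling point and dispatches the minimum, which is exactly why it context-switches more often than EDF.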
  • 92. Maximum Urgency First Algorithm • This algorithm is a combination of fixed and dynamic priority scheduling, also called mixed priority scheduling. • With this algorithm, each task is given an urgency which is defined as a combination of two fixed priorities (criticality and user priority) and a dynamic priority that is inversely proportional to the laxity. • The MUF algorithm assigns priorities in two phases • Phase One concerns the assignment of static priorities to tasks • Phase Two deals with the run-time behavior of the MUF scheduler
  • 93. The first phase consists of these steps : 1) It sorts the tasks from the shortest period to the longest period. Then it defines the critical set as the first N tasks such that the total CPU load factor does not exceed 100%. These tasks are guaranteed not to fail even during a transient overload. 2) All tasks in the critical set are assigned high criticality. The remaining tasks are considered to have low criticality. 3) Every task in the system is assigned an optional unique user priority 3.A Maximum Urgency First Algorithm phase 1
  • 94. Maximum Urgency First Algorithm phase 2. In the second phase, the MUF scheduler follows an algorithm to select a task for execution. This algorithm is executed whenever a new task arrives in the ready queue. The algorithm is as follows: 1) If there is only one highly critical task, pick it up and execute it. 2) If there is more than one highly critical task, select the one with the highest dynamic priority. Here, the task with the least laxity is considered to be the one with the highest priority. 3) If there is more than one task with the same laxity, select the one with the highest user priority.
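Phase one of MUF, as described two slides back, can be sketched as a sort-and-accumulate pass. This is an illustrative fragment (the Task class, its fields and the sample numbers are made up): tasks are sorted from shortest to longest period, and the prefix whose cumulative CPU load stays at or below 100% becomes the highly critical set.

```java
import java.util.Arrays;
import java.util.Comparator;

// MUF phase one: sort tasks by period (shortest first) and mark as
// highly critical the prefix whose total CPU load does not exceed 100%.
public class MufPhaseOne {
    static final class Task {
        final String name;
        final double exec, period;
        boolean critical;
        Task(String name, double exec, double period) {
            this.name = name; this.exec = exec; this.period = period;
        }
    }

    public static Task[] assignCriticality(Task[] tasks) {
        Task[] sorted = tasks.clone();
        Arrays.sort(sorted, Comparator.comparingDouble(t -> t.period));
        double load = 0;
        for (Task t : sorted) {
            load += t.exec / t.period;
            t.critical = load <= 1.0;  // still inside the critical set
        }
        return sorted;
    }

    public static void main(String[] args) {
        Task[] ts = assignCriticality(new Task[] {
            new Task("T1", 2, 5),   // load 0.4
            new Task("T2", 2, 10),  // cumulative 0.6
            new Task("T3", 6, 12),  // cumulative 1.1 -> low criticality
        });
        for (Task t : ts) System.out.println(t.name + " critical=" + t.critical);
    }
}
```

Since the cumulative load only grows, every task after the first one that overflows 100% is also marked low criticality, matching step 2 of the slide.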
  • 95.
  • 96. Scheduling Algorithms in RTOS 1. Clock Driven Scheduling 2. Weighted Round Robin Scheduling 3. Priority Scheduling (Greedy / List / Event Driven)
  • 97. Scheduling Algorithms in RTOS (contd) • Clock Driven – All parameters about jobs (release time/ execution time/deadline) known in advance. – Schedule can be computed offline or at some regular time instances. – Minimal runtime overhead. – Not suitable for many applications.
  • 98. Scheduling Algorithms in RTOS (contd) • Weighted Round Robin - Jobs scheduled in FIFO manner - Time quantum given to jobs is proportional to their weight - Example use: high-speed switching networks • QoS guarantee. - Not suitable for precedence-constrained jobs. • If Job A can run only after Job B, there is no point in giving a time quantum to Job A before Job B completes.
  • 99. Scheduling Algorithms in RTOS (contd) • Priority Scheduling (Greedy/List/Event Driven) – Processor never left idle when there are ready tasks – Processor allocated to processes according to priorities – Priorities • static - at design time • Dynamic - at runtime
  • 100. Priority Scheduling • Earliest Deadline First (EDF) - process with earliest deadline given highest priority • Least Slack Time First (LSF) - slack = relative deadline - execution time left • Rate Monotonic Scheduling (RMS) - for periodic tasks - task priority inversely proportional to its period