6. COMPARISON
Parameter           User-Level Threads                 Kernel-Level Threads
1. Support          Managed without the kernel         Managed by the OS
2. Implementation   Through the many-to-many and       Through the one-to-one
                    many-to-one thread models          thread model
3. Examples         Depends on the application         Windows XP, Solaris 9, Linux
7. NEED FOR THREAD SCHEDULING
To exploit the power of parallelism in a multiprocessor.
Used for medium-grained parallelism.
10. ADVANTAGES OF LOAD SHARING
Load is distributed evenly across the processors
No centralized scheduler is required
The global queue can be organized and accessed using any uniprocessor scheduling scheme
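The load-sharing scheme above can be sketched as a single shared queue of ready threads that every idle processor pulls from. This is a minimal illustration, not an OS implementation; the class name `ReadyQueue` and the FIFO discipline are choices made for the sketch.

```python
from collections import deque
import threading

class ReadyQueue:
    """One global queue of ready threads, shared by all processors."""
    def __init__(self):
        self._q = deque()
        # A single lock serializes all access -- the same shared structure
        # the Mach refinement later tries to relieve.
        self._lock = threading.Lock()

    def add(self, thread_id):
        with self._lock:
            self._q.append(thread_id)

    def take(self):
        """Called by an idle processor to pick the next ready thread."""
        with self._lock:
            return self._q.popleft() if self._q else None

queue = ReadyQueue()
for t in ["T1", "T2", "T3"]:
    queue.add(t)

# Two idle processors each grab the next thread, so work spreads evenly
# without any centralized scheduler deciding the placement.
print(queue.take(), queue.take())  # T1 T2
```

Because every processor draws from the same queue, no thread waits while another processor sits idle, which is exactly the load-sharing advantage listed above.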
12. DEDICATED PROCESSOR ASSIGNMENT
When an application is scheduled, each of its threads is assigned to a processor that remains dedicated to it until the application completes.
Some processors may be left idle
There is no multiprogramming of processors
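The policy can be sketched as a simple partition of the processor set: each thread gets its own processor for the application's lifetime, and leftover processors simply stay idle. The function name `dedicate` and the error handling are invented for this sketch.

```python
def dedicate(processors, apps):
    """Assign each thread its own dedicated processor.
    `apps` maps application name -> number of threads. Returns the
    assignment plus the processors left idle (no multiprogramming:
    a dedicated processor is never shared or reassigned)."""
    free = list(processors)
    assignment = {}
    for app, nthreads in apps.items():
        if nthreads > len(free):
            raise RuntimeError(f"not enough processors for {app}")
        assignment[app] = [free.pop(0) for _ in range(nthreads)]
    return assignment, free  # `free` processors simply sit idle

assignment, idle = dedicate(range(8), {"A": 4, "B": 2})
print(assignment)  # {'A': [0, 1, 2, 3], 'B': [4, 5]}
print(idle)        # [6, 7] -- idle, as the slide notes
```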
13. Test results for a multiprocessor system with 16 processors
Speedup drops off when the number of threads exceeds the number of processors
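The drop-off can be illustrated with a toy model: useful parallelism is capped by the processor count, and each thread beyond that cap only adds switching and interaction overhead. The 5% per-excess-thread penalty below is an arbitrary illustrative figure, not measured data from the cited tests.

```python
def toy_speedup(threads, processors, overhead=0.05):
    """Toy model of the trend in the slide: speedup grows with thread
    count only up to the number of processors; beyond that, extra
    threads just add context-switch and interaction overhead."""
    base = min(threads, processors)          # at most one thread per CPU runs
    excess = max(0, threads - processors)    # threads that must time-share
    return base / (1 + overhead * excess)

for n in (8, 16, 24, 32):
    print(n, round(toy_speedup(n, 16), 2))
# Speedup peaks at 16 threads on 16 processors and then declines.
```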
14. DYNAMIC SCHEDULING
The number of threads in a process is altered dynamically by the application
The operating system adjusts the load to improve utilization
15. CASE STUDY: SOLARIS
Schedules threads based on priority.
Priority classes: Real Time, System, Interactive, Time Sharing
17. FEATURES OF PRIORITY IN SOLARIS
Priorities are altered dynamically
[Chart: time quantum (y-axis, 0-250) versus priority (x-axis); the quantum shrinks as priority rises]
Benefits:
Good response time for interactive processes
Good throughput
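The inverse relationship in the chart can be sketched as a dispatch table that maps a priority level to its time quantum. Solaris's time-sharing class does use a tunable dispatch table, but the specific priority levels and millisecond values below are made up to match the chart's shape, not taken from a real table.

```python
# Illustrative dispatch table: higher priority -> shorter time quantum.
# The shape follows the chart above; the exact values are invented.
DISPATCH_TABLE = {0: 200, 10: 160, 20: 120, 30: 80, 40: 40, 50: 20}

def time_quantum(priority):
    """Return the quantum (ms) for the highest table entry <= priority."""
    key = max(p for p in DISPATCH_TABLE if p <= priority)
    return DISPATCH_TABLE[key]

# Low-priority (CPU-bound) work gets long quanta for throughput;
# high-priority (interactive) work gets short quanta for response time.
print(time_quantum(5), time_quantum(45))  # 200 40
```

Giving interactive threads short quanta at high priority is what yields the two benefits listed above: they run soon (good response time), while long quanta for batch work keep context-switch overhead low (good throughput).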
18. CASE STUDY: LINUX
Uses a priority-based, preemptive scheduling algorithm.
Interactive tasks are assigned higher priority
Priority scheme: Real time [0-99], Nice [100-140]
22. CASE STUDY: WINDOWS XP
Uses a priority-based, preemptive scheduling algorithm.
The dispatcher uses a 32-level priority scheme:
Variable class [1-15], Real-time class [16-31]
23. Priorities are divided into classes
Each thread has a base priority
Priority is increased by the dispatcher
The increase depends on the operation the thread was waiting for
24. EXAMPLE
Thread 1: initial priority = 8, after increase = 11
Thread 2: initial priority = 9, after increase = 12
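The example above can be sketched as a boost applied when a variable-class thread leaves a wait. The boost values below are illustrative, not the dispatcher's real numbers; what the source states is only that the increase depends on the operation, and that variable-class priorities top out at 15.

```python
# Illustrative boost sizes by wait type (invented values; the point is
# that the dispatcher's increase depends on what the thread waited for).
BOOSTS = {"disk": 1, "keyboard": 3}

VARIABLE_CLASS_MAX = 15  # top of the variable class [1-15]

def boost(priority, wait_type):
    """Raise a variable-class thread's priority on release from a wait,
    clamped so it never leaves the variable class."""
    return min(priority + BOOSTS[wait_type], VARIABLE_CLASS_MAX)

print(boost(8, "keyboard"))   # 11, as in the slide's example
print(boost(9, "keyboard"))   # 12
print(boost(14, "keyboard"))  # clamped to 15
```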
Each thread has its own private storage area (used for DLLs), register set (the state of the processor), and stack (a user stack when running in user mode and a kernel stack in kernel mode).
As is clear from this, the threads of a process share the code and data sections.
What exactly is medium-grained parallelism? In one sentence: a single application is a collection of threads, and the threads usually interact frequently, affecting the performance of the entire application.
Processes are not assigned to a particular processor. A global queue of ready threads is maintained, and each processor, when idle, selects a thread from the queue.
Explain 2nd point: when a processor is available, the scheduling algorithm is run to select the next thread according to the desire of the programmer.
Add a point about the Mach OS: a refinement of the load-sharing technique is used in the Mach operating system [BLAC90, WEND89]. The operating system maintains a local run queue for each processor and a shared global run queue. The local run queue is used by threads that have been temporarily bound to a specific processor. A processor examines the local run queue first, to give bound threads absolute preference over unbound threads. As an example of the use of bound threads, one or more processors could be dedicated to running processes that are part of the operating system. The global queue may become a bottleneck when more than one processor looks for work at the same time.
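The Mach refinement described in the note can be sketched as a selection rule: a processor services its own local queue of bound threads before touching the shared global queue. This is a schematic of the policy only; the queue contents and names are invented.

```python
from collections import deque

def pick_next(local_q, global_q):
    """Mach-style refinement: the local run queue (threads bound to this
    processor) gets absolute preference over the shared global queue."""
    if local_q:
        return local_q.popleft()   # bound threads always win
    if global_q:
        return global_q.popleft()  # otherwise fall back to load sharing
    return None                    # nothing runnable: processor idles

local_q = deque(["bound_os_thread"])   # e.g. an OS thread bound here
global_q = deque(["T1", "T2"])

print(pick_next(local_q, global_q))  # bound_os_thread -- local first
print(pick_next(local_q, global_q))  # T1 -- local empty, take from global
```

Checking the local queue first also means most dispatch decisions avoid the shared global queue, easing the bottleneck the note mentions.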
There are two observations regarding this extreme strategy that indicate better-than-expected performance: (1) in a highly parallel system with tens or hundreds of processors, each of which represents a small fraction of the cost of the system, processor utilization is no longer an extremely important metric of effectiveness or performance; (2) total avoidance of process switching during the lifetime of a program should result in a substantial speedup of that program.
As seen from the diagram, the scheduler converts a thread's class priority into a global priority and then schedules the thread with the highest global priority first. Real-time processes are given the highest priority.
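The conversion the note describes can be sketched as each class mapping its class-local priority into a band of global priorities, with the real-time band on top. Solaris does order the classes this way, but the numeric band bases below are invented for the sketch.

```python
# Illustrative global-priority bands (real time on top, as in Solaris;
# the exact numeric ranges here are invented for the sketch).
CLASS_BASE = {"time_sharing": 0, "interactive": 0, "system": 60, "real_time": 100}

def global_priority(sched_class, class_priority):
    """Convert a class-local priority into a global priority."""
    return CLASS_BASE[sched_class] + class_priority

def schedule(threads):
    """Pick the (name, class, class_priority) tuple with the highest
    global priority -- the 'schedules the highest one foremost' rule."""
    return max(threads, key=lambda t: global_priority(t[1], t[2]))

threads = [("editor", "interactive", 55),
           ("daemon", "system", 10),
           ("audio", "real_time", 0)]
print(schedule(threads)[0])  # audio -- real time always wins
```

Because the real-time band starts above every other band's maximum, even a real-time thread with class priority 0 outranks all time-sharing and system threads.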
The numeric priority value and the time slice are inversely proportional: higher-priority (lower-numbered) tasks receive longer time slices. Interactive processes have a higher priority; CPU-bound processes have a lower priority.
Interactivity is determined from the sleep time of the task, i.e., how long it has been waiting for I/O. Tasks that are more interactive have longer sleep times.
A runnable task is considered eligible for execution as long as it has time remaining in its time quantum. Runnable tasks are maintained on a runqueue data structure that contains two priority arrays. On multiprocessors, each processor schedules the highest-priority task from its own runqueue. When the active array is exhausted, the two arrays are exchanged.
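The two-array mechanism in the note can be sketched as follows: tasks whose quantum runs out move to an expired array, and once the active array is empty the two arrays swap roles. This is a schematic of the policy, not the kernel's O(1) implementation; bitmap lookups and per-priority lists are replaced by a simple sort.

```python
class RunQueue:
    """Minimal sketch of a per-CPU runqueue with two priority arrays."""
    def __init__(self, tasks):
        self.active = list(tasks)   # tasks with quantum remaining
        self.expired = []           # tasks whose quantum ran out

    def next_task(self):
        if not self.active:
            # Active array exhausted: exchange the two arrays.
            self.active, self.expired = self.expired, self.active
        # Pick the highest-priority task (lower number = higher priority).
        self.active.sort(key=lambda t: t[1])
        return self.active.pop(0) if self.active else None

    def quantum_expired(self, task):
        """Task used up its quantum: ineligible until the next swap."""
        self.expired.append(task)

rq = RunQueue([("A", 100), ("B", 120)])
t = rq.next_task()          # A runs first (higher priority)
rq.quantum_expired(t)       # A's quantum runs out -> expired array
print(rq.next_task()[0])    # B -- A is no longer eligible
print(rq.next_task()[0])    # A -- arrays were exchanged, A runs again
```

Keeping expired tasks in a separate array is what guarantees every runnable task eventually runs: a high-priority task cannot monopolize the CPU past its quantum.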
Windows XP schedules threads using a priority-based, preemptive scheduling algorithm. Priorities are divided into classes according to the Win32 API. Each thread has a base priority. When a thread is released from a wait operation, its priority is boosted by the dispatcher.