Real Time Kernels and
Operating Systems
Unit 5 & 6
Introduction
• An embedded system involves a complex design.
• The design is made easier by decomposing it into several lower-level
modules called tasks.
• These tasks work together in an organized way to meet the
requirement; such a system is referred to as a multitasking system.
• Important aspects of multitasking design include
Exchanging/sharing data between tasks
Synchronizing tasks
Scheduling task execution
Sharing resources among the task.
• The software that provides the required coordination is called an
operating system.
• If the system has tight timing constraints, it is called a Real-Time
Operating System.
Tasks And Things
• To understand the concepts related to tasks, let us consider the
following example.
• We wish to invite a group of friends for an evening dinner with
music, a meal, and perhaps some philosophical discussion while dining.
• The list of requirements:
For our friends, it is important that everything be perfect.
• We want each of the dishes being prepared to finish at the same time so that they
are cooked to perfection and can all be presented at the table together.
Menu
• Gourmet meal
 Fish
 Meat
 Soup
 Sauce
 Fruit salad
 Vegetable curries
Execution
• The items on the menu can be prepared by one person, but it is very
difficult for a single person to do this alone, and it is time consuming
as well.
• To reduce the workload, we can appoint a few workers and divide the
work among them so that it is completed easily and on time.
• In addition, we can appoint a supervisor who gives instructions to the
workers as needed to complete the work on time.
• We can depict the meal preparation in a high-level UML activity
diagram as shown in Figure 5.1.
• The same concept is used in embedded systems, which have many
functions to perform under several constraints.
• The work can be simplified by decomposing a larger task into
several smaller tasks and using the CPU as a supervisor that takes
each task to completion.
Programs and Processes
• To operate an embedded system we need firmware, developed using
the instruction set of the microprocessor.
• The firmware is called a program; when the program is set for
execution on a CPU, it is called a process or task.
• For the successful execution of the task, the operating system has to
allocate the necessary resources such as a process stack, memory address
space, registers (through the CPU), a program counter, I/O ports, network
connections, file descriptors, and so on.
• When a program is set for execution, the contents of the registers
change and the control flow of the program may change; the
information related to the program during execution is called the process state.
The CPU is a resource
• For the execution of a task, the operating
system has to provide the required
resources, chiefly the attention of the CPU
to execute the firmware.
• The time taken for the completion of
execution is called its execution time.
• The duration from the time a task
enters the system until it terminates is
called its persistence.
• If there is only a single task in the system,
there is no contention for resources
and no restriction on how long it can run.
Figure: A Model of a single process
Multitasking
• Suppose a second task is added to the system.
• Potential resource contention problems arise, as there is only one CPU and
limited other resources.
• The problem is resolved by
Carefully managing how the resources are allocated to each task
Controlling how long each task can retain the resources.
• The main resource, the CPU, is given to one task for a short while and then to the
other.
• If the tasks share the system resources back and forth, each can get its
job finished.
• If the CPU is passed between the tasks quickly enough, it appears as if both
tasks are using it at the same time.
• The system models parallel operations by time-sharing a single processor.
• The execution time of each program is extended, but the operation
gives the appearance of simultaneous execution.
• Such a system is called a multitasking system.
• The tasks are said to be running concurrently.
• The concept can be extended to more than two tasks as shown in Figure
Figure: Multiple Processes
Setting a Schedule
• In a multiprocess system, in addition to the CPU,
the processes share other system
resources such as timers, I/O facilities, and
busses.
• It is an illusion that all of the tasks are
running simultaneously; in reality, at any
instant in time, only one process is actively
executing.
• That process is said to be in the run state.
• The other process(es) are in the
ready/waiting state.
• Such behaviour is illustrated in the state and
sequence diagrams.
Figure State Chart and Sequence Diagram
• One task will be running while the others are waiting for the CPU.
• Since the CPU has to be shared among several tasks, the problem
arises of deciding which task will be given the CPU, and when.
• To overcome this problem, a schedule is set up to specify under
what conditions, and for how long, each task will be given the CPU
and other resources.
• The criteria for deciding which task is to run next are collectively
called a scheduling strategy.
• Scheduling strategies generally fall into three categories
1. Multiprogramming in which the running task continues until it performs
an operation that requires waiting for an external event (e.g., waiting for
an I/O event or timer to expire).
2. Real-Time in which tasks with specified temporal deadlines are
guaranteed to complete before those deadlines expire. Systems using
such a scheme require a response to certain events within a well-
defined and constrained time.
3. Time sharing in which the running task is required to give up the CPU
so that another task may get a turn. Under a time-shared strategy, a
hardware timer is used to preempt the currently executing task and
return control to the operating system. Such a scheme permits one to
reliably ensure that each process is given a slice of time in which to
use the system.
Changing Context
• Context: the important information regarding the state of a task, such
as the values of its variables, the value of the program counter, etc.
• Each time the running task is stopped (preempted or blocked) and
the CPU is given to another task that is ready, a switch to the new
context is executed.
• A context switch first saves the state of the currently active task.
• If the task that is scheduled for execution next had been running
previously, its state is restored and it continues from where it had
left off.
• If the context were not saved, the task would have to start from its initial
state, which again would consume a significant amount of time.
Figure: A Basic Diagram of Possible Task States
Threads – Lightweight and heavyweight
• A task or process is characterized by a collection of resources that
are utilized to execute a program.
• The smallest subset of these resources that is necessary for the
execution of the program is called a thread.
• Sometimes this subset of resources is also called a lightweight
thread.
• The process itself is referred to as a heavyweight thread.
• A thread can belong to only one process, and a process without a
thread can do nothing.
A Single Thread
• The sequential execution of a set
of instructions through a task or
process in an embedded
application is called a thread of
execution or thread of control.
• The thread has a stack and status
information relevant to its state
and operation and a copy of the
physical registers.
• During the execution the thread
uses the code, data, CPU and
other resources that have been
allocated to the process.
• Figure shows a single task with
one thread of execution, referred
to as a single process–single thread
design.
Figure: Single process – single thread
• When we state that the
process is running,
blocked, ready, or
terminated in fact, we are
describing the different
states of the thread.
• If the embedded design is
intended to perform a wide
variety of operations with
minimal interaction, then it
is appropriate to allocate
one process to each major
function to be performed.
• Such a system is called a
multiprocess–single thread design.
Figure Multiprocess – single thread
Multiple threads
• Sometimes an embedded system
performs a single primary function.
During partitioning and functional
decomposition, that single primary
function can be divided into several
subtasks, each with its own
resources for execution; the CPU is
then passed among these subtasks
to accomplish the intended work.
• Each of the smaller jobs now has
its own thread of execution.
• Such a system is called a single
process–multithread design.
Figure: Single Process–Multiple Threads
• We can easily extend the design of the application to support
multiple processes.
• We can further decompose each process into multiple subtasks.
• Such a system is called a multiprocess–multithread design.
• An operating system that supports tasks with multiple threads is,
naturally, referred to as a multithreaded operating system.
Sharing Resources
• A process needs resources for its successful execution.
• In embedded systems we have the following types of design:
1. Single process–single thread: has only one process, and that
process runs forever.
2. A multiprocess–single thread: supports multiple simultaneously
executing processes; each process has only a single thread of
control.
3. A single process–multiple threads: supports only one process;
within the process, it has multiple threads of control.
4. A multiprocess–multiple threads: supports multiple processes,
and within each process there are multiple threads of control.
• In a multiprocess design, the resources must be allocated and
shared among the processes.
• The following are the important resources for any process
1. The code or firmware, the instructions.
2. The data that the code is manipulating.
3. The CPU and associated physical registers
4. A stack
5. Status information
• The first three items are shared among member threads.
• The last two are proprietary to each thread.
Memory Resource Management
• Depending on the architecture of its computation engine, an
embedded system may have a single memory or two separate
memories, called the von Neumann and Harvard architectures,
respectively.
• Every process in a system requires memory to store data and
firmware; hence we need memory management at different levels.
System Level Management
• When a process is created by the operating system, it is given a portion of
physical memory in which it should work.
• The set of addresses delimiting that code and data memory, proprietary to each
process, is called its address space.
• The address space will not be shared with any other peer processes.
• When multiple processes are concurrently executing in memory, a pointer or
stack error can result in overwriting of memory owned by other processes.
• Therefore system software must restrict the range of addresses that are
accessible to the executing process.
• A process (thread) trying to access memory outside its allowed range should be
immediately stopped before it damages memory belonging to other processes.
• One means by which such restrictions are enforced is through the concept of
privilege level.
• Processes are segregated into
Supervisor mode capability: able to access the entire memory space.
User mode capability: limits the subset of instructions that a process can
use.
• Processes with a low (user mode) privilege level are not allowed to perform
certain kinds of memory accesses or to execute certain instructions.
• When a process attempts to execute such restricted instructions, an
interrupt is generated and a supervisory program with a higher privilege
level decides how to respond.
• The higher (supervisor mode) privilege level is generally reserved for
supervisory or administration types of tasks that one finds delegated to the
operating system or other such software.
• Processes with such privilege have access to any firmware and can use any
instructions within the microprocessor’s instruction set.
Figure: Address Space Access Privileges
Process-Level Management
A process may create child processes.
• When doing so, the parent process may choose to give a subset of its
resources to each of the children.
• The children are separate processes, and each has its own address space,
data, status, and stack.
• The code portion of the address space is shared.
A process may create multiple threads.
• When doing so, the parent process shares most of its resources with each
of the threads.
• These are not separate processes but separate threads of execution within
the same process.
• Each thread will have its own stack and status information.
• The processes or tasks exist in separate address spaces.
• Therefore, one must use some form of messaging or shared variable for
intertask exchange.
• Processes have a stronger notion of encapsulation than threads since
each thread has its own CPU state but shares the code section, data
section, and task resources with peer threads.
• It is this sharing that gives threads a weaker notion of encapsulation.
Reentrant Code
• Child processes and their threads share the same firmware memory area.
• As a result, two different threads can be executing the same function at the
same time.
• Functions using only local variables are inherently reentrant. That is, they can
be simultaneously called and executed in two or more contexts.
• On the other hand, functions that use global variables, variables local to the
process, variables passed by reference, or shared resources are not reentrant.
• One must be careful to ensure that all accesses to any common resources are
coordinated.
• When designing the application, one must make certain that one thread cannot
corrupt the values of the variables in a second.
• Any shared function must be designed to be reentrant.
• It is good practice to make all functions reentrant.
• One never knows when a future modification to the design may need to share
an existing function.
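As an illustration of the distinction drawn above, consider the following C sketch. Both function names are hypothetical examples, not from the text: the first uses only its parameters and local variables and so is reentrant; the second keeps state in a static variable and so is not.

```c
#include <stdint.h>

/* Reentrant: uses only its parameters and locals, so two threads
   may execute it at the same time without interfering. */
int square_sum(int a, int b)
{
    int s = a + b;            /* local state only */
    return s * s;
}

/* NOT reentrant: the running total lives in a static (shared)
   variable, so concurrent callers can corrupt each other's result. */
int running_total(int x)
{
    static int total = 0;     /* shared across all callers */
    total += x;
    return total;
}
```

A second call to `running_total` observes the state left behind by the first, which is exactly the kind of hidden coupling that must be coordinated if such a function is shared between threads.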
Foreground/Background systems
• A set of tasks can be decomposed into two subsets called background tasks
and foreground tasks.
• Traditionally, tasks that interact with the user or other I/O devices form the
foreground set, and the remaining tasks form the background set.
• The interpretation is slightly modified in the embedded world:
• The foreground tasks are those initiated by an interrupt or by a real-time
constraint that must be met.
• They will be assigned the higher priority levels in the system.
• The background tasks are non-interrupt driven and are assigned the
lower priorities.
• Once started, the background task will typically run to completion;
however, they can be interrupted or preempted by any foreground task at
any time.
The Operating System
• The easiest way to view an operating system is from the perspective
of the services it can provide.
• To begin, an operating system must provide or support three specific
functions
Schedule task execution.
Dispatch a task to run.
Ensure communication and synchronization among tasks.
• The kernel is the smallest portion of the operating system that
provides these functions.
• The scheduler determines which task will run and when it will do so.
• The dispatcher performs the necessary operations to start the task.
• Intertask or interprocess communication is the mechanism for
exchanging data and information between tasks or processes on the
same machine or on different ones.
• In an embedded operating system, such functions are captured in
the following types of services.
Process or Task Management is responsible for
• The creation and deletion of user and system processes
• The suspension and resumption of such processes.
• The management of interprocess communication and of deadlocks.
Deadlocks arise when two or more tasks need a resource that is held by
some other task.
Memory Management is responsible for
• The tracking and control of which tasks are loaded into memory,
• Monitoring which parts of memory are being used and by whom,
administering dynamic memory if it is used
• Managing caching schemes.
I/O System Management is responsible for
• Interaction with a great variety of different devices. In complex
systems, such interaction occurs through a special piece of software
called a device driver.
• A common calling interface—an application programmer’s interface
(API)—permits the application software to interact with different
devices in a uniform way. In UNIX™, for example, everything looks like a file.
• The interaction between each of the devices and the users or tasks
uses caching and buffering of input and output transactions as
necessary.
File System Management is responsible for
• Creation, deletion, and management of files and directories.
• Routine backup of any data that is to be saved
• Emergency backup either as power is failing or as some other
catastrophic event is occurring to the system
System Protection
• Ensures the protection of data and resources in the context of
concurrent processes.
• Such a duty is more acute in the context of a von Neumann
machine.
Networking
• In the context of a distributed application, the operating system must
also take the responsibility of managing distributed intra system
communication and the remote scheduling of tasks
Command Interpretation
• In desktop computers, the operating system directly interacts with the
user and provides the interface to the user’s applications.
• In an embedded system, this task is implemented via a variety of
software drivers, supported by the OS, that in turn interact with the
hardware I/O devices.
• As commands and directives come into the system, they must be
parsed, checked for grammatical accuracy, and directed to the
target task
The Real-Time Operating System (RTOS)
• A real-time operating system (RTOS) is primarily an operating
system. In addition to the responsibilities already enumerated, this
special class of operating system ensures (among other things) that
(rigid) time constraints can be met.
• Often people misuse the term real-time to mean that the system
responds quickly.
• Such an interpretation is only partially correct. The key characteristic
of a RTOS is deterministic behavior.
• Deterministic behavior means that, given the same state and the same set
of inputs, the next state (and any associated outputs) will be the
same each time the control algorithm utilized by the system is
executed.
• There are two types of real-time systems:
Hard real-time: system delays are known or at least bounded; results are
returned within the specified timing bounds.
Soft real-time: ensures that critical tasks have priority over other tasks and
retain that priority until complete.
• The RTOS is commonly found in embedded applications because, if
such requirements are not met, the performance of the application is
inaccurate or compromised in some way.
• Such systems are often interacting with the physical environment
through sensors and various types of measurement devices.
• RTOS-based applications are frequently used in scientific
experiments, control systems, or other applications where missed
deadlines cannot be tolerated.
Operating System Architecture
• Most contemporary operating systems
are designed and implemented as a
hierarchy of what are called virtual
machines, as illustrated in Figure
• The only real machine in the
architecture is the microprocessor.
• Each layer uses the function/operations
and services of lower layers.
• The advantage of such an approach is
increased modularity.
Figure: Operating System
Virtual Machine Model
• In some architectures, the higher level
layers have access to lower levels through
system calls and hardware instructions.
• The existing calling interface between
levels is retained while providing access to
the physical hardware below.
• With such capability, an interface can be
made to appear as if it is a machine
executing a specific set of instructions as
defined by the API.
• The idea can be logically extended so as to
create the illusion that the tasks at each
level are running on its own machine.
• Each level in such a model is called a
virtual machine.
Figure: Typical High-Level OS arch
Tasks And Task Control Blocks
Tasks
• A task or process simply identifies a job that is to be done within an
embedded application.
• More specifically, it is a set of software (firmware) instructions, collected
together, that are designed and executed to accomplish that job.
• An embedded application is thus nothing more than a collection of such
jobs.
• How and when each is executed is determined by the schedule and the
dispatch algorithms;
• How and what data are acted upon by the task is specified by the intertask
communication scheme.
• The performance of each of these three operations determines the
robustness and quality of the design.
The Task Control Block
• In a tasked-based approach, each process is represented in the operating
system by a data structure called a task control block (TCB),also known as a
process control block.
• The TCB contains all the important information about the task such as:
Pointer (for linking the TCB to various queues)
Process ID and state
Program counter
CPU registers
Scheduling information (priorities and pointers to scheduling queues)
Memory management information (tag tables and cache information)
Accounting information (time limits or time and resources used)
I/O status information (resources allocated or open files)
• TCB allocation may be static or dynamic.
• Static allocation is typically used in embedded systems with no memory
management.
• There are a fixed number of task control blocks; the memory is allocated at
system generation time and placed in a dormant or unused state.
• When a task is initiated, a TCB is created and the appropriate information
is entered.
• The TCB is then placed into the ready state by the scheduler.
• From the ready state, it will be moved to the execute state by the dispatcher.
• When a task terminates, the associated TCB is returned to the dormant
state.
• With a fixed number of TCBs, no runtime memory management is
necessary.
• One must be cautious, however, not to exhaust the supply of TCBs.
Dynamic allocation of TCB
• variable number of task control blocks can be allocated from the heap at
runtime.
• When a task is created, the TCB is created, initialized, and placed into the
ready state and scheduled by the scheduler.
• From the ready state, it will be moved to the execute state and given to
the CPU by dispatcher.
• When a task is terminated, the TCB memory is returned to heap storage.
• With a dynamic allocation, heap management must be supported.
• Dynamic allocation suggests an unlimited supply of TCBs.
• However, the typical embedded application has limited memory; allocating
too many TCBs can exhaust the supply.
• A dynamic memory allocation scheme is generally too expensive for
smaller embedded systems.
• When a task enters the system, it will typically be placed into a
queue called the Entry Queue or Job Queue.
• The easiest and most flexible way to implement such a queue is to
utilize a linked list as the underlying data structure.
• Thus, the last entries in the TCB hold the pointers to the preceding
and succeeding TCBs in the queue.
• One certainly could use an array data type as well.
• In C, the TCB is implemented
as a struct containing
pointers to all relevant
information.
• Because every TCB must have
the same layout regardless of
the task it describes, the data
pointers are all void* pointers.
• The skeletal structure for a
typical TCB identifying the
essential elements, the task,
and an example set of task
data are given in the C
declarations.
• The first entry is a pointer to a function—taskPtr.
• That function embodies the functionality associated with the task.
• The function’s parameter list comprises a single argument of type void*.
• Because we do not wish to place any restrictions on the kinds of information
passed into the task, and because we do not want to force each task to take the
same kinds of data, we utilize a struct as the means through which to pass the
data into the task.
• To satisfy the requirement that all TCBs must look alike and yet be able to retain
flexibility on what data is passed into the task, the type information associated
with the data struct is removed by referencing it through a void* pointer.
• Within the task itself, the pointer must be cast back to the original type before it
can be dereferenced to get the data.
• Each task will have its own stack.
• The third entry in the TCB is a pointer to that stack.
• The fourth entry gives the priority for the task.
• The fifth and sixth entries are pointers used to link the TCB to the next and
previous TCBs in any of the aforementioned queues.
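The C declarations referred to above are not reproduced in these notes; a minimal sketch matching the six entries just described might look as follows. The type and field names (`TCB`, `taskPtr`, `MyTaskData`, etc.) are hypothetical.

```c
#include <stddef.h>

/* Skeletal TCB: task function pointer, untyped task data, stack
   pointer, priority, and links to the next/previous TCB in a queue. */
typedef struct TCB {
    void        (*taskPtr)(void *taskDataPtr); /* the task's code        */
    void         *taskDataPtr;                 /* task data, type erased */
    void         *stackPtr;                    /* the task's own stack   */
    unsigned int  priority;                    /* scheduling priority    */
    struct TCB   *nextPtr;                     /* link to next TCB       */
    struct TCB   *prevPtr;                     /* link to previous TCB   */
} TCB;

/* Example task data; inside the task the void* must be cast back
   to the original type before it can be dereferenced. */
typedef struct { int input; int result; } MyTaskData;

void myTask(void *taskDataPtr)
{
    MyTaskData *d = (MyTaskData *)taskDataPtr; /* recover the type */
    d->result = d->input * 2;
}
```

Every TCB has the same layout, yet each task receives its own data through the `void*`; the cast inside `myTask` is what restores the type information that the TCB deliberately discards.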
A Simple Kernel
• Consider a simple kernel with three simple jobs to be scheduled and
performed:
1. Bring in some data.
2. Perform a computation on the data.
3. Display the data.
• The initial example will be a simple queue of functions operating on shared
data.
• In this example, an array will be used as the underlying data type of the
queue.
• The system will run forever, and each task will be scheduled and executed
in turn.
• An important characteristic of such an implementation is that each task will
run to completion (no preemption) before another is allowed to run.
Figure: Three Asynchronous Tasks Sharing a Common Data Buffer
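The scheme described above can be sketched in C as a fixed array of function pointers dispatched in turn; the task names and shared variable (`getData`, `compute`, `display`, `gData`) are hypothetical stand-ins for the three jobs.

```c
#include <stdio.h>

/* Shared data buffer that the three tasks operate on. */
static int gData;

/* The three jobs: bring in data, compute on it, display it. */
void getData(void) { gData = 10; }              /* stand-in for real input */
void compute(void) { gData = gData * gData; }
void display(void) { printf("result: %d\n", gData); }

/* The "kernel": an array used as a queue of tasks, each dispatched
   in turn and run to completion (no preemption) before the next. */
void (*taskQueue[])(void) = { getData, compute, display };
enum { NUM_TASKS = sizeof taskQueue / sizeof taskQueue[0] };

void runKernel(int cycles)                      /* forever in a real system */
{
    for (int c = 0; c < cycles; c++)
        for (int i = 0; i < NUM_TASKS; i++)
            taskQueue[i]();                     /* dispatch the next task */
}
```

Because each function runs to completion before the next is called, no context saving is needed; that simplicity is the defining trait of this non-preemptive design.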
Interrupts revisited
• An interrupt is a signal that causes the microprocessor to stop what it
is doing and service a request.
• Interrupts may originate inside or outside the processor.
• There are different types of interrupts, and for each an interrupt
service routine (ISR) must be written.
• Each routine must be reachable through a particular place in memory called
the interrupt vector table; a separate range of addresses is allocated
for each interrupt, so all interrupt service routines must be placed
appropriately.
• As there may be many interrupts, the interrupt traffic in a
microprocessor must be managed using different strategies.
• Control specifies the ability of the system to accept or ignore interrupts.
• The highest level of control is provided by enable and disable instructions.
• The enable instruction permits an interrupt to be recognized by the system.
• The disable instruction does the opposite.
• The second level of control is implemented through masking.
• It permits one to selectively listen to or ignore individual interrupts.
• Typically, the microprocessor supports a mask register with 1 bit associated with
each interrupt.
• If the mask bit is a logical 1, the associated interrupt will be recognized when it
occurs.
• Similarly, when the bit is a logical 0, the interrupt will be ignored.
• If masking is supported, normally at least one of the interrupts will be designated
as nonmaskable.
• That is, that interrupt must always be listened to and responded to.
• Generally nonmaskable interrupts are associated with system-level functionality
and are often disaster management tools.
• The third level of control assigns a priority to each interrupt.
• Higher priority interrupts can interrupt those with lower priority, but not vice
versa.
• In most cases, the priority of each interrupt is set by the microprocessor
manufacturer.
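The masking level of control described above amounts to simple bit manipulation on a mask register. The following C sketch models a hypothetical 8-bit mask register in software; real hardware exposes such a register at a device-specific address.

```c
#include <stdint.h>

/* Hypothetical 8-bit interrupt mask register: bit n set to logical 1
   means interrupt n will be recognized, logical 0 means ignored. */
static uint8_t intMask;

void enableInt(int n)  { intMask |= (uint8_t)(1u << n); }   /* unmask */
void disableInt(int n) { intMask &= (uint8_t)~(1u << n); }  /* mask   */

/* An interrupt is serviced only if its mask bit is a logical 1. */
int isEnabled(int n)   { return (intMask >> n) & 1u; }
```

A nonmaskable interrupt would simply have no corresponding bit in such a register: the hardware recognizes it regardless of the mask contents.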
• An interrupt is an asynchronous function or subroutine call.
• The mechanics of handling an interrupt are similar to those of a function call.
• Like the function call, under interrupt, the system state information is held on
the stack and restored on return.
• Consequently, as is also found with the function call, it is possible to overflow
the stack.
• If, under a priority-based scheme, interrupts at the same level are permitted to
interrupt an ISR that is already executing, the potential for stack overflow exists
and must be managed.
• The normal solution is to disable or mask the interrupts as appropriate to ensure
that overflow cannot occur.
• When working with interrupts and ISRs, always keep the routine as simple
and short as possible.
• An ISR with more than a dozen to 18 lines of code is probably too long.
• The objective is to respond to the interrupt, do the minimum amount of
work that absolutely needs to be done, and then exit the ISR; further
processing, if necessary, can be done in one of the tasks or foreground
processes.
Memory Management Revisited
• We have studied the different possible states of a task or process and
the importance of saving and restoring the context.
• Thus the context switch involves
1. Saving the existing context
2. Switching to the new one
3. Restoring the old one
• These three steps can consume a significant amount of time.
• When operating under real-time constraints, the time required to effect
the switch can be critical to the success or failure of the application.
• The information that must be saved from an existing context may be as
simple as the program counter and stack pointer for the original context or
as complex as the state of the system at the time the switch occurs.
• The typical minimum information to be saved includes:
The state of the CPU registers, including the program counter
The values of local variables
Status information
• The saving of such information can be accomplished in several different
ways.
Duplicate Hardware Context
Task Control Blocks
Stack
Duplicate Hardware Context
• A typical microprocessor has a limited number of general-purpose registers.
• When a context switch takes place, the values of the general-purpose registers
must be saved prior to the switch and then restored on return.
• Some microprocessors provide hardware support for context switching by
increasing the number of available general-purpose registers.
• At the software level, several different contexts can be defined and a subset
of the registers allocated to each.
• For example, with 64 general-purpose registers, 4 different contexts, each
with 16 general-purpose registers, can be defined.
• Thus, each context can have a set of registers called R0–R15 as illustrated in
Figure.
• When a switch occurs, rather than saving the contents of the current set of
registers, the system simply switches to a new hardware context.
Figure: General-Purpose Registers Organized as Four
Different contexts
• Because the different contexts are a logical interpretation of the register set
at the software level, contexts can overlap.
• That is, a subset of registers can be included in two adjacent contexts as
shown in Figure.
• In the illustration, the fourteenth and fifteenth registers appear as
registers E and F in context 0 and as registers 0 and 1 in context 1.
• Using such a scheme, variables can easily be passed between contexts
with no overhead
Figure: General-Purpose Registers Organized with
Overlapping Contexts
Task Control Blocks
• When a system is implemented using the task control block model, each TCB will
contain all relevant information about the state of the task.
• To effect the context switch, the necessary task state information is copied to the
TCB.
• The TCB can then be inserted into the appropriate queue, and the status and
state information for the new or resumed task can be entered into the system
variables.
• If the running task has been preempted, the TCB will be linked into the ready
queue waiting for the CPU to become available.
• Based on the scheduling algorithm, it may or may not be the next task to run.
• If the task has blocked, the TCB will be linked into the waiting queue for the
required resource.
• When the resource becomes available, the task will move to the ready queue.
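The queue management described above can be sketched with the TCB link fields alone. The following C fragment uses a deliberately minimal, hypothetical `TCB` carrying just an id and a `nextPtr` link; a real TCB would also hold the task pointer, stack, priority, and status.

```c
#include <stddef.h>

/* Minimal TCB with only the link field needed for queue management. */
typedef struct TCB {
    int         id;
    struct TCB *nextPtr;
} TCB;

/* Link a preempted or newly ready TCB onto the tail of the ready
   queue (FIFO order; a priority scheduler would insert in order). */
void enqueue(TCB **head, TCB *tcb)
{
    tcb->nextPtr = NULL;
    if (*head == NULL) { *head = tcb; return; }
    TCB *p = *head;
    while (p->nextPtr != NULL)
        p = p->nextPtr;
    p->nextPtr = tcb;
}

/* Remove the TCB at the head of the queue for dispatch;
   returns NULL when the queue is empty. */
TCB *dequeue(TCB **head)
{
    TCB *t = *head;
    if (t != NULL)
        *head = t->nextPtr;
    return t;
}
```

The same two operations serve every queue in the system: the ready queue, and one waiting queue per shared resource.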
Stacks
• The stack is a rather simple data structure used for storing information
associated with a task or thread.
• It is an area of memory set aside as part of system allocation.
• The information is held in a data structure, similar to the TCB, called a
stack frame or activation record.
• Typical information that must be stored is illustrated in Figure.
Figure: Information Stored in a Stack Frame
• When a stack is used, procedures must be written to manage the processes of
saving information to, accessing information on, and removing information from the stack.
• Such procedures are initially invoked as part of a function call or by the interrupt
handler prior to a context switch.
• In the case of an interrupt, further interrupts are temporarily blocked to allow the
mechanics of the switch to occur.
• The stack management procedures are also invoked when returning to the calling
context to restore the original state.
• The current top of the stack is identified by a variable called the stack pointer.
• When an activation record is added to the stack, the stack pointer is advanced.
• The terms top of stack and advancing the stack pointer each have several
different interpretations.
• Depending on the implementation, the top of stack is either the next
available empty location on the stack or the location of the last valid entry.
• Figure shows the stack growing from low to high memory.
• An alternate implementation grows the stack from high to low memory.
• The stack data type generally supports the following operations:
Push—Add to the top of the stack.
Pop—Remove from the top of the stack.
Peek—Look at the top of the stack
• Three kinds of stack are identified:
1. Runtime
2. Application
3. Multiprocessing
Runtime Stacks
• The runtime stack is under system control and may be shared by other processes or
threads.
• The stack size is known a priori, and there is usually no dynamic allocation.
• At runtime, one must ensure that not too many stack frames are pushed on to the
stack; otherwise, there is potential for overflow, eventually leading to a system crash.
• A difficulty with a single runtime stack in a TCB context arises from the access semantics
of the stack, which permit access only to the top of the stack.
• Consider a simple system comprising two tasks, T0 and T1. If T0 is running and blocks on
an I/O operation, for example, its state information is saved on the stack.
• T1 now starts and similarly blocks.
• Meanwhile, the I/O operation for T0 completes, and T0 is ready to resume.
• However, its state information is now the second entry on the stack, buried
beneath T1's frame, where the top-of-stack access semantics cannot reach it.
• The single runtime stack can work in a foreground/background model.
• Tasks in the background will generally run to completion.
• Real-time tasks, driven by interrupts, will push stack frames onto the stack and
later pop them, thereby precluding the need to access an entry that is not at the top of the stack.
• Interrupts within interrupts do not present a problem as long as the stack size is not
exceeded.
Application Stacks
• The single-stack model can be extended by incorporating several additional
stacks, as shown in Figure.
• The design uses a runtime stack together with multiple application stacks to
simplify the management of multiple tasks in a preemptive environment.
• On interrupt, the runtime stack holds a pointer to the application stack
associated with the initial or preempted task or thread.
• The preempting process now works with a new stack.
• If that task is subsequently preempted, the existing context is held on the
preempted task's application stack, and a pointer to the new application
stack is placed on the runtime stack.
• The save and restore interface functions must be modified to store/restore
with respect to the current context as the runtime stack is unwound.
• Such a scheme can provide a very fast context switch.
Figure: A Stack Architecture Using a Runtime
Stack and Application Stacks
Multiprocessing Stacks
• Multiprocessing in the current context refers to working with multiple
processes rather than multiple processors.
• Multiprocessing stacks are similar to the main runtime stack.
• When a task is started, among other resources, it is allocated its own
stack space.
• In contrast to application stacks, which are managed by the
foreground task (assuming a foreground/background versus a TCB
architecture), the process stack is managed by the owner process.
• It is allocated from the heap when the process is created and returned to
the heap when the process exits.
Contenu connexe

Similaire à Real Time Kernels and Operating Systems.pptx

Module 2 - PPT.pdfModule 2 - PPT.pdfModule 2 - PPT.pdfModule 2 - PPT.pdfModul...
Module 2 - PPT.pdfModule 2 - PPT.pdfModule 2 - PPT.pdfModule 2 - PPT.pdfModul...Module 2 - PPT.pdfModule 2 - PPT.pdfModule 2 - PPT.pdfModule 2 - PPT.pdfModul...
Module 2 - PPT.pdfModule 2 - PPT.pdfModule 2 - PPT.pdfModule 2 - PPT.pdfModul...morganjohn3
 
CSI-503 - 3. Process Scheduling
CSI-503 - 3. Process SchedulingCSI-503 - 3. Process Scheduling
CSI-503 - 3. Process Schedulingghayour abbas
 
In computing, scheduling is the action .
In computing, scheduling is the action .In computing, scheduling is the action .
In computing, scheduling is the action .nathansel1
 
Operating system 17 process management
Operating system 17 process managementOperating system 17 process management
Operating system 17 process managementVaibhav Khanna
 
Types of operating system.................
Types of operating system.................Types of operating system.................
Types of operating system.................harendersin82880
 
Os unit 3 , process management
Os unit 3 , process managementOs unit 3 , process management
Os unit 3 , process managementArnav Chowdhury
 
Process management- This ppt contains all required information regarding oper...
Process management- This ppt contains all required information regarding oper...Process management- This ppt contains all required information regarding oper...
Process management- This ppt contains all required information regarding oper...ApurvaLaddha
 
Operating system concepts
Operating system conceptsOperating system concepts
Operating system conceptsArnav Chowdhury
 
opearating system notes mumbai university.pptx
opearating system notes mumbai university.pptxopearating system notes mumbai university.pptx
opearating system notes mumbai university.pptxssuser3dfcef
 
Schudling os presentaion
Schudling os presentaionSchudling os presentaion
Schudling os presentaioninayat khan
 
Processes and operating systems
Processes and operating systemsProcesses and operating systems
Processes and operating systemsRAMPRAKASHT1
 
Module 1 Introduction.ppt
Module 1 Introduction.pptModule 1 Introduction.ppt
Module 1 Introduction.pptshreesha16
 
CSE3120- Module1 part 1 v1.pptx
CSE3120- Module1 part 1 v1.pptxCSE3120- Module1 part 1 v1.pptx
CSE3120- Module1 part 1 v1.pptxakhilagajjala
 
UNIT 1 - UNDERSTANDINGTHE PRINCIPLES OF OPERATING SYSTEM.pptx
UNIT 1 - UNDERSTANDINGTHE PRINCIPLES OF OPERATING SYSTEM.pptxUNIT 1 - UNDERSTANDINGTHE PRINCIPLES OF OPERATING SYSTEM.pptx
UNIT 1 - UNDERSTANDINGTHE PRINCIPLES OF OPERATING SYSTEM.pptxLeahRachael
 

Similaire à Real Time Kernels and Operating Systems.pptx (20)

Process Management
Process ManagementProcess Management
Process Management
 
Module 2 - PPT.pdfModule 2 - PPT.pdfModule 2 - PPT.pdfModule 2 - PPT.pdfModul...
Module 2 - PPT.pdfModule 2 - PPT.pdfModule 2 - PPT.pdfModule 2 - PPT.pdfModul...Module 2 - PPT.pdfModule 2 - PPT.pdfModule 2 - PPT.pdfModule 2 - PPT.pdfModul...
Module 2 - PPT.pdfModule 2 - PPT.pdfModule 2 - PPT.pdfModule 2 - PPT.pdfModul...
 
CSI-503 - 3. Process Scheduling
CSI-503 - 3. Process SchedulingCSI-503 - 3. Process Scheduling
CSI-503 - 3. Process Scheduling
 
In computing, scheduling is the action .
In computing, scheduling is the action .In computing, scheduling is the action .
In computing, scheduling is the action .
 
Operating system 17 process management
Operating system 17 process managementOperating system 17 process management
Operating system 17 process management
 
Processes
ProcessesProcesses
Processes
 
Types of operating system.................
Types of operating system.................Types of operating system.................
Types of operating system.................
 
Os unit 3 , process management
Os unit 3 , process managementOs unit 3 , process management
Os unit 3 , process management
 
Process management- This ppt contains all required information regarding oper...
Process management- This ppt contains all required information regarding oper...Process management- This ppt contains all required information regarding oper...
Process management- This ppt contains all required information regarding oper...
 
Operating System
Operating SystemOperating System
Operating System
 
Operating system concepts
Operating system conceptsOperating system concepts
Operating system concepts
 
opearating system notes mumbai university.pptx
opearating system notes mumbai university.pptxopearating system notes mumbai university.pptx
opearating system notes mumbai university.pptx
 
Schudling os presentaion
Schudling os presentaionSchudling os presentaion
Schudling os presentaion
 
Lecture 4 process cpu scheduling
Lecture 4   process cpu schedulingLecture 4   process cpu scheduling
Lecture 4 process cpu scheduling
 
Processes and operating systems
Processes and operating systemsProcesses and operating systems
Processes and operating systems
 
Module 1 Introduction.ppt
Module 1 Introduction.pptModule 1 Introduction.ppt
Module 1 Introduction.ppt
 
CSE3120- Module1 part 1 v1.pptx
CSE3120- Module1 part 1 v1.pptxCSE3120- Module1 part 1 v1.pptx
CSE3120- Module1 part 1 v1.pptx
 
Ch3 processes
Ch3   processesCh3   processes
Ch3 processes
 
UNIT 1 - UNDERSTANDINGTHE PRINCIPLES OF OPERATING SYSTEM.pptx
UNIT 1 - UNDERSTANDINGTHE PRINCIPLES OF OPERATING SYSTEM.pptxUNIT 1 - UNDERSTANDINGTHE PRINCIPLES OF OPERATING SYSTEM.pptx
UNIT 1 - UNDERSTANDINGTHE PRINCIPLES OF OPERATING SYSTEM.pptx
 
Unit I OS.pdf
Unit I OS.pdfUnit I OS.pdf
Unit I OS.pdf
 

Dernier

Booking open Available Pune Call Girls Koregaon Park 6297143586 Call Hot Ind...
Booking open Available Pune Call Girls Koregaon Park  6297143586 Call Hot Ind...Booking open Available Pune Call Girls Koregaon Park  6297143586 Call Hot Ind...
Booking open Available Pune Call Girls Koregaon Park 6297143586 Call Hot Ind...Call Girls in Nagpur High Profile
 
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete RecordCCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete RecordAsst.prof M.Gokilavani
 
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Dr.Costas Sachpazis
 
UNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its PerformanceUNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its Performancesivaprakash250
 
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur High Profile
 
Extrusion Processes and Their Limitations
Extrusion Processes and Their LimitationsExtrusion Processes and Their Limitations
Extrusion Processes and Their Limitations120cr0395
 
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service Nashik
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service NashikCall Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service Nashik
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service NashikCall Girls in Nagpur High Profile
 
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...ranjana rawat
 
Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...
Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...
Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...roncy bisnoi
 
University management System project report..pdf
University management System project report..pdfUniversity management System project report..pdf
University management System project report..pdfKamal Acharya
 
UNIT-III FMM. DIMENSIONAL ANALYSIS
UNIT-III FMM.        DIMENSIONAL ANALYSISUNIT-III FMM.        DIMENSIONAL ANALYSIS
UNIT-III FMM. DIMENSIONAL ANALYSISrknatarajan
 
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...Christo Ananth
 
Coefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxCoefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxAsutosh Ranjan
 
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 BookingVIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Bookingdharasingh5698
 
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLSMANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLSSIVASHANKAR N
 
Online banking management system project.pdf
Online banking management system project.pdfOnline banking management system project.pdf
Online banking management system project.pdfKamal Acharya
 
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...ranjana rawat
 
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...ranjana rawat
 
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINEMANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINESIVASHANKAR N
 

Dernier (20)

Booking open Available Pune Call Girls Koregaon Park 6297143586 Call Hot Ind...
Booking open Available Pune Call Girls Koregaon Park  6297143586 Call Hot Ind...Booking open Available Pune Call Girls Koregaon Park  6297143586 Call Hot Ind...
Booking open Available Pune Call Girls Koregaon Park 6297143586 Call Hot Ind...
 
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete RecordCCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
 
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
 
UNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its PerformanceUNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its Performance
 
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
 
Extrusion Processes and Their Limitations
Extrusion Processes and Their LimitationsExtrusion Processes and Their Limitations
Extrusion Processes and Their Limitations
 
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service Nashik
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service NashikCall Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service Nashik
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service Nashik
 
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...
 
Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...
Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...
Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...
 
University management System project report..pdf
University management System project report..pdfUniversity management System project report..pdf
University management System project report..pdf
 
UNIT-III FMM. DIMENSIONAL ANALYSIS
UNIT-III FMM.        DIMENSIONAL ANALYSISUNIT-III FMM.        DIMENSIONAL ANALYSIS
UNIT-III FMM. DIMENSIONAL ANALYSIS
 
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...
 
Coefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxCoefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptx
 
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 BookingVIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Booking
 
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLSMANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
 
Online banking management system project.pdf
Online banking management system project.pdfOnline banking management system project.pdf
Online banking management system project.pdf
 
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
 
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
 
(INDIRA) Call Girl Aurangabad Call Now 8617697112 Aurangabad Escorts 24x7
(INDIRA) Call Girl Aurangabad Call Now 8617697112 Aurangabad Escorts 24x7(INDIRA) Call Girl Aurangabad Call Now 8617697112 Aurangabad Escorts 24x7
(INDIRA) Call Girl Aurangabad Call Now 8617697112 Aurangabad Escorts 24x7
 
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINEMANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
 

Real Time Kernels and Operating Systems.pptx

  • 1. Real Time Kernels and Operating Systems Unit 5 & 6
  • 2. Introduction • Embedded system involves a complex design. • The design is made easy by decomposing it into a several lower level modules called tasks. • These tasks will work together in a organized way to meet the requirement, such a system is referred to as multitasking system. • Important aspects of multitasking design include Exchanging/sharing data between tasks Synchronizing tasks Scheduling task execution Sharing resources among the task. • The software that provides the required coordination is called an operating system. • If the system has tight timing constraints, then it is called Real Time Operating System
  • 3. Tasks And Things • To understand the concept related to tasks and things, let us consider the following example. • We are interested to invite group of our friends for an evening dinner with music, a meal, and perhaps some philosophical discussions while dining. • The lists of requirements are For our friends, it is important that everything be perfect • We want each of the dishes that are being prepared to finish at the same time so that they are cooked to perfection and can all be presented at the table together Menu • Gourmet meal  Fish  Meat  Soup  Sauce  Fruit salad  Vegetable curries
  • 4. Execution • The items in the menu can be prepared by one person but is very difficult for a single person to do this alone and it is time consuming as well. • In order to reduce the workload, we can appoint few workers to do the work easily on time and divide the work to them. • In addition to this we can appoint one supervisor who will be able to give instructions then and there to the workers to complete the work on time. • We can depict the meal preparation in a high-level UML activity diagram as shown in Figure 5.1.
  • 5.
  • 6. • The same concept is used in embedded systems which have more functions to perform with several constraints. • This work can be simplified by decomposing a larger task into several smaller tasks and use the CPU as a supervisor which will take the task to completion.
  • 7. Programs and Processes • To operate an embedded system we should have a firmware which is developed by using the instruction set of the microprocessor. • The firmware is called program and when this program is set for execution in a CPU it is called as process or tasks. • Now for the successful execution of the task the operating system has to allocate the necessary resources such as process stack, memory address space, registers (through the CPU), a program counter, I/O ports, network connections, file descriptors and so on. • When a program is set for execution, the contents of the registers will change and the control flow of the program may change, so the information related to program during execution is called process state
  • 8. The CPU is a resource • For the execution of the task operating systems has to provide required resources mainly the attention of CPU to execute the firmware. • The time taken for the completion of execution is called its execution time • The duration, from the time when it enters the system until it terminates is called its persistence • If there is only a single task in the system, there will be no contention for resources and no restrictions on how long it can run Figure: A Model of a single process
  • 9. Multitasking • If the second task is added to the system. • Potential resource contention problems arises as there is only one CPU and limited other resources. • The problem is resolved by Carefully managing how the resources are allocated to each task By controlling how long each can retain the resources. • The main resource CPU is given to one task for short while and then to the other. • If each task shares the system resources back and forth, each can get its job finished. • If CPU is passed between the tasks quickly enough, it will appear as if both tasks are using it at the same time. • The system models parallel operations by time sharing a single processor.
  • 10. • The execution time for the program will be extended, but the operation will give the appearance of simultaneous execution. • Such a system is called multitasking. • The tasks are said to be running concurrently. • The concept can be extended to more than two tasks as shown in Figure Figure: Multiple Processes
  • 11. Setting a Schedule • In a multiple process, in addition to the CPU, the processes are sharing other system resources such as timers, I/O facilities, and busses. • It is an illusion that all of the tasks are running simultaneously, in reality, at any instant in time, only one process is actively executing. • That process is said to be in the run state. • The other process(es) is/are in the ready waiting state. • Such behaviour is illustrated in the state and sequence diagrams. Figure State Chart and Sequence Diagram
  • 12. • One task will be running while the others are waiting for the CPU. • Since the CPU has to be shared among several tasks, the problem of deciding which task will be given the CPU and when, arises? • To over come this problem, a schedule is set up to specify, under what conditions, and for how long each task will be given the CPU and other resources. • The criteria for deciding which task is to run next are collectively called a scheduling strategy.
  • 13. • Scheduling strategy generally fall into three categories 1. Multiprogramming in which the running task continues until it performs an operation that requires waiting for an external event (e.g., waiting for an I/O event or timer to expire). 2. Real-Time in which tasks with specified temporal deadlines are guaranteed to complete before those deadlines expire. Systems using such a scheme require a response to certain events within a well- defined and constrained time. 3. Time sharing in which the running task is required to give up the CPU so that another task may get a turn. Under a time-shared strategy, a hardware timer is used to preempt the currently executing task and return control to the operating system. Such a scheme permits one to reliably ensure that each process is given a slice of time to use the operating system.
  • 14. Changing Context • Context- important information regarding the state of the task such as value of any variables, value of program counter etc. • Each time the running task is stopped (preempted or blocked) and the CPU is given to another task that is ready, a switch to the new context is executed. • A context switch first saves the state of the currently active task. • If the task that is scheduled for execution next had been running previously, its state is restored and it continues from where it had left off. • If the context was not saved the task has to start from initial state, which again will consume significant amount of time.
  • 15. Figure: A Basic Diagram of Possible Task States
  • 16. Threads – Lightweight and heavyweight • A task or process is characterized by a collection of resources that are utilized to execute a program. • The smallest subset of these resources that is necessary for the execution of the program is called thread. • Some times the subset of resources is also called a lightweight thread. • The process itself is referred as heavyweight thread. • The thread can be in only one process, and a process with out a thread can do nothing.
  • 17. A Single Thread • The sequential execution of a set of instructions through a task or process in an embedded application is called a thread of execution or thread of control. • The thread has a stack and status information relevant to its state and operation and a copy of the physical registers. • During the execution the thread uses the code, data, CPU and other resources that have been allocated to the process. • Figure shows a single task with one thread of execution, referred as single process-single thread design. Figure: Single process – single thread
  • 18. • When we state that the process is running, blocked, ready, or terminated in fact, we are describing the different states of the thread. • If embedded design is intended to perform a wide variety of operation with minimal interaction, then it is appropriate to allocate one process to each major function to be performed. • Such a system are called a multiprocess-single thread Figure Multiprocess – single thread
  • 19. Multiple threads • Sometimes embedded systems may have to perform a single primary function, during partitioning and functional decomposition this single primary function can be considered to divide into several sub-tasks and each sub task will have resources for its execution then the CPU can be passed to each of these sub tasks to accomplish the intended work. • We now see that each of the smaller jobs has its own thread of execution. • Such a system is called a single process–multithread design. Figure: Single Process–Multiple Threads
  • 20. • We can easily extend the design of the application to support multiple processes. • We can further decompose each process into multiple subtasks. • Such a system is called a multiprocess–multithread design. • An operating system that supports tasks with multiple threads is, naturally, referred to as a multithreaded operating system.
  • 21. Sharing Resources • Process need resources for its successful execution. • In embedded system we have following types of design 1. Single process–single thread: has only one process, and that process runs forever. 2. A multiprocess–single thread: supports multiple simultaneously executing processes; each process has only a single thread of control. 3. A single process–multiple threads: supports only one process; within the process, it has multiple threads of control. 4. A multiprocess–multiple threads: supports multiple processes, and within each process there are multiple threads of control. • In a multiple process design the resources are needed to be allocated and shared among the processes.
  • 22. • The following are the important resources for any process 1. The code or firmware, the instructions. 2. The data that the code is manipulating. 3. The CPU and associated physical registers 4. A stack 5. Status information • The first three items are shared among member threads, • The last two are proprietary to each thread
  • 23. Memory Resource Management • In an embedded system based on the architecture of the computation engine we may have a single memory or two memory system which is called von-Neumann and Harvard architecture respectively • Every process in a system requires memory to store data and firmware, hence we need a memory management at different levels
  • 24. System Level Management • When a process is created by the operating system, it is given a portion of physical memory in which it should work. • The set of addresses delimiting that code and data memory, proprietary to each process, is called its address space. • The address space will not be shared with any other peer processes. • When multiple processes are concurrently executing in memory, a pointer or stack error can result in overwriting of memory owned by other processes. • Therefore system software must restrict the range of addresses that are accessible to the executing process. • A process (thread) trying to access memory outside its allowed range should be immediately stopped before it damages memory belonging to other processes. • One means by which such restrictions are enforced is through the concept of privilege level.
  • 25. • Processes are segregated into Supervisor mode capability: is able to access the entire memory space User mode capability: User mode limits the subset of instructions that a process can use. • Processes with a low (user mode) privilege level are not allowed to perform certain kinds of memory accesses or to execute certain instructions. • When a process attempts to execute such restricted instructions, an interrupt is generated and a supervisory program with a higher privilege level decides how to respond. • The higher (supervisor mode) privilege level is generally reserved for supervisory or administration types of tasks that one finds delegated to the operating system or other such software. • Processes with such privilege have access to any firmware and can use any instructions within the microprocessor’s instruction set.
  • 26. Figure: Address Space Access Privileges
  • 27. Process-Level Management A process may create child processes. • When doing so, the parent process may choose to give a subset of its resources to each of the children. • The children are separate processes, and each has its own address space, data, status, and stack. • The code portion of the address space is shared. A process may create multiple threads. • When doing so, the parent process shares most of its resources with each of the threads. • These are not separate processes but separate threads of execution within the same process. • Each thread will have its own stack and status information.
  • 28. • The processes or tasks exist in separate address spaces. • Therefore, one must use some form of messaging or shared variable for inter task exchange. • Processes have a stronger notion of encapsulation than threads since each thread has its own CPU state but shares the code section, data section, and task resources with peer threads. • It is this sharing that gives threads a weaker notion of encapsulation.
  • 29. Reentrant Code • Child processes and their threads share the same firmware memory area. • As a result, two different threads can be executing the same function at the same time. • Functions using only local variables are inherently reentrant. That is, they can be simultaneously called and executed in two or more contexts. • On the other hand, functions that use global variables, variables local to the process, variables passed by reference, or shared resources are not reentrant. • One must be careful to ensure that all accesses to any common resources are coordinated. • When designing the application, one must make certain that one thread cannot corrupt the values of the variables in a second. • Any shared functions must be designed to be re-entrant. • It is good practice to make all functions reentrant. • One never knows when a future modification to the design may need to share an existing function.
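The distinction above can be made concrete with a minimal sketch (the function names are illustrative, not from the source): a function that touches only its arguments and local variables is reentrant, while one that keeps state in a static or global variable is not.

```c
#include <assert.h>

/* Reentrant: uses only its arguments and local variables, so two
   threads may execute it simultaneously without interfering. */
int scale_reentrant(int value, int factor)
{
    int result = value * factor;   /* lives on each caller's stack */
    return result;
}

/* NOT reentrant: the static accumulator is a single copy shared by
   every caller, so concurrent calls can corrupt it. */
int accumulate_not_reentrant(int value)
{
    static int total = 0;          /* one copy shared by all callers */
    total += value;
    return total;
}
```

Making `total` an argument passed by the caller (or protecting it with a lock) would restore reentrancy; this is why functions using only local variables are the safe default.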
  • 30. Foreground/Background systems • A set of tasks can be decomposed into two subsets called background tasks and foreground tasks. • Traditionally, tasks that interact with the user or other I/O devices form the foreground set, and the remaining tasks form the background set. • The interpretation is slightly modified in the embedded world. • The foreground tasks are those initiated by an interrupt or by a real-time constraint that must be met. • They are assigned the higher priority levels in the system. • The background tasks are non-interrupt driven and are assigned the lower priorities. • Once started, a background task will typically run to completion; however, it can be interrupted or preempted by any foreground task at any time.
  • 31. The Operating System • The easiest way to view an operating system is from the perspective of the services it can provide. • To begin, an operating system must provide or support three specific functions Schedule task execution. Dispatch a task to run. Ensure communication and synchronization among tasks. • The kernel is the smallest portion of operating system that provides these functions. • The scheduler determines which task will run and when it will do so. • The dispatcher performs the necessary operations to start the task • Intertask or interprocess communication is the mechanism for exchanging data and information between tasks or processes on the same machine or on different ones.
  • 32. • In an embedded operating system, such functions are captured in the following types of services. Process or Task Management is responsible for • The creation and deletion of user and system processes • The suspension and resumption of such processes. • The management of interprocess communication and of deadlocks. Deadlocks arise when two or more tasks need a resource that is held by some other task. Memory Management is responsible for • The tracking and control of which tasks are loaded into memory, • Monitoring which parts of memory are being used and by whom, administering dynamic memory if it is used • Managing caching schemes.
  • 33. I/O System Management is responsible for • Interaction with a great variety of different devices. In complex systems, such interaction occurs through a special piece of software called a device driver. • A common calling interface—an application programmer’s interface (API)—that permits the application software to interact with different devices uniformly; in UNIX™, for example, everything looks like a file. • The caching and buffering of input and output transactions between each of the devices and the users or tasks, as necessary.
  • 34. File System Management is responsible for • Creation, deletion, and management of files and directories. • Routine backup of any data that is to be saved • Emergency backup either as power is failing or as some other catastrophic event is occurring to the system System Protection • Ensures the protection of data and resources in the context of concurrent processes. • Such a duty is more acute in the context of a von Neumann machine. Networking • In the context of a distributed application, the operating system must also take the responsibility of managing distributed intra system communication and the remote scheduling of tasks
  • 35. Command Interpretation • The operating system in a desktop computer interacts directly with the user and provides the interface to that user’s applications. • In an embedded system this task is implemented via a variety of software drivers, supported by the OS, that in turn interact with the hardware I/O devices. • As commands and directives come into the system, they must be parsed, checked for grammatical accuracy, and directed to the target task.
  • 36. The Real-Time Operating System (RTOS) • A real-time operating system (RTOS) is primarily an operating system. In addition to the responsibilities already enumerated, this special class of operating system ensures (among other things) that (rigid) time constraints can be met. • Often people misuse the term real-time to mean that the system responds quickly. • Such an interpretation is only partially correct. The key characteristic of an RTOS is deterministic behavior. • Deterministic behavior means that, given the same state and the same set of inputs, the next state (and any associated outputs) will be the same each time the control algorithm utilized by the system is executed.
  • 37. • There are two types of real-time systems: Hard real-time — system delays are known, or at least bounded; results are returned within the specified timing bounds. Soft real-time — ensures that critical tasks have priority over other tasks and retain that priority until complete. • The RTOS is commonly found in embedded applications because, if such requirements are not met, the performance of the application is inaccurate or compromised in some way. • Such systems often interact with the physical environment through sensors and various types of measurement devices. • RTOS-based applications are frequently found in scientific experiments, control systems, and other applications where missed deadlines cannot be tolerated.
  • 38. Operating System Architecture • Most contemporary operating systems are designed and implemented as a hierarchy of what are called virtual machines, as illustrated in Figure • The only real machine in the architecture is the microprocessor. • Each layer uses the function/operations and services of lower layers. • The advantage of such approach is increased modularity. Figure: Operating System Virtual Machine Model
  • 39. • In some architectures, the higher level layers have access to lower levels through system calls and hardware instructions. • The existing calling interface between levels is retained while providing access to the physical hardware below. • With such capability, an interface can be made to appear as if it is a machine executing a specific set of instructions as defined by the API. • The idea can be logically extended so as to create the illusion that the tasks at each level are running on its own machine. Each level in such a model is called a virtual machine. Figure: Typical High-Level OS arch
  • 40. Tasks And Task Control Blocks Tasks • A task or process simply identifies a job that is to be done within an embedded application. • More specifically, it is a set of software (firmware) instructions, collected together, that are designed and executed to accomplish that job. • An embedded application is thus nothing more than a collection of such jobs. • How and when each is executed is determined by the schedule and the dispatch algorithms; • How and what data are acted upon by the task is specified by the intertask communication scheme. • The performance of each of these three operations determines the robustness and quality of the design.
  • 41. The Task Control Block • In a task-based approach, each process is represented in the operating system by a data structure called a task control block (TCB), also known as a process control block. • The TCB contains all the important information about the task, such as: Pointer (for linking the TCB to various queues) Process ID and state Program counter CPU registers Scheduling information (priorities and pointers to scheduling queues) Memory management information (tag tables and cache information) Accounting information (time limits or time and resources used) I/O status information (resources allocated or open files)
  • 43. • TCB allocation may be static or dynamic. • Static allocation is typically used in embedded systems with no memory management. • There is a fixed number of task control blocks; the memory is allocated at system generation time and placed in a dormant or unused state. • When a task is initiated, a TCB is created and the appropriate information is entered. • The TCB is then placed into the ready state by the scheduler. • From the ready state, it will be moved to the execute state by the dispatcher. • When a task terminates, the associated TCB is returned to the dormant state. • With a fixed number of TCBs, no runtime memory management is necessary. • One must be cautious, however, not to exhaust the supply of TCBs.
  • 44. Dynamic allocation of TCBs • A variable number of task control blocks can be allocated from the heap at runtime. • When a task is created, the TCB is created, initialized, placed into the ready state, and scheduled by the scheduler. • From the ready state, it will be moved to the execute state and given the CPU by the dispatcher. • When a task terminates, the TCB memory is returned to heap storage. • With dynamic allocation, heap management must be supported. • Dynamic allocation suggests an unlimited supply of TCBs. • However, the typical embedded application has limited memory; allocating too many TCBs can exhaust the supply. • A dynamic memory allocation scheme is generally too expensive for smaller embedded systems.
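A static TCB pool of the kind described above might be sketched as follows (a minimal illustration with invented names — `allocateTCB`, `releaseTCB` — not an API from the source): a fixed array of TCBs is reserved at build time, a dormant entry is handed out when a task is initiated, and the entry simply returns to the dormant state when the task terminates, so no heap is ever touched.

```c
#include <assert.h>
#include <stddef.h>

#define MAX_TCBS 8              /* pool size fixed at system generation time */

typedef enum { DORMANT, READY, EXECUTING } TaskState;

typedef struct {
    int       id;
    TaskState state;
} TCB;

static TCB tcbPool[MAX_TCBS];   /* all TCBs allocated statically; zero-
                                   initialized, so every entry starts DORMANT */

/* Take a dormant TCB from the pool; NULL if the supply is exhausted. */
TCB *allocateTCB(int id)
{
    for (size_t i = 0; i < MAX_TCBS; i++) {
        if (tcbPool[i].state == DORMANT) {
            tcbPool[i].id    = id;
            tcbPool[i].state = READY;   /* handed to the scheduler */
            return &tcbPool[i];
        }
    }
    return NULL;                /* caller must handle exhaustion */
}

/* On task termination the TCB simply returns to the dormant state. */
void releaseTCB(TCB *tcb)
{
    tcb->state = DORMANT;
}
```

The `NULL` return path is where the "do not exhaust the supply" caution bites: a design must either size `MAX_TCBS` for the worst case or handle allocation failure explicitly.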
  • 45. • When a task enters the system, it will typically be placed into a queue called the Entry Queue or Job Queue. • The easiest and most flexible way to implement such a queue is to utilize a linked list as the underlying data structure. • Thus, the last entries in the TCB hold the pointers to the preceding and succeeding TCBs in the queue. • One certainly could use an array data type as well.
  • 46. • In C, the TCB is implemented as a struct containing pointers to all relevant information. • So that every TCB has an identical layout regardless of the type of data each task works with, the pointers are all void* pointers. • The skeletal structure for a typical TCB identifying the essential elements, the task, and an example set of task data are given in the C declarations.
  • 47. • The first entry is a pointer to a function—taskPtr. • That function embodies the functionality associated with the task. • The function’s parameter list comprises a single argument of type void*. • Because we do not wish to place any restrictions on the kind of information that is passed into the task, and because we do not want to force each task to take the same kind of data, we utilize a struct as the means through which to pass the data into the task. • To satisfy the requirement that all TCBs look alike, yet retain flexibility on what data is passed into the task, the type information associated with the data struct is removed by referencing it through a void* pointer. • Within the task itself, the pointer must be cast back to the original type before it can be dereferenced to get the data. • Each task will have its own stack. • The third entry in the TCB is a pointer to that stack. • The fourth entry gives the priority of the task. • The fifth and sixth entries are pointers used to link the TCB to the next and previous TCBs in any of the aforementioned queues.
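The six entries just described can be captured in a skeletal declaration along these lines (a reconstruction matching the description above, not the book's exact listing; the member names are assumptions):

```c
#include <assert.h>

/* Skeletal TCB matching the six entries described above. */
typedef struct tcb {
    void (*taskPtr)(void *taskDataPtr); /* 1: the task's function          */
    void *taskDataPtr;                  /* 2: task data, type erased via
                                              void*; cast back inside task */
    void *stackPtr;                     /* 3: this task's private stack    */
    unsigned short priority;            /* 4: task priority                */
    struct tcb *nextTCBptr;             /* 5: link to next TCB in a queue  */
    struct tcb *prevTCBptr;             /* 6: link to previous TCB         */
} TCB;

/* An example data struct and task: the task recovers the real type
   from the void* before touching the data. */
typedef struct {
    int input;
    int output;
} TaskData;

void computeTask(void *taskDataPtr)
{
    TaskData *data = (TaskData *)taskDataPtr;  /* restore the type */
    data->output = data->input * 2;
}
```

Dispatching the task is then just `tcb->taskPtr(tcb->taskDataPtr)`; every TCB looks alike to the scheduler even though each task's data struct is different.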
  • 48. A Simple Kernel • Consider a simple kernel performing three simple jobs to be scheduled and performed: 1. Bring in some data. 2. Perform a computation on the data. 3. Display the data. • The initial example will be a simple queue of functions operating on shared data. • In this example, an array will be used as the underlying data type of the queue. • The system will run forever, and each task will be scheduled and executed in turn. • An important characteristic of such an implementation is that each task will run to completion (no preemption) before another is allowed to run.
  • 49. Figure: Three Asynchronous Tasks Sharing a Common Data Buffer
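The simple kernel described above can be sketched as follows — a hedged reconstruction, not the book's listing (the task names and the canned input value are invented): an array serves as the queue of task functions, each task runs to completion in turn, and all three share a common data buffer.

```c
#include <assert.h>

#define NUM_TASKS 3

static int sharedData;   /* common buffer shared by all three tasks */
static int displayed;    /* stands in for the display device */

/* 1. Bring in some data (a canned value stands in for real input). */
void inputTask(void)   { sharedData = 10; }

/* 2. Perform a computation on the data. */
void processTask(void) { sharedData *= 3; }

/* 3. Display the data (stubbed; a real system would drive an LCD/UART). */
void displayTask(void) { displayed = sharedData; }

/* The "kernel": an array used as the underlying queue of tasks. */
void (*taskQueue[NUM_TASKS])(void) = { inputTask, processTask, displayTask };

/* One pass of the dispatcher. The real system would wrap this in
   while (1); each task runs to completion — no preemption. */
void runOnce(void)
{
    for (int i = 0; i < NUM_TASKS; i++)
        taskQueue[i]();        /* schedule and execute each task in turn */
}
```

The key property called out in the text is visible here: once `processTask` starts, nothing can interrupt it — control returns to the loop only when the task itself returns.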
  • 53. Interrupts revisited • An interrupt is a signal that causes the microprocessor to stop what it is doing and service a request. • Interrupts may originate inside or outside of the processor. • There are different types of interrupts, and for each interrupt an interrupt service routine (ISR) must be written. • Each routine is installed at a particular place in memory via the interrupt vector table; a separate range of addresses is allocated to each interrupt, so every interrupt routine must be placed in its appropriate location. • Because a system may have many interrupts, the interrupt traffic in a microprocessor must be managed using several different strategies. • Control specifies the ability of the system to accept or ignore interrupts.
  • 54. • The highest level of control is provided by enable and disable instructions. • The enable instruction permits an interrupt to be recognized by the system. • The disable instruction does the opposite. • The second level of control is implemented through masking. • It permits one to selectively listen to or ignore individual interrupts. • Typically, the microprocessor supports a mask register with 1 bit associated with each interrupt. • If the mask bit is a logical 1, the associated interrupt will be recognized when it occurs. • Similarly, when the bit is a logical 0, the interrupt will be ignored. • If masking is supported, normally at least one of the interrupts will be designated as nonmaskable. • That is, the interrupt must be listened to and responded to. • Generally nonmaskable interrupts are associated with system-level functionality and are often disaster management tools.
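The first two levels of control can be modeled in software (a simulation sketch — real hardware implements the enable flag and mask register in silicon, and the names here are invented): a global enable/disable flag, a mask register with one bit per interrupt source, and a nonmaskable line that is always recognized.

```c
#include <assert.h>

typedef unsigned char uint8;

static uint8 interruptMask = 0x00;  /* 1 mask bit per interrupt source   */
static int   globalEnable  = 0;     /* set/cleared by enable/disable     */
#define NMI_BIT 0x80                /* designated nonmaskable interrupt  */

void enableInterrupts(void)  { globalEnable = 1; }   /* enable instruction  */
void disableInterrupts(void) { globalEnable = 0; }   /* disable instruction */
void setMask(uint8 mask)     { interruptMask = mask; }

/* Nonzero when the interrupt on 'bit' would be recognized:
   NMI always; others only if globally enabled AND mask bit is 1. */
int recognized(uint8 bit)
{
    if (bit & NMI_BIT)
        return 1;                   /* must be listened and responded to */
    return globalEnable && (interruptMask & bit) ? 1 : 0;
}
```

The two levels compose exactly as the text says: `disableInterrupts()` silences every maskable source at once, while `setMask()` selectively listens to or ignores individual sources.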
  • 55. • The third level of control assigns a priority to each interrupt. • Higher priority interrupts can interrupt those with lower priority, but not vice versa. • In most cases, the priority of each interrupt is set by the microprocessor manufacturer. • An interrupt is an asynchronous function or subroutine call. • The mechanics of handling an interrupt are similar to those of a function call. • As with a function call, under interrupt the system state information is held on the stack and restored on return. • Consequently, as with the function call, it is possible to overflow the stack. • If, under a priority-based scheme, interrupts are permitted to interrupt an interrupt in its ISR at the same level, the potential for stack overflow exists and must be managed. • The normal solution is to disable or mask the interrupts as appropriate to ensure that overflow cannot occur.
  • 56. • When working with interrupts and ISRs, always keep the routine as simple and short as possible. • An ISR with more than a dozen to 18 lines of code is probably too long. • The objective is to respond to the interrupt, do the minimum amount of work that absolutely needs to be done, and then exit the ISR; further processing, if necessary, can be done in one of the tasks or foreground processes.
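That discipline leads to a common "deferred work" pattern, sketched here with invented names (in a real system `adcISR` would be wired to a vector and read a hardware register): the ISR only captures the data and sets a flag, and a task does the expensive processing later.

```c
#include <assert.h>

/* Shared between ISR and task; 'volatile' because they change
   outside the normal flow of the polling code. */
static volatile int dataReady = 0;
static volatile int rawSample = 0;

/* A deliberately short ISR: grab the sample, set a flag, exit.
   (The argument stands in for reading an ADC data register.) */
void adcISR(int sampleFromHardware)
{
    rawSample = sampleFromHardware;
    dataReady = 1;
}

/* The heavy processing is deferred to a task, not done in the ISR.
   Returns -1 when no new sample has arrived. */
int backgroundTask(void)
{
    if (!dataReady)
        return -1;
    dataReady = 0;
    return rawSample * rawSample;   /* the "expensive" work happens here */
}
```

Keeping the ISR this small minimizes the time other interrupts are locked out and keeps worst-case latency predictable, which is exactly what a real-time design needs.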
  • 57. Memory Management Revisited • We have studied the different possible states for tasks or processes and also the importance of saving and restoring context. • A context switch involves 1. Saving the existing context 2. Switching to the new one 3. Restoring the old one • These three steps can consume a significant amount of time. • When operating under real-time constraints, the time required to effect the switch can be critical to the success or failure of the application. • The information that must be saved from an existing context may be as simple as the program counter and stack pointer of the original context, or as complex as the full state of the system at the time the switch occurs.
  • 58. • The typical minimum information to be saved includes: The state of the CPU registers, including the program counter The values of local variables Status information • The saving of such information can be accomplished in several different ways: Duplicate Hardware Context Task Control Blocks Stack
  • 59. Duplicate Hardware Context • A typical microprocessor has a limited number of general-purpose registers. • When a context switch takes place, the values of the general-purpose registers must be saved prior to the switch and then restored on return. • Some microprocessors provide hardware support for context switching by increasing the number of available general-purpose registers. • At the software level, several different contexts can be defined and a subset of the registers allocated to each. • For example, with 64 general-purpose registers, 4 different contexts, each with 16 general-purpose registers, can be defined. • Thus, each context can have a set of registers called R0–R15, as illustrated in the Figure. • When a switch occurs, rather than saving the contents of the current set of registers, the system simply switches to a new hardware context.
  • 60. Figure: General-Purpose Registers Organized as Four Different contexts
  • 61. • Because the different contexts are a logical interpretation of the register set at the software level, contexts can be made to overlap. • That is, a subset of registers can be included in two adjacent contexts, as shown in the Figure. • In the illustration, the fourteenth and fifteenth registers appear as registers E and F in context 0 and as registers 0 and 1 in context 1. • Using such a scheme, variables can easily be passed between contexts with no overhead.
  • 62. Figure: General-Purpose Registers Organized with Overlapping Contexts
  • 63. Task Control Blocks • When a system is implemented using the task control block model, each TCB will contain all relevant information about the state of the task. • To effect the context switch, the necessary task state information is copied to the TCB. • The TCB can then be inserted into the appropriate queue, and the status and state information for the new or resumed task can be entered into the system variables. • If the running task has been preempted, its TCB is linked into the ready queue, waiting for the CPU to become available. • Based on the scheduling algorithm, it may or may not be the next task to run. • If the task has blocked, its TCB is linked into the waiting queue for the required resource. • When the resource becomes available, the task moves to the ready queue.
  • 64. Stacks • The stack is a rather simple data structure used for storing information associated with a task or thread. • It is an area set aside in memory as part of system allocation. • The information is held in a data structure, similar to a TCB, called a stack frame or activation record. • Typical information that must be stored is illustrated in the Figure. Figure: Information Stored in a Stack Frame
  • 65. • When a stack is used, procedures must be written to manage the processes of saving, accessing, and removing information to or from the stack. • Such procedures are initially invoked as part of a function call or by the interrupt handler prior to a context switch. • In the case of an interrupt, further interrupts are temporarily blocked to allow the mechanics of the switch to occur. • The stack management procedures are also invoked when returning to the calling context to restore the original state. • The current top of the stack is identified by a variable called the stack pointer. • When an activation record is added to the stack, the stack pointer is advanced. • The terms top of stack and stack pointer advanced have several different interpretations. • Depending on the implementation, the top of stack can be interpreted either as the next available empty location on the stack or as the location of the last valid entry. • The Figure shows the stack growing from low to high memory.
  • 66. • An alternate implementation grows the stack from high to low memory. • The stack data type generally supports the following operations Push—Add to the top of the stack. Pop—Remove from the top of the stack. Peek—Look at the top of the stack • Three kinds of stack are identified: 1. Runtime 2. Application 3. Multiprocessing
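The three operations can be sketched over a fixed-size array (a minimal illustration, using the "next empty location" interpretation of the stack pointer and an `int` standing in for a whole stack frame; a real activation record would be a struct):

```c
#include <assert.h>

#define STACK_SIZE 16

static int stack[STACK_SIZE];
static int stackPointer = 0;   /* "next available empty location" style */

/* Push — add to the top; returns 0 on success, -1 on overflow. */
int push(int frame)
{
    if (stackPointer >= STACK_SIZE)
        return -1;             /* overflow must be detected, not ignored */
    stack[stackPointer++] = frame;   /* store, then advance the pointer */
    return 0;
}

/* Pop — remove from the top; -1 is a sentinel for an empty stack. */
int pop(void)
{
    if (stackPointer == 0)
        return -1;
    return stack[--stackPointer];
}

/* Peek — look at the top of the stack without removing it. */
int peek(void)
{
    return (stackPointer == 0) ? -1 : stack[stackPointer - 1];
}
```

The overflow check in `push` is precisely the guard the runtime-stack discussion below calls for: pushing too many frames must fail loudly rather than silently corrupt adjacent memory.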
  • 67. Runtime stacks • The runtime stack is under system control and may be shared by other processes or threads. • The stack size is known a priori, and there is usually no dynamic allocation. • At runtime, one must ensure that not too many stack frames are pushed onto the stack; otherwise, there is the potential for overflow, eventually leading to a system crash. • A difficulty with a single runtime stack in a TCB context arises from the access semantics of the stack, which permit access only to the top of the stack. • Consider a simple system comprising two tasks, T0 and T1. If T0 is running and blocks on an I/O operation, for example, its state information is saved on the stack. • T1 now starts and similarly blocks. • Meanwhile, the I/O operation for T0 completes, and T0 is ready to resume. • However, its state information is now the second entry on the stack and cannot be reached. • The single runtime stack can work in a foreground/background model. • Tasks in the background will generally run to completion. • Real-time tasks, driven by interrupts, will push and then pop stack frames, thereby precluding the need to access an entry that is not at the top of the stack. • Interrupts within interrupts do not present a problem as long as the stack size is not exceeded.
  • 68. Application Stacks • The single-stack model can be extended by incorporating several additional stacks, as we see in the Figure. • The design utilizes a runtime stack as well as multiple application stacks to simplify the management of multiple tasks in a preemptive environment. • On interrupt, the runtime stack holds a pointer to the application stack associated with the initial or preempted task or thread. • The preempting process now works with a new stack. • If that task is subsequently preempted, the existing context is held on the preempted task’s application stack, and a pointer to the new application stack is placed on the runtime stack. • The save and restore interface functions must be modified to store/restore with respect to the current context as the runtime stack is unwound. Such a scheme can provide a very fast context switch. Figure: A Stack Architecture Using a Runtime Stack and Application Stacks
  • 69. Multiprocessing Stacks • Multiprocessing in the current context refers to working with multiple processes rather than multiple processors. • Multiprocessing stacks are similar to the main runtime stack. • When a task is started, among other resources, it is allocated its own stack space. • In contrast to application stacks, which are managed by the foreground task (assuming a foreground/background versus a TCB architecture), the process stack is managed by the owner process. • It is allocated from heap when the process is created and returned to the heap when the process exits.