2. Basic Hardware
Main memory and the registers built into the processor are the only storage the CPU can access directly.
Machine instructions take memory addresses as arguments, but never disk addresses.
Registers on the CPU are accessible within one CPU cycle.
Instruction execution uses registers to perform operations.
Accessing main memory requires many CPU cycles.
To avoid stalling the processor, cache memory is used.
Cache is faster memory placed between the CPU and main memory.
We are concerned not only with the relative speed of memory access but also with protecting
processes from one another.
3. Each process has a separate memory space.
Protection is provided by using two registers: the base register and the limit register.
Base register: holds the smallest legal physical address.
Limit register: specifies the size of the range.
These registers can be loaded only by the OS, using special privileged instructions.
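As a rough sketch (not real hardware), the base/limit check described above can be modeled as a simple comparison applied to every user-mode address; the register values below are illustrative, not from the source.

```python
# Sketch of base/limit protection: every address generated in user mode
# is compared against the base and limit registers; an out-of-range
# access would trap to the OS.

def is_legal(address, base, limit):
    """An access is legal if base <= address < base + limit."""
    return base <= address < base + limit

# Illustrative values: process loaded at 300040 with a 120900-byte range.
base, limit = 300040, 120900
print(is_legal(300040, base, limit))   # True  (first legal address)
print(is_legal(420940, base, limit))   # False (one past the last legal address)
```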
5. Address Binding
Programs reside on disk as binary executable files.
A program must be brought into memory for execution.
A process may move between disk and memory during its lifetime.
Processes on disk waiting to be brought into memory form the input queue.
Most processes reside in physical memory during execution.
The address space of the computer starts at 00000.
Addresses in a source program are symbolic.
The compiler binds these symbolic addresses to relocatable addresses.
The linker or loader converts the relocatable addresses to absolute addresses.
Each binding is a mapping from one address space to another.
7. Address binding
Types of address binding:
1. Compile time: if it is known at compile time where the process will reside in memory, absolute code
can be generated.
2. Load time: if it is not known at compile time where the process will reside in memory,
the compiler must generate relocatable code. Final binding is delayed until load
time. If the starting address changes, the program must be reloaded.
3. Execution time: if a process can move during its execution from one memory
segment to another, binding must be delayed until run time.
8. Logical vs physical address space
An address generated by the CPU is referred to as a logical address (virtual address).
An address seen by the memory unit is referred to as a physical address.
Compile-time and load-time address binding generate identical logical and physical
addresses.
With execution-time binding, the logical and physical addresses differ.
The set of logical addresses generated by the CPU is the logical address space.
The set of physical addresses corresponding to these logical addresses is the physical address
space.
The run-time mapping from logical to physical addresses is done by a hardware device
called the memory management unit (MMU).
10. Dynamic loading
To execute a process, the entire program and its data must be in physical memory.
The size of a process is thus limited to the size of physical memory.
To achieve better memory utilization, we can use dynamic loading.
With dynamic loading, a routine is not loaded until it is called.
All routines are kept on disk in a relocatable load format.
The main program is loaded into memory and executed.
Other routines are then loaded as they are called.
The advantage is that an unused routine is never loaded.
Dynamic loading is useful when the code is large.
No special OS support is required.
11. Dynamic linking and shared libraries
With dynamic linking, linking of system libraries is postponed until execution time.
Dynamic linking is typically used with system libraries.
Without it, each program must include a copy of the language library in its executable image,
which wastes both disk space and main memory.
With dynamic linking, a stub is included in the program image.
The stub is a small piece of code that indicates how to locate the memory-resident library routine.
The stub is also responsible for referencing new versions of libraries when they are
updated.
Such language libraries shared by programs are called shared
libraries.
12. Swapping
A process must be in memory for execution.
It can be swapped out temporarily to a backing store and then brought back into
memory to continue execution.
Swapping is used with, for example, round-robin scheduling and priority scheduling.
A process that is swapped out will be swapped back into the same memory space it
occupied previously (if compile-time or load-time binding is used).
With execution-time binding, it may be swapped into a different memory space.
The ready queue contains all processes that are ready to execute.
The CPU scheduler selects a process from the ready queue for execution and calls the dispatcher.
The dispatcher swaps out a process from memory and swaps in another process for
execution.
14. Contiguous memory allocation
Main memory must be allocated in the most efficient way possible.
Memory is divided into two partitions: one for the resident OS and one for user
processes.
The OS can be placed in either low memory or high memory.
It is usually placed in low memory.
In a multiprogramming system, several processes reside in memory at a time.
We must decide how to allocate memory to the processes waiting in the input queue.
Contiguous memory allocation: each process is contained in a single contiguous section of
memory.
15. Memory mapping and protection
Mapping and protection use a limit register and a relocation register.
Relocation register: contains the value of the smallest physical address.
Limit register: contains the range of logical addresses.
Each logical address must be less than the value in the limit register.
The MMU maps a logical address to a physical address dynamically by adding the value of the
relocation register to the logical address.
The mapped address is sent to memory.
When the CPU scheduler selects a process for execution, the dispatcher loads the limit and
relocation registers with the correct values as part of the context switch.
This scheme protects both the OS and other user processes from modification by the executing process.
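The limit check followed by dynamic relocation can be sketched as below; the register values are the classic illustrative ones (relocation 14000, limit 3000), not taken from this text.

```python
# Sketch of the MMU's dynamic relocation: a logical address is first
# checked against the limit register, then the relocation register is
# added to form the physical address sent to memory.

def mmu_map(logical, relocation, limit):
    if not (0 <= logical < limit):
        raise MemoryError("trap: addressing error")  # OS handles the trap
    return logical + relocation

# Illustrative register values: relocation = 14000, limit = 3000.
print(mmu_map(346, 14000, 3000))   # 14346
```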
17. Memory Allocation
Fixed-sized partitions (multiple-partition method):
Memory is divided into several fixed-sized partitions.
The degree of multiprogramming is bound by the number of partitions.
When a partition is free, a process is selected from the input queue and loaded into the free
partition.
When the process terminates, the partition becomes available for another process.
Disadvantage:
1. Internal fragmentation: unused memory that is internal to a partition.
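A small worked example of internal fragmentation, with made-up partition and process sizes: any process smaller than its fixed partition leaves a leftover that no other process can use.

```python
# Internal fragmentation with fixed-sized partitions: the leftover bytes
# inside a partition are allocated to the process but never used.
partition_size = 8192          # bytes (illustrative)
process_size = 7000            # bytes (illustrative)
internal_fragmentation = partition_size - process_size
print(internal_fragmentation)  # 1192 bytes wasted inside the partition
```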
19. Memory Allocation (Cont.)
Variable partitions:
The OS keeps a table indicating which parts of memory are occupied and which are
free.
Initially, all memory is available to user processes as one large block, called a hole.
When a process enters the system, the OS takes into account the memory requirements of the
process and the amount of available memory space.
When a process is allocated memory, it is loaded into memory.
When a process terminates, it releases its memory.
22. Strategies to select the best hole from the set of available holes:
1. First fit: allocate the first hole that is big enough. Searching can start at the beginning
of the set of holes or at the location where the previous search ended.
2. Best fit: allocate the smallest hole that is big enough. The entire list must be searched to find the
smallest hole.
3. Worst fit: allocate the largest hole. The entire list must be searched to find the largest hole in memory.
Disadvantages:
1. Both the first-fit and best-fit strategies for memory allocation suffer from external
fragmentation.
As processes are loaded into and removed from memory, the free memory space is broken
into little pieces.
External fragmentation occurs when there is enough total space to satisfy a request but the
memory is not contiguous: storage is fragmented into a large number of small holes.
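The three placement strategies can be sketched as below; the hole sizes and the 212 KB request are illustrative values, not from the source. Each function returns the index of the chosen hole, or None if no hole is big enough.

```python
# Sketch of the three hole-selection strategies for variable partitions.

def first_fit(holes, request):
    for i, size in enumerate(holes):
        if size >= request:            # stop at the first hole that fits
            return i
    return None

def best_fit(holes, request):
    fits = [(size, i) for i, size in enumerate(holes) if size >= request]
    return min(fits)[1] if fits else None   # smallest hole that fits

def worst_fit(holes, request):
    fits = [(size, i) for i, size in enumerate(holes) if size >= request]
    return max(fits)[1] if fits else None   # largest hole overall

holes = [100, 500, 200, 300, 600]      # hole sizes in KB (illustrative)
print(first_fit(holes, 212))  # 1 (the 500 KB hole)
print(best_fit(holes, 212))   # 3 (the 300 KB hole)
print(worst_fit(holes, 212))  # 4 (the 600 KB hole)
```

Note that best fit and worst fit must scan the whole list, while first fit can stop early, which is why first fit is generally faster in practice.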
24. The solution to external fragmentation is compaction.
The goal is to shuffle the memory contents so as to place all free blocks together in one
large block.
Compaction is not possible if address binding is done at load or compile
time.
It is possible only when address binding is done at execution time.
25. Paging
Paging is a memory-management scheme that allows the OS to retrieve processes from
secondary storage into main memory in the form of pages.
Paging avoids external fragmentation and the need for compaction.
Paging involves breaking physical memory into fixed-sized blocks called frames,
and breaking logical memory into fixed-sized blocks called pages (each
process is divided into pages).
When a process is to be executed, its pages are loaded into any of the available
frames.
The page size and the frame size are the same.
28. Basic method
Hardware support for paging:
Every address generated by the CPU is divided into two parts: a page number (p) and a page
offset (d).
The page number is used as an index into the page table.
The page table contains the base address of each page in physical memory.
This base address is combined with the page offset to form the physical memory
address, which is sent to the memory unit.
The page size is defined by the hardware and is usually a power of 2, typically between 512 bytes and
16 MB per page.
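The p/d split and page-table lookup described above can be sketched as follows, assuming a 4 KB page size (so the low 12 bits form the offset); the page-to-frame mapping is illustrative.

```python
# Sketch of paged address translation: split the logical address into
# page number p and offset d, look up the frame, and rebuild the
# physical address.

PAGE_SIZE = 4096  # page size is a power of 2 (assumed 4 KB here)

def translate(logical, page_table):
    p = logical // PAGE_SIZE           # page number: index into the page table
    d = logical % PAGE_SIZE            # offset within the page
    frame = page_table[p]              # frame number from the page table
    return frame * PAGE_SIZE + d       # physical address sent to memory

page_table = {0: 5, 1: 2, 2: 7}        # page -> frame (illustrative)
print(translate(4100, page_table))     # page 1, offset 4 -> 2*4096 + 4 = 8196
```

Because the page size is a power of 2, real hardware performs this split with bit shifts and masks rather than division.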
33. Hardware support for paging
Each OS has its own method of storing page tables.
Most systems allocate a page table for each process.
A pointer to the page table is stored, with the other register values, in the PCB.
When the dispatcher loads a process, it must reload these registers and define the correct
hardware page-table values from the stored page table.
The use of registers for the page table is satisfactory if the page table is reasonably
small (for example, 256 entries).
Most computers allow a much larger number of entries; in such cases, instead of
fast registers, the page table is kept in main memory and a pointer to it is held
in the Page Table Base Register (PTBR).
The problem with this scheme is the time required to access a memory location:
every access needs two memory references, one for the page-table entry and one for the data.
34. Hardware support for paging
Another mechanism is to use a special, small, fast-lookup hardware cache called the
Translation Lookaside Buffer (TLB).
The TLB is associative, high-speed memory.
Each entry in the TLB consists of two parts: a key and a value.
When the associative memory is presented with an item, it is compared with all keys
simultaneously. If the item is found, the corresponding value field is returned.
The TLB contains only a few of the page-table entries. When a logical address is generated by the
CPU, its page number is presented to the TLB. If the page number is found, its frame
number is immediately available and is used to access memory.
If the page number is not found in the TLB, a memory reference to the page table must be made. When the
frame number is obtained, we use it to access memory. We also add the page number
and frame number to the TLB, so they will be found quickly on the next reference.
36. Protection
Memory protection is provided by protection bits associated with
each frame. These bits are kept in the page table.
One bit defines a page as read-write or read-only.
The protection bits verify that no writes are being made to read-only pages.
Another bit, the valid-invalid bit, is attached to each page-table entry.
When the bit is set to valid, the associated page is in the process's logical address space
and is thus a legal page.
When the bit is set to invalid, the page is not in the process's logical address space.
Illegal addresses are trapped by using the valid-invalid bit.
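The two checks described above can be sketched per page-table entry; the dict-based entry layout here is a hypothetical stand-in for the hardware bits.

```python
# Sketch of per-entry protection: each page-table entry carries a
# valid-invalid bit and a read-write bit; illegal or write-to-read-only
# accesses trap to the OS.

def check_access(entry, write):
    """entry is a dict with 'frame', 'valid', and 'writable' keys (illustrative)."""
    if not entry["valid"]:
        raise MemoryError("trap: invalid page")   # page not in address space
    if write and not entry["writable"]:
        raise MemoryError("trap: write to read-only page")
    return entry["frame"]

entry = {"frame": 3, "valid": True, "writable": False}
print(check_access(entry, write=False))   # 3 (read of a read-only page is legal)
```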
38. Shared pages
Paging makes it possible to share pages.
Reentrant code (pure code) is code that is not self-modifying.
Reentrant code never changes during execution.
Because of this, two or more processes can execute the same code at the same time.
Each process has its own copy of the registers and its own data storage to hold the data for its
execution.
Only one copy of the shared code (for example, an editor) needs to be kept in physical memory.
Each process's page table maps onto the same copy in physical memory, but data pages
are mapped onto different frames.
40. Advantages of Paging
The memory-management algorithm is easy to use.
No external fragmentation.
Swapping is easy between equal-sized pages and page frames.
41. Disadvantages of Paging
May cause internal fragmentation.
The memory-management hardware and data structures are more complex.
Page tables consume additional memory.
Multi-level paging may lead to memory-reference overhead.