Memory Management
Strategies
Prepared By: Mr. Sangram A. Patil
Assistant Professor PVPIT,Budhgaon
Basic Hardware
 Main memory and the registers built into the processor are the only storage the
CPU can access directly.
 Machine instructions take memory addresses as arguments, never disk addresses.
 Registers are accessible within one CPU clock cycle.
 Instruction execution uses registers to perform operations.
 Accessing main memory, by contrast, may take many CPU cycles.
 To avoid stalling the processor, cache memory is used.
 A cache is faster memory placed between the CPU and main memory.
 We are concerned not only with the relative speed of memory access but also with
protecting processes from one another.
 Each process has a separate memory
space.
 Protection is provided using two
registers: the base and limit registers.
 Base register: holds the smallest legal
physical address.
 Limit register: specifies the size of
the range.
 These registers can be loaded only by
the OS, using special privileged instructions.
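A minimal sketch of the hardware check, in Python for illustration (the register values are hypothetical examples):

```python
# Hypothetical base/limit register values for one process.
BASE = 300040    # smallest legal physical address
LIMIT = 120900   # size of the legal range

def legal(addr: int) -> bool:
    """The hardware check: every user-mode address must satisfy
    BASE <= addr < BASE + LIMIT, or the CPU traps to the OS."""
    return BASE <= addr < BASE + LIMIT

assert legal(300040)        # first legal address
assert legal(420939)        # last legal address
assert not legal(420940)    # one past the end: trap to the OS
```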
Basic Hardware
 Memory protection is accomplished by the CPU hardware: every address generated in
user mode is compared against these registers, and any violation traps to the OS.
Address Binding
 A program resides on disk as a binary executable file.
 It must be brought into memory for execution.
 A process may move between disk and memory during its lifetime.
 Processes on disk waiting to be brought into memory form the input queue.
 During execution, a process normally resides in physical memory.
 The address space of the computer starts at 00000.
 Addresses in a source program are symbolic.
 The compiler binds these symbolic addresses to relocatable addresses.
 The linker or loader binds the relocatable addresses to absolute addresses.
 Each binding is a mapping from one address space to another.
Address binding
 Types of address binding
1. Compile time: if it is known at compile time where the process will reside in
memory, absolute code can be generated.
2. Load time: if it is not known at compile time where the process will reside in
memory, the compiler must generate relocatable code. Final binding is delayed
until load time. If the starting address changes, the program need only be reloaded.
3. Execution time: if the process can be moved during its execution from one memory
segment to another, binding must be delayed until run time.
Logical vs physical address space
 An address generated by the CPU is referred to as a logical address (virtual
address).
 An address seen by the memory unit is referred to as a physical address.
 Compile-time and load-time address binding generate identical logical and physical
addresses.
 With execution-time binding, logical and physical addresses differ.
 The set of all logical addresses generated by a program is its logical address space.
 The set of physical addresses corresponding to these addresses is its physical
address space.
 The run-time mapping from logical to physical addresses is done by a hardware device
called the memory management unit (MMU).
Dynamic loading
 Traditionally, to execute a process, the entire program and its data had to be in
physical memory.
 This limits the size of a process to the size of physical memory.
 To achieve better space utilization we can use dynamic loading.
 With dynamic loading, a routine is not loaded until it is called.
 All routines are kept on disk in a relocatable load format.
 The main program is loaded into memory and executed.
 Other routines are then loaded as they are called.
 The advantage is that an unused routine is never loaded.
 Dynamic loading is most useful when large amounts of code are needed only
occasionally.
 No special OS support is required.
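The idea can be sketched in Python (the routine table and on-demand loader here are hypothetical stand-ins for relocatable routines on disk):

```python
# Hypothetical "disk image": routines kept in relocatable load format.
ON_DISK = {
    "report_error": lambda: "error report generated",
    "print_stats":  lambda: "statistics printed",
}

loaded = {}  # routines currently in memory

def call_routine(name):
    """Load a routine the first time it is called, then reuse it."""
    if name not in loaded:              # not yet in memory
        loaded[name] = ON_DISK[name]    # "relocate and load" on demand
    return loaded[name]()

call_routine("print_stats")             # main program calls one routine
# report_error was never called, so it was never loaded.
assert "report_error" not in loaded
```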
Dynamic linking and shared libraries
 With dynamic linking, linking of system libraries is postponed until execution time.
 It is typically used with system libraries.
 Without it, each program must include a copy of its language library in its
executable image.
 This wastes both disk space and main memory.
 With dynamic linking, a stub is included in the program image instead.
 A stub is a small piece of code that indicates how to locate the memory-resident
library routine, or how to load the library if it is not already present.
 The stub mechanism also lets programs reference new versions of libraries when they
are updated.
 Such libraries, shared among programs, are called shared libraries.
Swapping
 A process must be in memory for execution.
 It can be swapped out temporarily to a backing store and then brought back into
memory for continued execution.
 Swapping is used, for example, with round-robin scheduling (swap when a quantum
expires) and with priority scheduling (swap out a lower-priority process when a
higher-priority one arrives).
 A process that is swapped out will be swapped back into the same memory space it
occupied previously (if compile-time or load-time binding is used).
 With execution-time binding, it can be swapped into a different address space.
 The ready queue contains all processes that are ready to execute.
 The CPU scheduler selects a process from the ready queue and calls the dispatcher.
 If necessary, the dispatcher swaps out a process in memory and swaps in the selected
process for execution.
Contiguous memory allocation
 Main memory must be allocated in the most efficient way possible.
 Memory is divided into two partitions: one for the resident OS and one for user
processes.
 The OS can be placed in either low memory or high memory.
 It is usually placed in low memory.
 In a multiprogramming system, several processes reside in memory at a time.
 We must decide how to allocate memory to the processes waiting in the input queue.
 Contiguous memory allocation: each process is contained in a single contiguous
section of memory.
Memory mapping and protection
 Use a relocation register together with a limit register.
 Relocation register: contains the value of the smallest physical address.
 Limit register: contains the range of logical addresses.
 Each logical address must be less than the limit register.
 The MMU maps a logical address to a physical address dynamically by adding the
value of the relocation register to the logical address.
 The mapped address is sent to memory.
 When the CPU scheduler selects a process for execution, the dispatcher loads the
relocation and limit registers with the correct values as part of the context switch.
 This scheme protects both the OS and other users' programs from being modified by
the running process.
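The MMU's job here can be sketched as follows (the register values are hypothetical examples loaded by the dispatcher):

```python
# Hypothetical register contents for the running process.
RELOCATION = 14000   # smallest physical address of this process
LIMIT = 3000         # size of the logical address range

def mmu_translate(logical: int) -> int:
    """Check the logical address against the limit register, then
    relocate it by adding the relocation register."""
    if not (0 <= logical < LIMIT):
        raise MemoryError("trap: addressing error")
    return RELOCATION + logical

assert mmu_translate(0) == 14000      # start of the process's memory
assert mmu_translate(100) == 14100    # dynamic relocation
```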
Memory Allocation
 Fixed-sized partitions (multiple-partition method):
 Divide memory into several fixed-sized partitions.
 The degree of multiprogramming is bound by the number of partitions.
 When a partition is free, a process is selected from the input queue and loaded
into the free partition.
 When a process terminates, the partition becomes available for another process.
 Disadvantage:
1. Internal fragmentation: unused memory that is internal to a partition.
Memory Allocation (Cont.)
 Variable partitions:
 The OS keeps a table indicating which parts of memory are occupied and which are
free.
 Initially, all memory is available to user processes and is considered one large
block, called a hole.
 When a process enters the system, the OS takes into account the memory requirements
of the process and the amount of available memory space.
 When a process is allocated memory, it is loaded into that memory.
 When a process terminates, it releases its memory.
Main Memory
 Strategies for selecting a hole from the set of available holes:
1. First fit: allocate the first hole that is big enough. Searching can start at the
beginning of the set of holes or at the location where the previous search ended.
2. Best fit: allocate the smallest hole that is big enough. The entire list must be
searched, unless it is ordered by size.
3. Worst fit: allocate the largest hole. Again, the entire list must be searched.
 Disadvantages:
1. Both the first-fit and best-fit strategies suffer from external fragmentation.
As processes are loaded into and removed from memory, the free memory space is
broken into little pieces.
External fragmentation exists when there is enough total memory to satisfy a
request, but the free memory is not contiguous: storage is fragmented into a large
number of small holes.
 One solution to external fragmentation is compaction.
 The goal is to shuffle the memory contents so as to place all free memory together
in one large block.
 Compaction is not possible if address binding is done at load or compile time.
 It is possible only when address binding is done at execution time.
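The three placement strategies above can be sketched as follows (the hole list is a hypothetical example, each hole given as a (start, length) pair):

```python
def first_fit(holes, size):
    """Index of the first hole that is big enough, else None."""
    for i, (_start, length) in enumerate(holes):
        if length >= size:
            return i
    return None

def best_fit(holes, size):
    """Index of the smallest hole that is big enough, else None."""
    fits = [(length, i) for i, (_start, length) in enumerate(holes) if length >= size]
    return min(fits)[1] if fits else None

def worst_fit(holes, size):
    """Index of the largest hole, provided it is big enough, else None."""
    fits = [(length, i) for i, (_start, length) in enumerate(holes) if length >= size]
    return max(fits)[1] if fits else None

holes = [(0, 100), (200, 500), (900, 300)]   # (start address, length)
assert first_fit(holes, 250) == 1   # first hole >= 250 is the 500-byte one
assert best_fit(holes, 250) == 2    # tightest fit is the 300-byte hole
assert worst_fit(holes, 250) == 1   # largest hole is the 500-byte one
```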
Paging
 Paging is a memory-management scheme that allows the OS to retrieve processes from
secondary storage into main memory in the form of pages.
 Paging avoids external fragmentation and the need for compaction.
 Paging involves breaking physical memory into fixed-sized blocks called frames.
 Logical memory is broken into blocks of the same size called pages (each process is
divided into pages).
 When a process is to be executed, its pages are loaded into any available frames.
 The page size and the frame size are the same.
Basic method
 Hardware support for paging:
 Every address generated by the CPU is divided into two parts: a page number (p)
and a page offset (d).
 The page number is used as an index into the page table.
 The page table contains the base address of each page in physical memory.
 This base address is combined with the page offset to define the physical memory
address, which is sent to the memory unit.
 The page size is defined by the hardware and is typically a power of 2, ranging
from 512 bytes to 16 MB per page.
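Because the page size is a power of 2, the split into page number and offset is just a bit shift and a mask. A sketch (the page size and page-table contents are hypothetical):

```python
PAGE_SIZE = 4096            # 2**12, a hypothetical page size
OFFSET_BITS = 12

def split(logical: int):
    """Split a logical address into page number p and offset d."""
    p = logical >> OFFSET_BITS        # high-order bits: page number
    d = logical & (PAGE_SIZE - 1)     # low-order bits: offset within the page
    return p, d

page_table = {0: 5, 1: 6, 2: 1, 3: 2}   # hypothetical page -> frame map

def to_physical(logical: int) -> int:
    p, d = split(logical)
    frame = page_table[p]             # index the page table with p
    return frame * PAGE_SIZE + d      # frame base address + offset

assert split(8202) == (2, 10)                # 8202 = 2*4096 + 10
assert to_physical(8202) == 1 * 4096 + 10    # page 2 maps to frame 1
```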
Hardware support for paging
 Each OS has its own method for storing page tables.
 Most systems allocate a page table for each process.
 A pointer to the page table is stored, along with the other register values, in
the PCB.
 When the dispatcher loads a process, it must reload these registers and define the
correct hardware page-table values from the stored page table.
 Keeping the page table in dedicated registers is satisfactory only if the page
table is reasonably small (e.g., 256 entries).
 Most computers allow far more entries; in such cases, instead of fast registers,
the page table is kept in main memory and a page-table base register (PTBR) points
to it.
 The problem with this scheme is the time required to access a user memory
location: two memory accesses are needed, one for the page-table entry and one for
the data.
Hardware support for paging
 The standard remedy is a special, small, fast-lookup hardware cache called the
translation look-aside buffer (TLB).
 The TLB is associative, high-speed memory.
 Each entry in the TLB consists of two parts: a key and a value.
 When the associative memory is presented with an item, it is compared with all
keys simultaneously. If the item is found, the corresponding value field is
returned.
 The TLB contains only a few of the page-table entries. When a logical address is
generated by the CPU, its page number is presented to the TLB. If the page number
is found (a TLB hit), its frame number is immediately available and is used to
access memory.
 If the page number is not in the TLB (a TLB miss), a memory reference to the page
table must be made. Once the frame number is obtained, we use it to access memory,
and we also add the page number and frame number to the TLB so they will be found
quickly on the next reference.
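The hit/miss flow can be sketched as follows (the capacity, LRU replacement policy, and page-table contents are hypothetical choices for illustration; real TLB replacement policies vary):

```python
from collections import OrderedDict

page_table = {0: 5, 1: 6, 2: 1, 3: 2}   # hypothetical page -> frame map (in memory)

class TLB:
    """A tiny TLB with LRU replacement."""
    def __init__(self, capacity=2):
        self.capacity = capacity
        self.entries = OrderedDict()     # key: page number, value: frame number

    def lookup(self, page):
        if page in self.entries:         # in hardware, all keys compared at once
            self.entries.move_to_end(page)
            return self.entries[page]    # TLB hit
        return None                      # TLB miss

    def insert(self, page, frame):
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)   # evict the least recently used entry
        self.entries[page] = frame

def access(tlb, page):
    frame = tlb.lookup(page)
    if frame is None:                    # miss: reference the in-memory page table
        frame = page_table[page]
        tlb.insert(page, frame)          # cache it for the next reference
    return frame

tlb = TLB()
assert access(tlb, 2) == 1               # miss: fetched from the page table
assert tlb.lookup(2) == 1                # now a hit
```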
Protection
 Memory protection is provided by protection bits associated with each frame.
These bits are kept in the page table.
 One bit defines a page as read-write or read-only.
 The protection bits are checked to verify that no writes are being made to
read-only pages.
 One additional bit, the valid-invalid bit, is attached to each page-table entry.
 When the bit is set to valid, the associated page is in the process's logical
address space and is thus a legal page.
 When the bit is set to invalid, the page is not in the process's logical address
space.
 Illegal addresses are trapped using the valid-invalid bit.
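A sketch of the check performed on every memory reference (the entry layout here is a hypothetical illustration):

```python
# Hypothetical page table: each entry holds a frame number plus
# a read-write bit and a valid-invalid bit.
page_table = {
    0: {"frame": 3, "writable": True,  "valid": True},
    1: {"frame": 7, "writable": False, "valid": True},   # read-only page
    2: {"frame": 0, "writable": False, "valid": False},  # not in the address space
}

def check_access(page, write=False):
    """Return the frame number, or trap on a protection violation."""
    entry = page_table.get(page)
    if entry is None or not entry["valid"]:
        raise MemoryError("trap: invalid page reference")
    if write and not entry["writable"]:
        raise MemoryError("trap: write to a read-only page")
    return entry["frame"]

assert check_access(0, write=True) == 3
assert check_access(1) == 7          # reading a read-only page is legal
```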
Shared pages
 An advantage of paging is that it makes it possible to share pages of common code.
 Reentrant code (pure code) is code that is not self-modifying.
 Reentrant code never changes during execution.
 Two or more processes can therefore execute the same code at the same time.
 Each process has its own copy of the registers and its own data storage to hold
the data for its execution.
 Only one copy of a shared program (for example, a text editor) needs to be kept in
physical memory.
 Each process's page table maps onto the same physical copy of the code, but the
data pages are mapped onto different frames.
Advantages of Paging
 The memory-management algorithm is simple to use.
 No external fragmentation.
 Swapping is easy between equal-sized pages and frames.
Disadvantages of Paging
 May cause internal fragmentation.
 The memory-management algorithm is more complex.
 Page tables consume additional memory.
 Multi-level paging may lead to memory-reference overhead.