Virtual Memory: conceptual separation of user logical memory from physical memory
Only part of the program needs to be in memory for execution
Logical address space can therefore be much larger than physical address space
Allows address spaces to be shared by several processes
Allows for more efficient process creation
More programs running concurrently
Less I/O needed to load or swap processes
Virtual Address Space
Virtual address space – logical view of how a process is stored in memory
Usually starts at address 0, with contiguous addresses until the end of the space
Meanwhile, physical memory is organized in page frames
MMU must map logical to physical
Virtual memory can be implemented via demand paging or demand segmentation
Virtual Address Space (cont.)
Usually design the logical address space so the stack
starts at the max logical address and grows "down"
while the heap grows "up"
Maximizes address space use
Unused address space between the two is a hole
No physical memory is needed until the heap or stack
grows into a given new page
Enables sparse address spaces with holes left for
growth, dynamically linked libraries, etc.
System libraries are shared via mapping into each process's virtual address space
Shared memory is implemented by mapping pages read-write
into multiple processes' virtual address spaces
Pages can be shared during fork(), speeding process creation
System libraries can be shared by several processes through mapping of the shared
object into a virtual address space.
Virtual Memory allows one process to create a region of memory that it can share
with another process through the use of Shared Memory.
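The sharing described above can be illustrated from user space. A minimal sketch using Python's `multiprocessing.shared_memory` module as a stand-in for OS-level page sharing (the region name is generated by the library; the value written is arbitrary):

```python
# Two processes map the same physical pages into their own
# virtual address spaces; a write in one is visible in the other.
from multiprocessing import Process, shared_memory

def child(name):
    # Attach to the region the parent created.
    shm = shared_memory.SharedMemory(name=name)
    shm.buf[0] = 42          # this write is visible to the parent
    shm.close()

if __name__ == "__main__":
    shm = shared_memory.SharedMemory(create=True, size=4096)  # one page
    p = Process(target=child, args=(shm.name,))
    p.start(); p.join()
    print(shm.buf[0])        # 42: both processes saw the same page
    shm.close()
    shm.unlink()
```

Under the hood this maps a shared object (e.g. POSIX shared memory) into both virtual address spaces, exactly as the slide describes for shared libraries.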
Demand Paging
Could bring the entire process into memory at load time, or bring a page into memory only when it is needed:
Less I/O needed, no unnecessary I/O
Less memory needed
Similar to a paging system with swapping (diagram on right)
Page is needed => reference to it:
invalid reference => abort
not in memory => bring to memory
Lazy swapper – never swaps a page into memory unless that page will be needed
A swapper that deals with pages is a pager
Valid–Invalid Bit
With each page table entry a valid–invalid
bit is associated (1: in memory, 0: not in memory)
Initially the valid–invalid bit is set to 0 on all entries
During address translation, if the valid–invalid
bit in the page table entry is 0, a page fault occurs
Example of a page table snapshot:
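The valid–invalid mechanism can be sketched as a toy page table (the page-to-frame mapping below is invented purely for illustration; 1 marks a resident page, 0 a page not in memory):

```python
# Toy page table: page number -> (frame number, valid-invalid bit).
page_table = {0: (5, 1),     # page 0 resident in frame 5
              1: (None, 0),  # page 1 not in memory
              2: (9, 1)}     # page 2 resident in frame 9

def translate(page):
    frame, valid = page_table.get(page, (None, 0))
    if not valid:
        # In hardware this traps to the OS as a page fault.
        raise LookupError(f"page fault on page {page}")
    return frame

print(translate(0))   # 5
try:
    translate(1)
except LookupError as e:
    print(e)          # page fault on page 1
```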
STEPS IN HANDLING A PAGE FAULT
1. The process has touched a page not currently in memory.
2. Check an internal table for the target process to determine whether the reference was valid (done in hardware).
3. If the reference is valid but the page is not resident, get it from secondary storage.
4. Find a free frame: a frame of physical memory not currently in use. (May need to free up a page.)
5. Schedule a disk operation to read the desired page into the newly allocated frame.
6. When the read completes, modify the page table to show the page is now resident.
7. Restart the instruction that failed.
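The seven steps above can be condensed into one handler. A simplified sketch in which `disk`, `page_table`, `memory`, and `free_frames` are toy stand-ins for real OS state:

```python
# Simplified page-fault handler following the steps above.
disk = {7: "contents of page 7"}   # backing store (toy)
page_table = {}                    # page -> frame
memory = {}                        # frame -> contents
free_frames = [0, 1, 2]            # pool of unused frames

def handle_fault(page):
    if page not in disk:                 # step 2: invalid reference
        raise MemoryError("abort: invalid reference")
    frame = free_frames.pop()            # step 4: find a free frame
    memory[frame] = disk[page]           # step 5: read page from disk
    page_table[page] = frame             # step 6: mark page resident
    # step 7: the faulting instruction would now be restarted

handle_fault(7)
print(page_table[7] in memory)   # True: page 7 is now resident
```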
Approach: if no physical frame is free, find
one not currently being used and free it.
Steps to follow are:
1. Find the requested page on disk.
2. Find a free frame.
a. If there is a free frame, use it.
b. Otherwise, select a victim page.
c. Write the victim page to disk.
3. Read the new page into the freed frame.
Change the page and frame tables.
4. Restart the user process.
Hardware requirements include a "dirty" or modify bit, so only victim pages that have actually been modified need to be written back to disk.
When we over-allocate memory, we need to push out something already in memory.
Over-allocation may occur when programs need to fault in more pages than there are physical
frames to handle.
Page Replacement Algorithms
When memory is over-allocated, we can either swap out some process or overwrite some page already in memory.
Which pages should we replace? The goal here is to minimize the number of page faults.
FIRST IN FIRST OUT (FIFO)
Replace the page that has been in memory the longest.
Conceptually easy to implement: either use a time stamp on pages, or organize them in a queue.
(The queue is by far the easier of the two methods.)
OPTIMAL REPLACEMENT
This is the replacement policy that results in the lowest page-fault rate.
Algorithm: replace the page that will not be used again for the longest period of time.
Impossible to achieve in practice; it requires a crystal ball.
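Although unrealizable online, the optimal policy is easy to simulate offline, where the "crystal ball" is simply a look-ahead into the rest of the reference string. A sketch, reusing the same reference string as above for comparison:

```python
def optimal_faults(refs, nframes):
    """Belady's optimal policy: evict the page whose next use is farthest away."""
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            def next_use(p):
                # "Crystal ball": distance to the page's next reference.
                future = refs[i + 1:]
                return future.index(p) if p in future else float("inf")
            frames.discard(max(frames, key=next_use))
        frames.add(page)
    return faults

print(optimal_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 7 faults
```

With 3 frames it incurs only 7 faults on this string, versus 9 for FIFO, which is why optimal serves as the yardstick for other algorithms.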
LEAST RECENTLY USED (LRU)
Replace the page that has not been used for the longest period of time.
The results of this method are considered good; the difficulty lies in making it work efficiently. Two implementations:
Time stamp on pages – records when the page was last touched.
Page stack – pull the touched page out and put it on top; the least recently used page sinks to the bottom.
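The page-stack idea maps naturally onto an ordered dictionary: touching a page moves it to the "top" end, and the eviction victim is whatever has sunk to the "bottom". A sketch, again on the same reference string:

```python
from collections import OrderedDict

def lru_faults(refs, nframes):
    """Count page faults under LRU using the page-stack idea."""
    stack, faults = OrderedDict(), 0
    for page in refs:
        if page in stack:
            stack.move_to_end(page)        # pull out and put on top
        else:
            faults += 1
            if len(stack) == nframes:
                stack.popitem(last=False)  # bottom = least recently used
            stack[page] = True
    return faults

print(lru_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 10 faults
```

Note the contrast with FIFO: LRU must update its ordering on every reference, including hits, which is exactly the overhead that makes a pure software implementation expensive in practice.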