Modern systems provide an abstraction of main memory known as virtual memory (VM), which gives each process a large, uniform, and private address space. VM uses main memory efficiently by treating it as a cache for an address space stored on disk, keeping only the active areas in main memory and transferring data back and forth between disk and memory as needed.
10.1 Physical and Virtual Addressing
The main memory of a computer system is organized as an array of M contiguous byte-sized cells. Each byte has a unique physical address (PA) in the set {0, 1, 2, ..., M-1}.
With virtual addressing, the CPU accesses main memory by generating a virtual address, which is converted to the appropriate physical address before being sent to memory. A memory management unit (MMU) on the CPU chip translates virtual addresses on the fly, using a lookup table stored in main memory whose contents are managed by the operating system.
10.2 Address Spaces
An address space is an ordered set of nonnegative integer addresses {0,1,2,...}. If the integers in the address space are consecutive, then we say that it is a linear address space.
The concept of an address space is important because it makes a clean distinction between data objects (bytes) and their attributes (addresses). This is the basic idea of virtual memory: each byte of main memory has a virtual address chosen from the virtual address space and a physical address chosen from the physical address space.
10.3 VM as a tool for caching
Conceptually, a virtual memory is organized as an array of N contiguous byte-sized cells stored on disk. VM partitions the virtual memory into fixed-size blocks called virtual pages (VPs). Similarly, physical memory is partitioned into physical pages (PPs) of the same size.
The set of virtual pages is partitioned into three disjoint subsets:
1. Unallocated. Unallocated pages do not have any data associated with them, and thus do not occupy any space on disk.
2. Cached. Allocated pages that are currently cached in physical memory.
3. Uncached. Allocated pages that are not cached in physical memory.
10.3.1 DRAM Cache Organization
Due to the large miss penalty, virtual pages tend to be large, typically 4 to 8 KB. DRAM caches are fully associative, that is, any virtual page can be placed in any physical page.
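The page size determines how a virtual address splits into a virtual page number (which selects a page) and an offset within that page. A minimal sketch in Python, assuming a 4 KB page size (the constants and function name are illustrative):

```python
PAGE_SIZE = 4096     # 4 KB, within the 4-8 KB range above
OFFSET_BITS = 12     # log2(4096)

def split_va(va):
    """Return (virtual page number, page offset) for a virtual address."""
    vpn = va >> OFFSET_BITS        # high bits select the virtual page
    vpo = va & (PAGE_SIZE - 1)     # low 12 bits are the offset in the page
    return vpn, vpo

print(split_va(0x8048ABC))  # (0x8048, 0xABC)
```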
10.3.2 Page Tables
The virtual memory system uses a data structure stored in physical memory known as a page table that maps virtual pages to physical pages. The address translation hardware reads the page table each time it converts a virtual address to a physical address. The operating system is responsible for maintaining the contents of the page table and transferring pages back and forth between disk and DRAM.
A page table is an array of page table entries (PTEs). Each page in the virtual address space has a PTE at a fixed offset in the page table.
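A lookup in this structure can be modeled as indexing an array of PTEs by virtual page number. This toy model is a sketch, not a real hardware format; the PTE fields and the `translate` helper are assumptions:

```python
class PTE:
    """Toy page table entry: a valid bit plus a physical page number."""
    def __init__(self, valid=False, ppn=None):
        self.valid = valid
        self.ppn = ppn

page_table = [PTE() for _ in range(8)]   # one PTE per virtual page
page_table[2] = PTE(valid=True, ppn=5)   # VP 2 is cached in PP 5

def translate(vpn, offset, page_size=4096):
    pte = page_table[vpn]                # PTE found at a fixed offset by VPN
    if not pte.valid:
        raise RuntimeError("page fault") # page not resident in DRAM
    return pte.ppn * page_size + offset  # physical address
```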
10.3.3 Page Hits
10.3.4 Page Faults
A DRAM cache miss is known as a page fault, which triggers a page fault exception. The exception invokes a page fault exception handler in the kernel, which selects a victim page, copies that page back to disk if it has been modified, and updates the page table entries. The handler then returns and restarts the faulting instruction.
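The steps above can be sketched as a toy simulation. The two-page capacity, the victim-selection rule, and the data structures are all invented for illustration; a real kernel's policies are far more involved:

```python
CAPACITY = 2                     # toy number of physical pages
resident = {}                    # vpn -> {"data": ..., "dirty": bool}
disk = {0: "a", 1: "b", 2: "c"}  # backing store for every virtual page

def handle_fault(vpn):
    """Page fault handler: evict a victim if needed, then page in."""
    if len(resident) >= CAPACITY:
        victim = next(iter(resident))                # trivial victim choice
        if resident[victim]["dirty"]:
            disk[victim] = resident[victim]["data"]  # write back if modified
        del resident[victim]                         # invalidate victim's PTE
    resident[vpn] = {"data": disk[vpn], "dirty": False}

def access(vpn, write=False, value=None):
    if vpn not in resident:      # DRAM miss: page fault
        handle_fault(vpn)        # handler runs, then the access is retried
    if write:
        resident[vpn]["data"] = value
        resident[vpn]["dirty"] = True
    return resident[vpn]["data"]
```

Writing to a page marks it dirty, so when that page is later chosen as a victim its modified contents are copied back to disk before the frame is reused.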
Demand paging: the strategy of waiting until the last moment, when a miss occurs, to swap in a page.
10.3.5 Allocating Pages
10.3.6 Locality to the Rescue Again
The principle of locality promises that at any point in time programs will tend to work on a smaller set of active pages known as the working set or resident set.
As long as our programs have good temporal locality, virtual memory systems work quite well.
10.4 VM as a Tool for Memory Management
Operating systems provide a separate page table, and thus a separate virtual address space, for each process. Multiple virtual pages can be mapped to the same shared physical page.
10.4.1 Simplifying Linking
A separate address space allows each process to use the same basic format for its memory image, regardless of where the code and data actually reside in physical memory.
10.4.2 Simplifying Sharing
In general, each process has its own private code, data, heap, and stack areas that are not shared with any other process. However, in some instances it is desirable for processes to share code and data. The operating system can arrange for multiple processes to share a single copy of such code by mapping the appropriate virtual pages in different processes to the same physical pages.
10.4.3 Simplifying Memory Allocation
The operating system allocates an appropriate number, say k, of contiguous virtual memory pages and maps them to k arbitrary physical pages located anywhere in physical memory. Because of the way page tables work, there is no need for the operating system to locate k contiguous pages of physical memory; the pages can be scattered randomly in physical memory.
10.4.4 Simplifying Loading
The .text and .data sections in ELF executables are contiguous. To load these sections into a newly created process, the Linux loader allocates a contiguous chunk of virtual pages starting at address 0x08048000, marks them as invalid, and points their page table entries to the appropriate locations in the object file.
The loader never actually copies any data from disk into memory. The data is paged in automatically and on demand by the virtual memory system the first time each page is referenced.
This notion of mapping a set of contiguous virtual pages to an arbitrary location in an arbitrary file is known as memory mapping.
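Python's mmap module exposes this mechanism to user code (an illustration, not the book's example): the file's contents appear in the process's address space, and pages are brought in on first touch rather than copied up front.

```python
import mmap
import os
import tempfile

# Create a small file to map (the filename and contents are arbitrary).
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello, mapped world")
    path = f.name

with open(path, "rb") as f:
    m = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    data = bytes(m[:5])   # first access demand-pages the file data in
    m.close()
os.remove(path)
print(data)  # b'hello'
```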
10.5 VM as a Tool for Memory Protection
Since the address translation hardware reads a PTE each time the CPU generates an address, it is straightforward to control access to the contents of a virtual page by adding some additional permission bits to the PTE.
If an instruction violates these permissions, then the CPU triggers a general protection fault that transfers control to an exception handler in the kernel. Unix shells typically report this exception as a "segmentation fault".
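A toy model of this check (the bit names and encoding are assumptions for illustration, not a real hardware layout): every permission an access requests must be present in the PTE's permission bits, otherwise the access faults.

```python
READ, WRITE, EXEC = 0b100, 0b010, 0b001  # illustrative permission bits

def check_access(pte_perms, requested):
    """Raise if the access requests any permission the PTE does not grant."""
    if requested & ~pte_perms:
        raise PermissionError("segmentation fault")
    return True

check_access(READ | WRITE, READ)   # a legal read succeeds
# A write to a read-only page, e.g. check_access(READ, WRITE),
# would raise PermissionError here.
```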
10.6 Address Translation