5.13. Portability

Everything discussed in this chapter up to this section has been part of the machine-independent data structures and algorithms. These parts of the virtual-memory system require little change when FreeBSD is ported to a new architecture. This section describes the machine-dependent parts of the virtual-memory system: the parts that must be written as part of a port of FreeBSD to a new architecture. The machine-dependent parts of the virtual-memory system control the hardware memory-management unit (MMU). The MMU implements address translation and access control when virtual memory is mapped onto physical memory.

One common MMU design uses memory-resident forward-mapped page tables. These page tables are large contiguous arrays indexed by the virtual address. There is one element, or page-table entry, in the array for each virtual page in the address space. This element contains the physical page to which the virtual page is mapped, as well as access permissions, status bits telling whether the page has been referenced or modified, and a bit showing whether the entry contains valid information. For a 4-Gbyte address space with 4-Kbyte virtual pages and a 32-bit page-table entry, 1 million entries, or 4 Mbyte, would be needed to describe an entire address space. Since most processes use little of their address space, most of the entries would be invalid, and allocating 4 Mbyte of physical memory per process would be wasteful. Thus, most page-table structures are hierarchical, using two or more levels of mapping. With a hierarchical structure, different portions of the virtual address are used to index the various levels of the page tables. The intermediate levels of the table contain the addresses of the next lower level of the page table. The kernel can mark as unused large contiguous regions of an address space by inserting invalid entries at the higher levels of the page table, eliminating the need for invalid page descriptors for each individual unused virtual page.
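
To make the page-table-entry format concrete, the following C fragment sketches a 32-bit entry. The bit positions follow the i386 conventions, but the names, the exact layout, and the pte_to_pa() helper are illustrative assumptions rather than any particular architecture's definition.

#include <stdint.h>

#define PG_V     0x001          /* entry contains valid information */
#define PG_RW    0x002          /* write access permitted */
#define PG_A     0x020          /* page has been referenced (accessed) */
#define PG_M     0x040          /* page has been modified (dirty) */
#define PG_FRAME 0xfffff000     /* physical page number, 4-Kbyte pages */

typedef uint32_t pt_entry_t;

/* Combine a valid entry's physical page number with the page offset. */
static inline uint32_t
pte_to_pa(pt_entry_t pte, uint32_t va)
{
        return ((pte & PG_FRAME) | (va & 0xfff));
}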

This hierarchical page-table structure requires the hardware to make frequent memory references to translate a virtual address. To speed the translation process, most page-table-based MMUs also have a small, fast, fully associative hardware cache of recent address translations, a structure known commonly as a translation lookaside buffer (TLB). When a memory reference is translated, the TLB is first consulted and, only if a valid entry is not found there, the page-table structure for the current process is traversed. Because most programs exhibit spatial locality in their memory-access patterns, the TLB does not need to be large; many are as small as 128 entries.

As address spaces grew beyond 32 to 48 and, more recently, 64 bits, simple indexed data structures became unwieldy, with three or more levels of tables required to handle address translation. A response to this page-table growth is the inverted page table, also known as the reverse-mapped page table. In an inverted page table, the hardware still maintains a memory-resident table, but that table contains one entry per physical page and is indexed by physical address instead of by virtual address. An entry contains the virtual address to which the physical page is currently mapped, as well as protection and status attributes. The hardware does virtual-to-physical address translation by computing a hash function on the virtual address to select an entry in the table. The system handles collisions by linking together table entries and making a linear search of this chain until it finds the matching virtual address.
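
The fragment below sketches such a hashed lookup. It assumes a single global address space, a trivial hash function, and invented names (ipt_lookup(), ie_vpn, and so on); a real design would also tag each entry with an address-space identifier.

#include <stdint.h>

#define NIPT_HASH 1024                  /* number of hash buckets (assumed) */
#define IPT_HASH(vpn) ((vpn) & (NIPT_HASH - 1))

struct ipt_entry {
        uint32_t ie_vpn;        /* virtual page number mapped here */
        uint32_t ie_attrs;      /* protection and status attributes */
        int32_t  ie_next;       /* next entry in the hash chain; -1 ends it */
};

extern struct ipt_entry ipt[];          /* one entry per physical page */
extern int32_t ipt_hash[NIPT_HASH];     /* chain heads indexed by hash */

/* Return the physical page number for va, or -1 on a translation miss. */
static int32_t
ipt_lookup(uint32_t va)
{
        uint32_t vpn = va >> 12;
        int32_t i;

        for (i = ipt_hash[IPT_HASH(vpn)]; i != -1; i = ipt[i].ie_next)
                if (ipt[i].ie_vpn == vpn)
                        return (i);     /* the table index is the page number */
        return (-1);
}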

The advantages of an inverted page table are that the size of the table is proportional to the amount of physical memory and that only one global table is needed, rather than one table per process. A disadvantage to this approach is that there can be only one virtual address mapped to any given physical page at any one time. This limitation makes virtual-address aliasing (having multiple virtual addresses for the same physical page) difficult to handle. As with the forward-mapped page table, a hardware TLB is typically used to speed the translation process.

A final common MMU organization consists of just a TLB. This architecture is the simplest hardware design. It gives the software maximum flexibility by allowing it to manage translation information in whatever structure it desires.

Often, a port to another architecture with a similar memory-management organization can be used as a starting point for a new port. The PC architecture uses the typical two-level page-table organization shown in Figure 5.15. An address space is broken into 4-Kbyte virtual pages, with each page identified by a 32-bit entry in the page table. Each page-table entry contains the physical page number assigned to the virtual page, the access permissions allowed, modify and reference information, and a bit showing that the entry contains valid information. The 4 Mbyte of page-table entries are likewise divided into 4-Kbyte page-table pages, each of which is described by a single 32-bit entry in the directory table. Directory-table entries are nearly identical to page-table entries: They contain access bits, modify and reference bits, a valid bit, and the physical page number of the page-table page described. One 4-Kbyte page (1024 directory-table entries) covers the maximum-sized 4-Gbyte address space. The CR3 hardware register contains the physical address of the directory table for the currently active process.

Figure 5.15. Two-level page-table organization. Key: V, page-valid bit; M, page-modified bit; R, page-referenced bit; ACC, page-access permissions.


In Figure 5.15, translation of a virtual address to a physical address during a CPU access proceeds as follows (a C sketch of these steps appears after the list):

1. The 10 most significant bits of the virtual address are used to index into the active directory table.

2. If the selected directory-table entry is valid and the access permissions grant the access being made, the next 10 bits of the virtual address are used to index into the page-table page referenced by the directory-table entry.

3. If the selected page-table entry is valid and the access permissions match, the final 12 bits of the virtual address are combined with the physical page referenced by the page-table entry to form the physical address of the access.
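
The following fragment restates these three steps in C. It assumes that the page tables can be addressed directly, as when physical memory is identity-mapped, and it omits the permission checks; the PG_* values repeat those of the earlier sketch.

#include <stdint.h>

#define PG_V     0x001          /* as in the earlier sketch */
#define PG_FRAME 0xfffff000

typedef uint32_t pd_entry_t, pt_entry_t;

/* Return 0 and fill in *pa on success; return -1 on an invalid entry. */
static int
translate(pd_entry_t *dirtab, uint32_t va, uint32_t *pa)
{
        pd_entry_t pde;
        pt_entry_t pte, *ptp;

        pde = dirtab[va >> 22];                 /* step 1: top 10 bits */
        if ((pde & PG_V) == 0)
                return (-1);
        ptp = (pt_entry_t *)(uintptr_t)(pde & PG_FRAME);
        pte = ptp[(va >> 12) & 0x3ff];          /* step 2: next 10 bits */
        if ((pte & PG_V) == 0)
                return (-1);
        *pa = (pte & PG_FRAME) | (va & 0xfff);  /* step 3: final 12 bits */
        return (0);
}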

The Role of the pmap Module

The machine-dependent code describes how the physical mapping is done between the user-process and kernel virtual addresses and the physical addresses of main memory. This mapping function includes management of access rights in addition to address translation. In FreeBSD, the physical-mapping (pmap) module manages machine-dependent translation and access tables that are used either directly or indirectly by the memory-management hardware. For example, on the PC, the pmap maintains the memory-resident directory and page tables for each process, as well as for the kernel. The machine-dependent state required to describe the translation and access rights of a single page is often referred to as a mapping or mapping structure.

The FreeBSD pmap interface is nearly identical to that in Mach 3.0, and it shares many design characteristics. The pmap module is intended to be logically independent of the higher levels of the virtual-memory system. The interface deals strictly in machine-independent page-aligned virtual and physical addresses and in machine-independent protections. The machine-independent page size may be a multiple of the architecture-supported page size. Thus, pmap operations must be able to affect more than one physical page per logical page. The machine-independent protection is a simple encoding of read, write, and execute permission bits. The pmap must map all possible combinations into valid architecture-specific values.
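
As an example of the last requirement, a pmap for the PC might encode the machine-independent protections as sketched below. The helper name is an invention, and the folding of execute permission into read reflects the fact that the i386 MMU provides no separate execute bit.

#include <stdint.h>

#define VM_PROT_READ    0x01    /* values as in FreeBSD's <vm/vm.h> */
#define VM_PROT_WRITE   0x02
#define VM_PROT_EXECUTE 0x04
#define PG_V            0x001   /* as in the earlier sketches */
#define PG_RW           0x002

static uint32_t
prot_to_pte_bits(int prot)
{
        uint32_t bits = 0;

        if (prot & (VM_PROT_READ | VM_PROT_EXECUTE))
                bits |= PG_V;           /* any valid i386 mapping is readable */
        if (prot & VM_PROT_WRITE)
                bits |= PG_V | PG_RW;   /* writable implies valid */
        return (bits);
}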

A process's pmap is considered to be a cache of mapping information kept in a machine-dependent format. As such, it does not need to contain complete state for all valid mappings. Mapping state is the responsibility of the machine-independent layer. With one exception, the pmap module may throw away mapping state at its discretion to reclaim resources. The exception is wired mappings, which should never cause a fault that reaches the machine-independent vm_fault() routine. Thus, state for wired mappings must be retained in the pmap until it is removed explicitly.

In general, pmap routines may act either on a set of mappings defined by a virtual address range or on all mappings for a particular physical address. Being able to act on individual or all virtual mappings for a physical page requires that the mapping information maintained by the pmap module be easily found by both virtual and physical address. For architectures such as the PC that support memory-resident page tables, the virtual-to-physical, or forward lookup, may be a simple emulation of the hardware page-table traversal. Physical-to-virtual, or reverse, lookup uses a list of pv_entry structures, described in the next subsection, to find all the page-table entries referencing a page. The list may contain multiple entries only if virtual-address aliasing is allowed.

There are two strategies that can be used for management of pmap memory resources, such as user-directory or page-table memory. The traditional and easiest approach is for the pmap module to manage its own memory. Under this strategy, the pmap module can grab a fixed amount of wired physical memory at system boot time, map that memory into the kernel's address space, and allocate pieces of the memory as needed for its own data structures. The primary benefit is that this approach isolates the pmap module's memory needs from those of the rest of the system and limits the pmap module's dependencies on other parts of the system. This design is consistent with a layered model of the virtual-memory system in which the pmap is the lowest, and hence self-sufficient, layer.

The disadvantage is that this approach requires the duplication of many of the memory-management functions. The pmap module has its own memory allocator and deallocator for its private heap, a heap that is statically sized and cannot be adjusted for varying systemwide memory demands. For an architecture with memory-resident page tables, it must keep track of noncontiguous chunks of processes' page tables, because a process may populate its address space sparsely. Handling this requirement entails duplicating much of the standard list-management code, such as that used by the vm_map code.

An alternative approach, used by the PC, is to use the higher-level virtual-memory code recursively to manage some pmap resources. Here, the 4-Kbyte directory table for each user process is mapped into the address space of the kernel as part of setting up the process and remains resident until the process exits. While a process is running, its page-table entries are mapped into a virtually contiguous 4-Mbyte array of page-table entries in the kernel's address space. This organization leads to an obscure memory-saving optimization, exploited in the PC pmap module, where the kernel's page-table page describing the 4-Mbyte user page-table range can double as the user's directory table. The kernel also maintains alternate maps to hold individual page-table pages of other nonrunning processes if it needs to access their address space.
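
The fragment below sketches the trick, modeled on the i386 pmap's PTmap and vtopte() but with an assumed slot number and with the PG_* values and types of the earlier sketches. Because the hardware interprets the directory as just another page-table page, a single self-referencing entry exposes every page table through one fixed virtual window.

#define PTDPTDI 0x33f                            /* assumed directory slot */
#define PTmap   ((pt_entry_t *)(PTDPTDI << 22))  /* 4-Mbyte window of PTEs */

/* The page-table entry for any virtual address, with no table walk. */
#define vtopte(va) (&PTmap[(uint32_t)(va) >> 12])

/* At pmap creation: make the directory double as a page-table page. */
static void
pmap_install_recursive_entry(pd_entry_t *pdir, uint32_t pdir_pa)
{
        pdir[PTDPTDI] = pdir_pa | PG_V | PG_RW;
}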

Using the same page-allocation routines as all the other parts of the system ensures that physical memory is allocated only when needed and from the systemwide free-memory pool. Page tables and other pmap resources also can be allocated from pageable kernel memory. This approach easily and efficiently supports large sparse address spaces, including the kernel's own address space.

The primary drawback is that this approach violates the independent nature of the interface. In particular, the recursive structure leads to deadlock problems with global multiprocessor spin locks that can be held while the kernel is calling a pmap routine.

The pmap data structures are contained in the machine-dependent include directory in the file pmap.h. Most of the code for these routines is in the machine-dependent source directory in the file pmap.c. The main tasks of the pmap module are these:

  • System initialization and startup (pmap_bootstrap(), pmap_init(), pmap_growkernel())

  • Allocation and deallocation of mappings of physical to virtual pages (pmap_enter(), pmap_remove(), pmap_qenter(), pmap_qremove())

  • Change of access protections and other attributes of mappings (pmap_change_wiring(), pmap_page_protect(), pmap_protect())

  • Maintenance of physical page-usage information (pmap_clear_modify(), pmap_clear_reference(), pmap_is_modified(), pmap_ts_referenced())

  • Initialization of physical pages (pmap_copy_page(), pmap_zero_page())

  • Management of internal data structures (pmap_pinit(), pmap_release())

Each of these tasks is described in the following subsections.

Initialization and Startup

The first step in starting up the system is for the loader to bring the kernel image from a disk or the network into the physical memory of the machine. The kernel load image looks much like that of any other process; it contains a text segment, an initialized data segment, and an uninitialized data segment. The loader places the kernel contiguously into the beginning of physical memory. Unlike a user process that is demand paged into memory, the text and data for the kernel are read into memory in their entirety. Following these two segments, the loader zeros an area of memory equal to the size of the kernel's uninitialized memory segment. After loading the kernel, the loader passes control to the starting address given in the kernel executable image. When the kernel begins executing, it is executing with the MMU turned off. Consequently, all addressing is done using direct physical addresses.

The first task undertaken by the kernel is to set up the kernel pmap and any other data structures that are necessary to describe the kernel's virtual address space. On the PC, the initial setup includes allocating and initializing the directory and page tables that map the statically loaded kernel image and memory-mapped I/O address space, allocating a fixed amount of memory for kernel page-table pages, allocating and initializing the user structure and kernel stack for the initial process, reserving special areas of the kernel's address space, and initializing assorted critical pmap-internal data structures. When done, it is possible to enable the MMU. Once the MMU is enabled, the kernel begins running in the context of process zero.

Once the kernel is running in its virtual address space, it proceeds to initialize the rest of the system. It determines the size of the physical memory, then calls pmap_bootstrap() and vm_page_startup() to set up the initial pmap data structures, to allocate the vm_page structures, and to create a small, fixed-size pool of memory, which the kernel memory allocators can use so that they can begin responding to memory allocation requests. Next it makes a call to set up the machine-independent portion of the virtual-memory system. It concludes with a call to pmap_init(), which allocates all resources necessary to manage multiple user address spaces and synchronizes the higher-level kernel virtual-memory data structures with the kernel pmap.

Pmap_init() allocates a minimal amount of wired memory to use for kernel page-table pages. The page-table space is expanded dynamically by the pmap_growkernel() routine as it is needed while the kernel is running. Once allocated, it is never freed. The limit on the size of the kernel's address space is selected at boot time. On the PC, the kernel is typically given a maximum of 1 Gbyte of address space.

In 4.4BSD, the memory managed by the buffer cache was separate from the memory managed by the virtual-memory system. Since all the virtual-memory pages were used to map process regions, it was sensible to create an inverted page table. This table was an array of pv_entry structures. Each pv_entry described a single address translation and included the virtual address, a pointer to the associated pmap structure for that virtual address, a link for chaining together multiple entries mapping this physical address, and additional information specific to entries mapping page-table pages. Building a dedicated table was sensible, since all valid pages were referenced by a pmap, yet few had multiple mappings.

With the merger of the buffer cache into the virtual-memory system in FreeBSD, many pages of memory are used to cache file data that is not mapped into any process address space. Thus, preallocating a table of pv_entry structures is wasteful, since many of them would go unused. So, FreeBSD allocates pv_entry structures on demand as pages are mapped into a process address space.

Figure 5.16 shows the pv_entry references for a set of pages that have a single mapping. The purpose of the pv_entry structures is to identify the address space that has the page mapped. The machine-dependent part of each vm_page structure contains the head of a list of pv_entry structures and a count of the number of entries on the list. In Figure 5.16, the object is using pages 5, 18, and 79. The list heads in the machine-dependent structures of these vm_page structures would each point to a single pv_entry structure labeled in the figure with the number of the vm_page structure that references them. Not shown in Figure 5.16 is that each physical map structure also maintains a list of all the pv_entry structures that reference it.
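
The following is a simplified rendering of the structures behind Figure 5.16. The field names approximate those in FreeBSD's machine-dependent pmap.h, but they should be read as assumptions; vm_offset_t and struct pmap come from the kernel headers.

#include <sys/queue.h>

struct pv_entry {
        struct pmap     *pv_pmap;       /* pmap holding this mapping */
        vm_offset_t      pv_va;         /* virtual address of the mapping */
        TAILQ_ENTRY(pv_entry) pv_list;  /* all mappings of this page */
        TAILQ_ENTRY(pv_entry) pv_plist; /* all entries in this pmap */
};

/* Machine-dependent part embedded in each vm_page structure. */
struct md_page {
        int                     pv_list_count;  /* entries on the list */
        TAILQ_HEAD(, pv_entry)  pv_list;        /* head of the mapping list */
};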

Figure 5.16. Physical pages with a single mapping.


Each pv_entry can reference only one physical map. When an object becomes shared between two or more processes, each physical page of memory becomes mapped into two or more sets of page tables. To track these multiple references, the pmap module must create chains of pv_entry structures, as shown in Figure 5.17. Copy-on-write is an example of the need to find all the mappings of a page, as it requires that the page tables be set to read-only in all the processes sharing the object. The pmap module can implement this request by walking the list of pages associated with the object to be made copy-on-write. For each page, it traverses that page's list of pv_entry structures. It then makes the appropriate change to the page-table entry associated with each pv_entry structure.

Figure 5.17. Physical pages with multiple mappings.
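
The fragment below sketches the copy-on-write walk just described, using the pv_entry structures from the earlier sketch. Here pmap_pte() and pmap_invalidate_page() stand in for the machine-dependent page-table-lookup and TLB-flush primitives, the object and page types are the kernel's, and all locking is omitted.

/* Force every mapping of every resident page of the object read-only. */
static void
object_set_copy_on_write(vm_object_t object)
{
        vm_page_t m;
        struct pv_entry *pv;
        pt_entry_t *pte;

        TAILQ_FOREACH(m, &object->memq, listq)          /* each resident page */
                TAILQ_FOREACH(pv, &m->md.pv_list, pv_list) {
                        pte = pmap_pte(pv->pv_pmap, pv->pv_va);
                        *pte &= ~PG_RW;                 /* clear write access */
                        pmap_invalidate_page(pv->pv_pmap, pv->pv_va);
                }
}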


A system with many shared objects can require many pv_entry structures, which can use an unreasonable amount of kernel memory. The alternative would be to keep a list associated with each object of all the vm_map_entry structures that reference it. When it becomes necessary to modify the mapping of all the references to the page, the kernel could traverse this list, checking the address space associated with each vm_map_entry to see whether it held a reference to the page. For each page found, it could make the appropriate update.

The pv_entry structures consume more memory but reduce the time to do a common operation. For example, consider a system running a thousand processes that all share a common library. Without the pv_entry list, changing a page to copy-on-write would require checking all thousand processes. With the pv_entry list, only those processes using the page would need to be inspected.

Mapping Allocation and Deallocation

The primary responsibility of the pmap module is validating (allocating) and invalidating (deallocating) mappings of physical pages to virtual addresses. The physical pages represent cached portions of an object that is providing data from a file or an anonymous memory region. A physical page is bound to a virtual address because that object is being mapped into a process's address space either explicitly by mmap or implicitly by fork or exec. Physical-to-virtual address mappings are not created at the time that the object is mapped; instead, their creation is delayed until the first reference to a particular page is made. At that point, an access fault will occur, and pmap_enter() will be called. Pmap_enter() is responsible for any required side effects associated with creation of a new mapping. Such side effects are largely the result of entering a second translation for an already mapped physical page, for example, as the result of a copy-on-write operation. Typically, this operation requires flushing uniprocessor or multiprocessor TLB or cache entries to maintain consistency.

In addition to creating new mappings, pmap_enter() may also be called to modify the wiring or protection attributes of an existing mapping or to rebind an existing mapping for a virtual address to a new physical address. The kernel can handle changing attributes by calling the appropriate interface routine, described in the next subsection. Changing the target physical address of a mapping is simply a matter of first removing the old mapping and then handling it like any other new mapping request.

Pmap_enter() is the only routine that cannot lose state or delay its action. When called, it must create a mapping as requested, and it must validate that mapping before returning to the caller. On the PC, pmap_enter() must first check whether a page-table entry exists for the requested address. If a physical page has not yet been allocated to the process's page table at the location required for the new mapping, a zeroed page is allocated, wired, and inserted into the directory table of the process.

After ensuring that all page-table resources exist for the mapping being entered, pmap_enter() validates or modifies the requested mapping as follows (a skeleton in C appears after the list):

1. Check to see whether a mapping structure already exists for this virtual-to-physical address translation. If one does, the call must be one to change the protection or wiring attributes of the mapping; it is handled as described in the next subsection.

2. Otherwise, if a mapping exists for this virtual address but it references a different physical address, that mapping is removed.

3. The hold count on a page-table page is incremented each time a new page reference is added and decremented each time an old page reference is removed. When the last valid mapping is removed, the hold count drops to zero, the page is unwired, and the page-table page is freed because it contains no useful information.

4. A page-table entry is created and validated, with cache and TLB entries flushed as necessary.

5. If the physical address is outside the range managed by the pmap module (e.g., a frame-buffer page), no pv_entry structure is needed. Otherwise, for the case of a new mapping for a physical page that is mapped into an address space, a pv_entry structure is created.

6. For machines with a virtually-indexed cache, a check is made to see whether this physical page already has other mappings. If it does, all mappings may need to be marked cache inhibited, to avoid cache inconsistencies.
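
The skeleton below strings these steps together. Every helper in it (pmap_pte_alloc(), pmap_update_attributes(), pv_entry_create(), and the rest) is a stand-in for machine-dependent code, and the whole fragment is a sketch of the control flow rather than the real routine.

#define PG_W 0x200      /* assumed software-maintained wired bit */

void
pmap_enter(pmap_t pmap, vm_offset_t va, vm_page_t m, vm_prot_t prot,
    int wired)
{
        /*
         * Ensure a page-table page exists, allocating, wiring, and
         * raising its hold count if necessary (step 3's bookkeeping).
         */
        pt_entry_t *pte = pmap_pte_alloc(pmap, va);

        if ((*pte & PG_V) && (*pte & PG_FRAME) == VM_PAGE_TO_PHYS(m)) {
                pmap_update_attributes(pte, prot, wired);  /* step 1 */
                return;
        }
        if (*pte & PG_V)
                pmap_remove_pte(pmap, pte, va);    /* step 2: stale mapping */
        if (pmap_page_is_managed(m))
                pv_entry_create(pmap, va, m);      /* step 5: track mapping */
        *pte = VM_PAGE_TO_PHYS(m) | prot_to_pte_bits(prot) |
            (wired ? PG_W : 0);                    /* step 4: install entry */
        pmap_invalidate_page(pmap, va);            /* flush stale TLB state */
        /* Step 6: a virtually-indexed cache would check for aliases here. */
}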

When an object is unmapped from an address space, either explicitly by munmap or implicitly on process exit, the pmap module is invoked to invalidate and remove the mappings for all physical pages caching data for the object. Unlike pmap_enter(), pmap_remove() can be called with a virtual-address range encompassing more than one mapping. Hence, the kernel does the unmapping by looping over all virtual pages in the range, ignoring those for which there is no mapping and removing those for which there is one.

Pmap_remove() on the PC is simple. It loops over the specified address range, invalidating individual page mappings. Since pmap_remove() can be called with large sparsely allocated regions, such as an entire process virtual address range, it needs to skip invalid entries within the range efficiently. It skips invalid entries by first checking the directory-table entry for a particular address and, if an entry is invalid, skipping to the next 4-Mbyte boundary. When all page mappings have been invalidated, any necessary global cache flushing is done.
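
In outline, the loop looks like the following sketch, in which pm_pdir and the helper routines are modeled loosely on the i386 pmap; pmap_remove_pte() is assumed to do the per-mapping work described next.

void
pmap_remove(pmap_t pmap, vm_offset_t sva, vm_offset_t eva)
{
        vm_offset_t va;
        pt_entry_t *pte;

        for (va = sva; va < eva; ) {
                if ((pmap->pm_pdir[va >> 22] & PG_V) == 0) {
                        /* No page table: skip to the next 4-Mbyte boundary. */
                        va = (va + (1 << 22)) & ~((1 << 22) - 1);
                        continue;
                }
                pte = pmap_pte(pmap, va);
                if (*pte & PG_V)
                        pmap_remove_pte(pmap, pte, va);
                va += PAGE_SIZE;
        }
        pmap_invalidate_range(pmap, sva, eva);  /* delayed global flush */
}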

To invalidate a single mapping, the kernel locates and marks as invalid the appropriate page-table entry. The reference and modify bits for the page are saved in the page's vm_page structure for future retrieval. If this mapping was a user mapping, the hold count for the page-table page is decremented. When the count reaches zero, the page-table page can be reclaimed because it contains no more valid mappings. When a user page-table page is removed from the kernel's address space (i.e., as a result of removal of the final valid user mapping from that page), the process's directory table must be updated. The kernel does this update by invalidating the appropriate directory-table entry. If the physical address from the mapping is outside the managed range, nothing more is done. Otherwise, the pv_entry structure is found and is deallocated.

The pmap_qenter() and pmap_qremove() routines are faster versions of the pmap_enter() and pmap_remove() functions that can be used by the kernel to quickly create and remove temporary mappings. They can be used only on nonpageable mappings in the address space of the kernel. For example, the buffer-cache management routines use these routines to map file pages into kernel memory so that they can be read or written by the filesystem.

Change of Access and Wiring Attributes for Mappings

An important role of the pmap module is to manipulate the hardware access protections for pages. These manipulations may be applied to all mappings covered by a virtual-address range within a pmap via pmap_protect(), or they may be applied to all mappings of a particular physical page across pmaps via pmap_page_protect(). There are two features common to both calls. First, either form may be called with a protection value of VM_PROT_NONE to remove all mappings for a range of virtual addresses or for a particular physical page. Second, these routines should never add write permission to the affected mappings. Thus, calls including VM_PROT_WRITE should make no changes. This restriction is necessary for the copy-on-write mechanism to function properly. The request to make the page writable is made only in the vm_map_entry structure. When a later write attempt on the page is made by the process, a page fault will occur. The page-fault handler will inspect the vm_map_entry and determine that the write should be permitted. If it is a copy-on-write page, the fault handler will make any necessary copies before calling pmap_enter() to enable writing on the page. Thus, write permission on a page is added only via calls to pmap_enter().

Pmap_protect() is used primarily by the mprotect system call to change the protection for a region of process address space. The strategy is similar to that of pmap_remove(): Loop over all virtual pages in the range and apply the change to all valid mappings that are found. Invalid mappings are left alone.

For the PC, pmap_protect() first checks for the special cases. If the requested permission is VM_PROT_NONE, it calls pmap_remove() to handle the revocation of all access permission. If VM_PROT_WRITE is included, it just returns immediately. For a normal protection value, pmap_protect() loops over the given address range, skipping invalid mappings. For valid mappings, the page-table entry is looked up, and, if the new protection value differs from the current value, the entry is modified and any TLB and cache flushing is done. As occurs with pmap_remove(), any global cache actions are delayed until the entire range has been modified.
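
A sketch of this logic, with the same assumed helpers as the earlier fragments, follows; note that the global flush is delayed until the whole range has been processed.

void
pmap_protect(pmap_t pmap, vm_offset_t sva, vm_offset_t eva, vm_prot_t prot)
{
        vm_offset_t va;
        pt_entry_t *pte;

        if (prot == VM_PROT_NONE) {
                pmap_remove(pmap, sva, eva);    /* revoke all access */
                return;
        }
        if (prot & VM_PROT_WRITE)
                return;                         /* write is never added here */

        for (va = sva; va < eva; va += PAGE_SIZE) {
                pte = pmap_pte(pmap, va);
                if (pte == NULL || (*pte & PG_V) == 0)
                        continue;               /* skip invalid mappings */
                if (*pte & PG_RW)               /* modify only if it changes */
                        *pte &= ~PG_RW;
        }
        pmap_invalidate_range(pmap, sva, eva);  /* delayed global flush */
}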

Pmap_page_protect() is used internally by the virtual-memory system for two purposes. It is called to set read-only permission when a copy-on-write operation is set up (e.g., during fork). It also removes all access permissions before doing page replacement to force all references to a page to block pending the completion of its operation. In Mach, this routine used to be two separate routines, pmap_clear_ptes() and pmap_remove_all(), and many pmap modules implement pmap_page_protect() as a call to one or the other of these functions, depending on the protection argument.

In the PC implementation of pmap_page_protect(), if VM_PROT_WRITE is requested, it returns without doing anything. Write permission must be added on a page-by-page basis by the page-fault-handling routine, as described for pmap_protect(). Otherwise, pmap_page_protect() traverses the list of pv_entry structures for this page, invalidating the individual mappings as described in the previous subsection. As occurs with pmap_protect(), each entry is checked to ensure that it is changing before expensive TLB and cache flushes are done. Note that TLB and cache flushing differs from that for pmap_remove(), since it must invalidate entries from multiple process contexts, rather than invalidating multiple entries from a single process context.

Pmap_change_wiring() is called to wire or unwire a single machine-independent virtual page within a pmap. As described in the previous subsection, wiring informs the pmap module that a mapping should not cause a hardware fault that reaches the machine-independent vm_fault() code. Wiring is typically a software attribute that has no effect on the hardware MMU state: It simply tells the pmap not to throw away state about the mapping. As such, if a pmap module never discards state, then it is not strictly necessary for the module even to track the wired status of pages. The only side effect of not tracking wiring information in the pmap is that the mlock system call cannot be completely implemented without a wired page-count statistic.

The PC pmap implementation maintains wiring information. An unused bit in the page-table-entry structure records a page's wired status. Pmap_change_wiring() sets or clears this bit when it is invoked with a valid virtual address. Since the wired bit is ignored by the hardware, there is no need to modify the TLB or cache when the bit is changed.
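
A sketch follows; the position of the software-only PG_W bit is an assumption, as is the pmap_pte() helper.

void
pmap_change_wiring(pmap_t pmap, vm_offset_t va, int wired)
{
        pt_entry_t *pte = pmap_pte(pmap, va);

        if (pte == NULL || (*pte & PG_V) == 0)
                return;
        if (wired)
                *pte |= PG_W;
        else
                *pte &= ~PG_W;
        /* The MMU ignores PG_W, so no TLB or cache flush is needed. */
}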

Management of Page-Usage Information

The machine-independent page-management code needs to be able to get basic information about the usage and modification of pages from the underlying hardware. The pmap module facilitates the collection of this information without requiring the machine-independent code to understand the details of the mapping tables by providing a set of interfaces to query and clear the reference and modify bits. The pageout daemon can call vm_page_test_dirty() to determine whether a page is dirty. If the page is dirty, the pageout daemon can write it to backing store and then call pmap_clear_modify() to clear the modify bit. Similarly, when the pageout daemon pages out or inactivates a page, it uses pmap_clear_reference() to clear the reference bit for the page. When it wants to update the active count for a page, it uses pmap_ts_referenced() to count the number of uses of the page since it was last scanned.

One important feature of the query routines is that they should return valid information even if there are currently no mappings for the page in question. Thus, referenced and modified information cannot just be gathered from the hardware-maintained bits of the various page-table or TLB entries; rather, there must be a place where the information is retained when a mapping is removed.

For the PC, the modified information for a page is stored in the dirty field of its vm_page structure. Initially cleared, the information is updated whenever a mapping for a page is considered for removal. The vm_page_test_dirty() routine first checks the dirty field and, if the bit is set, returns TRUE immediately. Since the dirty field records only past information, the routine still needs to check status bits in the page-table entries for currently valid mappings of the page. This information is checked by calling the pmap_is_modified() routine, which immediately returns FALSE if it is not passed a managed physical page. Otherwise, pmap_is_modified() traverses the pv_entry structures associated with the physical page, examining the modified bit for each pv_entry's associated page-table entry. It can return TRUE as soon as it encounters a set bit or FALSE if the bit is not set in any page-table entry.

The referenced information for a page is stored in the act_count field and as a flag of its vm_page structure. Initially cleared, the information is updated periodically by the pageout daemon. As it scans memory, the pageout daemon calls the pmap_ts_referenced() routine to collect a count of references to the page. The pmap_ts_referenced() routine returns zero if it is not passed a managed physical page. Otherwise, it traverses the pv_entry structures associated with the physical page, examining and clearing the referenced bit for the pv_entry's associated page-table entry. It returns the number of referenced bits that it found.
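
The traversal might look like the following sketch, reusing the assumed pv_entry structures and helpers; pmap_page_is_managed() is a hypothetical managed-page test.

int
pmap_ts_referenced(vm_page_t m)
{
        struct pv_entry *pv;
        pt_entry_t *pte;
        int count = 0;

        if (!pmap_page_is_managed(m))   /* unmanaged pages have no pv list */
                return (0);
        TAILQ_FOREACH(pv, &m->md.pv_list, pv_list) {
                pte = pmap_pte(pv->pv_pmap, pv->pv_va);
                if (*pte & PG_A) {      /* hardware referenced bit */
                        *pte &= ~PG_A;
                        pmap_invalidate_page(pv->pv_pmap, pv->pv_va);
                        count++;
                }
        }
        return (count);
}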

The clear routines also return immediately if they are not passed a managed physical page. Otherwise, the referenced or modified information is cleared in the vm_page structure, and they loop over all pv_entry structures associated with the physical page, clearing the hardware-maintained page-table-entry bits. This final step may require TLB or cache flushes along the way or afterward.

Initialization of Physical Pages

Two interfaces are provided to allow the higher-level virtual-memory routines to initialize physical memory. Pmap_zero_page() takes a physical address and fills the page with zeros. Pmap_copy_page() takes two physical addresses and copies the contents of the first page to the second page. Since both take physical addresses, the pmap module will most likely have to first map those pages into the kernel's address space before it can access them.

The PC implementation has a pair of global kernel virtual addresses reserved for zeroing and copying pages. Pmap_zero_page() maps the specified physical address into the reserved virtual address, calls bzero() to clear the page, and then removes the temporary mapping with the single translation-invalidation primitive used by pmap_remove(). Similarly, pmap_copy_page() creates mappings for both physical addresses, uses bcopy() to make the copy, and then removes both mappings.
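
In outline, pmap_zero_page() might look like the sketch below, where CADDR2 and CMAP2 stand for the reserved kernel virtual address and its page-table entry; the names follow the historical i386 pmap, but the fragment is an assumption-laden sketch, with kernel_pmap and the flush helper taken as given.

extern char *CADDR2;            /* reserved kernel virtual address */
extern pt_entry_t *CMAP2;       /* page-table entry mapping CADDR2 */

void
pmap_zero_page(uint32_t pa)
{
        *CMAP2 = PG_V | PG_RW | (pa & PG_FRAME);        /* temporary mapping */
        pmap_invalidate_page(kernel_pmap, (vm_offset_t)CADDR2);
        bzero(CADDR2, PAGE_SIZE);
        *CMAP2 = 0;                                     /* remove the mapping */
}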

Management of Internal Data Structures

The remaining pmap interface routines are used for management and synchronization of internal data structures. Pmap_pinit() creates an instance of the machine-dependent pmap structure. It is used by the vmspace_fork() and vmspace_exec() routines when creating new address spaces during a fork or exec. Pmap_release() deallocates the pmap's resources. It is used by the vmspace_free() routine when cleaning up a vmspace when a process exits.


   
 

