Chapter 4. Memory Management

In this chapter

  • 4.1 Pages

  • 4.2 Memory Zones

  • 4.3 Page Frames

  • 4.4 Slab Allocator

  • 4.5 Slab Allocator's Lifecycle

  • 4.6 Memory Request Path

  • 4.7 Linux Process Memory Structures

  • 4.8 Process Image Layout and Linear Address Space

  • 4.9 Page Tables

  • 4.10 Page Fault

  • Summary

  • Project: Process Memory Map

  • Exercises

Memory management is the method by which an application running on a computer accesses memory through a combination of hardware and software manipulation. The job of the memory management subsystem is to allocate available memory to requesting processes and to deallocate the memory from a process as it releases it, keeping track of memory as it is handled.

The operating system lifespan can be split up into two phases: normal execution and bootstrapping. The bootstrapping phase makes temporary use of memory. The normal execution phase splits the memory between a portion that is permanently assigned to the kernel code and data, and a second portion that is assigned for dynamic memory requests. Dynamic memory requests come about from process creation and growth. This chapter concentrates on normal execution.

We must understand a few high-level concepts regarding memory management before we delve into the specifics of implementation and how they tie together. This chapter first overviews what a memory management system is and what virtual memory is. Next, we discuss the various kernel structures and algorithms that aid in memory management. After we understand how the kernel manages memory, we consider how process memory is split up and managed and outline how it ties into the kernel structures in a top-down manner. After we cover process memory acquisition, management, and release, we look at page faults and how the two architectures, PowerPC and x86, handle them.

The simplest type of memory management system is one in which a running process has access to all the memory. For a process to work in this way, it must contain all the code necessary to manipulate any hardware it needs in the system, must keep track of its memory addresses, and must have all its data loaded into memory. This approach places a heavy responsibility on the program developer and assumes that processes can fit into the available memory. As these requirements have proven unrealistic given our increasingly complex program demands, available memory is usually divided between the operating system and user processes, relegating the task of memory management to the operating system.

The demands placed on operating systems today are such that multiple programs should be able to share system resources and that the limitations on memory be transparent to the program developer. Virtual memory is the result of a method that has been adopted to support programs with the need to access more memory than is physically available on the system and to facilitate the efficient sharing of memory among multiple programs. Physical, or core, memory is what is made available by the RAM chips in the system. Virtual memory allows programs to behave as though they have more memory available than that provided by the system's core memory by transparently making use of disk space. Disk space, which is less expensive and has more capacity for storage than physical memory, can be used as an extension of internal memory. We call this virtual memory because the disk storage effectively acts as though it were memory without being so. Figure 4.1 illustrates the relations between the various levels of data storage.

Figure 4.1. Data Access Hierarchy


To use virtual memory, the program data is split into basic units that can be moved from disk to memory and back. This way, the parts of the program that are being used can be placed into memory, taking advantage of the faster access times. The unused parts are temporarily placed on disk, which minimizes the impact of the disk's significantly higher access times while still having the data ready for access. These data units, or blocks of virtual memory, are called pages. In the same manner, physical memory needs to be split up into partitions that hold these pages. These partitions are called page frames. When a process requests an address, the page containing it is loaded into memory. All requests to data on that page yield access to the page. If no addresses in a page have been previously accessed, the page is not loaded into memory. The first access to an address in a page yields a miss or page fault because it is not available in memory and must be acquired from disk. A page fault is a trap. When this happens, the kernel must select a page frame and write its contents (the page) back to disk, replacing it with the contents of the page the program just requested.
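To make demand paging concrete, the following user-space sketch (my own illustration, not code from the kernel) maps a few anonymous pages with mmap() and uses mincore() to report which of them are resident in physical memory. The pages acquire page frames only after they are first touched and fault in.

/*
 * Observe demand paging from user space: map four pages, check
 * residency with mincore() before and after touching two of them.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
        long page_size = sysconf(_SC_PAGESIZE);
        size_t len = 4 * page_size;
        unsigned char vec[4];

        /* Anonymous mapping: no physical page frames are assigned yet. */
        char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED)
                return 1;

        mincore(buf, len, vec);
        printf("before touch: ");
        for (int i = 0; i < 4; i++)
                printf("%d ", vec[i] & 1);      /* 0 = not resident */
        printf("\n");

        buf[0] = 'x';                           /* fault in page 0 */
        buf[2 * page_size] = 'y';               /* fault in page 2 */

        mincore(buf, len, vec);
        printf("after touch:  ");
        for (int i = 0; i < 4; i++)
                printf("%d ", vec[i] & 1);      /* pages 0 and 2 now resident */
        printf("\n");

        munmap(buf, len);
        return 0;
}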

When a program fetches data from memory, it uses addresses to indicate the portion of memory it needs to access. These addresses, called virtual addresses, make up the process virtual address space. Each process has its own range of virtual addresses that prevent it from reading or writing over another program's data. Virtual memory allows processes to "use" more memory than what's physically available. Hence, the operating system can afford to give each process its own virtual linear address space.[1]

[1] Process addressing makes a few assumptions regarding process memory usage. The first is that a process will not make use of all the memory it requests at the same time. The second is that two or more processes instantiated from a common executable should need only to load the executable object once.
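As a quick illustration of per-process virtual address spaces (a user-space sketch of my own, not taken from this chapter), the program below forks a child. Parent and child typically print the same virtual address for the local variable yet see different values, because each process's virtual address space maps that address onto its own page frame once the child writes its copy.

/*
 * Same virtual address, different contents: each process has its own
 * virtual-to-physical mapping for the page holding 'value'.
 */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
        int value = 1;

        if (fork() == 0) {
                value = 2;      /* child writes its private copy */
                printf("child:  addr=%p value=%d\n", (void *)&value, value);
                return 0;
        }

        wait(NULL);
        printf("parent: addr=%p value=%d\n", (void *)&value, value);
        return 0;
}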

The size of this address space is determined by the architecture's word size. If a processor can hold a 32-bit value in its registers, the virtual address space of a program running on that processor consists of 2^32 addresses.[2] Not only does virtual memory expand the amount of memory addressable, it makes certain limitations imposed by the nature of physical memory transparent to the user space programmer. For example, the programmer does not need to manage any holes in memory. In our 32-bit example, we have a virtual address space that ranges from 0 to 4GB. If the system has 2GB of RAM, its physical address range spans from 0 to 2GB. Our programs might be as large as 4GB, but not all of that can fit into the available memory at once. The entirety of the program is kept on disk, and pages are moved into memory as they are used.

[2] Although the limit of memory available is technically the sum of memory and swap space, the addressable limit is imposed by the size of the architecture's word size. This means that even in a system with more than 4GB of memory, a process cannot malloc more than 3GB (after accounting for the top 1GB that is assigned to the kernel).
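The arithmetic behind these numbers can be checked with a trivial program. Note that the 3GB/1GB user/kernel split assumed here is the common default on 32-bit x86 Linux; it is a kernel configuration choice, not a hardware constant.

/* Back-of-the-envelope check of the 32-bit address space figures. */
#include <stdio.h>

int main(void)
{
        unsigned long long addresses = 1ULL << 32;   /* 2^32 addresses = 4GB */
        unsigned long long user_part = 3ULL << 30;   /* 3GB user portion     */
        unsigned long long kern_part = 1ULL << 30;   /* 1GB kernel portion   */

        printf("virtual address space: %llu bytes\n", addresses);
        printf("user portion:          %llu bytes\n", user_part);
        printf("kernel portion:        %llu bytes\n", kern_part);
        return 0;
}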

The act of moving a page from memory to disk and back is called paging. Paging includes the translation of a program's virtual address to a physical memory address.

The memory manager is the part of the operating system that keeps track of associations between virtual addresses and physical addresses and handles paging. To the memory manager, the page is the basic unit of memory. The Memory Management Unit (MMU), which is a hardware agent, performs the actual translations.[3] The kernel provides page tables: indexed lists of the available pages and their associated addresses, which the MMU can access when performing address translations. The page tables are updated whenever a page is loaded into memory.

[3] Some microprocessors, such as the Motorola 68000 (68K), lack an MMU altogether. uClinux is a Linux distribution that has specifically ported Linux to run on MMU-less systems. Without an MMU, virtual addresses and physical addresses are one and the same.
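As an illustration of what the page tables and the MMU work with, the following sketch (assuming the classic two-level x86 scheme with 4KB pages; this is not kernel code) splits a 32-bit virtual address into the page directory index, page table index, and page offset used during translation.

/*
 * Decompose a 32-bit virtual address for two-level x86 paging:
 * bits 31-22 index the page directory, bits 21-12 index a page
 * table, and bits 11-0 are the offset within the page frame.
 */
#include <stdio.h>

#define PAGE_SHIFT      12
#define PTRS_PER_TABLE  1024

static void decompose(unsigned long vaddr)
{
        unsigned long pgd_index = (vaddr >> 22) & (PTRS_PER_TABLE - 1);
        unsigned long pte_index = (vaddr >> PAGE_SHIFT) & (PTRS_PER_TABLE - 1);
        unsigned long offset    = vaddr & ((1UL << PAGE_SHIFT) - 1);

        printf("vaddr 0x%08lx -> pgd %lu, pte %lu, offset 0x%lx\n",
               vaddr, pgd_index, pte_index, offset);
}

int main(void)
{
        decompose(0x08048000UL);   /* a typical x86 text-segment address   */
        decompose(0xbffff123UL);   /* an address near the top of user space */
        return 0;
}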

Having seen the high-level concepts in memory management, let's begin our look at how the kernel implements its memory manager, starting with the implementation of pages.



