Reasons to Create Additional Heaps

In addition to the process's default heap, you can create additional heaps in your process's address space. You would want to create additional heaps in your own applications for the following reasons:

  • Component protection
  • More efficient memory management
  • Local access
  • Avoiding thread synchronization overhead
  • Quick free

Let's examine each reason in detail.

Component Protection

Imagine that your application needs to process two components: a linked list of NODE structures and a binary tree of BRANCH structures. You have two source code files: LnkLst.cpp, which contains the functions that process the linked list of NODEs, and BinTree.cpp, which contains the functions that process the binary tree of BRANCHes.

If the NODEs and the BRANCHes are stored together in a single heap, the combined heap might look like Figure 18-1.

Now let's say that a bug in the linked-list code causes the 8 bytes after NODE 1 to be accidentally overwritten, which in turn causes the data in BRANCH 3 to be corrupted. When the code in BinTree.cpp later attempts to traverse the binary tree, it will probably fail because of this memory corruption. Of course, this will lead you to believe that there is a bug in your binary-tree code when in fact the bug exists in the linked-list code. Because the different types of objects are mixed together in a single heap, tracking down and isolating bugs becomes significantly more difficult.

Figure 18-1. A single heap that stores NODEs and BRANCHes together

By creating two separate heaps—one for NODEs and the other for BRANCHes—you localize your problems. A small bug in your linked-list code does not compromise the integrity of your binary tree, and vice versa. It is still possible to have a bug in your code that causes a wild memory write to another heap, but this is a far less likely scenario.
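
Here is a minimal sketch of that separation using the Win32 HeapCreate and HeapAlloc functions. The NODE and BRANCH layouts, the global heap handles, and the allocation helpers are hypothetical and exist only to illustrate the idea of one heap per component:

    #include <Windows.h>

    // Placeholder layouts for this example; the real structures can be anything.
    struct NODE   { NODE*   pNext;  int nData; };
    struct BRANCH { BRANCH* pLeft;  BRANCH* pRight; int nData; };

    HANDLE g_hNodeHeap   = NULL;    // holds nothing but NODEs
    HANDLE g_hBranchHeap = NULL;    // holds nothing but BRANCHes

    BOOL CreateComponentHeaps() {
       // Flags = 0, initial size = 0 (one committed page), maximum size = 0 (growable)
       g_hNodeHeap   = HeapCreate(0, 0, 0);
       g_hBranchHeap = HeapCreate(0, 0, 0);
       return (g_hNodeHeap != NULL) && (g_hBranchHeap != NULL);
    }

    NODE* AllocNode() {
       // A stray write by the binary-tree code is far less likely to land in this heap.
       return (NODE*) HeapAlloc(g_hNodeHeap, HEAP_ZERO_MEMORY, sizeof(NODE));
    }

    BRANCH* AllocBranch() {
       return (BRANCH*) HeapAlloc(g_hBranchHeap, HEAP_ZERO_MEMORY, sizeof(BRANCH));
    }

Each component's code allocates only from its own heap, so LnkLst.cpp never hands out memory that sits next to a BRANCH, and vice versa.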

More Efficient Memory Management

Heaps can be managed more efficiently when all of the objects allocated within them are the same size. For example, let's say that every NODE structure requires 24 bytes, every BRANCH structure requires 32 bytes, and all of these objects are allocated from a single heap. Figure 18-2 shows a fully occupied single heap with several NODE and BRANCH objects allocated within it. If NODE 2 and NODE 4 are freed, memory in the heap becomes fragmented. If you then attempt to allocate a BRANCH structure, the allocation will fail even though 48 bytes are free: the free bytes are split across two noncontiguous 24-byte blocks, and a BRANCH needs a single contiguous 32-byte block.

If each heap consisted only of objects that were the same size, freeing an object would guarantee that another object would fit perfectly into the freed object's space.

Figure 18-2. A single fragmented heap that contains several NODE and BRANCH objects
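
A short continuation of the earlier sketch shows why a same-size heap recovers cleanly from a free; g_hNodeHeap and AllocNode are the hypothetical helpers defined in the previous example:

    NODE* ReuseFreedNode(NODE* pOldNode) {
       // Every block in g_hNodeHeap is exactly sizeof(NODE) bytes, so freeing
       // one leaves a hole that the very next NODE allocation can fill exactly.
       HeapFree(g_hNodeHeap, 0, pOldNode);   // opens a sizeof(NODE)-byte hole
       return AllocNode();                   // reuses that hole; no fragmentation
    }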

Local Access

There is a huge performance penalty whenever the system must swap a page of RAM to and from the system's paging file. If you keep accesses to memory localized to a small range of addresses, it is less likely that the system will need to swap pages between RAM and disk.

So, in designing an application, it's a good idea to allocate things close to each other if they will be accessed together. Returning to our linked list and binary tree example, traversing the linked list is not related in any way to traversing the binary tree. By keeping all the NODEs close together (in one heap), you can keep the NODEs in adjoining pages; in fact, it's likely that several NODEs will fit within a single page of physical memory. Traversing the linked list will not require that the CPU refer to several different pages of memory for each NODE access.

If you were to allocate both NODEs and BRANCHes in a single heap, the NODEs would not necessarily be close together. In a worst-case situation, you might have only one NODE per page of memory, with the remainder of each page occupied by BRANCHes. In this case, traversing the linked list could cause a page fault for every NODE access, which would make the process extremely slow.
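
As a rough, back-of-the-envelope check of this argument (assuming the 24-byte NODE from the earlier example and using the standard GetSystemInfo call to obtain the page size):

    SIZE_T NodesPerPage() {
       // With 24-byte NODEs packed into their own heap, a 4-KB page holds
       // 4096 / 24 = 170 NODEs; in the worst-case mixed layout described
       // above, a page might hold only 1.
       const SIZE_T cbNode = 24;    // the NODE size used earlier in this section
       SYSTEM_INFO si;
       GetSystemInfo(&si);
       return si.dwPageSize / cbNode;
    }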

Avoiding Thread Synchronization Overhead

As I'll explain shortly, heaps are serialized by default so that there is no chance of data corruption if multiple threads attempt to access the heap at the same time. However, the heap functions must execute additional code in order to keep the heap thread-safe. If you are performing lots of heap allocations, executing this additional code can really add up, taking a toll on your application's performance. When you create a new heap, you can tell the system that only one thread will access the heap and therefore the additional code will not execute. However, be careful—you are now taking on the responsibility of keeping the heap thread-safe. The system will not be looking out for you.
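
Here is a minimal sketch of opting out of that serialization with the HEAP_NO_SERIALIZE flag (the function name is made up for this example); remember that you, not the system, must now guarantee single-threaded access:

    HANDLE CreateSingleThreadHeap() {
       // This heap's functions skip the thread-synchronization code, so it is
       // safe only if exactly one thread ever allocates from and frees to it.
       return HeapCreate(
          HEAP_NO_SERIALIZE,   // do not serialize access to this heap
          0,                   // initial size: one committed page
          0);                  // maximum size: 0 means the heap can grow
    }

Because the flag is passed to HeapCreate, every subsequent HeapAlloc and HeapFree call on this heap runs without the extra synchronization code.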

Quick Free

Finally, using a dedicated heap for some data structures allows you to free the entire heap without having to free each memory block explicitly within the heap. For example, when Windows Explorer walks the directory hierarchy of your hard drive, it must build a tree in memory. If you tell Windows Explorer to refresh its display, it could simply destroy the heap containing the tree and start over (assuming, of course, that it has used a dedicated heap only for the directory tree information). For many applications, this can be extremely convenient—and they'll run faster too.
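
A minimal sketch of that refresh pattern, assuming a hypothetical g_hDirTreeHeap that holds nothing but the directory-tree data:

    HANDLE g_hDirTreeHeap = NULL;    // dedicated to the directory tree only

    void RefreshDirectoryTree() {
       // Throw away every block in the old tree with one call; there is no
       // need to walk the tree freeing each allocation individually.
       if (g_hDirTreeHeap != NULL)
          HeapDestroy(g_hDirTreeHeap);

       // Start over with a fresh, empty heap and rebuild the tree into it.
       g_hDirTreeHeap = HeapCreate(0, 0, 0);
       // ... allocate the new tree's blocks from g_hDirTreeHeap ...
    }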


