Heap Defenses


In the last several years, heap exploits have gone from exotic to typical. Countermeasures such as /GS (explained later in this chapter) and better programming practices have made stack overruns less frequent, but the heap is in some ways easier to exploit. First, the size of an allocation is typically determined by an arithmetic calculation, and both computers and programmers can be very bad at math, although computers are faster and more predictable. For a more comprehensive look at how computers can mangle integer manipulation, see "Integer Handling with the C++ SafeInt Class" (LeBlanc 2004) and "Another Look at the SafeInt Class" (LeBlanc 2005).

To make matters worse, common heap behaviors make unreliable exploits work more often. Say you can get execution flow to jump into some spot in the heap, but you can't control exactly where it will land. One obvious attack is to put an enormous amount of data into the heap and use a very large NOP slide, so that execution landing anywhere in the slide runs down into your shell code. Because it isn't always possible to put very large amounts of data into the heap, an alternate approach is "heap spraying," a technique first noted in eEye's advisory about the IDA overflow (eEye 2001), which became well known when it was developed into the Code Red worm in 2001. Heap spraying writes a large number of copies of the shell code into the heap; depending on the details of the exploit, a jump into the heap is then much more likely to land in shell code. Because these attacks execute data placed in legitimate allocations, none of the defenses discussed in the remainder of this section apply to them, but NX (No eXecute), covered in the next section, will stop these attacks cold.

Another problem exploits a very common programming flaw: dangling pointers and double-free conditions. Mike Marcelais of the Office Trustworthy Computing team came up with the following three common double-free conditions that can be dangerous (personal communication):

The first common double-free problem occurs when the second free corrupts the heap's own control structures in a way that an attacker can exploit. If the heap manager does not maintain the control data adjacent to the user data, this attack may not be possible. The base Windows heap in Windows Vista has also been hardened against this attack; more on this in a moment.

The second problem is a pattern of alloc(a), free(a), alloc(b), free(a), alloc(c), all on the same address, as illustrated by the following code:

 char* ptrA = new char[64];
 // some code here
 delete[] ptrA;
 // now we need some more memory
 char* ptrB = new char[64];
 // Note that ptrB has the same value as ptrA
 // Some more code, just to confuse things
 // And now we make a mistake
 delete[] ptrA; // Oops! We just freed ptrB!
 // Now we need more memory
 char* ptrC = new char[64];
 // ptrC will now be used to write memory that
 // the code dealing with ptrB thinks is validated

This pattern is dangerous because efficient heap behavior reallocates recently freed memory of the same size. The function that requested alloc(b) is left holding a pointer to memory now controlled by the function that called alloc(c). If the attacker can control the memory written into alloc(c), this situation is very exploitable, and there isn't anything an allocator can do to prevent it from causing problems. An obvious attack: function b validates user input and copies it into its allocation; function c then copies something else over the validated input, and a subsequent operation is performed on the data in allocation b.

A third attack is a pattern of alloc(a), free(a), alloc(b), use(a), free(a), as shown here:

 CFoo* ptrA = new CFoo;
 // Some code, and we then delete allocation A
 delete ptrA;
 // Now we need another allocation of the same size
 // Note that ptrA and ptrB point to the same memory
 CFoo* ptrB = new CFoo;
 // Copy some data into ptrB
 // Do something with ptrA, not knowing that ptrB has changed things
 // If ptrA is a class, this includes calling the destructor
 delete ptrA;

Note that if allocation a contains an object with a destructor, the destructor call is equivalent to using the memory. In this case, the code using ptrB is changing the contents of the buffer pointed to by ptrA, while ptrA is believed to hold validated data. There is some potential for the usage of ptrB to attack ptrA, and the converse is true as well: the usage of ptrA could very easily cause the data kept in ptrB to become invalid.

One good programming practice prevents this type of error from being exploitable: always set pointers to null when freeing them (although this still won't help if there are multiple copies of the same pointer). The use(a) step will then cause a null dereference crash, and in the previous example, freeing or deleting a null pointer is benign. Not only does the crash expose the bug, but the pattern of alloc(a), free(a), alloc(b), free(a), alloc(c) becomes non-exploitable as well: calling delete on a null pointer does nothing, so functions b and c would end up with allocations in different places. The following C++ inline functions would help:

 template < typename T >
 void DeleteT( T*& tPtrRef )
 {
    assert( tPtrRef != NULL );
    delete tPtrRef;
    tPtrRef = NULL;
 }

 // Use when allocation is new T[count]
 template < typename T >
 void DeleteTArray( T*& tPtrRef )
 {
    assert( tPtrRef != NULL );
    delete[] tPtrRef;
    tPtrRef = NULL;
 }

The reason these functions must be templatized is that for delete or delete[] to call object destructors properly, the object type must be known. A debugging assert helps catch and fix these conditions: at run time the second delete is benign, and without the assert you wouldn't notice the double-free, which could be a symptom of other serious errors. While the heap manager can't protect you against the last two problems listed here, some of the other countermeasures might help. Our advice is that all double-free bugs should be fixed. Another good approach is to use smart pointer classes, although the behavior of the class must be understood before it is used.

In the case of a heap overrun, the effects depend on the heap manager being used. The default Windows heap places control data immediately before and after every allocation, and attackers can target both the control data and data kept on the heap in adjacent allocations. A number of researchers have found ways to attack the default Windows heap, and the Windows Vista heap has been hardened considerably in response. Here's a partial list of recent improvements:

  • Checking validity of forward and back links  A free block has the addresses of the previous and next free blocks stored immediately after the block header. In the classic unlink attack, the value of the forward link becomes the value to write, and the value of the backward link is where to write it, so an attacker gets an arbitrary 4 bytes (on a 32-bit system) written anywhere in memory. The change is to check that the blocks at those locations properly point back to the block being unlinked. This improvement was delivered in Windows XP SP2.

  • Block metadata randomization  Part of the block header is XOR’d with a random number, which makes determining the value to overwrite very difficult. The performance impact is small, but the benefits are large.

  • Entry integrity check  The previous 8-bit cookie has been repurposed to validate a larger part of the header. This is another change with low performance impact that makes the header difficult to attack.

  • Heap base randomization  This was mentioned earlier in the ASLR section.

  • Heap function pointer randomization  Function pointers used by the heap are encoded. This technique will be discussed at greater length in Chapter 10.

Basically, we must expect the abilities of the attackers to continue to improve, but we must also expect the defenders to continue to improve. Any list of attacks and countermeasures we can give you is probably going to be out of date by the time you read this book. What is important is knowing how to protect yourself and how to leverage the capabilities of the operating system to help protect your customers.

The first heap countermeasure that's new to Windows Vista is the ability to terminate the application on heap corruption, although it is still possible for an exploit to fire before the heap manager notices the corruption. In older versions of the heap, the default behavior when an application's heap became corrupt was just to leak the corrupted memory and keep running, even in the face of poorly behaved code. For example, here's something guaranteed to cause problems:

 char* pBuf = (char*)malloc(128);
 char* pBuf2 = (char*)malloc(128);
 char* pBuf3 = (char*)malloc(128);
 memset(pBuf, 'A', 128*3);
 printf("Freeing pBuf3\n");
 free(pBuf3);
 printf("Freeing pBuf2\n");
 free(pBuf2);
 printf("Freeing pBuf\n");
 free(pBuf);

On Windows Vista, even without the heap set to terminate on corruption, this code won’t get any further than the first call to free before it causes the application to abort. On earlier versions of the operating system, including Windows Server 2003 and Windows XP, it executes all the printf statements and exits normally. Note that this has to be tested with release builds, because the debug heap does extra checking.

It’s always better to crash than to run someone else’s shell code, but your customers won’t appreciate crashing either. Enabling terminate on corruption for your process’s heap should be done early in your development cycle to give you time to shake out any previously benign bugs. Additionally, if your application can host third-party code in some form, such as plug-ins, you may want to think about getting the third-party code out of your process. A surprisingly large number of the crashes in Microsoft Office and Internet Explorer are due to code that isn’t shipped by Microsoft. To enable the heap to terminate the application on corruption, simply add this code snippet to your application’s main or WinMain function:

 bool EnableTerminateOnCorrupt()
 {
      if( HeapSetInformation( GetProcessHeap(),
                              HeapEnableTerminationOnCorruption,
                              NULL,
                              0 ) )
      {
           printf( "Terminate on corruption enabled\n" );
           return true;
      }
      printf( "Terminate on corruption not enabled - err = %d\n",
              GetLastError() );
      return false;
 }

Obviously, you wouldn’t leave diagnostic printf statements in your shipping code, so handle errors however you like. We’d suggest making an unusual failure such as this an exception–or this could be a protection that is only enabled on Vista and just ignore errors when running an earlier version of Windows.

An additional countermeasure is the low fragmentation heap (LFH), which has historically been more resistant to attack than the standard Windows heap. Why not use the LFH all of the time? The answer is that heap performance is very dependent on how an application uses the heap, and in fact, Windows Vista may decide to use the LFH at run time if usage patterns make it beneficial. Before shipping code with the LFH, use performance benchmarking to see whether performance improves or degrades. If the LFH works well with the application, here's how to enable it:

 bool EnableLowFragHeap()
 {
      ULONG ulHeapInfo = 2;
      if( HeapSetInformation( GetProcessHeap(),
                              HeapCompatibilityInformation,
                              &ulHeapInfo,
                              sizeof( ULONG ) ) )
      {
           printf( "Low fragmentation heap enabled\n" );
           return true;
      }
      printf( "Low fragmentation heap not enabled - err = %d\n",
              GetLastError() );
      return false;
 }



Writing Secure Code for Windows Vista
ISBN: 0735623937
Year: 2004
Pages: 122
