Finding Problems Using Code Reviews

When performing a code review of managed code, you can look for several items to find security problems. In this section, we cover some of the most common security issues you might find during code reviews of managed code. Luckily, several available resources can help you perform a security code review, such as http://msdn.microsoft.com/library/en-us/dnpag2/html/SecurityCodeReviewIndex.asp. Guides such as these are often geared toward the developer. However, by knowing which guidelines a developer should follow, you can more easily detect security weaknesses when a developer doesn't follow them.

In addition to manually performing code reviews, FxCop (http://www.gotdotnet.com/team/fxcop) is a tool that analyzes .NET managed code to make sure the assembly adheres to the Microsoft .NET Framework Design Guidelines. It can inspect assemblies for several types of flaws, and it also has some great security checks. If your developers aren't already using FxCop during the development process, their applications are likely to have security issues that can be found easily using the tool. Of course, you can run this tool yourself to find the flaws; attackers certainly will.

Even if your developers are using FxCop, they can suppress warnings and even turn off entire rules. For instance, to disable an FxCop message, the SuppressMessage attribute can be used. Even so, FxCop can still be used to find the violations. Anywhere in the code that a developer suppressed a message, especially a security message, take a closer look to make sure the developer wasn't suppressing the message to avoid having to figure out how to revise the code to be more secure.
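For example, a suppressed security warning might look like the following hypothetical sketch (the class and method names are made up for illustration; CA2122 is one of FxCop's security rules):

using System.Diagnostics.CodeAnalysis;

public class FileHelper
{
    // A suppression like this deserves scrutiny during a review: was
    // the rule violation actually safe, or just inconvenient to fix?
    [SuppressMessage("Microsoft.Security",
        "CA2122:DoNotIndirectlyExposeMethodsWithLinkDemands",
        Justification = "Reviewed; callers are validated elsewhere.")]
    public void OpenConfigFile()
    {
        // ...
    }
}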

The rest of this section discusses other security risks that can be found by reviewing the managed source code, such as the following:

  • Calling unsafe code

  • Problems with asserts

  • Problems with link demands

  • Poor exception handling

Calling Unsafe Code

Any code with a security bug is considered unsafe. In this section, however, we refer to unsafe code specifically as code in which the following types of conditions occur:

  • Using unverifiable code

  • Using Win32 APIs

  • Marshaling data

If an application written in managed code calls unsafe code, the chances of a security problem increase because the CLR cannot adequately protect the system once unsafe or native code is introduced. When an attacker's data is used in calls into unsafe code, the probability of a security vulnerability increases even more.

Using Unverifiable Code

As mentioned earlier in the "Myth 1" section, an application written in managed code can actually define code that is unverifiable. Normally, managed code is verified by the CLR to check for such items as type safety and memory management. When code is declared unsafe, the CLR does not verify it. Thus, it is up to the developer to make sure the code does not introduce any security risks, and it's up to the tester to verify this.

For the application to compile unverifiable C# code, the /unsafe compiler option must be specified. Then, an application can define a whole method using the unsafe modifier, such as this:

private unsafe void UnsafeFunction(byte[] data, int size)
{
    // "Unsafe" code can cause a buffer overflow.
}

Also, the application can define a block of code as unsafe using the following syntax:

private void AnotherUnsafeFunction(byte[] data)
{
    unsafe
    {
        // "Unsafe" code block can also cause integer overflows.
    }
}

Any code inside the unsafe block is more susceptible to buffer overflows, integer overflows, format string problems, and even out-of-bounds errors for arrays; however, that does not mean the code necessarily contains those bugs. An unsafe block just means that pointers can be used to access memory, which can lead to security problems.
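As a hypothetical illustration of the kind of pattern to look for, the following sketch writes through a raw pointer without validating the destination size:

// Nothing guarantees that dest points to a buffer of at least
// data.Length bytes, so a long input overflows the destination.
private unsafe void CopyToBuffer(byte[] data, byte* dest, int destSize)
{
    for (int i = 0; i < data.Length; i++)
    {
        dest[i] = data[i];   // BUG: destSize is never checked.
    }
}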

Testing for buffer overflows is discussed in Chapter 8, but you can also check whether your application is compiled with the /unsafe option and review all of the unsafe code blocks for security issues.

Using Win32 APIs

It is possible for managed code to call functions that are exported from an unmanaged dynamic-link library (DLL). The CLR won't verify any unmanaged code that your application might call into. Just like when using the unsafe code block, calling unmanaged APIs from managed code means your application could be vulnerable to attacks, such as buffer overflows. Unmanaged APIs are called through a platform invoke, also called PInvoke.

To PInvoke a Win32 API, System.Runtime.InteropServices.DllImport is used, such as in the following example:

[System.Runtime.InteropServices.DllImport("advapi32.dll")]
public static extern bool ConvertStringSidToSid(string stringSid,
                                                out IntPtr sid);

Because some APIs are designed under the assumption that the caller is responsible for the sanitization of the input, they might not handle untrusted input in a secure fashion. If the attacker can ever supply data to the APIs, appropriate data validation should be done prior to calling the API. If your application uses PInvoke to call a Win32 API, you should see whether data from a potential attacker is ever used when calling the API and test any data validation prior to calling the API. Normally, attacker-supplied code cannot call these APIs because there is a demand for the UnmanagedCode permission to make sure that all of the callers have the necessary permissions to call into these APIs. However, much like an assert can be used to stop a stack walk from verifying all of the callers, the application can also use the SuppressUnmanagedCodeSecurity attribute to disable the demand for the permission. If you find code that uses this attribute, investigate further.

Managed code can also call into native code using COM interoperability or the It Just Works (IJW) mechanism. Refer to http://msdn2.microsoft.com/en-us/library/ms235282.aspx for more information on how to call native functions from managed code.
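For example, a declaration like the following hypothetical sketch removes the UnmanagedCode demand entirely and is worth flagging during a review:

// The attribute below disables the demand for the UnmanagedCode
// permission, so callers are no longer checked before this native
// API can be reached.
[System.Security.SuppressUnmanagedCodeSecurity]
[System.Runtime.InteropServices.DllImport("kernel32.dll")]
public static extern bool DeleteFile(string fileName);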

Marshaling Data

Any time data is going back and forth between COM objects and managed code, or a PInvoke is used, there is an opportunity for the data to be handled improperly. When an attacker controls this data, a security vulnerability is highly probable. If you are familiar with different programming languages, you already know that each has its own data types. Marshaling data refers to converting or copying the data so that something else can use it properly. For example, if your managed code uses a COM object that passes a complex data type in a method, the data needs to be represented in a way that both the managed code application and the COM object will understand. Mixing programming languages means the data needs to be marshaled to convert from managed to unmanaged types.

Watch out for ways that data can become mangled when the wrong types are used in marshaling. For instance, if the API takes an int and the developer uses a System.UInt32, there is a chance of an integer overflow. You can also search the source for code that uses the Marshal class, which provides several powerful methods that can deal with unmanaged memory and types.
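The following hypothetical declaration shows the mismatch (nativelib.dll and FillBuffer are made up for illustration):

// Suppose the native prototype is: void FillBuffer(BYTE *buf, int size);
// Declaring size as uint lets a caller pass a value such as 0x80000000,
// which the native code interprets as a negative size.
[System.Runtime.InteropServices.DllImport("nativelib.dll")]
public static extern void FillBuffer(byte[] buffer, uint size);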

Finding Problems with Asserts

Pay special attention to any code that asserts a permission. Recall that an assert stops the CLR from walking the call stack to verify any additional callers. The following are a few issues you should look for when an assert is used:

  • Complement an assert with a demand

  • Don't assert too much permission

  • Reduce the scope of the assert

If any of these items are not handled properly by the application, a security bug is likely.

Complement an Assert with a Demand

Because an assert vouches for a permission on behalf of the calling code, the application should also verify the permission of the caller, which can be done by using a demand for another permission. However, it can be difficult to determine which permission to demand. Obviously, if the application demands a permission that is already granted (or that can easily be granted), the demand will not be effective.

For instance, which permission should the application demand if it needs to assert the FileIOPermission? The answer varies, actually. Some applications will demand a custom permission defined specifically for the application. Or an application might simply check to make sure the caller has a strong name identity. If you see an assert without an appropriate corresponding demand, be sure to check whether you can always trust the caller of the function where the permission is asserted.

The following example can help clarify this issue:

[FileIOPermission(SecurityAction.Assert, Unrestricted=true)]
public void CopyFile(string source, string destination)
{
    ...
}

If this method is in a public class in an assembly marked APTCA (which uses the AllowPartiallyTrustedCallers attribute), an attacker is able to call the CopyFile method from untrusted code to copy files. (APTCA issues are covered in more depth later in this chapter.) Here, you can see how a developer might protect the method from untrusted callers. In the following code, a demand for a custom permission is made prior to asserting the FileIOPermission:

[CustomPermission(SecurityAction.Demand, Unrestricted=true)]
[FileIOPermission(SecurityAction.Assert, Unrestricted=true)]
public void CopyFile(string source, string destination)
{
    ...
}

As long as CustomPermission is a subclass of a CodeAccessPermission object and implements the IUnrestrictedPermission interface, only callers that have been specifically granted that permission in their policy will be allowed to call CopyFile. By default, all FullTrust callers are granted the custom permission. Other demands can also be used, such as the StrongNameIdentityPermission; however, you should make sure the demand isn't for a permission that is easily obtained. For example, the default Machine policy has a code group for assemblies executed from the Internet zone that grants the FileDialogPermission. Thus, demanding the FileDialogPermission to protect CopyFile from untrusted callers wouldn't work well because code executing from the Internet zone is already granted that permission.
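As a sketch, a strong name identity demand might look like the following (the public key shown is a truncated placeholder, not a real key):

// Only assemblies signed with the specified public key can call
// CopyFile; the key value below is a placeholder.
[StrongNameIdentityPermission(SecurityAction.Demand,
    PublicKey = "002400000480000094000000060200...")]
[FileIOPermission(SecurityAction.Assert, Unrestricted=true)]
public void CopyFile(string source, string destination)
{
    ...
}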

Don't Assert Too Much Permission

If your application's code only needs to read a file, it shouldn't assert permission for reading and writing to a file. Never assert more permissions than are needed. Imagine if there is a flaw in the application that allows an attacker to call into your assembly. Code that asserts only the least permissions necessary reduces the scope of damage the attacker could cause.

A good indication that an application is granting too broad a permission is when any of the following assert attributes are used:

  • All

  • Unrestricted

  • Unmanaged

Reduce the Scope of the Assert

As mentioned earlier, an assert can be declarative or imperative. A declarative assert affects the entire method, which usually indicates a problem: typically, there is no reason for an assert to span multiple lines of code. Take a look at the following code that uses a declarative assert:

// Assert the RegistryPermission.
[RegistryPermission(SecurityAction.Assert, Unrestricted=true)]
public static void RegistryTest(string untrustedInput)
{
    string keyPath = "Software\\" + untrustedInput;
    RegistryKey key = Registry.CurrentUser.OpenSubKey(keyPath);
    DoSomethingWithRegistry(key);
    key.Close();
}

Aside from some of the fairly obvious problems with this code, such as untrustedInput not being checked for null, what else is wrong? Well, no exception handling is done when calling OpenSubKey, and an exception could occur in DoSomethingWithRegistry that would cause key.Close() not to be called. This situation would result in a leaked handle and could cause a denial of service. Also, if any malicious code could be injected when calling DoSomethingWithRegistry, it would be granted the unrestricted RegistryPermission. Exception filtering, which this chapter discusses later in the section called "Recognizing Poor Exception Handling," could be used to elevate the permissions of the calling code.

An imperative assert for the RegistryPermission could help avoid this security bug. Any time you see an assert that is scoped to the entire method, question why, and push for the scope to be reduced to just the code that actually needs the permissions. If imperative asserts are used, make sure the code calls CodeAccessPermission.RevertAssert, as shown in the sketch that follows.
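A minimal sketch of the imperative form, assuming the untrusted input is validated first, might look like this:

public static void RegistryTest(string untrustedInput)
{
    // ... validate untrustedInput before doing anything privileged ...
    new RegistryPermission(PermissionState.Unrestricted).Assert();
    try
    {
        // Only this block runs with the asserted permission.
        RegistryKey key = Registry.CurrentUser.OpenSubKey("Software\\Example");
        DoSomethingWithRegistry(key);
        key.Close();
    }
    finally
    {
        // Remove the assert as soon as the privileged work is done.
        CodeAccessPermission.RevertAssert();
    }
}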

Finding Problems with Link Demands

As mentioned earlier, link demands check only the immediate caller for the requested permission. Developers often use a link demand instead of a full demand for performance reasons, because a full demand causes an entire stack walk. However, because a link demand does not verify all of the callers, it also poses a security risk. The main problem with using a link demand is that it can be bypassed easily and unknowingly. The following sections describe how.

Calling the Public Method Does Not Enforce the Link Demand

Because a link demand enforces that only the immediate caller has the permission needed, another public method in the same or a fully trusted assembly might be able to expose the same functionality. The following C# code is an example:

public class SecurityHazard
{
    ...
    [FileIOPermission(SecurityAction.LinkDemand, Unrestricted=true)]
    public void WriteFile(string filename)
    {
        // Write data to the file.
        ...
    }
}

public class BuggyClass
{
    private string filename;

    public BuggyClass(string filename)
    {
        this.filename = filename;
    }

    public void Initialize()
    {
        SecurityHazard hazard = new SecurityHazard();
        hazard.WriteFile(this.filename);
    }
}

In this example, an attacker could bypass the link demand for the FileIOPermission by creating a BuggyClass object with a path to a file and then calling the Initialize method. Because BuggyClass is in the same assembly, the link demand will succeed as long as the assembly is granted the FileIOPermission. To prevent this bug, BuggyClass needs to promote the LinkDemand for the FileIOPermission, or the access modifier for Initialize should be changed to internal if it shouldn't be called from other assemblies.
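A hypothetical fix that promotes the link demand might look like this:

// By repeating the link demand on Initialize, the callers of
// BuggyClass must also have the FileIOPermission.
[FileIOPermission(SecurityAction.LinkDemand, Unrestricted=true)]
public void Initialize()
{
    SecurityHazard hazard = new SecurityHazard();
    hazard.WriteFile(this.filename);
}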

To find these issues during a code review, do the following:

  1. Look for all the link demands.

  2. For each link demand, identify the member that is protected by the LinkDemand, and identify all code using the member.

  3. If the code that uses the member with the LinkDemand is accessible to another assembly, ensure the member promotes the link demand or the caller has permission.

Obviously, this can be a fairly tedious task and can easily lead to an oversight. Instead of a manual code review, you can enable the FxCop rule UnsecuredMembersDoNotCallMembersProtectedByLinkDemands to catch these issues.

The Interface the Method Implements Does Not Have the Link Demand

Similar to using a public method that calls the method that does the link demand, a method that implements an interface can allow the link demand to be bypassed if the method is accessed through the interface instead. For example, say an assembly has the following interface and class:

public interface IByPassLinkDemand
{
    void DoSomething(string filename);
}

public class SecurityHazard : IByPassLinkDemand
{
    ...
    [FileIOPermission(SecurityAction.LinkDemand, Unrestricted=true)]
    public void DoSomething(string filename)
    {
        ...
    }
}

Even if all of the callers of DoSomething promote the link demand, a malicious user can still bypass the permission check by accessing the method through the interface. The following code shows how this is accomplished:

public void DoSomethingMalicious()
{
    SecurityHazard hazard = new SecurityHazard();

    // If the following is uncommented, it throws a security exception.
    //hazard.DoSomething("somefile.txt");

    // However, the following causes the link demand to succeed.
    ((IByPassLinkDemand)hazard).DoSomething("somefile.txt");
}

As you can see, DoSomething could not be called directly if the assembly containing DoSomethingMalicious is not granted the FileIOPermission. However, the malicious code is able to access DoSomething through the IByPassLinkDemand interface. Because the interface's definition of DoSomething does not place a LinkDemand for the FileIOPermission on its callers, the attacker can cast the hazard object to the interface and then call DoSomething.

For issues such as these, the method definitions in the interface must have the link demands as well. Alternatively, you can ensure that all the methods that implement the interface have a full demand on the methods. The FxCop rule VirtualMethodsAndOverridesRequireSameLinkDemands could be used to catch these issues.
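For example, placing the same link demand on the interface definition closes this hole:

public interface IByPassLinkDemand
{
    // The link demand on the interface declaration is enforced even
    // when DoSomething is called through the interface.
    [FileIOPermission(SecurityAction.LinkDemand, Unrestricted=true)]
    void DoSomething(string filename);
}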

Recognizing Poor Exception Handling

Managed code generally uses exceptions to handle application errors. If an error occurs while a certain operation is performed, an exception is thrown. It is up to the application how the exception is handled. Exception handling is done using the try, catch, finally, and throw keywords. Look at the following ASP.NET example:

private void DoDivision()
{
    string input = Request.QueryString["numerator"];

    // Ensure input has a value.
    if (String.IsNullOrEmpty(input))
    {
        // Throw an exception if the argument is missing.
        throw new ArgumentNullException();
    }

    try
    {
        int result = 10 / Int32.Parse(input);
        Response.Write(String.Format("10 / {0} = {1}", input, result));
    }
    catch (Exception)
    {
        Response.Write("Error occurred while trying to divide.");
    }
    finally
    {
        Response.Write("Finished calling DoDivision().");
    }
}

DoDivision gets a value from a URL QueryString, which an attacker could supply. It then does a simple check to make sure the value is not null or empty; if it is, an ArgumentNullException is thrown. Next, the method parses the value as an integer and divides 10 by it. It catches any exception that occurs during those operations and outputs an error message. Before the function returns, the code in the finally block always executes and prints a message stating it is done.
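Note that catching Exception swallows every possible failure. A narrower sketch would catch only the exceptions these operations can actually throw, for example (OverflowException is omitted for brevity):

try
{
    int result = 10 / Int32.Parse(input);
    Response.Write(String.Format("10 / {0} = {1}", input, result));
}
catch (FormatException)        // input is not a valid integer
{
    Response.Write("Please supply a numeric value.");
}
catch (DivideByZeroException)  // input was "0"
{
    Response.Write("Cannot divide by zero.");
}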

Improperly handling exceptions can lead to different types of security vulnerabilities, such as these:

  • Elevation of privilege

  • Information disclosure

Finding Elevation of Privilege Bugs

Improper exception handling can cause an elevation of privilege (EoP), not just of the user account the code is running as, but also of the permissions the code executes with. Remember, managed code offers code access security in addition to the user security provided by the system. For an exception to cause an EoP, the following conditions must be true:

  • The code must be elevating a certain privilege. For instance, the code could impersonate a different user to elevate user permissions.

  • An attacker must be able to cause an exception in the code that is elevating privileges.

  • For EoP of code access security, the attacker must also be able to execute code that calls into the application.

Suppose a piece of code changes the user context of the running code. This is commonly done by impersonating another account; or, in some cases, the application is already impersonating a low-privileged user and calls RevertToSelf to elevate its permissions to the context of the running process. In either case, if the attacker is able to cause an exception while the application is running under a different user context, the attacker might cause an EoP. Look at the following sample code:

public void DoSomething(string untrustedInput)
{
    SecurityContext securityContext = RevertToSelf();
    try
    {
        PerformHighPrivilegeAction(untrustedInput);
    }
    catch (ArgumentException e)
    {
        LogException(e);
    }
    finally
    {
        securityContext.Undo();
    }
}

In this example, the code reverts to the identity of the running process (such as a service account) and performs an action that the user couldn't have done. It also catches a specific exception and calls a method that logs it. But what happens if PerformHighPrivilegeAction throws an exception other than ArgumentException? The finally code block that undoes the RevertToSelf is executed, right? Maybe. A technique known as exception filtering might allow the caller to handle the exception with its own code, because the framework will walk up the call stack and look for the handler of the exception. For instance, an attacker might use the following code and handle the exception before the finally code block is called. Because the RevertToSelf isn't undone yet, the attacker's code is now running with the higher privileges.

public void MaliciousCode()
{
    try
    {
        // target is an instance of the class shown earlier.
        target.DoSomething(null);
    }
    catch
    {
        // Code execution is running at higher privileges.
        ...
    }
}

The same concept applies for EoP of code access security, and such EoP is probably more likely to occur, especially in a partially trusted environment like a Web application.

Look through the application's source for places that elevate permissions, such as when the user context in which the code is running changes or when the application grants any permissions. Code that can cause an exception while the permissions are elevated must be reviewed to see whether the exception is handled by the application using a catch block. Otherwise, any malicious code that can cause the exception will be able to catch the exception in its own code, and that malicious code would have the elevated permissions.

Finding Information Disclosure Bugs

An exception that is caught and logged can contain a lot of useful information that, if disclosed, an attacker can take advantage of. Refer to Chapter 7, "Information Disclosure," for more details on how an attacker might be able to use certain data for malicious purposes. For example, if a Web application catches all exceptions and does a Response.Write to output the error, the output might include sensitive information. Imagine if the Web application is making a database connection and the SQL Server connection information is disclosed when an exception occurs, especially if it outputs a connection string that includes the user name and password used to connect to the database.
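The following hypothetical snippet shows the pattern to flag (connectionString stands in for whatever the application actually uses):

try
{
    SqlConnection connection = new SqlConnection(connectionString);
    connection.Open();
}
catch (Exception e)
{
    // BUG: e.ToString() can reveal the connection string, server
    // names, and a full call stack to the attacker.
    Response.Write(e.ToString());
}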

Look at the exceptions that are caught and see what the application is doing with the exception information. Does it output the data to the user? Does the output contain sensitive information? Does it reveal the call stack? Because error cases aren't always hit when testing an application, look through the code to determine what the error cases are doing and how an attacker might be able to trigger the code path to cause the exception.


