Thinking Like a Security Expert: How to Improve the Security of Your Designs from Day One


You are now in a daunting position: You need to design for security in a new software component you're about to develop, but the concept of security is a slippery one. How exactly do you "design for security"? Where do you even begin?

Remember our discussions of trust and responsibility in the previous chapter? When we look at the problem in those terms, we see that our principal concern is interacting with code of a lower (or undetermined) level of trust. The system administrator will handle assigning trust through the use of a security policy. The .NET Framework security system will take care of sandboxing each unit of code (that is, each assembly) into the appropriate trust level. It's your responsibility to ensure that an untrusted party cannot leverage your trust level for its own ends by using your code as a proxy, effectively working outside the sandbox set for it.

This implies that most of the design for a secure system will focus on the boundaries of your code: the public classes, interfaces, and methods plus input files, network connections, and any other means by which information and/or commands may be passed to your code. Only your interactions with code of lesser trust need to be scrutinized: When interacting with code of higher trust, it is that code's responsibility to ensure that the rules aren't broken.

Now let's get into the right frame of mind. The right mindset for security can be summed up in two words: paranoia and conservatism. These qualities won't necessarily make you the most popular developer around, but then again neither will a security breach in your product.

Paranoia: Designing Defensively for the Worst-Case Scenario

Paranoid developers believe that everyone is out to get them and is trying to find security weaknesses in the product that will grant a toehold into the system itself. While for the most part this is not the case, paranoia is nevertheless a pragmatic standpoint for a designer to take. Code that is not malicious will never exploit a security hole and therefore the only interesting code from the security perspective is malicious code.

It pays to assume the worst-case scenario: Not only are all people malicious hackers, but they're smart, malicious hackers with full access to your source code. It's a good idea to be paranoid for a number of reasons:

  • There are a lot of very smart hackers out there. Just because they're destructive and malicious doesn't mean that they can't think intelligently. Consider buffer overrun attacks. Some of these attacks (which rely on overflowing internal buffers in an application so that the hacker can write data into another part of the program) are very elegantly constructed indeed, and require a great deal of time, effort, and knowledge to devise.

  • It is very hard to keep secrets. Any security mechanism that relies on simple obscurity (a secret key built into the application, for instance) is doomed to failure. Some hackers are smart enough to find out secrets from the application itself, via reverse engineering. (In fact, finding keys within a program is actually reasonably easy due to properties of the key itself.) Your company may sell the source to external parties or there may be lapses in internal security (either unintentional or malicious). There are so many holes through which this information can leak, it's hard to plug them (and know they're plugged) with any degree of confidence.

    All this doesn't mean that you shouldn't make it difficult for hackers to understand what's going on, of course. Just don't rely on it for any part of your security story.

  • Once someone has found a security hole, information about the attack spreads quickly. Hackers (at least a certain class of them) like to publicly boast of their exploits. Therefore, details of their attacks very often become available through channels such as the Internet within a very short period of time. This means that it takes only one smart hacker to identify a weakness before a hundred hackers are trying to exploit it. As some email viruses in the last couple of years have shown, elaborate and carefully constructed attacks can be mutated into new attacks with relatively little imagination or skill.

In practice, what does this mean to you? The key points to remember are listed here:

  • Think of every point of entry into your code (public method, input file, and so on) as a potential vulnerability. Consider how the API might be abused (hackers might not call a method in the way you intended it to be called; they're not under any obligation to follow the rules, after all).

  • When looking for attacks, always assume that the hacker has access to at least as much information as you. If there's an embedded secret such as a key, assume that the hacker will find it. Likewise, a bizarre sequence of events or preconditions (certain error paths being taken, for example) that happen to lead to lowered security, no matter how Byzantine, must be considered important. If you found it, so will a hacker.

Conservatism: Limiting the Scope of Your Design to Reduce the Likelihood of Security Flaws

The key ideas here are that "less is better" and that the safest answer is always "no." Whereas paranoia makes you suspicious of every transaction with the outside world, conservatism will compel you to limit such interactions whenever possible. This has two benefits:

  • The safest way to eliminate bugs is to eliminate code. By removing code, you remove its ability to introduce defects into your product. The same holds true for security.

  • Because the number of areas with potential vulnerabilities is reduced, you can focus your attention on the remaining areas during design, implementation, and testing. Thus, vulnerabilities in those areas are more likely to be found and fixed earlier in the product life cycle.

Of course, balance is necessary if your code is to prove useful: The most secure piece of software would refuse to interact with the world in any way at all, but that is unlikely to be practical. Instead, the idea is to prune away avenues of attack that your product does not really need. How severely you do this depends on the product itself. For example, an application likely to be granted a low level of trust (a control designed to be downloaded from the Web, for instance) or one that does not use security-sensitive resources (such as a package of mathematical transforms) has less need of such conservatism than a component that manages file system operations on the user's computer.

One way to approach the problem is to look at the class hierarchy your software exposes. Start at the top (the larger units of code) and work down:

  • Assemblies: If your product is designed to be installed locally (that is, onto the user's hard disk) and contains a mixture of assemblies to install into an application directory (under Program Files, for example) and into the Global Assembly Cache (GAC), consider moving as many assemblies as possible into the application directory. Only install assemblies into the GAC when they need to be shared among multiple applications.

  • Classes: By default, classes should not be marked public. Only expose those classes that need to be exposed. Remember, it's much easier to make these classes public at a later stage than it is to make them private (and potentially break third-party applications) when a security vulnerability in them is discovered.

    Another technique is to seal classes that are not intended for subclassing. Many attacks in the code access security world depend on the attacking code being derived from a poorly written class (see Listing 24.1). This technique is obviously at odds with some object-oriented programming practices: many times code is derived from a class just to add additional context that the caller wants to associate with each instance. The designer of the code needs to weigh the vulnerability of the class versus the likelihood of useful subclassing possibilities.

    An alternative to sealing a class is supplying a class-level security inheritance demand. This is a form of declarative security that ensures that code derived from the given class comes from an assembly granted the requisite permissions (see Listing 24.2). This is covered in greater detail in the next chapter.

  • Methods and fields (and other class members): Again, lower the visibility of these as much as possible. It's often tempting to make class members internal (using the C# terminology) so that other classes in the same assembly can access them directly. On the face of it, this is not an issue, because no wider access is granted. However, it does make the job of tracing potential code paths through the code base that much harder, so it's wise to leave such members marked private until wider access is explicitly needed.

    Another common strategy is to make all fields private and allow access only through a property, as sketched in the short example following this list. This has a couple of advantages (apart from allowing the implementation of the field to change in the future): You can set different levels of access for getting and setting the field, and the property can use Code Access Security to control access. (Fields cannot be directly protected in this way.)

    Be careful when creating virtual methods. Malicious code may override them with unexpected results (see Listing 24.1 for an example). If there is no real need to customize the behavior of the method based on the class, remove the virtual keyword. Otherwise, a couple of alternative options exist. If the method is virtual only for internal reasons (you have a type or types deriving from a base type and need to customize the behavior of this method, but don't need to expose this to outside code), the method can be sealed in the most derived types. This will prevent code derived from these types from providing a new implementation of the method. Alternatively, you can use a method-level inheritance demand (similar to the class-level demand described previously) to ensure that only code with the given permissions can override this method. See Listing 24.2 and the next chapter for more details.
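To make the property-based approach concrete, here is a minimal sketch. The SecretHolder class and the use of EnvironmentPermission are purely illustrative assumptions and do not appear in the listings below; in a real design, you would substitute the permission that actually guards the underlying resource.

 using System;
 using System.Security.Permissions;

 // Illustrative only: a private field exposed solely through a property.
 public class SecretHolder
 {
     // The field itself is never exposed, so its representation can
     // change later without breaking callers.
     private String secretData = "This might contain a password";

     public String SecretData
     {
         get
         {
             // Reading is treated as harmless in this sketch, so no
             // security check is made here.
             return secretData;
         }
         set
         {
             // Unlike a field, a property accessor can run code, so it
             // can use Code Access Security to gate writes.
             // EnvironmentPermission is used purely as a stand-in here.
             new EnvironmentPermission(PermissionState.Unrestricted).Demand();
             secretData = value;
         }
     }
 }

Because all writes funnel through the set accessor, the class has a single place to apply whatever policy it needs: a permission demand, argument validation, or simply a more restrictive accessibility level than the getter.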

Listing 24.1 A Poorly Written Base Class and a Derived Class Taking Advantage of It
 using System;

 public class Base
 {
     protected String secretData = "This might contain a password";

     public virtual void SetSecretData(String data)
     {
         secretData = data;
     }
 }

 public class Derived : Base
 {
     // First problem: subclasses can access protected members.
     public String GetSecretData()
     {
         return secretData;
     }

     // Second problem: if the data is ever explicitly set, it can be
     // intercepted by a malicious subclasser because the set method is
     // virtual.
     String mySecretData;

     public override void SetSecretData(String data)
     {
         mySecretData = data;
     }
 }
Listing 24.2 Various Techniques for Securing a Base Class
 using System;
 using System.Security.Permissions;

 // The following code secures the Base class in a number of different ways.
 // In practice, it's not necessary to use all of these techniques at once,
 // they're shown here just for illustrative purposes.

 // We assume you've created a permission -- SecretDataAccessPermission -- to
 // control access to the sensitive data. Alternatively, one of the standard
 // code access security permissions (e.g. FileIOPermission) could be used if
 // it makes sense with respect to the resource being protected.

 // Prevent this class being subclassed by code without the requisite
 // permission.
 [SecretDataAccessPermission(SecurityAction.InheritanceDemand,
                             Unrestricted=true)]
 // Or seal the class to prevent any subclassing at all.
 public sealed class Base
 {
     // Make internal data private.
     private String secretData = "This might contain a password";

     // Require implementors of SetSecretData to possess the required
     // permission.
     [SecretDataAccessPermission(SecurityAction.InheritanceDemand,
                                 Unrestricted=true)]
     // In this case we can secure SetSecretData by simply removing the
     // virtual keyword. If this method was instead inherited from a
     // parent class and was required to be a virtual override, it can
     // still be secured by adding the keyword 'sealed', so that
     // subclassers cannot re-implement the method.
     public void SetSecretData(String data)
     {
         secretData = data;
     }
 }

The techniques demonstrated in Listing 24.2 cover the most common ways that your code is controlled by the outside world, but don't forget that other modes of information transfer often exist. These include channels such as the file system (your application reading settings from a .ini file) or the network (a server taking requests from a client).

Remember that these sources are beyond the scope of the .NET Framework security system: It can't guarantee that they're safe and/or tamper-free. Paranoia tells you that hackers will modify the data in any way possible to cause undesirable effects, maybe in ways that don't seem to make sense. For instance, well-behaved data sources might always format the data in a specific way, but unless you verify this at the receiving end, a hacker can leverage the resulting unpredictable behavior as part of an attack. Don't forget that even if a protocol isn't published, a hacker can still reverse engineer it.

Actions that you can take to curtail this sort of mischief include

  • Limiting where the input comes from. An initialization file may need to come from a specific directory (maybe protected with an operating system access control list) or network requests might be allowed only from specific hosts. For instance, you may require that an initialization file be read from the system directory (where everyone can read the file, but only an administrator can write it).

  • Verification of the sender/creator of the data. The data may be digitally signed with the private key of the sender. This allows your code (together with the corresponding public key) to validate the sender's identity. Your code can then determine whether to trust the sender not to have sent malicious data.

  • Verification of the data itself. This is the most flexible option, but also the hardest to implement. In anything except the simplest protocol, the ways in which data can be maliciously altered are almost innumerable. It's very easy to define verification rules that appear to work (since most test cases won't be trying to maliciously compromise the system), but proving that they work is harder. Exhaustive testing is also difficult, especially given that most test cases should result in a failure condition.

    Take the seemingly simple example of verifying a filename. (This example is valid for the case where the filename is passed as a parameter to a public method as well.) If the filename is just taken as is, there could be many different effects: The file might be nonexistent where the code does not expect it to be and the resulting error path might expose vulnerabilities in code that hasn't been as highly tested or inspected. If a verification check is added for file existence, a hacker might provide one of the Windows "special" device names (CON, PRN, and so on) in order to launch a denial-of-service attack or elicit other strange effects. If these names are filtered out, the hacker might try names of well-known system files, trying to gain information about them or destroy them. (The knowledge of a file's existence on the user's hard disk is enough to serve as part of some attacks.) Your code attempts to block this by always prepending a known directory to the filename. The hacker responds by using a filename of the form ..\windows\system.ini.

    As you can see, verifying something as seemingly simple as a filename can be a difficult and error-prone task. In general, data validation should be used as a last resort and the data itself should be kept as simple as possible to aid in its validation. A short sketch combining these defenses follows this list.
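To illustrate how these defenses can be combined, here is a minimal sketch. Everything in it is an assumption made for the example: the SettingsLoader class, the choice of the system directory as the only allowed location, and the use of an RSA/SHA-1 detached signature. A real application would substitute its own locations, key management, and permissions.

 using System;
 using System.IO;
 using System.Security.Cryptography;

 // Illustrative only. The idea is to (a) restrict where the settings file
 // may live by canonicalizing the supplied name and comparing it against a
 // single allowed directory, and (b) verify a detached signature made with
 // a key whose public half is known to the application, before any parsing
 // of the contents takes place.
 public sealed class SettingsLoader
 {
     // In a real application this would be the system directory or another
     // location writable only by an administrator.
     private static readonly String allowedDirectory =
         Environment.GetFolderPath(Environment.SpecialFolder.System);

     // Public key of the expected sender, exported ahead of time with
     // RSA.ToXmlString(false). A placeholder value is shown here.
     private const String senderPublicKeyXml = "<RSAKeyValue>...</RSAKeyValue>";

     public static byte[] LoadSettings(String fileName, byte[] signature)
     {
         // Canonicalize the name: this collapses sequences such as "..\"
         // so that a caller cannot escape the allowed directory.
         String fullPath = Path.GetFullPath(
             Path.Combine(allowedDirectory, fileName));

         if (!fullPath.StartsWith(allowedDirectory + Path.DirectorySeparatorChar,
                                  StringComparison.OrdinalIgnoreCase))
         {
             throw new ArgumentException("File must live in the settings directory.");
         }

         byte[] contents = File.ReadAllBytes(fullPath);

         // Verify the detached signature with the sender's public key.
         using (RSACryptoServiceProvider rsa = new RSACryptoServiceProvider())
         {
             rsa.FromXmlString(senderPublicKeyXml);
             if (!rsa.VerifyData(contents, new SHA1CryptoServiceProvider(), signature))
             {
                 throw new CryptographicException("Settings file failed signature check.");
             }
         }

         // Only now is the data handed on for parsing.
         return contents;
     }
 }

Even after these checks succeed, the contents are still structurally untrusted, so whatever parser consumes them must remain defensive as well.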

Another way to look at cutting down the number of vulnerable entry points into your code is to set up specific security "choke points." These are avenues into your code that utilize the .NET Framework security primitives to demand a specific level of trust and open the gateway to a larger set of APIs that no longer need to revalidate that trust.

The way this typically works is that the caller of your code needs to acquire some sort of "handle" (an instance of an object, possibly opaque to the caller). Creating this handle will initiate a security demand to determine whether the caller has the permissions your code deems necessary to perform operations on the handle.

From this point on, the caller provides the handle on every operation. (The operations are usually methods on the handle's class.) Because the .NET Framework guarantees that callers can't generate their own instance of the handle without going through the security choke point we referred to earlier, there is no need to revalidate the caller's permissions on each call. This has benefits both to security (it reduces the number of places where a bug in the code could introduce a security hole) and to performance.

The code in Listing 24.3 demonstrates this technique. An instance of the DatabaseHandle class is required to perform operations against the database, so we can control access to the operations simply by controlling the creation of the DatabaseHandle object itself. All other operations (such as GetStatus) can remain unburdened by additional security, making them both faster and less likely to present a security threat. The creator of a DatabaseHandle then has the responsibility of making sure that the object is never passed to a caller with lower trust.

Listing 24.3 Defining a Security Choke Point
 using System;
 using System.Security.Permissions;

 // Define a class that will represent controlled access to a database.
 public sealed class DatabaseHandle
 {
     // Prevent construction without arguments by making the default
     // constructor private.
     private DatabaseHandle()
     {
     }

     // The following is the only way to create a DatabaseHandle,
     // so we can concentrate our security here.
     public DatabaseHandle(String someData)
     {
         // We're using FileIOPermission here just for convenience,
         // you'll probably want to define your own permission to
         // cover controlling access to a database.
         new FileIOPermission(PermissionState.Unrestricted).Demand();

         // Now we've checked the caller (and all their callers too)
         // have the necessary permission, we can get on with the
         // business of setting up the database connection and other
         // housekeeping tasks.
     }

     // All further (non-static) methods in this class don't need to
     // recheck the trust level of the caller, since they're based on
     // an instance of this class that was obtained through the
     // constructor above.
     public String GetStatus()
     {
         // Placeholder so the listing compiles; a real implementation
         // would query the underlying database connection.
         return "OK";
     }

     // Be careful with static methods, they don't depend on an object
     // instance. If the method is security sensitive, secure it
     // separately.
     public static void SetDefaultOptions(int optionflags)
     {
         new FileIOPermission(PermissionState.Unrestricted).Demand();
     }
 }
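As a usage sketch (assuming the DatabaseHandle class from Listing 24.3 is available; the connection string shown is a placeholder), a sufficiently trusted caller pays the cost of the security demand exactly once, at construction time:

 using System;
 using System.Security;

 public class DatabaseClient
 {
     public static void Main()
     {
         try
         {
             // Construction triggers the demand; if the caller (or any
             // caller further up the stack) lacks the permission, a
             // SecurityException is thrown and no handle is created.
             DatabaseHandle handle = new DatabaseHandle("server=...;database=...");

             // Subsequent calls are cheap: the trust decision has
             // already been made, so no further stack walk is needed.
             Console.WriteLine(handle.GetStatus());

             // The caller now carries the responsibility of never
             // passing 'handle' on to less trusted code.
         }
         catch (SecurityException)
         {
             Console.WriteLine("Caller lacks the permission demanded by DatabaseHandle.");
         }
     }
 }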

The FileStream class in the .NET Framework is an example of this technique. The constructor for the class initiates a demand for FileIOPermission, but none of the other methods on the class needs to recheck this.

There are a couple of caveats regarding this technique. One is that it only works if you expect your callers to have a certain level of trust. If you expect to be called by absolutely anybody, you need to use a technique that doesn't demand high-level permissions. (At the same time, you probably shouldn't be exporting anything like arbitrary file access to such people.)

The other point to consider is that this pushes more responsibility onto your callers. You've already verified that they have the requisite level of trust to perform the operation, but by giving them a handle as described earlier, you've opened up the possibility that they'll mistakenly give that handle out to code of a lower level of trust. This would be like your code handing out a FileStream object to an untrusted caller. It's hard to set any firm rules when trying to take this into account, but in general you have to consider the sophistication of the callers you expect to have. Are they likely to be people familiar with fundamental security principles?

This becomes another exercise in risk assessment; you must balance the risk of a trusted caller accidentally misusing your interface versus the reduction in risk that concentrating security into a choke point gives your code.
