The Need for Code Access Security


Back in the old days, before the Internet was mainstream, a user or administrator typically installed all software into fixed locations on desktop machines, servers, and local network shares. Most organizations with significant investments at stake made certain that the administrator understood the relevant security issues. Corporate security standards, auditing procedures, disaster recovery planning, and end-user training helped reduce the risk further. Even in that relatively closed and controlled environment, there were a few notable security risks. On the desktop, the threats came mainly in the form of boot sector viruses, executable file viruses, and trojan horses. These nasty executables spread primarily via relatively slow manual means (i.e., via diskette over the so-called "sneaker net") or over isolated local area networks. Servers were used in isolated client/server configurations that were relatively free from attack, but the risk was not zero there either. For example, a disgruntled employee with a powerful account could leave a time bomb behind that could do substantial damage to a server.

Now, with virtually every computer attached to every other computer over the Internet, threats come in many new forms, including executable downloads, remote execution, email attachments, and buffer overrun attacks. Unfortunately, the Internet has created several new opportunities for nasty code to proliferate. First, the speed at which rogue code can travel has increased due to higher bandwidth and interconnectivity. Second, because of the much larger number of services and protocols that now exist, the number of vulnerable targets has grown tremendously. Third, an enormous amount of how-to information is available on the Internet in the form of hacker Web sites and newsgroups, enabling a much wider audience of potential attackers. Gone are the days when you had to be a genius evildoer to figure out how to mount a cyber attack. Sadly, you now only have to be an evildoer of average intelligence!

Cost Versus Risk

How far should one go with implementing security? This is, of course, a purely economic question. The amount of effort or money you expend on protection should be dictated by the value of your data or, more precisely, by the cost to your organization if the data were destroyed or compromised. For higher-risk scenarios, senior management is concerned with mission-critical issues such as system recovery and business continuity, which can entail considerable expense. In the most extreme case, the tradeoff is between the cost of total annihilation and the cost of a mainframe housed in a nuclear bomb-hardened bunker. However, the programmer is typically concerned with more mundane risks and costs, where the question might be whether a particular assembly should be allowed to read a particular environment variable or the files in some directory. Nevertheless, even in this less extreme situation, the same tradeoff question must be considered: How much security effort (i.e., cost) should be expended given the perceived risk?

It may seem on the surface that the cost of implementing mundane security logic in a program is negligible; however, that is certainly not the case. Software development incurs significant additional costs during the design, coding, testing, and maintenance phases for every additional feature that you implement. Unfortunately, security features must compete against every other desirable application feature. That security represents a real and significant cost is attested to by the fact that many software developers tend to implement as many nifty features as possible at the expense of critical security features. In the past, Microsoft, Oracle, and others have been criticized by some security experts [1] for focusing too much on bells, whistles, and ease-of-use features at the expense of solid security. It is my opinion that this is now improving across the industry. The choice is yours. If you want your code to be more than just fancy whiz-bang Swiss cheese, you cannot focus entirely on cool features and ignore security.

[1] Some top security experts feel that because the marketplace has rewarded cool features over security, software vendors have not treated security seriously enough. They even claim that major software vendors have treated major security holes as nothing more than public relations problems. Some are proponents of the full-disclosure movement, in which the security community makes discovered vulnerabilities public. Full disclosure is a double-edged sword, since it motivates software vendors to fix security issues quickly, but it also arms hoodlums with information that can help them in their malicious efforts. It is hard to say whether full disclosure was the only factor in play, but it does seem that Microsoft is now truly committed to security. This is evidenced by the fact that .NET has such great security programming support. To read one of the many fascinating articles by Bruce Schneier on the issue of features versus security, see http://www.counterpane.com/crypto-gram-0202.html.

The Range of Risks

Unfortunately, there is an enormous range of possible risks to be considered for systems and data, including rogue code, password cracking, packet sniffing, and denial-of-service attacks. Even physical attacks, such as theft or destruction of media, as well as espionage and con artistry are possibilities. In extreme cases, you may need to consider dealing with natural disasters and terrorist attacks. Although many attacks are possible, and they should all be considered, in this chapter we focus only on rogue code attacks, since it is the only type of attack that .NET security can effectively address.

Let's consider some of the main threats in the rogue code category. Stack-overrun [2] attacks have proven to be a serious risk, especially on the server side. If you are interested in seeing a demonstration of how a stack-overrun attack actually works, see the Win32ProjectBufferOverflow example program in Appendix A. Clients send requests to servers, and those requests, if cleverly constructed, may be able to exploit certain types of carelessly written code in the server. Unmanaged C/C++ code is notoriously prone to buffer overrun and type-casting bugs that can be exploited by evil client programs. The most famous example of this was the Morris Internet worm. [3]

[2] The Code Red II worm is an example of a stack-overrun attack, exploiting a stack-overflow bug in the IIS indexing service. An unchecked buffer in the URL handling code in the Index Server ISAPI extension DLL (Idq.dll) is the key vulnerability in this case. The unchecked buffer is used to overwrite the call stack with specially crafted code, and the target application is then tricked into executing it. By sending a specially constructed URL request to IIS (unpatched versions 4.0 or 5.0 with Indexing services enabled), an attacker can thereby execute arbitrary code on that server machine. Compounding this risk is the fact that Index Server runs under the powerful System account, giving the rampaging attacker a great deal of power over the server.

[3] A worm is a program that automatically duplicates and propagates itself to other host computers on the Internet. The first and most famous example is the Morris worm, written by a computer science graduate student at Cornell University. It was released on November 2, 1988, quickly infecting approximately 10 percent of all Internet hosts and bringing them to their knees. This worm took advantage of several security holes in certain BSD UNIX programs, including a buffer overflow bug in the finger daemon. Interestingly, there is evidence that the worm's author was only interested in researching the worm concept, and he may not have intended the enormous damage that resulted. The code contained intentional self-limiting features designed to reduce the potential damage, but, unfortunately, a bug in his code prevented this safety feature from working properly. The good news is that this proves that even brilliant minds write bugs, which should be comforting to us mere mortals!

It is very easy to accidentally introduce such security holes into traditional unmanaged C and C++ code, and extraordinary care must be taken to avoid all the possible pitfalls. Code that is written for a managed runtime environment, such as C# or Java, [4] is inherently much safer [5] and requires no special vigilance on the part of the programmer. This is because managed languages generate code that ensures data is properly initialized, detects and prevents buffer overruns at runtime, and disallows unsafe type casting at compile time. Of course, C was the language used to build virtually all the traditional Internet host programs. As more server code is written as managed code in the future, using languages such as C#, VB.NET, and Java, a great deal more protection against this type of attack will exist.
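The following minimal C# sketch (the class and variable names are invented for illustration) shows two of these protections in action: an out-of-bounds array write is caught by the CLR before it can corrupt adjacent memory, and a bogus downcast from object is rejected based on the object's actual runtime type.

using System;

class ManagedSafetyDemo
{
    static void Main()
    {
        int[] buffer = new int[4];
        try
        {
            // Every array access is checked against the array's length,
            // so an overrun raises an exception instead of silently
            // overwriting adjacent memory.
            buffer[10] = 42;
        }
        catch (IndexOutOfRangeException)
        {
            Console.WriteLine("Buffer overrun prevented at runtime.");
        }

        object boxed = "not a number";
        try
        {
            // A cast from object is verified against the actual runtime
            // type; casts between unrelated declared types are rejected
            // outright at compile time.
            Console.WriteLine((int)boxed);
        }
        catch (InvalidCastException)
        {
            Console.WriteLine("Unsafe cast prevented at runtime.");
        }

        // int uninitialized;
        // Console.WriteLine(uninitialized);  // compile-time error:
        //                                    // use of unassigned local
    }
}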

[4] The Java language has mostly the same security features as C#, including type safety and bounds checking. The Java runtime environment, known as the Java Virtual Machine (JVM), also supports the same basic security features as the .NET CLR, including code verification and managed code execution. The Java class library also supports a set of security classes that are somewhat similar in concept and functionality to the .NET Framework security classes.

[5] It may be argued that it is possible to avoid buffer overruns in C/C++ by avoiding a few high-risk API functions, and some C/C++ compilers can generate stack boundary checking code for each function call. However, using a managed runtime environment such as .NET or Java provides these protections automatically without the need for any heroic programmer vigilance.

Internet mobile code, such as email [6] attachments, scripts, and ActiveX controls, has also been a major source of risk on the client side. These threats come in several forms, including trojans, [7] logic bombs, [8] the traditional virus, [9] and even good old-fashioned bugs. Fortunately, by using CAS in the development of your applications, you can achieve effective protection from these forms of malicious code as well. Managed code also helps client-side code to be more reliable, secure, and bug-free.

[6] The Nimda worm spreads primarily via email, but it can also attack IIS through the backdoor left behind by the Code Red II worm, as well as through any unprotected file shares that it discovers. Nimda arrives as an HTML email with an executable attachment, and IE is tricked into executing the attachment automatically when the HTML is rendered. Unfortunately, just opening the email can infect the machine, even if the user never explicitly opens the attachment. The worm then propagates from an infected machine by sending copies of itself to other machines via email. Unpatched IE versions 5.01 and 5.5 and IIS versions 4.0 and 5.0 are vulnerable.

[7] A trojan is a program that purports to be beneficial or useful, but in fact performs an additional hidden and possibly malicious action. Unlike viruses and worms, a trojan does not typically replicate itself programmatically. There are many examples of trojans, but probably the most interesting of all is described by Ken Thompson (known as the father of Unix) in his article "Reflections on Trusting Trust," found at http://www.acm.org/classics/sep95/. He describes how he built a C compiler that installs a trojan login backdoor into a Unix build. The cool thing that he points out is that he did it in such a way that there is no trace of the trojan in any of the source code in either the C compiler or the Unix system. That means you can't be completely certain about any software, even if you have the source code! Scary, huh?

[8] A logic bomb is a secretly deployed time-activated program that either causes severe damage to its host or quietly provides a backdoor for future system access. One of the most notorious cases involved an employee who was demoted after 11 years as chief programmer at a defense contractor company in New Jersey. The disgruntled employee retaliated by quickly deploying a logic bomb that deleted much of the company's most critical data after he left.

[9] A virus is a code fragment that inserts itself into other programs, modifying them in a way that causes further virus replication. Some viruses infect ordinary executable program files. Other viruses infect sensitive operating-system disk sectors, such as the system boot record. Examples of famous viruses are Brain, Stoned, and Michelangelo.

Assembly Trustworthiness

The fundamental question that CAS addresses is this: What code should you trust and to what degree should you trust it? The problem is that code can now originate from many sources, representing varying degrees of risk. This becomes an issue wherever there is a concern that a particular piece of code may either maliciously or accidentally do some type of damage to your system or data, or leak confidential information in some way. For example, you might trust code that you have written to have certain access privileges that you would not entrust to code written by some other software developers. Or, you may trust an assembly to carry out certain limited actions if it was written by a particular company, but not if it was written by some other company. You probably also have varying degrees of trust based on where the code physically originated, trusting locally deployed assemblies that have been installed by a trusted administrator over assemblies that are installed by a mere user or automatically deployed via a Web browser.
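In .NET terms, these trust factors surface as the evidence (such as the zone, URL of origin, and any strong name or publisher certificate) that the CLR attaches to each loaded assembly and feeds into CAS policy. The following minimal C# sketch (the class name is arbitrary) simply lists the kinds of evidence associated with the currently executing assembly.

using System;
using System.Reflection;

class EvidenceDemo
{
    static void Main()
    {
        // The CLR gathers evidence for every assembly it loads;
        // CAS policy maps that evidence to a set of granted permissions.
        Assembly asm = Assembly.GetExecutingAssembly();
        foreach (object item in asm.Evidence)
        {
            Console.WriteLine(item.GetType().FullName);
        }
    }
}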

This problem becomes more complex when you consider that a single application may contain a combination of assemblies, which fall into varying degrees of trustworthiness. With many assemblies working together in a single application, it could happen that a trusted component is tricked or coerced into doing evil by less trusted components. [10]

[10] This is known as the luring attack, which CAS is particularly well suited to deal with.

CAS enables these kinds of risk assessments to be made on the basis of many factors concerning the trustworthiness of individual assemblies. CAS also allows you to customize your level of trust at a finer level of granularity than was possible in traditional programming environments. For example, you can choose your degree of trust at the assembly level, class level, or even at the individual method level.
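For instance, a declarative CAS attribute can be attached to a single method. The following C# sketch is hypothetical (the class, method, and directory path are invented for illustration); it demands read access to one directory, so callers that have not been granted that file I/O permission cannot successfully invoke the method.

using System.Security.Permissions;

public class ReportGenerator
{
    // Method-level granularity: every caller up the stack must have
    // been granted read access to this (hypothetical) path, or a
    // SecurityException is thrown before the method body executes.
    [FileIOPermission(SecurityAction.Demand, Read = @"C:\Reports")]
    public void LoadTemplate()
    {
        // ... read report templates from C:\Reports ...
    }
}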

The need for CAS becomes especially clear when you consider that code can be used to perform many different tasks, and it is not obvious to users, administrators, or even programmers exactly what operations a particular assembly may attempt. Clearly, a security model that provides access control based only on user accounts, as described in the previous chapter, is insufficient to deal with these new problems. This has become especially true in the modern era of mobile code, remote method invocation, and Web services.

Risks of Calling into Unmanaged Code

It is important to note that for CAS to do its work at all, the executing code must be verifiably type-safe managed code. That means that the CLR must be able to verify the assembly's type safety when it is loaded into memory. Using PInvoke to call into legacy Win32 DLLs is a security risk because we are then on our own, and the CLR cannot help us in any way. Obviously, only highly trusted code should be permitted to use PInvoke, but if the DLL being called uses the Win32 Security API effectively, or if you are calling a clearly harmless Win32 API such as GetSystemTime, then you may decide to allow it. But you must always keep in mind that if you call into unmanaged native code, you are opening a potential security hole. For this reason, calling into unmanaged code requires that you have the unmanaged code permission. If you have ever programmed in C or C++, then you probably know all too well how easy it is to get into trouble with uninitialized variables, invalid pointers, out-of-bounds array indexing, incorrect type casts, memory leaks, and the use of inherently unsafe functions such as strcpy, gets, strcat, and sprintf.
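As a rough sketch of such a call (the class name and output formatting are arbitrary), the following C# fragment uses PInvoke to call the Win32 GetSystemTime function; note that even this seemingly harmless call can only be made by code that has been granted the unmanaged code permission.

using System;
using System.Runtime.InteropServices;

class SystemTimeDemo
{
    [StructLayout(LayoutKind.Sequential)]
    struct SYSTEMTIME
    {
        public ushort wYear, wMonth, wDayOfWeek, wDay;
        public ushort wHour, wMinute, wSecond, wMilliseconds;
    }

    // The CLR cannot verify what happens inside kernel32.dll, so this
    // declaration marks a boundary where CAS protection ends.
    [DllImport("kernel32.dll")]
    static extern void GetSystemTime(out SYSTEMTIME time);

    static void Main()
    {
        SYSTEMTIME now;
        GetSystemTime(out now);
        Console.WriteLine("UTC: {0}-{1:D2}-{2:D2} {3:D2}:{4:D2}:{5:D2}",
            now.wYear, now.wMonth, now.wDay,
            now.wHour, now.wMinute, now.wSecond);
    }
}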

The need to call legacy native code is, however, a fact of life for the foreseeable future, so it is important to architect your applications to limit native code calls to a minimal number of fully trusted assemblies. Then, you can configure security or call the Deny method to disable PInvoke in the majority of your code and use the Assert method to enable PInvoke in the few methods where it is needed. We will talk more about the Deny and Assert methods later. We will see in the upcoming PInvoke example how to configure security policy to enable or disable the permission to call into unmanaged code.
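The following C# sketch illustrates that pattern in rough outline (the class and method names are invented, and it assumes the assembly itself has been granted the unmanaged code permission along with the right to assert): ordinary code paths deny the unmanaged code permission, while one small, carefully audited method asserts it around its single native call.

using System.Security;
using System.Security.Permissions;

class NativeCallGateway
{
    // Most of the application runs with unmanaged-code access denied.
    public static void OrdinaryWork()
    {
        SecurityPermission noNative =
            new SecurityPermission(SecurityPermissionFlag.UnmanagedCode);
        noNative.Deny();    // demands for unmanaged code now fail here
        try
        {
            // ... ordinary managed work; no native calls allowed ...
        }
        finally
        {
            CodeAccessPermission.RevertDeny();
        }
    }

    // One small, fully reviewed method asserts the permission so that
    // its own native call succeeds even when callers lack it.
    public static void TrustedNativeCall()
    {
        SecurityPermission allowNative =
            new SecurityPermission(SecurityPermissionFlag.UnmanagedCode);
        allowNative.Assert();
        try
        {
            // ... the single, well-audited PInvoke call goes here ...
        }
        finally
        {
            CodeAccessPermission.RevertAssert();
        }
    }
}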


