Back in the old days, before the Internet was mainstream, a user or administrator typically installed all software into fixed locations on desktop machines, servers, and local network shares. Most organizations with significant investments at stake made certain that the administrator understood the relevant security issues. Corporate security standards, auditing procedures, disaster recovery planning, and end-user training helped reduce the risk further. Even in that relatively closed and controlled environment, there were a few notable security risks. On the desktop the threats came mainly in the form of boot sector viruses, executable file viruses, and Trojan horses. These nasty executables spread primarily via relatively slow manual means (i.e., via diskette over the so-called "sneaker net") or over isolated local area networks. Servers were used in isolated client/server configurations that were relatively free from attack, but the risk was not zero there either. For example, a disgruntled employee with a powerful account could leave a time bomb behind that could do substantial damage to a server. Now, with virtually every computer attached to every other computer over the Internet, threats come in many new forms, including executable downloads, remote execution, email attachments, and buffer overrun attacks. Unfortunately, the Internet has created several new opportunities for nasty code to proliferate. First, the speed at which rogue code can travel has increased due to higher bandwidth and interconnectivity. Second, because of the much larger number of services and protocols that now exist, the number of vulnerable targets has grown tremendously. Third, an enormous amount of how-to information is available on the Internet in the form of hacker Web sites and newsgroups that enable a much wider audience of potential attackers. Gone are the days when you had to be a genius evildoer to figure out how to mount a cyber attack.
Sadly, you now only have to be an evildoer of average intelligence!

Cost Versus Risk

How far should one go with implementing security? This is of course a purely economic question. The amount of effort or money you expend on protection should be dictated by the value of your data or, more precisely, the cost to your organization if the data were to be destroyed or compromised. For higher risk scenarios, senior management is concerned with mission-critical issues such as system recovery and business continuity, which might entail considerable expense. In the most extreme case, the tradeoff is between the cost of total annihilation and the cost of a mainframe housed in a nuclear bomb-hardened bunker. However, the programmer is typically concerned with more mundane risks and costs, where the question might be whether or not a particular assembly should be allowed to read a particular environment variable or the files in some directory. Nevertheless, even in this less extreme situation, the same tradeoff question must be considered: How much security effort (i.e., cost) should be expended given the perceived risk? It may seem on the surface that the cost of implementing mundane security logic into a program is negligible; however, that is certainly not the case. Software development incurs significant additional costs during the design, coding, testing, and maintenance phases for every additional feature that you implement. Unfortunately, security features must compete against every other desirable application feature. That security represents a real and significant cost is attested to by the fact that many software developers tend to implement as many nifty features as possible at the expense of critical security features. In the past, Microsoft, Oracle, and others have been criticized by some security experts [1] for focusing too much on bells, whistles, and ease-of-use features at the expense of solid security.
It is my opinion that this is now improving across the industry. The choice is yours. If you want your code to be more than just fancy whiz-bang Swiss cheese, you cannot focus entirely on cool features and ignore security.
The Range of Risks

Unfortunately, there is an enormous range of possible risks to be considered for systems and data, including rogue code, password cracking, packet sniffing, and denial-of-service attacks. Even physical attacks, such as theft or destruction of media, as well as espionage and con artistry are possibilities. In extreme cases, you may need to consider dealing with natural disasters and terrorist attacks. Although many attacks are possible, and they should all be considered, in this chapter we focus only on rogue code attacks, since that is the only type of attack that .NET security can effectively address. Let's consider some of the main threats in the rogue code category. Stack-overrun [2] attacks have proven to be a serious risk, especially on the server side. If you are interested in seeing a demonstration of how a stack-overrun attack actually works, see the Win32ProjectBufferOverflow example program in Appendix A. Clients send requests to servers, and those requests, if cleverly constructed, may be able to exploit certain types of careless, sloppy code in the server. Unmanaged C/C++ code is notoriously prone to buffer overrun and type-casting bugs that can be exploited by evil client programs. The most famous example of this was the Morris Internet worm. [3]
It is very easy to accidentally introduce such security holes into traditional unmanaged C and C++ code, and extraordinary care must be taken to avoid all the possible pitfalls. Code that is written for a managed runtime environment, such as C# or Java, [4] is inherently much safer [5] and requires no special vigilance on the part of the programmer. This is because managed languages generate code that automatically ensures that data is properly initialized, buffer overruns are automatically detected and prevented at runtime, and unsafe type casting is disallowed at compile time. Of course, C was the language used to build virtually all the traditional Internet host programs. As more server code in the future is written as managed code using languages such as C#, VB.NET, and Java, a great deal more protection against this type of attack will exist.
Internet-mobile code, such as email [6] attachments, scripts, and ActiveX controls, has also been a major source of risk on the client side. These threats come in several forms, including trojans, [7] logic bombs, [8] the traditional virus, [9] and even good old-fashioned bugs. Fortunately, by using CAS in the development of your applications, you can achieve effective protection from these forms of malicious code as well. Managed code obviously helps client-side code to be more reliable, secure, and bug-free as well.
Assembly Trustworthiness

The fundamental question that CAS addresses is this: What code should you trust, and to what degree should you trust it? The problem is that code can now originate from many sources, representing varying degrees of risk. This becomes an issue wherever there is a concern that a particular piece of code may either maliciously or accidentally do some type of damage to your system or data, or leak confidential information in some way. For example, you might trust code that you have written to have certain access privileges that you would not entrust to code written by some other software developers. Or, you may trust an assembly to carry out certain limited actions if it was written by a particular company, but not if it was written by some other company. You probably also have varying degrees of trust based on where the code physically originated, trusting locally deployed assemblies that have been installed by a trusted administrator over assemblies that are installed by a mere user or automatically deployed via a Web browser. This problem becomes more complex when you consider that a single application may contain a combination of assemblies that fall into varying degrees of trustworthiness. With many assemblies working together in a single application, it could happen that a trusted component is tricked or coerced into doing evil by less trusted components. [10]
CAS enables these kinds of risk assessments to be made on the basis of many factors concerning the trustworthiness of individual assemblies. CAS also allows you to customize your level of trust at a finer level of granularity than was possible in traditional programming environments. For example, you can choose your degree of trust at the assembly level, class level, or even at the individual method level. The need for CAS becomes especially clear when you consider that code can be used to perform many different tasks, and it is not obvious to users, administrators, or even programmers exactly what operations a particular assembly may attempt. Clearly, a security model that provides access control based only on user accounts, as described in the previous chapter, is insufficient to deal with these new problems. This has become especially true in the modern era of mobile code, remote method invocation, and Web services.

Risks of Calling into Unmanaged Code

It is important to note that for CAS to do its work at all, the executing code must be verifiably type-safe managed code. That means that the CLR must be able to verify the assembly's type safety when it is loaded into memory. Using PInvoke to call into legacy Win32 DLLs is a security risk because we are then on our own, and the CLR cannot help us in any way. Obviously, only highly trusted code should be permitted to use PInvoke, but if the DLL being called uses the Win32 Security API effectively, or if you are calling a clearly harmless Win32 API such as GetSystemTime, then you may decide to allow it. But you must always keep in mind that if you call into unmanaged native code, you are opening a potential security hole. For this reason, calling into unmanaged code requires that you have the unmanaged code permission.
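A PInvoke declaration for the "clearly harmless" GetSystemTime API mentioned above might look like the following sketch (the `NativeMethods` class name is just a common convention, not required by the runtime; this requires Windows and a CLR that grants the assembly the UnmanagedCode permission):

```csharp
using System;
using System.Runtime.InteropServices;

// Managed wrapper for the Win32 GetSystemTime API. Crossing this
// managed/unmanaged boundary demands the UnmanagedCode security
// permission, so declarations like this belong only in highly
// trusted assemblies.
class NativeMethods
{
    [StructLayout(LayoutKind.Sequential)]
    public struct SYSTEMTIME
    {
        public ushort wYear, wMonth, wDayOfWeek, wDay;
        public ushort wHour, wMinute, wSecond, wMilliseconds;
    }

    [DllImport("kernel32.dll")]
    public static extern void GetSystemTime(out SYSTEMTIME st);
}

class Demo
{
    static void Main()
    {
        NativeMethods.SYSTEMTIME st;
        NativeMethods.GetSystemTime(out st);
        Console.WriteLine("UTC year: {0}", st.wYear);
    }
}
```

Note that the CLR cannot verify anything on the far side of the `DllImport` boundary; if the struct layout or signature were declared incorrectly, native code could corrupt memory that managed code would then trust.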
If you have ever programmed in C or C++, then you probably know all too well how easy it is to get into trouble with uninitialized variables, invalid pointers, out-of-bounds array indexing, incorrect type casts, memory leaks, and the use of inherently unsafe functions such as strcpy, gets, strcat, and sprintf. The need to call legacy native code is, however, a fact of life for the foreseeable future, so it is important to architect your applications to limit native code calls to a minimal number of fully trusted assemblies. Then, you can configure security or call the Deny method to disable PInvoke in the majority of your code and use the Assert method to enable PInvoke in the few methods where it is needed. We will talk more about the Deny and Assert methods later. We will see in the upcoming PInvoke example how to configure security policy to enable or disable the permission to call into unmanaged code.
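The Deny/Assert pattern just described can be sketched as follows, using the .NET Framework CAS types in System.Security.Permissions (the method names here are illustrative; Deny and its relatives were later deprecated in .NET Framework 4, but they are the mechanism this chapter discusses):

```csharp
using System;
using System.Security;
using System.Security.Permissions;

class PinvokePolicySketch
{
    // Deny the UnmanagedCode permission for everything called from this
    // method, so that an accidental PInvoke anywhere deeper in the call
    // chain fails the stack walk with a SecurityException.
    static void RunBulkOfApplication()
    {
        SecurityPermission noNative =
            new SecurityPermission(SecurityPermissionFlag.UnmanagedCode);
        noNative.Deny();
        try
        {
            // ... the majority of the application logic runs here,
            // with PInvoke effectively disabled ...
        }
        finally
        {
            CodeAccessPermission.RevertDeny();
        }
    }

    // Assert the permission in the one small, audited method that truly
    // needs native code; the stack walk stops here instead of continuing
    // up to less trusted callers.
    static void CallAuditedNativeCode()
    {
        new SecurityPermission(SecurityPermissionFlag.UnmanagedCode).Assert();
        try
        {
            // A harmless, reviewed native call (e.g., GetSystemTime)
            // would be made here.
        }
        finally
        {
            CodeAccessPermission.RevertAssert();
        }
    }
}
```

The design intent is to shrink the trusted surface: most code runs with PInvoke denied, and only a handful of small, easily reviewed methods assert the permission back.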