Risk mitigation starts at the top. If you are the one identifying the systems that need protection, deciding how much effort to spend protecting them, and performing the subsequent remediation, you're doing too much. The first step in responding to risk is identifying the resources that matter, and your organizational leadership is responsible for providing this high-level assessment of information system criticality. Once that is done, senior security professionals get involved so that management can understand the kinds of risks a given resource will face and perform a cost/benefit analysis for risk remediation.
So how is this analysis done? If you work for a fairly small organization, you may be very involved in this process, so let's take a moment to look at how to decide how much security is the "right amount." This is useful both in the context of defining priorities for the organization and at a smaller scale. For instance, your security policy may state that your DMZ systems must be protected by "strong authentication." Security policies are generally not much more explicit than this; it's up to security (and possibly also system) administrators to figure out how to get it done.

1.3.1. How Much Security?

Two considerations influence how much time you spend "doing" system security: risk/consequence and usability. The greater the risk or the higher the consequence, the more effort you must spend securing your systems. With respect to usability, if your application of security principles makes administration more difficult (or, worse yet, discourages administrators from maintaining systems because of the hassle), you may have gone too far.
1.3.1.1 Risk and consequence

The role of your system, combined with its exposure to risk, helps determine how far you should go in locking it down. If you are building a server that provides DNS for your network, a failure or compromise of that system could easily lead to widespread problems. An incident involving a server that provides only NTP, however, may merely disrupt clock synchronization until the service is restored, without immediately affecting the rest of your network. The location of your system, both physically and logically on your network, is also an important consideration. Systems on a perimeter network are frequently exposed to external attack. A computer providing Internet access at a library may have hundreds of users a day, not all of them trustworthy. Servers on an office network are directly exposed only to other systems on that network.
So, what if there were a compromise of a system? What would the impact be to you and your organization? Your organization may suffer from bad press and/or loss of revenue. You may be deemed incompetent and then fired. The effects of a security compromise are not always obvious. The higher the cost of a security breach of the system currently being built, the more time you need to spend securing the system.
From a risk perspective, the amount of effort you must put forth "doing" system security relates directly to the amount of risk involved and the expected consequences of risk realization. Table 1-1 summarizes the effect that probability and impact have on risk.
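The probability-times-impact idea behind a table like this can be sketched in code. The rating names and the combining rule below are illustrative assumptions for the sake of the sketch, not the book's actual table:

```python
# Qualitative risk rating derived from probability and impact.
# The three-level scale and the rounded-average combining rule are
# assumptions chosen to match the intuition that high/high is high
# risk and low probability with high impact is medium risk.

LEVELS = ["low", "medium", "high"]

def risk_level(probability: str, impact: str) -> str:
    """Combine qualitative probability and impact into a risk rating."""
    p = LEVELS.index(probability)
    i = LEVELS.index(impact)
    # Rounded-up average: both factors pull the rating, and neither
    # alone can drag a high/high combination below "high."
    score = (p + i + 1) // 2
    return LEVELS[score]

# House on an eroding precipice: high probability, high impact.
print(risk_level("high", "high"))   # high
# House on a mountain: low probability, high impact.
print(risk_level("low", "high"))    # medium
```

Any monotonic rule works here; the point is only that the rating rises with either factor.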
The logic is intuitive. Buying a house on an eroding precipice is a high-risk proposition: the chances of your house sliding into the abyss are high, and the impact to your house and belongings certainly qualifies as "disastrous." Maybe you'd rather build a house on a mountain. The chances of being buried under a mudslide or avalanche are probably low, depending of course on the mountain's history, but the impact would likely be significant; we're looking at a medium-risk proposition. You get the idea: the higher the impact and/or probability, the higher the risk.

1.3.1.2 Security versus functionality

You might wonder why you don't just use every security tool you have on and around every system. The answer is that security costs something. At the very least, the time you spend performing security-related tasks is one kind of cost; the associated loss of functionality or convenience is another. Envision building a webmail gateway so that users can access internal email even when they are offsite. This has the potential to expose private email to users on the Internet, so you may decide a few extra notches of security are necessary, and you lock down the webmail system. You remove all webmail plugins except those that are absolutely necessary. You enable HTTP digest authentication in your web server. You require that users also come through a proxy server that imposes a different layer of authentication. Finally, you mandate that users carry signed certificates so that their identity can be validated to the server. This example may be contrived, but security overkill is not always so obvious. In our example, users will become so frustrated with the passwords they must remember, the USB drives they must carry, and the overall hassle that they will eschew webmail altogether. Instead, they may very well start forwarding all their corporate mail to their far more accessible personal accounts.
As a security-minded system administrator, you have just lost: security came at too great a loss of convenience. Similar problems arise if the balance between security and maintainability is lost. In less structured environments, administrators often have a lot of leeway. If you choose to boot your OpenBSD system from CD-ROM and store all binaries and libraries on this read-only media, you may have gone a long way toward keeping your system inviolate. However, what happens when a critical vulnerability is announced and you have to patch vulnerable servers immediately? It will take a great deal of effort to build a new standard configuration onto a CD, test it in your QA environment, and finally deploy it in production. That effort may tempt you to defer the upgrade, delaying your response to a critical event. The maintenance hassle of making your security posture "go to 11" just increased the window of opportunity for attackers. The danger here is obvious: if you put off patching known vulnerabilities because of the effort required, you can end up worse off than if you had spent a little less effort on your secure configuration. In some very strict environments, you may not have a choice and will be required to build, test, and deploy new CD boot images that night.

1.3.2. Choosing the Right Response

After the important resources have been identified and some cost/benefit analysis has been performed to figure out what it will take to secure them, it's time to decide whether to act. It's obvious what to do when the cost is low and the benefit is high: you act! But what about when the cost is high? If the benefit is also high, do you act anyway? The following outlines what you can typically do with risks after you've identified them.

1.3.2.1 Mitigate risk

If the fallout from a realized risk is significant and the costs of mitigation can be tolerated, you should take action to mitigate the risk.
Some, but perhaps not all, of these tasks fall to the system administrator. She is responsible for things like patching vulnerabilities to keep attackers at bay and backing up systems to prevent accidental or intentional data destruction.

1.3.2.2 Accept risk

Accepted risks are identified and evaluated, but not mitigated: their realization would not significantly affect the organization, the likelihood of realization is too low, or no mitigation can be enacted for some reason. For example, a university might identify as a risk the potential for an outside observer using intelligence-grade surveillance equipment to remotely observe the registrar's grade entry. The likelihood of this occurrence is as low as the cost of mitigation is high, and should the risk be realized, the consequences are probably not dire. Therefore, the risk remains, and the university accepts the ramifications if it is ever realized.

1.3.2.3 Transfer risk

Risk transference is a form of mitigation in which the risk is transferred to another party. This might involve outsourcing a system or process, or keeping it in-house while taking out insurance to cover potential loss. If your system processes credit card information, there is a risk that the card numbers stored on your systems could be stolen by an attacker or insider. Standard mitigation techniques (firewalls, strict access control, and encrypted data) may keep out external eyes, but a rogue employee with access rights may be able to steal and sell this information. The costs associated with the realization of this risk are too immense for the organization to handle, but the likelihood of it happening is low to moderate. A risk like this is a good candidate for transfer to another entity, like an insurance company.
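The mitigate/accept/transfer choice described above can be sketched as a simple decision rule. The thresholds, the exposure formula, and the function itself are illustrative assumptions, not a procedure from the text:

```python
# Hypothetical sketch of choosing a response to an identified risk,
# based on the cost of mitigation, the expected loss if the risk is
# realized, and the likelihood of realization. The "10x" transfer
# threshold is an arbitrary assumption for illustration.

def choose_response(mitigation_cost: float,
                    expected_loss: float,
                    likelihood: float) -> str:
    """Return 'mitigate', 'transfer', or 'accept' for one risk."""
    exposure = expected_loss * likelihood  # rough expected cost of inaction
    if mitigation_cost <= exposure:
        return "mitigate"   # mitigation is cheaper than the expected loss
    if expected_loss > 10 * mitigation_cost:
        return "transfer"   # loss too immense to absorb: insure or outsource
    return "accept"         # not worth the effort to address

# Credit card theft: immense loss, low-to-moderate likelihood.
print(choose_response(mitigation_cost=50_000,
                      expected_loss=5_000_000,
                      likelihood=0.005))   # transfer
```

Real organizations weigh far more than three numbers, but the ordering of the checks mirrors the text: mitigate when you can afford to, transfer what you cannot absorb, and accept the remainder.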