Mandatory Controls Versus Discretionary Controls

I've already noted that the idea of integrity levels was taken from something called "mandatory access controls," which are different from something called "discretionary access controls." Those aren't Microsoft's terms; they're standard phrases in the IT security industry for describing different methods of securing things in operating systems. To see where they came from, permit me to set the dials on the Wayback Machine to the early 1980s. Disco's dead, Reagan's in office, they're still making good Star Trek movies occasionally, no one's yet heard of Martha Stewart, and it seems like everybody in the government's buying computers.

The Orange Book

In the late 1970s and early 1980s, government offices started buying computers. Compared to 10 years earlier, the government was buying lots of computers, partially because they were getting cheaper and partially because the government, like the rest of the world, was growing more and more dependent on number-crunching in large volumes. They bought mainframes from folks like IBM, CDC, Burroughs, and Univac. They bought minicomputers from DEC, Data General, and Harris. They bought stand-alone word processing systems from Wang and Four-Phase and, yes, as the 1970s became the 1980s, they even began to buy desktop computers from Apple, IBM, and others. Information security managers soon started saying that, hey, maybe those computer buyers should be thinking about security on these widely varying systems, but how much security did they need, and how would they know that they had it?

Some people might reflexively answer the question "how much security did those computer buyers need?" with "top secret" or "as good as possible," but that would honestly be a waste of money. Sure, military and intelligence agencies need a high amount of security, but most government facilities don't. For example, around that time I worked as a contractor for the Department of Energy, where I helped build economic models for forecasting short-term supplies and prices of various sources of energy. The group that I was in all worked on an IBM mainframe. We each had our own space on the mainframe's hard disks, but there was no security, and anyone could look at or modify anyone else's data. Eventually the IT managers there installed a package called ACF/2 that allowed us to secure any or all of our files, but the general consensus was "why bother figuring out how to set up a 'who can use my files' list?" I would do analysis based on data collected by the government for public use, and then I would write computer programs that did publicly available forecasts based on those data. The code, the data, and the forecasts were paid for by the U.S. taxpayer, so in theory if any citizen were to ask to see my files, I couldn't see a reason why I wouldn't just hand them over. (No citizen ever did, in case you're wondering.)

My analysis wasn't exactly right, however, because putting all of the data created and maintained by several dozen people into a pool that anyone on the team could modify or delete was potentially dangerous. An unscrupulous team member looking to discredit another might change a file in a manner that would render the work useless and make his rival seem to be the culprit. The result? The unscrupulous character might get away with it, but the taxpayer gets to pay for the lost time as the data and code are reconstructed. Now, let me clarify that nothing like that ever happened, but it could have, so having some level of security seems like a good idea. But Top Secret, Pentagon-ish levels of security? They'd just cost money and offer no return to the agency or to the taxpayer. Most of us wouldn't find installing titanium doors, bulletproof glass, and gun turrets on our houses cost-effective, security-wise, and making the Energy analysts spend the time and money to jump through the Top Secret hoops would have been as unrewarding.

But there are folks in the government who need Top Secret or similar data protection. Clearly, then, different places in the government needed more or less security in their system software; no one size would fit all.

Following this reasoning, some folks at the NSA decided that it might be a good idea to make life easier for government managers evaluating and buying computer software if there were a range of predefined standards describing multiple levels of more or less secure computer systems. These would be sets of requirements, basically lists of the features that a piece of software needed in order to provide a given level of security. That way, the person running, say, the FBI's National Crime Information Center (NCIC) or the person running the online public records of the proceedings of Congress in the THOMAS system needn't each sit down and try to cook up, from scratch, a complete set of security requirements that works for their particular operation. Instead, the NCIC person might be able to look at that list of features required for each level of security and say, "aha, I see that there are seven defined levels of security and oh, heck, I don't need the top two levels; they'd cost a fortune to implement and we're not holding the keys to the nukes anyway, but the third level sounds just right," and the THOMAS manager could say, "let's see, we need to make this open and easy to get to, but we can't have the public modifying the content, so maybe we'll go with the second-to-the-lowest level of security."

The NSA group, named the "National Computer Security Center," laid out four "divisions" of trustworthiness in operating systems, each with subdivisions called "classes," of which there are seven in total. These divisions and classes were described in a document usually called the "Orange Book" because of the color of its cover, but its actual name was Trusted Computer System Evaluation Criteria. The National Computer Security Center released it in December 1985. (Isn't it strange to think that some standards that we work with are old enough to drink in every state in the United States?)

C2 Certification and NT

NSA called their four "divisions" of security D, C, B, and A, in increasing order of trustworthiness, and then defined seven "classes" of software requirements called, again in increasing order, D, C1, C2, B1, B2, B3, and A1, and described them all in the Orange Book. That, however, was only the beginning. Once our imaginary FBI or THOMAS manager decides to go with a particular level of security, how will he or she know whether or not a product meets that level?

NSA then took on the job of evaluating systems to certify them for particular levels. Thanks to their certification efforts, the government computer buyer's job is a bit easier, as she can just say "let's see, I think we need something that meets C1 or C2 specifications…which operating systems can do that?"

Now, I'll bet that "C2" rings a bell for some of you, as it was often bandied about when discussing NT in the 1990s. Admittedly I'm simplifying when I say that a given operating system can be certified as B1, C2, or whatever, as NSA actually evaluated entire systems, taking the hardware, applications, and so on into consideration. Informally, however, it's common to hear people say that such-and-such operating system is "C2 compliant," as has been said of Windows for quite some time because NSA certified several specific NT configurations as C2. You won't hear the phrase nowadays, however, as the whole Trusted Computer System Evaluation Criteria program has been folded into an international set of standards called "Common Criteria."

C and B: Discretionary Versus Mandatory

Of the four divisions, most people only talk about B and C, because D just means "this operating system has no security" and A's single class, A1, is like B3 but requires formal proofs of security that are hideously expensive to acquire. The main difference between C and B is that C involves what Trusted Computer System Evaluation Criteria called "discretionary protection" and B involves something more stringent called "mandatory protection."

Discretionary Access Overview and Terminology

As I've already said, back when I was working at the Department of Energy, we needed some security, but not much. We all created and maintained certain data and program files, and so we might have benefited from a system that let each of us provide our files to the common projects on a read-only basis. That way, the data or programs could contribute to the analysis that the project needed, but we each could be sure that the files we were responsible for wouldn't be tampered with. Again, we never needed to do that, but if we had, we'd have been using a discretionary access control security model. But note that none of us were system administrators for the mainframe on which we worked and, honestly, I don't remember ever even meeting any of the system administrators on that system. That's an important part of discretionary access systems: they give regular old users a lot of control over their own resources, without making them system-wide administrators.

Parts of a Discretionary Access System

The ingredients of a discretionary access control system are user accounts, a logon procedure, objects, owners, and permissions. In C2 certification, the idea with discretionary access is that every object, like a file or folder, has an owner. The owner is usually the person who created the object, but not necessarily. An object's owner has the power to create a list of user accounts and groups that are granted or denied access to the object. C2 requires user accounts so that the owner can name specific people or groups to allow or deny, and logons are therefore a "must," as they prove that someone claiming a particular identity really is that person, and thus really has whatever access the owner granted.
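
To make "owners" a little more concrete, here's a minimal sketch in C against the Win32 security APIs of asking Windows who owns a file: read the owner SID out of the file's security descriptor and translate it into an account name. The file path is purely hypothetical.

#include <windows.h>
#include <aclapi.h>
#include <stdio.h>

int main(void)
{
    PSID pOwner = NULL;
    PSECURITY_DESCRIPTOR pSD = NULL;
    char name[256], domain[256];
    DWORD cName = 256, cDomain = 256;
    SID_NAME_USE use;
    char path[] = "C:\\Temp\\report.txt";   /* hypothetical file */

    /* Fetch just the owner portion of the file's security descriptor. */
    if (GetNamedSecurityInfoA(path, SE_FILE_OBJECT,
                              OWNER_SECURITY_INFORMATION,
                              &pOwner, NULL, NULL, NULL, &pSD) != ERROR_SUCCESS)
        return 1;

    /* Turn the raw SID into a human-readable DOMAIN\name pair. */
    if (LookupAccountSidA(NULL, pOwner, name, &cName, domain, &cDomain, &use))
        printf("%s\\%s owns %s\n", domain, name, path);

    LocalFree(pSD);
    return 0;
}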

Sound familiar? It should. Microsoft used the C2 framework as the basis of what was initially a system of access controls for files and folders, one that they later expanded to many more types of things. They adopted the C2 framework because they sought C2 certification for Windows. The central notion in a discretionary system is that the owner (again, just any old user who happens to be the owner; he needn't be an administrator to be a file's owner) gets to grant and deny access. If a file's owner wanted to grant Full Control to the Guest account, making it available to anyone on the planet, then nothing would stand in his way.

In a "pure" discretionary model, then, each user would store all of her documents in some folder that she owns, like a profile folder in Windows, but no one would have the ability to control who can access that folder but her, the owner. In Windows, of course, that's not exactly what happens, as the LocalSystem account and the local Administrators group typically get Full Control permissions applied to all folders on the computer by default. Nevertheless, Windows up through XP and 2003 offers something quite similar to the C2 notion of discretionary access.

Note 

In addition to requiring object owners, discretionary permissions, user accounts, and logon procedures, C2 also requires audit mechanisms to track use of discretionary powers, and of course Windows offers that via its security auditing functions.

"Securable Objects": What Discretionary Access Can Protect

Let's take a moment and look at the sorts of objects that Windows can secure. Even though Windows uses a discretionary model, not everything in Windows can have permissions on it. For example, could I grant you the power to change just the IP address on my computer, without also giving you the power to change my subnet mask, default gateway, and the like? (I have no idea why I'd want to, but it's a simple example.) No, I couldn't, because Microsoft has chosen to build support for discretionary permissions into some things in Windows and not bother for others. Microsoft calls the types of objects that can get permissions "securable objects." According to the Microsoft MSDN article "Securable Objects" (http://www.windowssdk.msdn.microsoft.com/en-us/library/ms723270.aspx), the list of those objects includes

  • Files and folders

  • Named pipes, a method for one process to communicate with another process

  • Processes

  • Threads

  • Access tokens, the data that systems receive once you're authenticated to them

  • Windows-management objects; also known as "WMI namespaces," they are currently the most important tool for programmatic control of almost anything in Windows

  • Registry keys

  • Windows services

  • Local and remote printers

  • Network shares

  • Interprocess synchronization objects (the way that one program taps another on the shoulder and asks it to do something)

  • Job objects (a tool that's been around since Windows 2000 that lets you tell Windows, "run such-and-such set of programs, but if they require more than some specified amount of CPU time or RAM, then kill the job"; see the sketch after this list)

  • Directory service objects (user accounts, machine accounts, group policy objects, domains, and so on)
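
As promised, here's a minimal sketch in C of the job-object item above: create a job, cap the memory its processes may commit, and drop a freshly launched process into it. The 50 MB figure and notepad.exe are just placeholder choices of mine, not anything Windows mandates.

#include <windows.h>

int main(void)
{
    /* Create an (unnamed) job object -- itself one of the securable objects above. */
    HANDLE hJob = CreateJobObjectA(NULL, NULL);
    if (!hJob) return 1;

    /* Limit each process in the job to 50 MB of committed memory. */
    JOBOBJECT_EXTENDED_LIMIT_INFORMATION limits;
    ZeroMemory(&limits, sizeof(limits));
    limits.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_PROCESS_MEMORY;
    limits.ProcessMemoryLimit = 50 * 1024 * 1024;
    if (!SetInformationJobObject(hJob, JobObjectExtendedLimitInformation,
                                 &limits, sizeof(limits))) {
        CloseHandle(hJob);
        return 1;
    }

    /* Launch a program suspended, put it in the job, then let it run. */
    char cmd[] = "notepad.exe";
    STARTUPINFOA si;
    PROCESS_INFORMATION pi;
    ZeroMemory(&si, sizeof(si));
    si.cb = sizeof(si);
    if (CreateProcessA(NULL, cmd, NULL, NULL, FALSE, CREATE_SUSPENDED,
                       NULL, NULL, &si, &pi)) {
        AssignProcessToJobObject(hJob, pi.hProcess);
        ResumeThread(pi.hThread);
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
    }
    CloseHandle(hJob);
    return 0;
}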

An object's permissions tend to be stored on the object itself. For example, if you give yourself a permission granting full control of a given file on an NTFS file system, then that permission itself is stored in an area that every file has, called its "metadata." More specifically, every permission that you create, every discretionary permission, is technically called a "discretionary access control entry" or DACE, pronounced to rhyme with "face." The entire list of discretionary access control entries is a "discretionary access control list" or DACL, pronounced "dackel" to rhyme with "crackle." But in the Windows world we tend to drop the "D," and refer to an ACL that contains ACEs. Note that many people tend to simplify things even further and use the phrase "ACL" to refer to both ACLs and ACEs. When someone says, "I put an ACL on accounts.dat so that the Accounting group could read it," what he really means is "I added a discretionary ACE to accounts.dat's DACL to grant the Accounting group read access." (You probably knew that, but I figured the reminder wouldn't hurt, because soon we'll be talking about corresponding terms for mandatory access control systems.)
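
To put that sentence into code, here's a minimal sketch in C with the Win32 security APIs: it adds an allow ACE granting read access to a hypothetical "Accounting" group to the DACL of a hypothetical accounts.dat. The EXPLICIT_ACCESS structure becomes the ACE, and SetEntriesInAcl merges it into the existing list.

#include <windows.h>
#include <aclapi.h>
#include <stdio.h>

int main(void)
{
    PACL pOldDacl = NULL, pNewDacl = NULL;
    PSECURITY_DESCRIPTOR pSD = NULL;
    EXPLICIT_ACCESSA ea;
    char path[] = "accounts.dat";   /* hypothetical file  */
    char group[] = "Accounting";    /* hypothetical group */

    /* Read the file's current security descriptor and its DACL. */
    if (GetNamedSecurityInfoA(path, SE_FILE_OBJECT, DACL_SECURITY_INFORMATION,
                              NULL, NULL, &pOldDacl, NULL, &pSD) != ERROR_SUCCESS)
        return 1;

    /* Describe the new entry: grant read access to the group. */
    ZeroMemory(&ea, sizeof(ea));
    ea.grfAccessPermissions = GENERIC_READ;
    ea.grfAccessMode        = GRANT_ACCESS;     /* an "allow" ACE */
    ea.grfInheritance       = NO_INHERITANCE;
    ea.Trustee.TrusteeForm  = TRUSTEE_IS_NAME;
    ea.Trustee.TrusteeType  = TRUSTEE_IS_GROUP;
    ea.Trustee.ptstrName    = group;

    /* Merge the new ACE into the existing DACL... */
    if (SetEntriesInAclA(1, &ea, pOldDacl, &pNewDacl) != ERROR_SUCCESS) {
        LocalFree(pSD);
        return 1;
    }

    /* ...and write the updated DACL back into the file's metadata. */
    if (SetNamedSecurityInfoA(path, SE_FILE_OBJECT, DACL_SECURITY_INFORMATION,
                              NULL, NULL, pNewDacl, NULL) == ERROR_SUCCESS)
        printf("Read ACE for %s added to %s\n", group, path);

    LocalFree(pNewDacl);
    LocalFree(pSD);
    return 0;
}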

Mandatory Access Overview and Terminology

But now suppose we didn't quite trust just any old user to set permissions. Suppose, for example, we found a few users setting the permissions on their profile directories to allow the Guest account full control of those directories. Argh! In that case, we might want to restrict just how much discretion we offered in our discretionary access control system, and impose some constraints on our users' abilities to open our systems to the world. In that case, we'd want some sort of mandatory access controls. To get a "B" certification under Orange Book specifications, an operating system needed some kind of mandatory access control system.

The idea with a mandatory access control is that an organization draws up some enterprise-wide security policy with rules like "passwords must be at least 7 characters long" or "no permissions may allow the Guest account to do anything," and then some feature of the operating system keeps the user from doing anything contrary to that policy. It's called "mandatory" rather than "discretionary" because where in the theoretical discretionary model the user is king, the mandatory model demotes him to a sort of local viscount.

We know that Windows implements discretionary access; does it do mandatory access as well? Well, yes, sort of; I suppose you could say that it should get more than a "C," something more like a "C+" or a "B-" rating. For example, here's a way to accomplish something of a mandatory access control that keeps users from granting access to the Guest account: ever since Windows 2000, a domain administrator could just create a group policy object applying a permission to every C:\ drive in the domain explicitly denying the Guest account any access to C:\. That would count as a mandatory access control, as it is imposed on the user from the outside and limits her power.
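
Under the hood, such a group policy just plants an ordinary deny ACE. Done by hand in C, with the group policy plumbing left out, the equivalent entry might look like this sketch (run as an administrator on a single machine):

#include <windows.h>
#include <aclapi.h>

int main(void)
{
    PACL pOldDacl = NULL, pNewDacl = NULL;
    PSECURITY_DESCRIPTOR pSD = NULL;
    EXPLICIT_ACCESSA ea;
    char root[] = "C:\\";
    char who[]  = "Guest";

    if (GetNamedSecurityInfoA(root, SE_FILE_OBJECT, DACL_SECURITY_INFORMATION,
                              NULL, NULL, &pOldDacl, NULL, &pSD) != ERROR_SUCCESS)
        return 1;

    /* Explicitly deny the Guest account all access, inheriting downward. */
    ZeroMemory(&ea, sizeof(ea));
    ea.grfAccessPermissions = GENERIC_ALL;
    ea.grfAccessMode        = DENY_ACCESS;   /* deny ACEs outrank allow ACEs */
    ea.grfInheritance       = SUB_CONTAINERS_AND_OBJECTS_INHERIT;
    ea.Trustee.TrusteeForm  = TRUSTEE_IS_NAME;
    ea.Trustee.TrusteeType  = TRUSTEE_IS_USER;
    ea.Trustee.ptstrName    = who;

    if (SetEntriesInAclA(1, &ea, pOldDacl, &pNewDacl) == ERROR_SUCCESS)
        SetNamedSecurityInfoA(root, SE_FILE_OBJECT, DACL_SECURITY_INFORMATION,
                              NULL, NULL, pNewDacl, NULL);

    LocalFree(pNewDacl);
    LocalFree(pSD);
    return 0;
}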

In the Windows world, though, the reality is that this wouldn't really constitute a "mandatory access control" of any real value. For one thing, the sad fact is that in the Windows world we often don't have the kind of separation between users and administrators that I saw in the mainframe days, as most machines are used by just one person, and that person spends her whole day logged on as an administrator. Furthermore, in Windows prior to Vista, the administrator could do anything that he or she wanted, and so could undo the "mandatory" effects of most domain-based group policies. In order for any kind of mandatory access control to be effective in helping protect people from malware, it has to be imposed not by the administrator (who may not be security-conscious enough to restrict himself), but by the operating system itself, and in general that's how Vista's Windows Integrity Control system works.

Vista's "Windows Integrity Control" system, then, is an implementation of the standard notion of a mandatory access control system.

Note 

Strictly speaking, it's not really a mandatory access control, as it lacks the flexibility and richness of Windows's DACLs. Its goal is mostly to keep low-level processes from damaging higher-level objects, so Microsoft uses the phrase "integrity" rather than "access."

WIC's mandatory nature comes from the fact that by default it is the operating system, not the administrator, that sets the integrity levels, and it is possible for the operating system to create objects with integrity levels so high that no administrator could ever access them.

Conceptually, an object, like a file, stores its integrity level pretty much the same way that it stores its discretionary access control list and, for that matter, the name of its owner: in the file's metadata. I figured that the piece of data on a file that says, "I have the medium integrity level" might be called a mandatory access control entry or MACE, but the Orange Book guys refer to that information as a "sensitivity label" or, sometimes, just a "label." Microsoft calls theirs "mandatory labels," although I would have thought that "integrity labels" would have been a better name. In any case, remember:

  • "Integrity level" refers to a process, user, or object's level of trustworthiness.

  • "Mandatory label" is just the name for the thing stuck on the process, user, or object to announce its integrity level.



