Code Access Security Concepts

The purpose of code access security is to protect users from code that they wish to execute but do not fully trust. Traditional Windows security, which allows or denies access to resources according to the user account, worked well in the days before the Internet, when the only software on your computer would normally be programs that you had bought from a reputable commercial company (or that you had written yourself). But now that it is commonplace to download code that looks interesting without much knowledge of who wrote it, security based solely on user accounts is clearly inadequate. The problem is well known: there is simply so much code floating around on the Internet, much of which is useful, but some of which is either malicious or so badly written that it could damage your system. Even code that comes from reputable companies may have bugs that cause problems - you only have to think of the number of patches Microsoft has had to release to correct problems such as buffer overruns in its software. If you download any of this code so that it executes with the privileges of your account, who knows what it could do?

The solution is to implement security that restricts access to resources based not only on the identity of the user, but also on the extent to which you trust the code. The basic idea is very similar to the sandbox in which code in languages such as JavaScript executes, but code access security is much more sophisticated than the sandbox, allowing a fine degree of control over permissions based on an analysis of the assembly concerned. With code access security, the system will only allow an assembly to access a resource if the account under which the process is running is allowed access to the resource and the assembly containing the relevant code is also allowed access. If either of these conditions is not satisfied, the code won't be permitted to proceed, and an exception will be thrown. Security based on the identity of the account is of course provided for by both native Windows security and the CLR's role-based security.

Another aspect of code access security is that the CLR's security infrastructure exposes various hooks that allow you to define and plug in your own security permissions if the ones defined by Microsoft are not adequate for your needs. One scenario in which you might do this would be if you had designed your own hardware device, and you wanted the systems administrator to be able to control which users and which code are allowed access to this device. We'll examine how to define custom permissions later in the chapter.

In this section, I'll examine how CAS works in practice and the basic concepts behind it. We'll start by looking at what CAS means for a single assembly, and then move on to consider what happens when you have assemblies invoking methods on other assemblies, where the respective assemblies have been given different sets of permissions, and you need to decide whether to allow access to various resources. We'll also examine the permissions available and the way that CAS interacts with Windows security and role-based security.

CAS for a Single Assembly

A good way to understand the concepts behind code access security is to compare it with the traditional security architecture offered by Windows. Classic Windows security is based on user accounts and groups; the actions that a user can perform are determined by the groups to which the user belongs. Each group is associated with a set of privileges - a list of tasks on the system that members of that group are allowed to do. Typical privileges include the right to debug a process, to increase the priority of a process, or to load a device driver. Users and groups can also be granted access permissions to network resources; the difference between a privilege and a permission in unmanaged code is that a privilege is associated with an account and indicates what actions an account can perform, whereas permissions tend to be associated with particular resources - notably files and folders on the file system - and allow that resource to indicate which accounts should be allowed to access it. The classic example is that each file on an NTFS partition stores details of who is allowed to read that file, who is allowed to write to it, and so on.

Whenever a process tries to do something that is subject to the control of a privilege, the system will check first, and will only allow the operation if the account under which the process is running is permitted to perform it. Similarly, if the user attempts to access some resource that is protected by permissions, the system will check that those permissions allow that user to access that resource.

Code access security has concepts that are quite analogous to this, but the details are rather different. The reason that Windows defines groups is of course that doing so makes it much simpler to administer security policy. For example, if an employee is promoted to manager and therefore requires more privileges, then rather than editing the details of the privileges for that account, you just add the account to the Managers group. If you want to change what managers are allowed to do, you just change the privileges assigned to the Managers group, and you don't have to edit the properties of each individual manager's account.

CAS's equivalent to the group is the code group. A code group groups together the assemblies that have been given the same set of permissions. However, placing assemblies into code groups works very differently from placing users into groups. On Windows, users are placed into groups using a database on the machine(s) responsible for security policy. This database lists which users are in which groups. That's feasible because users usually exist for long periods of time, and administrators will in principle know who is registered to use the system. However, that's not the case for assemblies. You will generate one, perhaps many, new assemblies every time you rebuild a project! So there's no practical way to maintain an up-to-date database recording which assemblies should belong to which code groups. Instead, Microsoft has introduced the concept of evidence. This means that when an assembly is loaded, it is examined for certain characteristics that can be used to identify which code groups it belongs to. In formal terms, each code group has a membership condition, which indicates the condition that an assembly must satisfy to be a member of that code group. Membership conditions can be based on:

  • Signature: whether an assembly has been signed with a particular strong name or certificate, or whether its hash evaluates to a certain value.

  • Location: the location of the assembly - for example, its path on the local file system, or the URL or web site from which it was downloaded.

  • Zone: this is a concept borrowed from Internet Explorer. The world is assumed to be divided into five zones: the local computer; the intranet (in other words, network file shares); Internet sites that you have added to your trusted zone in Internet Explorer; Internet sites that you have added to your untrusted (restricted) zone in Internet Explorer; and all remaining sites (the 'Internet' zone). Membership of a code group can be based on which zone the assembly's location falls under.

  • Custom: if none of the above possibilities is adequate for your requirements, it's possible to write code to implement your own membership condition, and plug this code into the security infrastructure.
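
To make these conditions a little more concrete, here is a minimal sketch (my own illustration, not code the CLR itself runs when evaluating policy) showing how the policy API in System.Security.Policy represents a zone-based membership condition, and how such a condition can be tested against an assembly's evidence:

 using System.Reflection;
 using System.Security;
 using System.Security.Policy;

 class MembershipConditionExample
 {
     static void Main()
     {
         // Each kind of membership condition is modeled by a class
         // implementing IMembershipCondition.
         IMembershipCondition condition =
             new ZoneMembershipCondition(SecurityZone.MyComputer);

         // The evidence gathered for an assembly when it is loaded...
         Evidence evidence = Assembly.GetExecutingAssembly().Evidence;

         // ...can be checked against the membership condition.
         bool isMember = condition.Check(evidence);
     }
 }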

The set of code groups and associated membership conditions forms part of the CLR's security policy. When you first install .NET, you get a default security policy that includes a set of code groups and membership conditions that Microsoft believes will form a sensible basis for implementing security. That policy is there without you having to do anything: from the moment you install .NET, every time you load and execute code in an assembly, the CLR is there behind the scenes, checking which code groups that assembly belongs to, and ensuring that the default security policy allows that assembly to perform the tasks it's trying to do. However, most systems administrators will obviously want to customize the CLR's security policy for the particular needs of their organization.

Once we have established which code groups an assembly belongs to, we need to sort out which permissions that code has. And again we can see an analogy with traditional Windows security. As we've seen, with traditional security each group has an associated set of privileges. Similarly with CAS, each code group has an associated permission set, which will contain a number of permissions. At this point, we need to be careful with the analogy. Despite the terminology, a CLR permission is more analogous to a Windows privilege, since a CLR permission indicates whether code should be allowed to perform a certain type of action. To give you an idea, typical CLR permissions include permissions to call into unmanaged code, to access the file system, or to use reflection to examine a type. However, CLR permissions allow a fine degree of control that is not available to Windows privileges. For example, the CLR file system permission (FileIO) allows you to specify exactly which files or folders that permission should apply to. Thus, in a sense, CLR permissions have a similar flexibility and power to native Windows permissions and privileges combined.

From the above discussion, you'll have realized that many assemblies will satisfy the membership condition of more than one code group. In this case, the permissions for each code group an assembly belongs to are added together. For an assembly to be allowed to do something, all it needs is for any one of the code groups that it is a member of to grant the relevant permission. Hence, membership of a code group can only ever add permissions, not remove them.
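
In terms of the programmatic model, each grant is represented by a System.Security.PermissionSet object, and combining the grants from several code groups amounts to taking the union of the sets. The following is a minimal sketch of that set algebra (again, my own illustration rather than the CLR's internal code; the two variables stand for hypothetical code-group grants):

 using System.Security;
 using System.Security.Permissions;

 class UnionExample
 {
     static void Main()
     {
         // What two hypothetical code groups might grant:
         PermissionSet fromAllCode = new PermissionSet(PermissionState.None);
         PermissionSet fromMyComputerZone =
             new PermissionSet(PermissionState.Unrestricted);

         // The effective grant is the union - any one group granting a
         // permission is enough for the assembly to have it.
         PermissionSet effective = fromAllCode.Union(fromMyComputerZone);
         // effective is unrestricted (full trust) in this case.
     }
 }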

To get a feel for how this all works, let's quickly work through an example. Consider the assembly System.Drawing.dll, which contains many of Microsoft's GDI+ classes. Using the default security policy, it turns out that this assembly satisfies the membership condition of three code groups:

  • All_Code (all assemblies are a member of this group). By default this group gives no permissions.

  • My_Computer_Zone (because the assembly is installed on the local machine). Membership of this group confers full trust to do anything. The CLR will impose no security restrictions based on the assembly identity (though, of course, that does not guarantee the code unlimited access to the machine, since role-based security and native Windows security may still be active).

  • Microsoft_Strong_Name (because this assembly has been signed with Microsoft's private key). This code group also confers full trust.

The net result of combining all these groups and their associated permission sets is of course that System.Drawing.dll has full trust.

CAS for Multiple Assemblies

There is a major complication that can occur in code access security: much, perhaps even the majority, of the code executed in an assembly runs because it was invoked by a method in another assembly, and this brings in a whole set of new subtleties. Indeed, there can be a whole chain of assemblies that have contributed code to the call stack - and we might need to be careful to ensure not only that the currently executing assembly is permitted to do the exact operation it is attempting, but also that it is not being abused by some other assembly further up the call stack. So the question of what a block of code should be allowed to do depends not only on its own assembly, but also on the identities of all the assemblies working up the call stack. This brings about the concept of the stack walk, in which the permission sets of all assemblies on the stack are checked. There are a number of possible scenarios here:

  • Demand. Suppose I write a library that searches the local file system for certain types of file and displays the results. This library will clearly need permission to read the file system (FileIO permission), but there's more. What if I have some code that I've downloaded from somewhere, which claims to need to read the file system, but which I don't trust? Should this code be permitted to use the library? Evidently not - for all I know this untrusted code might (for example) read the file system and then send back confidential information that it finds to some third party. So it's important not only that my library has FileIO permission, but that every caller in the chain also has this permission. If there is just one assembly on the call stack that does not have permission to access the file system, then the security check should fail. This kind of check is known as a demand for permission, and this is the type of check that is normally responsible for the stack walk.

  • Assert. Now suppose I write a library that removes temporary files, and that this library also accesses the file system, but does so in a very safe way. No matter which methods in this library callers invoke or what parameters are passed to them, the only effect can be to remove certain files that don't matter anyway. I have thoroughly tested this library and am satisfied that it cannot affect other files, nor will it ever return confidential information to the caller. There is no way that this library can be used to damage or compromise the system. It is therefore reasonable to suppose that I would be happy for other code to invoke this library, even though I might not be happy for that other code to have unrestricted access to the file system. In other words, I am happy for code that does not have FileIO permission nevertheless to be able to invoke this library. In this case, I can have my library assert FileIO permission. What then happens is that my code will presumably call on the System.IO classes to delete the temporary files. Somewhere in the implementation of those classes will be code that demands the appropriate FileIO permission. The CLR will respond by walking up the stack to see if all code on the stack has permission to do that. During the stack walk, it will discover the assert made by my code, and at that point, it will treat the permission as granted and stop the stack walk. This means it won't matter whether or not the code that invoked my assembly had this permission. Note that it is only possible for an assembly to assert a permission if the assembly has that permission in the first place. It also needs to have a security permission called Assertion, which indicates that an assembly is allowed to declare asserts. (A code sketch illustrating both assert and deny appears after this list.)

  • Deny. Whereas making an assert or a demand is aimed at the assemblies further up the call stack, a deny is aimed in the opposite direction: at protecting code from malicious assemblies that it may call into. For example, suppose that someone has written an assembly that claims to clean text files by removing excess white space from them. You want to call this assembly from your code, but you're not sure how much you trust it. The solution to this is a deny: your assembly calls a method that informs the CLR that certain specified permissions must not be granted if they are requested by code in any method that is invoked directly or indirectly from the currently executing method, even if the request is made by an assembly that would normally have those permissions. This allows you to prevent called assemblies from performing actions that you believe they should not be able to perform.

  • PermitOnly. PermitOnly works in much the same way as deny, except that, where deny will cause all future requests for the specified permission(s) to be denied, PermitOnly will only allow the permissions explicitly specified, disallowing all other permissions. You can use these methods to control what a called assembly should be allowed to do, and hence to provide additional security.
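
Here is the sketch promised above. The class, method, and path names are hypothetical, but Assert(), Deny(), and the corresponding Revert calls are the real methods defined by System.Security.CodeAccessPermission:

 using System.Security;
 using System.Security.Permissions;

 public class TempFileCleaner
 {
     public void CleanUp()
     {
         // Assert: allow callers without FileIO permission to use this
         // (carefully written) library. This assembly itself must hold
         // both the FileIO permission and the Assertion subpermission.
         FileIOPermission filePerm = new FileIOPermission(
             FileIOPermissionAccess.AllAccess, @"C:\Temp");
         filePerm.Assert();
         try
         {
             // ... delete the temporary files using the System.IO classes;
             // their internal demand for FileIO stops at our assert ...
         }
         finally
         {
             CodeAccessPermission.RevertAssert();
         }
     }

     public void CallUntrustedCleaner()
     {
         // Deny: prevent anything invoked from here from touching the file
         // system, even if its own grant would normally allow it.
         new FileIOPermission(PermissionState.Unrestricted).Deny();
         try
         {
             // ... invoke the not-fully-trusted text-cleaning assembly ...
         }
         finally
         {
             CodeAccessPermission.RevertDeny();
         }
     }
 }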

Incidentally, it's very easy to see these same principles at work in the framework class libraries. Take the isolated storage classes, for example: one of the points of isolated storage is that, because it represents a private, application-specific area of the file system, you might trust code to use isolated storage where you wouldn't trust that code to have more general access to the file system. Internally, the System.IO.IsolatedStorage.IsolatedStorageFile class is going to be implemented using the System.IO classes to access the file system. This means that the code that implements IsolatedStorageFile is going to need FileIO permission. Clearly, the only way that isolated storage can be used by code that doesn't have FileIO permission is if the isolated storage classes assert this permission - and that is exactly what happens. IsolatedStorageFile demands IsolatedStorage permission and asserts FileIO permission.
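
As a usage example, code along the following lines needs only the IsolatedStorage permission, not FileIO, precisely because of that internal assert (the file name and content here are hypothetical):

 using System.IO;
 using System.IO.IsolatedStorage;

 class IsolatedStorageExample
 {
     static void Main()
     {
         // Open this assembly's private area of the file system.
         IsolatedStorageFile store =
             IsolatedStorageFile.GetUserStoreForAssembly();

         // Writing here causes a demand only for IsolatedStorage
         // permission; the FileIO demand made deeper down is stopped
         // by the assert inside the isolated storage classes.
         IsolatedStorageFileStream stream = new IsolatedStorageFileStream(
             "settings.txt", FileMode.Create, store);
         StreamWriter writer = new StreamWriter(stream);
         writer.WriteLine("some application setting");
         writer.Close();
     }
 }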

Thus we see a subtle situation in which, on occasions, assemblies need to demand permissions, and in other cases, assemblies need to assert permissions in order to carry out some internal work in a carefully controlled manner.

The CLR Permissions

Microsoft has defined a number of specific permissions that indicate whether code should be allowed to perform some specific task. The full list is as follows:

  • Directory Services
  • DNS
  • Environment Variables
  • Event Log
  • File Dialog
  • File IO
  • Isolated Storage File
  • Message Queue
  • OLE DB
  • Performance Counter
  • Printing
  • Reflection
  • Registry
  • Security
  • Service Controller
  • Socket Access
  • SQL Client
  • User Interface
  • Web Access

The broad purposes of these permissions should be obvious from their names - the exact specification of what each permission covers is detailed in MSDN, and would take too long to list here.

The idea is that these permissions cover a range of potentially dangerous activities that are enabled by various classes in the framework class library - and they are used by the relevant classes to restrict what code can access those facilities. As an example, suppose you want to use the FileInfo class to read the file C:\boot.ini. Before reading this file, the implementation of FileInfo will at some point execute some code that has the same effect as the following:

 FileIOPermission perm = new FileIOPermission(FileIOPermissionAccess.Read,
                                              @"C:\Boot.ini");
 perm.Demand();

In other words, the FileInfo object will demand the permission to read this file - and notice how the permission request is very specific, asking for no more than exactly the permission needed to perform the task at hand. The FileIOPermission class implements this permission - every in-built code access permission is represented by a corresponding class, and these classes are implemented in mscorlib.dll, contained in the namespace System.Security.Permissions, and all derive from System.Security.CodeAccessPermission. The Demand() method is implemented by System.Security.CodeAccessPermission, and it will walk up the stack, inspecting the credentials of every assembly involved. For each one, the permission set that that assembly is running under will be examined to make sure that the FileIO permission is contained in that code's permission set. Moreover, if that permission is present, it will be further examined to check that read access to C:\boot.ini is covered (for example, a FileIO permission that only gives permission to read files on the D:\ drive wouldn't count, whereas one that gave permission to read the C:\ drive would count, since that implicitly includes C:\Boot.ini). If there is one failure in this series of checks, the call to Demand() will throw an exception, which means the file won't get read. There is an obvious performance hit here, but that's the inevitable price we pay for security.
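
From the caller's point of view, a failed demand simply surfaces as a System.Security.SecurityException thrown out of the library call, which you can catch in the usual way:

 using System.IO;
 using System.Security;

 class DemandFailureExample
 {
     static void Main()
     {
         try
         {
             FileInfo info = new FileInfo(@"C:\Boot.ini");
             StreamReader reader = new StreamReader(info.OpenRead());
             // ... read the file ...
             reader.Close();
         }
         catch (SecurityException)
         {
             // Some assembly on the call stack lacked the required
             // FileIO permission, so the demand failed.
         }
     }
 }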

One point that might surprise you here if you are used to Windows security is that I've been talking about the code actively demanding the security. This is very different from native Windows security, in which the privileges are just there. Windows will automatically prevent actions that you are not allowed to do. In .NET the situation is rather different. Each assembly is automatically given the permissions that are determined by its code groups, but (with a couple of exceptions) those permissions are only actually checked if the code explicitly asks the CLR to do so, for example by calling Demand(). You might think that this would expose a security loophole whereby malicious code can just 'forget' to demand a permission, but in practice it doesn't - the CLR's security architecture is very secure. We'll see why and how this apparent contradiction is resolved later in the chapter.

The Security Permission

It's worth drawing particular attention to the Security permission because - uniquely amongst the various Microsoft-defined permissions - it contains various subpermissions that are crucial to the operation of any managed code. The following screenshot shows the state of this permission for the LocalIntranet permission set:

[Screenshot: the subpermissions of the Security permission as granted in the LocalIntranet permission set]

This screenshot has been taken from an MMC snap-in, mscorcfg, which we introduced in Chapter 4.

Microsoft's chosen terminology is a little unfortunate here. "Permission" is used to refer to a permission object that deals with a certain area, hence FileIO and Security are both permissions. However, the term "permission" is also used to refer to the subpermissions within each permission, such as the FileIO permission to access a particular file, or the Security permission specifically to enable code execution. For clarity, I'll sometimes refer to these as subpermissions (my own term) where there is a risk of confusion.

In order to execute at all, an assembly must be granted the Enable Code Execution subpermission. You can think of it a bit like this: imagine that before starting to execute code in an assembly, the CLR executes some code that has the same effect as this:

 SecurityPermission perm = new SecurityPermission(
                                   SecurityPermissionFlag.Execution);
 perm.Demand();

If the Demand() call throws an exception, the CLR will simply refuse to execute the code. I emphasize that this is only an analogy - on performance grounds, I very much doubt that any real IL code like the above snippet is executed; it's more likely that the above security check will be handled internally within the CLR - but the end result is the same.

Skip Verification is another important subpermission - it allows code to run even if it is not verifiably type-safe. Skip Verification and Allow Calls to Unmanaged Code are arguably the most dangerous permissions to grant, since code that has either of them can theoretically circumvent all other CLR-based permissions, either by using unmanaged code, or by using some cleverly written unsafe code that makes it impossible for the CLR to detect what the code is actually doing. For all practical purposes, granting code either of these permissions is about as unsafe as giving it FullTrust - which is why, as you'll observe in the screenshot, the LocalIntranet permission set includes neither of them! Unlike many CLR permissions, these two are enforced by the CLR itself, irrespective of whether the code explicitly demands them.
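
If you want to find out programmatically whether your own assembly has been granted these subpermissions, a minimal sketch along these lines should work, using the System.Security.SecurityManager.IsGranted() method:

 using System.Security;
 using System.Security.Permissions;

 class GrantCheckExample
 {
     static void Main()
     {
         // Test whether the calling assembly's grant includes the two
         // most dangerous Security subpermissions.
         bool canSkipVerification = SecurityManager.IsGranted(
             new SecurityPermission(SecurityPermissionFlag.SkipVerification));
         bool canCallUnmanagedCode = SecurityManager.IsGranted(
             new SecurityPermission(SecurityPermissionFlag.UnmanagedCode));
     }
 }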

You'll also notice the permission listed in the screenshot as Assert any permission which has been granted. This is the permission that allows code to make an assert, and therefore to declare that it does not care whether calling code has the required permission.

Relationship to Windows Security

I've said a lot about how we have the CLR's security mechanisms as well as native Windows security, so it's worth saying a couple of words about the relationship between them. In fact, CLR security and Windows security work completely independently. .NET Framework security is implemented within the DLLs that form the CLR, while Windows security is implemented by the operating system.

Let's say your code is requesting to perform some action, such as accessing the file system, which is covered by both of the security infrastructures; in the first place, the CLR's evidence-based (and in some cases, role-based) security tests whether the code is permitted to perform the requested operation. Then, if that test is passed, Windows itself will check whether the account under which the code is running is permitted to perform the requested operation. This means that there are a lot of possibilities for code to be denied access to something.

Of course, CLR security and Windows security don't cover exactly the same areas. This means that while there are some operations (such as file access) that are subject to both security mechanisms, there are other areas where Windows imposes no security tests, and so the only test is the CLR-based one (this is the case for running unverifiable code), and other areas for which the CLR does not provide security but Windows does (such as loading a device driver). In general, you'll notice that despite the overlap between CLR security and Windows security, the CLR-defined permissions do often focus on higher-level activities, since the CLR security restrictions tend to apply to activities recognized by the framework and the framework class library (such as using ADO.NET to talk to SQL Server), while Windows security is concerned with basic operations that affect objects known to the operating system (such as creating a paging file or debugging a process). Also, because the action of demanding or asserting a permission is performed from the code within an assembly, there is more scope for CLR security to be sensitive to what the surrounding code is doing, in a way that is not really possible for native security. Take the FileDialog permission as an example: this permission grants code the right to access a file that has been specified by the user in a File Open or File Save dialog, and it can be granted even where code does not in general have any FileIO permission. There is no way that that kind of sophisticated analysis - to grant permission to access a file based on the fact that that file has been identified by some code that has just executed - can realistically be performed by the operating system's security.
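
For example, a class wrapping a file dialog could guard access to the user's chosen file with code to this effect (a sketch of the idea rather than the actual framework source):

 using System.Security.Permissions;

 class FileDialogPermissionExample
 {
     static void DemandOpenViaDialog()
     {
         // Demand only the right to open a file the user has picked in a
         // dialog - no general FileIO permission is required of callers.
         FileDialogPermission perm =
             new FileDialogPermission(FileDialogPermissionAccess.Open);
         perm.Demand();
     }
 }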

Another point worth noting is that neither the CLR's security nor Windows security operates in all situations. In particular, CLR security only works for managed code and won't, obviously, give you any protection against the actions of unmanaged code (though it will prevent managed code from invoking unmanaged code without the Allow Calls To Unmanaged Code subpermission). Windows security on the other hand is only operative on Windows NT/2K/XP. In addition, Windows file permissions are only effective on NTFS-formatted partitions (the CLR's file-related permissions will work on any partition, since they are implemented within the CLR and not based on information stored with individual files and folders).

Managing Security Policy

The security policy for the CLR is stored, like most other CLR configuration information, in a set of XML files. This does mean that in principle you can (provided you have the appropriate rights) change security policy by directly editing these files. However, due to the risk of breaking the CLR by introducing formatting errors into these files, it is recommended that this should only be done as a last resort. Instead, there are two tools available which will edit these files on your behalf.

  • The .NET configuration tool, mscorcfg.msc. As we saw in Chapter 4, mscorcfg is not intended solely for manipulating security policy - it can control some other aspects of CLR configuration - but security policy is where this tool is at its most powerful. This is the tool we will mostly use in this chapter, since its rich user interface is very helpful for understanding the principles of .NET security.

  • caspol.exe (the name stands for "Code Access Security Policy") is a command-line tool that implements similar features, though caspol is only able to modify code access security, not role-based security. The user interface for caspol is not particularly friendly, but it has the advantage that, being a command-line tool, it can be called up from batch files, which simplifies the process of modifying security on a large number of machines (just distribute and run the batch file). We won't be using caspol significantly in this chapter, but its various command-line options are documented in MSDN.

Although caspol and mscorcfg do provide a relatively rich set of features, neither tool is comprehensive - which is why for some specialized tasks you will need to edit the XML files directly. We're not going to cover the format of the XML files here - you can fairly easily deduce that for yourself by examining the files. With .NET version 1.0 you can find these files at:

  • %windir%\Microsoft.NET\Framework\v1.0.3705\CONFIG\security.config

  • %windir%\Microsoft.NET\Framework\v1.0.3705\CONFIG\enterprisesec.config

  • User.config (this file, if present, will be stored in a folder specific to the individual user)

There are three files because the CLR's security policy works at three levels: the enterprise, the machine, and the user (in some situations, it's also possible to apply security at the AppDomain level). When you first install .NET, the default out-of-the-box policy only really defines machine-level policy - that is to say, a policy that applies to the individual computer. Network administrators may then, if they wish, add rules to the enterprisesec.config file. Users may also add rules to their user-level config file - these rules will only apply to the individual user. When the CLR evaluates whether some code is allowed to perform a task, it first checks all three policies, and calculates the intersection of the policies. Hence managed code can normally perform some task only if all policy levels allow the operation.
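
In terms of the PermissionSet class we met earlier, the combination works conceptually like this (a sketch of the set algebra, not the CLR's actual internal code; the three parameters are hypothetical grants computed at each policy level):

 using System.Security;

 class PolicyIntersectionExample
 {
     static PermissionSet EffectiveGrant(PermissionSet enterprise,
                                         PermissionSet machine,
                                         PermissionSet user)
     {
         // Intersect may return null when the result is empty, so guard.
         PermissionSet result = enterprise.Intersect(machine);
         return (result == null) ? null : result.Intersect(user);
     }
 }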

Beyond the differences I've noted above, all three policy levels function in the same way, using the same XML format to define code groups, permission sets, and permissions (or, for role-based security, principals and roles). Since in this chapter I want to focus on how CLR security infrastructure works, and I don't want to get bogged down in questions of domain administration, I'm going to concentrate exclusively on working with the machine policy.


