IL Validation and Verification


When an assembly is executed on the Common Language Runtime, its IL is compiled on a method-by-method basis, as needed, just prior to method execution. This form of compilation immediately before code execution is aptly called Just-In-Time (JIT) compilation. IL itself never actually gets run; it is an intermediary between compilers emitting assemblies and the Common Language Runtime generating and executing native code. Consequently, all IL code that gets invoked ends up being compiled into, and executed as, native code of the platform on which the CLR runs. Running native code, however, is inherently dangerous. For example:

  • Unmanaged, native code has direct memory access at least throughout the process it is running in, so it can subvert the isolation of application domains (the CLR's "processes").

  • Using pointer arithmetic, native code can break type contracts. For example, a class can use pointer arithmetic to read or overwrite the private fields of its superclass.

Consequently, before any IL causes native code to be run, it must pass scrutiny that disallows exploits of the sort previously described.

The Common Language Runtime contains two forms of checks for the IL in an assembly: IL validation and IL verification. We'll now have a look at both of these checks and then examine the repercussions of running and distributing assemblies that do not pass IL verification.

IL Validation

IL validation is the process of checking that the structure of the IL in an assembly is correct. IL validity includes the following:

  • Checking that the bytes designating IL instructions in the IL stream of an assembly actually correspond to known IL instructions. Not all possible combinations of bytes in the IL stream of an assembly correspond to legal, valid IL instructions, although the IL emitted by compilers normally does.

  • Checking that jump instructions do not stray outside of the method in which they are contained.

IL validity checks are always carried out by the Common Language Runtime. If an assembly contains invalid IL, the JIT compiler wouldn't know what to do with it. Consequently, any assembly failing IL validity tests is prevented from executing on the Common Language Runtime. Fortunately, IL validity is nothing you have to worry about when writing code using one of the many compilers targeting the .NET Framework. It is the compiler's job to produce valid IL sequences. Typically, failures of assemblies to meet these tests are caused by file corruption or compiler bugs.
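To make the preceding checks concrete, the following is a minimal, hypothetical C# sketch of what such validity checking amounts to. It is not the CLR's actual implementation; the MethodBody type and the tiny three-entry opcode table (nop, ret, br.s) are invented here for illustration only.

 // Hypothetical sketch of IL validation; the real validator is far more thorough.
 using System;
 using System.Collections.Generic;

 class MethodBody
 {
     public byte[] ILStream;                    // raw IL bytes of one method
 }

 static class Validator
 {
     // one-byte opcode -> operand size in bytes (tiny invented subset)
     static readonly Dictionary<byte, int> KnownOpcodes =
         new Dictionary<byte, int>
         {
             { 0x00, 0 },   // nop
             { 0x2A, 0 },   // ret
             { 0x2B, 1 },   // br.s <int8 offset>
         };

     public static bool IsValid(MethodBody body)
     {
         byte[] il = body.ILStream;
         int pos = 0;
         while (pos < il.Length)
         {
             int operandSize;
             if (!KnownOpcodes.TryGetValue(il[pos], out operandSize))
                 return false;                  // byte is not a known instruction
             if (pos + 1 + operandSize > il.Length)
                 return false;                  // operand runs past the end of the method
             if (il[pos] == 0x2B)               // branch: validate its target
             {
                 int target = pos + 2 + (sbyte)il[pos + 1];
                 if (target < 0 || target >= il.Length)
                     return false;              // jump strays outside the method
             }
             pos += 1 + operandSize;            // advance to the next instruction
         }
         return true;
     }
 }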

Let us now focus on the other IL-related set of checks carried out by the Common Language Runtime: IL verification.

Verifiability and Type Safety

Every type contained in an assembly is a form of contract. Take the following class as an example:

 class test
 {
     private int X;
     public String Z;

     private void foo() { ...}
     public float bar() { ...}
 }

This class defines two fields. X is an integer and should not be accessible outside the class. On the other hand, field Z, of type String, is publicly accessible. It is part of the public interface of class test. The class also defines two methods, foo and bar. foo is not intended to be callable from outside its enclosing class, whereas bar is declared to be publicly callable on test. The accessibility modifiers defined on the fields and methods of test are part of this class's contract. They define how state and functionality on its instances should be accessed.
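Incidentally, compilers for type-safe languages already enforce this contract at the source level. The following minimal C# sketch reuses the test class above; the Caller class is invented for illustration, and the commented-out lines would be rejected by the compiler because they touch private members:

 class Caller
 {
     static void Use(test t)
     {
         String s = t.Z;       // fine: Z is public
         float f = t.bar();    // fine: bar is public

         // The next two lines violate the contract and would not compile.
         // Verification exists to stop IL that tries to bypass the same
         // contract after compilation.
         // int x = t.X;       // error: X is private
         // t.foo();           // error: foo is private
     }
 }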

It is possible to form valid IL constructs that undermine the way types have defined and limited access to their own members. For example, valid IL allows the formation and use of arbitrary pointer arithmetic. With that, in the previous code example, it would be possible to read or overwrite X or to invoke foo. IL that contains such constructs is called type unsafe and is inherently dangerous to the security of the .NET Framework. For example, such IL could be compiled into native code that accesses private fields of the security infrastructure implementation, modifying the in-memory copy of the security policy that determines permission allocation for newly loaded and run assemblies. The following example shows just how this could happen without IL verification.

Suppose a system assembly of the .NET Framework implementing the security infrastructure defined a SecurityInfo type as follows:

 SystemAssembly::

 public class SecurityInfo
 {
     private bool m_fFullyTrusted;

     public bool IsFullyTrusted()
     { return (m_fFullyTrusted); }

     public static SecurityInfo GetSecurityInfo(Assembly a) { ...}
 }

Let's assume SecurityInfo is used to represent the security state of an assembly. The private field m_fFullyTrusted is used by the security infrastructure to denote whether the respective assembly has been granted full trust by the CAS policy system. Clearly, when security information is returned by the security system, this field should not be modifiable by any code except the security system code itself. Otherwise, an assembly could ask for its own security state and then happily grant itself full trust to access all protected resources. However, the following code is doing just that:

 HackerAssembly::

 public class SpoofSecurityInfo
 {
     public bool m_fFullyTrusted;
 }

 Main()
 {
     SecurityInfo si = SecurityInfo.GetSecurityInfo(
         System.GetMyAssembly());
     SpoofSecurityInfo spoof = si;  // type unsafe, no cast operation
     spoof.m_fFullyTrusted = true;  // full trust!!!
     FormatDisk();
 }

As you can see, the code declares its own security information type, which contains a Boolean field denoting whether an assembly has been fully trusted. Only here, the field has been declared as public. In the Main function, the hacker's assembly first gets the security information about itself from the security system by calling the static GetSecurityInfo function on the SecurityInfo class. The resulting SecurityInfo instance is stored in the variable si. However, immediately afterward, the content of si is assigned to the variable spoof, which is of type SpoofSecurityInfo. But SpoofSecurityInfo declares the field m_fFullyTrusted as public, thus allowing the hacker assembly to grant itself full trust to access all protected resources.

In this example, the type system is being broken: an instance of SecurityInfo should never have been assignable to a variable of type SpoofSecurityInfo. Verification comes to the rescue here. The following is the IL code that the IL verification checks would encounter when surveying the Main method of the hacker assembly:

 .locals init(class SecurityInfo si, class SpoofSecurityInfo sp)
 ...
 call class Assembly System::GetMyAssembly()
 call class SecurityInfo SecurityInfo::GetSecurityInfo(class Assembly)
 stloc si
 ldloc si
 stloc sp      // type unsafe assignment here!
 ldloc sp
 ldc.i1 1      // true value to be stored into field
 stfld int8 SpoofSecurityInfo::m_fFullyTrusted
 ...

The verification checks built into the JIT would not allow the stloc sp instruction to pass. Verification would determine that the types of the locals si and sp are not assignment compatible with each other and would not allow the type-unsafe IL to be compiled and executed.
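To illustrate the kind of bookkeeping involved, here is a small, purely conceptual C# sketch of a verifier tracking the static type on a simulated evaluation stack and rejecting a stloc into a local of an incompatible type. The MiniVerifier class and its data structures are invented for this example and do not mirror the CLR's internal implementation.

 using System;
 using System.Collections.Generic;
 using System.Security;

 // Simplified model: each local has a declared Type, and the verifier
 // tracks the Type currently sitting on the evaluation stack.
 class MiniVerifier
 {
     readonly Dictionary<string, Type> locals;
     readonly Stack<Type> evalStack = new Stack<Type>();

     public MiniVerifier(Dictionary<string, Type> localTypes)
     {
         locals = localTypes;
     }

     public void Ldloc(string name)
     {
         evalStack.Push(locals[name]);      // stack now holds that local's static type
     }

     public void Stloc(string name)
     {
         Type onStack = evalStack.Pop();
         Type target = locals[name];
         // Allow the store only if the value on the stack is assignment
         // compatible with the declared type of the target local.
         if (!target.IsAssignableFrom(onStack))
             throw new VerificationException(
                 "stloc " + name + ": " + onStack.Name +
                 " is not assignment compatible with " + target.Name);
     }
 }

With si declared as SecurityInfo and sp declared as SpoofSecurityInfo, the ldloc si / stloc sp pair from the listing above would fail the IsAssignableFrom test, which is the moral equivalent of the JIT refusing the stloc sp instruction.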

IL verification also prevents the unsafe use of pointers. If we again take the SecurityInfo class as an example, the following code, using pointer arithmetic, manages to overwrite the private state of the full-trust field:

 Main()
 {
     SecurityInfo si =
         SecurityInfo.GetSecurityInfo(System.GetMyAssembly());
     bool* pb = (*si + 4);   // unsafe assignment!!
     *pb = true;             // pb now points to the private field!
 }

The verification checks in the JIT compiler would see the following corresponding IL sequence:

 .locals init(class SecurityInfo si, bool* pb)
 ...
 ldloc si
 ldc.i4 4
 add
 stloc pb      // unsafe assignment
 ldloc pb
 ldc.i4 1      // true
 stind.i1      // *pb = true
 ...

The verification checks built into the JIT would, again, not allow the stloc pb instruction to pass. Verification would have tracked the fact that the address held in si had been modified by pointer arithmetic and would therefore disallow storing the result into pb.

Generally, type-unsafe IL breaks the CLR type system and undermines the ability to predict and mandate the behavior of types. Therefore, the Common Language Runtime does not allow any assembly containing type-unsafe IL constructs to run unless the assembly has been granted a high level of trust by the Code Access Security policy. To prevent assemblies containing type-unsafe constructs from running, Just-In-Time compilation is sprinkled with various checks that make certain no unsafe IL construct will be run unless CAS policy grants the respective assembly that tremendous privilege. A typical IL verification check, for instance, catches attempts to dereference an arbitrary pointer value, which could otherwise expose state held in a private field.

NOTE

For a complete list of verification rules checked during JIT compilation, please see the ECMA standard, Partition III, at http://msdn.microsoft.com/net/ecma.


The process of checking for type-unsafe constructs during Just-In-Time compilation is called IL verification. Unfortunately, no known algorithm can reliably separate all type-safe IL sequences from all type-unsafe ones. Instead, the CLR uses a set of stringent, conservative checks that are guaranteed to allow only type-safe IL to pass but may nevertheless reject some more esoteric constructs that are, in fact, type safe. Compilers that are declared to emit only verifiable code, such as the Microsoft Visual Basic compiler, limit their code generation to the set of type-safe IL constructs that will pass the IL verification checks built into the JIT. However, other common compilers, such as the Microsoft C# or Managed C++ compilers, allow you to create unverifiable assemblies. Let us now look at some of the repercussions of creating, running, and deploying unverifiable code.

Repercussions of Writing Unverifiable Code

It is possible to have the CLR Just-In-Time compile and execute unverifiable code. However, this operation is highly guarded by the Code Access Security system. Only assemblies that receive the permission to skip verification (expressed as a flag on SecurityPermission) are able to execute type-unsafe constructs. Default security policy grants this right only to assemblies from the local machine. Therefore, explicit administrative action is needed if you want to run unverifiable code from the Internet or an intranet. Unless the execution scenario for the assembly in question is strictly limited to a desktop application run from the local hard disk, it is not recommended that you use or distribute unverifiable assemblies. In addition to potentially requiring administrators to dangerously loosen security policy, unverifiable assemblies can also be a prime target for external security exploits, because type-unsafe programming constructs are a first-rate tool for producing unwanted side effects, such as buffer overruns. Thus, whenever possible, you should stay within the realm of verifiability and avoid creating type-unsafe assemblies, for example, through the use of the unsafe language features in C#.
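For illustration, the following minimal C# program (an invented example, not taken from the security infrastructure discussed above) uses the unsafe keyword to write through a raw pointer. It must be compiled with the /unsafe compiler switch, and the resulting assembly is unverifiable; under default policy it will therefore run only where the skip-verification right is granted, such as from the local machine:

 // compile with: csc /unsafe UnsafeSample.cs
 using System;

 class UnsafeSample
 {
     static unsafe void Main()
     {
         int value = 42;
         int* p = &value;      // taking a raw pointer makes the IL unverifiable
         *p = 1;               // writing through the raw pointer
         Console.WriteLine(value);
     }
 }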

NOTE

The .NET Framework SDK ships with the PEVerify tool (peverify.exe) that can be used to check the verifiability of an assembly. If an assembly passes the checks in this tool, it is guaranteed to pass the verification checks of the CLR's JIT compiler.
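For example, checking an assembly from the SDK command prompt looks roughly like this (the assembly name is a placeholder); the tool either reports success or lists the methods and IL offsets that fail verification:

 peverify MyAssembly.exe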

