CLR and Performance

The common language runtime (CLR) is the part of the .NET Framework that provides the management we refer to when we speak of .NET managed code. For .NET applications, CLR stands in for the Windows kernel, providing vital services such as loading, memory management and protection, exception handling, and the means to easily interoperate with other components and applications. In addition to reprising the features of a classic runtime environment, CLR also takes on the job of compiling .NET applications on the system where they will actually be running.

Microsoft's reasons for creating a new runtime environment go beyond the scope of this book, but many of the particular features and trade-offs of CLR's design are of immediate interest.

Microsoft Intermediate Language

The biggest difference between traditional applications and .NET applications is that .NET applications are not directly compiled into native instructions for the processor on which they will eventually run. Instead, .NET applications are compiled from any number of .NET languages (such as Visual Basic .NET, C++ .NET, or C#) into Microsoft Intermediate Language (MSIL), which is then packaged and distributed in the form of assemblies. An assembly is a file or set of files containing objects compiled into MSIL and a manifest that describes them.
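For instance, compiling a C# source file with the command-line compiler that ships with the .NET Framework produces an assembly containing MSIL rather than native instructions. (The file names here are hypothetical; a minimal sketch only.)

```shell
# Compile a hypothetical C# source file into a library assembly.
# The output, MyLib.dll, contains MSIL and a manifest describing
# the compiled types -- not native machine instructions.
csc /target:library /out:MyLib.dll MyLib.cs
```

The same source could instead be built with /target:exe to produce an executable assembly; either way, the MSIL inside is not compiled to native code until run time.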

NOTE
You can browse the contents of an assembly using the tool ildasm.exe, which Microsoft provides with the .NET Framework.
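For example, assuming an assembly named MyLib.dll, you might invoke ildasm.exe as follows:

```shell
# Open the assembly in ildasm's graphical browser to inspect
# its manifest and the MSIL for each method.
ildasm MyLib.dll

# Or dump the full disassembly to a text file instead.
ildasm MyLib.dll /out=MyLib.il
```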

With this design, code represented as MSIL can be analyzed and managed by CLR. The benefits include garbage collection, whereby CLR determines which objects in memory are no longer in use and automatically deallocates them, and memory type safety, meaning that CLR knows how a given object in memory is meant to be accessed and can verify in advance that no executable code will misuse it. In addition, managed code simplifies interoperability between applications and components written in different languages.

The Just-in-Time Compiler

Code written in MSIL is never executed directly. Instead, CLR uses a built-in compiler called the Just-in-Time Compiler (JIT) to generate native machine instructions for execution.

Code is typically compiled only as needed. When a process calls a method for the first time, the JIT steps in and compiles the method on the spot. (If another process later calls the same method, even another instance of the same application, that process must compile its own copy of the method as well.) One part of this process is verification, in which CLR verifies that the code is safe, meaning it only accesses objects in memory as they are intended to be accessed. After the code is compiled, execution proceeds from the address where the generated native instructions are located. Finally, when the process terminates, the native instructions that were generated are discarded.

This process provides a huge performance advantage when measured against classic Web applications written using ASP. Until now, classic ASP pages have been interpreted, carrying the overhead cost of interpreting code as it runs and never reducing that code to the more efficient compiled form that ASP.NET produces.

However, the case is not as clear-cut when measured against classic compiled applications. Compiling code at run time, instead of ahead of time, obviously incurs a performance impact. Microsoft has taken measures to minimize the impact, and in a few cases, JIT-compiled code can even outperform its unmanaged counterpart.

One performance benefit of compiling code at run time is that so much more is known about the operating environment at run time than the developer could possibly have known at design time. Certain optimizations may be available to the JIT based on the number of system processors and their individual features, as well as what other system resources are available and how they are being used at the time.

On the other hand, only a limited amount of optimization can be done before the time required to optimize the code has the potential to outweigh the benefit of optimization. Recognizing this, the JIT implements certain algorithms to avoid optimizations that are unlikely to save as much time as they cost to attempt.

NOTE
If you're interested in quantifying exactly how the JIT affects performance, you'll find a number of helpful performance counters in the .NET CLR Jit performance object.
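One way to sample these counters from the command line is with typeperf, a counter-logging tool included with Windows. (The instance name MyApp below is a placeholder for the process name shown in Performance Monitor.)

```shell
# Sample two counters from the .NET CLR Jit performance object
# for a hypothetical process instance named "MyApp".
typeperf "\.NET CLR Jit(MyApp)\% Time in Jit" "\.NET CLR Jit(MyApp)\# of Methods Jitted"
```

The same counters can be browsed interactively in Performance Monitor (perfmon) under the .NET CLR Jit object.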

The Pre-JIT Alternative

Included with the .NET Framework is the tool ngen.exe, used to compile assemblies from MSIL into native instructions at the time they are installed, in a process referred to as Pre-JIT. At first glance, Pre-JITting looks like the best of all worlds: why compile at run time when the compiler can still benefit from knowing the details of the system at install time?
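A typical install-time invocation looks something like the following sketch, again assuming a hypothetical assembly named MyLib.dll:

```shell
# Compile the assembly's MSIL to native code and store the
# result in the native image cache on this machine.
ngen MyLib.dll

# List the native images currently held in the cache.
ngen /show
```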

The truth is that the impact of JITting at run time is most noticeable when the application is first loaded. Since Web applications rarely reload, if ever, there's little reason to Pre-JIT them. Another reason not to Pre-JIT is that you miss out on the optimizations made possible by knowing the state of the system at run time.

NOTE
On the other hand, the JIT could afford to spend more time computing code optimizations at install time than it can at run time. The current version of .NET does not take advantage of this, but future versions may do so, possibly making Pre-JIT more suitable for Web-based applications.



Performance Testing Microsoft .NET Web Applications
ISBN: 596157134
Year: 2002
Pages: 67
