Examining the .NET Compact Framework in Detail
The .NET Compact Framework does not exist in a vacuum. It rests upon three layers of technology: the Common Language Runtime, the Just-In-Time compiler, and the Windows CE operating system. Figure 2.1 is a block diagram that describes how an application that targets the .NET Compact Framework interacts with the .NET Compact Framework and the layers below.
Figure 2.1. This block diagram depicts the .NET Compact Framework architecture.
Figure 2.1 gives us a launching pad for discussing the architecture of the .NET Compact Framework. The discussion starts at the deepest level and works upward.
The first layer is the hardware itself, which is composed of the CPU and main memory. Typical devices also include a video adapter with touch screen, network connectivity, sound hardware, and so on.
All of the hardware is controlled by the second layer, the Windows CE operating system. Windows CE provides memory management routines and a program loader. The program loader pushes an executable into memory and launches it. Windows CE also provides services for drawing windows and reacting to GUI events, handles network connectivity, and manages a host of other responsibilities.
The Common Language Runtime
The next layer above the Windows CE operating system is the Common Language Runtime, or CLR. Applications written in traditional languages, such as C or C++, target Windows CE as a platform. They are compiled into code understandable by the CPU on the device, but they rely on Windows CE to load them and provide services, such as drawing windows and reacting to GUI events.
The CLR is the platform that "managed" applications target. Standard "native" applications are compiled to machine code directly understandable by the device's CPU. In contrast, managed applications are compiled into Microsoft Intermediate Language bytecodes, or IL: an intermediate format that loosely resembles machine code for a CPU architecture that does not physically exist.
Code that is compiled to IL is referred to as managed code. Similarly, a managed EXE is an EXE file composed entirely of IL code, and a managed DLL is a DLL file composed of IL code. A very important fact about managed code is that it cannot be directly executed by an existing CPU. It must be translated into native code first.
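To make the distinction concrete, here is a trivial C# method alongside the IL a compiler typically emits for it. The disassembly, abbreviated here in comments, is the sort of output a tool such as ildasm.exe displays:

```csharp
// A trivial managed method written in C#.
static int Add(int a, int b)
{
    return a + b;
}

// The IL emitted for Add (abbreviated) is stack-based code for a CPU
// that does not physically exist:
//
//   .method private hidebysig static int32 Add(int32 a, int32 b)
//   {
//     ldarg.0    // push the first argument onto the evaluation stack
//     ldarg.1    // push the second argument
//     add        // pop both values, push their sum
//     ret        // return the value on top of the stack
//   }
```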
The CLR's job is to execute managed code. In order to do so, the managed code must be rendered into a native form understandable by the CPU. The CLR performs these three basic steps to transform managed code into native code and execute it:
1. The IL code must be loaded into memory from a filesystem.
2. Some or all of the IL must be translated into native code so that the CPU can actually execute it. This includes handling housekeeping issues, such as threads, exceptions, argument passing, and so on.
3. If the IL code refers to code that resides in a DLL, then the DLL must be located and loaded. Then the correct portions of the DLL must be translated into native code and executed.
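The three steps above can be sketched in pseudocode. The types and method names below are invented purely for illustration; the real CLR exposes no such API to applications:

```csharp
// Pseudocode sketch of the CLR's execution pipeline (illustrative only):
//
//   byte[] il = Loader.ReadFile("App.exe");          // 1. load IL from the filesystem
//   NativeCode main = Jit.Compile(il, "Main");       // 2. translate IL to native code
//   main.Invoke();                                   //    ...and execute it
//
//   // 3. the first time the app uses code from Helper.dll:
//   byte[] dllIl = Loader.ReadFile("Helper.dll");    //    locate and load the DLL
//   NativeCode work = Jit.Compile(dllIl, "DoWork");  //    JIT only the portions used
//   work.Invoke();
```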
Transforming Managed Code to Native Code with the Just-In-Time Compiler
The Just-In-Time (JIT) compiler is responsible for translating managed code into native code so that a managed program can execute. There are two JIT compilers available with the .NET Compact Framework: the "SJIT" and the "IJIT."
The IJIT compiler is available for every CPU that is supported by the .NET Compact Framework: ARM, MIPS, SHx, and x86. The IJIT compiler is the simpler and faster of the two JITs. Thus, although the IJIT compiler compiles managed code into native code quickly, the resulting native code is not as efficient as that produced by the SJIT compiler.
The SJIT compiler is available only for ARM processors. The ARM is the most common processor for the Pocket PC platform, and it is thus likely to represent the largest fraction of the .NET Compact Framework customer base. The SJIT compiler is very heavily optimized to take advantage of the ARM processor. Although compiling managed code with the SJIT compiler takes longer than compiling with the IJIT compiler, the resulting native code can run up to twice as quickly.
By default, the IJIT compiler is used only on platforms for which the SJIT compiler is unavailable. This makes sense because if your device is not under memory pressure, then the time spent JITing code with either compiler is insignificant compared to the time spent executing code. Thus, you want the SJIT compiler whenever it is available.
An important part of writing well-performing applications is avoiding excess JIT activity. Both JIT engines compile code only as it is needed, on a method-by-method basis. That is, if a class contains 100 methods but only 10 of them are ever executed by an application, then only those 10 are ever JITed by the CLR. Once a method is JITed, the CLR attempts to retain the native code in memory for the life of the application. Thus, the cost of JITing a method is normally paid only once, regardless of how many times the method is executed.
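The pay-once behavior can be observed with a crude timing sketch. Actual numbers vary by device, and Environment.TickCount has coarse resolution, so treat this as illustrative only:

```csharp
using System;

class JitTimingDemo
{
    // A method whose one-time JIT cost we want to observe.
    static int Work(int n)
    {
        int sum = 0;
        for (int i = 0; i < n; i++)
        {
            sum += i;
        }
        return sum;
    }

    static void Main()
    {
        int start = Environment.TickCount;
        Work(1);                                  // first call: JIT, then execute
        int firstCall = Environment.TickCount - start;

        start = Environment.TickCount;
        Work(1);                                  // later calls reuse the native code
        int laterCall = Environment.TickCount - start;

        // On a device, firstCall is typically larger than laterCall because
        // the IL-to-native translation happens only on the first invocation.
        Console.WriteLine("first: {0}ms, later: {1}ms", firstCall, laterCall);
    }
}
```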
The CLR retains all JITed code in memory for as long as possible. It will discard JITed code on a method-by-method basis if the device encounters memory pressure. This is called code pitching. If the JITed code for a method is pitched and then the method is called again, then the method's IL code must be JITed all over again. The CLR pitches code based on how recently each method was executed; the code that was least recently executed is pitched first.
CONSIDER YOUR DEVICE AUDIENCE
The discussion comparing the SJIT and the IJIT underscores the fact that you must consider your device audience when writing an application. Will all of your users have ARM-based devices? Can you assume a certain level of performance on all devices? You can get yourself into trouble if you assume that the performance you see on your 400MHz ARM-based Pocket PC device is the same as what a customer with a 200MHz MIPS-based device will see. It is generally invalid to compare different architectures simply by comparing clock speeds. Even if comparing clock speeds were a good way to extrapolate performance, the 200MHz MIPS device will probably run managed code at less than half the speed of the 400MHz ARM device because it uses the IJIT instead of the SJIT.
In extreme situations, applications can end up in a scenario where a method is re-JITed each time it is called in a loop. For very complex applications running under memory pressure, this is a plausible scenario that will horribly impact performance. This problem is analogous to running too many applications on a computer with insufficient memory. Pages of memory are swapped to disk, and if the memory pressure is severe enough, then the computer spends most of its time swapping pages instead of executing program code.
There are some simple steps you can take to avoid this situation. If you are writing a custom application to deploy to your company's workforce, equip the devices that will run the application with enough memory. Also, note that the default behavior when closing an application on the Pocket PC is to retain it in memory. In this state, applications still consume memory. To see what applications you have running on a Pocket PC, follow these steps:
1. Select Start, Settings.
2. At the bottom of the Settings window, click the tab labeled System.
3. Click the Memory icon.
4. At the bottom of the window, click the Running Programs tab. You will see a list of programs that are currently running and consuming memory.
You can highlight a program and click Stop to terminate it. You can click Stop All to terminate all programs.
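On a related note, you can keep your own application from lingering in this minimized-but-resident state. Setting a main form's MinimizeBox property to false replaces the smart-minimize (X) caption button with an "ok" button that genuinely closes the form, so the application releases its memory when dismissed. A minimal sketch (the form class name and title are illustrative):

```csharp
using System.Windows.Forms;

public class MainForm : Form
{
    public MainForm()
    {
        this.Text = "My Application";

        // With MinimizeBox set to false, the caption button closes the
        // application outright instead of minimizing it to the background.
        this.MinimizeBox = false;
    }
}
```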
There are other tricks and tools, in addition to those just listed, that can help developers determine whether code pitching is occurring and where in the code it happens. For example, performance counters are very effective tools for determining what causes an application to perform poorly. Chapter 15, "Measuring the Performance of a .NET Compact Framework Application," describes such tools in detail.
Comparing the .NET Compact Framework CLR to the Desktop CLR
The architecture of the CLR for the .NET Compact Framework is markedly different from that of the desktop .NET Framework. For example, the CLR for the .NET Compact Framework is built upon a Platform Abstraction Layer (PAL) that abstracts away differences in hardware from the rest of the CLR. To port the CLR to a new platform, one needs only to change the PAL on which the CLR rests and create a JIT for the target CPU, if it is different from all existing supported CPUs. This approach gives the .NET Compact Framework flexibility and nimbleness in keeping up with evolving mobile hardware. However, there are no publicly available white papers or utilities to allow third parties to port the CLR to a new hardware platform.
Another difference between the CLR for the .NET Compact Framework and the desktop is how JITed code is handled. On the desktop, JITed code is retained even after a managed program exits, speeding up its load the next time. It is only re-JITed if the managed application's IL changes. The CLR for the .NET Compact Framework stores JITed code only for the lifetime of the application. The next time the application is launched, JITing must occur again.
The CLR that supports the .NET Compact Framework is similar to the one supporting the desktop .NET Framework in that it supports the notion of assemblies and application domains. An assembly is the atomic unit by which an application is identified. For example, a managed EXE file or a managed DLL file is each an assembly. Each assembly contains metadata inside the binary that describes its structure, and it can contain a signature that allows the CLR to detect whether the assembly has been tampered with.
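A program can inspect its own assembly identity through this metadata. A minimal sketch using reflection:

```csharp
using System;
using System.Reflection;

class AssemblyInfoDemo
{
    static void Main()
    {
        // Read the identity recorded in the running assembly's metadata.
        Assembly asm = Assembly.GetExecutingAssembly();
        AssemblyName name = asm.GetName();

        Console.WriteLine(name.Name);     // the assembly's simple name
        Console.WriteLine(name.Version);  // the version stored in the manifest
    }
}
```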
The desktop CLR supports the notion of an assembly that is composed of more than one file. The CLR for the .NET Compact Framework does not support this feature. This can have important implications for developers who are trying to port managed code from a desktop application to a device. For example, if an assembly for a desktop component is composed of three individual DLLs, then the code must be merged into a single DLL when it is ported to the .NET Compact Framework.
Another concept common to both CLRs is that of an application domain, which is a reasonably secure container in which each managed program runs. In a modern operating system with protected memory, each stand-alone process is completely insulated from all others. The language used to write the program running in the process may allow developers to use pointers to wreak havoc on the process memory space. However, because the process lives in its own virtual address space, it cannot touch the virtual address space of other applications.
Similarly, the CLR enforces insulation between the application domains running within it. This level of insulation is cheaper than having separate processes, because the CLR actually operates as one process. Thus, application domains do live in the same virtual address space. This means it is technically possible for code in one application domain to write into memory being used by another application domain.
The CLR can prevent this behavior in virtually all cases because, with few exceptions, IL code can be verified to be type safe. There is no possibility of pointer use going awry and accessing unexpected locations if the language has no notion of a pointer and if type references are verified by the CLR.
The C# language allows the direct manipulation of pointers if you embed the pointer activity in an unsafe block. One of the reasons the C# designers chose unsafe as the keyword to delineate such a block is that it makes it possible for the C# code to escape the CLR type-checking mechanisms and access unexpected memory locations. There are times when using unsafe code is completely necessary. For example, unsafe code can help when calling into native code with custom marshalling routines (see Chapter 12). Otherwise, unsafe managed code should be avoided and used only when truly necessary.
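The following sketch shows the mechanics. It must be compiled with the /unsafe compiler switch, and it exists purely to illustrate the syntax, not to endorse the practice:

```csharp
using System;

class UnsafeDemo
{
    static void Main()
    {
        int value = 42;

        unsafe
        {
            int* p = &value;   // taking an address requires an unsafe context
            *p = 99;           // a direct pointer write the verifier cannot check
        }

        Console.WriteLine(value);   // prints 99
    }
}
```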
The .NET Compact Framework Class Libraries
We now have enough of an understanding of the underlying technologies involved with the .NET Compact Framework to concisely define the .NET Compact Framework itself. The .NET Compact Framework is a set of DLL files containing classes capable of performing all of the useful things that the rest of this book is devoted to. The DLL files are written mostly in managed code, although a handful of them interact through native code directly with the Windows CE operating system.
Out of the box, developers can use the Smart Device Extensions in Visual Studio 7.1 to target the .NET Compact Framework by using the C# and Visual Basic .NET programming languages. Third-party languages that can also target the .NET Compact Framework may become available in the future.
The specific DLLs that comprise the .NET Compact Framework, and the roles they play, are outlined next:
Mscorlib.dll: This library comprises one part of the base classes. It holds the fundamental data type classes, such as String, Int32, and so on, and the classes for core namespaces such as System.Collections, System.IO, and System.Threading. Some of the classes held in this library require calling into native code, but they present to managed code developers the illusion of being managed-code-only classes.
System.dll: This library holds another portion of the base classes, including the networking classes of the System.Net namespace. It also contains supporting types used throughout the rest of the framework.
Microsoft.VisualBasic.dll: This library includes helper methods and objects for Visual Basic programs, such as the ErrObject class that is used in Visual Basic exception handling. All Visual Basic projects must reference this DLL.
System.Windows.Forms.dll: This library holds most of the classes in the System.Windows.Forms namespace. These classes provide common GUI elements, such as buttons, list boxes, labels, and so on. Chapter 3, "Designing GUI Applications with Windows Forms," discusses how to use these classes to create professional-looking applications for Windows CE by using the .NET Compact Framework. Note that not all of the WinForms classes available on the desktop are available in the .NET Compact Framework. Additionally, many of the WinForms classes on the .NET Compact Framework are missing some of the methods and fields available on their desktop counterparts.
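As a small preview of what Chapter 3 covers in depth, here is a minimal sketch of a form built from these classes; the control names, text, and layout are illustrative:

```csharp
using System;
using System.Drawing;
using System.Windows.Forms;

// A minimal Windows Forms sketch: one form containing a label and a
// button whose Click event updates the label.
public class HelloForm : Form
{
    private Label label;

    public HelloForm()
    {
        this.Text = "Hello";

        label = new Label();
        label.Text = "Not clicked yet";
        label.Bounds = new Rectangle(10, 10, 200, 20);

        Button button = new Button();
        button.Text = "Click me";
        button.Bounds = new Rectangle(10, 40, 100, 25);
        button.Click += new EventHandler(OnButtonClick);

        this.Controls.Add(label);
        this.Controls.Add(button);
    }

    private void OnButtonClick(object sender, EventArgs e)
    {
        label.Text = "Clicked!";
    }

    static void Main()
    {
        Application.Run(new HelloForm());
    }
}
```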
System.Windows.Forms.DataGrid.dll: This library holds the DataGrid class, a GUI component that makes it easy to show the contents of a database in a structured, tabular format by using only a few lines of code. It is used in conjunction with the ADO.NET classes, and it is described in detail in Chapter 6, "ADO.NET on the .NET Compact Framework."
System.Drawing.dll: This library includes the API for low-level drawing methods, such as for drawing lines, circles, squares, images, and so on. Game developers will use this library often.
Microsoft.WindowsCE.Forms.dll: This library handles the implementation of the soft input panel (SIP) and the MessageWindow class, which allows developers to directly handle the messages from the Windows CE message pump.
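A minimal sketch of the usual pattern: derive from MessageWindow and override WndProc to observe raw messages. The constant shown is the standard WM_USER base value for application-defined messages:

```csharp
using Microsoft.WindowsCE.Forms;

public class MsgSink : MessageWindow
{
    // WM_USER is the base value for application-defined window messages.
    private const int WM_USER = 0x0400;

    protected override void WndProc(ref Message msg)
    {
        if (msg.Msg == WM_USER)
        {
            // React to the custom message here.
        }

        // Pass every message on for default processing.
        base.WndProc(ref msg);
    }
}
```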
System.SR.dll: This library holds string resources for exceptions. You can choose not to ship this library with an application to save space on a device. If you make this choice and an exception is thrown, then no textual information will be included with the exception.
System.Data.dll: This library holds the implementation for classes in the System.Data namespace. The central class in this namespace is the DataSet, which is described in Chapter 6. The DataSet is capable of XML serialization, as described in Chapter 8, "XML and the Dataset," and it can interact with the SQL CE database engine, as described in Chapter 7, "Programming with Microsoft SQL Server CE."
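A minimal sketch of building a DataSet in memory and serializing it to XML; the table layout and file path are illustrative:

```csharp
using System.Data;

class DataSetDemo
{
    static void Main()
    {
        // Build a small in-memory DataSet with one table.
        DataSet ds = new DataSet("Contacts");
        DataTable people = ds.Tables.Add("Person");
        people.Columns.Add("Name", typeof(string));
        people.Columns.Add("Age", typeof(int));
        people.Rows.Add(new object[] { "Alice", 30 });

        // The DataSet can serialize itself to XML and load it back later.
        ds.WriteXml(@"\My Documents\contacts.xml");
    }
}
```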
System.Xml.dll: This library holds the classes in the System.Xml namespace. Chapter 10, "Manipulating XML with the XmlTextReader and the XmlTextWriter," covers them in detail. The classes available in the .NET Compact Framework version of System.Xml are a subset of those available on the desktop.
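A minimal sketch of the two classes working together, writing a fragment of XML to an in-memory buffer and reading it back:

```csharp
using System;
using System.IO;
using System.Xml;

class XmlDemo
{
    static void Main()
    {
        // Stream XML out with XmlTextWriter...
        StringWriter buffer = new StringWriter();
        XmlTextWriter writer = new XmlTextWriter(buffer);
        writer.WriteStartElement("device");
        writer.WriteAttributeString("cpu", "ARM");
        writer.WriteEndElement();
        writer.Close();

        // ...then pull it back in with XmlTextReader.
        XmlTextReader reader = new XmlTextReader(new StringReader(buffer.ToString()));
        reader.Read();                                    // positions on the <device> element
        Console.WriteLine(reader.GetAttribute("cpu"));    // prints ARM
    }
}
```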
System.Web.Services.dll: This library holds classes in the System.Web.Services namespace that provide the ability to interact with XML-based Web services. This feature is described in Chapter 9, "Using XML Web Services."
System.Net.IrDA.dll: This library holds classes in the infrared-related namespaces that enable communication by using a device's infrared (IR) port. The central class is the IrDAClient class, which is described in detail in Chapter 5, "Network Connectivity with the .NET Compact Framework."