At this point, it is worth taking a look at the reasons for the development of the .NET Framework and the features that it has that are of particular interest when writing code for the .NET Micro Framework. If you have considerable experience with the Framework, you may be familiar with the terms and concepts discussed in this chapter, but if you are new to the .NET Framework, we recommend that you pay close attention to this chapter, because it describes a fundamental change in the way that programs are created and executed. It also covers how an object-based environment can be made to interact with physical devices, which might be of interest even to experienced .NET users.
From the perspective of the programmer, three aspects should be considered when looking at platforms for code development:
Writing software for the platform should be easy.
The software should run quickly on the platform.
The software should not be able to damage the integrity of the platform.
In this section, we will see how the .NET Framework addresses all these issues and how the .NET Micro Framework fits in with all this.
The .NET Micro Framework is a fully accredited member of the .NET club in that it behaves according to the standards that are laid down for .NET. However, as we shall see, it does this in its own particular way, and this has some impact on the way that we write programs for it.
To understand all this, you must first take a look at managed code in the context of how programs are created and executed. Figure 2-1 shows how programs have been executed through the ages. In "ancient times," programs were written in assembler (another name for the source of a program written using machine code) and then loaded directly onto the computer hardware in the form of machine code.
Figure 2-1: Program execution through the ages.
In assembler, one assembler instruction is usually mapped to one machine code instruction. The programmer had to create complex behaviors by stringing these individual instructions together. The good news was that this gave the programmers maximum control of the hardware and the opportunity to create the most efficient programs possible, because they mapped their operation directly onto the hardware of the machine. The bad news was that with great power came great responsibility.
Any badly written code is quite capable of feeding incorrect instructions into the processor, causing it to fail. In the worst case, this can result in the much-feared "Blue Screen of Death." Assembly code is still used when you need to write software that maps most closely onto hardware and gets the greatest performance, but the care and attention to detail required means that this is an expensive exercise. Examples of situations in which we might need to get "close to the metal" are when writing code for device drivers and performance-critical parts of games.
In the Middle Ages of program execution, computers became more powerful, and compilers emerged. A compiler takes a program written in a high-level language and converts it into a sequence of machine code instructions. Rather than having to specify every individual step of an operation, programmers could use higher-level constructions to describe what they wanted the computer to do.
Higher-level languages made writing programs much easier. The machine code that a compiler produced was often not as efficient as handwritten code, but using a high-level language did make a programmer far more productive. There was another advantage: programs in a high-level language could contain additional information (sometimes called metadata) so that the compiler could detect when the programmer did something inappropriate, for example, trying to manipulate a string as if it were an integer, and then refuse to compile such invalid code. This meant that the underlying system could start to take responsibility for some kinds of error checking. Unfortunately, even programs written in a high-level language can still fail catastrophically. One reason for this is that all the metadata describing the original code is discarded when the program actually runs, so the underlying system has no way of knowing whether a particular action is valid.
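As a small illustration of this kind of compile-time checking, consider the following C# fragment (the variable names are invented for illustration); the compiler uses the type information in the source to reject the invalid statement before the program ever runs:

```csharp
string name = "lamp";

// int count = name * 2;        // compile-time error: operator '*'
                                // cannot be applied to 'string' and 'int'

int length = name.Length * 2;   // fine: Length is an int, so the
                                // multiplication is meaningful
```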
Now we are in the age of the .NET Framework and .NET Micro Framework, which provide a complete environment for the execution of programs. These Frameworks still use a compiler, but the compiler does not generate code for a particular hardware device. Instead, a file containing an intermediate-language version of the program is produced. Intermediate language is a "halfway house" between a high-level language and machine code. The intermediate-language output from the compiler is stored in a file called an assembly. This is not to be confused with assembler, defined earlier in the chapter as another name for the source of a program written using machine code. An assembly file is just what the name implies: it is an assembly of many different parts. These parts include the actual compiled code, along with all the metadata that the compiler extracted. Thus the program can be executed with a lot more knowledge about what constitutes sensible behavior regarding how the data is to be manipulated. In this respect, we can say that the environment for the code is managed. The managed environment that .NET provides also extends to a standardized set of resources for the running code and automatic memory management.
The name for this underlying framework is the Common Language Infrastructure, or CLI. This provides the means by which the programs can be executed and also specifies how resources are to be provided to the programs as they run. This infrastructure and the C# programming language are the basis of international standards as managed by the European Computer Manufacturers Association (ECMA). Specification ECMA-334 defines the C# language, and ECMA-335 defines the architecture of the intermediate language, the metadata, and the library specification for the Common Language Infrastructure. The .NET Micro Framework supports only a subset of all the services described in the Common Language Infrastructure. Services have been discarded to reduce the memory requirements of the system.
We have already observed that programmers want their programs to run quickly as well as safely. In the full .NET Framework, good application performance is attained by a process called just-in-time (JIT) compilation. Just before each method is actually executed, the intermediate-language instructions from the assembly are converted into machine code for the host computer that will run the program. Once the code has been compiled, the user can look forward to a level of performance comparable with machine code. The only cost of this approach is the time taken to compile the program when it starts running, and this can be addressed by caching the machine code produced on a particular system. In the full .NET Framework, the Native Image Generation (NGEN) technology can be used to precompile a program for a particular host computer.
The just-in-time compilation process (but not the precompilation or caching) takes place even in the .NET Compact Framework on mobile devices. This highlights another advantage of the use of an intermediate language: it can be made to execute on a wide range of platforms.
When the .NET Micro Framework was contemplated, the goal was to get this infrastructure to work on a machine with very limited hardware resources: 512 KB of ROM, 384 KB of RAM, 8 KB of EEPROM, no flash memory, on a custom ARM7 ASIC running at 27 MHz. This put us in a bit of a fix. Having the safety of managed code on small devices is really useful to the embedded developer, but such small processors do not have the performance or memory resources to run all the components of the system. This means that in the .NET Micro Framework, the just-in-time compilation process does not take place. Instead, a compressed version of the intermediate language is transferred into the .NET Micro Framework-based device, in which the runtime system then interprets the instructions to run the program. Because no compilation is being performed, such a runtime system can be very small. Of course, the downside of this way of working is that performance is not on a par with the full system. The good news is that for the types of devices that are being based on this platform, this is not a limitation. The fact remains, however, that if real-time behavior in the realm of microsecond response times to interrupts is required, the .NET Micro Framework is not an appropriate platform.
However, the .NET Micro Framework does represent a huge step forward in the field of embedded development because it brings a powerful high-level programming language, the integrity of managed code, and a high-quality development and debugging environment to the field of embedded development. You can write your C# code using exactly the same tools as before and debug it in just the same way as before, except that now your code is running inside a tiny embedded device that will ultimately cost just a handful of change.
The .NET Framework represents resources in terms of objects that have behaviors and properties. This is carried into the .NET Micro Framework, where the various electronic connections supported by the hardware are represented in this way. This may be quite different from techniques that you have used previously, in that the hardware entities that a C# program will interact with are represented by software objects. Before we can discuss how this is done, we have to take a walk down memory lane and consider how programs actually communicate with physical devices.
If you have built programs that interact with hardware, you will be used to the concept of memory-mapped input/output. This is a mechanism by which hardware to be controlled is actually mapped onto particular memory locations in the address space of the computer.
We can regard the memory of a computer as a vast array of locations, numbered starting at zero and extending up to the limit of the memory space available. When a program runs, instructions are fetched from one part of memory (program code), and they cause changes to values held in another part of memory (variable storage). When a program is processing data, it is fetching and storing values to and from memory.
The process of incrementing a variable consists of fetching the contents of the memory location where the variable is stored into a processor register, adding one to the value in the register, and then storing the new value back into the place it was fetched from.
However, when a program wants to interact with the outside world, perhaps by reading a key code produced by the keyboard or by sending a sound level to the sound card, that program will interact with special memory locations that are connected to hardware that performs that particular function. Early computers made extensive use of this form of input/output, and it is still used for such devices as the screen display on some computer systems.
More recent processors have added the concept of input/output address space so that the hardware could be constructed with a separate addressable area within which devices could be accessed using special machine code instructions. Locations in the computer that can be used for input/output in this way are called ports.
To actually drive signal lines via a port, the programmer has to construct a pattern of bits (ons and offs) that is then written to the port. The designer of the hardware connects individual items to particular bits in a given input/output port so that, for example, writing 1 to the port would light up a given LED, writing 2 would light up a different LED, and writing 3 would light them both. When reading data, the program loads a pattern of bits from an input port and must then pick the bits of interest out of this value. The result is that controlling hardware required a certain amount of expertise in binary operators such as AND, shift, and OR.
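The following C# sketch shows the kind of bit manipulation this style of programming required. The bit assignments are invented for illustration; on real hardware they would be dictated by how the designer wired the devices to the port:

```csharp
const int LAMP_BIT    = 0x01;  // bit 0 drives the lamp LED
const int SPEAKER_BIT = 0x02;  // bit 1 drives the speaker

int portValue = 0;             // pattern to be written to the output port

// Turn the lamp on without disturbing any other bits.
portValue |= LAMP_BIT;

// Turn the speaker off, leaving the lamp alone.
portValue &= ~SPEAKER_BIT;

// Pick a single bit out of a value read from an input port
// by shifting and masking.
int input = 0x0A;                         // binary 1010
bool bit3Set = ((input >> 3) & 1) == 1;   // true: bit 3 of 0x0A is set
```

Every interaction with the hardware involves this kind of masking and shifting, which is exactly the expertise that the object-based approach described next removes the need for.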
In the .NET Micro Framework, hardware is represented completely differently. Although the underlying processor and the hardware attached to it may still be implemented as described earlier, the way in which a programmer uses this functionality is not by patterns of bits and ports. Instead, hardware devices are represented as objects. This means that we no longer have to create bit patterns to get the required behavior; instead, we create instances of objects that are then requested to perform the required task.
A software object is a lump of code that contains data in the form of properties and behavior in the form of methods. In a C# program, an object has a corresponding class definition that gives the runtime system instructions about how to create an instance of the class and what it can do. In the case of the port objects supplied with the .NET Micro Framework, these classes are provided by the creators of the underlying system. As users of these classes, we just have to create object instances and then interact with these instances to achieve the required behaviors. This means that our code will never actually interact directly with a piece of hardware; instead, it will interact with a software object that represents it.
This is directly analogous to asking a chef to prepare me a meal rather than operating the cooker myself. As long as the requested food is provided, I do not need to know or care how this was done; it is of no interest to me whether the chef cooked the meal or went out and bought it. In the same way, the object hides the true nature of the hardware from my code. We will see this ability used to particularly good effect in later chapters, where we will explore the way in which you can create emulations of hardware against which you can test your code.
Figure 2-2 shows how this works. The software is controlling a device that contains two output devices, a lamp and a speaker. The C# code will contain two references, called speaker and lamp. These have been set to refer to instances of the OutputPort class that each expose a method called Write. The Write method accepts a single Boolean value as its parameter. When these class instances are created, they are assigned a particular pin to interact with. In the case of Figure 2-2, the speaker has been assigned to Pin 1 and the lamp to Pin 0. The class instances use machine code routines to actually interact with the hardware pins and control the electrical signal to the lamp and speaker. However, as far as we are concerned, the act of turning the lamp on is that of a simple method call on our reference:
lamp.Write(true); // light the lamp
Figure 2-2: Object instances and hardware.
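Putting this together, the setup shown in Figure 2-2 might be expressed in C# along the following lines. This is a sketch: `Cpu.Pin.GPIO_Pin0` and `Cpu.Pin.GPIO_Pin1` are stand-ins for whatever pins the lamp and speaker are wired to on a particular device:

```csharp
using Microsoft.SPOT.Hardware;

// Each OutputPort instance is bound to a hardware pin when it is
// constructed; the second argument gives the initial state of the line.
OutputPort lamp    = new OutputPort(Cpu.Pin.GPIO_Pin0, false);
OutputPort speaker = new OutputPort(Cpu.Pin.GPIO_Pin1, false);

lamp.Write(true);     // light the lamp
speaker.Write(true);  // drive the speaker line high
```

Note that no bit patterns appear anywhere: once the pin is bound to the object, the class instance takes care of the underlying hardware access.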
So far, we have seen only how to write output to a port. The .NET Micro Framework also provides a class called InputPort that provides methods that can be used to read the state of the port. In order to support interrupts, there is also a class called InterruptPort, which is based on InputPort. By using delegates (references to methods), we can nominate code to be executed when an event occurs and thus create programs that can respond to interrupts.
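As a sketch of how a delegate is attached to an InterruptPort, consider the following; the pin number, resistor mode, and edge choice are illustrative, since they depend on how your hardware is wired:

```csharp
using System;
using Microsoft.SPOT.Hardware;

// Bind an interrupt-capable input pin, filtering out switch glitches
// and firing on the falling edge of the signal.
InterruptPort button = new InterruptPort(
    Cpu.Pin.GPIO_Pin2,                    // illustrative pin choice
    true,                                 // enable the glitch filter
    Port.ResistorMode.PullUp,
    Port.InterruptMode.InterruptEdgeLow);

// Nominate the method to run when the interrupt occurs.
button.OnInterrupt += new NativeEventHandler(ButtonPressed);

static void ButtonPressed(uint port, uint state, DateTime time)
{
    // This code runs each time the button pulls the line low.
}
```

The delegate mechanism means that our program does not have to poll the input; the runtime calls our method when the hardware event happens.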
The use of objects to represent hardware devices in this way brings significant benefits in terms of flexibility and program safety. It is an example of how the availability of low-cost, high-performance processors can be used not just to widen the range of possible applications for programmable devices, but also to ease their development and produce more reliable products.