Compilers

Besides the obvious requirement of compiling code correctly, compilers can be measured on a number of other features, including:

  • speed of compilation

  • speed of the code produced

  • compactness of code produced

  • the degree of customization provided to the developer

The speed of a compiler is important during development, as code is continually being edited, compiled, and debugged. A good compiler will include options for at least three levels of compilation. Most compilers have a basic mode that does not produce any debug information or perform advanced optimizations; when compiling a large program during development, this option may be used for most of the code, as it is usually the fastest. For any code segments being actively debugged, the developer will want the compiler to produce additional debug information for use in the debugger. Programs typically take longer to compile in this mode because the compiler does extra work to produce the debug information. The slowest compilation speeds usually result when additional optimization levels are turned on. While optimization levels vary by compiler, in most cases you should look for a compiler that (a sketch of the corresponding flags follows the list):

  • offers several additional general-purpose optimization levels

  • can optimize for specific CPU instruction sets

  • can optimize for specific CPU architectures
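
As a concrete illustration, these compilation levels might map onto command-line options as follows. The options shown are GCC's and are used only as a representative example; other compilers spell the equivalent options differently.

    # Fast, unoptimized build used for most of the edit-compile-debug cycle
    gcc -O0 -c main.c

    # Debug build: emit extra symbol information for the debugger
    gcc -O0 -g -c main.c

    # Increasingly aggressive general-purpose optimization levels
    gcc -O1 -c main.c
    gcc -O2 -c main.c
    gcc -O3 -c main.c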

Because additional optimization levels may change the behavior of certain types of code, always test software that has been compiled at the optimization level you plan to use when distributing it. For instance, most Windows 95 software is compiled only with optimizations supported by a 386 CPU, because a 386 is the minimum CPU requirement for Windows 95. Conversely, someone writing high-performance code known to run only on the latest Pentium CPU would want to compile it with all Pentium-specific optimizations turned on, so that the code executes as fast as possible.
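
Using GCC's x86 options again as a representative (and era-appropriate) example, the trade-off looks like this; the exact option names vary by compiler and version, and the file name is a placeholder:

    # Generate code that runs on any 386 or later (the Windows 95 baseline)
    gcc -O2 -march=i386 -c hotloop.c

    # Generate Pentium-specific code; it will not run on earlier CPUs
    gcc -O2 -march=pentium -c hotloop.c

    # Compromise: schedule instructions for the Pentium while remaining 386-compatible
    gcc -O2 -march=i386 -mtune=pentium -c hotloop.c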

Most compilers work in a two-step fashion. First, the compiler translates the source code into object code, a machine representation of the actual CPU instructions to be executed. The second step is referred to as linking. In linking, the compiler (or linker) takes all the object code files produced in step one, links in any needed libraries (or at least links stubs to those libraries if dynamic linking is supported), and produces the final executable file. If a function is statically linked, the code for that function is included in the executable file. If a function is dynamically linked, only a code stub, not the actual function code, is included in the executable file; the referenced code is loaded only at execution time. Dynamic linking is typically used for operating system and system library functions that are assumed to always be present on the target platform. Static linking is typically used for user-developed code, since that code would not otherwise be present on the base platform.
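
The two steps, and the static/dynamic distinction, can be sketched with GCC and the standard math library (file names here are placeholders):

    # Step one: translate each source file into object code
    gcc -c main.c
    gcc -c util.c

    # Step two: link the object files into an executable. By default,
    # system libraries such as libm are linked dynamically, so the
    # executable contains only stubs that are resolved at run time.
    gcc main.o util.o -lm -o program

    # Static linking instead copies the library code into the executable
    gcc -static main.o util.o -lm -o program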

In large programs consisting of thousands of functions and hundreds of files, the program link time may be much longer than the compile time of any single file. To speed up the edit-compile-debug cycle on large programs, some advanced compilers support a feature typically referred to as "fix and continue" or incremental linking. This allows a developer to recompile individual files and, via dynamic linking, "patch" the executable with the changes without relinking the entire executable. On a large program this can turn a five-minute relink into a simple 10-20 second recompile.
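
Microsoft Visual C++, a common toolchain of this era, exposes this feature through its /ZI compiler option and /INCREMENTAL linker option. The sketch below assumes that toolchain; file names are placeholders, and other toolchains use different option names:

    REM Recompile only the changed file, with edit-and-continue debug info
    cl /ZI /c changed_file.cpp

    REM Patch the existing executable rather than relinking it from scratch
    link /INCREMENTAL /DEBUG /OUT:program.exe main.obj changed_file.obj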


