Software development involves choices, each of which represents a compromise. Sometimes the developer must write a short, simple subroutine because he doesn't have time to include a more sophisticated one. At other times a limited amount of RAM might be a constraint. In some situations one must avoid tying up a network connection with too many small data packets, because the network protocol in use favors transfers of large data blocks. The programmer always chooses from a set of compromises in attempting to satisfy often conflicting goals.
One difficult choice the programmer faces is portability versus efficiency. It is an agonizing one, too, because favoring efficiency results in nonportable code, while selecting portability often results in software whose performance is unsatisfactory.
Efficient software doesn't waste CPU cycles. It takes full advantage of the underlying hardware, often completely disregarding portability issues. It capitalizes on such features as graphics accelerators, cache memories, and specialized floating-point instructions.
Although efficient software is attractive from a purist's standpoint, the value of running the software on many different machine architectures tips the balance in the other direction. The reason is more financial than technical: in today's computing environments, software that runs on only one architecture has a sharply limited potential market.
Building in portability doesn't mean that you must necessarily settle for inefficient, technically arcane software. To the contrary, obtaining optimum performance from portable software requires a higher level of technical sophistication. If the sophistication isn't there, however, you always have an alternative: you can wait until next ____'s hardware becomes available.
We used to fill in the blank with "year": Next year's hardware will run faster. Because of the speed at which hardware technology races ahead, today we can sometimes say that next quarter's or even next month's hardware may run faster. The ever-shrinking product-development cycle of today's computer manufacturers allows them to produce newer models in a fraction of the time required in the past. Vendors leapfrog each other with frequent announcements of higher performance for a lower price. As a result, whatever computer you're using today will soon feel like the clunky old behemoth in the back of your college lab.
This tendency toward tighter design cycles will accelerate, too. Today's semiconductor designers use sophisticated simulators to create follow-on versions of their microchips. As these simulators—themselves powerful computers—continue to gain speed, developers can complete designs even more quickly. These newer chips then become the engines used in tomorrow's simulators. The phenomenon snowballs with each successive generation of silicon processors. The semiconductor-design world rides a dizzying, upward-bound performance spiral.
As faster machines replace slower ones, the critical issue then becomes not whether your software takes full advantage of all the features of the hardware, but whether your software will run on the newer machines. You might spend days or weeks tuning an application for better performance on today's platform, only to find that the next hardware upgrade gives you a factor-of-ten increase in speed "free." However, your software must be portable to take advantage of the next supercomputer.
In the Unix environment, this usually translates into writing much software as shell scripts. A shell script consists of multiple executable commands placed in a single file and executed indirectly by the Unix command interpreter. Because of the wide array of small, single-purpose commands found in a typical Unix distribution, all but the lowest-level tasks can be constructed easily from shell scripts.
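As a minimal sketch of the idea, the following script strings several of those small commands together to report the ten largest files beneath a directory. The script name and the particular commands chosen are illustrative assumptions, not a prescription:

    #!/bin/sh
    # biggest.sh -- list the ten largest files beneath a directory (illustrative sketch)
    dir=${1:-.}    # default to the current directory

    # ls -l reports each file's size in field 5 and its name in the last field;
    # sort orders the lines by that size field, and head keeps the ten largest.
    find "$dir" -type f -exec ls -l {} + |
        sort -k5,5 -rn |
        head -10 |
        awk '{ print $5, $NF }'

Every stage is a small, single-purpose command, and the script runs unchanged on any system that supplies those commands.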
A side benefit of shell scripts is that they are far more portable than programs written in the C language. This may sound heretical, but writing your programs in C should be considered only if absolutely necessary. In the Unix environment, C lacks the portability of shell scripts. C programs often depend on definitions in header files, the word size of the machine architecture, and a host of other nonportable characteristics of a Unix implementation. As Unix was ported from 16-bit to 32-bit and 64-bit architectures, a significant amount of software became inoperable because C is not very portable. C is little more than the assembly language of the 1980s.
What if you want your program to run on other systems besides Unix? Other language choices exist, each with its own strengths and drawbacks. If your goal is strictly one of portability, then Perl and Java fill the bill quite nicely, as implementations exist for both Unix and Windows platforms. However, just because a language has been ported to another platform besides Unix doesn't mean that the other platform adheres to the Unix philosophy and the Unix way of doing things. You will find that Java developers on Windows, for example, tend to be big proponents of interactive development environments (IDEs) that hide the complexity underneath, while Unix Java developers often prefer tools and environments that let them delve behind the graphical user interface.
If your software barely runs fast enough, then accept the fact that it already meets your needs. Any time spent tuning subroutines and eliminating critical bottlenecks should be invested with an eye toward leveraging performance gains on future hardware platforms as well. Resist the tendency to make the software faster for its own sake. Remember that next year's machine is right around the corner.
A common mistake that many Unix programmers make is to rewrite a shell script in C to gain a marginal edge in performance. This is a waste of time better spent obtaining constructive responses from users. Sometimes a shell script may truly not run fast enough. However, if you absolutely must have high performance and believe that you need C to get it, think it over—twice. Certain specialized applications notwithstanding, it usually doesn't pay to rewrite a script in C.
Beware of "micro-optimizations." If—and this is a very big "if"—you must optimize a C program's performance, Unix provides prof and other tools for identifying the subroutines used most. Tuning routines called upon hundreds or thousands of times produces the greatest improvement for the least effort.
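One possible sequence is sketched below. It assumes the gprof profiler is available; the program name myprog and the input file typical-input are made-up examples, and older systems use prof, with slightly different flags and output files:

    # Compile with profiling support, run the program once under a
    # representative workload, then ask the profiler which routines
    # consumed the most time.
    cc -pg -o myprog myprog.c      # -pg instruments the code for profiling
    ./myprog < typical-input       # a normal run; timing data lands in gmon.out
    gprof myprog gmon.out | more   # flat profile lists the costliest routines first

Only after the profile shows where the time actually goes is it worth touching the code.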
Another way to improve a program's performance is to examine how you deal with the data. For example, I once wrote a simple shell script that could search through more than two million lines of source code scattered across several thousand files in under a second. The secret was to create an index of the data first. Each line of the index contained a unique word and a list of all the files in which that word appeared. Most programs use fewer than 20,000 symbols, meaning that the index would be no more than 20,000 lines long. Searching through 20,000 lines is a relatively simple task for the Unix tool grep. Once grep located a word's entry, it simply printed out the list of file names associated with that word. The script was very fast, yet it stayed portable because the speed came from how the data was organized, not from any special hardware.
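The original script is long gone, but a rough sketch of the technique might look like the following two short scripts, one to build the index and one to search it. The script names, the index file name, and the word-extraction rules are assumptions made for illustration:

    #!/bin/sh
    # mkindex.sh -- build a word index of the C sources below the current directory.
    # Each line of the index holds a word followed by the files that contain it.
    find . -name '*.c' -o -name '*.h' |
    while read f
    do
        # emit one "word filename" pair for every distinct identifier in the file
        tr -cs 'A-Za-z0-9_' '\n' < "$f" | sort -u | grep . | sed "s|\$| $f|"
    done |
    sort |
    awk '{ if ($1 != word) { if (word != "") print line; word = $1; line = $1 }
           line = line " " $2 }
         END { if (word != "") print line }' > index

    #!/bin/sh
    # findword.sh -- print the names of the files containing a given word
    grep "^$1 " index | cut -d' ' -f2- | tr ' ' '\n'

With the expensive work done once at index-creation time, each search touches only the small index file, which is why it can finish in well under a second no matter how large the source tree is.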
Any time a program takes advantage of special hardware capabilities, it becomes at once more efficient and less portable. Special capabilities may occasionally provide great boosts, but their usage requires separate device-dependent code that must be updated when the target hardware is replaced by a faster version. Although updating hardware-specific code provides job security for many system programmers, it does little to enhance the profit margins of their employers.
At one point in my career, I worked on a design team producing an early version of the X Window System for a new hardware platform. An engineer on the project wrote several demos that bypassed the X Window System and took advantage of the advanced graphics capabilities of the hardware. Another engineer coded a similar set of demos to run under the portable interface provided by the X Window System. The first engineer's demos really sparkled because they used state-of-the-art graphics accelerators. Incredibly efficient, they flexed the hardware's muscles to their fullest. The second engineer's demos, while admittedly slower, remained staunchly portable because they ran under X.
Eventually, the state-of-the-art graphics hardware became not-so-state-of-the-art and the company built a new, faster graphics chip set incompatible with the first. Reimplementing the first engineer's demos for the new hardware required a significant effort, more than anyone had time for. So the demos disappeared into obscurity. The portable demos that ran under X, on the other hand, were ported to the new system without modification and are probably still in use as of this writing.
When you take advantage of specialized hardware capabilities for efficiency's sake, your software becomes a tool for selling the hardware, instead of software that stands on its own. This limits its value as a software product and leads you to sell it for less than its intrinsic worth.
Software tightly coupled to a hardware platform holds its value only as long as that platform remains competitive. Once the platform's advantage fades, the software's worth drops off dramatically. To retain its value, it must be ported from one platform to another as newer, faster models become available. Failure to move rapidly to the next available hardware spells death. Market opportunity windows remain open for short periods before slamming shut. If the software doesn't appear within its opportunity window, it finds its market position usurped by a competitor. One could even argue that the inability to port their software to the latest platforms has killed more software companies than all other reasons combined.
One measure of an application's success is the number of systems it runs on. Obviously, a program depending largely on one vendor's hardware base will have difficulties becoming a major contender compared to another that has captured the market on multiple vendors' systems. In essence, portable software is more valuable than efficient software.
Software conceived with portability in mind reduces the transfer cost associated with moving to a new platform. Since the developer must spend less time porting, he can devote more time to developing new features that attract more users and give the product a commercial advantage. Therefore, portable software is more valuable from the day it is written. The incremental effort incurred in making it portable from the beginning pays off handsomely later as well.
Once a user has taken the time to learn an application package, he can reap the benefits of this investment on future platforms by running the same package. Future versions of the software may change slightly, but the core idea and the user interface supporting it are likely to remain intact. The user's experience with the product increases with each new version, transforming him from an "occasional" user into a "power" user over time.
The process of transforming users into power users has been ongoing since the very earliest versions of Unix. Most of the knowledge learned in working with the earlier versions is directly applicable to Linux today. It is true that Linux adds a few wrinkles of its own, but by and large experienced Unix users are quite comfortable in a Linux environment due to the portability of the entire user environment, from the shell to the common utilities available.
Have you ever noticed that certain programs have been around for years in one form or another? People have always found them useful. They have true intrinsic worth. Usually someone has taken it upon himself to write or port them, for fun or profit, to the currently popular hardware platform.
Take Emacs-style text editors, for example. While in some ways they are bad examples of Unix applications, they have long been favorites with programmers in general and Unix devotees in particular. You can always find a version around not only on Unix systems but on other systems as well. Although some Emacs versions have grown into cumbersome monoliths over the years, in its simplest form Emacs still offers a respectable "modeless" vehicle for entering and manipulating text.
Another good example is the set of typical business programs found in Microsoft Office and similar products. People have found that modern business environments usually require desktop programs such as word processors, spreadsheets, and presentation tools in order to function effectively.
No single individual, company, organization, or even nation can keep a good idea to itself, however. Eventually, others take notice, and "reasonable facsimiles" of the idea begin to appear. The intelligent entity, recognizing an idea's merit, should strive to implement it on as many platforms as practical to gain the largest possible market share. The most effective way to achieve this end is to write portable software.
Case Study: The Atari 2600
Let's look at the Atari 2600, otherwise known as the Atari Video Computer System (VCS). The VCS was the first successful home video game system, a product in the right place at the right time. It captured the imagination of a public that had just sampled Space Invaders at local pubs and arcades and was ready to bring the new world of video games into the living room. The first cartridge-programmable game console, it launched an entire industry bent on bringing the joys of games once found only in campus labs and software professionals' hidden directories to the family television screen. If programmers have an affinity for games, it is minuscule compared to the interests of mainstream America and, for that matter, the world at large.
High on the price-performance curve when it was introduced, the 2600 gave you reasonable capabilities for your money. The 8-bit console sold for around $100 or so. It came with a couple of joysticks and a pair of potentiometers known as paddle controllers. Atari supplied it with "Combat," a cartridge programmed with a variety of two-person battle games incorporating tanks, jets, and Fokkers.
Atari made comparatively little money on sales of the console. The big profits came from the sale of the game cartridges. With prices ranging from $14 to more than $35 each, these units became the bread and butter of Atari and a slew of smaller software houses hoping to capitalize on the video game craze. From Atari's standpoint, once the software engineering investment to develop a cartridge was recovered, the rest was pure gravy.
A friend of mine found a job at a software house that produced cartridges for the 2600. He explained that it was quite a feat to squeeze, say, a chess game or a "shoot-em-up" into less than 8K of ROM. It was like jamming twenty people into a Volkswagen Bug; not everyone gets a chance to look out the windows.
In writing the code for the game cartridges, he wrote some of the most efficient—and nonportable—software in his career. He treated instructions as data and data as instructions. Certain operations were performed during horizontal retrace, the time when the light beam on a television set, having finished painting the last dot on the right side of the screen, returns to the left side. Every possible shortcut had to be taken to conserve memory. His code was at once a thing of beauty and a software maintainer's worst nightmare.
Sometime during the reign of the 2600, Atari introduced the 800, a 6502-based system that became the flagship of its home computer line. The 800 was a true computer in the sense that it had a typewriter-style keyboard and interfaces for secondary storage and communications devices. Selling for close to $1,000, the 800 posed little threat to the 2600's captive niche—until the price of memory chips dropped.
Competitive pressure from other vendors and the popularity of the 800's extended graphics capabilities pushed Atari to produce a video game machine for the mass market that could run the 800's software. The new machine, dubbed the 5200, made it possible for computer-illiterate consumers in the mass market to run the same games that the techies were playing on the 800.
Once the mass market had discovered this amazing new machine, it dumped the primitive-looking 2600 for the smoother graphics of the 5200. The bottom then promptly fell out from under Atari 2600 game cartridge prices. Dealers, expecting the demise of the 2600, began slashing the prices on their existing inventories, virtually flooding the market with cut-rate 2600 cartridges. This drove prices down even further, taking down a few software houses in the process.
The pain didn't end there for the cartridge producers. Most popular games on the 2600 became instant hits on the 5200 as well—but not before they were completely rewritten to run on the new hardware platform. Because the code in the 2600 cartridges was so ruthlessly efficient, none of it was remotely portable. This meant the software had to be rewritten at great expense.
The point here is that although the game-cartridge software was arguably the most efficient ever written, its value plummeted the day the new hardware was announced, all because it was not portable enough to be recompiled and reused on the 5200. Had the code been portable, the video-game phenomenon would have evolved quite differently. Atari would probably have become the world's largest supplier of software.
Finally, note that you will pay as little as a few pennies on the dollar for an Atari 2600 game cartridge today, and you will do so mostly out of nostalgia. You still cannot purchase the most powerful versions of Microsoft Office for less than several hundred dollars. Part of the reason for this is that Microsoft Office migrated from one platform to the next as Intel Corporation released progressively more powerful successors to its 8086 processor. This kept Office on the leading edge of the power curve. But that position on the power curve comes at a high cost.
The developers of Office must keep a sharp eye on the future of OpenOffice and other Office clones appearing in the Linux marketplace, however. Some of these have greater inherent portability than Office, which could make them more adaptable as business technology evolves. Microsoft could find itself having to dig ever deeper into its pockets to maintain Office's dominance. As long as Intel continues to produce CPU chips with instruction sets that are backward compatible, Microsoft Office (and, for that matter, Microsoft Windows itself) will remain relatively inexpensive to port. If someone came out with a machine so advanced that everyone wanted its capabilities even if it weren't Intel-compatible, Microsoft would face the gargantuan task of porting Windows and Office to the new architecture—or else.
The moral of the story? Portability pays. Anything else is merely efficient.
Thus far we have been discussing the merits of portable software versus efficient software. Code moved easily to a new platform is far more valuable than code that takes advantage of special hardware features. We have seen that this axiom can be measured in real terms (i.e., in dollars and cents). To preserve its profit base, a software company should strive for portability in its products, possibly forgoing efficiency in the process.
Portable code, however, goes only halfway toward meeting the goal of portability. All applications consist of instructions and data. By making the instructions portable, you ensure that your code will be ready to run on next year's machine. What then happens to your data? Is it left behind? Not at all. The Unix programmer chooses to make not only the code portable, but the data as well.
How does one make one's data portable? The next tenet of the Unix philosophy is one solution.