Before we begin a discussion of the reasons for avoiding CUIs, we first need to define them. A CUI is a style of interaction with an application that exists outside the scope of the highest-level command interpreter present on the system. Once you invoke an application from the command interpreter, it is not possible to communicate with the command interpreter until the application exits. You are, in effect, held captive within the user interface of the application until you take actions that cause it to release you.
Let's clarify this with an example. Suppose you have two programs, one that lists the contents of your electronic mailbox and another that searches for a text string in files. If the mail and search programs use CUIs, your interaction might look something like this:
sh>mail                        Invoke the mail program from the command line.
MAIL>dir                       Show the contents of the mailbox.
MAIL>exit                      Exit the mail program. Return to command interpreter level.
sh>search                      Invoke the search program.
SEARCH>find jack *.txt         Perform the search.
SEARCH>exit                    Exit the search program.
sh>                            We're back at the command interpreter level.
Notice the flow between the levels here. First, invoking the mail command places you inside its own input interpreter, forcing you to interact with a command parser whose command set differs from that of the command interpreter that called it. Second, to execute the search command, you must first leave the mail command by entering an exit command. You then call the search command from the main command interpreter. Once inside the search application, you must interact with yet another command interpreter, one that behaves differently from both the main command interpreter and the mail command's interpreter. The only similarity is that it, too, requires you to enter an exit command to end it.
You can see that there are obvious disadvantages with this approach. You must become familiar with three different command interpreters, each with its own interaction language. This may not sound too difficult on a small scale, but it can become prohibitive very quickly on a system that has hundreds of applications. Also, while executing a command, you cannot do anything else until you exit from it. Suppose, for instance, that in responding to a letter in your mailbox, you needed to include some text from another file but forgot which one. You would have to exit from the mail command, do the search, and then return to the mail command. By that point you probably would have forgotten your context in the mail application.
So much for the obvious drawbacks. The reasons Unix devotees eschew captive user interfaces are not so obvious. Before we explore these, however, let's contrast this CUI with a Unix-style "noncaptive" interface:
sh>scan                        The scan command lists the contents of a mail folder.
sh>grep jack *.txt             The grep command searches for the string "jack" in all files having names ending in ".txt".
Notice that the Unix user invokes all commands at the shell prompt or main command interpreter level. Each command completes its task, and control returns to the shell prompt. It is not necessary to exit from each command individually by typing "exit." The user needs to learn only one language: that of the shell, the Unix command interpreter.
The cynic might point out that the user still must learn the order of the parameters to be supplied for each command invocation. This is true. But with the CUI, the user must first recall which command to run, then which subcommand to invoke from its CUI. Therefore, there is nearly twice as much to remember. It is not surprising, then, that systems that employ CUIs often must provide highly developed help systems to guide users through the choices. On the other hand, most Unix users get by quite well without complex help systems. If the user enters an incorrect parameter, a Unix command typically responds with a simple message listing the required parameters and their usage.
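The convention is easy to sketch in shell. The function below is a hypothetical, noncaptive "search" command standing in for any small Unix program: it validates its arguments up front and, when they are wrong, prints a one-line usage summary instead of starting a dialog.

```shell
# Hypothetical "search" command, for illustration only. Instead of
# prompting, it checks its arguments immediately and reports the
# expected usage on the standard error stream.
search() {
    if [ $# -lt 2 ]; then
        echo "usage: search pattern file ..." >&2
        return 2                 # conventional status for a usage error
    fi
    pattern=$1
    shift
    grep "$pattern" "$@"         # delegate the real work to grep
}
```

Because the command either runs to completion or fails immediately with a diagnostic, it behaves the same whether a person or another program invokes it.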
Thus far, we've defined CUIs and touched upon some of their glaring deficiencies. The real reasons that Unix users avoid them, though, run much deeper. They have to do with the way commands interact with each other. In the Unix environment, no command exists in isolation; commands interact with one another constantly, and CUIs interfere with that interaction. Multiple command interaction is a key Unix concept.
Producers of CUIs base their designs on the premise that a person is sitting at the keyboard. They expect the person to enter responses to the prompts provided by the application. The application then performs calculations or carries out various tasks.
The problem, however, is that even the fastest human being is slower than the average computer. The computer can conduct operations at lightning speed, and it doesn't get tired or take breaks. As stated earlier, people function only within a narrow range. For example, even the speediest typists do not type much more than 80 words per minute. Most CUIs eventually reach a point where the user must respond to a prompt. Then even the fastest supercomputer is little more effective than the lowliest PC. Virtually all PCs capture text typed by users without the least bit of strain. As long as a system is constrained to operate within the limits imposed by human beings, it cannot function at its maximum potential.
I first became aware of this phenomenon when confronted with a window system running on a very fast workstation. Back in the days before PCs, most people were accustomed to scanning text files on a terminal. Terminals usually allowed you to stop and start the displayed output by pressing a <^S>/<^Q> combination or a "HOLD SCREEN" key. At modem speeds of 9,600 bps or lower, most people could control the scrolling rate without any trouble. Today's window systems, however, have no artificial limit such as the communications speed to control the rate at which the system displays text. The user is entirely at the mercy of the CPU and its I/O capabilities. When left to its own devices, so to speak, the computer can display text considerably faster than anyone can deal with it by entering <^S>. This situation will worsen in the future as large cache memories and machines in the multigigahertz speed range become common.
Because of the limitations we humans impose on computers, any system that must wait for user input can operate only as fast as the person sitting at the keyboard. In other words, not very fast at all.
Typical Unix commands strive to perform their tasks entirely without human intervention. Most only prompt the user when they are about to take some potentially irreversible action such as "repairing" a file system by deleting files. As a result, Unix commands always run at maximum speed. This is part of the reason a system designed for portability instead of efficiency still performs well. It recognizes that the weakest link, performancewise, in many man-machine interactions is not the machine.
A parser reads the user's input and translates it into a form that the application software can understand. It has to read correctly everything a user might conceivably (and inconceivably!) type. This causes the typical command parser to grow to gargantuan proportions. Sometimes the command parser will require more programming than the application's main task.
Consider this example. Suppose you had a program for formatting a hard disk. Since the potential for data loss is great, you might consider it "user friendly" to ask the user whether he really wants to wipe out everything on the disk:
FORMAT V1.0 Rev.A
About to format drive C:
Formatting will destroy all files on the disk!
Begin format? <y|N>
The number of potential user responses to a prompt like this one is staggering. First, if the user wants to go ahead with the format, he may enter Y, y, Yes, YES, or various combinations in between. Similarly, if he is sure that he doesn't want to proceed, he may enter N, n, no, or NO. These responses are fairly easy to parse.
The complexity begins when the user isn't quite sure what he or she wants to do. An unsophisticated user might enter "help" in the hope of getting more general information about formatting. A more experienced user could enter "?" to obtain a list of formatting options. Still other users might try to break out of the formatting command altogether by forcing it to exit by means of one or more interrupt characters. Some of these may cause the formatter application to exit ungracefully or even terminate the user's login session.
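Even classifying just the replies listed above takes a case analysis of its own, and a real parser would still have to handle interrupts, end-of-file, and everything else. A sketch in shell (the reply categories are illustrative, not taken from any real formatter):

```shell
# Classify one reply to the "Begin format? <y|N>" prompt.
# Illustrative only; a production parser handles far more cases.
classify_reply() {
    case $1 in
        [Yy]|[Yy][Ee][Ss])      echo proceed ;;
        [Nn]|[Nn][Oo]|"")       echo cancel ;;       # empty reply takes the default, N
        \?|[Hh][Ee][Ll][Pp])    echo help ;;
        *)                      echo unrecognized ;; # and now what?
    esac
}
```

Multiply this by every prompt in the application, and the parser quickly becomes the bulk of the code.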
To handle such reactions, a command parser must be large and highly sophisticated, a "tunnel through solid rock" according to Chris Crawford (see Chapter 9). You can imagine how large a parser would be if the application required multiple prompts. The amount of code would comprise the bulk of the application.
The Unix programmer deals with the user interface by avoiding it; the typical Unix application doesn't have a command parser at all. Instead, it expects its operating parameters to be entered on the command line when the command is invoked. This eliminates most of the possibilities described above, especially the less graceful ones. For those commands that have many command line options (a cautionary sign to begin with), Unix provides standard library routines for weeding out bad user input. The result is significantly smaller application programs.
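Those standard routines include the shell's getopts builtin (getopt(3) plays the same role in C programs). A sketch of typical use; the "mkimage" name and its options are invented for the example:

```shell
# Hypothetical "mkimage" argument handling using the getopts builtin.
# Unrecognized options fall through to a usage message automatically.
parse_args() {
    verbose=0 outfile=out.img
    OPTIND=1                      # reset so the function can be called repeatedly
    while getopts vo: opt "$@"; do
        case $opt in
            v) verbose=1 ;;
            o) outfile=$OPTARG ;;
            *) echo "usage: mkimage [-v] [-o outfile] file" >&2
               return 2 ;;
        esac
    done
    echo "verbose=$verbose outfile=$outfile"
}
```

A dozen lines of boilerplate replace what a captive interface would implement as an interactive dialog with its own parser.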
Another approach here would be to use a dialog box with a GUI. This can be especially useful with commands, such as FORMAT, that have potentially disastrous effects. One doesn't want to wipe clean one's hard drive because of a poorly chosen letter. A GUI can be used in this case to get users to slow down and think about what they're doing.
While GUIs can be wonderful for guiding users through tunnels in solid rock, however, they don't let you connect the tunnels together very well. Solid rocks, especially BIG solid rocks, are such time-consuming things to drill through. We'll explore this in more detail later.
Many CUIs employ menus to restrict the user to a limited set of choices. This sounds good in theory. But for some unknown reason, CUI designers are seldom satisfied with a simple menu having, say, five items. They usually add menu items that invoke "submenus" to expand the number of options. It's as if they're thinking, "Hey! I went to all this trouble to design this menu system; I may as well use it for something." So "creeping featurism" takes precedence over brevity.
The marketing arm of the computer industry must also share some blame. The constant push for "features, Features, FEATURES!" forces software designers to expand the number of menu choices without regard to whether the features are truly helpful or even make sense at all. From a salesperson's point of view, it's not enough to make the application better. It must look better. If a program sells well with 5 menu choices, then one with 10 will sell even better. It doesn't matter that the additional complexity may alienate much of the target market.
There are technical reasons, too, for avoiding CUIs and their "big is beautiful" approach. As CUIs grow in complexity, they need an ever-increasing number of system resources. Memory requirements explode upward. More disk space must be purchased. Network and I/O bandwidth becomes an issue. Consequently, computer hardware vendors love CUIs.
One strength of Unix is the way its programs interact with each other so effectively. Programs with CUIs, because they assume that the user is human, do not interface well with other programs. Software designed to communicate with other software is usually much more flexible than software designed to communicate with people.
Do you remember the example of workers loading a moving van? We said that the large pieces did not fit well with each other and it was the small pieces that provided the most flexibility. Similarly, programmers find it difficult to connect programs having CUIs to each other because of their size. CUIs tend to result in huge programs. Large programs, like large pieces of furniture, are not very portable. Movers don't say, "Hand me that piano, will ya?" any more than programmers move complex, monolithic applications from one platform to another overnight.
CUI programs' inability to combine with other programs causes them to grow to massive proportions. Since the programmer cannot rely on interfacing easily with other programs on the system to obtain the required capabilities, he must build new features into the program itself. This deadly spiral feeds on itself: The more features built into the CUI program, the larger it becomes. The larger it becomes, the greater the difficulty in interfacing it with other programs. As it gets harder to interface with other programs, the CUI program itself must incorporate more features.
CUIs tend to work well when dealing with only a few instances. By limiting choices, they can make it easier for an inexperienced user to accomplish complex tasks. As long as there are not too many instances, the user is usually content to respond to the prompts. The number of prompts can become unwieldy, however, when one must respond to several hundred of them.
A popular program (a shell script, really) provided by many Unix system vendors is adduser. It allows the system administrator to add a new user account to the system via a "user-friendly" CUI. Most of the time it works well. The problem with adduser becomes evident when you must add, say, several thousand users at once.
A university once decided to change from another OS to Unix. Several thousand user accounts were to be transferred to the new system. It didn't take long for the system administrators to realize that running adduser that many times wasn't going to cut it. They ultimately settled on a solution that involved moving the user account files from the old system to the Unix system and writing a shell script to convert them into files resembling the Unix password file. The irony was that the conversion script was shorter than adduser.
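The conversion script itself would have been specific to the old system's export format, but its shape is easy to imagine. In the sketch below the old format is invented for illustration, one account per line as "username:Full Name:group-id"; a real conversion would also have to carry over passwords, shells, and home directories.

```shell
# Convert invented "username:Full Name:group-id" records read from
# standard input into passwd-style lines on standard output.
# User ids are simply assigned in sequence starting at 1001.
old_to_passwd() {
    awk -F: '{
        printf "%s:x:%d:%s:%s:/home/%s:/bin/sh\n", $1, NR + 1000, $3, $2, $1
    }'
}
```

Run over several thousand records, a filter like this finishes in seconds, with no prompts to answer.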
Because CUI programs expect to communicate with a human being at some point, it is very difficult to incorporate them into shell scripts. It takes many lines in a shell script to carry on the kinds of dialogs that CUIs require. These dialogs can be so cumbersome to write in a shell script that programmers often resort to small user interface programs to conduct yes-no queries and obtain other responses from the user.
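Such user interface helpers are usually tiny: ask one question, report the answer as an exit status. A sketch of one as a shell function:

```shell
# Ask a single yes/no question and report the answer as an exit
# status, so calling scripts can use it directly in an "if".
askyn() {
    printf '%s [y/n] ' "$1" >&2   # prompt on stderr, leaving stdout clean
    read reply
    case $reply in
        [Yy]*) return 0 ;;
        *)     return 1 ;;
    esac
}
```

A script can then write `if askyn "Really reformat?"; then ...` and keep the dialog out of its own logic.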
Since CUIs hinder interaction with other programs, they are typically used only for their original purpose and not much else. Although you might suggest that this is simply a case of "doing one thing well," a CUI differs from small Unix commands in that it exists as an application unto itself. It doesn't offer the same mix-and-match, plug-and-play features of its Unix counterparts. It yields very little in terms of software leverage as a result.
Without software leverage, CUI programs cannot multiply their effects, or the effects of their developers, on the computer world. Although a CUI program may gain an early following because of its novelty when it is released, it soon loses its appeal as the rest of the software world marches on. Software advances appear daily on the computer scene, and each decade a few ideas cause major upheaval. The monolithic CUI program is simply incapable of adapting in such a rapidly evolving environment.
Some of you may be wondering why there is so much discussion about CUIs in the first place. After all, aren't most user interfaces graphical these days? Why be concerned with typing things on the command line when you can view a list of check boxes and click on them with a mouse?
The short answer is that a GUI is simply a visual form of a CUI. Thus, it has the same characteristics as a CUI for the following reasons:
It assumes that the user is human. The software designer expects that a physical user will be present to click on buttons and navigate menus. In some cases, steps are taken to slow the program down to better accommodate the user. Frequently, more effort is spent on making the user interface than on providing functional capabilities.
A GUI is often big and difficult to write. Large IDEs such as Microsoft's Visual Basic and Borland's JBuilder have ameliorated this problem somewhat. But these IDEs have problems of their own that we've explored elsewhere.
GUIs tend to adopt a "big is beautiful" approach. If five options meet the need, then ten options meet the need even better. That's the rationale behind some office programs that have grown to gargantuan proportions. When you want to drive a couple of nails, you don't need to hire a construction company to come in with a team of laborers armed with nail guns.
It's difficult to combine GUI-based programs with other programs. Unless the program has initially been designed to interface with another program, it won't interface well. In the Microsoft world, OLE and COM provide some of this connectivity. With most Unix and Linux commands, such interfaces are unnecessary, as most of them already interface well with other programs.
GUIs do not scale well. They have the same problem as CUIs, as shown by the adduser example. Clicking the mouse several times to perform an operation is easy. Clicking the mouse to perform the same operation thousands of times results in user frustration and a nagging sense that the computer is in charge instead of the other way around.
GUIs do not take advantage of software leverage. It is extremely difficult to script operations performed only via a GUI. It is often necessary to resort to recorder-like programs that capture mouse and keyboard events. These provide an imperfect solution as situations often arise that require users to make decisions depending on the output produced by the program. Recorded scripts usually have little ability to deal with this situation. Those that can often require manual editing of the scripts, at which point you are once again operating in the scripter's world, not the GUI world.
Thus far in this chapter we have discussed how a CUI presents several obstacles to a program's impact. CUIs make sense under certain circumstances, but they are the exception rather than the rule. Applications fare much better if they are built from a collection of small components that communicate well with each other. It matters little if those components do not interface well with human beings; ultimately, a specialized program, which not surprisingly is likely to be a CUI itself, manages that interaction.
Programs that interact with each other are actually data filters. Each gathers several bytes on its input stream, applies a filtering algorithm to the data, and usually produces several bytes on its output stream. We say "usually produces" here because not all programs send data to their output stream. Depending on the data and the algorithms, some simply output nothing.
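The filter model also explains why small programs combine so readily: each stage reads its input stream, applies one transformation, and writes its output stream, so stages chain together. The classic word-frequency pipeline shows the idea; the composition, not any individual command, is the point.

```shell
# Report the words on standard input, most frequent first.
# Every stage is itself a filter; the pipeline is their composition.
count_words() {
    tr -cs '[:alpha:]' '\n' |     # one word per line
    tr '[:upper:]' '[:lower:]' |  # fold case
    sort |                        # bring identical words together
    uniq -c |                     # count each run
    sort -rn                      # most frequent first
}
```

None of the five commands knows anything about word frequencies; the capability emerges from connecting filters, which is exactly what a captive interface prevents.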
The fact that programs filter data is significant. All computers and their programs filter data. That's why we call them data processors. To process data is to filter it.
If a program is a filter, then it ought to act like one; that is, it should concentrate not on synthesizing data, but rather on selectively passing on data that is presented to it. This is the essence of the next tenet of the Unix philosophy.