Civilization advances by extending the number of important operations we can perform without thinking.
Alfred North Whitehead
At the dawn of the age of automobiles, the instructions for starting a Model-T Ford were more than two pages long. With modern cars, you just turn the key; the starting procedure is automatic and foolproof. A person following a list of instructions might flood the engine, but the automatic starter won't.
Although computing is still an industry at the Model-T stage, we can't afford to go through two pages of instructions again and again for some common operation. Whether it is the build and release procedure, code review paperwork, or any other recurring task on the project, it has to be automatic. We may have to build the starter and fuel injector from scratch, but once it's done, we can just turn the key from then on.
In addition, we want to ensure consistency and repeatability on the project. Manual procedures leave consistency up to chance; repeatability isn't guaranteed, especially if aspects of the procedure are open to interpretation by different people.
We were once at a client site where all the developers were using the same IDE. Their system administrator gave each developer a set of instructions on installing add-on packages to the IDE. These instructions filled many pages: pages full of click here, scroll there, drag this, double-click that, and do it again.
Not surprisingly, every developer's machine was loaded slightly differently. Subtle differences in the application's behavior occurred when different developers ran the same code. Bugs would appear on one machine but not on others. Tracking down version differences of any one component usually revealed a surprise.
Don't Use Manual Procedures
People just aren't as repeatable as computers are. Nor should we expect them to be. A shell script or batch file will execute the same instructions, in the same order, time after time. It can be put under source control, so you can examine changes to the procedure over time as well ("but it used to work...").
Another favorite tool of automation is cron (or "at" on Windows NT). It allows us to schedule unattended tasks to run periodically, usually in the middle of the night. For example, the following crontab file specifies that a project's nightly command be run at five minutes past midnight every day, that the backup be run at 3:15 a.m. on weekdays, and that expense_reports be run at midnight on the first of the month.
    # MIN HOUR DAY MONTH DAYOFWEEK COMMAND
    # ---------------------------------------------------------------
    5   0    *   *     *         /projects/Manhattan/bin/nightly
    15  3    *   *     1-5       /usr/local/bin/backup
    0   0    1   *     *         /home/accounting/expense_reports
Using cron, we can schedule backups, the nightly build, Web site maintenance, and anything else that needs to be done: unattended, automatically.
Compiling the project is a chore that should be reliable and repeatable. We generally compile projects with makefiles, even when using an IDE environment. Using makefiles has several advantages: the build is a scripted, automatic procedure, and we can add hooks to generate code for us and run regression tests automatically. IDEs have their advantages, but with an IDE alone it can be hard to achieve the level of automation that we're looking for. We want to check out, build, test, and ship with a single command.
In The Evils of Duplication, we advocated generating code to derive knowledge from common sources. We can exploit make 's dependency analysis mechanism to make this process easy. It's a pretty simple matter to add rules to a makefile to generate a file from some other source automatically. For example, suppose we wanted to take an XML file, generate a Java file from it, and compile the result.
    .SUFFIXES: .java .class .xml

    .xml.java:
            perl convert.pl $< > $@

    .java.class:
            $(JAVAC) $(JAVAC_FLAGS) $<
Type make test.class, and make will automatically look for a file named test.xml, build a .java file by running a Perl script, and then compile that file to produce test.class.
We can use the same sort of rules to generate source code, header files, or documentation automatically from some other form as well (see Code Generators).
You can also use the makefile to run regression tests for you, either for an individual module or for an entire subsystem. You can easily test the entire project with just one command at the top of the source tree, or you can test an individual module by using the same command in a single directory. See Ruthless Testing, for more on regression testing.
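As a minimal sketch of that single-command convention, the script below builds a toy source tree in which each module ships its own executable test script, then finds and runs every one of them from the top of the tree. The tree layout and the "runtests" naming convention are assumptions for illustration, not a prescription.

```shell
#!/bin/sh
# Run every module's regression tests with one command from the top of
# the tree. The /tmp layout and the per-module "runtests" convention
# are illustrative assumptions.
set -e

# Build a toy source tree with two modules, each with its own tests.
rm -rf /tmp/test_demo
for mod in parser backend; do
    mkdir -p "/tmp/test_demo/$mod"
    printf '#!/bin/sh\necho "%s: PASS"\n' "$mod" \
        > "/tmp/test_demo/$mod/runtests"
    chmod +x "/tmp/test_demo/$mod/runtests"
done

# The "make test" equivalent: locate each module's test script and run it.
find /tmp/test_demo -name runtests | sort | while read -r t; do
    "$t"
done
```

Running it in a single module's directory instead of the top of the tree would test just that module, which is exactly the property we want from the build system.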
Many projects set up recursive, hierarchical makefiles for project builds and testing. But be aware of some potential problems.
make calculates dependencies between the various targets it has to build. But it can analyze only the dependencies that exist within a single make invocation. In particular, a recursive make has no knowledge of dependencies that other invocations of make may have. If you are careful and precise, you can get the proper results, but it's easy to cause extra work unnecessarily, or to miss a dependency and not recompile when it's needed.
In addition, build dependencies may not be the same as test dependencies, and you may need separate hierarchies.
A build is a procedure that takes an empty directory (and a known compilation environment) and builds the project from scratch, producing whatever you hope to produce as a final deliverable: a CD-ROM master image or a self-extracting archive, for instance. Typically a project build will encompass the following steps.
1. Check out the source code from the repository.

2. Build the project from scratch, typically from a top-level makefile. Each build is marked with some form of release or version number, or perhaps a date stamp.

3. Create a distributable image. This procedure may entail fixing file ownership and permissions, and producing all examples, documentation, README files, and anything else that will ship with the product, in the exact format that will be required when you ship. If you are producing a CD-ROM in ISO9660 format, for example, you would run the program that produces a bit-for-bit image of the 9660 file system. Why wait until the night before you ship to make sure it works?

4. Run specified tests (make test).
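The four steps above can be sketched as one unattended script. In this sketch the paths, the date-stamp version scheme, and the use of a plain directory copy in place of a real repository checkout are all illustrative assumptions; a real project would substitute its own version control and build commands.

```shell
#!/bin/sh
# A minimal sketch of the four build steps as one unattended script.
set -e

REPO=/tmp/build_demo/repo    # stand-in for the real source repository
WORK=/tmp/build_demo/work    # the empty directory the build starts from
VERSION=$(date +%Y%m%d)      # date-stamped build number

# Set up a tiny fake "repository" so the sketch runs end to end.
mkdir -p "$REPO"
printf 'hello\n' > "$REPO/hello.txt"

# 1. Check out the source into a fresh, empty directory.
rm -rf "$WORK"
mkdir -p "$WORK/src"
cp -R "$REPO/." "$WORK/src"

# 2. "Build" from scratch, marking the build with the version stamp.
mkdir -p "$WORK/dist"
cp "$WORK/src/hello.txt" "$WORK/dist/hello.txt"
echo "$VERSION" > "$WORK/dist/VERSION"

# 3. Create the distributable image (a tar archive stands in for a
#    CD-ROM master image here).
tar -cf "$WORK/project-$VERSION.tar" -C "$WORK/dist" .

# 4. Run the tests against what was just built.
grep -q hello "$WORK/dist/hello.txt" && echo "build $VERSION OK"
```

Because the script starts from an empty directory every time, nothing left over from a previous build can contaminate the deliverable.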
For most projects, this level of build is run automatically every night. In this nightly build, you will typically run more complete tests than an individual might run while building some specific portion of the project. The important point is to have the full build run all available tests. You want to know if a regression test failed because of one of today's code changes. By identifying the problem close to the source, you stand a better chance of finding and fixing it.
When you don't run tests regularly, you may discover that the application broke due to a code change made three months ago. Good luck finding that one.
Final builds, which you intend to ship as products, may have different requirements from the regular nightly build. A final build may require that the repository be locked, or tagged with the release number, that optimization and debug flags be set differently, and so on. We like to use a separate make target (such as make final) that sets all of these parameters at once.
Remember that if the product is compiled differently from earlier versions, then you must test against this version all over again.
Wouldn't it be nice if programmers could actually devote all of their time to programming? Unfortunately, this is rarely the case. There is e-mail to be answered, paperwork to be filled out, documents to be posted to the Web, and so on. You may decide to create a shell script to do some of the dirty work, but you still have to remember to run the script when needed.
Because memory is the second thing you lose as you age (what's the first? I forget), we don't want to rely on it too heavily. We can run scripts to perform procedures for us automatically, based on the content of source code and documents. Our goal is to maintain an automatic, unattended, content-driven workflow.
Many development teams use an internal Web site for project communication, and we think this is a great idea. But we don't want to spend too much time maintaining the Web site, and we don't want to let it get stale or out of date. Misleading information is worse than no information at all.
Documentation that is extracted from code, requirements analyses, design documents, and any drawings, charts, or graphs all need to be published to the Web on a regular basis. We like to publish these documents automatically as part of the nightly build or as a hook into the source code check-in procedure.
However it is done, Web content should be generated automatically from information in the repository and published without human intervention. This is really another application of the DRY principle: information exists in one form as checked-in code and documents. The view from the Web browser is simply that: just a view. You shouldn't have to maintain that view by hand.
Any information generated by the nightly build should be accessible on the development Web site: results of the build itself (for example, the build results might be presented as a one-page summary that includes compiler warnings, errors, and current status), regression tests, performance statistics, coding metrics and any other static analysis, and so on.
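One way to make that summary content-driven is to have the nightly build leave behind a log, and have a script distill the log into a status page with no human in the loop. In this sketch the log format, the warning pattern, and the output path are all assumptions; the stand-in log is created by the script itself so the example runs end to end.

```shell
#!/bin/sh
# Sketch: turn a nightly build log into a one-page HTML status summary.
# The log format and paths are illustrative assumptions.
set -e

LOG=/tmp/site_demo/build.log
PAGE=/tmp/site_demo/status.html
mkdir -p /tmp/site_demo

# A stand-in build log, as the nightly build might leave behind.
cat > "$LOG" <<'EOF'
cc -c parser.c
parser.c:12: warning: unused variable 'tmp'
cc -c backend.c
link: build succeeded
EOF

# Distill the log: count warnings, decide overall status.
WARNINGS=$(grep -c 'warning:' "$LOG")
STATUS=$(grep -q 'build succeeded' "$LOG" && echo OK || echo FAILED)

# Publish the summary; no human ever edits this page.
cat > "$PAGE" <<EOF
<html><body>
<h1>Nightly build: $STATUS</h1>
<p>Compiler warnings: $WARNINGS</p>
</body></html>
EOF

echo "wrote $PAGE ($STATUS, $WARNINGS warnings)"
```

Hooked into the nightly build, a script like this keeps the status page exactly as fresh as the last build, which is the whole point: the page can't go stale, because no one maintains it.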
Some projects have various administrative workflows that must be followed. For instance, code or design reviews need to be scheduled and followed through, approvals may need to be granted, and so on. We can use automation, and especially the Web site, to help ease the paperwork burden.
Suppose you wanted to automate code review scheduling and approval. You might put a special marker in each source code file:
/* Status: needs_review */
A simple script could go through all of the source code and look for all files that had a status of needs_review, indicating that they were ready to be reviewed. You could then post a list of those files as a Web page, automatically send e-mail to the appropriate people, or even schedule a meeting automatically using some calendar software.
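Such a script can be little more than a recursive grep. The sketch below creates two toy source files and then lists the ones carrying the needs_review marker; the marker convention and file names are assumptions from the example above.

```shell
#!/bin/sh
# Sketch: scan the source tree for files marked as needing review.
# The "Status:" marker convention and the toy files are assumptions.
set -e

rm -rf /tmp/review_demo
mkdir -p /tmp/review_demo/src

cat > /tmp/review_demo/src/account.c <<'EOF'
/* Status: needs_review */
int balance;
EOF

cat > /tmp/review_demo/src/report.c <<'EOF'
/* Status: reviewed */
int total;
EOF

# List every file whose marker says it is ready to be reviewed. This
# list could then be posted as a Web page or mailed to the reviewers.
grep -rl 'Status: needs_review' /tmp/review_demo/src
```

The output is just the list of files awaiting review, which is exactly the raw material the Web page or e-mail step needs.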
You can set up a form on a Web page for the reviewers to register approval or disapproval. After the review, the status can be automatically changed to reviewed. Whether you have a code walk-through with all the participants is up to you; you can still do the paperwork automatically. (In an article in the April 1999 CACM, Robert Glass summarizes research that seems to indicate that, while code inspection is effective, conducting reviews in meetings is not [Gla99a].)
The cobbler's children have no shoes. Often, people who develop software use the poorest tools to do the job.
But we have all the raw materials we need to craft better tools. We have cron. We have make, on both Windows and Unix platforms. And we have Perl and other high-level scripting languages for quickly developing custom tools, Web page generators, code generators, test harnesses, and so on.
Let the computer do the repetitious and the mundane; it will do a better job of it than we would. We've got more important and more difficult things to do.
The Cat Ate My Source Code
The Evils of Duplication
The Power of Plain Text
It's All Writing
Look at your habits throughout the workday. Do you see any repetitive tasks? Do you type the same sequence of commands over and over again?
Try writing a few shell scripts to automate the process. Do you always click on the same sequence of icons repeatedly? Can you create a macro to do all that for you?
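As a starting point, a repeated ritual can be captured in one small function so a single word replaces the whole sequence. The particular steps here (clean an output directory and record a timestamp) are placeholder assumptions; substitute your own.

```shell
#!/bin/sh
# Sketch: capture a repeated command sequence in one function. The
# steps shown are placeholders for whatever you type over and over.
set -e

refresh() {
    rm -rf /tmp/auto_demo/out
    mkdir -p /tmp/auto_demo/out
    date > /tmp/auto_demo/out/last_refresh
    echo "refreshed"
}

refresh
```

Once the sequence lives in a script, it can also go under source control and into cron, and the habit disappears into the starter motor for good.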
How much of your project paperwork can be automated? Given the high expense of programming staff, determine how much of the project's budget is being wasted on administrative procedures. Can you justify the amount of time it would take to craft an automated solution based on the overall cost savings it would achieve?
For estimating purposes, you can figure an industry average of about US$100,000 per head: that's salary plus benefits, training, office space and overhead, and so on.