Using the C Programming Project Management Tools Provided with Fedora Core Linux


Fedora Core is replete with tools that make your life as a C/C++ programmer easier. There are tools to create programs (editors), compile programs (gcc), create libraries (ar), control the source (Revision Control System [RCS] and the Concurrent Versions System [CVS]), automate builds (make), debug programs (gdb and ddd), and determine where inefficiencies lie (gprof).

The following sections introduce some of the programming and project management tools included with Fedora. The discs included with this book contain many of these tools, which you can use to help automate software development projects. If you have some previous Unix experience, you will be familiar with most of these programs because they are traditional complements to a programmer's suite of software.

Building Programs with make

You use the make command to automatically build and install a C program, and for that purpose it is an easy tool to use. If you want to create your own automated builds, however, you need to learn the special syntax that make uses; the following sections walk you through a basic make setup.

Using Makefiles

The make command automatically builds and updates applications by using a makefile. A makefile is a text file that contains instructions about which options to pass to the compiler preprocessor, the compiler, the assembler, and the linker. The makefile also specifies, among other things, which source code files need to be compiled (and the compiler command line) for a particular code module and which code modules are needed to build the program, a mechanism called dependency checking.

The beauty of the make command is its flexibility. You can use make with a simple makefile, or you can write complex makefiles that contain numerous macros, rules, or commands that work in a single directory or traverse your file system recursively to build programs, update your system, and even function as document management systems. The make command works with nearly any program, including text processing systems such as TeX.

You could use make to compile, build, and install a software package using a simple command like this:

 # make install 

You can use the default makefile (usually called Makefile, with a capital M), or you can use make's -f option to specify any makefile, such as MyMakeFile, like this:

 # make -f MyMakeFile 

Other options might be available, depending on the contents of your makefile.

Using Macros and Makefile Targets

Using make with macros can make a program portable. Macros allow users of other operating systems to easily configure a program build by specifying local values, such as the names and locations, or pathnames, of any required software tools. In the following example, macros define the name of the compiler (CC), the installer program (INS), where the program should be installed (INSDIR), where the linker should look for required libraries (LIBDIR), the names of required libraries (LIBS), a source code file (SRC), the intermediate object code file (OBJS), and the name of the final program (PROG):

 # a sample makefile for a skeleton program
 CC= gcc
 INS= install
 INSDIR = /usr/local/bin
 LIBDIR= -L/usr/X11R6/lib
 LIBS= -lXm -lSM -lICE -lXt -lX11
 SRC= skel.c
 OBJS= skel.o
 PROG= skel
 skel:	${OBJS}
 	${CC} -o ${PROG} ${SRC} ${LIBDIR} ${LIBS}
 install: ${PROG}
 	${INS} -g root -o root ${PROG} ${INSDIR}

NOTE

The indented lines in the previous example are indented with tabs, not spaces. This is very important to remember! It is difficult for a person to see the difference, but make can tell. If make reports confusing errors when you first start building programs under Linux, you should check your project's makefile for the use of tabs and other proper formatting.


Using the makefile from the preceding example, you can build a program like this:

 # make 

To build a specified component of a makefile, you can use a target definition on the command line. To build just the program, you use make with the skel target, like this:

 # make skel 

If you make any changes to any element of a target object, such as a source code file, make rebuilds the target automatically. This feature is part of the convenience of using make to manage a development project. To build and install a program in one step, you can specify the target of install like this:

 # make install 

Larger software projects might have a number of traditional targets in the makefile, such as the following:

  • test To run specific tests on the final software

  • man To process an included troff document with the man macros

  • clean To delete any remaining object files

  • archive To clean up, archive, and compress the entire source code tree

  • bugreport To automatically collect and then mail a copy of the build or error logs

Large applications can require hundreds of source code files. Compiling and linking these applications can be a complex and error-prone task. The make utility helps you organize the process of building the executable form of a complex application from many source files.
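
For instance, clean and archive targets for the skeleton makefile shown earlier might look like the following sketch; the exact commands are illustrative assumptions, not part of the sample project (and remember that the command lines must be indented with tabs):

 # illustrative clean and archive targets for the skel example
 clean:
 	rm -f ${OBJS} ${PROG}

 archive: clean
 	tar czf skel-src.tar.gz *.c Makefile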

Using the autoconf Utility to Configure Code

The make command is only one of several programming automation utilities included with Fedora. There are others, such as pmake (which performs a parallel make), imake (a dependency-driven makefile generator that is used for building X11 clients), automake, and one of the newer tools, autoconf, which builds shell scripts that can be used to configure program source code packages.

Building many software packages for Linux that are distributed in source form requires the use of GNU's autoconf utility. This program builds an executable shell script named configure that, when executed, automatically examines and tailors a client's build from source according to software resources, or dependencies (such as programming tools, libraries, and associated utilities), that are installed on the target host (your Linux system). The idea is to increase software project portability and ease the task of the programmer who is faced with a software project expected to run on many compatible, yet potentially disparate, software platforms.

Many Linux commands and graphical clients for X downloaded in source code form include configure scripts. To configure the source package, build the software, and then install the new program, the root user might use the script like this (after uncompressing the source and navigating into the resulting build directory):

 # ./configure ; make ; make install 

The autoconf program uses a file named configure.in that contains a basic ruleset, or set of macros. The configure.in file is created with the autoscan command. Building a properly executing configure script also requires a template for the makefile, named Makefile.in. Although creating the dependency-checking configure script can be done manually, you can easily overcome any complex dependencies by using a graphical project development tool such as KDE's KDevelop or GNOME's Glade. (See the section "Graphical Development Tools," later in this chapter, for more information.)
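
As a rough sketch, a minimal configure.in for the skel program from the earlier makefile example might contain little more than the following macros; which macros a real project needs depends entirely on its dependencies:

 dnl configure.in -- a minimal sketch for the skel example
 AC_INIT(skel.c)
 AC_PROG_CC
 AC_PROG_INSTALL
 AC_OUTPUT(Makefile)

Running autoconf on this file produces the configure script, which in turn fills in Makefile from the Makefile.in template.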

Managing Software Projects with RCS and CVS

Although make can be used to manage a software project, larger software projects require document management, source code controls, security, and revision tracking as the source code goes through a series of changes during its development. RCS and CVS provide source code version control utilities for this kind of large software project management. You can find both of these utilities in RPM packages on your Fedora Core Linux CD-ROMs.

The RCS and CVS systems are used to track changes to multiple versions of files, and they can be used to backtrack or branch off versions of documents inside the scope of a project. They can also be used to prevent or resolve conflicting entries or changes made to source code files by multiple developers.

Although RCS and CVS provide similar features, RCS uses a locking and unlocking scheme to control access for making revisions; CVS, on the other hand, provides a modification and merging approach to working on older, current, or new versions of software. RCS uses separate programs to check revisions in and out of a project directory; CVS uses a number of administrative files in a software repository of source code modules to merge and resolve change conflicts.

NOTE

Numerous changes to a source tree (directory of source code files) can also be accomplished using the patch command. Although patch does not offer the source control features of RCS and CVS, it is very handy if there is a need to rapidly apply a "fix" to one or more files. You can use the diff command to create a patch, which is a text file that contains line-by-line differences between two files or directories of files. See "Patching the Kernel" in Chapter 38 for information on how to use patch to apply updates to the Linux kernel source tree.


RCS uses at least the following eight separate programs to track source code revisions:

  • ci Checks in revisions

  • co Checks out revisions

  • ident Starts the keyword utility for source files

  • rcs Changes file attributes

  • rcsclean Cleans up working files

  • rcsdiff Starts the revision comparison utility

  • rcsmerge Merges revisions

  • rlog Activates the logging and information utility
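
A short session shows how these programs fit together; the filename foo.c and the log message are hypothetical:

 $ mkdir RCS                         # revisions will be stored here
 $ ci -u foo.c                       # check in; -u keeps a read-only working copy
 $ co -l foo.c                       # check out and lock the file for editing
 $ rcsdiff foo.c                     # compare the working file with the last revision
 $ ci -u -m"Fix banner text" foo.c   # check in the change with a log message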

Source code control with CVS requires the use of at least the following six command options on the cvs command line:

  • checkout Checks out revisions

  • update Updates your sources with changes made by other developers

  • add Adds new files to the CVS records

  • import Adds new sources to the repository

  • remove Eliminates files from the repository

  • commit Publishes changes to other repository developers

Note that some of these commands require you to use additional fields, such as the names of files.
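
For example, a typical working session might look like this sketch, in which the module name myproject and the filename newfile.c are hypothetical:

 $ cvs checkout myproject           # fetch a working copy of the module
 $ cd myproject
 $ cvs update                       # merge in changes made by other developers
 $ cvs add newfile.c                # schedule a new file for addition
 $ cvs commit -m "Add newfile.c"    # publish the changes to the repository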

RCS and CVS can be used for more than software development projects. These tools can be used for document preparation and workgroup editing of documents, and they can work with any text files. Both systems use registration and control files to accomplish revision management. Both systems also offer the opportunity to revisit any step or branch in a revision history and to restore previous versions of a project. This mechanism is extremely important in cross-platform development and for software maintenance.

Tracking information is usually contained in separate control files; each document within a project might contain information that is automatically updated with each change to a project. A process called keyword substitution is used to perform these automatic updates.

CVS and RCS use similar keywords, which are usually included inside C comment strings (/* */) near the top of a document. The following are some of the available keywords:

  • $Author$ The username of the person who performed the last check-in

  • $Date$ The date and time of the last check-in

  • $Header$ The pathname of the document's RCS file, with the revision number, date and time, author, and state inserted

  • $Id$ Same as $Header$, but without a full pathname

  • $Name$ A symbolic name (see the co man page)

  • $Revision$ The assigned revision number (such as 1.1)

  • $Source$ The RCS file's full pathname

  • $State$ The state of the document, such as Exp for experimental, Rel for released, or Stab for stable

You can also use these keywords to insert version information into compiled programs by using character strings in program source code. For example, given an extremely short C program named foo.c, you could use the following to assign version info to a variable (rsrcid in the example), which will then be embedded in the binary program:

 /* $Header$ */
 #include <stdio.h>

 static char rsrcid[] = "$Header$";

 int main(void)
 {
     printf("Hello, Linus!\n");
     return 0;
 }

The resulting $Header$ keyword might expand to this in an RCS document:

 $Header: /home/bball/sw/RCS/foo.c,v 1.1 1999/04/20 15:01:07 root Exp Root $ 
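
Because the keyword string is compiled into the binary, the ident program listed earlier can recover it from the executable. The session below is illustrative, reusing the expanded header shown above:

 $ gcc -o foo foo.c
 $ ident foo
 foo:
      $Header: /home/bball/sw/RCS/foo.c,v 1.1 1999/04/20 15:01:07 root Exp Root $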

Getting started with RCS is as simple as creating a project directory with an RCS subdirectory and then creating or copying your initial source files into the project directory. You can then use the ci command to check in documents. Getting started with CVS requires that you initialize a repository by first setting the $CVSROOT environment variable to the full pathname of the repository and then using the init command option with the cvs command, like this:

 # cvs init 
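
Under bash, the whole sequence might look like the following sketch; the repository path and project directory are hypothetical:

 # export CVSROOT=/usr/local/cvsroot
 # cvs init
 # cd ~/myproject
 # cvs import -m "Initial import" myproject vendor start

The import command copies the contents of the current directory into the repository as a new module, which developers can then retrieve with checkout.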

You can find documentation for RCS and CVS in various man pages, under the /usr/share/doc directories for each of them, and in GNU information documents.

TIP

Even though they are free, CVS and RCS are powerful software project management tools. Many organizations use CVS and RCS to manage the source for their projects. Commercial code-management tools (also called version-control programs) exist that include fancy interfaces, homogenous platform support, and greater flexibility than RCS and CVS. But those tools require dedicated administrators and might have significant licensing costs, both of which are major disadvantages.


Making Libraries with ar

C and C++ project management tools can help ease the process of building large applications; the ar command is an example of such a tool. If several different programs use the same functions, they can be combined into a single library archive. You use the ar command to build such a library. When you include the library on the compile line, the archive is searched to resolve any external symbols. Listing 32.1 shows an example of building and using a library.

Listing 32.1. Building and Using a Library
 $ gcc -c sine.c
 $ gcc -c cosine.c
 $ gcc -c tangent.c
 $ ar cr libtrig.a sine.o cosine.o tangent.o
 $ gcc -c mainprog.c
 $ gcc -o mainprog mainprog.o libtrig.a

Remember that gcc is the command that you use to invoke the GNU C compiler. (The section "Using the GNU C Compiler," later in this chapter, explains what -c and -o mean.)

Debugging Tools

Debugging is both a science and an art. Sometimes, the simplest tool, the code listing, is the best debugging tool. At other times, however, you need to use other debugging tools. Three of these tools are splint, gprof, and gdb.

Using splint to Check Source Code

The splint command is similar to the traditional Unix lint command: It statically examines source code for possible problems, and it also has many additional features. Even if your C code meets the standards for C and compiles cleanly, it might still contain errors. splint performs many types of checks and can provide extensive error information. For example, this simple program might compile cleanly and even run:

 $ gcc -o tux tux.c
 $ ./tux

But the splint command might point out some serious problems with the source:

 $ splint tux.c
 Splint 3.1.1 --- 17 Feb 2004

 tux.c: (in function main)
 tux.c:2:19: Return value (type int) ignored: putchar(t[++j] -...
   Result returned by function call is not used. If this is intended, can cast
   result to (void) to eliminate message. (Use -retvalint to inhibit warning)

 Finished checking --- 1 code warning

You can use the splint command's -strict option, like this, to get a more verbose report:

 $ splint -strict tux.c 

The GNU C compiler also supports diagnostics through the use of extensive warnings (through the -Wall and -pedantic options):

 $ gcc -Wall tux.c
 tux.c:1: warning: return type defaults to `int'
 tux.c: In function `main':
 tux.c:2: warning: implicit declaration of function `putchar'

NOTE

If you would like to explore various C syntax-checking programs, navigate to http://www.ibiblio.org/pub/Linux/devel/lang/c/. The splint program is derived from lclint, which you can find in the lclint-2.2a-src.tar.gz file at the website.


Using gprof to Track Function Time

You use the gprof (profile) command to study how a program is spending its time. If a program is compiled and linked with the -pg flag, a gmon.out file is created when it executes, with data on how often each function is called and how much time is spent in each function. gprof parses and displays this data. An analysis of the output generated by gprof helps you determine where performance bottlenecks occur. Using an optimizing compiler can speed up a program, but taking the time to use gprof's analysis and revising bottleneck functions significantly improves program performance.
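
The whole cycle, using a hypothetical program named myprog, might look like this:

 $ gcc -pg -o myprog myprog.c      # compile and link with profiling enabled
 $ ./myprog                        # running the program writes gmon.out
 $ gprof myprog gmon.out | less    # view the flat profile and call graph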

Doing Symbolic Debugging with gdb

The gdb tool is a symbolic debugger. When a program is compiled with the -g flag, the symbol tables are retained, and a symbolic debugger can be used to track program bugs. The basic technique is to invoke gdb after a core dump (a file containing a snapshot of the memory used by a program that has crashed) and get a stack trace. The stack trace indicates the source line where the core dump occurred and the functions that were called to reach that line. Often, this is enough to identify a problem. It isn't the limit of gdb, though.

gdb also provides an environment for debugging programs interactively. Invoking gdb with a program enables you to set breakpoints, examine the values of variables, and monitor variables. If you suspect a problem near a line of code, you can set a breakpoint at that line and run gdb. When the line is reached, execution is interrupted. You can check variable values, examine the stack trace, and observe the program's environment. You can single-step through the program to check values. You can resume execution at any point. By using breakpoints, you can discover many bugs in code.
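
A brief interactive session with a hypothetical program named myprog (compiled with the -g flag) might look like this sketch:

 $ gcc -g -o myprog myprog.c
 $ gdb ./myprog
 (gdb) break main            # interrupt execution when main() is reached
 (gdb) run
 (gdb) next                  # single-step to the next source line
 (gdb) print counter         # examine a variable (counter is hypothetical)
 (gdb) backtrace             # show the stack of function calls
 (gdb) continue              # resume execution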

A graphical X Window interface to gdb is called the Data Display Debugger, or ddd.

NOTE

If you browse to http://www.ibiblio.org/pub/Linux/devel/debuggers/, you can find at least a dozen different debuggers.

