Get Comfortable with Source Code

Difficult-o-Meter: 3 (moderate Linux skill required)

"Just compile it!" is probably a slogan we'll never see from an overpriced-sneaker company, particularly not now that tech stocks are in the toilet and most dot-commers can't even afford Ramen noodles, much less $90 sneakers. But there's another reason we'll never see it as a shoe slogan:

Shoe company slogans are always exhorting us to go do difficult things. They want us to run to strange, isolated places and stop and do sit-ups, or to race traffic lights while on foot. I'm afraid I'm not that type of person. When I want to race traffic lights, I play some variant of "Need for Speed." When I get to strange, isolated places, I tend to spend my time enjoying the view by rubbernecking. I guess I don't like all the hard work associated with "TV Commercial" exercise.

This is why I often exhort people to "Just compile it!" It's easier than figuring out the switches to something like RPM (the RedHat package manager, a utility that is supposed to work quite well, but with which I'm always having some type of argument). Almost every GPL'ed project I've downloaded has configured and compiled easily, with nary a wasted neuronal discharge on my part (a neuronal discharge is a thought, for those of you not up on your medical terminology). A casual perusal of the README file has traditionally told me which packages were prerequisites, generally where said prerequisites can be found, and then what commands should be used to configure, compile, and install the package.

I could devote a whole chapter to why I don't like RPM, but I can sum it up in one sentence: Most people who make RPMs are exceedingly careless. They require all sorts of other packages, most of which aren't required at all. RedHat, Mandrake, and TurboLinux should be ashamed for this. Installing one of their .rpm files will undoubtedly drive you to the brink of insanity, because it requires their version of every package in the world. SuSE is smarter. They put only the real requirements in the requirement list. As a result, I find myself using some .rpm files from SuSE. Too bad SuSE has stopped distributing free iso's [*] of their distribution. But I digress.

[*] ISO is the International Organization for Standardization. In this case, "iso" is common shorthand for ISO-9660, the CD-ROM file system standard, so "an iso" is an image of a CD-ROM that may be downloaded and burned to a CD-R or CD-RW. See Chapter 18.

Being comfortable with compiling and installing source code is rather equivalent to being able to put oil and coolant in your own car: it's a skill that isn't absolutely required but that will certainly make your life simpler, and it will certainly give you a better understanding of how your computer works.

Compiling your kernel from scratch is one of my big bugaboos. I've put a bit of a write-up online at

http://www.jurai.net/~patowic/computer/pre-kernel.html

explaining why I don't think you should use a precompiled kernel. I'll save you the effort of heading off to that URL by reproducing the text here.

At least once a week, I answer the question "Why shouldn't I use a precompiled kernel?" It's a simple question, with a fairly simple answer.

The short answer is this: Don't trust your vendor.

The long answer goes something like this: All Linux distributors have to make certain decisions when making their distribution. They have to decide what software to include and what kernel options to enable. They don't know anything about your hardware. They don't have much idea what you intend to do with your Linux box. They do have to worry about supporting the widest range of hardware possible.

Out of this need grew kernel modules, which give you the ability to insert code into the kernel on the fly. This is really great for assembling a generic system. This has less utility for Joe Average. Joe Average, you see, does not change his PC hardware that often. He builds a system and installs an operating system (OS). Ideally, he should have some idea of what is in his system.

A kernel based on modules is kind of like a generic engine that can be placed in any car. A custom-built kernel would (for the purposes of this analogy) be a tuned engine that only fits in your car. Which do you think will be faster and work better?

After speed, security is a concern. RedHat is infamous for leaving nice fat security holes all over their distribution. Their kernel is no different. A while back, they left a debugging feature enabled in all kernels. This was a feature that would be used by far less than 1% of Linux users, but it did open up a hole to the outside. This is what most people would call "bad."

Upgrading one's kernel should also be done fairly regularly. When new kernels come out, look at the features and fixes they offer. Try to decide if you should have them. When kernel 2.2.16 came out, fixing a nice big security hole that had existed in all previous versions, it was a week before the major vendors had new kernels available for upgrade. I had all my machines upgraded within a matter of hours after hearing the announcement, because I build my own kernels.
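Deciding whether a new kernel's fixes apply to you starts with knowing exactly what you're running. A quick check with the standard uname utility:

```shell
# -r prints just the running kernel's release string (e.g., 2.2.16)
uname -r

# -a adds the works: kernel name, hostname, version, build date, architecture
uname -a
```

Compare the release string against the version a security announcement names, and you know immediately whether you need to upgrade.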

Building your own kernel is like having a tailor-made suit for the same price as a regular one. Who would pick off-the-rack over tailor-made?

Do you see why I don't like precompiled kernels? I feel this way about most precompiled software. I'd rather download the source, compile it myself, and run from there.

Compiling Software

Compiling software under Linux generally follows one of three patterns: autoconf, makefiles, and imakefiles. Before I cover each one, I want to point out a few commonalities. In every case, the INSTALL and README files should be carefully read. The answers to any questions you may have are undoubtedly in them. The configure and compile should be done using a normal user account, and only the install should be done as root.

Avoid using the root account whenever possible. The root account is basically the .44 magnum of your Linux box. Screwing up with a normal account can be painful. Screwing up with the root account can cost you a limb.

I keep all my source code under /usr/src. It's a logical place, and it helps keep my home directory slightly less cluttered. Files that end in .tar.gz or .tgz or .tar.bz2 are generally called tarballs. Yep, that's a real word (at least in the Unix world). Tar is a Tape ARchival program that's been around since the beginning of time, and it's the de facto standard for distributing non-RPM'd files. Note that RPM files are generally only available for RedHat or its derivatives, so you'd never find an RPM file for Slackware or OpenBSD or FreeBSD. Oops, we're talking about Linux, aren't we? Just strike those last two names from your mind for a while.
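With GNU tar, packing and unpacking the common tarball flavors is a one-liner each. A small sketch (all the file and directory names here are illustrative):

```shell
# Work in a scratch directory (all names here are invented for the demo)
mkdir -p /tmp/tarball-demo && cd /tmp/tarball-demo
mkdir -p mypackage && echo 'hello' > mypackage/README

# Roll the tree into a gzip-compressed tarball: c=create, z=gzip, v=verbose, f=file
tar czvf mypackage.tar.gz mypackage

# Unpack it again: x=extract instead of create
rm -rf mypackage
tar zxvf mypackage.tar.gz

# For .tar.bz2 files, GNU tar swaps z for j:
#   tar jxvf mypackage.tar.bz2
```

The .tgz extension is just an abbreviation of .tar.gz; the same z flag handles both.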

Imakefiles

Imake was developed for the X Window System, and the only package I've run into that uses it is XFree86. It was intended to be a platform-independent version of make, but it never really took off. Imake has a neat O'Reilly book on it that I've never read, and (I suspect) I'm not alone in this. It's complex, it's different from what's out there, and Gnu Autoconf pretty much does away with the need for it. Even XFree86 can now be built without fiddling with Imake at all: download the source code and run "make World" from the top directory. Things seem to go just fine from there.

Basic Makefiles

Plain-Jane makefiles may require modification to compile on your system. Try them first to see if they work:

 tar zxvf mypackage.tar.gz 
 cd mypackage 
 more README 
 more INSTALL 
 [make config] 
 make 
 su -c "make install" 
 make clean 

As always, we read the instructions first, and remember that /usr/src is a good place to put all that source code. The next step, make config, is optional. Many packages don't use it. Some do. Some have other options as well; it'll say in the README or INSTALL files. The make config step generally means you'll be asked a variety of questions. You'll have to figure out the correct answers (for example, it may ask "What is the path to md5sum?"), or the compile will fail.

For packages that do not have a config option and that don't compile the first time, you'll have to open up the Makefile with an editor and look for variables that can be changed. Generally it'll be something minor, like adjusting the path to a binary. It's also worth looking at the makefile to see just where it intends to put the compiled binaries. Big problems with the Makefile will require either in-depth knowledge of Makefiles (which few people seem to have) or e-mail to the package maintainer. The latter is generally the better course of action, since if there is a genuine problem, the main developer needs to know. Don't go crying wolf, though, and be polite about e-mailing the guy or gal.
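The variables you're hunting for usually sit near the top of the Makefile. The names below (PREFIX, CC, CFLAGS) are common conventions, not guarantees; each package picks its own. A sketch of spotting and adjusting such a variable, using a stand-in Makefile:

```shell
# Write a tiny stand-in Makefile (contents here are purely illustrative)
cat > /tmp/demo-Makefile <<'EOF'
PREFIX = /usr/local
CC     = gcc
CFLAGS = -O2 -Wall
EOF

# Look for the tweakable variables (usually near the top of the file)
grep -E '^(PREFIX|CC|CFLAGS)' /tmp/demo-Makefile

# Point the install at your home directory instead of /usr/local (GNU sed)
sed -i 's|^PREFIX = .*|PREFIX = $(HOME)/local|' /tmp/demo-Makefile
```

An editor works just as well as sed, of course; the point is that the fix is usually one line, not surgery.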

Autoconf

Gnu autoconf is one of the more clever pieces of software out there. Basically, it's a way for the software package to ask a couple of hundred questions about your system, and to ask the system directly. The configure program (which is what autoconf uses) checks the location of binaries, checks compiler behavior, and does everything short of washing the windows. Autoconf'd packages are really, really nice to compile, particularly in comparison to the manual editing of Makefiles.

The pattern for autoconf'd packages looks like this:

 tar zxvf mysfwr-2.tar.gz 
 cd mysfwr-2 
 more README 
 more INSTALL 
 ./configure --help 
 ./configure [options] 
 make 
 su -c "make install" 
 make clean 

Note the [options]: many autoconf'd packages have a gaggle of extra information you can manually pass to the configure script. You can generally ignore this, but there are times it's a distinct advantage. In the XMMS (Cross-platform Multi-Media System) package, for example, there is a switch to enable 3DNow! optimizations, which is great if you have an Athlon, Duron, K6, K6-2, or K6-III microprocessor, but useless if you have an Intel or Cyrix chip. Here's an example of using some command line flags to the configure script:

 tar zxvf xmms-1.2.4.tar.gz 
 cd xmms-1.2.4 
 ./configure --with-x --enable-3dnow 
 make 
 su -c "make install" 
 make clean 

Simple, eh? Astute readers will also note that I included X-windows support. These flags differ from package to package, so running "./configure --help" is really the only way (besides reading the documentation) to figure them out.

If something gets fouled up in your compile and you need to rerun configure, be sure to erase the old results! These are stored in config.cache or .config.cache, and those files need to be removed before you run configure again.
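A minimal sketch of that cleanup (some packages also offer a make distclean target that does this and more, though not every Makefile provides it):

```shell
# Suppose an earlier ./configure run left its cached answers behind
touch config.cache

# Wipe autoconf's cached test results so the next configure starts fresh
rm -f config.cache .config.cache

# Now configure can be rerun with different options, for example:
# ./configure --prefix=$HOME/local
```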

How to Roll Your Own Kernel

Many people new to Linux are seriously intimidated by the prospect of compiling their own operating system kernel. There is no doubt that if you are not a programmer, this is a more complex undertaking than anything you have probably attempted with a computer before.

Even though it may seem an awesome prospect, this doesn't have to be as nerve-wracking as it sounds. Before we actually get into kernel compilation, we are going to show you around a few directories and files so you know something about how Linux sets itself up on your hard drive and how it starts itself up. We'll then show you how a little preparation can allow you to safely get back where you started, even if you totally mess up the configuration and compilation of your new kernel. Once you have this safe recovery point, you need never worry again. You can compile the kernel again and again, certain that even if you screw up, you can always boot from your working original kernel. Let's start that tour.

The /etc/lilo.conf File

The following discussion pertains, alas, only to Linux on Intel processors. I would love to be equally knowledgeable about other platforms, but Intel machines are all I work with when it comes to Linux. Also, Intel machines will be what the vast majority of our readers are using. For these reasons, we have followed the herd and written about "Lintel" platforms. Be aware as you read the following that Linux on Alpha, PowerPC, Sparc, or any other processor platform will have similar structures and capabilities, but they will not be the same.

Every Linux distribution with which I am familiar uses the LILO boot loader by default. What is a boot loader? You may not know it, but your PC has two invisible programs in it. They show up in no directory and have no file name. One of them is probably familiar to you. It is called the BIOS, which stands for basic input/output system. It resides in read-only memory (ROM), which is a "burned-in" (or, in newer machines, "flashed") separate memory chip on your motherboard. The second is the boot loader.

When the BIOS starts up, it knows how to find a special little bit of disk: the first sector of the drive, called the master boot record, or MBR. The MBR contains two important things. One is the partition table. The other is a little area of boot code.

The BIOS reads the data from the MBR into a particular place in random-access memory (RAM) and then executes the instructions stored there. You could write your own little assembly language program and put it in the MBR of your hard drive, and then your computer would be able to execute only your little program. Nasty little creatures called boot sector viruses do precisely this.
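There's nothing magic about the MBR; it's just the first 512 bytes of the disk, and you can poke at it with dd. The sketch below uses a scratch image file rather than real hardware (substitute /dev/hda only as root, and only if you're sure of what you're doing):

```shell
# Create a 1 MB file standing in for a disk, so no real hardware is touched
dd if=/dev/zero of=/tmp/fakedisk.img bs=1024 count=1024 2>/dev/null

# The "MBR" is simply the first 512-byte sector; copy it out as a backup
dd if=/tmp/fakedisk.img of=/tmp/mbr.backup bs=512 count=1 2>/dev/null

# Restoring is the same copy in reverse (notrunc keeps the image full size)
dd if=/tmp/mbr.backup of=/tmp/fakedisk.img bs=512 count=1 conv=notrunc 2>/dev/null
```

Backing up the real MBR this way before installing a boot loader is a time-honored safety net.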

Since most machines ship with some variety of Windows preinstalled, the default MBR program consists of enough code to determine which partition is marked as "bootable" and then to boot the operating system stored there.

When Linux is installed on a machine, it generally replaces this silent and invisible boot loader program with a verbose and configurable boot loader called LILO.

What LILO does is determined by a configuration file called /etc/lilo.conf. Here is a sample file:

 boot = /dev/hda 
 vga = normal 
 read-only 
 linear 
 prompt 
 timeout = 30 
 
 image = /boot/vmlinuz 
 label = linux 
 root = /dev/hda3 
 
 image = /boot/vmlinuz.suse 
 label = suse 
 root = /dev/hda3 
 
 other = /dev/hda1 
 label = win 

I separate the global options from the bootable regions by indenting the bootable specifications. This isn't a requirement for LILO; I just do it to keep them clear. The vertical whitespace I use is also optional.

Here's the meaning of some of the global options you might use.

boot: This option specifies which device is the boot device. In this case, it is the first IDE hard drive (/dev/hda).

default: This option names the image that will be booted by default. If omitted, the first image specified is booted.

delay: Specifies the number of tenths of seconds to wait before booting the default image.

read-only: The read-only option is actually a Linux kernel parameter, not a LILO option. It specifies that the root partition is initially to be mounted read-only. This allows for file system checking and cleaning.

linear: The linear option tells LILO to use logical sector addresses instead of so-called cylinder/head/sector (CHS) addressing. This is a rather esoteric point. You should never have to add or remove this option if your lilo.conf is working.

lock: This option causes LILO to "remember" any kernel options you apply manually at boot and to use them the next time that image is booted. In other words, it makes the Linux kernel options you type at the boot prompt "sticky."

prompt: Normally LILO boots directly into the default image, unless a Shift key is pressed. When a Shift key is pressed during boot, the "boot:" prompt is given. If this option is specified, the "boot:" prompt always appears. Note that you cannot have an unattended reboot if this option is specified and the "timeout" option is not specified.

Running LILO

To install LILO, you will have to be root, and then, assuming the lilo.conf file you wish to use is in /etc/lilo.conf, simply type:

 # lilo 
 Added linux * 
 Added suse 
 Added win 
 # 

The lines you see here are the names of the images defined in the lilo.conf file. The asterisk denotes the default image.

The /boot Directory (Maybe)

Most distributions place a number of files of relevance to the kernel and to LILO in the /boot directory. This is often where default kernels (precompiled kernels) for your distribution are installed. Also to be found here are kernel maps, boot sector images, backups of old boot sector contents, and so forth.

Welcome to /usr/src/linux

Now we take the plunge! Let's visit /usr/src/linux. Take a look at Figure 1-1. This is the base directory of the 2.2.x Linux kernel source. This is an actual look at the 2.2.16 kernel on my Linux-based laptop. You don't need to know all that much about the kernel to compile it as you need. Even so, let me introduce you to this base directory so that you can use it as a launching point should you ever want to dig into the depths.

Figure 1-1. Our first look at /usr/src/linux.

The README file gives a very brief introduction to the kernel and how to compile it. The Makefile file is the "formula" for building the kernel (similar to those cases described earlier in this chapter, although the Linux kernel make process does not exactly match any of the "standard" make styles we talked about earlier).

Documentation: This is, as you may have guessed, a collection of documentation on the Linux kernel and its subsystems. This documentation varies quite a bit. Some of it is written to be understood by end users; some of it is written to be understood only by other kernel developers. Some of it is a bit out of date. As is often the case with programs everywhere, documentation tends to lag behind coding. Still, this is a rich mine of information, and often the best place to turn when you are having problems using some kernel feature for the first time.

arch: This is where architecture-specific code (read: processor-specific code) is kept. Subdirectories of this directory include i386, alpha, arm, m68k, mips, ppc, s390, sparc, and sparc64. So, as you can see, Linux runs on quite a range of hardware. Soon you will see something like "ia64," which will be support for the next generation of Intel processors (the "Itanium"). That "s390" is none other than the IBM mainframe S/390. So Linux runs on everything from the smallest 386's right on through IBM mainframes.

drivers: The drivers directory contains code for driving various devices, from serial ports to network cards, from floppy drives to RAID arrays, and everything in between.

fs: The fs directory contains the code to implement the various file systems Linux supports. Linux supports many file systems. Its preferred native file system, ext2, is here, along with several others, including FAT, NTFS, and HPFS, and such network file systems as smbfs, nfs, and coda.

include: This has the header files (.h files, or "include files") that are used throughout the kernel source. This can be interesting to poke around in, since data structures are generally defined in such files.

init: Duh, gee, Tennessee: this is the kernel initialization code.

ipc: The code that implements the System V InterProcess Communication APIs is in here (semaphores, message queues, and shared memory).

kernel: This is where the "core" kernel code lives. Here is where you will find the code that does process scheduling, resource allocation, signals, kernel modules, etc.

lib: Here you will find code for kernel versions of some familiar standard C library function calls, including ctype, sprintf, and so forth. The kernel cannot call user-space versions of these functions for thread-safety reasons (among others), so it must have its own implementations of them.

mm: This directory contains the memory management system of Linux. You might well think that memory management was about as "core" a kernel feature as you can get, so why isn't it in "kernel"? The simple answer is that the memory management system contains as much code by itself as the rest of the "core" kernel. Keeping it apart makes it easier to figure out which source files are involved in this critical function. The more complex answer has to do with the fact that this is both a complex system and one where bugs will have disastrous consequences. It is always easier to maintain and debug code "off by itself."

net: As you may have guessed, the implementations of the various network protocols supported by the Linux kernel are to be found here. Note that this is code for the protocols, not for the network devices themselves (ethernet cards, etc.). Code for those is to be found in the drivers directory.

scripts: Various "support" scripts for the configuration and make process are kept here. For example, the Tk scripts that implement the xconfig configuration tool are to be found here.

Friendly Kernel Configuration ( menuconfig , xconfig)

The Linux kernel is configurable in thousands of ways. The complexity is such that an interactive configuration system is built into the kernel's makefile. Making the Linux kernel is quite unlike any other build I have seen (although Perl's make process comes close).

There are three ways to run the interactive configuration. The most basic is to type make config, which will ask a question about every single possible configuration option. I don't recommend this. You'll start going completely nuts long before you get to the options you want to change. The other two options are make menuconfig, which is a text-based "point-and-shoot" interface, and make xconfig, which is an X-windows-based point-and-click GUI configuration interface.

The make menuconfig interface looks like Figure 1-2. The make xconfig interface looks like Figure 1-3.

Figure 1-2. Sample menuconfig.

Figure 1-3. Sample xconfig.

Generally, you will need to be root to configure, build, and install a new kernel.

Most optional code in the Linux kernel can be disabled, compiled into the kernel, or built as modules. Modules offer greater flexibility, but they do involve a small performance cost when loaded, and if you wish to load and unload them, you must use a set of commands (insmod, rmmod, etc.) to manipulate them. I find it much simpler to compile in everything I need directly, and then, if I need different things, to boot between two or three different kernels. One of the great things about Linux is you can do it your own way. You can go either route.

Several chapters in this book will take you through configuring the kernel to enable such features as IP Masquerading and Parallel Line Internet Protocol (PLIP). Those chapters will just tell you what settings to change. This chapter is the only one in the book that will actually walk you through compiling and deploying a new kernel. Refer back to this chapter when you read later chapters that tell you to reconfigure your kernel if you are not comfortable with the process. We will try to make this as painless as possible.

Using make menuconfig or make xconfig you will be able to reconfigure your Linux kernel in a great many ways.

As a general principle, be sure to write down exactly which configuration parameters you change, what they used to be set to, and what you changed them to. This allows you to confidently "undo" your changes (assuming you can boot the system to change them back! More about how to be sure that you can, even if you screw up the new kernel, will appear later).

Also, as another general rule, change as few things at a time as possible. If you wish, for example, to install your Intel EtherExpress Pro and set up PLIP networking, do one first, compile and deploy the kernel, and then do the other. If one doesn't work, you then know which one.

Building the New Kernel

Now that you have set your configuration, you must run make dep to rebuild the dependencies. What does this mean? Well, some code requires that other code be built and linked to the kernel. If that code was not included for any other reason, then the compilation would fail. Running make dep causes such prerequisite code to be compiled for any of your configuration changes.

Now you can make your kernel. There are a number of build targets (the arguments to make that build a kernel). Some of them build and install the kernel for you. We won't use those options here. We will do this all "manually," because I don't know which distribution you have and I don't know if some of the installs will work correctly with what you have. So we will do it by the numbers, by the book. Most kernels have enough code in them that compressed images are required. These targets are zImage and bzImage. Despite the name, bzImage has nothing to do with bzip compression; the "b" stands for "big," meaning a compressed kernel image that can be larger than the old zImage size limit. Almost all distributions support it, and that's the one I'll use. Running make bzImage should be all that is necessary; some people, out of paranoia, want to have every single bit of code rebuilt. They like to run a make clean before they run their make bzImage. This should not be necessary, but it will do no harm (other than making the compile take quite a long time).

There's a little bit more that you must do. If any options are set to compile as modules, you must run make modules and then you must run make modules_install to install those modules where they belong.

You can string all of these commands together on one command line and do it in such a way that subsequent commands will not run if an earlier one fails for some reason. Here's how you could do this:

 # make dep && make clean && make bzImage && make modules && make modules_install 
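The && between those commands is the shell's short-circuit "and": each command runs only if the previous one exited successfully (status 0), so a failed make dep stops the whole chain before anything else starts. A tiny sketch of the behavior:

```shell
# true exits with status 0, so the command after && runs
true && echo "ran after success"

# false exits nonzero, so its && partner is skipped entirely
false && echo "this never prints"

# this is exactly why a failed step halts the whole kernel-build chain
echo "chain demo done"
```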

When this finishes (and it will take several minutes even on the fastest machines), you have a compressed compiled kernel and you have the modules installed where they belong. You will now need to install the kernel.

Installing the New Kernel

To install the new kernel, you must set up an entry in /etc/lilo.conf that points at the new kernel.

In most distributions I have seen, kernels reside either in the root directory or in /boot. Since this is a new kernel (and perhaps your first kernel), we should not replace any existing kernel, especially not your current default, so that your system will properly boot into one of your old kernels if something is wrong.

Let's assume you are working with the machine from which came the sample lilo.conf earlier in this chapter.

Your new compiled kernel will be found in /usr/src/linux/arch/i386/boot (remember, we are assuming an Intel-compatible PC here) and will be named bzImage. Let's copy it into the boot directory and call it vmlinuz.new:

 # cp arch/i386/boot/bzImage /boot/vmlinuz.new 

You might also want to copy down the kernel memory map, System.map (which is used to resolve addresses to kernel function names in the event of a kernel panic). We won't bother with that here.

Now add the new kernel to your /etc/lilo.conf file. (The new lines are the three-line stanza for /boot/vmlinuz.new):

 boot = /dev/hda 
 vga = normal 
 read-only 
 linear 
 prompt 
 timeout = 30 
 
 image = /boot/vmlinuz 
 label = linux 
 root = /dev/hda3 
 
 image = /boot/vmlinuz.suse 
 label = suse 
 root = /dev/hda3 
 
  image = /boot/vmlinuz.new  
  label = new  
  root = /dev/hda3  
 
 other = /dev/hda1 
 label = win 

Next, be sure to run lilo, or your new kernel won't be accessible on boot:

 # lilo 
 Added linux * 
 Added suse 
 Added new 
 Added win 
 # 

Next comes the moment of truth.

Booting the New Kernel

Okay, are you ready? Then gird your loins, or whatever it is you do, and type the fateful:

 # init 6 

Or, if you have actual users on your system:

 # shutdown -r +5 

The latter will give users five minutes to get off the system before it reboots. When it reboots, you should see the LILO prompt:

 LILO: 

Type:

 LILO: new 

Typing "new" and pressing Enter will cause LILO to boot your newly built and installed kernel instead of the default (which is still the kernel named "linux"). If all goes well, your system should boot normally.

Yikes! Or Booting the Old Kernel

But what if it doesn't? Because of the way we set up /etc/lilo.conf, all you should have to do in the event of a crash is cold-boot the machine. Your old (and known to be working) Linux kernel will boot instead of the new one, and the system should recover.

So, what went wrong? Because of the wide variety of PC configurations and Linux kernels, I really cannot hazard a guess.

None of the changes this book tells you to make should prevent your kernel from working. If you changed other kernel configuration parameters, you may have dropped some driver needed by your hardware. Before you reboot into the old kernel, take a look at the kernel messages output just before the crash (if they are still visible). Which subsystem was starting may provide a clue about what you are missing or have misconfigured.

Another possible cause of difficulty is that the default kernel configuration in /usr/src/linux does not, in fact, match the configuration of your boot kernel.

This isn't a book on the Linux kernel. If you can't boot your modified kernel and you can't figure out why, your best bets are to try any of the following:

1. A friend who is into Linux. This is often the best possible resource. Most Linux people are thrilled to help people have good experiences with Linux and will take any opportunity to help out.

2. A local Linux user's group.

3. A Web site like LinuxCare or the Web site for your Linux distribution.

4. The Linux newsgroups.

5. A Web search engine, like google.com.

One nice thing is that almost every possible problem has been encountered and solved by at least one person before you. Help can be found.

Finding Source

The Freshmeat Web site, at http://www.freshmeat.net/, is a major repository and announcement site for open source projects. Sourceforge, at http://www.sourceforge.net/, is another repository, and it too has an announcement section. Yet another way to find open source projects is by searching on my favorite search engine, Google, at http://www.google.com/.

Source Is Good!

Downloading the source can lead to other good habits, like looking through the source code when you find a problem. I've seen almost no Linux/open source projects that aren't currently under development. Usually, at least one person is actively looking into improving a given project. Source code isn't that hard to learn. It's basically another language, remember, but one tailored to solving problems. Learning source code is much like learning a limited subset of a foreign language, for example, learning only enough Turkish to be able to read a Turkish recipe.

There are a lot of really good books on various programming languages. I tend to get O'Reilly books when I need some programming reference books. Note that I said reference books. For actually learning a language, there are generally hundreds, if not thousands, of free lesson sets and tutorials available on the Web. The primary languages used for Linux and open source projects are C, C++, Perl, and Python.

What this means is that I can dig through small programs and find the answers to certain compilation problems. While working on the chapter about Wine (Chapter 17), I found myself unable to compile the Wine debugger. It kept erroring out, saying that a shared library (.so files under Linux) did not contain needed functions. "That's odd," I thought, and I did a quick find to see if that was a library compiled by Wine. Sure enough, it was. My compile was failing because the compilation process was using an old version of the library. Once I copied the new library into place, the compile hummed along happily.
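That "quick find" is just the find command pointed at the source tree. A sketch with an invented library name (Wine's real paths and filenames differ):

```shell
# Fake source tree standing in for a real build (paths and names invented)
mkdir -p /tmp/finddemo/dlls /tmp/finddemo/old
touch /tmp/finddemo/dlls/libfoo.so /tmp/finddemo/old/libfoo.so

# Did the build produce this library itself, and where are the copies?
find /tmp/finddemo -name 'libfoo.so'

# ldd shows which shared libraries a compiled binary will actually load:
#   ldd ./somebinary
```

Two hits for the same library, as here, is exactly the stale-copy situation I ran into with Wine.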

Someday, "when I have more time" (which generally translates to never), I hope to start digging through C++ code for things like Wine and for fixing small bugs. Most open source projects have 'FIXME!' notes tagged in their source code. These notes mark code that needs to be fixed but that wasn't a high enough priority for someone to get to it yet. If you want to learn to program, these things are often a great place to start.

How does one get comfortable with source code? By downloading and installing things from source. You've read my rant, and hopefully you understand some of the rationale behind it.

Eschew RPM, I say! Use the Source, Luke!

 



Multitool Linux: Practical Uses for Open Source Software
ISBN: 0201734206
Year: 2002
Pages: 257