IN THIS CHAPTER
So your Mac OS X computer is sitting there minding its own business, and along comes another machine and says "Hi there, remember me from 10.1.5.0? Want to go out and share a file?" If the visiting computer knows the correct information to mount your machine's drives, or to authenticate as one of your users, what's your machine to do, except believe that it is what it says it is, acting on the authority of whom it claims to be? This is a fundamental problem with the idea of a computer network, and one about which, in many senses, almost nothing can be done. We and our computers identify ourselves and other systems over the network via mechanisms that range from visual branding identity to passwords, and from digital signatures to serial number information extracted from CPUs, network cards, and system boards. Whether we're trying to identify software across the network or are identifying ourselves to it, some pieces of selected information are considered sufficient to prove those identities. If some imposter system out there can replicate all these pieces of information and provide them on demand, that imposter will be believed to be the system that it is impersonating. This is called spoofing.
In a sense, any form of identity misappropriation can be considered spoofing. An intruder using your user ID and password is spoofing your identity on the system. A remote machine that is serving up an NIS domain with the same name as your normal server, in the hopes that your machine will listen to it rather than the password master it was intended to obey, is spoofing your NIS domain master. In recent Internet history there have been a number of fly-by-night businesses that have set up Web sites located at common misspellings of popular e-commerce sites, with visual duplicates of the real site. These spoofs of the actual online retailers' sites have taken orders at elevated prices, passed the orders off to the real business, and skimmed the pricing difference. PayPal.com users, as well as customers of a host of other online businesses, have been bilked out of money by "reregistering" at the request of spoofed email claiming that their accounts had been compromised and providing a link at which they could reenter their personal and credit card information (http://www.scambusters.org/Scambusters55.html). Because in many cases the deception doesn't constitute fraud, even big business gets in on the act occasionally. When AT&T put together its 1-800-OPERATOR campaign, MCI cashed in on the opportunity and picked up 1-800-OPERATER, directing it to its own operator service, and supposedly raked in $300,000 a month of free money by using AT&T's advertising (http://icbtollfree.com/pressetc/telephonyarticle10142002.html). A classic demonstration that computer security cannot be implemented without user education was carried out via a spoof by a tiger team (http://www.catb.org/jargon/html/entry/tiger-team.html) hired to test a U.S. military installation's security.
When the team discovered that they couldn't find a way to break into the system through application of network techniques, they spoofed an official system patch document from IBM, packaged it with a software backdoor to the system, and had it delivered to the installation, which dutifully installed it, letting them in (http://www.catb.org/jargon/html/entry/patch.html). An enterprising individual who wanted to compete in the world of domain name registrations came up with the clever(?) plan to steal InterNIC's traffic and customers by spoofing InterNIC's registration service (http://www.nwfusion.com/archive/1997/97-07-28____.html). Heck, there's a whole flippin' country full of people pretending to be the children of Nigerian diplomats in desperate need of a foreign partner to move millions of dollars out of hidden bank accounts (http://home.rica.net/alphae/419coal/).
Formally, spoofing is defined as providing false identity credentials for the purpose of carrying out a deceit. In some circles it's considered a requirement that this deceit be intended to obtain unauthorized access to a system or its resources, whereas others also count misrepresenting oneself as a trustable entity merely to maintain plausible deniability. Finally, pure theft of identity, although it meets almost everyone's definition, doesn't tend to be called spoofing unless the identity was stolen for the purpose of forging that identity: Just using someone else's password to access a system wouldn't commonly be called spoofing unless the purpose was to make it appear that that person was performing the actions.
Clear as mud? Don't worry; after you get used to the way the term is used, it'll all make sense in context.
The issue boils down to a matter of establishing trust, and fixing a set of credentials by which a protocol or system may establish the identity of some other user or system. If these credentials can be forged or replicated, then a third, untrusted system may spoof itself as a trusted entity to any system that relies on the replicable credentials for identification. The parallels between the computing world and the human world are, in this case, many, as the problem of establishing trust and identity is a basic issue of human existence as well. We take thumbprints or retina scans at the entrance to a top-secret facility (or at least in James Bond's world we do); we compare handwritten signatures with originals; banks ask us for our mothers' maiden names; and every day we compare our remembered copies of our associates' voices with what our ears are telling us, so that we can identify the speaker. As humans we have the same problems our computers have. We need to trust some collection of information as being sufficient to identify others around us as who they are. No matter how large this collection of information is, it will always be possible in some way for someone to fake these credentials, but we trust them because we believe that they are, for our purposes, sufficient. If we're overly naive, we may place our trust in too few credentials, and we may be fooled with too great a frequency. If we're overly paranoid, we may decide that this means that we can't actually trust anyone, ever. Neither of these extremes is practical for human interactions, nor are they practical for computer interactions. Trust must be established in some way, and it will always be possible for the mechanism for establishing that trust to be deceived. It is simply an issue that we and our computing systems must live with.
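To make the idea of replicable credentials concrete, here is a minimal sketch (all names and addresses are hypothetical, not drawn from any real protocol) of a trust check that relies entirely on self-reported, copyable information, roughly the kind of check an NIS-style client effectively performs when it accepts whichever server answers with the right domain name:

```python
# Toy illustration of trust based on replicable credentials.
# TRUSTED_DOMAIN and TRUSTED_ADDRESS are made-up values for this sketch.

TRUSTED_DOMAIN = "corp-nis"      # domain name we expect the master to claim
TRUSTED_ADDRESS = "10.1.5.17"    # address we expect the master to claim

def is_trusted(claimed_domain: str, claimed_address: str) -> bool:
    """Accept a peer if its self-reported credentials match our records.

    Nothing here is secret or unforgeable: any imposter that can
    observe or guess these strings passes the same check.
    """
    return (claimed_domain == TRUSTED_DOMAIN
            and claimed_address == TRUSTED_ADDRESS)

# The legitimate master is accepted...
print(is_trusted("corp-nis", "10.1.5.17"))    # True
# ...and so is an imposter presenting identical strings; the check
# cannot tell them apart, which is exactly the spoofing problem.
print(is_trusted("corp-nis", "10.1.5.17"))    # True
# A peer that fails to replicate the credentials is rejected.
print(is_trusted("evil-nis", "10.1.5.17"))    # False
```

The weakness is that the credentials are merely *stated*, not *proven*; stronger schemes tie trust to something an imposter cannot replicate by observation, such as knowledge of a private cryptographic key.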