6.2. What Are the Network Protocols?

A protocol is a defined procedure for interconnecting and interacting. Within a protocol, defined behaviors allow the participants to move toward an anticipated condition or result. The protocols that determine how data are transported over the Internet, or over a LAN that uses TCP/IP, provide a variety of services. Some are directly visible to the end user; others operate in the wings or serve as aids in troubleshooting. Some protocols move web pages, some move email, some move files, and some move streaming media. Others exist to provide watchdog functions over the system or to allow various network components to communicate. Many of the most important network protocols, which also happen to be the most commonly attacked, are the protocols needed to make communication over a network possible.

6.2.1. Data Navigation Protocols

The most fundamental network protocol is arguably the Internet Protocol (IP), which describes how packets navigate from network to network. It does this by defining the structure of IP addresses. IP also provides a fragmentation and reassembly function: if a message, or datagram, is too long, it can be split into smaller chunks for transmission through the network and then put back together when it reaches its final destination.
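To make the packet structure concrete, here is a rough Python sketch that unpacks the fixed 20-byte IPv4 header; the identification, flags, and fragment-offset fields are what allow a split datagram to be reassembled at its destination. The sample addresses are invented for the example.

    import socket
    import struct

    def parse_ipv4_header(raw: bytes) -> dict:
        # The fixed 20-byte IPv4 header: version/IHL, TOS, total length,
        # identification, flags/fragment offset, TTL, protocol, checksum,
        # source address, destination address.
        (ver_ihl, tos, total_len, ident, flags_frag,
         ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
        return {
            "version": ver_ihl >> 4,
            "header_bytes": (ver_ihl & 0x0F) * 4,
            "total_length": total_len,
            "identification": ident,                       # groups fragments of one datagram
            "more_fragments": bool(flags_frag & 0x2000),   # MF flag
            "fragment_offset": (flags_frag & 0x1FFF) * 8,  # position of this chunk, in bytes
            "ttl": ttl,
            "source": socket.inet_ntoa(src),
            "destination": socket.inet_ntoa(dst),
        }

    # Build a sample header and parse it back out.
    sample = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1234, 0x2000, 64, 6, 0,
                         socket.inet_aton("10.0.0.1"), socket.inet_aton("192.168.32.10"))
    print(parse_ipv4_header(sample))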

What IP does not do is keep track of whether messages actually make it to where they are going, or pace the transmission of messages so that a link does not overfill. IP treats each piece of a message, or Internet datagram, as an independent entity unrelated to any other Internet datagram. IP must link up with several other protocols to ensure reliable end-to-end delivery and retransmission of missing messages.

For instance, the Transmission Control Protocol (TCP) rides on top of IP and provides the information needed to see a message through multiple hops to its destination and to determine whether all of its packets made the trip. TCP can figure out which packets were lost and order up replacements. The User Datagram Protocol (UDP), on the other hand, is a stripped-down alternative, moving packets with speed but sacrificing end-to-end reliability.
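The difference shows up directly in the socket programming interface. A minimal Python sketch (the host names and ports are placeholders): the TCP connection performs its handshake and retransmits lost data behind the scenes, while the UDP datagram is simply launched and forgotten.

    import socket

    # TCP: connection-oriented. connect() performs the three-way handshake,
    # and the stack retransmits lost segments until the data is acknowledged.
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.connect(("example.com", 80))
    tcp.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    print(tcp.recv(1024))
    tcp.close()

    # UDP: connectionless. sendto() just launches the datagram; if it is lost,
    # nothing in the protocol notices or retransmits.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.sendto(b"hello", ("192.0.2.1", 9999))   # 192.0.2.1 is a documentation address
    udp.close()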

The File Transfer Protocol (FTP) mentioned previously operates using TCP. All the data travels reliably over the network, and the transmission is not finished until the packets have all made the trip and been reassembled in order at the destination. FTP's UDP-based cousin, the Trivial File Transfer Protocol (TFTP), on the other hand, transmits packets the way a firehose delivers water: it streams, but by itself it has no way to tell whether the water is hitting the target.

TCP can detect errors because each segment carries a checksum, a mechanism similar to a parity check or a cyclic redundancy check (CRC). A checksum is a mathematical mechanism that detects errors in transmission, usually by adding up the numeric values of all the characters transmitted and seeing if the total is the same at both ends of the link.
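A short Python sketch of the idea, along the lines of the ones'-complement checksum carried in IP, TCP, and UDP headers: both ends compute the value over the data, and a mismatch signals corruption in transit.

    def internet_checksum(data: bytes) -> int:
        """Ones'-complement sum of 16-bit words, as used in IP/TCP/UDP headers."""
        if len(data) % 2:
            data += b"\x00"                              # pad odd-length input
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]        # add the next 16-bit word
            total = (total & 0xFFFF) + (total >> 16)     # fold any carry back in
        return ~total & 0xFFFF                           # ones' complement of the sum

    print(hex(internet_checksum(b"hello, network")))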

If IP needs to report errors to the sender, it relies on a companion protocol, the Internet Control Message Protocol (ICMP). IP also includes a facility for limiting the transmission of misdirected messages, a self-destruct mechanism called time-to-live (TTL). Every time a packet passes through a network node that retransmits it, the TTL counter gets decremented, that is, reduced by one. If the TTL reaches zero before the Internet datagram reaches its destination, the Internet datagram is considered lost and is destroyed.
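A toy Python model of that per-hop rule (the hop names and starting TTL are invented):

    def forward(packet: dict, hops: list) -> str:
        """Toy model of per-hop TTL handling; not a real router."""
        for hop in hops:
            packet["ttl"] -= 1                  # every forwarding node decrements TTL
            if packet["ttl"] <= 0:
                # A real router would also send an ICMP Time Exceeded message
                # back to the sender at this point.
                return f"dropped at {hop}: TTL expired"
        return "delivered"

    print(forward({"ttl": 3}, ["router-a", "router-b", "router-c", "router-d"]))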

6.2.2. Data Navigation Protocol Attacks

These four protocols (IP, TCP, UDP, and ICMP) are the basis for Internet communications. They are also the basis of many attacks that use the Internet or of attacks against the Internet itself.

For instance, setting the TTL counter to a high or infinite value lets bad packets carom around the network indefinitely. This takes up bandwidth and clogs the pipes so that other traffic can't get through. False ICMP messages can put the network on alert against threats that aren't there, slowing or stopping communications or causing unneeded reroutes. This can be particularly vexing when heavy traffic gets shipped down a narrow street: network administrators will have a hard time limiting the flow without cutting off the desirable traffic.

TCP assures reliability by adding sequence counters and acknowledgments to IP. Also, TCP transmissions proceed only after the communicating systems give each other a handshake, that is, after both ends go through a short three-part exchange to confirm which system they are talking with, similar to exchanging business cards before sitting down to make a deal. A favorite hacker trick is to open up a session (begin a communication) with a system under attack, receive an acknowledgment, and then leave the connection half-completed, tying up resources and memory on the attacked device. Do this enough times, and unprotected systems will buckle under the load, similar to meeting too many interesting people at one time at a party. Affected systems can hang or cease functioning, denying services to legitimate users, or they can crash, possibly allowing attackers to modify the operating software with illicit changes that create secret entrances into the device.
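A toy Python sketch of why half-completed handshakes hurt (the addresses and state names are invented, and a real TCP stack manages this state inside the kernel): every handshake that never finishes leaves an entry behind, consuming memory until it times out.

    # Each half-finished handshake leaves server-side state behind.
    half_open = {}                                 # (client_ip, client_port) -> state

    def on_syn(client):
        half_open[client] = "SYN_RECEIVED"         # server sent its reply, now waiting

    def on_final_ack(client):
        if half_open.pop(client, None):            # handshake completed; state released
            print(f"{client} established")

    # A flood of opening packets that are never followed by the final acknowledgment:
    for port in range(5000):
        on_syn(("203.0.113.9", port))              # documentation address

    print(len(half_open), "half-open entries consuming memory")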

Why is IP such a pushover? Because it's not being used for what it was built to do. The military wanted a network protocol that would survive a worst-case scenario, something along the lines of global thermonuclear war. The network needed to pass traffic to every location smoothly and efficiently, and to be able to reconfigure itself around bad routes and sudden outages. Were the balloon to go up, and everyone except the people in deep bunkers had to spend two weeks hiding in basements and under doors in slit trenches while waiting for the fallout to decay, the network was supposed to reconfigure itself and be ready to go once humanity reemerged, serving every place that was still a place.

Instead, the Internet became an "information superhighway" that led to economic growth, prosperity, and jobs. It became a tool of enhanced communications, helping to bring the entire human family closer together. True enough, there are robbers in the bushes around that highway, and attacks for money, both of the travelers and of the destinations, are increasingly common. There is also increasing concern about pedophiles who use the Internet to form associations with unsupervised innocents. These are unintended consequences against which the Internet was never fortified.

6.2.3. Other Internet Protocols

FTP, SMTP, and many of the behind-the-scenes protocols use datagrams to communicate, and these protocols can be subjected to attack. The easiest way to attack these datagrams is to monitor the network using a packet sniffer. A packet sniffer surreptitiously monitors and decodes packets, allowing the attacker to gather information about the network and the devices and persons attached to it. A more sophisticated attack would be to change the contents of a datagram (data modification) or to make it appear as if it came from a different party (spoofing). Packet sniffers are, however, very useful tools for network administrators because they let you see what protocols are actually running on the network.

6.2.3.1. File Transfer Protocol

The File Transfer Protocol was designed to promote sharing files, such as computer programs or data, by connecting machines reliably and efficiently, without getting tangled up in whether or not the host machine was the same brand or used the same operating system as the client. As a result, remote access of computers became more commonplace. In fact, even though simple FTP terminal programs are available, web browsers can often perform such transfers simply and transparently. This appears to be in line with the original intent of the standard.

However, the FTP protocol is subject to abuse. In the first place, it transmits in the clear, without encryption shielding. Just sit and listen to a network connection, and in time, files by the boatload will come streaming by for the copying. FTP also readily permits anonymous access. This is highly desirable in many environments, where regulating access would require issuing passwords to every applicant, creating a tremendous administrative burden. It also means, however, that while you are in the process of gathering usage information from your FTP server, you will not learn anything more about me than I choose to reveal. Like an honor-system coffee club, FTP is vulnerable to those lacking honor.
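A brief Python sketch of an anonymous session using the standard ftplib module (the host and filename are placeholders): the login, the commands, and the file contents all cross the network unencrypted.

    from ftplib import FTP

    ftp = FTP("ftp.example.com")       # placeholder host
    ftp.login()                        # anonymous login: user "anonymous", blank password
    print(ftp.nlst())                  # list files in the default directory
    with open("readme.txt", "wb") as out:
        ftp.retrbinary("RETR readme.txt", out.write)   # file travels in the clear
    ftp.quit()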

6.2.3.2. Simple Mail Transfer Protocol

SMTP is designed to transfer email messages reliably and efficiently, again without regard to the particular computers or operating systems encountered along the way. It does this by setting up a channel between the initial sender and a receiver, which can be either the ultimate destination or some waypoint. Once the transmission channel is established, the mail sender issues a MAIL command, which identifies the sender and states that there is traffic to send. If the mail receiver can accept mail, it responds with an OK reply. The mail sender then sends a RCPT command identifying the mail recipient. If the mail receiver can accept mail for that recipient, it responds with an OK reply. If not, it responds with a reply rejecting that recipient (but not the whole mail transaction).

The mail sender and mail receiver may negotiate with several recipients. When the recipients have been negotiated, the sender sends the mail data. If the SMTP receiver successfully processes the mail data, it responds with an OK reply.
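That exchange can be observed directly by speaking the protocol over a raw socket. A minimal Python sketch follows (the relay name and addresses are placeholders; production code would normally use the smtplib module instead):

    import socket

    def send_line(sock, line):
        sock.sendall(line.encode() + b"\r\n")
        return sock.recv(1024).decode()                    # server answers with a numeric code

    s = socket.create_connection(("mail.example.com", 25)) # placeholder relay
    print(s.recv(1024).decode())                           # 220 greeting
    print(send_line(s, "HELO client.example.org"))
    print(send_line(s, "MAIL FROM:<alice@example.org>"))   # identifies the sender
    print(send_line(s, "RCPT TO:<bob@example.com>"))       # one reply per recipient
    print(send_line(s, "DATA"))                            # 354: go ahead, send the message
    print(send_line(s, "Subject: Test\r\n\r\nHello.\r\n.")) # body ends with a lone "."
    print(send_line(s, "QUIT"))
    s.close()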

If the mail is sent to an intermediate stop, or waypoint, the process is repeated. If the mail receiver is the intended destination, the message is forwarded to a mailbox for storage until the recipient calls for it with her mail client.

Mail that can't be delivered because of an incorrect or invalid address is returned with a note from whichever mail server determined the problem, stating that delivery was impossible.

The SMTP system works so well that email has become an important means of doing business. This same reliability, however, is its undoing. Email is normally transmitted in the clear, which means that a host that pretends to be an email relay can access all email that passes through it; mail could then be copied or modified. When an attacker suspects that a user or administrator is getting suspicious, it is relatively easy to disconnect the relay and lie low. The flow of message receipts and returns may be delayed but will likely not be disrupted, because of the self-healing nature of the robust SMTP protocol.

Further, it is very easy to create an email message that looks as if it was sent from someone other than the true sender. This can create problems in its own right (for example, a university student notifies everyone in a class that a certain test has been cancelled, and the message appears to emanate from the professor's computer). This also makes it easy to formulate an attack that sends tens of thousands of emails out to various addresses on the Internet, valid or not, using the spoofed return address of someone you wish to annoy or attack. As the emails bounce off the bad recipient addresses, your target will get a flood of annoying messages saying that the address is no longer valid. A few of the addresses will be valid, so your victim may get a couple of irate responses from legitimate but uninterested recipients as well.
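The From header really is just text supplied by the sending software; nothing in basic SMTP verifies it. A small Python illustration, with made-up addresses:

    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "professor@university.example"   # claimed sender; never verified by SMTP itself
    msg["To"] = "class-list@university.example"
    msg["Subject"] = "Exam cancelled"
    msg.set_content("See you next week.")
    print(msg)                                     # the headers say whatever the sender wrote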

6.2.3.2.1. SMTP and spam

The ability to spoof a return address and easily mail the same message to multiple recipients has led to the uncontrolled outbreak of junk email, or spam. Spam, by some accounts, represents up to 50% of email traffic and is popular for one reason: email is dirt cheap. Junk physical mail is annoying, but it has a certain cost of delivery, and each piece must be handled and addressed, even if the name used is "occupant." This inherent cost limits the total amount of junk mail sent out because someone has to pay the bill. Email, on the other hand, has few costs: scraping up a few million email addresses off newsgroups and chain letters is not really that hard, especially if software is used to scan newsgroups and other places where addresses are densely displayed. And launching such messages is largely automatic: just turn on the spambot machine and forget it.

Some email recipients resent the intrusion and send responses asking that they no longer be disturbed. This proves the user's email address is valid and increases its value when compiled into lists sold to other spammers. Other addresses are nonexistent or already abandoned, and email systems will send back notices to the sender to this effect. As the bad addresses bounce back and fill up the sender's inbox, the spammer has no worries: it probably wasn't a valid inbox to begin with. Most spammers are careful to avoid using their real Internet addresses.

Tracking spammers down requires a lot of detective work; for example, you have to request that ISPs and the owners of various intermediate systems check their logs. This makes detection difficult. Because most spam today leads respondents to a web page anyway, dead-letter information is not really important to the sender. In fact, as stated, the most malicious spammers mimic the return address of someone they wish to annoy, letting that person deal with the crush of dead-letter notifications, which often arrive in such volume that the unlucky victim's email account is shut down in order to protect the server.

The cost-per-thousand of spam is so low that any spammer with technical savvy and a little software can send millions of junk emails a day from a very humble facility. The potential returns make it worthwhile to grub about for addresses.

In an experiment I completed recently, a valid but unadvertised email account was left active but unopened for one year's time. At the end of that period, several hundred megabytes of email had accumulated: over 17,000 pieces in all. All of it spam.


Recent legislation has made it illegal to send spam. However, just because a law is in effect in one country does not mean it applies in others. Many spammers operate remotely, from countries that don't get too negative about earning revenue from a nonpolluting, high-tech industry such as bulk email. Other spammers use hacking techniques to turn ordinary computers, perhaps yours, into machines that will either generate or relay their spam for them. The best way to cope is likely the same way you cope with unsolicited physical mail: use the antispam features of your email client software to filter undesired email into the recycle bin before you even see it.

6.2.3.3. Domain Name Service

Of the many other protocols inherent in the Internet, most are subject to attack or subversion. The Domain Name Service (DNS) has its own vulnerabilities.

DNS is used to resolve a friendly name, such as www.oreilly.com, to an IP address, such as 192.168.32.10. DNS is needed because while the Internet runs on IP addresses, people tend to think in words. The DNS service keeps a distributed directory handy, which allows you, the user, to type a uniform resource locator (URL) or Internet address, something that in most cases is fairly straightforward and easy to remember, into the address bar of your web browser; the computer will sweat the numbers.

DNS is not usually the first step in address resolution. To save time and prevent wasted bandwidth, a table of names and their addresses is usually stored, or cached, on the local machine. Your computer starts with this table when you make a web request, looking to see if it already has the IP address of the site you desire. When your local machine cannot find where to send a web request, it contacts the nearest DNS server, which tells the computer everything it knows about the desired IP address. If the address is unknown at that DNS server, it consults the next DNS server up the chain, until your address is found or you hit the top, come up empty, and are sent an error message.
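From a program's point of view, all of this hides behind a single resolver call. A short Python sketch (the results will vary; the hostname is simply the example used above):

    import socket

    # The resolver consults the local cache and the configured DNS servers
    # on the program's behalf.
    print(socket.gethostbyname("www.oreilly.com"))          # one IPv4 address

    for info in socket.getaddrinfo("www.oreilly.com", 80, proto=socket.IPPROTO_TCP):
        family, _, _, _, sockaddr = info
        print(family.name, sockaddr)                        # every address the name resolves to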

This suggests three very convenient DNS attacks. First, if you seed the local machine's cache with incorrect data, it sends the user's communications to the wrong place, possibly including a decoy site of the attacker's own design. Second, if you pollute the database of one of the nation's big DNS servers, you may shut down a major portion of the Internet, which is always good for achieving status in the cracker underworld. Fortunately, the distributed nature of the DNS system makes this a little far-fetched, because backup systems will likely kick in. Third, if you deny access to the DNS server that provides address resolution to a population of users, say the LAN that serves your company, users of that LAN will not be able to contact web sites for which they do not already have IP addresses. Attackers do this in at least two ways: they take out the server with some kind of attack, or they change the place your desktop computer looks for DNS resolution. It may be easier to force a DNS error by modifying the locally cached DNS settings than it would be to take down the server itself.

Poisoning the DNS system doesn't just slow down or prevent access to web pages and services. Mail may not work, remote filesystems may be rendered inaccessible, and network printing may go down. Essentially everything that involves an external communication is at risk when DNS fails.

6.2.3.4. Dynamic Host Configuration Protocol

To access the Internet, users need an IP address. But there are situations in which there may not be enough IP addresses to serve all users. (This used to happen frequently when networks expanded beyond their original projected size, and administrators discovered that they had not reserved enough address numbers.)

To overcome this shortage, a system to share IP addresses was created, called the Dynamic Host Configuration Protocol (DHCP). DHCP provides an IP address to the users who are actually logged on at the moment, drawing the addresses from a pool of all available IP addresses. The pool is usually much smaller than the number of users, but that is okay, because all the users are rarely in the office or using the Internet at once. Shortly after a user logs off, the IP address can be reassigned to another user who is just logging on. Oversubscribing, that is, operating with fewer addresses than computers, is a tactic also used by phone companies, which serve everybody from a shared bank of common equipment.
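A toy Python sketch of the pool idea (the network block and MAC address are invented; real DHCP also handles lease durations, renewals, and much more):

    import ipaddress

    pool = list(ipaddress.ip_network("192.168.1.0/29").hosts())   # six usable addresses
    leases = {}                                                   # client MAC -> leased address

    def request(mac):
        if mac in leases:
            return leases[mac]                 # client already holds a lease; renew it
        if not pool:
            raise RuntimeError("address pool exhausted")
        leases[mac] = pool.pop(0)              # hand out the next free address
        return leases[mac]

    def release(mac):
        pool.append(leases.pop(mac))           # returned address becomes available again

    print(request("aa:bb:cc:dd:ee:01"))        # 192.168.1.1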

6.2.3.5. Network Address Translation

Network Address Translation (NAT) has also become a popular way to share addresses among many users. With NAT, the IP addressing system inside the network is known only to those in the network. Outsiders see only a small number of external addresses, which are rotated among all the users who may need one. A table in the NAT server keeps track of which internal addresses map to which external addresses. The number of internal addresses can be almost unlimited (after all, they stay inside the closed network). Because these addresses never appear to the outside world, NAT network administrators usually adopt the private address space allocations set aside in the IP addressing system.

RFC 1918 Address Allocation for Private Internets

The Internet Assigned Numbers Authority (IANA), the organization that coordinates the assignment of IP addresses, reserved the following three blocks of the IP address space for private internets.

  • Class A 10.0.0.0 to 10.255.255.255 (10/8 prefix)

  • Class B 172.16.0.0 to 172.31.255.255 (172.16/12 prefix)

  • Class C 192.168.0.0 to 192.168.255.255 (192.168/16 prefix)

Home networks and Wi-Fi wireless systems likely use addresses that fall into this assignment.

In some Microsoft networks, the Automatic Private IP Addressing (APIPA) scheme may be used when the DHCP system is nonfunctional. When the DHCP server is again able to service requests, clients update their addresses automatically.
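A quick Python check of whether an address falls into the private ranges listed above, or into the APIPA link-local block (169.254.0.0/16), using the standard ipaddress module:

    import ipaddress

    for addr in ["10.1.2.3", "172.20.0.5", "192.168.32.10", "169.254.7.7", "8.8.8.8"]:
        ip = ipaddress.ip_address(addr)
        label = "private" if ip.is_private else "public"
        if ip.is_link_local:
            label += " (APIPA/link-local)"
        print(addr, label)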


Before NAT, DHCP was used to ration IP addresses. With NAT, DHCP is used to tell a PC which IP address it has been issued. This is in line with DHCP's root, the Bootstrap Protocol (BOOTP). BOOTP and DHCP tell a recently awakened PC many things it needs to know about its configuration, such as what address it can use and where it can locate various network resources. This is needed in case these things have changed while the computer was turned off, and it is also how a computer that is new to the network receives its configuration information.

Attacks against DHCP usually involve interrupting these processes. For instance, one item that is frequently shared by DHCP is the location of DNS servers. If the DHCP server is compromised and starts issuing wrong information, the ability of the computers on the network to access the Internet will be severely limited.

This is not the only DHCP attack, however. Another popular attack is to change the pool assignments so that DHCP starts to issue IP addresses that are either invalid or already in use elsewhere. When this occurs, the routers and switches learn these new addresses and share them, and soon much of the traffic on the network can be going to the wrong place. Further, it may not be long before duplicate IP addresses begin to appear on the network, and many pieces of network equipment will blacklist devices that are using illicit, duplicate addresses. Finally, the routers and switches themselves will begin to labor under the strain of having to update so much information, and soon the network will be severely degraded. And this is not all: the bad address data must be purged, and good data must repopulate all the cache tables that need it. This takes time, burdens the network equipment, and consumes bandwidth as well.

6.2.3.6. Port Address Translation

It is possible to overload a single external NAT address so that it can be used by several internal users. Port numbers are used in addition to network addresses to keep all the ongoing exchanges organized; the resulting system is called Port Address Translation (PAT). (Ports were compared previously to the various services, such as gas and water, that enter a house separately even though they arrive at the same physical address.) In a sense, PAT fills part of the role of DHCP because it shares a small number of public IP addresses, often just one, among a larger number of users. Unlike DHCP, which may open some security holes, NAT and PAT can actually increase security because they obscure the true addresses used inside the network.
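A toy Python sketch of the translation table behind PAT (the public address and port range are invented): many internal address and port pairs share one public address by each being mapped to a distinct public port.

    PUBLIC_IP = "203.0.113.10"           # documentation address standing in for the real one
    next_port = 40000
    nat_table = {}                       # (internal_ip, internal_port) -> public port

    def translate_outbound(internal_ip, internal_port):
        global next_port
        key = (internal_ip, internal_port)
        if key not in nat_table:
            nat_table[key] = next_port   # allocate a fresh public port for this flow
            next_port += 1
        return PUBLIC_IP, nat_table[key]

    print(translate_outbound("192.168.1.5", 52311))   # ('203.0.113.10', 40000)
    print(translate_outbound("192.168.1.6", 52311))   # ('203.0.113.10', 40001)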



