Chapter 24


What does it mean when a system sends an initial SYN segment with a window scale factor of 0?


It means the sending TCP supports the window scale option, but doesn't need to scale its window for this connection. The other end (that receives this SYN) can then specify a window scale factor (that can be 0 or nonzero).


If the host bsdi in Figure 24.7 supported the window scale option, what is the expected value of the 16-bit window size field in the TCP header from vangogh in segment 3? Similarly, if the option were in use for the second connection in that figure, what would be the advertised window in segment 13?


64000: the receive buffer size (128000) right shifted 1 bit. 55000: the receive buffer size (220000) right shifted 2 bits.
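
The shift arithmetic can be checked directly. A minimal sketch in Python, using the buffer sizes from Figure 24.7 and the shift counts given above:

```python
# Advertised window = receive buffer size right-shifted by the window
# scale factor (RFC 1323).  Buffer sizes are from Figure 24.7.
def advertised_window(rcv_buf, shift):
    return rcv_buf >> shift

print(advertised_window(128000, 1))   # 64000
print(advertised_window(220000, 2))   # 55000
```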


Instead of fixing the window scale factor when the connection is established, could the window scale option have been defined to also appear when the scaling factor changes?


No. The problem is that acknowledgments are not reliably delivered (unless they're piggybacked with data) so a scale change appearing on an ACK could get lost.


At what data rate does sequence number wrap become a problem, assuming an MSL of 2 minutes?


(2^32 × 8) / 120 = 286 Mbits/sec, 2.86 times the FDDI data rate.
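
The calculation, spelled out: the 32-bit sequence space covers 2^32 bytes, which must not wrap within one MSL.

```python
# Sequence-number wrap rate: the 2**32-byte sequence space must not
# wrap within one MSL, taken here as 2 minutes (120 seconds).
MSL = 120                 # seconds
bits = 2**32 * 8          # sequence space expressed in bits
rate = bits / MSL         # bits/sec at which the space wraps in one MSL
print(round(rate / 1e6))  # 286 (Mbits/sec)
```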


PAWS is defined to operate within a single connection only. What modifications would have to be made to TCP to use PAWS as a replacement for the 2MSL wait (the TIME_WAIT state)?


Each TCP would have to remember the last timestamp received on any connection from each host. Read Appendix B.2 of RFC 1323 for additional details.
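
The per-host cache could be sketched as follows. This is a hypothetical illustration, not the mechanism from RFC 1323 Appendix B.2 in full: real timestamps wrap, so the comparison would need serial-number arithmetic.

```python
# Hypothetical per-host timestamp cache for using PAWS in place of the
# 2MSL wait: remember the most recent timestamp seen from each peer,
# and discard any arriving segment carrying an older timestamp (it must
# be from an earlier incarnation of a connection).
last_tsval = {}   # peer IP address -> most recent timestamp value seen

def accept_segment(peer_ip, tsval):
    prev = last_tsval.get(peer_ip)
    if prev is not None and tsval < prev:
        return False               # old duplicate; reject
    last_tsval[peer_ip] = tsval
    return True
```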


In our example at the end of Section 24.4, why did our sock program output the size of the receive buffer before the line that followed (with the IP addresses and port numbers)?


The application must set the size of the receive buffer before establishing the connection with the other end, since the window scale option is sent in the initial SYN segment.
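
The ordering matters because the scale factor advertised in the SYN is computed from the receive buffer size at the time the SYN is sent. A minimal sketch using the standard sockets API (socket creation only; the connect call is shown but not performed):

```python
import socket

# The receive buffer must be sized *before* connect(), because the
# window scale factor carried in the initial SYN is derived from it.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 220000)
size = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(size)   # the kernel may round, double, or cap the requested value
# s.connect(("vangogh", 5001))  # the SYN would now carry a scale option
s.close()
```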


Redo the calculations of the throughput in Section 24.8 assuming an MSS of 1024.


If the receiver ACKs every second data segment, the throughput is 1,118,881 bytes/sec. Using a window of 62 segments, with an ACK for every 31 segments, the value is 1,158,675 bytes/sec.


How does the timestamp option affect Karn's algorithm (Section 21.3)?


With this option the timestamp echoed in the ACK is always from the segment that caused the ACK. There is no ambiguity about which retransmitted segment the ACK is for, but the other part of Karn's algorithm, dealing with the exponential backoff on retransmission, is still required.
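
The disambiguation can be illustrated with a small sketch (the function and variable names are illustrative, not from any TCP implementation): because the ACK echoes the timestamp of the segment that caused it, an RTT sample can be taken even for retransmitted data.

```python
# With the timestamp option, the ACK echoes the TSval of the segment
# that elicited it, so the retransmission ambiguity behind Karn's
# algorithm disappears: each RTT sample pairs an ACK with exactly one
# transmission.
send_times = {}   # TSval -> time the segment carrying it was sent

def on_send(tsval, now):
    send_times[tsval] = now

def on_ack(ts_echo, now):
    return now - send_times[ts_echo]   # unambiguous RTT sample

on_send(tsval=1, now=0.0)            # original transmission
on_send(tsval=5, now=1.0)            # retransmission of the same data
print(on_ack(ts_echo=5, now=1.5))    # 0.5: the ACK was for the retransmission
```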


If TCP sends data with the SYN segment that's generated by an active open (without using the extensions we described in Section 24.7), what does the receiving TCP do with the data?


The receiving TCP queues the data, but it cannot be passed to the application until the three-way handshake is complete: when the receiving TCP moves into the ESTABLISHED state.


In Section 24.7 we said that without the T/TCP extensions, even if the active open is sent with data and a FIN, the client delay in receiving the server's response is still twice the RTT plus SPT. Show the segments to account for this.


Five segments are exchanged:

  1. Client to server: SYN, data (request), and FIN. The server must queue the data as described in the previous exercise.

  2. Server to client: SYN and ACK of client's SYN.

  3. Client to server: ACK of server's SYN and client FIN (again). This causes the server to move to the ESTABLISHED state, and the queued data from segment 1 is passed to the server application.

  4. Server to client: ACK of client FIN (which also acknowledges client data), data (server's reply), and server's FIN. This assumes that the SPT is short enough to allow this delayed ACK. When the client TCP receives this segment, the reply is passed to the client application, but the total time has been twice the RTT plus the SPT.

  5. Client to server: ACK of server's FIN.


Redo Exercise 18.14 assuming T/TCP support and the minimum RTO supported by Berkeley-derived systems of one-half second.


16,128 transactions per second (64,512 divided by 4).
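
The arithmetic behind the division by 4, assuming (as Exercise 18.14 did) 64,512 available ephemeral ports, with T/TCP truncating the TIME_WAIT state to 8 times the RTO:

```python
# With T/TCP the TIME_WAIT state is truncated to 8 x RTO; the minimum
# Berkeley-derived RTO of 0.5 sec gives a 4-second wait.  Each port is
# then reusable every 4 seconds.
ports = 64512                   # usable ephemeral ports (Exercise 18.14)
time_wait = 8 * 0.5             # truncated TIME_WAIT, in seconds
print(int(ports / time_wait))   # 16128 transactions per second
```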


If we implement T/TCP and measure the transaction time between two hosts, what can we compare it to, to determine its efficiency?


The transaction time using T/TCP cannot be faster than the time required to exchange a UDP datagram between the two hosts. T/TCP should always take longer, since it still involves state processing that UDP doesn't do.

TCP/IP Illustrated, Vol. 1: The Protocols (Addison-Wesley Professional Computing Series)
ISBN: 0201633469
Year: 1993
Pages: 378