JMS Wireless Transport Protocol

JMS involves the use of a ConnectionFactory administered object to create a Connection object. The Connection object is an active connection to a JMS provider and is, in turn, used to create the Session and, subsequently, the MessageProducer and MessageConsumer objects.
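The creation chain described above can be sketched as follows. This is an illustrative, self-contained sketch: the tiny classes below only mirror the *shape* of the javax.jms API (ConnectionFactory → Connection → Session → MessageProducer) and are hypothetical stand-ins, not a real JMS provider.

```java
import java.util.ArrayList;
import java.util.List;

class MessageProducer {
    private final List<String> sent = new ArrayList<>();
    void send(String message) { sent.add(message); }     // hand a message to the provider
    List<String> sentMessages() { return sent; }
}

class Session {
    MessageProducer createProducer(String destination) { // destination name ignored in this sketch
        return new MessageProducer();
    }
}

class Connection {
    // In a real provider, constructing a Connection opens the (heavyweight)
    // TCP socket and performs authentication -- hence the spec's description
    // of a Connection as a relatively heavyweight JMS object.
    Session createSession() { return new Session(); }
    void close() { /* a real provider would tear down the socket here */ }
}

class ConnectionFactory {
    Connection createConnection() { return new Connection(); }
}

public class JmsChainSketch {
    public static void main(String[] args) {
        ConnectionFactory factory = new ConnectionFactory(); // normally looked up via JNDI
        Connection connection = factory.createConnection();  // expensive: socket + auth
        Session session = connection.createSession();
        MessageProducer producer = session.createProducer("exampleQueue");
        producer.send("hello");
        System.out.println(producer.sentMessages().size());
        connection.close();
    }
}
```

Because the Connection is the expensive object in this chain, clients typically create it once and derive many Sessions and producers from it.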

The crucial point to be considered here is the JMS provider. The JMS specification describes one of the purposes of a Connection object as:

It encapsulates an open TCP/IP socket between a client and a Provider's service daemon. Due to the authentication and communication setup done when a Connection is created, a Connection is a relatively heavyweight JMS object. (JMS Specification, 2002)

Contemporary middleware products, whether message oriented or otherwise, have not been designed for mobile devices (Maffeis2, 2002). The inappropriateness of contemporary middleware design stems partly from the use of communication protocols that were designed for wired networks. The dominant wired network is the Internet, which has the Transmission Control Protocol (TCP) as its transport protocol. Regular TCP, which is TCP designed for wired networks, could be used in the JMS Connection object; however, this may lead to intolerable inefficiencies. Regular TCP, and its use in networks with wireless links (WLAN, GSM, UMTS), is considered below.

Regular TCP Features

TCP is a connection-oriented, end-to-end reliable protocol. It facilitates reliable interprocess communication between pairs of processes in host computers attached to distinct but interconnected computer communication networks. TCP uses the services of a less-reliable protocol, such as the Internet Protocol (IP), and provides its own services by means of a variety of facilities. These facilities can be categorized into basic data transfer, reliability, flow control, multiplexing, and connections. The TCP facilities, as described in RFC 793 (University of Southern California, 1981), are briefly discussed below.

Data Transfer

TCP has a facility that allows a continuous stream of octets to be sent in each direction between the users. This is done by packaging variable numbers of octets into segments (carried in IP packets) for transmission through the Internet.


Reliability

TCP recovers from underlying network-layer errors: it is able to recover from data that is damaged, lost, duplicated, or delivered out of order by the IP. Error-recovery mechanisms include sequence numbers in packets (to correctly order packets at the receiver) and required positive acknowledgments (ACKs) from the receiving TCP peer. A time-out and retransmission mechanism at the sender deals with ACK packets that do not arrive. A checksum in every segment allows damaged segments to be detected and discarded, with retransmission providing the recovery.

Flow Control

Flow control is the ability of the receiver to limit (and, hence, control) the amount of data sent by the sender. A sliding window mechanism is used: every ACK carries a window, a range of acceptable sequence numbers beyond the last segment successfully received. The window thus represents the set of packets that the sender may send without further permission.
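The window mechanics can be illustrated with a small sketch (not real TCP code; the class and field names are invented for illustration):

```java
// Illustrative sliding-window sketch: the sender may transmit only while
// the next sequence number lies inside [lastAcked + 1, lastAcked + windowSize].
public class SlidingWindowSketch {
    private long lastAcked = 0;   // highest sequence number acknowledged so far
    private long nextSeq = 1;     // next sequence number to send
    private final long windowSize;

    SlidingWindowSketch(long windowSize) { this.windowSize = windowSize; }

    boolean maySend() { return nextSeq <= lastAcked + windowSize; }

    long send() {                 // returns the sequence number just sent
        if (!maySend()) throw new IllegalStateException("window full");
        return nextSeq++;
    }

    void onAck(long ackedSeq) {   // a cumulative ACK slides the window forward
        if (ackedSeq > lastAcked) lastAcked = ackedSeq;
    }

    public static void main(String[] args) {
        SlidingWindowSketch sender = new SlidingWindowSketch(3);
        sender.send(); sender.send(); sender.send();  // seq 1..3 fill the window
        System.out.println(sender.maySend());          // window exhausted
        sender.onAck(2);                               // ACK for seq 1-2 opens the window
        System.out.println(sender.maySend());          // seq 4 and 5 may now be sent
    }
}
```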


Multiplexing

Multiplexing occurs when multiple processes within a single host use TCP communication facilities simultaneously. Each host has an IP address and a set of ports. A host IP address and a port form a socket, and two sockets (on different hosts) form a unique connection. A socket may be simultaneously used in multiple connections.


Connections

A connection encapsulates the status information maintained by the reliability and flow control mechanisms, which is initialized and maintained for each data stream. Status information maintained by a connection includes sockets, sequence numbers, and window sizes. Connection establishment between hosts involves a three-way handshake to avoid erroneous initialization of connections (which may occur due to the delayed-duplicate problem in unreliable packet-switched networks such as the Internet).

TCP Congestion Control

RFC2581 (TCP Congestion Control) (Allman, Paxson, & Stevens, 1999) defines the four congestion control algorithms: slow start, congestion avoidance, fast retransmit, and fast recovery. These algorithms are used interchangeably to govern segment transmission based on both the changing network conditions and the response received from the receiver. The algorithms are appropriate for connections that lose traffic primarily because of congestion and buffer exhaustion (Dawkins, Montenegro, Kojo, Magret, & Vaidya, 2001).

Comprehensive coverage of the TCP congestion control algorithms can be found in the RFC; however, they are also briefly discussed here.

  • Slow Start and Congestion Avoidance. The slow start and congestion avoidance algorithms are used by a TCP sender to control the amount of outstanding data being injected into the network. The slow-start algorithm is used when the TCP sender starts transmission into the network, as well as after a loss is detected via the retransmission timer. The network conditions are unknown when transmission is started, and TCP probes the network to determine the available capacity. Probing the network in this manner avoids congesting it with an inappropriately large burst of data.

    The slow-start and congestion avoidance algorithms are both used by the sender; however, they are used at different times. Their respective usage depends on the sender's congestion window (cwnd), the receiver's advertised window (rwnd), and the slow-start threshold (ssthresh).

    Cwnd is a sender-side limit on the amount of unacknowledged data the sender may transmit into the network. Rwnd is the receiver-side limit on the amount of outstanding data. Ssthresh determines which of the two algorithms is used: when cwnd < ssthresh, the slow-start algorithm is used; when cwnd > ssthresh, the congestion avoidance algorithm is used (when the two are equal, the sender may use either).
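Assuming segment-sized units for simplicity, the choice between the two algorithms might be sketched as follows (an illustrative model, not a real TCP implementation, which works in bytes and also reacts to loss):

```java
// Sketch of the slow-start / congestion-avoidance choice from RFC 2581,
// with window sizes expressed in segments rather than bytes.
public class CwndSketch {
    double cwnd = 1;        // congestion window (segments)
    double ssthresh = 8;    // slow-start threshold (segments)

    // Called once per ACK that acknowledges new data.
    void onAck() {
        if (cwnd < ssthresh) {
            cwnd += 1;              // slow start: one SMSS per ACK,
                                    // roughly doubling cwnd each round trip
        } else {
            cwnd += 1.0 / cwnd;     // congestion avoidance: about one SMSS per RTT
        }
    }

    public static void main(String[] args) {
        CwndSketch s = new CwndSketch();
        for (int i = 0; i < 7; i++) s.onAck();  // exponential growth: 1 -> 8
        System.out.println(s.cwnd);             // threshold reached
        s.onAck();                              // now additive increase
        System.out.println(s.cwnd);
    }
}
```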

Fast-Retransmit and Fast-Recovery Algorithms

The fast-retransmit algorithm is used to detect and repair loss based on incoming duplicate acknowledgment (ACK) packets. The algorithm retransmits what appears to be a missing segment (inferred from the arrival of duplicate ACKs) without waiting for the retransmission timer to expire. The fast-recovery algorithm then controls the transmission of new data, from the moment the fast-retransmit algorithm sends the apparently missing segment until a nonduplicate ACK arrives.
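The duplicate-ACK trigger can be sketched as follows (an illustrative model only; a real TCP sender also adjusts ssthresh and enters fast recovery at this point):

```java
// Sketch of the fast-retransmit trigger: three duplicate ACKs for the same
// sequence number are taken as a strong hint that the following segment was
// lost, and it is retransmitted before the retransmission timer expires.
public class FastRetransmitSketch {
    private long lastAck = -1;
    private int dupCount = 0;

    // Returns the sequence number to retransmit, or -1 if none.
    long onAck(long ack) {
        if (ack == lastAck) {
            dupCount++;
            if (dupCount == 3) return ack + 1;  // fast retransmit after 3 dup ACKs
        } else {
            lastAck = ack;                       // new data acknowledged
            dupCount = 0;
        }
        return -1;
    }

    public static void main(String[] args) {
        FastRetransmitSketch s = new FastRetransmitSketch();
        s.onAck(5);                        // segment 5 acknowledged
        s.onAck(5); s.onAck(5);            // two duplicates: keep waiting
        System.out.println(s.onAck(5));    // third duplicate: retransmit segment 6
    }
}
```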

Regular TCP in the Wireless Domain

The performance of regular TCP as described in RFC 793 (Transmission Control Protocol) and RFC 2581 (TCP Congestion Control) is known to be adversely affected by the presence of a wireless link between the sender and receiver. This problem has been, and continues to be, researched extensively.

Wireless links have properties that affect TCP performance. Most importantly, they do not provide the degree of reliability that hosts expect. The lack of reliability stems from the high uncorrected-error rates (bit-error rates) of wireless links (especially terrestrial and satellite links) when compared to wired links. Additionally, certain wireless links are subject to intermittent connectivity due to handoffs. Handoffs occur in cellular wireless networks such as GSM and involve calls being transferred between base transceiver stations in adjacent cells.

The properties of wireless links mentioned above have adverse effects on the TCP congestion control algorithms (Dawkins et al., 2001). The root of the problem is that congestion avoidance in the wired Internet is based on the assumption that most packet losses are due to congestion. This assumption is certainly correct for wired links and subnets, which have low uncorrected-error rates; however, as mentioned, wireless links do not enjoy low uncorrected-error rates.

The result of this incorrect assumption about the cause of losses is the poor TCP performance experienced by users. The reason for the observed poor performance is that TCP connections spend too much time in congestion-avoidance procedures (such as the slow-start algorithm). Essentially, the TCP sender spends excessive amounts of time waiting for acknowledgments that do not arrive.

The first reaction to packet losses is a drop in the transmission (congestion) window size before retransmitting packets. This is followed by the initiation of congestion control or avoidance mechanisms and backing off of the retransmission timer. The result of these measures is a reduction of the load on intermediate links that results in a reduction of congestion in the network.

The measures mentioned above result in an unnecessary reduction in end-to-end throughput and suboptimal performance. The root of the problem is the excessive amount of time spent avoiding congestion that is triggered by packet losses that result from transmission errors, not congestion. The sender assumes packet loss (say due to congestion-related buffer exhaustion) and thus substantially reduces traffic levels as it probes the network to determine "appropriate" traffic levels.

Recommendations for Improving TCP Performance in the Wireless Domain

Recommendations (Dawkins et al., 2001; Balakrishnan, Padmanabhan, Seshan, & Katz, 1997) for improving the performance of TCP in wireless and lossy networks can be split into three broad categories: end-to-end protocols in which the sender is aware of the wireless link, link-layer protocols that provide local reliability, and split-connection protocols that break the end-to-end connection at the base station.

The mechanisms for improving TCP performance over wireless links have been comprehensively compared, and methods of improving that performance presented (Balakrishnan et al., 1997). Two fundamentally different approaches to improving TCP performance in the wireless domain are identified.

The first approach involves hiding all noncongestion-related losses from the TCP sender, which consequently requires no changes to existing sender implementations. The reasoning behind such an approach is that the problem is a local one (local to the wireless link) and should thus be solved locally, so that the transport layer does not need to be aware of individual link characteristics. The lossy link is made to appear to be a higher-quality link with a reduced effective bandwidth.

The second approach involves attempting to make the sender aware of wireless links on the path to the receiver. The sender thus knows that certain losses are not due to congestion and can consequently avoid invoking congestion-control algorithms when noncongestion-related losses occur.

Link-Layer Solutions

Link-layer solutions attempt to hide link-related losses from the TCP sender by using local retransmissions and, possibly, forward error correction over the wireless link. Local retransmission can use techniques that respond to the specific characteristics of the wireless link, resulting in a significant performance increase. Experiments (Balakrishnan et al., 1997) show a 10% to 30% increase in performance when the TCP sender is shielded from duplicate acknowledgments.

End-to-End Solutions

End-to-end protocols involve the use of two techniques that attempt to make the sender handle losses. The first technique is to use selective acknowledgments (SACKs) to allow the sender to recover from multiple packet losses in a window without resorting to a time-out. The second technique is an explicit loss notification (ELN) mechanism. It was demonstrated (Balakrishnan et al., 1997) that SACKs and ELN result in significant performance improvements: a simple ELN scheme can improve end-to-end throughput by a factor of more than two compared to TCP Reno (the de facto TCP standard).

Split-Connection Solutions

Split-connection approaches completely hide the wireless link from the sender by terminating the TCP connection at the base station. This involves one reliable connection between the sender and the base-station node as well as a second reliable connection between the base-station node and the destination. The second connection can thus use techniques, such as negative or selective acknowledgments as more appropriate alternatives than regular TCP, to perform well (greater throughput) over the wireless link. This technique produced unfavorable results (Balakrishnan et al., 1997), because the sender was found to regularly stall due to time-outs on the wireless connection, resulting in poor end-to-end throughput.

Header Compression

Improving TCP performance in the wireless domain using header compression is a possibility regardless of the mechanism used to improve regular TCP performance. The bandwidth of the wireless link will always be limited due to the properties of the physical medium as well as regulatory limits on frequency bands. Limited bandwidth is evident in the circuit-switched GSM data channels that offer 9.6 kbit/s and in the GPRS extension to GSM that offers up to 170 kbit/s. Limited bandwidth makes it highly desirable to limit redundant data transmission (in the form of IP, UDP, and TCP headers) in order to utilize the available bandwidth efficiently. The advent of IPv6, which increases the 20-byte IPv4 header to 40 bytes, serves as further motivation for implementing header compression; in fact, the total header size increases to 84 bytes for a single TCP segment in Mobile IPv6.

Large headers of 50 bytes or more can be reduced to four to five bytes (Degermark, Engan, Nordgren, & Pink, 1997). Their efficient header-compression algorithm takes advantage of the observation that consecutive headers belonging to the same packet stream are largely identical, with the remaining fields seldom changing over the life of the stream. A rudimentary compression technique, which serves as an introduction to header compression, is described here; lossy links, such as wireless links, require more sophisticated measures. It should be noted that header compression has the potential to actually reduce throughput in networks with lossy links; however, Degermark et al. (1997) showed how to avoid such eventualities.

The first step in compressing the headers of a packet stream is for the compressor to send a packet with a full header, that is, a regular header with all fields intact. This initial packet establishes an association between the nonchanging fields of the header and a compression identifier (CID), a small unique number that is also carried by compressed headers. The decompressor stores the initially received full header as compression state. In the second step, the decompressor uses the CIDs it receives to look up the appropriate compression state for decompression.

Thereafter, a compressed header carrying little more than the CID is sent instead of a full header, conserving bandwidth. Compression cannot, however, be applied to every packet following the initial one: whenever one of the fields that is normally unchanged does change, another full header must be sent to refresh the compression state.
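A toy version of the CID scheme might look like the following sketch. For simplicity, the compressor and decompressor share one table here, whereas in reality they sit at opposite ends of the link and must keep their state synchronized; real schemes (Degermark et al., 1997) also encode the fields that do change:

```java
import java.util.HashMap;
import java.util.Map;

// Toy CID-based header compression: the first packet of a stream carries the
// full header and establishes a CID -> header mapping; later packets carry
// only the small CID, which is resolved back to the stored full header.
public class CidCompressionSketch {
    private final Map<Integer, String> compressionState = new HashMap<>();
    private int nextCid = 0;

    // Compressor side: the first packet sends the full header plus its new CID.
    int sendFullHeader(String fullHeader) {
        int cid = nextCid++;
        compressionState.put(cid, fullHeader);  // decompressor stores this state
        return cid;
    }

    // Decompressor side: subsequent packets carry only the CID.
    String decompress(int cid) {
        return compressionState.get(cid);       // look up the stored full header
    }

    public static void main(String[] args) {
        CidCompressionSketch c = new CidCompressionSketch();
        int cid = c.sendFullHeader("src=10.0.0.1 dst=10.0.0.2 sport=5000 dport=80");
        // All following packets in the stream need only the few-byte CID
        // instead of a 40+ byte header:
        System.out.println(c.decompress(cid));
    }
}
```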

Alternatives to Using TCP

The primary alternative to using TCP is to use UDP with a thin reliability layer that provides the guaranteed message delivery required by JMS. A thin reliability layer is required because UDP occasionally drops packets and may deliver them out of sequence. UDP datagrams incorporate checksums that allow corrupted data to be detected and discarded; hence, messages that are delivered can be reconstructed with a high degree of confidence in their integrity.

The Reliable Data Protocol (RDP), specified in RFC 908 and revised in RFC 1151, is designed to provide a reliable data transport service for packet-based applications (applications, such as JMS implementations, that send discrete chunks of data). RDP can be implemented over UDP and could certainly provide the reliable message delivery required by JMS. RDP is intended to be simple to implement and efficient in the wireless domain:

The protocol is intended to be simple to implement but still be efficient in environments where there may be long transmission delays and loss or non sequential delivery of message segments. (Velten, Hinden, & Sax, 1984)

RDP was developed because TCP has disadvantages when used in certain applications (remote loading and debugging, file transfer, e-mail, transaction processing). RFC 908 lists the general byte-stream transfer of TCP as a disadvantage, due to its complexity, in applications that do not require such a feature.

The argument in favor of using UDP with a thin reliability layer, such as that specified in RFC 908, instead of TCP (Bonachea & Hettena, 2000) is that UDP typically provides the lowest-overhead access to the network (it is lightweight) and is widely portable. Additionally, when considering wireless links with their limited bandwidth, UDP becomes attractive due to the lower header overhead associated with each packet: UDP datagrams carry an 8-byte header, whereas TCP implementations use 20- to 40-byte headers.
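The bookkeeping at the heart of such a thin reliability layer can be sketched as follows (pure logic, no sockets; names are invented for illustration, and RDP itself adds connection management, windowing, and optional in-order delivery on top of this basic idea):

```java
import java.util.Map;
import java.util.TreeMap;

// Sketch of a thin reliability layer over an unreliable datagram service:
// every outgoing message gets a sequence number, and unacknowledged messages
// stay queued until the peer confirms delivery.
public class ReliabilityLayerSketch {
    private long nextSeq = 0;
    private final Map<Long, byte[]> unacked = new TreeMap<>(); // retransmit queue

    long send(byte[] payload) {              // assign a sequence number and track it
        long seq = nextSeq++;
        unacked.put(seq, payload);
        return seq;
    }

    void onAck(long seq) { unacked.remove(seq); }  // delivery confirmed by the peer

    Iterable<byte[]> pendingRetransmits() {  // resent when a retransmit timer fires
        return unacked.values();
    }

    public static void main(String[] args) {
        ReliabilityLayerSketch layer = new ReliabilityLayerSketch();
        long a = layer.send("msg A".getBytes());
        layer.send("msg B".getBytes());
        layer.onAck(a);                      // A confirmed; B still outstanding
        int pending = 0;
        for (byte[] ignored : layer.pendingRetransmits()) pending++;
        System.out.println(pending);
    }
}
```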

TCP provides a generic protocol, but this generality comes at a price in performance (as mentioned in RFC 908). TCP is a complicated protocol, and software implementations incur significant overhead. Apart from the complexities of the protocol, concerns regarding the scalability of any distributed application are likely to arise, because each TCP connection consumes a significant amount of finite operating-system resources.

TCP may be a complex protocol with a significant amount of overhead, yet one can implement a lightweight TCP stack tailored to the parameters of a particular application or environment, such as wireless links. Apart from the recommendations for improving TCP performance over the wireless link mentioned previously, other techniques have been reported (Frey, 2002): protocol reduction, acknowledgment spoofing, and data compression can all be used to improve the efficiency of TCP over the wireless link. On the other hand, performance penalties are likely when using a stream protocol such as TCP for a message-oriented application.

Wireless JMS Provider TCP Solutions

Having examined the alternative transport mechanisms, let us now consider the transport mechanisms used by industrial wireless middleware solutions and the motivation behind the mechanisms used. Our focus is on industrial wireless middleware that implements MOM, such as JMS.


Softwired iBus//Mobile

Softwired is a wireless JMS solution provider that offers the iBus//Mobile product as its JMS implementation. iBus//Mobile is reported to include a UDP-based reliable messaging protocol that can deliver better performance than TCP:

iBus//Mobile provides various features particularly suited for GPRS (and other packet-based bearers). E.g., a UDP-based reliable messaging protocol that can deliver better performance than TCP or HTTP, various transmission parameters that can be tuned (retransmit delays, flow control, transmission windows, etc.), dynamic adaptation to changing bandwidth, reliable messaging in spite of holes in network coverage, data encryption, etc. (Softwired, 2002)

Softwired does not provide motivation as to why their UDP-based protocol improves on the performance of TCP. Dr. Silvano Maffeis, the inventor of iBus technology, was questioned on the protocol used, and his response was that the information is proprietary. However, he said that the protocol is based on a sliding-window mechanism incorporating both positive and negative acknowledgments. Dr. Maffeis also agreed with our statement that TCP is not optimal for wireless networks.


Broadbeam ExpressQ

The Broadbeam Corporation offers Axio as its mobile software platform. The Axio platform includes three components, of which the ExpressQ component is of interest here. ExpressQ is a wireless messaging server offering secure store-and-forward message queuing, real-time communication, and notifications. ExpressQ appears remarkably similar to JMS in that it provides a store-and-forward message queuing system with guaranteed message delivery. ExpressQ is reported to provide an optimized wireless transport protocol for reliable and efficient communication (Broadbeam1, 2002).

The following excerpt outlines the optimized wireless transport:

Optimized Wireless Transport and Compression. ExpressQ was built from the ground up to minimize over-the-air data. ExpressQ's transport layer uses smaller packet headers, uses fewer acknowledgments, produces fewer transmission failures, and retransmits less data over wireless networks than TCP, resulting in significant savings in communication costs. Like TCP/IP, the ExpressQ transport dynamically adapts to changing network conditions (e.g., congestion) to speed up or slow down the rate at which data is transmitted, resulting in fewer failures and errors. The ExpressQ transport automatically compresses and decompresses data to further reduced over-the-air data. ExpressQ's wireless optimized transport and compression result in significantly lower communication costs vs. using transports like TCP/IP. They also result in better application performance, both in terms of transaction times and battery life of mobile devices, and a better user experience. (Broadbeam2, 2002)

From the above excerpt, one can infer that ExpressQ either uses a lightweight TCP implementation or a reliable data protocol similar to that described in RFC 908. It is clear that regular TCP is not used, and that specific enhancements are included to provide an efficient transport mechanism when traversing the wireless link.

Wireless Communications and Mobile Commerce
ISBN: 1591402123
Year: 2004