That quality of service (QoS) affects the success of m-commerce applications is beyond doubt, as it plays a pivotal role in attracting and retaining customers. Frustrated customers will abandon a site if they perceive it as being slow (for instance), causing lost revenue. To compound the issue, m-commerce sites are likely to be compared to the user's actual shopping experiences in the physical world and to their normal purchasing and browsing interactions in that environment. While the advent of multimedia-enhanced m-commerce applications, offering, for instance, video footage of merchandise on sale, will go some way towards alleviating this factor, in a mobile communication environment such applications will inevitably have severe constraints placed upon them by the limited bandwidth on offer, further highlighting the importance of QoS provision mechanisms in this context.
Traditional approaches to providing quality of service to distributed multimedia applications have focused on mechanisms for ensuring and managing technical parameters such as delay, jitter, and packet loss over unreliable networks. Now, while issues raised by m-commerce systems do not necessarily preclude the use of such approaches, there are, nonetheless, a few context-specific characteristics which demand complementary methods of delivering QoS. Moreover, such methods of QoS provisioning have considerable overlap with those used by the more general category of e-commerce applications, and as such, QoS considerations identified in the latter type of applications have direct applicability to the m-commerce arena.
These specific e-commerce QoS requirements stem from the fact that the e-commerce server has been identified as the main resource upon which rests the success of such applications. In a B2C session scenario, for instance, the user is, irrespective of the client device, interested in getting good quality search results and in the speed of the transactions, both of which are server dependent (Bochmann, Kerhervé, Lutfiyya, Salem, & Ye, 2001). The longer the response delay and the poorer the search results, the less inclined will the user be to shop from that specific e-commerce site, resulting in lost revenue.
Server response times have thus been identified as a key metric in e-commerce scenarios. It is therefore logical for QoS considerations to concentrate on how to manage such response times (Arlitt, Krishnamurthy, & Rolia, 2001; Bochmann et al., 2001; Bochmann, Kerhervé, & Salem, 2000; Chen, Mohapatra, & Chen, 2001), and a primary focus has been on identifying the specific characteristics of e-commerce servers' workloads. Such workloads have been found to be significantly different from traditional Web server workloads and to be characterised by short and frequent requests, an abundance of dynamically generated data, variable think times, and undefined session lengths (Arlitt et al., 2001; Bochmann et al., 2000, 2001; Krishnamurthy & Rolia, 2000). While there have been attempts to model e-commerce server traffic and requests using clustering techniques based on factors such as session length, request-class mix, and navigational behaviour (Arlitt et al., 2001; Menascé, Almeida, Fonseca, & Mendes, 1999), the frequency of such requests implies that QoS cannot be negotiated on a per-request basis, as is the case with distributed multimedia (Bochmann et al., 2000).
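To make the clustering idea concrete, the sketch below groups logged sessions by simple features of the kind cited above (session length and request-class mix) using a minimal k-means loop. The feature choices, request-class labels, and function names are illustrative assumptions, not details drawn from the cited studies.

```python
import math

def session_features(session):
    """Map a logged session (a list of request-class labels) to a
    numeric feature vector: (session length, fraction of 'browse'
    requests, fraction of 'buy' requests). The class labels are
    hypothetical, chosen only for illustration."""
    n = len(session)
    return (
        float(n),
        sum(1 for r in session if r == "browse") / n,
        sum(1 for r in session if r == "buy") / n,
    )

def kmeans(points, k, iters=20):
    """Minimal k-means: returns a cluster label for each point.
    Centroids are seeded deterministically from the first k points."""
    centroids = list(points[:k])
    labels = [0] * len(points)
    for _ in range(iters):
        # Assign each session to its nearest centroid.
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: math.dist(p, centroids[c]))
        # Recompute each centroid as the mean of its members.
        for c in range(k):
            members = [p for i, p in enumerate(points) if labels[i] == c]
            if members:
                centroids[c] = tuple(sum(xs) / len(members) for xs in zip(*members))
    return labels
```

A workload analyst might then inspect each cluster's mean features to label it, for example, as "short browsing sessions" versus "long purchasing sessions".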
Even though e-commerce server workloads have been found to display time-of-day patterns (i.e., busiest during the day and least busy in the early morning), there are nonetheless substantial fluctuations to be found. "Flash" surges in server workload are to be expected when promotions are running on the respective e-commerce site or during traditionally shopping-intensive periods, such as those preceding Valentine's Day or Christmas Day; yet even in the absence of these factors, observation of e-commerce server workloads has found request rates to vary by a factor of 9 (Arlitt et al., 2001). However, it is precisely this relative unpredictability of server workload which makes the application of traditional QoS management techniques especially opportune.
Admission control algorithms have thus been proposed in order to provide predictable e-commerce response times (Chen & Mohapatra, 2002; Chen et al., 2001; Kant & Mohapatra, 2000). Such an algorithm would ensure user satisfaction in a B2C context from two points of view: not only would server response latency be bounded, but denial-of-service attacks would also be prevented. The latter is an issue of particular concern in e-commerce systems, as server workload analysis has identified that a non-negligible percentage of requests are issued by robots, automatically collecting, for instance, price information by "crawling" through a site. Such robot-originated requests have a significant impact on server processing capacity, degrading the service provided to customers. While work has been done on identifying particular characteristics of robot sessions, the issue is further complicated by robots intentionally behaving like users (Arlitt et al., 2001; Menascé et al., 2000).
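The two goals above can be sketched in a single admission test: reject a request when admitting it would push the projected queueing delay past a bound, and throttle clients whose request rate looks robot-like. This is a minimal illustration, not one of the cited algorithms; the class name, the fixed mean service time, and the per-client rate window are all assumptions.

```python
import time
from collections import deque

class AdmissionController:
    """Sketch of admission control for bounded response times.

    A request is admitted only if the projected queueing delay
    (queued requests x assumed mean service time) stays within
    `delay_bound`. A crude per-client sliding-window rate limit
    rejects robot-like request rates."""

    def __init__(self, mean_service_time, delay_bound,
                 max_requests_per_window=30, window=10.0):
        self.mean_service_time = mean_service_time
        self.delay_bound = delay_bound
        self.queued = 0                     # requests admitted but not completed
        self.max_requests = max_requests_per_window
        self.window = window
        self.history = {}                   # client id -> deque of timestamps

    def admit(self, client_id, now=None):
        now = time.monotonic() if now is None else now
        stamps = self.history.setdefault(client_id, deque())
        while stamps and now - stamps[0] > self.window:
            stamps.popleft()                # discard timestamps outside the window
        if len(stamps) >= self.max_requests:
            return False                    # robot-like request rate: reject
        if (self.queued + 1) * self.mean_service_time > self.delay_bound:
            return False                    # delay bound would be exceeded: reject
        stamps.append(now)
        self.queued += 1
        return True

    def complete(self):
        self.queued -= 1
```

Real admission controllers would estimate service times per request class rather than assume a constant, but the structure of the decision is the same.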
Admission control is inextricably linked to the provision of service differentiation, which is itself particularly suited to the for-profit transactions characteristic of e-commerce. For instance, personalisation, although benefiting the B2C shopping experience, has been identified as a potential threat to the performance and scalability of e-commerce systems (Arlitt et al., 2001). The suggestion is therefore to provide personalisation capabilities when server usage is light and to withhold them at periods of high demand; alternatively, users can have a personalised shopping experience, even at times of high system load, if they pay for the privilege.
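The policy just described reduces to a small predicate. The sketch below assumes server load is expressed as a utilisation figure in [0, 1]; the 0.7 threshold and the function name are illustrative values, not drawn from the cited work.

```python
def personalisation_enabled(server_load, paying_customer, load_threshold=0.7):
    """Load-based service differentiation: personalised pages are
    served when the server is lightly loaded or, at high load, only
    to customers who pay for the privilege. The threshold value is
    an illustrative assumption."""
    return server_load < load_threshold or paying_customer
```

In practice the load signal would come from server monitoring, and the premium flag from the customer's account profile.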
While dynamically generated information greatly aids personalisation, it is a drain on server processing capacity and adds delays to server response times. Overall system performance could be improved, though, if information were cached on the application server side, and various mechanisms for dynamic content caching have been put forward. Thus, while Arlitt et al. (2001) suggest the use of a cache large enough to store all of a site's responses, Mohapatra and Chen (2002) propose WebGraph, a graph-based representation of Web pages, where nodes (called weblets) represent parts of a Web page that either remain static or change simultaneously, and edges represent the inclusion relationship. When a dynamic Web page is created, only those weblets that have changed need be refreshed, thus relieving some of the server processing burden. Moreover, this framework can also be used for QoS maintenance, as each edge in the graph has attributes representing the QoS characteristics (such as delay, throughput, and loss sensitivity) of the weblet to which it points. Thus, if a particular weblet is indicated as being loss-sensitive (as is the case with financial information) and if losses have occurred recently in the environment, then that edge is deleted and the respective weblet is not refreshed, thus economising on resource requests.
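The refresh rule above can be sketched as a small graph walk: only changed weblets are refreshed, and loss-sensitive ones are skipped while the environment is lossy. The class and method names here are illustrative, not taken from Mohapatra and Chen (2002).

```python
class Weblet:
    """A node of the page graph: a fragment that is either static or
    changes as a unit. `dirty` marks fragments whose content changed."""
    def __init__(self, name, dirty=False):
        self.name = name
        self.dirty = dirty

class WebGraphPage:
    """Sketch of the WebGraph idea: a page holds inclusion edges to
    weblets, each edge carrying QoS attributes for its target."""

    def __init__(self):
        self.edges = []  # (weblet, qos_attributes) pairs

    def include(self, weblet, loss_sensitive=False):
        self.edges.append((weblet, {"loss_sensitive": loss_sensitive}))

    def weblets_to_refresh(self, recent_losses=False):
        """Return the names of weblets that should be regenerated."""
        out = []
        for weblet, qos in self.edges:
            if not weblet.dirty:
                continue  # unchanged fragment: serve from cache
            if recent_losses and qos["loss_sensitive"]:
                continue  # prune the edge: skip refresh over a lossy link
            out.append(weblet.name)
        return out
```

A page assembly step would then regenerate only the returned weblets and splice them into the cached remainder of the page.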
While this section has identified several server-side considerations which need to be handled by mobile B2C transactions, the fact remains that the biggest impediment to enhancing such transactions with multimedia is the relative paucity of bandwidth available to such applications. As the next section will illustrate, however, results obtained in the field of perceptual multimedia quality show that, if such results are taken into account, the limited bandwidth available to mobile B2C applications is not necessarily a constraining factor for multimedia-enhanced m-commerce applications.