An important issue for application software suppliers is the value offered to a customer (individual or organization). Value can be quantified economically by users' willingness to pay (see chapter 9), but there are many constituents of this value. No explicit distinction is made here between individuals and end-user organizations, although this distinction obviously changes the equation substantially. Of course, suppliers may have goals in addition to maximizing value, such as excluding competitors by creating switching costs for their customers (see chapter 9) or gaining competitive advantages through lower costs or higher volume sales.
The customer incurs various costs associated with acquiring, provisioning, and operating software, including payments to software suppliers, the acquisition of supporting hardware, and salaries for operational and support staff (see chapter 5). To the user, these costs and their relation to value are an important concern. A supplier will also work to maximize the value (and hence the price that can be charged).
Willingness to pay is difficult to quantify during the conceptualization and development of an application. Value is derived from the behavior the software invokes; that is, what it causes a computer to do on behalf of a user (or group, community, or organization), and how well it does those things. Although the specific context of the user is important, there are also generic characteristics of value that transcend the specific context. We discuss some of these now—how they affect value and how they affect the supplier's business model. Business issues relating to the sale of software are discussed more extensively in chapter 6.
One source of value is the tangible and intangible effects of an application in making a user (or group or organization) more productive, effective, or successful. An application may improve productivity, decrease the time to complete tasks, enhance collaboration among workers, better manage and exploit knowledge assets, or improve the quality of outcomes. Often, productivity enhancements can be quantified by financial metrics, such as increased revenue or reduced costs. Other types of effects (e.g., the effectiveness of an organization in customer service and greater customer satisfaction) are less quantifiable but no less real. When reengineering of processes and reorganization accompany application deployment, it is difficult to assess what portion of the benefit accrues to the software directly, but that artificial separation is arguably unnecessary because those benefits may be unattainable without the software.
Applications can sometimes enable outcomes that are otherwise not achievable, such as movie special effects or design simulation. In this case, the effect is not to improve but rather to extend the range of the feasible.
Examples While auctions are sometimes used in the physical world for selling high-value goods that are unique and whose price is therefore difficult to set (like artwork), auctions have become more common in e-commerce. The network circumvents one obstacle, getting bidders physically together. Many remote collaboration applications allow interaction at distances that would otherwise not be possible. The online tracking of inventories has allowed businesses to greatly reduce business interruptions for manual inventory accounting. Searching the content of all Web sites is enabled by the global Internet.
Software suppliers may try to adjust pricing based on an application's usefulness, or value, to a specific organization, since that is one of the biggest variables for different customers in the willingness to pay (this is an example of price discrimination; see chapter 9). How well the application fills an organization's needs varies widely; for example, is the organization already using a competitive application, or is it stuck with a manual process? Negotiation of price, with usefulness to a particular organization as a prime consideration, is especially common for custom-developed applications. For off-the-shelf applications, it is common to publish a standard catalog price but offer discounts based on value to the particular organization and willingness to pay or number of licenses bought. Another strategy is to offer different variants at different prices and allow customers to self-select (this is called versioning by economists; see chapter 9).
Software features do not always exactly match the needs of users, but a better match offers greater value. This is clearly true for off-the-shelf applications, where different adopters usually have differentiated needs that may be partially (but not totally) met by configuration options, but it is true even for custom-designed software. Unfortunately there is an inevitable mismatch between what is built and what is needed. It is relatively easy to provide many configuration and customization options in software, but this is insufficient. It is difficult enough to capture precisely the requirements of any individual user at any one point in time. Most software targets a large number of users (to share development and maintenance costs or to meet the needs of an organization) and also serves users over an extended period of time, during which needs change. Needs of large numbers of users over extended time can be approximated at best.
The matching of needs is a significant factor in make versus buy decisions in software acquisition. One argument for internal development of software in a large organization is a more precise match to specific needs and more control over future evolution to match changing needs. This is balanced against a higher cost of development (not amortized over multiple organizations) and often an increased time to deployment and higher risk of budget or time overrun or outright failure. On the other hand, an independent software supplier that achieves a closer match to needs offers higher value. For this reason, capturing user needs and requirements is always an important part of the application software development process (see chapter 4). Software acquisition is discussed further in chapters 5 and 6.
For much software, the value depends not only on intrinsic features and capabilities of the software itself but also on the number of other adopters of the same or compatible solutions: the more adopters, the higher the value. This is called a network effect or network externality (Church and Gandal 1992; Katz and Shapiro 1985; 1986b; Shapiro and Varian 1999b), and it plays an important role in some software markets.
An externality is a situation where the actions of one party affect another without a compensating payment (e.g., air pollution). Network externalities can be either positive (the form of primary concern here) or negative. In a negative externality, value actually decreases rather than increases with more adopters.
Example Congestion of a communication network (or highway system) is a negative externality in the absence of congestion-based pricing. As more users generate traffic and as throughput increases, the delay for information traversing the network increases for other users (see section 2.3.1), decreasing the value to them.
Positive network externalities in technology products come in two distinct forms (Messerschmitt 1999a; 1999c), as illustrated in figure 3.3. Assume that there are multiple instances (copies) of a product in the hands of different adopters. When these instances have no relationship, there is no network and no network effect. When they are dependent in some way, a network results. (This does not imply that there is an actual communication network connecting them but rather merely a network of complementary dependencies.) In the stronger direct network effect, the different instances of the product are directly complementary to one another, and the value to each individual adopter typically increases with the size of the network. In particular, the first adopter may derive no value at all.
Figure 3.3: Network effects on instances of a software product.
Example A facsimile machine's value increases as the number of owners of facsimile machines increases, because this creates more opportunities to send a fax. The first purchaser of a facsimile machine derives no value. A similar software example would be a remote conferencing or group meeting application for a desktop computer.
It is important to realize that there are two distinct sources of value in the presence of direct network effects. First, there is the intrinsic value of the software: what it does, how well it does it, and its effect on users when it is used. Second, there is the network effect: how many opportunities there are to use the software, the size of the network.
Example Once a remote conference begins, its value will depend on various features and performance characteristics that determine effectiveness and user experience. This is distinct from the direct network effect, which flows from the number of opportunities to conference.
In the weaker indirect network effect, the instances of the product have no direct dependence, but their value is collectively dependent on some complementary commodity, like available information content or trained staff, technical assistance, or complementary applications. (In the economics literature, direct and indirect network effects are sometimes associated with "real" or "virtual" networks, respectively.) In this case, a larger network typically stimulates more investment in this complementary commodity and thus indirectly affects value.
Example The Web exhibits an indirect network effect based on the amount of content it attracts. With more adopters of Web browsers, information suppliers have more incentive to provide information content (for example, they are likely to derive more revenue from advertisers, paying for more content). As another example, the value of a software development toolkit (languages, compilers, and other tools; see chapter 4) to a software development organization depends on the total number of adopters. With a larger base, there are more prospective workers who can work with that toolkit, and a richer set of complementary products, like training materials.
There are, of course, intermediate cases. These examples illustrate that the relative importance of intrinsic capabilities and network size in affecting user value can vary widely. This can be true even for a single software product.
Example A word-processing application offers intrinsic value to a solitary user (increasing effectiveness in authoring documents), but a larger network also gives more opportunities to share documents in a compatible format or to author a document collaboratively. For some users, the ability to share or collaborate is paramount, whereas it may be a marginal benefit to others.
Chapter 9 gives a simple quantitative model of network effects and discusses the dynamics of software markets influenced by network effects.
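As a rough illustration (this is a toy sketch, not the chapter 9 model; the function names and coefficients are invented for this example), a direct network effect can be modeled by giving each adopter an intrinsic value plus a term proportional to the number of other adopters:

```python
def network_value(n, intrinsic=1.0, effect=0.01):
    """Toy model of value to one adopter in a network of n adopters:
    intrinsic value plus a direct network effect proportional to
    the other n - 1 adopters. Coefficients are illustrative only."""
    return intrinsic + effect * (n - 1)

def total_value(n, **kw):
    """Aggregate value across all n adopters."""
    return n * network_value(n, **kw)

# The first adopter derives only intrinsic value (no network effect),
# mirroring the fax machine example: network_value(1) == 1.0.
assert network_value(1) == 1.0
# Value per adopter grows with network size.
assert network_value(100) > network_value(10)
```

Note that aggregate value grows roughly quadratically in `n` under this model (the Metcalfe-style intuition), which is one way to see why "success breeds success" once a network takes hold.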
Network effects are common in software and have a couple of practical results. First, they make it more difficult to establish a market if the initial demand is diminished by a dearth of adopters. The early product cycle is precisely when the supplier is most concerned with recovering the high fixed (and often sunk) costs of creation. Second, if a substantial market is established, network effects create a positive feedback in which "success breeds success." An incremental increase in adoptions makes the product appear more valuable to the remaining population.
Example The Linux operating system is benefiting from this cycle (as did Mac OS, UNIX, and Windows before it). As the number of adopters increases, more software suppliers choose to offer applications on Linux. The availability of applications is one of the prime motivators for adoption. More adopters in turn encourage application development.
There are a number of measures that suppliers can take to help establish a market in the presence of network effects (Shapiro and Varian 1999b). One is to offer backward compatibility with established products.
Example Voiceband data modems have followed Moore's law for years, introducing new generations of product with increasing speed and more (but still affordable) processing power. If each new generation communicated only with its own generation, early adoptions would be stifled. This was avoided by allowing each new generation to communicate with the older generations (at their lower speed). Since the communication algorithms are implemented in embedded software, the incremental manufacturing cost (the cost of storage for the older software) is low. Another approach is to develop an industry standard that allows all suppliers to offer compatible products, and thus expand the collective market (see chapter 7).
Example The Digital Versatile Disk (DVD) illustrates the role of standardization for a representation of information (entertainment video). Initially two groups of manufacturers formed alliances to share research and development (R&D) costs. Eventually, the two groups felt the need to cooperate to bring a single format to market, in part to satisfy content suppliers worried about consumer confusion. The DVD Forum is an association of 230 companies cooperating on DVD standards. Such efforts are always fragile: periodically companies dissatisfied with the forum consensus choose to develop incompatible technologies; the forum explicitly states that members are free to do so.
Generally speaking, software that is used more offers more value. Usage comprises two factors. First is the number of users, which in an organizational context often relates to the overall value of the software to the organization. Second is the amount of time each user spends with the application; a user who invokes an application only occasionally may derive less value than one using it the entire working day. Of course, if greater usage is a side effect of poor usability (see section 3.2.7), this could represent lower, not higher, value. Also, the value varies among different activities within an organization, so usage is at best an approximation of value.
Software licensing and pricing often take usage into account (see chapter 9), usually by fairly crude but easily formulated and enforced mechanisms.
Example Price may be coupled to the number of computers on which an application runs or to the speed of those computers. A floating license allows a certain number of concurrent users while allowing the software to be installed on a greater number of computers. Another usage approach is to couple pricing to the number of transactions; this makes sense for certain types of applications (like e-commerce). Selling software as a service (rather than licensing software code; see chapter 6) makes it easier to couple pricing directly to usage.
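The floating license mechanism mentioned above can be sketched in a few lines. This is a hypothetical illustration (the class and method names are invented, not drawn from any real license manager): installation is unrestricted, but the number of concurrent users is capped.

```python
class FloatingLicense:
    """Sketch of a floating license: the application may be installed on
    any number of computers, but at most `seats` users may run it
    concurrently. Names are illustrative, not from a real product."""

    def __init__(self, seats):
        self.seats = seats
        self.active = set()   # users currently holding a seat

    def check_out(self, user):
        """Try to claim a seat; refuse if all seats are taken."""
        if len(self.active) >= self.seats:
            return False
        self.active.add(user)
        return True

    def check_in(self, user):
        """Release a seat when the user exits the application."""
        self.active.discard(user)

lic = FloatingLicense(seats=2)
assert lic.check_out("alice")
assert lic.check_out("bob")
assert not lic.check_out("carol")   # third concurrent user refused
lic.check_in("alice")
assert lic.check_out("carol")       # a freed seat can be reclaimed
```

The appeal of this mechanism to suppliers is that it is easily formulated and enforced, even though concurrency is only a crude proxy for the usage-derived value discussed above.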
Software licensing is discussed further in chapter 8.
Software functionality and fidelity speak primarily to the perceptual experience of the user (Slaughter, Harter, and Krishnan 1998): how accurately and expeditiously an application completes user directives. Functionality refers to whether the software does what is asked or expected, from the user perspective. Fidelity refers to the accuracy with which the software achieves that functionality. Observed defects in the software (both their number and severity) contribute to a perception of lack of fidelity.
With respect to defects (sometimes colloquially called "bugs"), observed is an important qualifier. The actual number of defects may be either higher or lower than the number observed. Some defects won't be observed under typical use (the defective behaviors are not exercised). On the other hand, a perceived defect may actually be a misunderstanding as to how the software is supposed to work. The latter case could be reinterpreted as an actual defect in the intuitiveness of the usage model, the help/training material, or the certification process used to determine whether a user is qualified to use the application. Some perceived defects represent legitimate disagreement between the software developer and the user about what an application should do, reflected in the old joke "it's not a bug, it's a feature."
Perceived and real defects cannot be avoided completely. Perceived defects are defined relative to specific requirements, which can't be captured fully and accurately (see section 3.2.2). While a similar dilemma is faced by all engineering disciplines, many benefit from relatively slow change and much longer historical experience, allowing them to deliver close-to-perfect products from this perspective. IT as well as user requirements have always changed rapidly, and any stabilization in requirements is accurately interpreted today as a leading indicator of obsolescence. Of course, this has been generally true of all relatively immature technologies.
Example The automobile in its early days was perceived as cranky and hard to keep running, requiring a great deal of technical knowledge on the part of its drivers.
A second reason defects can't be eliminated is the impracticality of detecting all design flaws in software during testing (see chapter 4). As pointed out in chapter 2, software is subject to little in the way of physical limits in the hardware, and therefore software complexity tends to balloon to whatever level its human designers think they can cope with. The resulting complexity yields an astronomical set of states and outcomes, far more than can be exercised during testing. Thus, the reduction of defects focuses on development testing, on making software correct by construction, and inevitably on the maintenance of the software to remove latent defects observed by users (see chapter 5).
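A back-of-the-envelope calculation makes the scale of the problem concrete (the specific numbers here are illustrative assumptions, not drawn from the text):

```python
# Suppose a program has just 10 independent 32-bit inputs. Exhaustively
# testing every input combination would require (2**32)**10 = 2**320 tests.
inputs = 10
combinations = (2 ** 32) ** inputs

# Even at an (optimistic) billion tests per second, the time required
# dwarfs any conceivable test budget.
tests_per_second = 10 ** 9
seconds_per_year = 60 * 60 * 24 * 365
years_needed = combinations / (tests_per_second * seconds_per_year)

assert combinations == 2 ** 320
assert years_needed > 10 ** 70   # vastly longer than the age of the universe
```

This is why testing can only sample the state space, and why correctness-by-construction and post-deployment maintenance remain essential complements to testing.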
There are important gradations of defects that determine their perceptual and quantifiable severity. A defect consuming considerable invested time and effort is more severe than a defect that, for example, temporarily disturbs the resolution of a display.
Performance refers to how fast and expeditiously software achieves its delegated tasks. As discussed in section 2.3.1, there are several types of performance attributes of interest, some of which are of interest to users and some of which are of primary concern to operators. Delay metrics are of greatest importance to the user, such as interactive delay, playback delay for audio or video sources, or the delay introduced in an audio or video conference. Other types of delay are of importance to organizations, such as added delays in propagating work flow tasks through an organization.
Example Consider an application performing real-time control of an electric power network. Individual workers will notice interactive delays when they request information about the network status. The power company will be affected by delays between entering control actions and achieving resulting actions, since this may reduce the control accuracy and effectiveness.
Throughput metrics interest operators because they affect the number of users supported more than the perceptual experience of each individual user. There is a connection between utilization (throughput as a fraction of capacity) and interactive delay that should be taken into account (see section 2.3.1).
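The coupling between utilization and delay can be illustrated with the standard M/M/1 queueing approximation (an assumed model for illustration; section 2.3.1 may frame this differently). Mean time in the system is the service time divided by one minus the utilization, so delay grows without bound as utilization approaches capacity:

```python
def mm1_delay(service_time, utilization):
    """Mean time in system for an M/M/1 queue: T = S / (1 - rho).
    As utilization (rho) approaches 1, delay grows without bound."""
    if not 0.0 <= utilization < 1.0:
        raise ValueError("utilization must be in [0, 1)")
    return service_time / (1.0 - utilization)

# At 50% utilization, delay is double the bare service time.
assert abs(mm1_delay(0.01, 0.5) - 0.02) < 1e-12
# Pushing utilization from 90% to 99% increases delay tenfold.
assert mm1_delay(0.01, 0.99) > 9 * mm1_delay(0.01, 0.9)
```

This is why an operator who provisions for high throughput (high utilization) may inadvertently degrade the interactive delay experienced by individual users.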
Observed performance is not strictly objective. For example, poor interactivity in one activity can be masked by a multitude of attention-diverting activities. For this reason Web browsers display animated graphics while waiting for the arrival of requested Web content. When the "observer" is actually another piece of software rather than a human user, then objective measures apply.
Example In Web services, one application makes use of the services of a second application made visible on the Web (see chapter 7). The effect of the performance attributes of the second application on the first application can usually be defined and measured objectively.
The most immediate determinant of performance is the capital expenditures that the operator chooses to make in hardware resources, which is not under the control of the software supplier. The architecture of an application does have a great deal to do with whether or not performance can be maintained under all operational conditions (e.g., as the number of users increases).
Other aspects of quality are learnability (ease with which the user can become facile with the application) and usability (the user's perception of how easy or difficult it is to accomplish a desired task once an application has been learned) (Nielsen 2000; Usability Professionals Association). These are hard to quantify and vary dramatically from one user to another, even for the same application. Education, background, skill level, preferred mode of interaction, experience in general or with the particular application, and other factors are influential. There is often a trade-off as well, because features that make an application easier to learn often make it more cumbersome to use.
Example Novice users of an application will find it easier to get going if the user interface relies heavily on pull-down menus. An experienced user may prefer keyboard shortcuts for the same functions because they don't require taking the fingers off the keyboard. Experienced users may prefer an online help system with searching and hyperlinks, whereas a novice user may prefer a printed manual.
Thus learnability and usability, like needs, vary widely among users, and usability varies over time for the same user (Nielsen 1993). Offering alternatives in each user-interface function is one way to enhance usability across all users and to combine ease of learning with ease of using.
Example An application may allow the same function to be invoked by mouse or keyboard, offer visual and audio cues, or offer context-free and context-based operations.
Another useful technique is adaptation to the user's desires or sophistication.
Example An application may allow a "discovery" of features via several likely paths, while later repetitive use of certain features can be fine-tuned to minimize the required number of manipulative steps. Examples include the reconfiguration of user interface elements or the association of common functions with command keys.
Security recognizes that there are inevitably individuals who, for whatever reason, will try to penetrate or disrupt an application or steal information. In the popular press, these individuals are frequently called hackers, but among computer aficionados the term hacker has a much more positive connotation: a person with "technical adeptness and a delight in solving problems and overcoming limits" (Raymond 1996), adeptness that can be channeled in ways positive and negative (but overwhelmingly positive). The preferred term for those whose goal is breaking into systems for nefarious purposes (such as theft of information or vandalism) is cracker.
Security strives to exclude unauthorized attacks that aim to unveil secrets or inflict damage to software and information (Howard 1997; Pfleeger 1997). Damage can come in many forms, such as rendering services unavailable, or compromising the authenticity, integrity, or confidentiality of information. The user (or end-user organization) of a software application is most directly affected by security breaches, and therefore software that is less susceptible to cracking offers greater value. Secure software is, however, insufficient; good security requires vigilance and conscientious action on the part of both the users and operators of the software and in extreme cases involves law enforcement agencies who investigate and prosecute illegal cracking activities (see chapter 5).
There is a natural trade-off between security and usability. Introducing security measures within software as well as organizations operating and using the software frequently makes the software harder to use.
Example Requiring a password is annoying to some users, but it is an important security measure for limiting who gains access. Requiring different passwords for multiple applications is doubly annoying. This can be mitigated by allowing a single password for multiple applications, but this weakens security because a single successful cracking of one password allows the cracker access to all applications. However, allowing a single password may improve security if it obviates the user's need to write down the password.
Another issue of concern to the user of a software application is privacy, which relates to how much information is available about a user, or a user's location or activities, and to whom that information is disclosed. Privacy is compromised, from the user perspective, when other parties learn too much about his or her persona or activities by virtue of using a software application.
Example Users often disclose personal information like e-mail address and credit card number to e-commerce applications. A site may track which products a user purchases as well as the products a user examines without purchasing. A wireless network must know (and thus can track and record) the geographic location of a user over time. Potentially this personal information can be shared with third parties.
There are considerable variations in privacy concerns depending on circumstances. Users are considerably more sensitive about financial, health, and employment information than they are about shopping habits, for example. Sensitivity may also vary depending on the nature of the organization gathering the information, for example, private firm or government.
Example Seeking a compromise between the privacy requirements of users and the interests of marketers in the United States, the Financial Services Modernization Act of 1999 (also known as Gramm-Leach-Bliley, or GLB) introduced the user's right to opt out of the collection of personal information. That is, everyone has the right to request nonuse of their contact information for marketing and other purposes, on a provider-by-provider basis. From the perspective of maximizing privacy, requiring users to opt in instead (where the default would be nonuse) would be preferable.
Users may be deterred from using applications that they perceive as violating privacy, whether this is real or not. Software can help by allowing user control over which personal information is disclosed. The primary issue is not the software, however, but the ways in which organizations operating an application treat information that is gathered. In the normal course of using an application, personal information often must be legitimately revealed, and the user may be concerned that this information not be passed to third parties. Organizations can respond to these concerns by setting and posting privacy policies providing users with sufficient information to make an informed choice about which information to reveal; they may also provide the user some control over how the information is used by offering policy options. Common options are opt out, in which the user must explicitly specify that the personal information should not be shared with third parties, or opt in, in which the user must explicitly give permission before information can be shared. Organizational and management issues surrounding privacy are discussed in chapter 5, and government roles are discussed in chapter 8.
Business changes at a rapid rate, including organizational changes, mergers and divestment, updates to existing products and services, and the introduction of new products and services. The ability to rapidly change or introduce new products and services is a competitive advantage.
Many enterprise applications are the product of a long evolution. Often they are custom developed by starting with existing departmental applications using obsolete technology (called legacy applications), modifying them, and adding translators so they work together to automate enterprise business processes (this is called a federation of applications). This level of flexibility is achieved by handcrafting, and is not as structured and systematic as one would like. It is time-consuming and expensive, and creates high maintenance costs in the long run.
Another approach is to purchase an off-the-shelf enterprise application and mold the business processes around it. Although more disruptive, this course of action may yield a more cost-effective and higher-value solution in the long run. Suppliers of enterprise software solutions try to make them more flexible in meeting differing and changing needs in a couple of ways. One is to offer, instead of a monolithic solution, a set of modules that can be mixed and matched at will (see chapter 4 for a description of modularity and chapter 7 for a description of the related concept of frameworks). Another is to offer many configuration options to allow customization.
End-user organizations often make large investments in acquiring and deploying an application, especially where reorganization of business processes is required. The cost of licensing software is usually a small portion of that investment. Organizations thus hope for staying power, both of the solution and its suppliers. Software suppliers that articulate a well-defined road map for future evolution provide reassurance.
Flexibility is hardly a strength of most business and enterprise software applications today, especially in an environment heavily infused with legacy applications. In an ideal world, IT would be a great enabler of change, or at least not a hindrance. Instead, it is sadly the case that bringing along information systems to match changing business needs is often perceived as one of the greatest obstacles. Redressing this problem is a big challenge for the software industry (NRC 2000b).
Functionality and fidelity, performance, usability, flexibility, and extensibility together constitute user satisfaction. In fact, two distinct criteria can be identified for judging the alignment of a software program with its requirements (Lehman et al. 1997). Specification-driven programs are judged by how well they satisfy an objective specification. Satisfaction-driven programs, on the other hand, are judged by the perception, judgment, and degree of satisfaction of various stakeholders. This generalizes the notion of user satisfaction, since the stakeholders may include many parties other than end-users, including operators and managers of software development and end-user organizations (see chapters 4 and 5).
Specification-driven programs can be adequately evaluated through objective testing procedures in the laboratory, whereas satisfaction-driven programs can only be assessed by actual users. Although there is no hard dividing line, applications tend to be satisfaction-driven, whereas some types of infrastructure software may be specification-driven. Lehman et al. (1997) make some useful observations about satisfaction-driven programs:
They must be continually adapted, or else they become progressively less satisfactory.
Unless a program is rigorously adapted to take into account changes in its operational environment, its quality will appear to decline.
Functional capability must be continually increased to maintain user satisfaction.
Over time complexity increases unless explicit effort is made to contain or reduce it.
These observations have profound implications for the software development process (see chapter 4).
The user is concerned not with value alone but with value less the total cost associated with using an application. Costs include direct payments to a software supplier, but also any other costs incurred by the user or intermediaries acting on behalf of the user. For example, whether an application and its supporting infrastructure are operated by the user or by a separate organization, a significant portion of the cost of use is in the operational support (see chapter 5). Suppliers thus have control over the user's costs directly through the price charged for the software and indirectly by controlling other costs of ownership. This is especially significant given that software acquisition decisions are often made by operational rather than end-user organizations.
A single closed software solution offers less value than one that can be combined with other solutions to achieve greater functionality. This is called the composability of complementary software solutions.
Example The office suite (e.g., ClarisWorks, Lotus Smart Suite, Microsoft Office, StarOffice, and WordPerfect Office) illustrates composability. It allows sharing of information and formatting among individual applications (like a word processor and a spreadsheet), and objects created in one application can be embedded in documents managed by another application (such as embedding a spreadsheet in a word-processing document). A more challenging example is the ability to compose distinct business applications to realize a new product or service by the federation of existing departmental applications, as described in section 3.2.10. A third example is Web services, in which applications are built up by composing multiple application services on the Web (see chapter 7).
Composability is most valuable when it is easy and automatic. Less valuable (but more practical in complex situations) is composability by manual configuration, or handcrafting. Composability is one of the motivations behind many major standardization initiatives in the software industry (see chapter 7). The Internet has greatly magnified the opportunities for composability, but also its inherent challenge, because of its ability to mix solutions from different suppliers running on different platforms (see chapter 4).
Performance is an important aspect of software composition (see section 3.2.12): two separately fast components, when combined, can be very slow—a bit like two motors working against each other when coupled. The exact effect of composed components on overall performance is hard to predict accurately for complex software systems.
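One common way two individually fast components compose into a slow whole is a "chatty" interface, where one component makes many fine-grained requests of the other. The numbers below are illustrative assumptions chosen to make the effect visible:

```python
# Component A's own processing is fast (1 ms), and each request to
# component B is also fast (1 ms). But if A makes 1000 fine-grained
# requests to B per user operation, the composed operation takes
# over a full second.
delay_a = 0.001          # A's own processing time per operation (s)
delay_b = 0.001          # round-trip time per request to B (s)
calls_to_b = 1000        # fine-grained requests A makes per operation

composed_delay = delay_a + calls_to_b * delay_b
assert composed_delay > 1.0   # each part is fast; the composition is not
```

Predicting such effects for real systems is far harder, since contention for shared resources and queueing (see section 2.3.1) compound the per-call costs shown here.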
Another common terminology is calling both groups hackers but distinguishing between "white-hat" and "black-hat" hackers. Black-hat hacker is synonymous with cracker as used here.
The literature refers to these as S-type and E-type programs (following the classification of music into serious and entertainment classes), but we substitute these more descriptive names here.