While the marketplace works for most purposes, there are situations where government regulation can and should intervene in ways that place requirements or constraints on the software industry. These include situations where larger societal interests such as law enforcement and national security are at stake; where the market may fail to provide sufficient protections (such as security and privacy) to users; where the rights of some citizens need to be preserved; or where vibrant competition in the marketplace is at risk.
Like other aspects of our society, software has considerable impact on law enforcement and national security. This has three facets. First, IT creates new challenges for law enforcement, including new laws to enforce and new ways for lawbreakers or national enemies to coordinate themselves or hide their activities. Second, these new tools in the hands of lawbreakers require law enforcement agencies to adopt new countermeasures. Third, as in other institutions, information technologies provide law enforcement and intelligence agencies with tools to improve organizational effectiveness and productivity.
As IT is increasingly woven into the social fabric, governments have increasing concern that the benefits be maximized while avoiding possible negative implications. Nowhere is this tension more evident than in the area of computer security. On the one hand, it is important to promulgate strong technological security measures to limit the potential for computer crime so as to contain the losses and limit the resources devoted to law enforcement ("an ounce of prevention is worth a pound of cure"). There is even increasing concern about sophisticated attacks by terrorists or enemies. To counter these threats, the many authentication- and encryption-based tools and technologies discussed in section 5.4 should be in common use, and operators and users should be well trained and vigilant. On the other hand, similar security tools and technologies can be utilized by criminals and terrorists to hide their activities. This has led governments to attempt to limit the use of encryption technologies, or to limit the "strength" of encryption technologies that are used, even though that might reduce our collective security (Dam and Lin 1996; 1997).
Example For much of the past decade, the U.S. government classified encryption as a "defense article" on the Munitions List, making it subject to the Arms Export Control Act (Levin 1999; NRC 1996). An export license from the Department of State (and later Commerce) was required to sell products incorporating encryption. Regulations limited the number of bits in encryption keys to 40 (or 56 in some cases). This "weak" encryption was sufficient to thwart casual attacks but was susceptible to sustained cryptanalysis given sufficient resources (e.g., those of government intelligence services). The motivation was to allow strong encryption within the United States (where no restrictions applied) while limiting its availability outside the United States. Opponents of this policy argued that overseas companies could develop the same technologies, creating new competitors for U.S. companies. Civil libertarians argued that the policy violated free expression by limiting the right of researchers and commercial firms to publish their encryption knowledge, and compromised the privacy and security of individuals by exposing their encrypted information to possible scrutiny. These export restrictions were incrementally relaxed (but not eliminated) in 1998 and 2000.
As a rule, attempts to ban the civilian use of new software technologies are likely to be ineffective, except in the unlikely event that they are globally banned, because many countries have the capability to develop them.
Escrowed encryption is a more direct way to address these conflicting concerns, enabling strong encryption and legitimate access by authorities at the same time (NRC 1996). It creates three classes of entities: users, who have unconditional access to the plaintext; adversaries, who never have access; and authorities, who are allowed access under exceptional circumstances strictly defined by laws or policies. The first two classes are recognized by ordinary confidentiality protocols (see section 5.4.5): users have access to the secret decryption key and adversaries don't. The class of authorities can be supported by key escrow agents, one or more disinterested entities who have access to a decryption key but are legally bound to release this key to specified authorities only under a set of prescribed policies, and who bear legal responsibility for malfeasance or mistakes.
Escrowed encryption can solve a number of problems in law enforcement and national security. For example, it can be used to allow law enforcement authorities exceptional access to encrypted information under court order (similar to a search warrant) or to allow intelligence authorities access to encrypted information while still preventing access by adversaries. However, escrowed encryption requires the cooperation of users, either voluntarily or because they are legally bound. Escrowed encryption has other applications as well. For example, an organization may routinely keep vital information assets in encrypted form to prevent inadvertent disclosure or theft, but then the loss or corruption of the decryption key means loss of the information. Escrowed encryption provides a mechanism to recover lost keys.
A number of escrowed encryption protocols are possible, depending on the precise social needs (Denning and Branstad 1996). An important design parameter is the granularity with which a key is used and susceptible to being revealed.
Example The public key infrastructure provides a natural mechanism for escrowed encryption; since the certificate authority (CA) creates the secret key and distributes it to individuals or organizations, the CA could also retain a copy of these keys and provide one to an authority under exceptional circumstances. However, this potentially gives the authority access to all confidential communications directed at the user with this secret key. Access can be limited to a specific session by using a onetime randomly generated session key (see section 5.4.7). An escrowed encryption protocol using a session key would communicate the session key confidentially to the key escrow agent (for example using its authenticated public key).
An additional level of trust and protection can be introduced by using two or more key escrow agents, requiring them to work in concert to reconstruct a secret key. This reduces the possibilities of both malfeasance (requiring collusion) and mistakes (requiring all agents to make a mistake). Most practical applications of key escrowing have other detailed requirements that make the required protocols considerably more intricate than those described.
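The split-key idea can be sketched with simple XOR secret sharing. This is only an illustrative fragment (the function names are my own): real escrow protocols add authenticated channels, auditing, and policy enforcement, and often use threshold schemes that tolerate the loss of some shares.

```python
import secrets

def split_key(key: bytes, n_agents: int = 2) -> list[bytes]:
    """Split a secret key into n XOR shares, one per escrow agent.
    Every share is needed to reconstruct the key, so no single agent
    (malfeasance) and no single error (mistake) can reveal it."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n_agents - 1)]
    last = key
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))  # XOR in each share
    shares.append(last)
    return shares

def reconstruct_key(shares: list[bytes]) -> bytes:
    """XOR all shares together to recover the original key."""
    key = shares[0]
    for s in shares[1:]:
        key = bytes(a ^ b for a, b in zip(key, s))
    return key

session_key = secrets.token_bytes(16)          # one-time session key
shares = split_key(session_key, n_agents=2)    # one share per escrow agent
assert reconstruct_key(shares) == session_key  # agents must act in concert
```

Because each share is individually a uniformly random string, a single escrow agent learns nothing about the key; only the combined shares are useful to an authority.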
Example In 1994 the U.S. government proposed the Escrowed Encryption Standard (EES) for use in confidential telephone conversations (Denning 1994). (This was more popularly known as the Clipper chip initiative.) EES uses a one-time random session key for each telephone call, two key escrow agents who must collaborate to allow a law enforcement authority to decrypt a conversation, and sufficient information carried with the encrypted telephone signal itself to allow this. Each telephone encryption device has a unique device key, secret and encapsulated within the device, which is split into two pieces, each piece supplied to one of the escrow agents. The encryption device attaches a law enforcement access field (LEAF) to each communication. The LEAF includes sufficient information to uniquely identify the encryption device and, with coordinated information obtained from both escrow agents, to obtain the session key and decrypt the conversation. EES was proposed for widespread use but never gained traction, in part because of a poor reception from civil libertarians. Today, EES is a voluntary standard for federal agency communication.
Authentication is another illustration of a tough balancing act for governments. On the one hand, biometric authentication is a powerful evidentiary tool in law enforcement, and is also the most effective form of access control in computer applications or at national borders. However, it also raises numerous issues relating to personal privacy and civil liberties.
Example In the United States a national identity smartcard containing biometric information has occasionally been proposed. This smartcard could encapsulate a secret establishing its authenticity, and that secret could be verified using a challenge-response protocol. Further, the identity and biometric information provided by the smartcard could be digitally signed by an authority, establishing its integrity, and could be compared with biometric information gathered at border crossings, airport gates, and so on. Even though this security system would not require a central database nor require the storage of information on the whereabouts of individuals as they traveled, there would be strong motivation to capture and store this information for possible investigative or evidentiary purposes. There is considerable opposition to this system on the basis of privacy and civil liberties.
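The challenge-response idea mentioned in the example can be sketched as follows. This is a toy in which a shared HMAC key stands in for the card's encapsulated secret; an actual smartcard deployment would use public-key techniques (see section 5.4.3) so that verifiers never hold the card's secret at all.

```python
import hashlib
import hmac
import secrets

# Secret provisioned into the card at issuance (illustrative only).
CARD_KEY = secrets.token_bytes(32)

def card_respond(secret: bytes, challenge: bytes) -> bytes:
    """The card proves knowledge of its secret without revealing it:
    it returns a keyed hash of the verifier's fresh challenge."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def verifier_check(secret: bytes, challenge: bytes, response: bytes) -> bool:
    """The verifier recomputes the expected response and compares in
    constant time; a replayed response fails because each challenge
    is freshly random."""
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = secrets.token_bytes(16)           # fresh random challenge
response = card_respond(CARD_KEY, challenge)  # computed inside the card
assert verifier_check(CARD_KEY, challenge, response)
```

The essential property is that the secret never crosses the interface: an eavesdropper sees only (challenge, response) pairs, which are useless for future sessions.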
One of the greatest challenges for law enforcement arises from the global nature of the Internet compared to the geography-based jurisdictions of most agencies and courts. Without firewalls placed at national boundaries, the Internet is transparent across geographic and jurisdictional boundaries. Where a jurisdiction might attempt to ban offensive software (such as strong encryption) or information (such as pornography), this can be circumvented by making it available on a server in another jurisdiction.
On the opportunity side, software-based information management techniques are essential tools in crime detection and investigation. A big opportunity is the federation of information repositories held by different agencies, a term applied to building a layer of middleware above a set of separately administered databases that gives them the appearance of being merged. This is a natural way to share the information stored in fragmented databases among agencies. Another opportunity is data mining, a term applied to extracting unexpected patterns from masses of information. A current research topic—one that raises many privacy and civil liberties issues—is the dispersal of many tiny sensors with wireless networking or the mass surveillance of public spaces using miniature video cameras. Such sensors can be used for monitoring the location of individuals (like parolees), intrusion detection, and the detection of chemical or biological attacks.
Government interest and involvement in security issues (see section 5.3.1) has increased markedly in recent years, for several reasons. As the use of computers and the Internet in commercial activities has become widespread, the vulnerability to attacks has increased. Not only are there direct monetary losses to worry about, but consumer confidence and hence national economic performance can potentially be affected. The global Internet has opened everyone to organized and sophisticated attacks by foreign governments and terrorists. There have been some well-publicized successful attacks wrought by individual crackers that illustrate our vulnerability.
Example A worm (allegedly unleashed by a graduate student at Cornell University) caused a major collapse of servers on the Internet in November 1988 by exploiting several vulnerabilities in UNIX programs and continually replicating itself. As these types of operating system vulnerabilities have increasingly been plugged, recent worms have focused on e-mail attachments. In February 2000 a denial-of-service attack (allegedly launched by a Canadian teenager using tools readily available on the Internet) brought to a halt several major e-commerce sites (including eBay, Schwab, and Amazon).
The primary defense against attacks is the actions taken by provisioning and operation personnel to protect themselves and users. However, there are a number of reasons why these measures may, by themselves, be inadequate. Operators have control only over their own vulnerabilities, and many techniques to reduce those vulnerabilities are both expensive and invasive, typically interfering with usability. Good security does nothing to increase revenue for either operators or suppliers; it simply combats losses, often of an unknown or unspecified nature. Thus, operators have mixed incentives and may underinvest in security measures. Achieving security requires very specialized expertise, which is expensive and difficult to develop within an individual organization because such expertise benefits directly from the learning curve of dealing with many incidents across organizations. Operators have no control whatsoever over the other side of security, the threats, and have great difficulty assessing risks. Further, there are externalities: inadequate measures taken by one user may increase the threat to other users.
Example A distributed denial-of-service attack is typically mounted by cracking multiple hosts and installing attack programs on all of them. Thus, an attack on one host is assisted by the weak security of other hosts anywhere on the Internet. From the perspective of the site being attacked, it is difficult to detect the difference between legitimate use and a denial-of-service attack (although tools are being developed by some startup companies, such as Arbor Networks, Asta Networks, Lancope Technologies, and Mazu Technologies, to detect and thwart such attacks). The spread of viruses is interrupted by users' installing viral protection software, and worms can be defeated by installing the latest security patches; in this manner, each user is affected by the collective actions of others.
Perhaps most important, security is ultimately a systems issue; to a large extent, good security can be achieved only by software suppliers working together and with provisioning and operation personnel across all organizations. In this sense, security has a lot in common with public health, where individuals must take responsibility for their own health but also depend on a whole infrastructure to assess risks, monitor the system, coordinate actions, and take speedy and collective remedial measures when problems occur.
All these issues, and especially the public health analogy, suggest a role for government laws and regulation in increasing the collective level of security. The level of threat can be reduced by laws that impose criminal penalties for certain types of cracking activities, especially where there is a resulting direct harm or economic loss.
Example In the United States the 1986 Computer Fraud and Abuse Act made it a federal crime to gain unauthorized access to certain types of hosts, such as those of financial institutions and those involved in interstate commerce. It was amended by the National Information Infrastructure Protection Act of 1996. To conduct a felony prosecution, the attacked organization must maintain comprehensive audit trails to prove that the penetration occurred and that losses were suffered.
Laws may also outlaw the creation or dissemination of software tools that assist crackers, although these raise significant issues of free speech (similar to copy protection circumvention; see section 8.1.5).
On the demand side, the primary mechanism for ensuring good security is customer evaluation of security features as one important criterion for choosing one supplier over another. Unfortunately, there are limits to the sophistication of most customers, and they may also lack adequate information to make an informed decision. Additional economic incentives for suppliers flow from product liability laws that penalize suppliers for security breaches within their control, and insurance underwriters who price in accordance with a professional assessment of risk. To supplement these economic forces, an argument can be made for laws that mandate measures (such as incorporating virus protection software in every operating system) that reduce threats to other users. There may be measures that can be introduced into the infrastructure at extra expense to improve collective security but that may have to be mandated by government because the market provides insufficient or even adverse economic incentives (analogous to automotive seatbelts, which were not common until mandated by government). Another useful government tool is policies for procurement of equipment and software for its internal use: if those policies mandate strong security features, this will stimulate vendors to develop the requisite technology and then amortize the resulting costs over commercial as well as government sales. Or government could simply mandate full disclosure of available information on security features and defects, giving customers what they need to exercise market power.
As in public health, a simple and uncontroversial measure that governments can undertake is a campaign to educate users about the nature of threats and measures they can take to counter them. It can also fund organizations to conduct research, monitor, and respond to security threats (like the U.S. Centers for Disease Control).
Example After a major security incident in 1988 brought to a halt a substantial portion of the Internet, the U.S. government funded the CERT Coordination Center at Carnegie Mellon University, which by its own description "studies Internet security vulnerabilities, handles computer security incidents, publishes security alerts, researches long-term changes in networked systems, and develops information and training to help you improve security at your site." The government also operates a Federal Computer Incident Response Center (FedCIRC) to provide operational support to federal civilian agencies.
Privacy (see section 5.3.2) is similar to intellectual property rights protection in that no completely satisfactory technological solution is possible, and ultimately users must rely to some extent on government regulation or voluntary action on the part of companies gathering personal information. When personal information is disclosed to a company for legitimate business purposes, the user is dependent on that company to appropriately limit the disclosure of personal information to others. There is no technological or other measure the user could directly employ that would guarantee that. Of course, many of the security technologies described in section 5.4 are available to assist the company in exercising its responsibilities.
While there is no doubt that privacy is an area of great concern for users of the Internet, there is considerable debate over the appropriate role for government regulation. Generally, the European Union has imposed a stronger regulatory regime, whereas the United States has relied more on voluntary action. Since the Internet is global, and users may be in different countries from companies that collect their private information, regulatory regimes in single countries or regions have global ramifications.
Example In the United States the Federal Trade Commission has a systematic privacy initiative that monitors Web sites to see if they disclose privacy policies, and initiates actions against sites that violate their own policies under existing consumer fraud legislation. There are no specific laws on consumer privacy, except for those governing financial institutions (the Gramm-Leach-Bliley Act). The European Union, on the other hand, has enacted comprehensive privacy legislation (the Directive on Data Protection) that requires creation of government data protection agencies, registration of databases with those agencies, and in some instances prior approval of disclosure of personal information. These European laws apply to U.S. companies dealing with European citizens, so the United States and Europe negotiated a set of safe harbor principles that when adhered to, will protect U.S. companies from disruption of business and from legal action.
The most fundamental disagreement is over whether the market provides adequate incentives to preserve the right balance between the interests of companies and individual users. A problem with government regulation is that it cannot distinguish the widely varying concerns of different users, whereas voluntary action can. Laws can establish a framework under which privacy policies are disclosed and those disclosed policies are enforced. With adequate disclosure, users can exercise market power by not dealing with companies they feel do not sufficiently respect their personal information. Another approach that accommodates users' varying degrees of concern is conditional policies that allow users to either opt in (the default is no disclosure of personal information to others) or opt out (the default is disclosure). Companies can provide monetary incentives (such as discounts) to users who choose to opt in. A third technological approach is to automate the choice of opt in or opt out transparently to the user.
Example The Platform for Privacy Preferences (P3P) is a standard of the World Wide Web Consortium (W3C). Each user configures her browser to incorporate preferences as to the circumstances under which personal information is to be disclosed, and the P3P standardizes a way for a server to describe its privacy policies to the browser, where those policies are compared to the user preferences. P3P acts as a gatekeeper to allow personal information to be supplied only under circumstances acceptable to the user. However, by itself, such an automated scheme cannot confirm that the described policies are actually adhered to.
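The gatekeeping logic that P3P automates can be sketched as follows. The names and policy vocabulary here are simplified placeholders of my own; real P3P policies are XML documents with a much richer vocabulary (purposes, recipients, retention periods), and as noted above, nothing in the protocol verifies that a declared policy is actually followed.

```python
# User preferences: which purposes are acceptable for each data item.
user_prefs = {
    "email":    {"allow_purposes": {"current"}},            # this transaction only
    "browsing": {"allow_purposes": {"current", "admin"}},   # also site administration
}

# Server's declared privacy policy for the same items.
server_policy = {
    "email":    {"purposes": {"current", "telemarketing"}},
    "browsing": {"purposes": {"admin"}},
}

def may_disclose(item: str) -> bool:
    """Release a data item only if every purpose the server declares
    for it is acceptable under the user's preferences; undeclared
    items are never released."""
    declared = server_policy.get(item, {}).get("purposes", set())
    allowed = user_prefs.get(item, {}).get("allow_purposes", set())
    return bool(declared) and declared <= allowed

assert not may_disclose("email")   # telemarketing not permitted by this user
assert may_disclose("browsing")    # all declared purposes acceptable
```

The browser acts as the gatekeeper, so two users with different preferences get different behavior from the same site without either party negotiating manually.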
Proponents of stronger government regulation in the United States argue that companies are biased toward the dissemination of personal information in light of the revenues they can derive from selling that information, and that any voluntary policies may be arbitrarily abandoned if a company is sold or goes bankrupt. They also argue that concern over privacy is stifling Internet commerce, and thus government regulation that restored confidence would help the economy. Industry has responded to consumer concerns not only through voluntary policies but also through voluntary certification efforts.
Informing users about the use of their personal information is clearly necessary, whether by voluntary or government-mandated means: providing personal information is necessary for legitimate business purposes, but its unauthorized disclosure may not otherwise be visible to the consumer and can bring harm. It is equally clear that consensus has not yet been reached in this debate in the United States.
The balance between free speech (the unconditional right of individuals to freely provide information of their choosing) and legitimate restrictions on those rights has been debated for centuries. (In this context, speech is synonymous with information in all media, not just the spoken word.) The reality is that many governments place legal restrictions on free speech, limiting either what can be published (for example, the literature of certain hate groups is banned in some countries) or limiting access to certain types of information (for example, access to pornography by children is banned in most countries). The Internet poses some special challenges in trying to enforce limits on the publication of or access to speech. The fundamental issue is one of authentication: the Internet includes no internal foolproof mechanism to authenticate either the source or the destination of information transmitted over it. This problem manifests itself in several ways, depending on what limitations on speech a government wants to impose.
Limitations on acceptable speech can be circumvented by moving the source of the offending speech outside jurisdictional boundaries and exploiting the global nature of the network. A government might respond by imposing access control either locally or remotely.
Example The governments of Singapore and China have attempted to censor the Internet by blocking access to certain foreign sites and by imposing regulations on speech on the Internet within their borders (Ang and Nadarajan 1996; Hartford 2000). China did this by maintaining control over all foreign links and blocking certain IP addresses, and Singapore by requiring all Web accesses to pass through a sanctioned "proxy server" that refused to connect to certain sites.
Since such censorship was not a design requirement in the original Internet, technological measures can usually be defeated by sophisticated users. Censorship can be bypassed by frequently moving the offending information around or replicating it in many places. Alternatively, a government may attempt to impose its laws on the information source, requiring it to authenticate and block access to its citizens, though the basic design of the Internet includes no technical provision to identify the country from which a user is accessing the network.
Example France has outlawed literature from Nazi organizations, but such literature is protected free speech under the Constitution and laws of the U.S. A French court attempted in 2001 to enforce its law against Yahoo (which had Nazi paraphernalia for sale in its auctioning site). Yahoo argued that there was no feasible way to restrict access to French citizens alone, and a U.S. court refused to enforce French law in the United States.
Any legal limits on access to speech require at least minimal identifying information about the user attempting access. Verifying that information requires authentication, which in turn requires an infrastructure of third-party authorities who certify identity or identifying characteristics (see section 5.4.3).
Example Placing a lower age limit on access to pornographic literature in the physical economy is relatively easy, because sales clerks interact directly with the buyer and can estimate their age or ask for proof of identity in questionable cases. On the network there is no comparable physical identifying characteristic to rely on, and in no country is there a suitable authentication infrastructure. Indeed, in some countries (like the United States) there is considerable opposition to such an infrastructure based on privacy concerns. In the United States this issue has been addressed by two (at best only partially effective) means—parental control through firewall software in a home computer, and authentication of users at pornographic sites by possession of a valid credit card (it being assumed that a child would not possess such a card).
Legal limits to the publication of or access to speech would be considerably easier to enforce if there were a universal infrastructure for authentication of users. The most effective infrastructure would be a universal requirement for users to obtain authoritative digital certificates containing biometric data and mechanisms to check this biometric data (see section 5.4.3). The certificates could include information on country of citizenship, age, and other relevant personal information, and servers could then implement authentication and access control that implemented jurisdiction-specific policies. This is unlikely to happen soon, in part because of the serious privacy and "Big Brother" concerns that it would raise.
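The kind of certificate-based access control described above can be sketched as follows. This is a toy in which an HMAC stands in for the authority's digital signature (all names are illustrative); a real infrastructure would use public-key signatures so that servers can verify certificates without holding the authority's signing key.

```python
import hashlib
import hmac
import json

# Signing key held by the certifying authority (illustrative only).
AUTHORITY_KEY = b"authority-signing-key"

def issue_certificate(attrs: dict) -> dict:
    """The authority binds personal attributes (age, citizenship)
    to a signature, establishing their integrity."""
    payload = json.dumps(attrs, sort_keys=True).encode()
    sig = hmac.new(AUTHORITY_KEY, payload, hashlib.sha256).hexdigest()
    return {"attrs": attrs, "sig": sig}

def verify_and_admit(cert: dict, min_age: int, banned_countries: set) -> bool:
    """A server checks the signature, then applies its
    jurisdiction-specific access policy to the certified attributes."""
    payload = json.dumps(cert["attrs"], sort_keys=True).encode()
    expected = hmac.new(AUTHORITY_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, cert["sig"]):
        return False  # forged or altered certificate
    a = cert["attrs"]
    return a["age"] >= min_age and a["country"] not in banned_countries

cert = issue_certificate({"age": 17, "country": "FR"})
assert not verify_and_admit(cert, min_age=18, banned_countries=set())
```

Note that any tampering with the certified attributes invalidates the signature, so the policy decision rests on attributes vouched for by the authority rather than claimed by the user.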
An interesting and important question is whether software is protected free speech, or whether it is inherently distinct from information. This is a difficult issue because of the dual purpose of software code, first, as a means of directly controlling a machine (the computer) in the case of object code, and second, as a means programmers use to document and modify a program (especially source code). This question comes up specifically in the copyright protection anticircumvention restrictions, since software running on a personal computer can be an anticircumvention device (see section 8.1.5). Civil libertarians argue that banning the publication and dissemination of such code is a prior restraint of speech, something prohibited by the U.S. Constitution. Copyright owners argue that such code constitutes an anticircumvention device and can therefore be banned under the Digital Millennium Copyright Act. This issue is greatly complicated by compilation (see section 4.4.4): while the source code is primarily for the eyes of programmers, which can be viewed as protected free expression, it can also be automatically translated (compiled) into object code, which is primarily for execution on a computer. Civil libertarians and programmers generally argue that a clear distinction should be made between publication, viewing, and modification of software code (which is purely an expression of ideas) and the act of compiling and executing such code. They also point out, rightly, that source code is an important mechanism for distributing new results in computer science, both conceptually and in terms of implementation, and that it serves as the basis for community-based software development (see sections 4.4.4 and 4.2.4).
Some copyright owners have argued that no legal distinction should be made between source and object code, and between the publication and execution of either source or object code, and that the publication of offending object and source anticircumvention code should be banned as the most practical way to prevent its illegal use. This argument also has validity, in that legally enforcing a ban on the execution of published code against individual users is difficult at best. In light of these ambiguities, the banning of such code is an issue still awaiting a definitive ruling from the U.S. Supreme Court.
Example DeCSS is a small program that allows DVD video files encrypted using the DVD Content Scrambling System to be stored on a computer's disk and played back by unlicensed software-based DVD players on that or another computer. Its origins are in creating an open source DVD player for the Linux platform. DeCSS can be used by a user to move DVD playback to a different platform, but it can also be used as the first step in illegally distributing DVD video in unprotected form. The U.S. District Court issued an injunction in January 2000 against 2600 Magazine, barring it from posting DeCSS source code on the Web or even linking to other Web sites that posted the code (Touretzky 2001). Programmers who believed this ruling improperly put a prior restraint on speech pointed out that the algorithms embodied in the source code could be expressed in many essentially equivalent ways, such as in English or an alternative source language for which no compiler exists. To buttress this point, many equivalent but nonexecutable expressions of the algorithms embodied in DeCSS have been created (some of them humorous), including the hardware description language Verilog, songs, and printing on T-shirts (Touretzky 2002).
The antitrust laws are intended to preserve competition in the industry (the emphasis in Europe) or to protect the interests of consumers (the emphasis in the United States) by regulating certain business activities. Antitrust laws affect several aspects of the software industry, including mergers and acquisitions, cooperation among firms, and business strategies (Katz and Shapiro 1999a; 1999b; Kovacic and Shapiro 1999).
When two firms merge, or one acquires the other, there is generally a regulatory review focused on the harm to consumers due to a reduction of choice, increased prices, and so on. This is a judgment call, because this harm has to be balanced against potential efficiencies and other benefits that might accrue to companies and consumers.
Example A proposed merger of WorldCom and Sprint was abandoned in July 2000 in the face of an antitrust lawsuit filed by the U.S. Department of Justice, largely because of the concentration in business data services it would create. While there were a number of competitors for these services, most could not match the geographic coverage of WorldCom, Sprint, and AT&T. In another case, Microsoft tried to acquire Intuit, a software supplier specializing in personal finance and tax preparation software, but abandoned the deal in 1995 after encountering regulatory opposition because of the market concentration in personal finance software (Microsoft Money and Intuit Quicken).
Mergers come in three primary forms (Farrell and Shapiro 2000). Horizontal mergers involve direct competitors, vertical mergers involve suppliers and customers, and complementary mergers allow firms to broaden their product base to offer a larger product portfolio or a more complete system. Horizontal mergers raise the most regulatory questions because they reduce consumer choice and increase market share, but they can also have significant benefits such as eliminating duplicated costs or achieving synergies. In the software industry, manufacturing and distribution costs are small, so efficiency arguments are largely limited to the development costs for maintenance and upgrades and the costs of customer service. Thus, the arguments in favor of horizontal mergers are arguably weaker than in some other industries.
Example Horizontal mergers are regulated in the United States by the Clayton and Sherman Antitrust Acts and the Federal Trade Commission Act. The U.S. Department of Justice and the Federal Trade Commission have issued standard guidelines for evaluating horizontal mergers in light of these laws. One of the key criteria is the Herfindahl-Hirschman index (HHI) of market concentration, defined as the sum of the squares of the market shares (in percent) of all participants. For example, if there are four firms with market shares of 30, 30, 20, and 20 percent, the HHI = 30² + 30² + 20² + 20² = 2600. The HHI is divided into three ranges of market concentration: unconcentrated (between 0 and 1000), moderately concentrated (between 1000 and 1800), and highly concentrated (between 1800 and 10,000). Mergers that move the HHI above 1800 receive special scrutiny.
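The HHI calculation and the concentration thresholds just described are simple enough to sketch in a few lines of code (the function names here are illustrative, not part of any official guideline):

```python
def hhi(shares_percent):
    """Herfindahl-Hirschman index: sum of squared market shares in percent."""
    return sum(s ** 2 for s in shares_percent)

def concentration(h):
    """Classify an HHI value using the ranges from the DOJ/FTC merger guidelines."""
    if h < 1000:
        return "unconcentrated"
    elif h < 1800:
        return "moderately concentrated"
    else:
        return "highly concentrated"

# The four-firm example from the text: shares of 30, 30, 20, and 20 percent.
h = hhi([30, 30, 20, 20])
print(h, concentration(h))  # 2600 highly concentrated
```

Note how squaring weights larger firms disproportionately: a market split evenly among ten firms has an HHI of only 1000, while a single monopolist yields the maximum of 10,000.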
Of course, there are many possible definitions of market share. One special challenge with software is that revenue, unit sales, and actual use of a software application may diverge significantly. For example, users may purchase and install software just to have it available for occasional use (like sharing files with colleagues), or site license arrangements may offer significant unit discounts. The resulting difficulty of accurate market sizing is combined with an equal difficulty of share attribution, given that no measurable physical resource, territory, or similar metric is involved and that revenues are split across the actual software and many associated services, including consultancy, help desks, and provisioning, all of which vary widely among software makers. Market share analysis is thus fraught with complications.
Antitrust laws also place limits on collaboration and standard-setting activities in the industry (Shapiro 2000) when those activities cross the line into conspiracies among competitors to limit competition or raise prices to the detriment of consumers. Thus far there have been no major accusations of such conspiracies in the software industry, but since collaboration among firms and standard setting are important dynamics in the software industry, caution is in order.
Example A standardization activity that would clearly not be anticompetitive is the agreement on standards for Web services or for component interoperability, since those standards are focused on moving competition from the level of whole applications down to the level of component services that can be mixed and matched (see section 7.3.7). As the intended effect is to offer customers more choices and lower the barriers for entry of suppliers, they cannot be construed as anticompetitive. On the other hand, an explicit agreement with a competitor for a specific application to divide up the market—each competitor focusing on a different platform—may be illegal.
The antitrust laws also deal with issues of monopolization, when one firm has sufficient market share to allow it substantial control over pricing or the ability to exclude competition. In the United States such monopolies are not illegal per se if they arise from normal market forces such as network effects or a patent (Povtero-Sánchez 1999; Shapiro 2001a) or simply competing on the merits. Monopolization by other means or unfair business practices can be illegal. Once a monopoly is created, it places special regulatory constraints on business practices, such as using market power to coerce other firms into exclusive deals or leveraging one monopoly to create a second (a strategy known as tying).
Example In 1982 the U.S. government dropped an antitrust complaint filed in 1969 against IBM alleging that it had illegally monopolized mainframe computers (Levy and Welzer 1985). With the benefit of hindsight, this illustrates the effect of rapid technological change on winners and losers in the industry, since IBM's computer industry market share is much lower today. In 1998 the U.S. government filed an antitrust complaint against Microsoft alleging illegal monopolization of the market for "Intel-compatible PC operating systems" and Web browsers and the illegal tying of the Internet Explorer browser to the operating system (U.S. Dept. of Justice 1998; Furse 1998; 1999; Liebowitz and Margolis 1999). The tying complaint was later dropped, but the findings of fact in the case (U.S. Dept. of Justice 1999) supported the illegal monopolization complaint. As of this writing, the final settlement and remedy are not completed.
Compared to other industries and products, the software products of different companies tend to be more dependent on complements (e.g., application and infrastructure). Also, software suppliers strive to differentiate their products for valid business and economic reasons, in part because of the large economies of scale that make head-to-head competition problematic (see chapter 9). Taken together, these two observations make antitrust law an important consideration in some business decisions.
Example The decision to offer an API that enables extensions to a software product by other suppliers is an important business decision, as is opening up internal interfaces within a product as APIs (see section 4.3.4). On the plus side, such APIs add value to customers by offering them more choice and the ability to mix and match solutions from different suppliers. Although this added value aids suppliers by making their products more valuable to customers, this needs to be balanced against the competition it enables, especially if the suppliers plan to offer products on the other side of the interface being considered as an API. Since failure to provide a fully documented API can constitute illegal tying in some circumstances, antitrust laws need to be taken into account in such business decisions.
For detailed information on U.S. laws relating to computer security and crime, see <http://www.cybercrime.gov/>.
The Federal Trade Commission's initiatives in consumer privacy are described in depth at <http://www.ftc.gov/privacy>.
Full information on the safe harbor principles for the protection of privacy for European citizens in dealing with U.S. companies can be found at <http://www.export.gov/safeharbor>.
Mergers are prohibited if their effect "may be substantially to lessen competition, or to tend to create a monopoly" (15 U.S.C. Section 18, 1988), if they constitute a "contract, combination or conspiracy in restraint of trade" (15 U.S.C. Section 1, 1988), or if they constitute an "unfair method of competition" (15 U.S.C. Section 45, 1988). The merger guidelines referred to are available at <http://www.ftc.gov/bc/docs/horizmer.htm>.
The standard guidelines for acceptable collaboration among competitors from the U.S. Department of Justice and Federal Trade Commission are available at <http://www.ftc.gov/os/2000/04/ftcdojguidelines.pdf>.