10.1 What ICANN Is Not


Ultimately, ICANN's primary claim to legitimacy and uniqueness rests on its assertion that it is the Internet community's vehicle for bottom-up consensus development or self-governance. Both concepts (the idea of an Internet community and of bottom-up consensus as the basis of self-governance) reflect the legacy of the Internet Engineering Task Force (IETF) and the Internet Society. The attempt to transmute the technical community's methods into a new type of international organization can be explained in part by the important role the technical community, notably Postel, Cerf, and other members of the Internet Society inner circle, played in the formation of ICANN. But it is also true that a significant measure of the technical community's power in that process was derived from widespread acceptance of the idea that the IETF governance model was unique and worthy of emulation. As one theorist has written, 'The engineers who gave us the Net (hardly a noncontentious group) also gave us the first inkling of a better way to evolve policy in a global online space' (Johnson and Crawford 2000). Key politicians such as Ira Magaziner were deeply impressed with the methods of the technical community. Even the World Intellectual Property Organization (WIPO) appropriated the technical community's nomenclature for its own domain name processes, labeling documents RFCs (Requests for Comments).

10.1.1 The Theory: Federalism and Consensus-Based Self-Governance

An explicit theoretical justification for modeling Internet governance on the IETF has been developed by David Johnson, with Susan Crawford (2000; 2001). As counsel to Network Solutions during the formative stages of ICANN's development, Johnson was an influential participant in the Internet governance process. His ideas directly influenced Magaziner, Burr, and ICANN's first board chair, Esther Dyson.

Johnson's thinking about ICANN is rooted in his earlier work with the legal scholar David Post on jurisdiction in cyberspace (Johnson and Post 1996; 1997; 1998). Johnson and Post recognized that the fundamental problem of Internet law and governance is that existing institutions (the democratic nation-state and the international treaty organizations) are based on the control of physical territory. Cyberspace, in contrast, creates an arena for human interaction in which location doesn't matter much.

A single global government is an unattractive solution to this problem, for reasons too numerous to recount here. So Johnson and Post sought solutions in adapting the concept of federalism. A federalist structure breaks down the collective action problem into smaller units but maintains some coordination among the parts. Post and Johnson approve of the idea of varying, even competing, sets of rules. Additionally, they contend that federalism works best when congruence (the degree to which the effects of an individual's actions are confined to the governance unit to which the individual belongs) is high but not perfect and complete (Johnson and Post 1998, 10). Effects and rule-making authority need to be closely related, but the optimum is somewhere below 100 percent. The virtue of an interdependent federalist structure is that local experiments create spillover effects that provoke reactions and adjustments by other decision-making units. This chain of mutual adjustments can push the social system as a whole to higher levels of welfare. A single, integrated jurisdiction, or a collection of isolated governance units, on the other hand, is more likely to get stuck in a suboptimal equilibrium. The analysis is supported by the results of modeling work by Stuart Kauffman (1993).

If one accepts the federalist premise, the next question is, What should be the collective action unit in cyberspace? Territorial governments are out, of course, because geography is mostly irrelevant to the Internet's virtual spaces. Johnson and Post propose the principle that decision-making authority over parts of the online world should be allocated to people who are 'most affected' by the decisions. To implement this principle, they suggest that a federalist structure could be based on the 'natural electronic boundaries' of the Internet: the 'territories' or interfaces of the many private and local network systems it connects. In short, private property rights over network access and facilities provide the units of their decentralized governance structure. In Johnson's view, participation in the Internet is fundamentally voluntary in nature. And under a federalist structure, a local decision-making unit, once it has decided to join, can make its own decisions about how open or closed it will be to the rest of the Internet.

So far, so good. But the DNS and IP address roots still need a central point of coordination at the top. Where does it come from, and how is it governed? David Post emphatically recognizes the danger that such a central authority will become a Leviathan, exploiting its administration of resources to control users or the industry (Post 1998). Johnson, on the other hand, believes that the IETF governance model provides a solution to the dilemma. Participation in IETF, he notes, is voluntary and open. It is a private sector organization[2] that operates, allegedly, on the basis of working groups that allow initiatives to start at the bottom and move up through the hierarchy if and when consensus for the action develops. And so, the new regime, like the IETF, should be private rather than governmental. It should be open and consensual in nature. Ideally, the root administrator should implement only those policies that reflect the broadest consensus among affected stakeholders. Consensus exists, Johnson and Crawford (2000) write, when 'opposition to a particular policy is limited in scope and intensity (or is unreasoned) and opposition does not stem from those specially impacted by the policy' (3).

This line of reasoning leads Johnson and Crawford to an explicit rejection of democratic (one-person/one-vote) methods of governance. There would be an extremely low level of congruence between a global electorate and the Internet stakeholders 'most affected' by its decisions.[3] Moreover, voting presumes that ICANN is some kind of sovereign authority, which they deny. 'The principle of one-person one-vote provides a basis for delegating a people's sovereignty to a government. It does not provide legitimacy for a system that seeks voluntary compliance with policies that have the support or acquiescence of all groups particularly impacted by those policies' (Johnson and Crawford 2001, 2). Their argument against democracy is more than just fear of public irrationality. They recognize that a surrender of sovereignty to an institution empowers the institution, making it easier for it to assert control over more and more aspects of life because it can credibly claim to be 'acting on behalf of the people.'

10.1.2 Critique of the Theory

There are two problems with the Johnson-Crawford theses. First, as demonstrated in chapter 9, neither 'federalism' nor 'bottom-up consensus' describes how ICANN actually operates. ICANN's management and professional staff control its agenda and frequently define policy unilaterally in the course of drafting contracts with registries and registrars. The supporting organizations have never developed the tradition or culture of independent working groups that are formed from the bottom and pass proposals up a consensus development hierarchy.

Second, the political bargains that created ICANN were struck by parties unsympathetic, if not hostile, to both federalist decentralization based on private property rights and bottom-up processes. The second problem explains the first. With a single point of control (the root) and competition for the political and economic benefits that can be derived from it, it was inevitable that political strength, not a communitarian commitment to rough consensus, would drive decisions. And because ICANN was created and captured by a political coalition that wanted to impose uniform, global regulations upon the Internet, a federalist model was deliberately avoided in favor of a broad consolidation of authority over all aspects of the name space. Johnson and Crawford write, 'Participation in [the Internet] doesn't subject the participants to rules made by a global governing body.' They seem to have missed the point. The purpose of ICANN is to change that. And it is succeeding, its uniform dispute resolution policy being the earliest and most obvious example.

There is a fundamental difference between ICANN and the IETF. In the end, the IETF produces only technical standards documents. Their actual adoption and implementation is entirely voluntary. The losers in the process are free to promote acceptance of other standards; there are many other places to go to get standards defined and agreed upon. Thus, one of the fundamental prerequisites of consensus-based decision making exists in IETF: it is normally in everyone's interest, from the working group level on up, to gain the assent and participation of as many relevant players as possible. There is a whole layer-one of market acceptance-interposed between the IETF and the society.

ICANN is in a completely different situation. It has monopoly control of an essential resource: the root. Control of the DNS root gives it substantial power over all top-level domain name registries, and through them it can control the domain name industry as a whole. Through its control of the identifiers it can also regulate various aspects of end user behavior. Johnson and Crawford's analysis does not devote much attention to the powerful network externalities that keep Internet service providers 'voluntarily' pointing at the ICANN root, or to whether this gives the central authority quasi-coercive powers. A regime is not necessarily 'voluntary' simply because it is based on 'contracts.' Short of starting another root, a costly and risky prospect, there is no one else for registries to contract with.

ICANN consistently deviates from the bottom-up consensus model because of the type of decisions it has to make. The underlying subject of ICANN policies is the distribution of wealth among various industry players and consumers. Thus, if any actor or coalition of actors can gain more influence over the process and exploit it to gain a larger share of the pie, they will do so. ICANN's domain name policies are driven by power politics and economic conflicts of interest, not consensus.

There are also practical reasons why the concept of bottom-up consensus cannot work within ICANN. As soon as one concedes that one can move forward on the basis of 'rough consensus' rather than unanimity, one has eliminated what is supposed to be the prime virtue of consensus-based processes: the need to persuade, rather than overrule or ignore, minorities. Unanimity is a stringent check on the abuse of power. 'Rough consensus,' on the other hand, is informal and cannot be precisely defined and measured. It must be 'recognized' or 'declared.' Indeed, Johnson and Crawford's specific definition of consensus requires discerning judgments not only about how much opposition there is to a policy but also about whom the opposition comes from and whether those specific parties are substantially impacted by the policy. Herein lies the most serious problem with the practicality of their consensus model. In large, impersonal institutions, recognition of consensus is complex and subjective, and hence easily abused. The whole problem of identifying a legitimate exercise of authority simply reverts to debates about whether a consensus really exists.

Johnson and Crawford recognize this. 'The process of consensus-building is not easy and will be subject to subversion,' they write. 'The presence of a consensus can be suggested when nothing of the kind can be achieved.' They believe, however, that this problem has been addressed in the ICANN context by contracts that require ICANN to produce carefully documented 'demonstrations of consensus support.' But demonstrated to whom? Who enforces the requirement? Other than the Department of Commerce, which is a partner in the regime and at any rate claims to be promoting 'self-governance,' there is no formal oversight body, independent of ICANN, to review its declarations. And because it is a private, contractual regime, the legal standing of anyone who would challenge its actions in the courts is unclear.

[2] Of course, the IETF originated as an extension of government research contracts, and received funding from the National Science Foundation and other government agencies (see chapter 5).

[3] The registries and registrars regulated by ICANN, for example, 'would not voluntarily agree to contracts that submit decision-making to voting by an unpredictable populace of those who may or may not have a significant stake in (or even pay much attention to) the resulting rules' (Johnson and Crawford 2000, 2).


Ruling the Root: Internet Governance and the Taming of Cyberspace
ISBN: 0262134128
Year: 2006
Pages: 110
