In considering Mark Weiser's "ubiquitous" computing alongside all those efforts that define the next computing as one that is "mobile" or "wearable" or "connected" or "situated," one is reminded time and again of the parable of the six blind men describing an elephant.
We've all heard this one, haven't we? Six wise elders of the village were asked to describe the true nature of the animal that had been brought before them; sadly, age and infirmity had reduced them all to a reliance on the faculty of touch. One sage, trying and failing to wrap his arms around the wrinkled circumference of the beast's massive leg, replied that it must surely be among the mightiest of trees. Another discerned a great turtle in the curving smoothness of a tusk, while yet another, encountering the elephant's sinuous, muscular trunk, thought he could hardly have been handling anything other than the king of snakes. None of the six, in fact, could come anywhere close to agreement regarding what it was that they were experiencing, and their disagreement might have become quite acrimonious had the village idiot not stepped in to point out that they were all in the presence of the same creature.
And so it is with post-PC computing. Regardless of the valid distinctions between these modes, technologies, and strategies, I argue that such distinctions are close to meaningless from the perspective of people exposed to the computing these theories all seem to describe.
Historically, there have been some exceptions to the general narrowness of vision in the field. Hiroshi Ishii's Tangible Media Group at the MIT Media Lab saw their work as cleaving into three broad categories: "interactive surfaces," in which desks, walls, doors, and even ceilings were reimagined as input/output devices; "ambients," which used phenomena such as sound, light, and air currents as peripheral channels to the user; and "tangibles," which leveraged the "graspable and manipulable" qualities of physical objects as elements of the human interface.
A separate MIT effort, Project Oxygen, proceeded under the assumption that a coherently pervasive presentation would require coordinated effort at all levels; they set out to design a coordinated suite of devices and user interfaces, sensor grids, software architecture, and ad hoc and mesh-network strategies. (Nobody could accuse them of lacking ambition.)
These inclusive visions aside, however, very few of the people working in ubicomp or its tributaries seem to have quite gotten how all these pieces would fit together. From the user's point of view, I'd argue, these are all facets of a single larger experience.
What is that experience? It involves a diverse ecology of devices and platforms, most of which have nothing to do with "computers" as we've understood them. It's a distributed phenomenon: The power and meaning we ascribe to it are more a property of the network than of any single node, and that network is effectively invisible. It permeates places and pursuits that we've never before thought of in technical terms. And it is something that happens out here in the world, amid the bustle, the traffic, the lattes, and the gossip: a social activity shaped by, and in its turn shaping, our relationships with the people around us.
And although too many changes in the world get called "paradigm shifts" (the phrase has been much abused in our time), when we consider the difference between our experience of PCs and the thing that is coming, it is clear that in this case no other description will do. Its sense of a technological transition entraining a fundamental alteration in worldview, and maybe even a new state of being, is fully justified.
We need a new word to begin discussing the systems that make up this state of being: a word deliberately vague enough to collapse all of the inessential distinctions in favor of capturing the qualities they all have in common.
What can we call this paradigm? I think of it as everyware.