Thesis 58

As yet, everyware offers the user no compelling and clearly stated value proposition.

The last of the inhibiting factors we'll be discussing is the deep and as yet unaddressed disconnect between the current discourse around ubiquitous systems and any discernible desire on the part of meaningfully large populations for such systems.

Inside the field, however elaborated they've become with an embroidery of satisfying and clever details, we've told each other these tales of ubiquity so many times that they've become rote, even clichéd; but we've forgotten to ascertain whether or not they make any sense to anyone outside the contours of our consensual hallucination.

HP's Gene Becker describes the issue this way:

The potential uses and benefits of ubicomp often seem 'obvious'; most of us in the field have spun variations of the same futuristic scenarios, to the point where it seems like a familiar and tired genre of joke. 'you walk into the [conference room, living room, museum gallery, hospital ward], the contextual intention system recognizes you by your [beacon, tag, badge, face, gait], and the [lights, music, temperature, privacy settings, security permissions] adjust smoothly to your preferences. your new location is announced to the [room, building, global buddy list service, Homeland Security Department], and your [videoconference, favorite TV show, appointment calendar, breakfast order] is automatically started.' And so on. Of course, what real people need or want in any given situation is far from obvious.

It's ironic, then, that one of the things that real people demonstrably do not want in their present situation is everyware. There is no constituency for it, no pent-up demand; you'll never hear someone spontaneously express a wish for a ubiquitous house or city. There are days, in fact, when it can seem to me that the entire endeavor has arisen out of some combination of the technically feasible and that which is of interest to people working in human-computer interaction. Or worse, much worse: out of marketing, and the desire to sell people yet more things for which they have neither a legitimate need nor even much in the way of honest desire.

What people do want, and will ask for, is more granular. They want, as Mark Weiser knew so long ago, to be granted a god's-eye view of the available parking spaces nearby, to spend less time fumbling with change at the register, to have fewer different remote controls to figure out and keep track of.

And, of course, everyware is the (or at least an) answer to all of these wishes. But until those of us in the field are better able to convey this premise to the wider world in convincing and compelling detail, we can expect that adoption will be significantly slower than might otherwise be the case.


Thesis 59

The necessary processor speed already exists.

Of the major limiting factors on ubiquitous computing, one of the most vexing, and certainly the most fundamental, has always been processor speed. The challenges posed by the deployment of computing out in the everyday environment, whether parsing the meaning of a gesture in real time or tracking 500 individual trajectories through an intersection, have always been particularly processor-intensive.

But if processor speed has historically constituted a brake on development, it needn't any longer. The extravagance of computational resources such applications require is now both technically feasible and, at long last, economic.

The machine I am writing these words on operates at a clock speed of 1.5 GHz; that is, the internal clock by which it meters its processes cycles 1.5 billion times every second. While this sounds impressive enough in the abstract, it's not particularly fast, even by contemporary standards. Central processors that operate more than twice as fast are widely commercially available; a 2004 version of Intel's Pentium 4 chip runs at 3.4 GHz, and by the time this book reaches your hands, the CPU inside the most generic of PCs will likely be faster yet.

We know, too, that relying on CPU clock speeds for estimates of maximum speed can be deceptive: such general-purpose chips are held to speeds well below the theoretical maximum, while specialized chips can be optimized to the requirements of a particular application: video or sound processing, encryption, and so on. In synchrony, CPUs and specialized chips already handle with aplomb the elaborate variety of processor-intensive applications familiar from the desktop, from richly immersive games to real-time multiway videoconferencing.

In principle, then, a locally ubiquitous system (say, one dedicated to household management) built right now from commonly available CPUs and supported by a battery of specialized helpers should be perfectly adequate to the range of routine tasks foreseeable in such a setting. Excepting those problems we've already identified as "AI-hard," which aren't as a rule well-suited to brute-force approaches anyway, there shouldn't be anything in the home beyond the compass of such a system.

Especially if a grid architecture is employed (if, that is, the computational burden imposed by more convoluted processes is distributed through the constellation of locally embedded processors, working in parallel), today's clock speeds are entirely adequate to deliver services to the user smoothly and reliably. Whatever challenges exist, it's hard to imagine that they would be an order of magnitude harder than supporting an iRoom-style collaborative workspace, and that was achieved with 2001-vintage processor speeds.
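
To make the principle concrete, here is a minimal sketch, in Python, of distributing a processor-intensive job across several local workers running in parallel; the node count, the stand-in workload, and the use of the standard multiprocessing module as a coordination layer are all assumptions made purely for illustration, not a description of any actual ubiquitous deployment.

    # A purely illustrative sketch of the grid idea above: a heavy job is
    # split into chunks and farmed out across several local processors
    # working in parallel. The node count and the workload are invented
    # for illustration only.
    from multiprocessing import Pool

    NUM_EMBEDDED_NODES = 8  # assumed number of processors scattered through a room

    def analyze_chunk(chunk_id: int) -> float:
        # Stand-in for one slice of heavy work (say, a window of gesture or
        # trajectory data); here it simply burns cycles to simulate the load.
        total = 0.0
        for i in range(1, 200_000):
            total += (chunk_id * i) % 7
        return total

    if __name__ == "__main__":
        # Distribute the chunks across the constellation of local processors,
        # so that no single CPU shoulders the whole burden.
        with Pool(processes=NUM_EMBEDDED_NODES) as pool:
            results = pool.map(analyze_chunk, range(64))
        print(f"processed {len(results)} chunks across {NUM_EMBEDDED_NODES} workers")

The point of the sketch is only that the burden is shared; a real deployment would of course need far more sophisticated scheduling and communication between nodes.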

The other side of the speed equation is, of course, expense; one-off showpieces for research labs and corporate "visioning" centers are well and good, but their effects are generally achieved at prohibitive cost. In order to support meaningfully ubiquitous systems, componentry must be cheap. Current projections, and not necessarily the most optimistic, indicate that processors with speeds on the order of 2 GHz will cost about what ordinary household electrical components (e.g., dimmer switches) do now, at the end of the decade or very soon thereafter. This would allow an ordinary-sized room to be provisioned with such an abundance of computational power that it is difficult to imagine it all being used, except as part of some gridlike approach to a particularly intractable problem. Less extravagant implementations could be accomplished at negligible cost.
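
As a rough back-of-envelope check on that claim, only the roughly 2 GHz figure is drawn from the projection above; the unit cost and the number of processors per room below are assumptions supplied purely for illustration.

    # Back-of-envelope arithmetic for the provisioning claim above. Only the
    # ~2 GHz projected clock speed comes from the text; the unit cost and the
    # number of embedded nodes per room are assumptions for illustration.
    PROJECTED_CLOCK_GHZ = 2.0      # projected per-processor speed
    ASSUMED_UNIT_COST_USD = 5.0    # assumed cost, comparable to a dimmer switch
    ASSUMED_NODES_PER_ROOM = 40    # assumed processors embedded in one room

    aggregate_ghz = PROJECTED_CLOCK_GHZ * ASSUMED_NODES_PER_ROOM
    total_cost_usd = ASSUMED_UNIT_COST_USD * ASSUMED_NODES_PER_ROOM

    print(f"roughly {aggregate_ghz:.0f} GHz of aggregate clock for about ${total_cost_usd:.0f}")

Crude as the arithmetic is, it conveys the scale of the abundance in question.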

When there are that many spare processing cycles available, some kind of market mechanism might evolve to allocate them: an invisible agora going on behind the walls, trading in numeric operations. But we can leave such speculations for other times. For the moment, let's simply note that, even should Moore's Law begin to crumble and benchmark speeds stagnate rather than continuing their steep upward climb, processing capacity presents no obstacle to the emergence of full-fledged ubiquitous services.