Thesis 20

Everyware unavoidably invokes the specter of multiplicity.

One word that should bedevil would-be developers of everyware is "multiple," as in multiple systems, overlapping in their zones of influence; multiple inputs to the same system, some of which may conflict with each other; above all, multiple human users, each equipped with multiple devices, acting simultaneously in a given space.

As we've seen, the natural constraints on communication between a device and its human user imposed by a one-to-one interaction model mean that a PC never has to wonder whether I am addressing it, or someone or -thing else in our shared environment. With one application open to input at any given time, it never has to parse a command in an attempt to divine which of several possibilities I might be referring to. And conversely, unless some tremendously processor-intensive task has monopolized it for the moment, I never have to wonder whether the system is paying attention to me.

But the same thing can't really be said of everyware. The multiplicity goes both ways, and runs deep.

Perhaps my living room has two entirely separate and distinct voice-activated systems (say, the wall screen and the actual window) to which a command to "close the window" would be meaningful. How are they to know which window I mean?

Or maybe our building has an environmental control system that accepts input from personal body monitors. It works just fine as long as there's only one person in the room, but what happens when my wife's monitor reports that she's chilly at the same moment that mine thinks the heat should be turned down?

It's not that such situations cannot be resolved. Of course they can be. It's just that designers will have to explicitly anticipate such situations and devise rules to address them, something that gets exponentially harder when wall screen and window, shirt telemetry and environmental control system, are all made by different parties.

Multiplicity in everyware isn't just a user-experience issue, either. It's a question that goes directly to the management and allocation of computational resources, involving notions of precedence. Given the resources available locally, which of the many running processes present gets served first? What kind of coordinating mechanisms become necessary?

The situation is complicated still further by the fact that system designers cannot reasonably foresee how these multiple elements will behave in practice. Ideally, a pervasive household network should be able to mediate not merely among its own local, organic resources, but whatever transient ones are brought into range as well.

People come and go, after all, with their personal devices right alongside them. They upgrade the firmware those devices run on, or buy new ones, and those all have to work too. Sometimes people lose connectivity in the middle of a transaction; sometimes their devices crash. As Tim Kindberg and Armando Fox put it, in their 2002 paper "System Software for Ubiquitous Computing," "[a]n environment can contain infrastructure components, which are more or less fixed, and spontaneous components based on devices that arrive and leave routinely." Whatever infrastructure is proposed to coordinate these activities had better be able to account for all of that.

Kindberg and Fox offer designers a guideline they call the Volatility Principle: "you should design ubicomp systems on the assumption that the set of participating users, hardware and software is highly dynamic and unpredictable. Clear invariants that govern the entire system's execution should exist."

In other words, no matter what kind of hell might be breaking loose otherwise, it helps to have some kind of stable arbitration mechanism. But with so many things happening at once in everyware, the traditional event queue (the method by which a CPU allocates cycles to running processes) just won't do. A group of researchers at Stanford (including Fox) has proposed a replacement better suited to the demands of volatility: the event heap.

Without getting into too much detail, the event heap model proposes that coordination between heterogeneous computational processes be handled by a shared abstraction called a "tuple space." An event might be a notification of a change in state, like a wireless tablet coming into the system's range, or a command to perform some operation; any participating device can write an event to the tuple space, read one out, or copy one from it so that the event in question remains available to other processes.

In this model, events expire after a specified elapse of time, so they're responded to either immediately, or (in the event of a default) not at all. This keeps the heap itself from getting clogged up with unattended-to events, and it also prevents a wayward command from being executed so long after its issuance that the user no longer remembers giving it. Providing for such expiry is a canny move; imagine the volume suddenly jumping on your bedside entertainment system in the middle of the night, five hours after you had told it to.

The original Stanford event heap implementation, called iRoom, successfully coordinated activities among several desktop, laptop, and tablet Windows PCs, a Linux server, Palm OS, and Windows CE handheld devices, multiple projectors, and a room lighting controller. In this environment, moving a pointer from an individual handheld to a shared display was easily achieved, while altering a value in a spreadsheet on a PDA updated a 3D model on a machine across the room. Real-world collaborative work was done in iRoom. It was "robust to failure of individual interactors," and it had been running without problems for a year and a half at the time the paper describing it was published.

It wasn't a perfect solution (the designers foresaw potential problems emerging around scalability and latency), but iRoom and the event heap model driving it were an important first response to the challenges of multiplicity that the majority of ubiquitous systems will eventually be forced to confront.