Nothing takes place in a vacuum. As former PARC researcher Paul Dourish observes, in his 2001 study Where the Action Is, "interaction is intimately connected with the settings in which it occurs." His theory of "embodied interaction" insists that interactions derive their meaning by occurring in real time and real space and, above all, among and between real people.
In Dourish's view, the character and quality of interactions between people and the technical systems they use depend vitally on the fact that both are embedded in the world in specific ways. A video chat is shaped by the fact that I'm sitting in this office, in other words, with its particular arrangement of chair, camera, and monitor, and not that one; whether a given gesture seems an appropriate mapping to a system command will differ depending on whether the user is Sicilian or Laotian or Senegalese.
This seems pretty commonsensical, but it's something we've largely been able to overlook throughout the PC era, because we've historically conceived of personal computing as independent of context.
In turning on your machine, you enter the nonspace of its interface, and that nonspace is identical whether your laptop is sitting primly atop your desk at work or teetering atop your knees on the library steps. Accessing the Web through such interfaces only means that the rabbit hole goes deeper; as William Gibson foresaw in the first few pages of Neuromancer, it really is as if each of our boxes is a portal onto a "consensual hallucination" that's always there waiting for us. No wonder technophiles of the early 1990s were so enthusiastic about virtual reality: it seemed like the next logical step in immersion.
By instrumenting the actual world, though, as opposed to immersing a user in an information-space that never was, everyware is something akin to virtual reality turned inside out. So it matters quite a lot when we propose to embed functionality in all the surfaces the world affords us: we find ourselves deposited back in actuality with an almost audible thump, and things work differently here. If you want to design a system that lets drive-through customers "tap and go" from the comfort of their cars, you had better ensure that the reader is within easy reach of a seated driver; if your building's smart elevator system is supposed to speed visitor throughput, it helps to ensure that the panel where people enter their floors isn't situated in a way that produces bottlenecks in the lobby.
Interpersonal interactions are also conditioned by the apparently trivial fact that they take place in real space. Think of all of the subtle, nonverbal cues we rely upon in the course of a multi-party conversation and how awkward it can be when those cues are stripped away, as they are in a conference call.
Some ubiquitous systems have attempted to restore these cues to mediated interactions. One of Hiroshi Ishii's earlier projects, for example, called ClearBoard, attempted to "integrate interpersonal space and shared workspace seamlessly"; it was essentially a shared digital whiteboard, with the important wrinkle that the image of a remote collaborator was projected onto it, "behind" what was being drawn on the board itself.
Not only did this allow partners working at a distance from one another to share a real-time workspace, it preserved crucial indicators like "gestures, head movements, eye contact, and gaze direction": precisely the sort of little luxuries that do so much to facilitate communication in immediate real space and that are so often lacking in the virtual.
A sensitively designed everyware will take careful note of the qualities our experiences derive from being situated in real space and time. The more we learn, the more we recognize that such cues are more than mere niceties: they are, in fact, critical to the way we make sense of our interactions with one another.