Largely as a consequence of their complex and densely interwoven nature, when ubiquitous systems break down it may not be possible to figure out where something has gone wrong. Even expert technicians may find themselves unable to determine which component or subsystem is responsible for the fault.
Let's consider the example of a "smart" household-management system, to which all of the local heating, lighting, ventilation, and plumbing infrastructure has been coupled. In the hope of striking a balance between comfort and economy, you've set its winter mode to lower any room's temperature to 60 degrees Fahrenheit when that room has been empty for ten minutes or more, but to maintain it at 68 otherwise.
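The rule just described is simple enough to state as code. The sketch below is purely illustrative, with all names invented; it assumes the only input is the time since the room's motion sensor last registered movement.

```python
# Hypothetical sketch of the winter-mode rule described above:
# hold a room at 68 °F while occupied, dropping to 60 °F once it
# has been empty for ten minutes or more.

OCCUPIED_TEMP_F = 68
VACANT_TEMP_F = 60
VACANCY_TIMEOUT_MIN = 10

def target_temperature(minutes_since_last_motion: float) -> int:
    """Return the heating setpoint for a room, given its sensor reading."""
    if minutes_since_last_motion >= VACANCY_TIMEOUT_MIN:
        return VACANT_TEMP_F
    return OCCUPIED_TEMP_F
```

Stated this way, the rule makes the dependency chain visible: the setpoint is only as trustworthy as the motion reading feeding it, which is exactly where the troubleshooting scenario that follows begins.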
When the heat fails to come on in one room or another, which of the interlinked systems involved has broken down? Is it a purely mechanical problem with the heater itself, the kind of thing you'd call a plumber for? Is it a hardware issue, say a failure of the room's motion detector to properly register your presence? Maybe the management interface has locked up or crashed entirely. It's always possible that your settings file has become corrupt. Or perhaps these systems have between them gotten into some kind of strange feedback loop.
In the latter case particularly, where the problem may indeed not reside in any one place at all but rather arises out of the complex interaction of independent parts, resolving the issue is going to present unusual difficulties. Diagnosis of even simple faults in ubiquitous systems will likely prove inordinately time-consuming by current standards, but systems that display emergent behavior may confound diagnosis entirely. Literally the only solution may be to power everything down and restart components one by one, in various combinations, until a workable and stable configuration is once again reached.
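That last-resort procedure amounts to a brute-force search over subsets of components. The sketch below is a toy illustration, not a real diagnostic tool: the component names, the `is_stable` check, and the planted fault are all invented for the example, standing in for whatever integration test a household system might actually run.

```python
from itertools import combinations

COMPONENTS = ["heater", "motion_sensor", "management_ui", "settings_store"]

def is_stable(subset) -> bool:
    """Stand-in for a real stability check. The invented fault here is
    emergent: the sensor and the interface interact badly only when
    both are running, so neither looks broken on its own."""
    return not {"motion_sensor", "management_ui"} <= set(subset)

def find_workable_configuration(components):
    """Try ever-smaller combinations of components, returning the first
    one that proves stable -- i.e., keep the most functionality we can."""
    for size in range(len(components), 0, -1):
        for subset in combinations(components, size):
            if is_stable(subset):
                return subset
    return ()
```

Note what the search costs: with *n* components there are 2^n possible configurations, which is precisely why this kind of diagnosis-by-rebooting scales so badly as everyware proliferates.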
This will mean rebooting the car, or the kitchen, or your favorite sweater, maybe once and maybe several times, until every system that needs to do so has recognized the others and basic functionality has been restored to them all. And even then, of course, the interaction of their normal functioning may entrain the same breakdown. Especially when you consider how dependent on everyware we are likely to become, the prospect of having to cut through such a Gordian tangle of interconnected parts just to figure out which one has broken down is somewhat less than charming.
There's good reason to believe that users will understand their transactions with ubiquitous systems to be essentially social in nature, whether consciously or otherwise, and this will be true even if there is only one human party to a given interaction.
Norbert Wiener, the "father of cybernetics," had already intuited something of this in his 1950 book, The Human Use of Human Beings: according to Wiener, when confronted with cybernetic machines, human beings found themselves behaving as if the systems possessed agency.
This early insight was confirmed and extended in the pioneering work of Byron Reeves and Clifford Nass, published in 1996 as The Media Equation. In an extensive series of studies, Reeves and Nass found that people treat computers more like other people than like anything else: that, in their words, computers "are close enough to human that they encourage social responses." (The emphasis is present in the original.) We'll flatter a computer, or try wheedling it into doing something we want, or insult it when it doesn't, even if, intellectually, we're perfectly aware how absurd this all is.
We also seem to have an easier time dealing with computers when they, in turn, treat us politely: when they apologize for interrupting our workflow or otherwise acknowledge the back-and-forth nature of communication in ways similar to those our human interlocutors might use. Reeves and Nass urge the designers of technical systems, therefore, to attend closely to the lessons we all learned in kindergarten and engineer their creations to observe at least the rudiments of interpersonal etiquette.
Past attempts to incorporate these findings into the design of technical systems, while invariably well-intentioned, have been disappointing. From Clippy, Microsoft's widely loathed "Office Assistant" ("It looks like you're writing a letter"), to the screens of Japan Railways' ticket machines, which display an animated hostess bowing to the purchaser at the completion of each transaction, none of the various social interfaces has succeeded in doing anything more than reminding users of just how stilted and artificial the interaction is. Even Citibank's ATMs sound merely disconcerting, like some miserly cousin of HAL 9000, when they use the first person in apologizing for downtime or other violations of user expectations ("I'm sorry, I can only dispense cash in multiples of $20 right now.")
But genuinely internalizing the insights of The Media Equation will be critical for the designers of ubiquitous systems. Some are directly relevant to the attempted evocation of seamlessness ("Rule: Users will respond to the same voice on different computers as if it were the same social actor"), while others speak to the role of affect in the ubiquitous experience: notably, the authors' finding that the timing of interactions plays a critical role in shaping their interpretation, just as much as their content does.[*] Coming to grips with what Reeves and Nass are trying to tell us will help designers accept the notion that people will more often understand their interactions with everyware to be interpersonal in nature than technical.
These findings take on new importance when people encounter a technology that, by design, borders on the imperceptible. When there are fewer visible cues as to a system's exact nature, we're even more likely to mistake it for something capable of reciprocating our feelings, and we will be that much more hurt if it does not.