Ubiquitous Computing


Over the past 60 years, the ratio of humans to computers has been changing. In the early years of computing, the ratio of humans to computers was many to one: many people worked on one mainframe computer. Then came the era of the personal computer, and the ratio changed to one to one: people who used computers had their own on their desks. Recently, however (and this will be even more true in the future), the ratio has changed again, so that one person now has many "computers" under his or her control: a laptop, digital camera, MP3 player, mobile phone, car, microwave, television, and on and on. In the words of Mark Weiser, the Xerox PARC scientist who wrote the seminal papers on the subject, most of these computers are "invisible, yet all around us."

The era of ubiquitous computing (or ubicomp) has, like so much of the "future" technology in this chapter, already started; it just isn't widespread yet. As microprocessors and sensors grow ever cheaper and also more powerful, it's easy to imagine the ratio of humans to computers becoming one to thousands. Most of these computers will be embedded in the products we own, and aside from the behavior they afford, they will be imperceptible to us. We won't be controlling them via a keyboard and mouse either. As described in Chapter 6, these interfaces will have no faces; we'll engage with them using voice, touch, and gestures.

Note

Designers have yet to figure out ways of documenting the gestures, voice commands, and body positions that will trigger and engage ubicomp systems. It's been suggested that dance notation or some variation could be used.


Interaction designers have a major part to play in the design of ubicomp systems, and it will be an exciting and interesting time. The possibilities for interactions between people through ubicomp are simply astounding. While you get ready in the morning, your bathroom mirror might show you your calendar, the weather report for the day, and perhaps e-mail from your friends. The bus stop might indicate when the next bus will arrive and how crowded it is. The bus itself might have digital notes on it left by passengers ("This seat is broken"). At your office, a wall might be your monitor, turning on when you tell it to. Meeting rooms might automatically record what is said and drawn on digital whiteboards. Any room you are in throughout the day might play music of your choice and adjust to the temperature you like based on the clothes you are wearing.

This scenario sounds to us now like science fiction or those old AT&T "You Will" commercials, but it likely isn't too far off, and each of these moments will need the skills and talents of interaction designers to make them easy, fun, and appropriate. How do you change the bathroom mirror from displaying the weather report to displaying e-mail? How do riders leave or see messages left on a bus? The incredible range of design opportunities is apparent.

Frankly, the stakes are simply too high in ubicomp for interaction designers not to be involved. In a typical interaction with a digital device right now, users are in control of the engagement. They determine when the engagement stops and starts. They control how the computer (and through the computer, others) sees and experiences them. Users' bodies, except for their hands and eyes, are for the most part irrelevant. None of this is true in ubicomp.

Users may step into a room and unknowingly begin to engage with a ubicomp system, or many systems. The thermostat, door, light fixture, television, and so on may all be part of different systems, wired to respond to a person's presence. Where users are in the room, even the direction they are facing, may matter. Standing near the television and facing it may trigger it to turn on, as could a particular gesture, such as pretending to click a remote control in the air. But because users may not know any of this, they have no way of controlling how they present themselves to the system. Perhaps they don't want the room to know they are there!

The implications of ubicomp are profound, and it will be up to interaction designers to make these systems discoverable, recoverable, safe, and humane. Like robots, ubicomp systems are often both products and services, so all the skills, methods, and techniques discussed throughout this book (and more) will be needed to design them in a way that works for humans. One can easily imagine how ubicomp systems could get out of control, embarrassing and annoying us. Our privacy could be impinged upon every day, especially since ubicomp is hard to see without signage systems and icons on objects and in areas to let us know we are in a ubicomp environment. We will need to know what is being observed, and how, and where, but hopefully without filling our rooms with signs.

Interaction designers need to design ways for people not only to understand these systems, but also to gain access to them if problems occur. When problems happen (the system switches off the TV every time you sneeze!), how can they be corrected? Is it the lamp that controls the TV, or is it the wall?

Adam Greenfield on Everyware

Photo courtesy of Nurri Kim

Adam Greenfield, author of Everyware: The Dawning Age of Ubiquitous Computing (2006), is an internationally recognized writer, user experience consultant, and critical futurist. Before starting his current company, Studies and Observations, he was lead information architect for the Tokyo office of Web consultancy Razorfish; prior to that, he worked as senior information architect for marchFIRST, also in Tokyo. He's also been, at various points in his career, a rock critic for SPIN magazine, a medic at the Berkeley Free Clinic, a coffeehouse owner in West Philadelphia, and a PSYOP sergeant in the U.S. Army's Special Operations Command.

What do interaction designers need to know about ubiquitous computing, what you call "everyware"?

Probably the single most important thing that we need to wrap our heads around is multiplicity.

Instead of the neatly circumscribed space of interaction between a single user and his or her PC, his or her mobile device, we're going to have to contend with a situation in which multiple users are potentially interacting with multiple technical systems in a given space at a given moment.

This has technical implications, of course, in terms of managing computational resources and so on, but for me the most interesting implications concern the quality of user experience. How can we best design informational systems so that they (a) work smoothly in synchrony with each other, and (b) deliver optimal experiences to the overburdened human at their focus? This is the challenge that Mark Weiser and John Seely Brown refer to as "encalming, as well as informing," and I think it's one we've only begun to scratch the surface of addressing.

How will the interactions we have with digital products now differ from those in the future?

The simple fact that networked information-processing devices are going to be deployed everywhere in the built environment rather strongly implies the inadequacy of the traditional user interface modalities we've been able to call on, most particularly keyboards and keypads.

When a room, or a lamp post, or a running shoe is, in and of itself, an information gathering, processing, storage, and transmission device, it's crazy to assume that the keyboard or the traditional GUI makes sense as a channel for interaction, somewhat akin to continuing to think of a car as a "horseless carriage." We're going to need to devise ways to interact with artifacts like these that are sensitive to the way we use them, biomechanically, psychologically, and socially. Especially if we want the systems we design to encalm their users, we're going to need to look somewhere else.

Voice and gestural interfaces, in this context, are very appealing candidates, because they so easily accommodate themselves to a wide variety of spaces and contexts, without taking up physical space, or preventing the user from attending to more focal tasks. They become particularly interesting with the expansion in the number of child, elderly, or nonliterate users implied by the increased ambit of post-PC informatics.

You've spoken about "design dissolving into behavior." How can interaction designers accomplish that?

Well, that's a notion of Naoto Fukasawa's, that interactions with designed systems can be so well thought out by their authors, and so effortless on the part of their users, that they effectively abscond from awareness.

Following him, I define everyware at its most refined as "information processing dissolving in behavior." We see this, for example, in Hong Kong, where women leave their RFID-based Octopus cards in their handbags and simply swing their bags across the readers as they move through the turnstiles. There's a very sophisticated transaction between card and reader there, but it takes 0.2 seconds, and it's been subsumed entirely into this very casual, natural, even jaunty gesture.

But that wasn't designed. It just emerged; people figured out how to do that by themselves, without some designer having to instruct them in the nuances. So I'd argue that creating experiences with ubiquitous systems that are of similar quality and elegance is largely a matter of close and careful attention to the way people already use the world. The more we can accommodate and not impose, the more successful our designs will be.


Another challenge when designing for ubicomp is that most ubicomp systems will likely be stateless, meaning that they will change from moment to moment; there won't be a place in time (a specific state) that the system can go back to. Users won't be able to refer to an earlier moment and revert to that, or at least not easily, making it harder to undo mistakes ("Wait, what did I just say that caused all the windows of the room to open?" or "Pretend I didn't just walk into this room."). Interaction designers will need to take this feature of ubicomp systems into account and design without the benefits of Undo commands and Back buttons.
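The difference is easy to see in code. Below is a minimal sketch (all names hypothetical, not from this book) contrasting a conventional application, which records each prior state and can therefore offer Undo, with a stateless ubicomp-style system whose output depends only on the current sensor event, leaving nothing to revert to.

```python
class UndoableApp:
    """Conventional GUI model: every change leaves a prior state in history."""
    def __init__(self):
        self.text = ""
        self.history = []          # past states we can revert to

    def type_text(self, s):
        self.history.append(self.text)   # remember the state before the change
        self.text += s

    def undo(self):
        if self.history:
            self.text = self.history.pop()   # return to an earlier moment


def stateless_room(sensor_event, windows_open):
    """Ubicomp model: the response depends only on the current event.
    Nothing is recorded, so there is no earlier state to go back to."""
    if sensor_event == "voice:open":
        return True
    if sensor_event == "voice:close":
        return False
    return windows_open   # unrecognized events leave things as they are


app = UndoableApp()
app.type_text("hello")
app.type_text(" world")
app.undo()
print(app.text)            # "hello" -- the mistake is reversible

windows = stateless_room("voice:open", windows_open=False)
# There is no windows.undo(); the designer must provide an explicit
# counter-action ("close") rather than a generic Back button.
```

The stateless room can only be steered forward with new events, which is exactly why designers must make counter-actions as discoverable as the actions themselves.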

As with all systems (but, again, more so), it is incumbent upon interaction designers to instill meaning and values into ubicomp. When the things around us are aware, monitoring us and capable of turning our offices, homes, and public spaces into nightmares of reduced civil liberties and insane levels of personalization ("Hi Sarah! Welcome back to the bus! I see you are wearing jeans today. Mind if I show you some ads for Levi's?"), interaction designers need to have compassionate respect for the people who will be engaged with them, some of them unwillingly and unknowingly.




Designing for Interaction: Creating Smart Applications and Clever Devices
ISBN: 0321432061
Year: 2006
Pages: 110
Authors: Dan Saffer
