Interfaces Without Faces


We are arriving at a time when screens aren't the only way (and possibly not even the primary way) we interact with the digital world, or the way the digital world reacts to us. With the dawn of ubiquitous computing environments (see Chapter 9) in the near future, people will need to engage with many different sorts of objects that have microprocessors and sensors built into them, from rooms to appliances to bicycles.

As novelist William Gibson famously reminds us, the future is here; it's just unevenly distributed. There are already examples of these faceless interfaces, such as the dreaded voice-operated phone systems that now dominate customer service. Your car, too, may have a faceless interface, letting out a screech when you accidentally leave the headlights on.

The controls for these faceless interfaces are the human body: our voices, our movements, and simply our presence.

Voice

Voice-controlled interfaces are already with us, particularly on phones. People can call their banks and perform transactions or dial their mobile phones with just their voices. Voice commands typically control limited functionality, and the device typically has to be ready to receive voice commands, either because it functions only via voice commands (as with automated phone systems and some voice-controlled devices; see Figure 6.29), or because it has been prepared to receive voice commands, as with mobile phones that allow voice dialing. What is difficult to create, from both technical and design perspectives, is a voice-controlled interface for public spaces, where a device or system is always listening for a command to do something. How will the system know that someone is issuing it a command? Will it be like old Star Trek episodes where the crew actually addresses the computer? "Computer, get me all the files relating to geeks." As with other body-based controls discussed in this section, this is a design challenge that has yet to be solved.

Figure 6.29. The author screams at Blendie, a voice-controlled blender by Kelly Dobson, to get it to frappe.

courtesy of Kerry Bodine
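
One common answer to the "how does it know?" problem is a wake word: the system discards everything it hears until a reserved word arms it, then treats the next utterance as a command. The sketch below illustrates just that gating logic in Python, operating on already-transcribed text rather than raw audio; the wake word "computer" and the sample utterances are assumptions for the example, not part of any real speech API.

    WAKE_WORD = "computer"

    def commands(utterances):
        """Yield only the utterances spoken immediately after the wake word."""
        armed = False
        for utterance in utterances:
            if armed:
                yield utterance                       # treat this one as a command
                armed = False                         # then return to passive listening
            elif utterance.strip().lower() == WAKE_WORD:
                armed = True                          # wake word heard; arm the system

    # Room chatter mixed with one real command:
    speech = ["nice weather", "computer",
              "get me all the files relating to geeks", "thanks"]
    for command in commands(speech):
        print("Executing:", command)

Note that even this simple scheme embodies a design decision: everything said before the wake word is thrown away, which is exactly the kind of trade-off between responsiveness and eavesdropping that designers of always-listening systems must weigh.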


Gestures

There is a scene in the sci-fi movie Minority Report in which Tom Cruise stands before a semitransparent screen and, simply by gesturing with his hands, moves things around the screen, zooming documents and video in and out. This scene has become a touchstone for gesture-based controls.

To most computers and devices, people consist of two things: hands and eyes. The rest of the human body is ignored. But as our devices gain more awareness of the movement of the human body, through Global Positioning System (GPS) sensors and sudden-motion sensors (SMSs), for instance, they will become better able to respond to the complete human body, including its gestures. Indeed, some mobile phones now come equipped with tilt motion sensors, so that users can, for example, "pour" data from their phone into another device.
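
To make the "pour" example concrete: one plausible way to recognize such a gesture is to require that the tilt sensor report a steep angle for several consecutive readings, so that a jolt in a pocket doesn't trigger a data transfer. The following sketch assumes a stream of tilt angles in degrees; the threshold and hold count are illustrative values, not taken from any actual device.

    POUR_ANGLE = 60.0    # degrees of tilt past which the phone counts as tipped
    HOLD_SAMPLES = 3     # consecutive readings required, to filter out jostling

    def is_pour(tilt_readings):
        """Return True if the tilt stays past POUR_ANGLE for HOLD_SAMPLES readings."""
        run = 0
        for angle in tilt_readings:
            run = run + 1 if angle >= POUR_ANGLE else 0
            if run >= HOLD_SAMPLES:
                return True
        return False

    print(is_pour([10, 75, 12, 8, 5]))       # False: a single bump, not a pour
    print(is_pour([10, 65, 70, 72, 40]))     # True: a sustained, deliberate tilt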

Determining what gestures (like pouring) are appropriate for initiating actions on what devices and in what environments is a task for interaction designers in the next decade.

Presence

Some systems respond simply to a person's presence. Many interactive games and installations, such as Danny Rozin's "Wooden Mirror" (Figure 6.30), respond to a body's being near their sensors. With sensors and cameras built into laptops such as Apple's MacBook, we'll certainly see more applications that respond to presence when users are active in front of their computers.

Figure 6.30. The "Wooden Mirror" creates the image of what is in front of it by flipping wooden blocks within its frame.

courtesy of Daniel Rozin


There are many design decisions to be made with presence-activated systems. Consider a room with sensors and environmental controls, for example. Does the system respond immediately when someone enters the room, turning on lights and climate-control systems, or does it pause for a few moments, in case someone was just passing through?
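
The "pause for a few moments" option amounts to debouncing the presence signal: the room responds only after presence has been continuous for a grace period. Here is a minimal sketch, assuming per-second readings from a presence sensor and a five-second grace period (both values are illustrative):

    GRACE_PERIOD = 5   # seconds of continuous presence before the room responds

    def room_states(presence_per_second):
        """Map per-second presence readings (True/False) to on/off decisions."""
        run, states = 0, []
        for present in presence_per_second:
            run = run + 1 if present else 0
            states.append("on" if run >= GRACE_PERIOD else "off")
        return states

    # Someone passing through (3 seconds) versus someone staying (8 seconds):
    print(room_states([True] * 3 + [False] * 3))   # all "off": just passing through
    print(room_states([True] * 8))                 # "on" from the fifth second on

The length of the grace period is itself a design decision: too short, and lights flicker on for every passerby; too long, and the room feels unresponsive to the person who actually lives there.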

In addition, users may not always want their presence known. They may want to keep their activities and location private for any number of reasons, including personal safety. Designers will have to determine how and when a user can become "invisible" to presence-activated systems.
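
At its simplest, such invisibility could be an opt-out list that the presence system consults before reacting. The sketch below assumes the system can identify whom it has detected; the names and the set-based mechanism are purely illustrative:

    invisible = {"dan"}   # users who have asked not to be tracked

    def should_respond(detected_user):
        """Presence events from opted-out users are silently dropped."""
        return detected_user not in invisible

    print(should_respond("alice"))   # True: the room responds as usual
    print(should_respond("dan"))     # False: the user stays invisible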



