Norman defines affordance as "the perceived and actual properties of the thing, primarily those fundamental properties that determine just how the thing could possibly be used." In other words, affordance relates to the ability of a user to determine how to use an object just by looking at its visual clues.
As noted earlier, an object having affordance is similar to, but different from, an object being visible. An object is visible when a user can determine how it is used just by looking at it. What is the difference? A user can determine how to use an object with affordance solely by interpreting its visible characteristics, whereas a user can determine how to use a visible object either by interpreting its visible characteristics or by knowing the relevant standards and conventions. A command button and a hyperlink are both visible user interface elements, but only the command button offers affordance, because its raised 3-D border makes it look like it can be pushed. Consequently, a first-time computer user can figure out instantly how to use a command button, whereas the same user would have to experiment with a hyperlink to understand its function. We can say an object has affordance when all of the following are true:
Consistency is also required for affordance, since inconsistent behavior among objects with similar visible characteristics undermines the user's ability to figure out how to use an object just by looking at it.
For affordance to work, the user needs to be able to interpret an object's visual clues. How does the user do this? The user combines the object's visual clues with real-world knowledge and perhaps some common sense. Often this real-world knowledge depends upon basic human anatomy, especially the properties of the human hand. The hand can point, grab, lift, pull, push, and rotate. Raised 3-D rectangles, such as command buttons, look as if they can be pushed, whereas raised 3-D lines, such as gripper bars, look as if they can be grabbed. And, if something looks like it can be pushed, pulled, grabbed, rotated, and so on, users are going to try to do it. By changing the cursor pointer to the appropriate shape that matches an object's visible features, the mouse in effect becomes an extension of the user's hand.
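The idea of matching the pointer shape to an object's visible features can be sketched in code. The following is a minimal, hypothetical sketch, not a real toolkit API: the element names are invented for illustration, and the cursor names happen to follow the Tk cursor vocabulary, though any windowing system offers an equivalent set.

```python
# Hypothetical sketch: choose a pointer shape that matches an object's
# visible affordance, so the mouse acts as an extension of the user's hand.
# Element names are illustrative; cursor names follow the Tk vocabulary.

AFFORDANCE_CURSORS = {
    "command_button": "arrow",   # raised 3-D rectangle: point and push
    "gripper_bar":    "fleur",   # raised 3-D lines: grab and drag
    "text_box":       "xterm",   # editable text: I-beam for insertion
    "hyperlink":      "hand2",   # a convention, not a true affordance
    "resize_edge":    "sb_h_double_arrow",  # window edge: pull to resize
}

def cursor_for(element: str) -> str:
    """Return the pointer shape for a UI element, defaulting to the arrow."""
    return AFFORDANCE_CURSORS.get(element, "arrow")
```

The point of the table is the pairing itself: each pointer shape reinforces what the hand could do to the underlying object, so the user's real-world knowledge carries over to the screen.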
Another common technique for interpreting an object's visual clues is through metaphor. An object on the screen looks like a common, everyday object that the user already understands and knows how to use. For example, the drawing tools in a typical paint program use metaphors. Instead of calling a free-form line-drawing tool a "free-form line-drawing tool," paint programs call it a "pencil." Selecting the pencil tool changes the cursor to a pencil shape, completing the metaphor. Now the user knows what to do.
However, as I will discuss in the section "Conceptual Models," using metaphors can be a problem because a metaphor can give the user a wrong impression of what an object can do. For example, since real-world pencils can't be constrained this way, the user might not realize that a pencil tool can be constrained to draw straight lines. Likewise, the affordance of a push button is strong: because we push real buttons all the time in the real world, we know exactly how they work. And have you ever had to double-push a real push button? To conform to this real-world experience, you should never assign double-click behavior to a command button. It would simply never occur to anyone to try it.
TIP
Users understand affordance through real-world knowledge, including knowledge of human anatomy, metaphors, and experience with everyday objects.
Windows maintains the following visual affordances:
The following cursor pointers reinforce the affordance of the objects they manipulate: