4.3 Psychological Design Aspects

4.3.1 Introduction

While having the technical parts in place is important for human-centered design, it is equally important to have all the non-technical building blocks in place as well. Social sciences such as psychology and semiotics play an important role in building successful me-centric systems. One could write a separate book on the influence of the social sciences on good design, so we provide only a short introduction to these topics here. When starting a design, you should take them into account and involve people with a deep understanding of them to make your design more powerful.

4.3.2 Gestalt Laws

Gestalt psychology began as a reaction to behaviorism and introspectionism. Its quarrel with behaviorism was the latter's bottom-up focus on the systematic collection and analysis of data: elements were investigated individually, without an appreciation of the whole, which is greater than the sum of its parts. This concept of an integrated whole is described by the German word Gestalt, for which there is no English equivalent. Gestalt psychologists apply the concept to relationships between people as well, citing the group dynamic of a common enterprise in which each individual contributes his or her gifts to create something more meaningful than any member could alone.

An early influence on Gestalt psychology was the philosopher Immanuel Kant. He argued that we do not perceive the world as it is; we impose cause-and-effect relationships on it, and therefore our perceptions are influenced by our experiences. Later, this understanding emerged in Max Wertheimer's explanation of a phenomenon known as apparent motion.

Max Wertheimer, considered to be Gestalt psychology's founder, was born in Prague in 1880 and studied at the University of Frankfurt. There, he became aware of a form of apparent motion known as the phi phenomenon: two nearby lights flashing alternately appear to be a single light moving back and forth between the two locations. The observer perceives movement, even though none has occurred. This apparent motion is thought to occur because we perceive experiences in a way that calls for the simplest explanation, even when it differs from reality. This is known as the Gestalt Law of Minimum Principle: "We do not perceive what is actually in the external world so much as we tend to organize our experience so that it is as simple as possible...simplicity is a principle that guides our perception and may even override the effects of previous experience." [7] Explaining apparent motion in this way marked the beginning of Gestalt psychology as a separate school of thought.

[7] John G. Benjafield (1996). "The Developmental Point of View." A History of Psychology, p. 173. Needham Heights, MA: Simon & Schuster.

When designing smart appliances, we need to take Gestalt into account; otherwise, the devices can become unusable. There are five elements of Gestalt:

- Proximity: objects placed close to each other tend to be seen as a group.
- Similarity: objects with the same shape or color are seen as belonging together.
- Closure: if parts of an object are missing, the mind fills them in so that the object appears whole.
- Continuity: lines tend to be seen as continuous, even if they are interrupted.
- Symmetry: regions bound by symmetrical borders tend to be perceived as coherent figures.

These five elements should be the basis for every user interface. If Gestalt is not applied, users will have difficulty figuring out which elements of the interface belong together. Applied well, the interface can be simplified and reduced by leaving out parts of objects, since users complete the objects in their minds and use the interface as if the objects were complete. Symmetrical boundaries can likewise help to simplify the interface.
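To make the proximity element concrete, the following sketch, a hypothetical TypeScript example with invented names rather than anything prescribed by Gestalt theory, clusters controls whose on-screen distance falls below a threshold, so that related controls are laid out, and therefore perceived, as one group:

// Hypothetical sketch: applying the Gestalt law of proximity in layout code.
// Elements whose centers lie within `threshold` pixels of an existing group
// member are merged into that perceptual group.
interface UIElement {
  id: string;
  x: number; // center x, in pixels
  y: number; // center y, in pixels
}

function groupByProximity(elements: UIElement[], threshold: number): UIElement[][] {
  const groups: UIElement[][] = [];
  for (const el of elements) {
    // Join the first group that already has a member close enough;
    // otherwise start a new group. (A full implementation would also
    // merge groups transitively; this single pass keeps the idea simple.)
    const group = groups.find(g =>
      g.some(m => Math.hypot(m.x - el.x, m.y - el.y) <= threshold)
    );
    if (group) group.push(el);
    else groups.push([el]);
  }
  return groups;
}

// Buttons 20 pixels apart read as one group; a 200-pixel gap separates them.
const groups = groupByProximity(
  [
    { id: "save",   x: 10,  y: 0 },
    { id: "saveAs", x: 30,  y: 0 },
    { id: "delete", x: 230, y: 0 },
  ],
  50
);
console.log(groups.map(g => g.map(e => e.id))); // [["save", "saveAs"], ["delete"]]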

Kurt Koffka extended the basic Gestalt elements in his work. Among the many important concepts he introduced, we mention two here. The first is the distinction between the geographical and the behavioral environment: people behave as they do based on how they perceive the environment (behavioral), not on how the environment actually is (geographical). The practical application of this is to understand someone's behavior within the context of their environment instead of our own. The second is the distinction between distal and proximal stimuli: distal stimuli are things as they exist in the geographical environment; proximal stimuli are the effects that distal stimuli have on sensory perception.

The Gestalt grouping laws capture repetition and redundancy in the visual world, which provides an opportunity for information compression. The laws can be applied both to the device itself and to the user interface. This means not only that designers need to be aware of Gestalt, but also that the devices should behave according to Gestalt theory.

4.3.3 Mental Models

Smart devices and services should be designed in such a way that users can quickly acquire a good functional model of the system that is in accordance with their task model. An important goal of user-centered design is to understand the users and how they interact with the systems that they use. Using both syntactic knowledge (knowledge of procedures, such as pressing the "delete" key to erase a character) and semantic knowledge (knowledge of a domain, such as a theory) gained by experience with a system, users build a mental model of how that system operates.

Mental models are representations of the function and/or structure of objects in people's minds. They are analogical representations, or a combination of analogical and propositional representations, and can be constructed dynamically when required. There are two main types of mental models: functional models are good for everyday use; structural models are good for breakdown situations but are difficult to acquire from usage experience alone.

For example, after attending a university for a while, a student establishes a "mental model" of university life: he goes to classes, talks to his classmates about how to accomplish the homework, knows how to interact with his professors, and so on. Suppose now that a virtual university offers online courses, and a Web site is to be constructed for it. This Web site should understand and respect the "mental models" of the targeted students, so that a student trying to find his way around the virtual university is not confused.

In computer-based systems, a user's model of how to perform a task is directly influenced by how she perceives the data being presented by the computer. For example, let's say that in order to successfully accomplish a task, a particular switch must be open. The operator must not only know that the switch is to be open, but also be able to interpret the current state of the switch as displayed on the computer's screen (perhaps graphically as a circuit diagram, or simply as a "yes/no" field). She must also know how to tell the computer to send the command to open that switch (perhaps via a typed command or by clicking on a button).

The task flows more smoothly when the data displayed and the method of interaction match the user's expectations of how the system will operate, that is, her mental model. Thus, it is important that the designer incorporate the user's model of the task into the design of the computer's interface, so that there is no disconnect between what the operator thinks she sees and does, and what the computer actually presents and executes. The problem is compounded by the fact that, to a certain degree, all users possess somewhat different mental models. This is often why computing systems fail: they do not match the mental models of the majority of their users.
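A minimal sketch of the switch example above, in TypeScript with invented names (our illustration, not a real control system), shows one way to keep display and command tied to a single state, so that what the operator sees cannot drift from what the system executes:

// Hypothetical sketch: one state object drives both what the operator sees
// and what commands mean, so the display always matches the mental model
// "the switch must be open before the task can run."
type SwitchState = "open" | "closed";

interface Breaker {
  id: string;
  state: SwitchState;
}

// Render the state the way the operator expects to read it.
function display(b: Breaker): string {
  return `Switch ${b.id}: ${b.state === "open" ? "OPEN" : "CLOSED"}`;
}

// The command vocabulary mirrors the operator's task vocabulary ("open S1"),
// not an internal encoding she would have to translate in her head.
function openSwitch(b: Breaker): Breaker {
  return { ...b, state: "open" };
}

let s1: Breaker = { id: "S1", state: "closed" };
console.log(display(s1)); // Switch S1: CLOSED
s1 = openSwitch(s1);
console.log(display(s1)); // Switch S1: OPEN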

To implement mental models successfully, it is important to concentrate on small-effect operations where possible, since operations that do many things in one step are more difficult to model, understand, and use correctly. For modeling the state of objects, choose models that can be drawn and/or visualized easily. When designing a me-centric service, you should evaluate models and specifications for both novices and experts. There are possible "intermediate" levels of abstraction between a raw representation and the abstraction in the requirements specification; these intermediate levels can provide more abstract views of the representation for use by implementers and maintainers. Recognize and document them where appropriate.
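As a hypothetical illustration of small-effect operations (invented names, TypeScript), each operation below changes exactly one observable thing, so the user's model of "what just happened" stays simple:

// Hypothetical sketch: small-effect operations versus one composite step.
interface Doc {
  text: string;
  fontSize: number;
  saved: boolean;
}

// Small-effect operations: one observable change each, easy to model and undo.
const setFontSize = (d: Doc, size: number): Doc => ({ ...d, fontSize: size, saved: false });
const save = (d: Doc): Doc => ({ ...d, saved: true });

// A composite "reformatAndSave" that resized, rewrapped, and saved in one
// step would be harder to model: which of its effects caused what the user
// sees, and what exactly would "undo" revert?
let doc: Doc = { text: "hello", fontSize: 12, saved: true };
doc = setFontSize(doc, 14); // effect: font changed (document now unsaved)
doc = save(doc);            // effect: document persisted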

4.3.4 Cognitive Models

In contrast to mental models, a cognitive model is an attempt to represent conscious and subconscious mental processes and outcomes. Cognitive modeling is emerging as an effective means to construct predictive engineering models of people interacting with computer systems. A cognitive model is a theory-based, graphical representation of inferred relationships between hypothesized components of human thought. These overt models of the mind may be qualitative or quantitative, specific or general, and they are typically embedded in graphical models of human performance. Most cognitive models encompass the basic constructs of human information processing, such as sensory processing, perception, short- and long-term memory, and analytic processes such as problem solving and decision making.

Different models vary in their level of detail and sometimes in the theory that underlies their depiction of the flow of information and control within the model. Cognitive models aid analysts in getting beyond the behavioral aspects of human activities to their underlying goals, objectives, and thought processes. Cognitive modeling also documents operational decision points, information requirements, and the analytic processes followed in making decisions on the basis of available information, incomplete and uncertain though that information may be. Cognitive task analysis provides critical input to user-interface design and evaluation. Such models serve as frameworks for investigating and documenting the role of human information processing in operational settings.

In a specific operational environment, early and continuing cognitive modeling and cognitive task analysis can help to identify analysts' information requirements and can support the design of automated job aids. When a job or position is being defined or redefined, it is worth the effort to document cognitive tasks and information requirements as a basis for the user interfaces, position manuals, and training programs. When a change is contemplated for a computer system, a design of user interfaces and operational procedures that is based on a thorough cognitive task analysis is likely to yield a more usable and operationally suitable product than a design that gives low priority to human information-processing issues. A design's support for cognitive capabilities and compensation for cognitive limitations should be evaluated throughout the project's life cycle.

One of the most important assumptions of cognitive models is that human performance is predictable. Human performance refers to the patterns of actions that people carry out to accomplish a task according to some standard. Tasks might include using Amazon [8] to find all the books about Shakespeare, or driving your car to the Virgin Megastore [9] on London's Oxford Street. Actions would include gaining access to the Internet or starting your car. The standards related to these actions would include speed and accuracy, for example.

[8] http://www.amazon.com/

[9] http://www.virgin.com/megastore/

Through cognitive models, these aspects of human performance can be predicted. These models are basically computer programs that behave the way humans behave and allow one to make accurate comparative and absolute predictions of how people will interact with a computer to do a piece of work. Given a few assumptions, for example, we can predict how long a person would take to find all of the books mentioned in the example above, or how long it takes to drive to the shop. A cognitive model might include just the discrete observable steps involved in the task, but it can also account for the processing in the various components of the human brain.
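One well-known technique of this kind is the Keystroke-Level Model of Card, Moran, and Newell, which predicts expert task time by summing standard operator times. The sketch below is our simplified TypeScript illustration using the commonly cited operator values; the task breakdown is an assumption for illustration, not a validated analysis:

// A minimal Keystroke-Level Model (KLM) sketch, using the commonly cited
// operator times from Card, Moran, and Newell (all values in seconds).
const KLM = {
  K: 0.2,  // press a key (skilled typist)
  P: 1.1,  // point at a target with the mouse
  B: 0.1,  // press or release a mouse button
  H: 0.4,  // move the hands between keyboard and mouse
  M: 1.35, // mental preparation before a step
};

type Op = keyof typeof KLM;

// The predicted task time is simply the sum over the operator sequence.
function predictSeconds(sequence: Op[]): number {
  return sequence.reduce((total, op) => total + KLM[op], 0);
}

// Example: think, type "shakespeare" (11 keys), reach for the mouse,
// think again, point at the search button, and click (press + release).
const searchTask: Op[] = ["M", ...Array<Op>(11).fill("K"), "H", "M", "P", "B", "B"];
console.log(predictSeconds(searchTask).toFixed(2), "seconds"); // 6.60 seconds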

The brain includes four major components that need to be taken into account for cognitive models: perception, cognition, memory, and action. A person uses all four components when interacting with a computing device. More detailed cognitive modeling, such as that in the service of advancing cognitive psychology, is typically done using a cognitive architecture, a computational framework for building cognitive models that simulates and constrains fundamental aspects of human performance.

Cognitive architectures represent the hardware of human performance; they characterize the invariants that constrain it and provide a framework for building predictive models. Methodologies for applying cognitive architectures to predict aspects of human performance are still evolving.

4.3.5 Semiotics

Until now, interface design has been dominated mainly by computer scientists and psychologists. The problem is that computer science so far does not provide concepts for supporting human thought, whereas psychology so far lacks concepts for externalizing and materializing computational structures that accord with mental operations. There is, therefore, a need for a science that complements both to make design more human-centered.

Semiotics is the study of signs and sign functions in all conceivable aspects of message exchange, and it concerns the conveyance and development of meaning through all sign vehicles. It can be seen as a framework for the comprehension of the world. Interest in the nature of signs began with Aristotle.

In the 4th and 5th centuries A.D., St. Augustine formulated the first general theory of semiotics. In On Dialectics, he defined the sign as "something which is itself sensed and which indicates to the mind something beyond the sign itself. To speak is to give a sign by means of an articulate utterance." The term semiotics first appeared in 1690 in John Locke's Essay Concerning Human Understanding.

One of the basic assumptions of semiotics is that humans cannot "not communicate." We always communicate, even when we are not consciously sending a message. Semiotics underscores that everything in the world communicates (from mountains to humans). Semiosis is a pervasive phenomenon, but there is a difference between the mountain's act of producing signs and the human act of communicating. In a simple semiosis, the sender isn't well defined (mountains aren't conscious addressers), while in the communication process, the sender has a central role.

Semiotics provides an abstract language covering a diversity of special sign-usages (language, pictures, movies, theater, etc.). In this capacity, semiotics is helpful for bringing insights from older media to the task of interface design, and for defining the special characteristics of the computer medium.

The core of semiotics is the sign that integrates a physical side (the signifier) and a psychic side (the signified). Therefore, semiotics can talk about representations (the algorithms and data structures as signifiers) as well as the user's interpretation of these representations (user interpretations and domain concepts as the signified), but it does so with a particular focus, namely the sign.

In human-computer interaction, the computer is the object, that is, what is represented. The operating system, on the other hand, is the representamen. Think of the desktop metaphor as an example. A specific use, such as word processing, is one possible interpretation. Much is left out in such an interpretation, but this is intended: for a typist, the computer is not a machine with its many functions and components, but a typewriter with more possibilities.

The same computer can offer an interpretation as a database management tool or as a multimedia console based on the specific application a user is concerned with. By the same token, the object can be an application: PhotoShop (the metaphor of the darkroom carried over to the digital realm), database, or text processing, for example. In such a case, the representamen is the "representation" of the language one must command in order to achieve the desired performance. The interpretation is the performance actually achieved.
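As a loose, hypothetical sketch (invented field names; semiotics prescribes no such encoding), the triad just described can be written down as a simple data structure in TypeScript:

// Hypothetical sketch: the object/representamen/interpretation triad from
// the discussion above, made explicit as a data structure.
interface Sign {
  object: string;         // what is represented, e.g., the computer itself
  representamen: string;  // the representation standing for it
  interpretation: string; // the performance or meaning a user achieves
}

const typistsView: Sign = {
  object: "the computer, with its many functions and components",
  representamen: "the word-processing interface on the desktop",
  interpretation: "a typewriter with more possibilities",
};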

Semiotics can analyze only those parts of the computational processes that influence interpretation, and only those parts of the interpretation that are influenced by the computation. Although semiotics cannot replace computer science or psychology, it provides much additional knowledge about humans that is required to create good designs. Experts in semiotics need to acquire a solid understanding of the technical possibilities and limitations of computer systems in order to become creative in this domain.


