
USER INTERFACE

To the decision maker, the user interface is the DSS. The user interface includes all the mechanisms by which commands, requests, and data are entered into the DSS as well as all the methods by which results and information are output by the system. It does not matter how well the system performs; if the decision maker cannot access models and data and peruse results, invoke assistance, share results, or in some other way interact with the system, then the system cannot provide decision support. In fact, if the interface does not meet their needs and expectations, decision makers often will abandon use of the system entirely regardless of its modeling power or data availability.

To paraphrase Dickens, it is the most exciting of times for designing user interfaces, and it is the most frustrating of times for designing user interfaces. It is an exciting time because advances in computing technologies, interface design, and Web and mobile technologies have opened a wide range of opportunities for making more useful, more easily used, and more aesthetically pleasing representations of options, data, and information. It is a frustrating time because legacy systems still exist, and there are a wide range of user preferences. Some DSS must be built using technologies that actually limit the development of user interfaces. Others must at least interact with such legacy systems and are therefore limited in the range of options available. In this chapter, the focus will be on the future. However, remember that “the future” may take a long time to get to some installations.




GOALS OF THE USER INTERFACE

The purpose of the user interface is communication between the human and the computer, known as human-computer interaction (HCI). As with person-to-person communication, the goal of HCI is to minimize the amount of incorrectly perceived information (on both sides) while also minimizing the amount of effort expended by the decision maker. Said differently, the goal is to design systems that minimize the barrier between the decision maker's cognitive model of what he or she wants to accomplish and the computer's understanding of that task so that users can avail themselves of the full potential of the system.

Although there has been an active literature on HCI since the 1990s, the actual implementation of that goal continues to be more an "art" than a science. With experience, designers become more attuned to what users want and need and can better provide it through good color combinations, appropriate placement of input and output windows, and generally good composition of the work environment. The key to making the most of the HCI literature is knowing when to apply it: some of the material is pertinent to all user interface design, while other material applies only in certain circumstances. There are, however, some guiding principles, and those will be discussed first.

A prime concern of this goal is the speed at which decision makers can glean available information. Humans have powerful pattern-seeking visual systems. If they focus, humans can perceive as many as 625 separate points in a square inch and thus can take in substantial information. The eyes constantly scan the environment for cues, and the associated brain components act as a massive parallel processor, attempting to understand the patterns among those cues. The visual system includes preattentive processing, which allows humans to recognize some attributes quite quickly, long before the rest of the brain is aware that it has perceived the information. Good user interfaces will exploit that preattentive processing to get the important information noticed and perceived quickly. However, the information is sent to short-term visual memory, which is limited and is purged frequently. Specifically, short-term visual memory holds only three to nine chunks of information at a time. When new information arrives (we see another image), the old information is lost unless it has been moved along to our attention; hence we can lose information before it is actually perceived. Since preattentive processing is much faster than attentive processing, one goal is to encode important information for rapid perception. If the data are presented well, so that important and informative patterns are highlighted, the preattentive processes will discern the patterns and they will stand out. Otherwise the data may be missed, be incomprehensible, or even be misleading.

The attributes that invoke preattentive processing include the hue and intensity of color, location, orientation, the form of the object (width, size, shape, etc.), and motion. For example, more intense colors are likely to provoke preattentive processing, especially when the colors around them are more neutral. Longer, wider images will get more attention, as will variations in the shapes of items and their grouping. However, clutter, unnecessary decoration, and overdesign of the interface may actually slow perception and therefore work against us.
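To make the idea concrete, here is a minimal sketch (in Python with matplotlib, which the text does not prescribe) of encoding one data point with a saturated hue against neutral neighbors so that preattentive processing picks it out; the data are invented for illustration.

```python
# A minimal sketch of exploiting preattentive attributes (hue and intensity):
# one saturated point against neutral neighbors is perceived almost instantly.
import matplotlib.pyplot as plt
import random

random.seed(42)
xs = [random.uniform(0, 10) for _ in range(40)]
ys = [random.uniform(0, 10) for _ in range(40)]

fig, ax = plt.subplots()
ax.scatter(xs, ys, color="lightgray")             # neutral context
ax.scatter([xs[7]], [ys[7]], color="red", s=120)  # the one value that must be noticed
ax.set_title("One saturated marker 'pops out' before conscious attention")
plt.show()
```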

In addition to making the information quickly apparent, the user interface must be effective. The interface must allow users to work in a comfortable way and to focus on the data and the models in a way that supports a decision. Equally important, the interface must allow these things without causing users frustration or hesitation and without requiring them to ask questions. This requires designers to make navigation of the system clear so that decision makers can do what they need to do easily. It also requires that designers make the output clear and actionable. To accomplish this, designers


should organize groups, whether they be menus, commands, or output, according to a well-defined principle, such as functions, entities, or use. In addition, designers should colocate items that belong to the same group. This might mean keeping menu items together or putting results for the same group together on the screen. Output should be organized to support meaningful comparisons and to discourage meaningless comparisons.

A third overall principle of interface design is that the user interface must be easily learned. Designers want the user to master operation of the system and relate to it intuitively. To achieve this goal, interfaces must be simple, structured, and consistent so that users know what to expect and where to expect it on the screen. A simple and well-organized interface can be remembered more easily. Such systems have a minimum number of user responses, such as pointing and clicking, that require users to learn few rules but allow those rules to be generalized to more complex tasks. Well-designed systems will also provide good feedback to the user about why some actions are acceptable while others are not and how to fix unacceptable actions. Such feedback can range from an hourglass icon to show that the system is processing to useful error messages when it is not. Similarly, tolerant systems that allow the user multiple ways to achieve a goal adapt to the user, thereby allowing more natural efforts to make the system perform.

The goal of making the interface easily learned (and thus used) is complicated because every system will have a range of users, from beginners to experts, who have different needs. Beginners will need basic information about the scope of a program or specifics about how to make it work. Experts, on the other hand, will need information about how to make the program more efficient, with automation, shortcuts, and hot keys, and the boundaries of safe operation of the program. In between, users need reminders on how to use known functions, how to locate unfamiliar functions, and how to understand upgrades. All of these users rely not only on the information available with the user interface but also on the feedback that the system provides to learn how to use the system. Feedback that helps the users understand what they did incorrectly and how to adjust their actions in the future is critical to learning. Not only must the feedback be provided, but also it must be constructive, helping the user to understand mistakes, not to increase his or her frustration. It should provide clear instructions about how to fix the problem.

Finally, usable systems are ones that satisfy the user's perceptions, feelings, and opinions about the decision. Norman (2005) says that this dimension is affected significantly by aesthetics. Specifically, he says that systems that are more enjoyable make users more relaxed and open to greater insight and creative response. The user interface should not be ugly and should fit the culture of the organization. Designers should avoid "cute" displays, unnecessary decoration, and three-dimensional images because they simply detract from the main effort. Cooper (2007) believes that designing harmonious, ethical interactions that improve human situations and are well behaved is critical to satisfying user needs. Cooper (2007, p. 203) provides guidance about creating harmonious interactions with the following:

• Less is more.
• Enable users to direct; don't force them to discuss.
• Design for the probable; provide for the possible.
• Keep tools close at hand.
• Provide feedback.
• Provide for direct manipulation and graphical input.
• Avoid unnecessary reporting.
• Provide choices.
• Optimize for responsiveness; accommodate latency.

By “ethical,” Cooper (2007, p. 152) means the design should do no harm. He identifies the kinds of harm frequently seen in systems that should be avoided in DSS design as follows:

• Interpersonal harm, such as insults and loss of dignity (especially with error messages)
• Psychological harm, by causing confusion, discomfort, frustration, or boredom
• Social and societal harm, through exploitation or the perpetuation of injustice

Cooper (2007, p. 251) also provides guidance about designing for good behavior when he notes that products should:

• Personalize the user experience where possible
• Be deferential
• Be forthcoming
• Use common sense
• Anticipate needs
• Not burden users with internal operational problems
• Inform
• Be perceptive
• Not ask excessive questions
• Take responsibility
• Know when to bend the rules

Throughout the chapter, we will discuss the specifics of these overriding principles of user interface design. The primary goal is to design DSS that make it easy and comfortable for decision makers to consider ill-structured problems, understand and evaluate a wide range of alternatives, and make a well-informed choice.

MECHANISMS OF USER INTERFACES

In addition to understanding the principles of good design, it is important to review the range of mechanisms for user interfaces that exist today and those that are coming in the near future. Everyone is familiar with the keyboard and the mouse as input devices and the monitor as the primary output device. Increasingly, users are relying upon portable devices. Consider, for example, the pen-and-gesture-based device shown in Figure 5.1. Information is "written" on the device and saved using handwriting and gesture recognition. This allows the device to go where the decisions are, such as an operating room, and to provide flexible support. Or the user might rely upon a mobile phone, with a much smaller screen, such as the ones shown in Figure 5.2. These mobile devices have substantially smaller screens yet much higher resolution. On the other hand, if the decision makers include a group, they might rely upon wall systems to


Figure 5.1. Pen-based system. HP Tablet. Photo by Janto Dreijer. Available at http://www.wikipedia.com/File:Tablet.jpg, used under the Creative Commons Attribution ShareAlike 3.0 License.

Figure 5.2. Mobile phones as input and output devices.


Figure 5.3. Wall screens as displays. Ameren UE's Severe Weather Center. Photo reprinted courtesy of Ameren Corporation.

display their output, such as those shown in Figure 5.3. These large screens may have lower resolution. Designing an interface for anything from a 5 in. × 3 in. screen with gestures and handwriting recognition to one that might take up an entire wall and use only voice commands is a challenging proposition. User interfaces are, however, becoming even more complicated to design. Increasingly, virtual reality is becoming more practical for DSS incorporation, so your system might include devices such as those shown in Figure 5.4 or even something like the Wii device shown in Figure 5.5.

The future will bring both input and output devices that are increasingly different from the keyboard and the monitor that we rely upon today. Consider the device shown in Figure 5.6, which was developed in the MIT Media Laboratory. The device is a microcomputer. It includes a projector and a camera as two of the input/output devices. This device connects with the user's cell phone to obtain Internet connectivity. The decision maker can use his or her hands, as the user is doing in the photograph, to control the computer. The small bands on his hands provide a way for the user to communicate with the camera and thus the computer. This projection system means that any surface can be a computer screen and that one may interact with the screen using just one's fingers, as shown in Figure 5.7. In this figure, the user is selecting from menus and beginning his work. You can integrate these features into any activity. Notice how the user in Figure 5.8 has invoked his computer to supplement the newspaper article with a video from a national news service. Or, the decision maker can get information while shopping. Figure 5.9 shows a person who is considering purchasing a book in a local bookstore. Among the various kinds of information considered are the Amazon rating and Amazon reviews pulled up from his computer. Notice how they are projected on the front of the book (about halfway down the book cover).

It is important to think creatively about user interfaces to be sure that we provide the richest medium that will facilitate decision making. Different media require different design


Figure 5.4. Virtual reality devices. Ames-developed (Pop Optics), now at the Dulles annex of the National Air and Space Museum. Source: http://gimp-savvy.com/cgi-bin/ing.cgi7ailsxmzVn080jE094, used under the Creative Commons Attribution ShareAlike 3.0 License.

and there is not a "one size fits all." It is important to think of the medium as a tool, to let context drive the design, and to customize for a specific platform. The general principles of this chapter will help readers evaluate the needs of the user and the medium. Most of the examples, however, will focus on current technologies.

Figure 5.5. A Wii device. Wii remote control. Image from http://en.wikipedia.org/wiki/File:Wiimote-lite2.jpg used under the Creative Commons Attribution ShareAlike 3.0 License.


Figure 5.6. MIT Media Lab's view of user interface device. Demonstration of the Sixth Sense Project of the MIT Media Lab. Photo taken by Sam Ogden. Photo reprinted courtesy of the MIT Media Laboratory, P. Maes, Project Director, and P. Mistry, Doctoral Student, (pictured).

Figure 5.7. MIT Media Lab's view of user interface device. Demonstration of the Sixth Sense Project of the MIT Media Lab. Photo taken by Lynn Barry. Photo reprinted courtesy of the MIT Media Laboratory, P. Maes, Project Director, and P. Mistry, Doctoral Student (pictured).

DSS in Action: FRIEND

The FRIEND system is an emergency dispatch system in the Bellevue Borough, north of Pittsburgh, Pennsylvania. This system, known as the First Responder Interactive Emergency Navigational Database (FRIEND), dispatches information to police using hand-held computers in the field. The hand-held devices are too small to support keyboards or mice. Rather, police use a stylus to write on the screen or even draw pictures. These responses are transmitted immediately to the station for sharing. Police at the station can use a graphical interface or even speech commands to facilitate the sharing of information with members in the field.


Figure 5.8. MIT Media Lab's view of user interface device. Demonstration of the Sixth Sense Project of the MIT Media Lab. Photo taken by Sam Ogden. Photo reprinted courtesy of the MIT Media Laboratory, P. Maes, Project Director, and P. Mistry, Doctoral Student.

USER INTERFACE COMPONENTS

We must describe the user interface in terms of its components as well as its mode of communication, as in Table 5.1. The components are not independent of the modes of communication. However, since they each highlight different design issues, we present them separately—components first.

Figure 5.9. MIT Media Lab's view of user interface device. Demonstration of the Sixth Sense Project of the MIT Media Lab. Photo taken by Sam Ogden. Photo reprinted courtesy of the MIT Media Laboratory, P. Maes, Project Director, and P. Mistry, Doctoral Student.


Table 5.1. User Interfaces

User interface components
• Action language
• Display or presentation language
• Knowledge base

Modes of communication
• Mental model
• Metaphors and idioms
• Navigation of the model
• Look

Action Language

The action language identifies the form of input used by decision makers to enter requests into the DSS. This includes the way by which decision makers request information, ask for new data, invoke models, perform sensitivity analyses, and even request mail. Historically, five main types of action languages have been used, as shown in Table 5.2.

Menus. Menus, the most common action language today, display one or more lists of alternatives, commands, or results from which decision makers can select. A menu provides a structured progression through the options available in a program to accomplish a specific task. Since they guide users through the steps of processing data and allow the user to avoid knowing the syntax of the software, menus often are called “user friendly.” Menus can be invoked in any number of ways, including selecting specific keys on a keyboard, moving the mouse to a specific point on the screen and clicking it, pointing at the screen, or even speaking a particular word(s).

In many applications, menus exist as a list with radio buttons or check boxes on a page. Or the menu might be a list of terms over which the user moves the mouse and clicks to select. Or the menu might actually exist as a set of commands in a pull-down menu such as is seen in the menu bar. As most computer users today are aware, you can invoke the pull-down menu by clicking on one of the words or using a hot-key shortcut. When this is done, a second set of menus is shown below the original command, as illustrated with the Analytica menu bar shown in Figure 5.10.
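As an illustration, here is a minimal sketch of a pull-down menu bar with a hot-key shortcut using Python's tkinter; the menu labels and commands are hypothetical, not from any DSS in the text.

```python
# A minimal tkinter sketch of a pull-down menu bar with a hot-key shortcut.
import tkinter as tk

def run_report():
    print("report requested")

root = tk.Tk()
menubar = tk.Menu(root)

analysis = tk.Menu(menubar, tearoff=0)
analysis.add_command(label="Graph", command=lambda: print("graph requested"))
analysis.add_command(label="Report", accelerator="Ctrl+R", command=run_report)
menubar.add_cascade(label="Analysis", menu=analysis)  # pull-down under "Analysis"

root.config(menu=menubar)
root.bind("<Control-r>", lambda event: run_report())  # hot-key equivalent
root.mainloop()
```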

Menus and menu bars should not be confused with the toolbars available on most programs. In Figure 5.10, the toolbar is the set of graphical buttons shown immediately below the menu bar. Toolbars might also show up as part of the "ribbon bar" that Microsoft built into its Office 2007 products, shown in Figure 5.11. These toolbars provide direct access to some specific component of the system. They do not provide an overview of the capabilities and operation of a program in the way that menus do but rather provide a shortcut for more experienced users.

Table 5.2. Basic Action Language Types

Menu format
Question-answer format
Command language format
Input/output structured format
Free-form natural language format


Figure 5.10. One form of a menu. Menu from Analytica. Used with permission of Lumina Decision Systems.

Menu formats guide the user through the steps of an analysis with a set of pictures or commands that are easy for the user to understand. In this way, the designer can illustrate for the user the full range of analyses the DSS can perform and the data that can be used for analysis. Their advantage is clear: if the menus are understandable, the DSS is very easy to use; the decision maker is not required to remember how it works and only needs to make selections on the screen. The designer can allow users keyboard control (either arrow keys or letter key combinations), mouse control, light pen control, or touch screen control.

Menus are particularly appealing to inexperienced users, who can thereby use the system immediately. They may not fully understand the complexity of the system or the range of modeling they can accomplish, but they can get some results. The menu provides a pedagogical tool describing how the system works and what it can do. Clearly this provides an advantage. In the same way, menu formats are useful to decision makers who use a DSS only occasionally, especially if there are long intervals between uses. Like the inexperienced user, these decision makers can forget the commands necessary to accomplish a task and hence profit by the guidance the menus can provide.

Menu formats tend not to be an optimal action language choice for experienced users, however, especially if these decision makers use the system frequently. Such users can become frustrated with the time and keystrokes needed to process a request when other action language formats can allow them access to more complex analyses and more flexibility. This will be discussed in more depth under the command language.

Figure 5.11. A "ribbon bar" as a menu. Microsoft's "Ribbon" in Excel 2007 from http://en.wikipedia.com/wiki/File:office2007vibbon.png. Used under the Creative Commons Attribution ShareAlike 3.0 License.


Figure 5.12. Independent command and object menus.

The advantage of the menu system hinges on the understandability of the menus. A poorly conceived menu system can make the DSS unusable and frustrating. To avoid such problems, designers must consider several features. First, menu choices should be clearly stated. The names of the options or the data should coincide with those used by the decision makers. For example, if a DSS is being created for computer sales and the decision makers refer to CRTs as "screens," then the option on the menu ought to be "screen," not "CRT." The latter may be equivalent and even more nearly correct, but if it is not the jargon used by decision makers, it may not be clear. Likewise, stating a graphing option as "HLCO," even with the descriptor "high-low-close-open," does not convey sufficient information to the user, especially not to a novice or inexperienced user.

A second feature of a well-conceived menu is that the options are listed in a logical sequence. "Logical" is, of course, defined by the environment of the users. Sometimes the logical sequence is alphabetical or numerical. Other times it is more reasonable to group similar entries together. Some designers like to order the entries in a menu according to the frequency with which they are selected. While that can provide a convenience for experienced users, it can be confusing to the novice user, who is, after all, the target of the menu and may not be aware of the frequency of responses. A better approach is to preselect a frequently chosen option so that users can simply press return or click a mouse to accept that particular answer. Improvements in software platforms make such preselection easier to implement, as we will discuss later in the chapter.
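A minimal sketch of such preselection in plain Python follows: pressing return accepts the default. The question and the default are invented for illustration.

```python
# A minimal sketch of preselecting the most frequently chosen option so that
# the user can simply press return (Enter) to accept it.
def prompt_with_default(question, default):
    answer = input(f"{question} [{default}]: ").strip()
    return answer or default   # empty response accepts the preselected option

period = prompt_with_default("Aggregate data how?", "quarterly")
print("Using:", period)
```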

When creating a menu, designers need to be concerned about how they group items together. Generally, the commands are in one list, and the objects of the commands1 are in an alternate list, as shown in Figure 5.12. Of course, with careful planning, we can list the commands and objects together in the same list, as shown in Figure 5.13, and allow users to select all attributes that are appropriate.

In today's programming environment, designers tend not to combine command and object menus. The primary reason to combine them in the past was to save input time for the user since each menu represented a different screen that needed to be displayed. Display

1. The "objects of the commands" typically refer to the data that should be selected for the particular command invoked.


Figure 5.13. Combined command and object menu.

changes could be terribly slow, especially on highly utilized, old mainframes. The trade-off between processing time and grouping options together seemed reasonable. For most programming languages and environments, that restriction no longer holds. Several menus on the same screen can all be accessed by the user. Furthermore, most modeling packages allow a user several options, depending upon earlier selections. If these were all displayed in a menu, the screen could become quite cluttered and not easy for the decision maker to use.

An alternative is to provide menus that are nested in a logical sequence. For example, Figure 5.14 demonstrates a nested menu that might appear in a DSS. All users would begin the system use on the “first-level” menu. Since the user selected “graph” as the option, the system displays the two options for aggregating data for a graph: annually and quarterly.

Figure 5.14. Nested menu structure.


Note that this choice is provided prior to and independent of the selection of the variables to be graphed so that the user cannot inadvertently select the x axis as annual and the y axis as quarterly data (or vice versa).

The “third-level” menu item allows the users to specify what they want displayed on the y axis. While this limits the flexibility of the system, if carefully designed, it can represent all options needed by the user. Furthermore, it forces the user to declare what should be the dependent variable, or the variable plotted on the y axis, without using traditional jargon. This decreases the likelihood of misspecification of the graph.

The “fourth-level” menu is presented as a direct response to the selection of the dependent variable selection. That is, because the decision maker selected La Chef sales, the system “knows” that the only available and appropriate variables to present on the x axis are price, advertising, and the competitor's sales. In addition, the system “knows” that the time dimension for the data on the x axis must be consistent with that on the y axis and hence displays “quarterly” after the only selection that could be affected. Note that the system does not need to ask how users want the graph displayed because it has been specified without the use of jargon.

Finally, the last menu level allows the users the option of customizing the labeling and other visual characteristics of their graphs. Since the first option, standard graph, was selected, the system knows not to display the variety of options available for change. Had the user selected the customize option, the system would have moved to another menu that allows users to specify what should be changed.
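The nested, context-dependent flow of Figure 5.14 might be sketched in Python as follows; the menu tree mirrors the text's La Chef example, but the data structure and prompts are assumptions for illustration.

```python
# A minimal sketch of a nested, context-dependent menu: the options offered at
# each level depend on earlier selections, as in Figure 5.14.
MENU_TREE = {
    "graph": {
        "aggregation": ["annually", "quarterly"],
        "y_axis": {
            "La Chef sales": ["price", "advertising", "competitor sales"],
        },
    },
}

def choose(prompt, options):
    for i, opt in enumerate(options, 1):
        print(f"  {i}. {opt}")
    return options[int(input(prompt + " ")) - 1]

task = choose("Task?", list(MENU_TREE))
agg = choose("Aggregation?", MENU_TREE[task]["aggregation"])
y = choose("Y axis?", list(MENU_TREE[task]["y_axis"]))
x = choose("X axis?", MENU_TREE[task]["y_axis"][y])  # options depend on the y choice
print(f"Plot {y} vs {x} ({agg})")
```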

In early systems, designers needed to provide menu systems that made sense in a fairly linear fashion. While they could display screens as a function of the options selected to that point, such systems typically did not have the ability to provide “intelligent” steps through the process. Today's environments, which typically provide some metalogic and hypertext functionality as well as some intelligent expertise integrated into the rules, can provide paths through the menu options that relieve users of unnecessary stops along the way.

Depending upon the programming environment, the menu choices might have the check boxes or radio buttons illustrated in Figure 5.12, or underscores, or simply a blank space. The system might allow the user to pull down the menu or have it pop up with a particular option. Indeed, in some systems, users can click the mouse on an iconic representation of the option. These icons are picture symbols of familiar objects that can make the system appear friendlier, such as a depiction of a monthly calendar for selecting a date.

Ideally, the choice from among these options is a function of the preferences of the system designers and users. In some cases, the choice will be easy because the programming environment will support only some of the options. In other cases, multiple options are allowed, but the software restricts the meaning and uses of the individual options. For example, in some languages, the check box will support users selecting more than one of the options whereas the radio button will allow users to select only one. Before designing the menus, designers need to be familiar with the implications of their choices.
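In Python's tkinter, for instance, that distinction is built in: check buttons each carry their own variable and so permit multiple selections, while radio buttons share one variable and permit only one. A minimal sketch, with hypothetical option names:

```python
# Checkbuttons allow multiple selections (one BooleanVar each);
# Radiobuttons share a single variable, so selecting one deselects the others.
import tkinter as tk

root = tk.Tk()

# Check boxes: independent variables, any combination may be selected.
sales = tk.BooleanVar()
costs = tk.BooleanVar()
tk.Checkbutton(root, text="Sales", variable=sales).pack(anchor="w")
tk.Checkbutton(root, text="Costs", variable=costs).pack(anchor="w")

# Radio buttons: one shared variable, so only one value at a time.
period = tk.StringVar(value="Quarterly")
for p in ("Annually", "Quarterly"):
    tk.Radiobutton(root, text=p, variable=period, value=p).pack(anchor="w")

root.mainloop()
```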

However the options are displayed on the screen, users might also have a variety of ways of selecting them. In most systems, the user would always have the arrow keys and “enter” key to register options. Similarly, most systems support pressing a character (typically the first letter of the command) to select an option. Many systems also support the use of a mouse in a “point-and-click” selection of options. Less often, we find a touch screen, where the user literally selects an option by touching the word or the icon on the screen, or a light pen, where the user touches the screen with the end of a special pen. In a voice input system, the user selects an option by speaking into a microphone connected to the computer. The computer must then translate the sound into a known command


Figure 5.15. Question-answer format.

and invoke the command. This option is still rare. Voice systems can accept only limited vocabulary and must be calibrated to the speech patterns of each user.

Question-Answer Format. A second option for the action language is to provide users with questions they must answer. In text form, this is actually a precursor to the modern menu and tends to be found only in legacy systems. However, the option appears in newer systems that use voice activation of menus. Since it is easier to show the text form in the book, that is the example that will be used. An example of computer questions and user answers is shown in Figure 5.15.

One attribute of the question-answer format in some environments is the opportunity to embed information into the questions. Such information might be the name of the user, the project of interest, or other information regarding the use of the system. For example, the previous example could be redefined as shown in Figure 5.16. While some users respond favorably to the use of their name in these questions, others find it quite annoying. Furthermore, the use of the personalized questions tends to slow down the processing and make the questions appear much longer and more difficult to read.
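A minimal sketch of a question-answer dialogue, with optional embedding of the user's name into the prompts; the questions and the user name are hypothetical.

```python
# A minimal sketch of a question-answer action language, optionally embedding
# information (here, the user's name) into the questions.
def ask(question, user=None):
    prefix = f"{user}, " if user else ""
    return input(prefix + question + " ")

user = "Chris"   # illustrative; would come from login or profile data
report = ask("which report would you like?", user)
period = ask("for which quarter?", user)
print(f"Preparing {report} for {period}...")
```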

The goal of the question-answer approach is to give the appearance of flexibility in proceeding through the options of the system. Indeed, its usefulness is optimized when it is most flexible. The question-answer format works best when the user has more control over the system and its options. However, coding such flexibility can be infeasible in many programming environments. Thus this type of action language is generally implemented as a fixed sequence and format, which is very rigid and often limiting to the user.

Command Language. The command language format allows user-constructed statements to be selected from a predefined set of verbs or noun-verb pairings. It is similar to a programming language that has been focused on the task of the DSS. An example of a command language format is shown in Figure 5.17.

The command language format allows the user to control the system's operations directly, providing greater latitude in choosing the order of the commands. In this way, the


Figure 5.16. Personalized question-answer format.

user is not bound by the predetermined sequencing of a menu system and can ignore options that are not pertinent to a specific inquiry. The command language can be structured hierarchically, however, so that one major command will control all auxiliary commands unless specific alterations are required. Notice that in the example the user must specify the columns and rows to be able to display a report. In the event the user wants more control over the report, he or she can have it, as shown in the latter parts of Figure 5.17.
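A minimal sketch of a command language interpreter with a small predefined verb and noun vocabulary; the commands are illustrative, not those of Figure 5.17.

```python
# A minimal sketch of a verb-noun command language with a predefined vocabulary.
VERBS = {"display", "graph", "print"}
NOUNS = {"sales", "costs", "profit"}

def execute(line):
    tokens = line.lower().split()
    if len(tokens) != 2 or tokens[0] not in VERBS or tokens[1] not in NOUNS:
        return f"Unknown command: {line!r} (try e.g. 'display sales')"
    verb, noun = tokens
    return f"Executing {verb} on {noun}"

print(execute("DISPLAY sales"))   # valid: case-insensitive verb-noun pair
print(execute("graph profit"))    # valid
print(execute("make coffee"))     # rejected: not in the vocabulary
```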

More importantly, command language gives the user complete access to all the options available. Hence, users can employ the full range of commands and the full variety of subcommands. Since the combinations and the ways in which they are used are unlimited,

Figure 5.17. Command language format.


the user has greater power than is available with any other action language format. The command language format is thus appreciated by the “power” user, or the experienced and frequent user who wants to push the system to its full capability.

However, such a format is a problem for the infrequent user and a nightmare to the inexperienced user who is likely to forget the commands or the syntax of their use. Such problems can be mitigated with the use of “help menus,” especially those that are context sensitive.

Generally, DSS do not support only command language formats because of their inaccessibility. However, good design typically allows both a menu format and a command language format. In this way, the user has the ability to make the trade-offs between flexibility (or power) and ease of use.

Input-Output Structured Formats. The input-output (I/O) structured formats present users with displays resembling a series of forms, with certain areas already completed. Users can move through the form and add, change, or delete prespecified information as if completing the form by hand. Like question-answer formats, this kind of user interface tends to be associated primarily with legacy systems.

Consider a DSS used by builders or designers of homes. Once they are satisfied with their design requirements, they need to place an order to acquire the necessary materials. While ordering is not the primary function of the DSS, it might be very useful if they could simply take the information from their design specifications and move it to an order form like the form shown in Figure 5.18. Once the users are satisfied with the completed form, they can send it directly to the wholesaler.

Figure 5.18. I/O structured format.


It is not surprising that such I/O structured formats are not commonly seen in DSS, because they replicate a repeated, structured manual process. They should not be a primary action language option in a DSS; however, they can be used as a supplement. It makes sense to include an order form as a part of the DSS in our example because its function is integrated with the primary function of the system. Since the completion of the form is integrated with the development of the design, as design features change, the form will be updated immediately. For example, if the designer later finds a need for three items, rather than the two items first entered into the form, the order form will be updated immediately. Or, if the designer decides a conventional widget will not suffice and substitutes an oblique widget, the form will be updated automatically.
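The immediate-update behavior might be sketched as follows in Python, where the order form always renders from the current design specification; the item names follow the text's widget example, and the structure is an assumption.

```python
# A minimal sketch of an I/O structured form kept in sync with the design:
# changing the design specification immediately changes the prefilled form.
from dataclasses import dataclass, field

@dataclass
class Design:
    items: dict = field(default_factory=lambda: {"conventional widget": 2})

@dataclass
class OrderForm:
    design: Design
    def render(self):
        return "\n".join(f"{qty:>3}  {name}" for name, qty in self.design.items.items())

design = Design()
form = OrderForm(design)
print(form.render())                        # 2 conventional widgets

design.items["conventional widget"] = 3     # designer now needs three
print(form.render())                        # form reflects the change at once

del design.items["conventional widget"]     # substitute an oblique widget
design.items["oblique widget"] = 3
print(form.render())
```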

The question that should be troubling you is, why have the designer complete the order form at all? Why not have a clerk place the order? Under some circumstances that might be reasonable. However, a designer tends to have preferences for styles, workmanship, and other factors of particular manufacturers. Part of the actual design is in fact the selection of the manufacturer. Or, the designer might want to complete some cost sensitivity analyses on a particular design in order to make trade-offs among various options which could have differential impact on the total cost. Hence, the costing function must be part of the DSS. However, part of the functionality of the system might be to send information to clerks about parts not specified by the designer so they can actually place the orders.

Free-Form Natural Language. The final action language option is the one most like conventional human communication. By “free-form,” we imply that there is no preconceived structure in the way commands should be entered. By “natural language,” we imply that the terms used in the commands are not specified by the system but rather are chosen by the users themselves. Hence, the system cannot rely upon finding “key terms” in the midst of other language (as it might with the question-answer format), because they may not be present. For example, rather than requesting a “report,” users might request a “summary” or a “synopsis” of the information. The system must be able to scan a request, parse the language, and determine that the requested summary is actually a report. So the same request that was presented in Figure 5.15 (in the question-answer section) might now be presented as in Figure 5.19.

Figure 5.19. Free-form natural language format.
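A minimal sketch of the synonym-mapping step, which only hints at real natural language parsing; the vocabulary table is invented for illustration.

```python
# A minimal sketch of mapping free-form vocabulary onto known commands:
# "summary" or "synopsis" must be recognized as a request for a report.
SYNONYMS = {
    "report": "report", "summary": "report", "synopsis": "report",
    "graph": "graph", "chart": "graph", "plot": "graph",
}

def interpret(request):
    for word in request.lower().split():
        if word in SYNONYMS:
            return SYNONYMS[word]
    return None   # no known command recognized

print(interpret("Give me a synopsis of last quarter"))   # -> report
print(interpret("Could you plot sales by region"))       # -> graph
```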


While parsing of this request can be accomplished, it takes extra computer power and extra processing time. Under conditions of limited possibilities for the requests, such systems have been shown to perform adequately. However, this approach might produce an inappropriate result, especially if the user has particularly unusual terminology (as might be the case if the system serves users transnationally) or if the range of options is large. The possibility is troubling because the requested information might be close to the intended result and the error might not be noticed.

If the input medium is voice, a free-form natural language format can become particularly difficult to implement because of the implications of intonation and the confusion of homonyms. On the other hand, it is with voice input that natural language makes the most sense, especially for addressing special circumstances or needs. Such systems have their greatest contribution in serving handicapped users who cannot use other input mechanisms. Under these conditions, the extra programming and computer needs are justified because they provide empowerment to users.

Display or Presentation Language

While the action language describes how the user communicates to the computer, the second aspect, the presentation language, describes how the computer provides information back to the user. Of course, such an interface must convey the analysis in a fashion that is meaningful to the user. This applies not only to the results at the end of an analysis but also to the intermediary steps that support all phases of decision making. Furthermore, the presentation must provide a sense of human control of the process and of the results. All of this must be accomplished in a pleasing and understandable fashion without unduly cluttering the screen.

Visual Design Issues. The goal of the display of a DSS is for people to be able to understand and appreciate the information provided to them. The display should help users evaluate alternatives and make an informed decision and do that with a minimum amount of work. Don't make the users think about how to use the system, but rather encourage them to think about the results the system is providing. To that end, displays should be simple, well organized, understandable, and predictable.

Since 1992, IBM has worked with the Olympic Committee to create the Olympic Technology Solution. This tool was written in object code for use in future Olympic games. The system works with 40,000 volunteers as well as countless home users. This requires the system to be truly human-centric and accessible. Part of the secret in achieving clarity of the user interface is to separate the various components of the system into separately accessed modules. Hence, users can focus on the Results System, the Press Information System, the Commentator Information System, or the Games Management System. The Results System will deliver times to the 31 Olympic venues, the pagers, and the Internet. Hence, scoreboards and a Web page will obtain their information from the same source at approximately the same time. The Press Information System and the Commentator Information System get not only the game results but also personalized athlete profiles and other statistical information. The Games Management System handles all of the operational information for the games.


The first rule of design is that the display should be readable. Of course, that means that it should be understandable and not overly verbose. All interfaces should use the fewest possible words, and the terminology used on the display should be that of the user, not the designers. Readability also implies that you can discern the words. Reading is really a form of pattern recognition, and so a combination of uppercase and lowercase letters is the easiest text to read. The chosen font should also be selected to help users recognize patterns. Although most printed text uses serif fonts, researchers have found they are harder to discern on a display. Instead, designers should use a sans serif font, such as Arial, Helvetica, or Tahoma. In addition, the font size should be large enough for comfortable reading; generally this requires a font size of at least 10 pixels. Finally, allowing pattern recognition implies that the user can discern the letters. This requires as much contrast as possible between the color of the background and the color of the font. If the colors are too close together, such as navy and black or yellow and white, users will have difficulty finding the letters. Of course, if your interface is audible, then similar rules apply, such as making the words clear, talking slowly enough for words to be discerned, and avoiding background sounds that get in the way.
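Contrast can even be checked computationally. Here is a minimal sketch using the WCAG 2.x relative-luminance formula (an assumption; the text does not reference WCAG) to score background/font pairs; a ratio near 21 is maximal, while ratios near 1 are unreadable.

```python
# A minimal sketch of checking background/font contrast with the WCAG 2.x
# relative-luminance formula; the example color pairs are illustrative.
def luminance(rgb):
    def channel(c):
        c /= 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

print(contrast_ratio((0, 0, 0), (255, 255, 255)))   # black on white: 21.0
print(contrast_ratio((0, 0, 128), (0, 0, 0)))       # navy on black: far too low
```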

The second rule of design is to control color. There is a temptation for designers to use every color that is available to them. But using many colors increases the time it takes users to discern the information on the screen. Instead of finding it easier to see patterns, users actually spend more time trying to remember what the various colors mean and may miss the patterns afforded to them. Similarly, designers should limit the number of saturated colors used and take care in their placement. The basic display should use neutral colors, which have a calming effect and actually encourage people to keep looking at the display. As stated in the previous paragraph, there must be enough contrast between items for the user to discern them. However, designers should take care not to use saturated complementary colors because that much difference actually causes optical illusions. On a neutral background, bright colors, used selectively, can focus the users' attention on important or concerning results on the display. Or designers can highlight relationships and similarities by repeating colors for different information. Finally, designers should take care that colors are not the only cues available, since many individuals have some form of color blindness and thus will not be able to discern the differences.

The third rule of design is to control location and size. On a display, the largest item and the one in the top left corner will get the users' attention first. Using that information, designers can display items so as to help users find the most important, the most critical, the most frequently used, or the most summarized information. The order in which items appear on the screen should make sense to the audience and reflect their view of the choice context. Contiguity in location will cause decision makers to believe the items should be considered as a group, so separate diverse items. Information that belongs together should be put together on the display and connected. A small box or lines around such items will help to focus the user on the similarities; these lines should be in a color consistent with the primary font and should be as narrow as possible.

The fourth rule of design is to keep the display organized. Of course, the less that is on the screen, the easier it is to look organized. Designers should avoid clutter and noise in the interface that might distract from the important objects the user needs to consider. Overembellishment, overuse of boxes and rules, insufficient use of white space, and poor use of color all threaten the look of organization on a page. Instead, consistent (within a particular display and across displays) and moderated use of size, shape, color, position, and orientation on the screen make the page appear more organized.

The fifth rule of design is to make the navigation easy. Of course this means there should be an obvious way for the user to move from display to display, to drill down in


the data, or to find wanted information. It also means not having items that merely look like navigational devices on the page. For example, it is best not to use nonfunctioning arrows as pure design elements. Icons should be used sparingly and in a well-defined manner so people do not confuse them with navigational tools. If the content takes more room than the viewable display, make sure there are clear scrollbars to help users see the additional information.

Finally, any design element that takes away from the user interacting with the information should be avoided.

Windowing. How one accomplishes the task of organizing information depends on the kind of models, the kind of decision maker, and the kind of environment in which one is working. For example, in the New York City courts example illustrated in Chapter 1, designers faced the problem of how to profile defendants in a manner that would help judges see the entire perspective of the case. Their solution to the enormous amount of information available about each defendant is to use a four-grid display in a Windows environment. The top half of the screen displays information about the infractions in which the defendant may have been involved; the left portion provides information about the complaint in question while the right portion summarizes the defendant's prior criminal history. The bottom-left quadrant summarizes the interview data about the defendant's socioeconomic and health conditions. Finally, the bottom right is reserved for the judge's comments. The software lets the user focus on any of the quadrants through screen maximization and the use of more detailed subroutines. For instance, in its normal state, the bottom-left interview screen displays the defendant's education level (ReadingProb: Y), housing status (Can Return Home: N, Homeless: Y), and drug habit (Requests Treatment: N). Maximized, it details everything from what drugs the person uses to whom he or she lives with and where. In addition, problematic answers are displayed in red so as to highlight them for users.

The one underlying tenet of presentation language is that the display should be “clean” and easy to read. Today, use of the Windows standard for many products makes the design of an uncluttered display easier. In particular, this standard brings with it the analogy of a desktop consisting of files. On the screen, we see windows, each representing a different kind of output. One window might include graphs of the output while another includes a spreadsheet and still another holds help descriptions that encourage sensitivity analyses. An example is shown in Figure 5.20. The use of different windows for different kinds of information separates different kinds of results so users can focus their attention on the different components; the windows give order to the items at which the user is looking.

Of course, everyone has seen desktops that are totally cluttered because there are so many aspects of the problem one needs to consider. Layering options allow the various

One of the most widely publicized examples of virtual reality used by the public is a setup created by Matsushita in Japan. This retail application helps people choose appliances and furnishings for the relatively small kitchen spaces of Tokyo apartments. Users bring their architectural plans to the Matsushita store, and a virtual copy of their home kitchen is programmed into the computer system. Buyers can then mix and match appliances, cabinets, colors, and sizes to see what their complete kitchen will look like, without ever installing a single item in the actual location.


Figure 5.20. Windowed output.

windows to overlap in many applications. Designers should, however, refrain from putting too much on the screen at once for the same reason decision makers are discouraged from having cluttered desks—too many things get lost, and it becomes hard to get perspective on the problem. Instead, if the application allows it, the designer should use icons to indicate various options, as illustrated in Figure 5.21. When the users want to examine that particular aspect of the problem, they can simply click on an icon to enlarge it so it can be viewed in its entirety.

Windows can be sized and placed by the users so they can customize their analysis of the information. Hence, users can have cluttered desktops if they choose, but clutter should not be inherent in the design of the DSS.

Representations. The most common form of output is to show the results of some analysis. Suppose, for example, that the goal were to show the sales of the various divisions for the last year. The appropriateness of the output depends on what the decision maker expects to do with the information. If the decision makers simply wanted to know if the various regions were meeting their goals, they might appreciate the use of metriglyphs, such as those shown in Figure 5.22. Metriglyphs are simply symbols that help convey information to the user quickly. Those with “smiling faces” show sales that met the goals, while those


Figure 5.21. Icon options.

with "sad faces" did not. Further, the larger the smile, the more sales exceeded objectives, and the larger the grimace, the more seriously they missed. We can even illustrate one set of results with the "smile" and another with the "eyes" of the face. For example, if the smile represented the profit level, the eyes might represent the dividend level. Closed eyes would represent no dividends, while the size of the open eyes would represent the magnitude of the dividends. Of course, not all decision makers (or all cultures) appreciate the cute use of metriglyphs as output. Today a user is more likely to see common glyphs, such as the traffic lights in Figure 5.23, to allow a quick evaluation of the conditions. This figure provides the same evaluation as in Figure 5.22. However, the user can easily discern the meaning because of his or her understanding of traffic lights. It has the additional benefit of redundancy of message, once with the color and once with the location of the highlighted signal. In addition, the use of such glyphs helps with accessibility for those with color vision disabilities.
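A minimal sketch of the underlying logic: each result maps to a redundant pair of cues, color plus lamp position, in the spirit of the traffic-light glyphs; the regions, figures, and thresholds are invented.

```python
# A minimal sketch of mapping results to redundant status cues (color and
# position), as with the traffic-light metriglyphs in Figure 5.23.
def status(actual, goal):
    ratio = actual / goal
    if ratio >= 1.0:
        return ("green", "bottom")    # met or exceeded goal
    if ratio >= 0.9:
        return ("yellow", "middle")   # close to goal
    return ("red", "top")             # missed badly

for region, actual, goal in [("North", 120, 100), ("South", 93, 100), ("West", 70, 100)]:
    color, position = status(actual, goal)
    print(f"{region:>5}: {color} light ({position} lamp lit)")
```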

Alternatively, if the goal of the analysis were to determine where sales were largest, we might display those on a map with different shadings or colors as codes to show the

Figure 5.22. Metriglyphs.


Figure 5.23. Using traffic lights as metriglyphs.

range of results. Designers should avoid drawing the map to scale in proportion to the sales of the region, as shown in Figure 5.24, since many people do not have a sufficiently strong memory of the size of geographical places to make such representations meaningful.

If the goal were to determine trends over several years, then the most appropriate output is a graph of the results, as shown in Figure 5.25. It is easy to see that some regions increased sales while others decreased and to read off the relative amounts (such as “a lot” or “a little”).

On the other hand, if the decision maker wanted the actual numbers (e.g., to do some hand calculation), then the graph in Figure 5.25 is inappropriate because it is difficult to glean the actual sales figures from it. In this case, a table of numbers, such as Figure 5.26, is more useful.
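A trend graph like Figure 5.25 might be produced with a few lines of matplotlib; the sales figures here are invented for illustration.

```python
# A minimal sketch of a trend graph of sales by region over several years.
import matplotlib.pyplot as plt

years = [2006, 2007, 2008, 2009]
sales = {"North": [110, 125, 138, 150], "South": [95, 92, 88, 80]}

fig, ax = plt.subplots()
for region, values in sales.items():
    ax.plot(years, values, marker="o", label=region)  # one line per region
ax.set_xlabel("Year")
ax.set_ylabel("Sales ($000)")
ax.legend()
plt.show()
```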

Designers should take care to use rich visualizations that convey the analysis most accurately and most efficiently to the user. Consider Figure 5.27, which shows Napoleon's march. This graphic, by Charles Joseph Minard, portrays the losses suffered by Napoleon's army in the Russian campaign of 1812. Beginning at the Polish-Russian border, the top band shows the size of the army at each position during the offensive. The path of Napoleon's retreat from Moscow is depicted by the dark lower band, which is tied to temperature and time scales. So, by simply looking at the graph, you can discern the size of the army and its location and direction at any time as well as the temperature on some days. That powerful graphic contains a substantial amount of information for in-depth examination but also allows users to simply get an overview of the situation.

Some situations are best represented with the association of two or more variables as they change over time. Most of us are not particularly adept at drawing (or viewing) depictions of three or more dimensions. But with today's technology, it is possible to view those changes by watching a graph move over time. The two graphs shown in Figures 2.7 and 2.8 illustrate the end points of such a graphic. Figure 2.7 shows two axes, "life expectancy at birth" and "average number of children per woman." The graph also shows the data by country with the bubbles in the chart. Each country is a bubble. The relative size of the bubble indicates the size of the country, and the color of the bubble illustrates the continent on which the country is located. You can watch the video on Gapminder's website (http://www.gapminder.org/) to see it move, but the end result is Figure 2.8. In this graphic, you can see multiple variables and how they interact over time, again inviting either in-depth analysis or a quick overview of the data.

Data visualization techniques for qualitative data have improved over time as well. Consider the question of something like relationship data, which illustrate how groups are related to one another. For example, consider Figure 5.28. This is a relationship diagram from a social networking site showing one person's contacts through the site. The names around the circle are people with whom this individual is connected. The lines represent associations that these individuals have with others in this group. As you can see, some

Figure 5.24. Map of sales volume drawn to scale.


of the individuals (particularly those at the top) are highly connected to one another while those at the bottom seem relatively unconnected to others in the group. This kind of diagram allows the user to investigate how people—or items—are related and where hubs of activity might be.
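A relationship diagram in the spirit of Figure 5.28 can be sketched with the networkx library (an assumption; the text does not name a tool); the names and ties are invented.

```python
# A minimal sketch of a relationship diagram: nodes around a circle, edges
# showing ties; highly connected people cluster, isolated contacts stand out.
import networkx as nx
import matplotlib.pyplot as plt

G = nx.Graph()
G.add_edges_from([("Ann", "Bob"), ("Ann", "Cara"), ("Bob", "Cara"),
                  ("Cara", "Dev"), ("Eve", "Ann")])
G.add_node("Frank")   # a contact with no ties to the rest of the group

nx.draw_circular(G, with_labels=True, node_color="lightgray")
plt.show()
```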

Another relationship diagram is shown in Figure 5.29. This diagram shows not only associations but also the types of associations. This particular diagram illustrates all of the companies (the darker highlighted items) at which we have placed interns in the last year as well as how many and what kinds of other relationships they have with the department and with each other (the lighter highlighted items). It allows the decision maker to see the depth of the relationship, not simply that there is a relationship.

There are a myriad of other diagramming tools available to the DSS designers to help them help decision makers understand their data properly. Of course, the appropriate output might be animation and/or video rather than a display on a screen. For example, if the model is a simulation of a bank and varies the number of clerks, the types of services handled by each clerk, and number of queues as well as the impact of each factor upon queue length,

Design Insight: Speech Emulatioi

When we emulate speech in a computer, designers need to worry about more than speech recog-nition and synthesis. Researchers have found three important aspects of speech that need to be incorporated. First, speech is interactive. Few of us can actually hold our part of the conversation without hearing something in return. Without some form of feedback, our speech will probably increase in speed and probably even in tone. Research teams at MIT* found that these changes in speech can actually cause the computer to reject commands it would otherwise adopt. Hence, they incorporated phrases such as “ah ha” that would be uttered at judicious times and found that it helped the human keep his or her speech in a normal range. In other words, some utterances in speech are protocols such as those found in networking handshaking.

A second important aspect of speech is that, if the participants know each other well, meaning can be expressed in shorthand language that probably would be meaningless to others. Over time, shared experiences lead to shared meanings in phrases. For example, occasionally one of my colleagues will utter “1-4-3-2” in a conversation. Those of us who know him well know this is shorthand for “I told you so” (the numbers reflect the number of letters in each of the words). To others, it makes no sense. Another colleague, when discussing the potential problems of a strategy I was about to adopt for a meeting, warned me to remember Pickett's charge. Now, to those who know nothing about the American Civil War, this warning tells us nothing. Those who know about the war, and the Gettysburg confrontation in particular, know that he was telling me that we all face decisions with incomplete information and that we should not become too confident in our abilities in light of that incomplete information. In fact, he was warning me to (a) check my assumptions and (b) look for indications of crucial information that could suggest a need to change my strategy. Many historians believe that had Pickett's charge been successful, the American Civil War might have had a different outcome.

A third important aspect of speech is that it is contextual. A phrase or sentence in context might be totally understandable but quite baffling out of context. For this reason, we generally have redundant signals in human interactions. Somehow that same redundancy needs to be incorporated into human-computer interactions to ensure understandability.

*Negroponte, N., “Talking with Computers,” Wired, Vol. 2.03, March 1994, p. 144.


Figure 5.25. Graphical representation.

then an animation of the queues might be more illustrative than the aggregated, summary statistics.

Perceived Ownership of Analyses. In addition to providing the appropriate type of output for the results under consideration, designers should remind users that they control the analyses and therefore retain the decision-making authority. Computer novices may not feel “ownership” of the answer because it was something “done by the computer,” not really by them. One way of counteracting this tendency is to provide users an easy way of changing the analyses if the results do not answer the question appropriately or completely. For example, consider the screen shown in Figure 5.30. Note that in this analysis we can compute profitability either with discounting or without it. The decision maker has chosen discounting (that box is checked). However, the results without discounting are easy to obtain given the on-screen keys. Similarly, Figure 5.31 encourages users to experiment with the model (by providing different estimates for key variables) by prompting the user with the “revise” buttons and by making it easy to do. Note in Figure 5.31 that the user has the option of revising both decision variables under consideration, clerks and queues. Similarly, the user has the ability to affect the value of the environment variable, expected


Figure 5.26. Disaggregate posting of results.

number of customers per hour.2 However, relevant statistics (in this case, average waiting time) are recomputed only after the user selects the “recompute” button. This gives users the ability not only to enter new values but also to validate that each entered value is the one intended. Similarly, the simulation is rerun only when the user requests it.
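To make this recompute-on-request behavior concrete, the following is a minimal JavaScript sketch of the idea; the element ids (clerks, queues, arrivals, recompute) and the runSimulation stub are illustrative assumptions, not the chapter's actual code.

function runSimulation(clerks, queues, arrivalsPerHour) {
    // Stand-in for the queueing model: a real DSS would rerun the
    // simulation here and repost the average waiting time.
    console.log("Rerunning with " + clerks + " clerks, " + queues +
                " queues, and " + arrivalsPerHour + " customers/hour.");
}

document.getElementById("recompute").onclick = function () {
    // Capture the revised values only when the button is pressed, so the
    // user can confirm each entry before a (possibly lengthy) rerun.
    var clerks = Number(document.getElementById("clerks").value);
    var queues = Number(document.getElementById("queues").value);
    var arrivals = Number(document.getElementById("arrivals").value);
    runSimulation(clerks, queues, arrivals);
};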

Graphs and Bias. Just as it is important to provide users unbiased use of models, it is also important to provide them unbiased output. What information designers provide, and how they provide it, can affect how that information is perceived by the decision maker. Of course, we assume the designer will not intentionally rig the system to provide biased results. The more dangerous problem is rigging that is done unintentionally.

2 While an average would have been provided automatically, the user may want to test the sensitivity of the model to that parameter. Users should not be expected to complete such testing blindly. Hence, there is a button that allows them to review the relevant statistics over different time horizons and during different times of the day.

Figure 5.27. Minard's map of Napoleon's 1812 Russian Campaign. (Source: E. Tufte, The Visual Display of Quantitative Information, Graphics Press LLC, 1983, 2001, p. 40.) Map is reproduced with permission of the publisher.

Figure 5.28. Relationship diagram.

Suppose, for example, the user is considering a decision regarding the management of two plants and examines average daily productivity in those plants. If the system provides only the average values, it could be giving biased output because it does not help the user see the meaningfulness of those numbers. Average productivity at plant 1 could be 5000, while that at plant 2 could be 7000. This appears to be a big difference. However, if we know the standard deviation in daily productivity is 2000, the difference no longer looks so significant. Hence, simply providing the appropriate supplementary information, as described in Chapter 4, will help provide better support.
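A small worked example makes the point. The JavaScript sketch below uses the figures from the text (averages of 5000 and 7000 and a standard deviation of 2000); the two-standard-deviation display rule is an illustrative assumption.

var plant1Mean = 5000;
var plant2Mean = 7000;
var stdDev = 2000;

// Express the difference in standard deviations rather than raw units.
var standardizedDiff = (plant2Mean - plant1Mean) / stdDev;   // 1.0

// Illustrative display rule: flag only differences beyond 2 s.d.
if (standardizedDiff > 2) {
    console.log("Difference is likely meaningful.");
} else {
    console.log("Difference is within ordinary day-to-day variation.");
}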

Another place where designers inadvertently introduce bias is in the display of graphs. Since most decision makers look at graphs to obtain a quick impression of the meaning of the data, they might not take the time to consider whether their impression is affected by the way the graph is displayed. For example, consider the effect of the difference in scaling of the axes in Figure 5.32.

In the first version of this graph, the axes were determined so that the graph would fill the total space. Clearly this graph demonstrates a fairly high rate of revenue growth. However, by simply increasing the range of the x axis, the second graph gives the impression of a considerably higher rate of growth over the same time period. Similarly, increasing the range of the y axis makes the rate of growth appear much smaller in the last graph. The designer must ensure this misrepresentation does not occur by correctly choosing and labeling the scale.
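One safeguard is to fix the axis-bounds rule in code so that every graph of the same data is scaled the same way. The following JavaScript sketch is one such rule; the 10% padding and the zero-baseline option are illustrative assumptions, not prescriptions from the text.

function yAxisBounds(values, forceZeroBaseline) {
    var max = Math.max.apply(null, values);
    var min = forceZeroBaseline ? 0 : Math.min.apply(null, values);
    var pad = (max - min) * 0.10;     // the same padding rule every time
    return { low: min, high: max + pad };
}

var revenues = [120, 135, 150, 170, 195];   // hypothetical series
console.log(yAxisBounds(revenues, true));   // approximately { low: 0, high: 214.5 }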


Figure 5.29. Depth of relationship diagram. UMSL's external relationship map. Software developed by S. Mudigonda, 2008.

The use of icons on bar charts can leave inappropriate impressions too. Consider Figure 5.33, which presents a histogram of the revenues for three different regions using the symbol for the British pound sterling. Clearly, revenues are greatest in region 2 and least in region 3. However, the magnitude of the differences in revenues is distorted by the appearance of the symbol. To increase the height of the symbol and maintain the appropriate proportions, we must also increase the width. Hence, the taller the symbol, the wider it becomes. As both dimensions increase, the symbol's area increases as the square of the increase in revenues, thereby exaggerating the magnitude of the increase. Instead, a better option is to stack the icon to get the appropriate magnitude represented, as shown in the second portion of the figure.
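The arithmetic behind the distortion is easy to verify. In the JavaScript sketch below, tripling revenues makes a proportionally scaled symbol nine times as large, while a stack of fixed-size icons grows only threefold; the function names are illustrative.

function scaledSymbolArea(baseArea, revenueRatio) {
    // Height and width both grow by revenueRatio, so area grows by its square.
    return baseArea * revenueRatio * revenueRatio;
}

function stackedIconArea(baseArea, revenueRatio) {
    // The icon count grows by revenueRatio; each icon keeps its size.
    return baseArea * revenueRatio;
}

console.log(scaledSymbolArea(1, 3));   // 9 -- triple the revenue, nine times the ink
console.log(stackedIconArea(1, 3));    // 3 -- triple the revenue, three times the ink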

Another factor that can provide perceptual bias for decision makers is the absence of aggregation of subjects when creating a histogram or pie chart. Consider Figure 5.34, which displays the sales of 23 sales representatives from nine regions. It is impossible to determine any differences in the typical performance of the regions because the data are not aggregated; rather, what you see in this graph are the differences among sales associates.


Figure 5.30. On-screen analysis change prompting.

The eye is directed toward the outliers, such as the tenth associate, who had high sales, and the thirteenth associate, who had relatively low performance. The problem is exacerbated, of course, as the number of subjects increases.

Consider, instead, Figure 5.35, in which sales associates are aggregated by region. Here the regional pattern is much clearer and we are not inappropriately distracted by outlier observations. On the other hand, aggregated data can allow decision makers to generalize

Figure 5.31. Additional on-screen prompting.


inappropriately from the data. Specifically, Figure 5.35 does not identify how many sales associates work in each region or what the dispersion of performance is among those associates. A better design would identify the number of associates and a measure of dispersion either in a legend or on the graph itself.
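A minimal JavaScript sketch of the aggregation just described follows; the record layout (region and sales fields) is an illustrative assumption. It produces, for each region, the count, mean, and standard deviation that the recommended display would need.

function aggregateByRegion(sales) {
    var regions = {};
    sales.forEach(function (rep) {
        var r = regions[rep.region] || (regions[rep.region] = { n: 0, sum: 0, sumSq: 0 });
        r.n += 1;
        r.sum += rep.sales;
        r.sumSq += rep.sales * rep.sales;
    });
    // Convert the running totals into a mean and standard deviation.
    Object.keys(regions).forEach(function (name) {
        var r = regions[name];
        r.mean = r.sum / r.n;
        r.stdDev = Math.sqrt(r.sumSq / r.n - r.mean * r.mean);
    });
    return regions;
}

var reps = [
    { region: "East", sales: 410 }, { region: "East", sales: 390 },
    { region: "West", sales: 520 }, { region: "West", sales: 480 }
];
console.log(aggregateByRegion(reps));
// East: n=2, mean=400, stdDev=10; West: n=2, mean=500, stdDev=20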

We cannot enumerate here all the distortions and biases that can be introduced in a graph. However, awareness of the problems can help designers avoid them in DSS design.

Support for All Phases of Decision Making. Displays must be constructed so as to help decision makers through all the phases in decision making. According to Simon's model discussed in Chapter 2, this means there must be displays to help users with the intelligence phase, the design phase, and the choice phase.

In the first of these phases, intelligence, the decision maker is looking for problems or opportunities. The DSS should help by continually scanning relevant records. For an operations manager, these records might be productivity and absenteeism levels for all the plants. For a CEO, they might be news reports about similar companies or about the economy as a whole. Decision support is the creation and automatic presentation of exception reports or news stories that need the decision maker's attention. Hence, when the operations decision maker turns on the computer, he or she could automatically be notified that productivity is low in a particular plant or absenteeism is high in another as an indicator of a problem needing attention. When the CEO turns on the computer, automatic notification of changes in economic indicators might suggest the consideration of a new product. The system does not make the decision; rather it brings the information to the user's attention. What must be scanned and how it is displayed for it to highlight problems or opportunities are a function of the specific DSS.
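The scanning itself can be as simple as a filter over the records. The JavaScript sketch below flags plants whose productivity falls below, or absenteeism rises above, chosen thresholds; the field names and threshold values are illustrative assumptions.

function exceptionReport(plants, minProductivity, maxAbsenteeism) {
    return plants.filter(function (p) {
        // Keep only the records that need the decision maker's attention.
        return p.productivity < minProductivity || p.absenteeism > maxAbsenteeism;
    }).map(function (p) {
        return p.name + ": productivity " + p.productivity +
               ", absenteeism " + Math.round(p.absenteeism * 100) + "%";
    });
}

var plants = [
    { name: "Plant 1", productivity: 5200, absenteeism: 0.03 },
    { name: "Plant 2", productivity: 4100, absenteeism: 0.02 },
    { name: "Plant 3", productivity: 5600, absenteeism: 0.09 }
];
console.log(exceptionReport(plants, 4500, 0.05));
// lists Plant 2 (low productivity) and Plant 3 (high absenteeism)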

Figure 5.32. Scaling deception.




Figure 5.33. Distortion in histogram.

In the second phase of decision making, users are developing and analyzing possible courses of action. Typically they are building and running models and considering their sensitivity to assumptions. Displays must be created that will help users generate alternatives. This might be as easy as providing an outlining template on which to brainstorm or the ability to teleconference with employees at a remote plant to initiate ideas.

Displays must also be created to help in the building, analysis, and linking of models. This includes the formulation of the model, its development and refinement, and analysis. This means displays should be able to prompt users for information necessary to run the model that has not been provided. The system should provide suggestions for improvements to the models as well as alert the user to violations of the model's assumptions. Finally, displays must provide diagnostic help when the model does not work appropriately.

In the choice phase, the decision maker must select a course of action from those available. Hence, the displays should help users compare and contrast the various options. In addition, the displays should prompt users to test the sensitivity of the models to assumptions and to consider alternative problem scenarios.


Figure 5.34. Individual histogram.

Figure 5.35. Aggregated histogram.


Figure 5.36. Use of international symbols. Menu from Marcus, A., “Human Communications in Advanced UIs,” Communications of the ACM, Vol. 36, No. 4, pp. 101-109. Image is reprinted here with permission of the Association for Computing Machinery.

Regardless of what phase of decision making is being supported, the goal of the display is to provide information to the user in the most natural and understandable way. It is critical that any display be coherent and understandable and provide context-sensitive help. Since no one can anticipate all the ideas that might be generated from any particular display, the system must be flexible enough to allow nonlinear movement. For example, the user should be able to transfer to a new topic or model, display a reference, seek auxiliary information, activate a video or audio clip, run a program, or send for help.

Knowledge Base

The knowledge base, as it refers to a user interface, includes all the information users must know about the system to use it effectively. We might think of this as the instructions for systems operation, including how to initiate it, how to select options, and how to change options. These instructions are presented to the users in different ways. Preliminary training for system use might be individual or group training and hands-on or conceptual training. To supplement this training, there is typically some on-screen prompting and help screens with additional information.

In the DSS context, there are additional ways of delivering the knowledge base. One popular mechanism is training by example. The user is taken through a complete decision scenario and shown all the options used and why. The system also can provide diagnostic information when the user is at an impasse, such as additional steps in an analysis. Or it can offer suggestions for additional data use or analyses. For example, the system


might recommend to users of mathematical programming techniques that they consider postoptimality analyses.

The goal is to make the system as effortless as possible so as to encourage users to actually employ the software to its fullest. This means there must be ways for both experienced and inexperienced users to obtain the kind of help they need, and the training and help must address specific techniques and models. Users typically are not experts in statistical modeling, financial modeling, mathematical programming, or the like. They need help in formulating their models and using them properly. This help must be included in the system.

Knowing how the users will employ the system is important to understanding what one can assume of them. Historically, users have used DSS in three modes: subscription mode, chauffeured mode, or terminal mode.3

Subscription mode means that the decision maker receives reports, analyses, or other aggregated information on a regular basis without request. This mode does not allow for any special requests or user-oriented manipulation or modification. Reports might be generated on paper or sent directly to the user's computer for display. Clearly there is very little involvement of the user with the system, and hence users expect any computer requests made of them to be trivial.

Chauffeured mode implies that the decision maker does not use the system directly but rather makes requests through an assistant or other intermediary, who actually performs and interprets the analysis and reports the results to the decision maker. Since these “chauffeurs” are often technical experts, the systems designer can provide more “power user” instructions and fewer interpretation aids.

Finally, terminal mode implies the decision maker actually sits at the computer, requests the data and analyses, and interprets results. These users are often high-level executives who should not be expected to remember a lot of commands and rules for usage. It is especially important for them to have easy navigation through the system, accessible help options for both navigation and content that are context sensitive, and recommendations regarding better analyses. Touch screens, mouse entry, and pull-down menus have made many sophisticated systems seem easy.

Modes of Communication. In a listserv discussion group regarding the use of computers in education, one teacher wrote that her class requested information about “what it was like before computers.” The answers they obtained with regard to communication included discussion of voice inflections, gestures, and other forms of nonverbal communication that helped people understand what others were trying to convey. Many of us can remember when neatness in written work was another aspect of communication. In any kind of communication, there is significant room for misinterpretation. Keeping in mind the fact that computers do not understand nuances, nonverbal communications, or voice inflections, you begin to understand the care with which designers should regard the user interface design. As user interfaces become more sophisticated, as technology allows for greater variation in the kind of interfaces designed, and as decisions become more global, our concern about the appropriateness of every kind of communication increases.

Four basic elements of communication need attention: mental models, metaphors, navigation of the model, and look. The mental model is an explanation of someone's thought process about how something works in the real world. It is a representation of the surrounding world, the relationships between its various parts, and users' intuitive perceptions about their own acts and their consequences. It, in fact, describes how we

3 The classical definition of modes also includes the clerk mode. This mode differs from the terminal mode only in that decision makers prepare their requests offline and submit them in batch mode. While once common, such batch processing of DSS is rarely seen today.


believe tasks are performed. The advantage of the mental model is that it provides a series of shortcuts to explaining the relationships among ideas, objects, and functions and how they might work to complete a task.

For example, consider how people thought about the economic meltdown of 2008. The economy was referred to as a shipwreck, a perfect storm, an earthquake, a tsunami, an Armageddon, a train wreck, a crash, and cancer. Each of those terms brings with it a set of activities that must occur, a set of feelings of the user, and insight about how to respond. In computer terms, it is common today to use a desktop as a representation of the operation of a computer because it is familiar. Users know how to behave in an office, understand what the items are for (e.g., information might be kept in file folders, access to messages might be through a telephone icon, the erase function might be represented by a garbage can), and have an intuition for how to work in it. This way of representing specific operations makes sense because it brings with it all the shared meaning of these objects. However, if your place of business is not an office, this way of organizing your computer probably would not make sense. For example, if your task is in an operating room of a hospital, you need your user interface to resemble the functions you are accustomed to performing. Your screen should look more like a medical chart because it groups together processes and information in the way medical personnel are accustomed to reading it. Understanding how users think about their job is crucial to making the system work for them.

Within the mental model are metaphors. These metaphors rely upon connections the user has built up between objects and their functions to help bolster intuition about how to use the system. Since metaphors provide quick insight into purposes and operation, it is thought they can help users see purposes and operations of the system more clearly with less training. They are used every day to represent fundamental images and concepts that are easily recognized, understood, or remembered, so as to make the system operation easier to understand. The desktop image, for example, helps us understand how applications are launched and controlled by using those technologies. Similarly, the classroom metaphor brings with it not only an expectation of how furniture is arranged but also the general operating rules of the group. In the design of DSS user interfaces, metaphors refer to the substitution of a symbol for information or procedures; the substitution of an associated symbol with the item itself, such as a red cross with medical care; the personification of an inanimate object; or the substitution of a part of a group for the whole, such as the use of one number to indicate data. Before building metaphors into a system, we need to be sure they will convey the intended meaning by being intuitive, accurate, and easily understood. Whether icons, pictorial representation of results (such as in animations or in graphics), or terminology (such as the difference between browse mode and edit mode), metaphors ease and shorten communication but only if all parties share the meaning. Consider Figure 5.36, which provides metaphors for type specification. While many people would understand the symbols at the right of this screen, clearly not everyone would.

Design Insights: Flexibility

Often the benefit of user interfaces is in simplicity. For example, in one DSS used for supplier selection, users are required to enter information into only a limited number of cells in a matrix. To them, this provides complete flexibility because they can still get decision support even in the face of incomplete information. Once the data entry is complete, the DSS ranks the criteria by importance and presents a model that displays only those factors that ranked highly. This facilitates comparison of alternatives along important dimensions. In addition, if a decision maker notices the absence of a particular criterion that he or she believes is important, he or she is warned of a problem immediately.


Some designers dislike using the literal metaphor approach to design because it can be limiting. Using a metaphor ties the operation of the system to how those items work in the real world. Generally systems do not work like things in the real world so icons do not convey what system designers really mean. That means that there are not many sets of metaphors that are appropriate for explaining how software works, and those that exist do not scale well to involving a large number of functions or activities. Furthermore, while they may help the novice user learn to use the system better, they can prohibit the more

Design Insights: Window Size

Often designers of DSS and other computer systems do not attend well enough to questions of the impact of the screen design on the use of the technology. Studies have shown that some factors heighten emotional response while others calm it. In fact, the literature, taken as a whole, suggests that individuals' interactions with computers and other communication technologies are fundamentally social and natural. One of the current projects of the Social Responses to Communication Technology Consortium is an examination of the effect of the size of the image of a human displayed on a computer for teleconferencing upon individuals' responses to that image. Stanford Professor Byron Reeves was quoted as saying that “many cultures around the world assign magical properties to people who are small. . . . These small people grant wishes, they monitor behavior and they keep people safe. But they also can punish or be bad just for the hell of it.” Professor Clifford Nass further elaborates in that same article, “We want to know, when you see a small face on a screen, do you respond to it as if it were magical? Is it perceived as powerful or capable?” So, the question is, do you have a different response to the two screens below?


Source: J. Morkes.


advanced user from truly seeing the options available in the software. Finally, metaphors can be a particular problem in cross-culturally used systems because they do not mean the same thing to all users.

An alternative to metaphors in design is to rely upon idioms. Unlike metaphors, which rely upon the user having intuition about how the system works, idioms rely upon training of the user to accomplish certain tasks even if the user is unsure why those tasks work. This approach to designing systems does not require users to have the technical knowledge to understand why the system works; instead it only requires that they know certain actions accomplish their goals. There is not an intuitive link born of experience; rather, it is a learned link, much the same way people learn idioms in speech. For example, one does not intuit the relationship between a piece of cake and “being easy”; one learns that something easy is frequently said to be a piece of cake.

Most of the basic usage of windowing software is guided by idioms. The fact that we can open and close windows and boxes, click on hyperlinks, and use a mouse is not guided by our intuition in using these items. Rather, we can use them because they have been taught to us. Idioms are easy to learn and transfer from situation to situation. Users become conditioned to them, and they make the software easier to use. Because idioms are not dependent upon the environment, the culture, or the passage of time, they do not wear down or become less useful as those things change. Thus, generally, they are preferred to metaphors.

The navigation of the model refers to the movement among the data and functions and how it can be designed to provide quick access and easy understanding. In one environment, it might make sense to group together all the models and to create subgroups of, say, specific statistical functions, because users differentiate them from mathematical programming functions. However, in another environment, users think of the kind of question, not the kind of technique, when moving among the options in the DSS. Here, it would be appropriate to group certain statistical tests with financial data and analyses and certain mathematical models with production planning.

Finally, the look of a system refers to its appearance. No one who knows computer company culture would expect to see the same dress code at IBM that was observed at Apple Computer Corporation. By extension, then, we would not expect to find preferences for the same user interface at the two corporations. Just as corporate culture can affect preferences for the user interface, other cultural influences associated with national origin, sex, race, age, employment level, and the interaction among all of those influences will affect the way a person responds to particular user interfaces. Too often, however, designers have assumed that all users will respond similarly.

For example, it is well known that color metaphors mean different things in different cultures. While a red flashing light might be interpreted as an indicator of something important in one culture, it might suggest users stop all processing in another. Similarly, it is believed that the size of an image can affect how we respond to it. A group of researchers at Stanford is studying how different cultures respond to “little people” (as good luck? or as a curse?) to help understand how best to size human images for effective teleconferencing in a DSS framework. Others believe the linear, restrained treatment of menus is received differently in different cultures. They suggest a menu that is more curvilinear and less aggressive, such as that in Figure 5.37, might be received better by some cultures.

While we do not have many guidelines for user interface design today, it is important to reflect on possible differences in needs and use them in our development efforts. Research being conducted now will, in the future, help guide the development effort.


Figure 5.37. An alternative menu format. Menu from Marcus, A., “Human Communications in Advanced UIs,” Communications of the ACM, Vol. 36, No. 4, pp. 101-109. Image is reprinted here with permission of the Association for Computing Machinery.

CAR EXAMPLE

The expected user of the car selection DSS we have been discussing is a consumer who intends to purchase an automobile. It may be the first automobile the user has ever selected or the user may have purchased new automobiles every year for the last 20 years. In addition, the user may never have touched a computer before or may be an expert computer user. This leads to a wide range of user capabilities and user needs for the system, which in turn leads to complications in the design of the user interface.

It is crucial that system designers provide multiple paths through the system to accommodate the needs of all kinds of users. For example, some users may have no idea what kind of automobile to acquire and need guidance at every step of the process. Other users may have a particular manufacturer from which to select, while other users have particular criteria that are of importance to them. Still others may have a small number of automobiles they want to compare on specific functions. The system must be able to accommodate all these decision styles, and the user interface needs to facilitate that process. Examples of commercial systems are shown in Figure 5.38.


Figure 5.38. Initial screens from commercial automobile purchasing system.


Early screens should guide users to the part of the system that will meet their needs. The temptation exists to use the first few screens to gain some insight into the user's needs and his or her preferences for information, but the temptation should be resisted. Users want to see information on these first few screens that convinces them the system will facilitate their choice process; background information about themselves will not do that. Rather, it is important to use some simple mechanism for screening users and deciding what part of the system will be most appropriate to use. Some designers simply ask whether the user wants “no support,” “partial support,” or “total support” from the system. While this may be appropriate in some circumstances, it can be very confusing unless the user can query the system and find what kinds of analyses and access each of those levels provide. An alternative is to pose the question of whether the user knows the set of automobiles from which a selection will be made, whether the user knows the criteria that should be applied to the choice process, or whether the user needs full support in understanding the dimensions that should be evaluated. Further, if the user selects known criteria and specifies financial information, then the choice process should follow a financial model selection. That does not mean that the system cannot pop up warning messages or help screens that suggest consideration of other criteria. Rather, it means that the focus of the process must have face validity and seem relevant to the user.
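In code, this screening question reduces to a simple routing table. The JavaScript sketch below maps each answer to a part of the system; the answer codes and path names are illustrative assumptions.

function routeUser(answer) {
    // Each answer sends the user to the part of the system that matches
    // what he or she already knows; anything unrecognized defaults to
    // full support.
    var paths = {
        "know_cars":     "compare_selected_models",
        "know_criteria": "criteria_driven_search",
        "need_support":  "guided_walkthrough"
    };
    return paths[answer] || "guided_walkthrough";
}

console.log(routeUser("know_criteria"));   // "criteria_driven_search"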

The first few screens also set the tone for the system, and hence particular attention must be given to their design. The screens need to be simple, clean, and easy to follow. There should be sufficient instructions to help the novice user to move through the system easily while not slowing down the more proficient user. In addition, users will want to see information that moves quickly but is easily discerned.

One way to accomplish this is to provide a menuing system through which it is easy for the user to maneuver. Consider, for example, the three options demonstrated in Figure 5.39. Please note that a designer would not place all three of these options on the same screen. They are presented here for the purposes of discussion.

The first option allows the user to enter the preferred manufacturer of automobiles (Code 5.1). After this the user can select the option to start the search. From a programming point of view, this is the easiest of the searches to accomplish; the Cold Fusion code in Code 5.1 illustrates the process that must be used to accomplish the search. While it appears user friendly at the outset, it actually is not a particularly useful user interface. One problem is that the user is restricted to searching for only one manufacturer of automobile. Many people want to search on multiple manufacturers; they would have to make several trips through the system and would have more difficulty comparing the results. A second problem is that this method requires users to be able to remember all the manufacturers they might consider. This may cause them to neglect some options, either because they forgot about them or because they did not know they existed. While it is acceptable for the user to narrow his or her search, it is not acceptable for the system to do it on the user's behalf. A third problem is that this method requires the user to spell the name of the manufacturer correctly. Often users do not know the correct spelling, or they make typographical errors, or they use a variation on the name (such as Chevy for Chevrolet). Unless the search “corrects” for these possible problems, no relevant matches will be made.

The middle option of Figure 5.39 provides the options to the users as radio buttons. The code for this is shown in Code 5.2. This has two advantages. First, it reminds the user what models of automobiles are available to the user (which is especially good for the novice user). Second, it does not rely upon the user spelling the automobile type correctly or using the same form of the model name as the designer. It does, however, limit the user to selecting only one option; only one radio button of a group may be selected. The coding


Figure 5.39. Three methods by which users can enter data in the system.

requires the radio buttons to be selected, as can be seen in the form section of the code. However, searching the database is virtually the same for this example and the previous one.

Code 5.1 Cold Fusion Example

What make of automobile is of interest to you?

<CFQUERY NAME="GetCars" DATASOURCE="#d_oracle#">
  SELECT model FROM new_cars WHERE model = '#Form.car_preference#'
</CFQUERY>
<CFOUTPUT QUERY="GetCars">
  • #model#
</CFOUTPUT>
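One way to supply the “correction” mentioned above is to normalize the entry before the query runs. The JavaScript sketch below trims the input and maps common variants to canonical manufacturer names; the alias table is an illustrative assumption.

function normalizeMake(entry) {
    var aliases = {
        "chevy": "Chevrolet",
        "mercedes benz": "Mercedes",
        "vw": "Volkswagen"
    };
    var key = entry.trim().toLowerCase();
    // Fall back to simple capitalization, e.g., "ford" becomes "Ford".
    return aliases[key] || key.charAt(0).toUpperCase() + key.slice(1);
}

console.log(normalizeMake("  chevy "));   // "Chevrolet"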

Code 5.2 Cold Fusion Example

What make of automobile is of interest to you?
  Audi
  Chevrolet
  Dodge
  Ford
  Mercedes
  Toyota
  BMW

<CFQUERY NAME="GetCars" DATASOURCE="#d_oracle#" USERNAME="#u_oracle#" PASSWORD="#p_oracle#" DEBUG>
  SELECT model FROM new_cars
  WHERE model = '#car1#' OR model = '#car2#' OR model = '#car3#'
     OR model = '#car4#' OR model = '#car5#' OR model = '#car6#'
     OR model = '#car7#'
</CFQUERY>
<CFOUTPUT QUERY="GetCars">
  • #model#
</CFOUTPUT>


Figure 5.40. Change in menu after other selections.

A user who expects to keep a vehicle for more than four years, for example, would likely not be interested in leasing an automobile, and hence that option would not be displayed. The underlying code simply notes that another option is added to the screen when these conditions are found to be true.

So, consider Code 5.4. This code includes the basic form code so as to be able to get the radio buttons on the screen. Notice there is something new associated with the first value of the second question: it states that when that radio button is clicked, the program should run the function labeled “CheckLease,” which appears near the top of the program in the heading section. Since this code is run only if the user has specified that he or she wants a new car, it queries the user as to whether the car will be kept for a short period of time. If the answer is yes, then the conditions would allow the user to lease an automobile rather than buying it outright. The code opens a new, small window, shown toward the right side of the display, with the question about leasing an automobile.

Code 5.4 JavaScript Examples

How long do you expect to keep this vehicle?
  <INPUT TYPE="radio" NAME="keep_time" VALUE="short"> 1-4 years
  <INPUT TYPE="radio" NAME="keep_time" VALUE="long"> more than 4 years

Do you prefer a new vehicle?
  <INPUT TYPE="radio" NAME="new_car" VALUE="yes" OnClick="CheckLease(); return false;"> Yes
  <INPUT TYPE="radio" NAME="new_car" VALUE="no"> No
  <INPUT TYPE="radio" NAME="new_car" VALUE="unknown"> I don't know
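The CheckLease function itself would sit in the heading section of the page. Its full listing is not recoverable here, so the following JavaScript sketch simply follows the behavior the text describes; the window name, size, position, and the confirm() prompt are illustrative assumptions.

function CheckLease() {
    // Runs only when the user indicates a preference for a new vehicle.
    // If the car will be kept a short time, the leasing question applies.
    if (confirm("Will you keep the vehicle four years or less?")) {
        // Open a small window toward the right side of the display with
        // the question about leasing.
        window.open("lease_question.html", "lease",
                    "width=300,height=200,left=700,top=100");
    }
}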


Figure 5.41. Possible window definitions.

It is important that the user interface provide a standard and uniform look and feel in the system. One way to do this is to provide consistent windows for the different kinds of information that you might want to provide. For example, consider Figure 5.41, in which some possible windows are defined. In this example, warning messages are displayed in the upper left corner while help messages are displayed in the lower right corner. Similarly, graphics may appear in the upper right corner while technical assistance, such as help in modeling or generating alternatives, appears in the lower left corner. These windows should have consistent titles, colors, sizes, and other characteristics. In this way, users will develop intuition about the information being displayed and act accordingly.

Generally, these windows will not appear until needed. In Figure 5.41, users can request technical assistance by pressing the “help” button on the main screen. When they do, the technical assistance window (shown open in this figure) appears. You can allow the window to be closeable using standard Windows tools, through a menu item, or through a push button. If you need to ensure the user reads the information, you can make it impossible for him or her to continue without acknowledgment. If there is a need for additional processing after the window has been displayed, then you must have a mechanism for alerting the system after it has been read. Both those purposes are served best by the push button, as shown in the figure.

Suppose when running the system that the user always wants to start with the data window open but with the other three windows closed, as shown in Figure 5.42. The code for this is in Code 5.5. Since this first window should be opened every time the program is


Figure 5.42. Mechanisms for opening windows.

started, it is run with the “OnLoad” command used in the “body” statement. Notice that in addition to specifying colors and other attributes of the page, the statement now says that immediately upon being opened, the page runs the function “windowOpen.” You will recall from the last example that it is possible to control the size and location of a window. In this case, the goal is to control the size of the window to be one-quarter the size of the display (so that each window appears in a quadrant, as shown in Figure 5.41). Since the user may vary the monitor in use or the size of the window available for the program, the goal is to scale the new window on the fly. So, the first thing that happens in the function is to measure the available height and available width and to set the height and width of the new window to 50% of each, respectively. Since we know the window is going to appear in the top-left corner, the starting points for the window (left and top) are at zero. Using the same command as in the last example, the code opens a new page, “data_window.html,” in the upper left corner, as shown in Figure 5.42.

Notice there is a button in the “data” window in Figure 5.42. The user can click that button anytime the help window is needed. Once clicked, the display would appear as in Figure 5.43 using Code 5.6. The code is similar to that in the previous example, but the function is invoked from clicking the button rather than loading the page. In addition, while we want the window to be the same size, we want it to start in a different place, namely slightly to the right and below the window that is already open. As with the


Code 5.5 JavaScript Examples

<BODY ... OnLoad="windowOpen(); return false">

Open Multiple Windows
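The windowOpen function invoked by the OnLoad above is not fully recoverable, so the sketch below follows the steps described in the text: measure the available display, size the new window to 50% of the height and 50% of the width (one quadrant), and anchor it at the top-left corner. The window name is an illustrative assumption; the file name data_window.html comes from the text.

function windowOpen() {
    // Measure the available display and take half of each dimension, so
    // the new window fills one quadrant regardless of the monitor in use.
    var newWidth  = screen.availWidth * 0.5;
    var newHeight = screen.availHeight * 0.5;
    var startLeft = 0;   // top-left quadrant
    var startTop  = 0;
    window.open("data_window.html", "data",
                "width=" + newWidth + ",height=" + newHeight +
                ",left=" + startLeft + ",top=" + startTop);
}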


Figure 5.43. Alternative method for opening windows.

Code 5.6 JavaScript Examples

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<STYLE TYPE="text/css">
<!--
H1, H2, H3, H4, H5, H6 {font-family: "Arial"}
td {font-family: "Arial"}
td {font-size: 10pt}
td {font-weight: bold}
td {border-width: 2px}
table {border-color: #8D89C7}
body {font-family: "Arial"; font-size: 10pt; font-weight: bold}
p {font-family: "Arial"; font-size: 10pt; font-weight: bold}
-->
</STYLE>


previous example, it is important to compute that location, as shown in Code 5.6: the new starting point is one pixel to the right of and below the current window, as defined with the two new variables, newstart_left and newstart_top, respectively. The addition of the new variables makes the window-open statement even harder to read because it requires additional concatenation of literals, such as “top=”, and variables, such as “newstart_top”. The computer will read them all together since they are joined with the “+” between them


and because every literal is enclosed in quotes. Similar code could be used to open the other two windows on the display.
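A parallel sketch for the help window shows the computation: the quarter-display size is the same, but the starting point is offset by one pixel using the two new variables named in the text. The file and window names, and the reading of “one pixel to the right and below” as an offset from the data window's origin, are illustrative assumptions.

function helpOpen() {
    var newWidth  = screen.availWidth * 0.5;
    var newHeight = screen.availHeight * 0.5;
    var newstart_left = 0 + 1;   // one pixel to the right of the data window
    var newstart_top  = 0 + 1;   // one pixel below the data window
    // Literals such as "top=" and variables such as newstart_top are
    // joined with "+" into a single options string.
    window.open("help_window.html", "help",
                "width=" + newWidth + ",height=" + newHeight +
                ",left=" + newstart_left + ",top=" + newstart_top);
}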

As stated earlier in the chapter, formatting is important for the environment. Sometimes designers use icons or pictures, such as those in Figure 5.21, for menu options. These can be helpful if they are understandable to the user and if they are used consistently. Since these icons are to elicit the intuition of the user, it is most important that they be meaningful to the user, and hence the user needs to be involved in their selection. One way to supplement these is to provide either permanent or transient wording near the icon to help the user build intuition.

Features should be built into the system to lessen the chance of user confusion. Only available options should appear in normal text, with others dimmed. Also, when a user selects a specific car, standard options should appear in one box with add-on options in another.

If users access the system frequently, alternative information retrieval techniques should be made available so that frequent users can increase the speed of retrieval and hence improve the system's performance value. The system should be tailored to acquire information in as few steps as possible while still maintaining clarity.

Finally, the format of the output of the system needs to be tailored to specific uses. If the user is comparing the prices for a type of vehicle from several makers, a simple histogram may be an easy way to display the comparison. The actual numerical value should also be displayed in some proximity to the bar that it represents or next to the legend. If, however, the user wishes to compare the available options, a table display may be more appropriate. If an option is available, the system could display the option highlighted or in a different color from those that are not available. This would allow for an easier comparison since the difference will be more noticeable.

DISCUSSION

The user interface is the most important part of a DSS because it is what the user thinks of as being the DSS. The best access to models and data is irrelevant if the decision makers cannot make the system understand their specific needs for information or if the system cannot provide the answers in a manner that decision makers can understand and use. As tools become more sophisticated, designers will be able to select input devices that are touch, motion, or voice sensitive and output devices that are graphical, motion, or virtual reality based. All this can bring a richness to the choice context if used appropriately.

SUGGESTED READINGS

Alter, S. L., Decision Support Systems: Current Practices and Continuing Challenges, Reading, MA: Addison-Wesley, 1980.

Bennett, J., “User Oriented Graphics,” in S. Treu (Ed.), User-Oriented Design of Interactive Graphic Systems, New York: Association for Computing Machinery, 1977.

Calvary, G., J. Coutaz, D. Thevenin, Q. Limbourg, L. Bouillon, and J. Vanderdonckt, “A Unifying Reference Framework for Multi-Target User Interfaces,” Interacting with Computers, Vol. 15, No. 3, 2003, pp. 289-308.

Card, S. K., T. P. Moran, and A. Newell, The Psychology of Human-Computer Interaction, New York: CRC Press, 1986.

Cooper, A., R. Reimann, and D. Cronin, About Face 3: The Essentials of Interaction Design, Indianapolis, IN: Wiley Publishing, 2007.


Donovan, J. J., and S. E. Madnick, “Institutional and ad hoc Decision Support Systems and Their Effective Use,” DataBase, Vol. 8, No. 3, Winter 1977, pp. 79-88.

Eisenstein, J., J. Vanderdonckt, and A. Puerta, “Adapting to Mobile Contexts with User-Interface Modeling,” Proceedings of the Third IEEE Workshop on Mobile Computing Systems and Applications (WMCSA'00), Monterey, CA, 2000, p. 83.

Eisenstein, J., J. Vanderdonckt, and A. Puerta, “Applying Model-Based Techniques to the Development of UIs for Mobile Computing,” Proceedings of the Conference on Intelligent User Interfaces, Santa Fe, NM, January 2001, pp. 69-76.

Few, S., Show Me the Numbers: Designing Tables and Graphs to Enlighten, Oakland, CA: Analytics, 2004.

Few, S., Information Dashboard Design: The Effective Visual Communication of Data, Sebastopol, CA: O'Reilly, 2006.

Few, S., Now You See It: Simple Visualization Techniques for Quantitative Analysis, Oakland, CA: Analytics, 2009.

Fisk, A. D., W. A. Rogers, N. Charness, S. J. Czaja, and J. Sharit, Designing for Older Adults: Principles and Creative Human Factors Approaches, 2nd ed., New York: CRC Press, 2009.

Frenkel, K. A., “The Art and Science of Visualizing Data,” Communications of the ACM, Vol. 31, No. 2, 1988, pp. 110-121.

Goodwin, K., and A. Cooper, Designing for the Digital Age: How to Create Human-Centered Products and Services, Indianapolis, IN: Wiley Publishing, 2009.

Hearst, M. A., Search User Interfaces, New York: Cambridge University Press, 2009.

Hess, T. J., M. A. Fuller, and J. Mathew, “Involvement and Decision-Making Satisfaction with a Decision Aid: The Influence of Social Multimedia, Gender and Playfulness,” in F. Burstein and C. W. Holsapple (Eds.), Handbook on Decision Support Systems, Vol. 1, Berlin: Springer-Verlag, 2008, pp. 731-761.

Kamel Boulos, M. N., “Web GIS in Practice III: Creating a Simple Interactive Map of England's Strategic Health Authorities Using Google Maps API, Google Earth KML, and MSN Virtual Earth Map Control,” International Journal of Health Geographics, Vol. 4, 2005, available: http://www.ij-healthgeographics.com/content/4/1/22, accessed 2009.

Korte, G. B., The GIS Book, 5th ed., Albany, NY: OnWord Press, 2001.

Krug, S., Don't Make Me Think: A Common Sense Approach to Web Usability, 2nd ed., Berkeley, CA: New Riders, 2005.

Landay, J. A., and T. R. Kaufmann, “User Interface Issues in Mobile Computing,” Proceedings of the Fourth Workshop on Workstation Operating Systems, Napa, CA, October 1993.

Maes, P., and P. Mistry, “Unveiling the 'Sixth Sense,' Game-Changing Wearable Tech,” TED 2009, Long Beach, CA, 2009.

Marcus, A., “Human Communications Issues in Advanced User-Interfaces,” Communications of the ACM, Vol. 36, No. 4, April 1993, pp. 101-109.

Marcus, A., and A. vanDam, “User Interface Developments for the Nineties,” IEEE Computing, Vol. 24, No. 9, September 1991, pp. 49-57.

Miller, G. A., “The Magical Number Seven Plus or Minus Two: Some Limits on Our Capacity for Processing Information,” Psychological Review, Vol. 63, 1956, pp. 81-97.

Mistry, P., P. Maes, and L. Chang, “WUW—Wear Ur World—A Wearable Gestural Interface,” CHI '09 Extended Abstracts on Human Factors in Computing Systems, Boston, MA, 2009.

Molich, R., M. Ede, K. Kaasgaard, and B. Karyukin, “Comparative Usability Evaluation,” Behaviour and Information Technology, Vol. 23, No. 1, January-February 2004, pp. 65-74.

Nielsen, J., “Noncommand User Interfaces,” Communications of the ACM, Vol. 36, No. 4, April 1993, pp. 82-99.

Norman, D. A., The Design of Everyday Things, New York: Doubleday, 1990.


Norman, D. A., Emotional Design: Why We Love (or Hate) Everyday Things, New York: Basic Books, 2005.

Norman, D. A., The Design of Future Things, New York: Basic Books, 2007.

Peter, C., R. Beale, E. Crane, and L. Axelrod, “Emotion in HCI,” BCS-HCI '07: Proceedings of the 21st British HCI Group Annual Conference on HCI 2008: People and Computers XXI: HCI . . . But Not as We Know It, Vol. 2, University of Lancaster, United Kingdom, 2007, pp. 211-212.

Robertson, G. G., S. K. Card, and J. D. Mackinlay, “Information Visualization Using 3D Interactive Animation,” Communications of the ACM, Vol. 36, No. 4, April 1993, pp. 56-71.

Rubin, J., D. Chisnell, and J. Spool, Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests, New York: Wiley, 2008.

Sears, A., and J. A. Jacko, Human-Computer Interaction: Designing for Diverse Users and Domains, Boca Raton, FL: CRC Press, 2009.

Shackel, B., “Human-Computer Interaction—Whence and Whither?” Interacting with Computers, Vol. 21, Nos. 5-6, December 2009, pp. 353-366.

Sharp, H., and Y. Rogers, Interaction Design: Beyond Human-Computer Interaction, West Sussex, England: Wiley, 2007.

Shneiderman, B., C. Plaisant, M. Cohen, and S. Jacobs, Designing the User Interface: Strategies for Effective Human-Computer Interaction, 5th ed., Reading, MA: Addison-Wesley, 2009.

Steiger, D., R. Sharda, and B. LeClaire, “Graphical Interfaces for Network Modeling: A Model Management System Perspective,” ORSA Journal on Computing, Vol. 5, No. 3, Summer 1993, pp. 275-291.

Stohr, E. A., and N. H. White, “User Interfaces for Decision Support Systems: An Overview,” International Journal of Policy Analysis and Information Systems, Vol. 6, No. 4, 1982, pp. 393-423.

Tannen, D., You Just Don't Understand: Women and Men in Conversation, New York: William Morrow and Company, 1990.

Tidwell, J., Designing Interfaces: Patterns for Effective Interaction Design, Sebastopol, CA: O'Reilly, 2005.

Toivonen, S., J. Kolari, and T. Laakko, “Facilitating Mobile Users with Contextualized Content,” in Proceedings of the Workshop on Artificial Intelligence in Mobile Systems, 2003, available: http://www.vtt.fi/tte/tte31/pdfs/AIMS2003-toivonen-kolari-laakko.pdf.

Tufte, E., The Visual Display of Quantitative Information, Cheshire, CT: Graphics, 1983.

Tufte, E., Visual Explanations: Images and Quantities, Evidence and Narrative, Cheshire, CT: Graphics, 1997.

Tufte, E., Beautiful Evidence, Cheshire, CT: Graphics, 2006.

Walker, V., and R. B. Johnston, “Making Ubiquitous Computing Available,” Communications of the ACM, Vol. 52, No. 10, October 2009, pp. 127-130.

Wickens, C. D., J. D. Lee, Y. Liu, and S. E. Gordon Becker, An Introduction to Human Factors Engineering, 2nd ed., Upper Saddle River, NJ: Pearson Prentice Hall, 2004.

QUESTIONS

1. Many computer products now have something called “online documentation.” Depending upon the product, this can include a text manual available electronically, a passive request system that accesses the text manual, and bubble help on menus. Discuss what formats of online documentation are appropriate for a DSS.


2. Identify how the features of a user interface should be affected by the decision-making literature covered in Chapter 2.

3. Accenture utilizes a technique described as “low-fidelity prototyping” when designing user interfaces. This method has designers and users design screens together using paper template items. Hence, if the user indicates that another item should be added to the screen, such as a button, the designer picks up a paper object shaped like a button and allows the user to place it on the paper designated as the screen. Compare and contrast the advantages and disadvantages of using low-fidelity prototyping in the design of a DSS with those of “high-fidelity prototyping,” designing screens with a software product on the computer.

4. How should the design of a user interface be influenced by the corporate environment? How should its design be influenced by the national environment?

5. Discuss how you might provide a user interface through which to compare multiple automobiles. Would users' modeling preferences influence this decision?

6. Discuss how virtual reality devices might be used as a user interface in a DSS intended to help users select automobiles.

7. The fact that windows can be sized by the user can be both a problem and an opportunity in the design of DSS. Discuss the advantages and disadvantages of sizing windows. How might the disadvantages be overcome?

8. What kinds of problems are introduced if designers use stand-alone prototyping packages to design screens and interact with users?

9. How is the user interface design influenced by the use of object-oriented tools?

10. Discuss how the process for establishing user interface requirements for a 1-person system would differ from the process for a 25-person system.

11. By what process would you evaluate the user interface of a DSS?

12. Find Web pages, or sketch a user interface, that display the characteristics of being harmonious and well behaved and that do no harm.

13. Discuss how you would implement tool bars and menus to address various levels of experience among your users.

14. What are the principles of good visual design? Find Web pages that display them or sketch a user interface that would have them.

15. Suppose you wanted to display information about others who are your contacts on a social networking site. Discuss the kind of display you would use and the kinds of information you would want on the display.

ON THE WEB

On the Web for this chapter provides additional information about user interfaces and the tools used to develop them. Links can provide access to demonstration packages, general overview information, applications, software providers, tutorials, and more. Additional discussion questions and new applications will also be added as they become available.

• Links provide access to information about user interface products, including comparisons, reviews, and general information about software


products and tools for user interface design. Users can use the tools to determine the factors that facilitate and inhibit DSS use.

• Links provide access to descriptions of applications and development hints. In addition to information about the software itself, the Web provides links to applications of the tools worldwide. You will have access to chronicles of users' successes and failures as well as innovative applications.

• Links provide access to different user interface methodologies. Specifically, users can access currently unconventional user interfaces, such as virtual reality or voice-activated menus.

• Links provide access to systems regarding automobile purchase and leasing. Several tools to help users purchase or lease an automobile are available on the Web. Users have the opportunity to access the tools and gain insight into the kinds of options that facilitate, and those that inhibit, the use of the DSS.

You can access material for this chapter from the Web page for the book or directly at http://www.umsl.edu/~sauterv/DSS4BI/ui.html.
