7
DESIGNING A DECISION SUPPORT SYSTEM
At this point, you may be sold on the idea of decision support systems. You believe they are important, and you want to include them within the assets of your department or organization. The next logical question is how to start. The answer is a clear and unequivocal "it depends."
The best approach depends upon the kind of systems already in place and the intended focus of the DSS. As with any good systems analysis and design process, it is important to understand the needs of the application and to select the models, model management system, databases, database management system, and user interface in a manner that best meets the needs of that application. Successful DSS can be built on almost any kind of platform with almost any kind of software, but it is crucial that the choices fit the application. Selecting tools and vendors before understanding the problem or forcing tools to meet needs after the fact will certainly lead to failure.
The physical design of a successful DSS must follow a logical design, which in turn must be guided by the decision-making process. In particular, designers should ask the same fundamental questions as those on which reporters rely:
• Who needs the DSS?
• What advantages does the user expect by using the DSS?
• When will the DSS be used?
• Where does this system fit into the general business process?
• Why is a DSS needed?
• How will the DSS be used?
While these questions seem obvious, we must keep returning to them as a reality test that the system is providing support for decisions.
Unfortunately, the systems development life cycle approach, which provides a reliable framework in which to design transaction processing systems (TPS), generally does not work for DSS design. Unlike TPS, DSS typically have fuzzy or even wicked problem definitions that change substantially over time. In addition, since DSS support decision making, generally that of higher level managers, their design is highly subjective and subject to change. Since such managers have less time and less inclination to attend training sessions, it is necessary to create a system that has lower training needs than those generally associated with TPS. Finally, it is difficult to determine with certainty that a DSS works properly for all applications. Test data sets and problem scenarios can be developed for TPS and run against a system to determine whether it works properly. But by their very nature, which is to be flexible and to let decision makers use them as best fits their decision styles, DSS cannot be "tested" to ensure that they always work properly.
Therefore, DSS require a different approach to design. It must be a process and a product that relate to the constraints of the domain in which the DSS will be used. Gachet and Sprague (2005) remind us that there must be tangible improvements in the life of the decision maker to justify using the system. The DSS must provide easier access to data, improved knowledge management, and improved outcomes if it is to be used. The faster the DSS can demonstrate its value, the faster it will be adopted. If those factors are to be realized, they argue, designers must use a context-based development life cycle for DSS design. This methodology emphasizes the following:
1. Identify Requirement Specifications Based on Contextual Issues. This means that the first step in design is to identify the user interface requirements from the end users. In addition, at this stage designers must identify needs for data integration that will improve the process and for alignment with the workflow.
2. Preliminary Conceptual Design. There must be an emphasis on inputs and outputs drawn from the end-user requirements: what do users need, and how must it be represented? Also in this step designers identify specific hardware and software requirements and identify specifications for databases.
3. Logical Design and Architectural Specifications. In this stage, designers begin to specify user interfaces. Using early prototyping, they can compare their understanding of the interface needs with that of the users to ensure they understood the message correctly. In addition, designers must specify the procedures for obtaining data and sharing it with others and the distributed architecture required for appropriate levels of integration of the DSS with other systems. Finally, designers must model data and the strategic design as well as develop procedures for maintenance and backup of the system.
Design Insights: Picking a Team
As in any large-scale, important application, the question of who should do the development may be critical. Often project teams are hand-picked members of the staff who are pulled together especially for their ability to respond to a particular need. They are thought of as a SWAT team in that they develop the DSS and then return to their separate departments. If they are successful, they are often called upon for the next important application. Especially with the design of DSS, there are sometimes subtle elements of group synergy that lead to success for the group in one application but not in other applications. Unfortunately, what leads to such success in high-performance projects is not well understood.
4. Detailed Design and Testing. While testing is important in any design process, in this methodology the emphasis is on testing the system with the end users and testing the integration of the system with the decision makers' functions. That means we need to test whether individual decision makers can use the system and whether it fits smoothly into their workflow. Of course, this also includes testing the resilience, reliability, and scalability of the system and its performance under specific failure scenarios.
5. Operational Implementation. In this stage, the system is made operational in a subset of the decision makers' world. Systems are linked to appropriate parts of the data warehouse and then made available for use by decision makers. Those decision makers involved in the test are trained and receive access to the system.
6. Evaluation and Modification. Finally, the system is evaluated in terms of its overall user acceptance, system integration, architecture resilience, and scalability. The system is then modified for use across the organization.
7. Operational Deployment. Final changes are made in the system and it is distributed to all users after training. This stage includes continuous monitoring of both technical problems in the operation of the DSS and patterns of use that might suggest problems.
Pick (2008) further addresses the process of Gachet and Sprague's first step of requirements definition. He states that it is important to elucidate the value of the DSS before beginning to build the system. Pick's argument is that the benefits of a DSS are often much more subtle than decision makers expect, and so it is important to sensitize them to the benefits that might be expected as the process starts. In addition, consideration of the benefits early in the process will help decision makers develop a better understanding of the opportunities that might be built into the system. He suggests questions such as the following (Pick, 2008, p. 725):
• If we will be better able to cope with large or complex problems, how much may that ability be worth?
• If the system will allow greater exploration and discovery, how much might the resulting insights be worth?
• If there is better knowledge processing, how is this beneficial? If the system provides better understanding of a problem, can anyone judge the costs of incomplete understanding?
So, how would a designer know when he or she has a good DSS? Arnott and Dodson (2008) provide a simple model of what impacts DSS success, as shown in Figure 7.1. They bring two basic concepts to a methodology for designing DSS. First, they say, as do Gachet and Sprague (2005) and Pick (2008), that the system must be comfortable for the user and improve decision making. Arnott and Dodson represent these concepts with "user satisfaction" and "impact of the system." Notice that they show user satisfaction impacting use. What this implies is that if the users do not see the benefit of the system, find it too difficult to use, or do not find the information and models they perceive they need to complete their decision, they are likely not to use the system at all. Clearly, even if the system could have a significant impact if it were used, it will not be a success if users do not find what they need.
Arnott and Dodson (2008) also identify 10 critical success factors that need to be satisfied to ensure both use and success (pp. 770-771):
• There is a committed and informed executive sponsor.
• There is widespread management support.
• The design team has appropriate skills.
• The design team uses appropriate technology.
• The design team has adequate resources.
• There is effective data management.
• There is a clear link with business objectives.
• There exist well-defined requirements.
• The system is allowed to evolve in development.
• The design team manages project scope.

Figure 7.1. A model of DSS success. (Adapted from D. Arnott and G. Dodson, "Decision Support Systems Failure," in F. Burstein and C. W. Holsapple (Eds.), Handbook on Decision Support Systems, Vol. I, Berlin: Springer-Verlag, 2008, p. 768.) Image is reprinted here with permission.
These critical success factors mirror those generally accepted for system design. In particular, they highlight that the success of the DSS is dependent upon its being aligned with business objectives and the technology plan of the organization. This will be discussed in the next section.
These methodologies, however, identify decision makers' comfort with the system as the critical component of DSS success. It has been said that most users would rather live with a problem they cannot solve than use a solution they cannot understand. Thus making the DSS too much of a "black box," or too difficult to use, will make it an instant failure. Of course, designers need to know what factors will make the system easy and comfortable to use. Norman (2007, p. 93) identifies six design rules:
• Provide rich, complete and natural signals.
• Be predictable.
• Provide good conceptual models.
• Make output understandable.
• Provide continual awareness without annoyance.
• Exploit natural mappings.
You will notice that this list tells us that understandability and requiring the system design to follow the decision process are important aspects of good design. If the system is predictable, cues (that guide the operations of the system or the evaluation of information) are informative, and the output is presented in a clear and useful manner, the decision maker is likely to use the DSS. Norman's emphasis is on providing a comfortable metaphor for the system to which the user can relate. If the metaphor is right, then the procedures will be understandable and the signals will be informative. In addition, he says that there should be ubiquitous, yet nonobtrusive help available to the user.
Norman further emphasized rules of good design from the perspective of the system that mirror the themes of understandability and congruence with the decision process:
• Keep things simple.
• Give people a conceptual model.
• Give reasons.
• Make people think they are in control.
• Continually reassure.
• Never label human behavior as "error."
As a field, we tend to forget the most important design rule—keep everything simple. This means the user interface, the processes needed to use the system, and the output. Removing clutter and the newest but unnecessary gadget will encourage users to focus on the important forms of support the system has to offer. In addition, these principles remind us that the decision maker, not the DSS, ultimately will make the choice among alternatives. The system must provide support and work in the way the user needs or the decision maker will not use the system. Helping to make the system more predictable and more like a trusted assistant will encourage decision makers to utilize its power. This includes the specific attributes of the data, models, and user interface discussed in previous chapters.
PLANNING FOR DECISION SUPPORT SYSTEMS
In an ideal world, a multilevel plan guides the development of new DSS, such as that described in Figure 7.2. The plan provides specifications for a specific DSS, in terms of the way it interacts with the rest of the business processes, the kind of information that it will provide, and its relative importance to the growth of the organization.
The specifications for DSS begin with the corporate strategic or long-range plan. A strategic plan defines where the corporation expects to change its products or processes and during what time line and provides direction to management of the corporation as a whole. The MIS master plan, in turn, inherits its priorities and concerns from this corporate strategic plan. The information system (IS) plan provides guidelines for prioritizing requests for maintenance of existing systems and creation of new systems. In particular, it describes the priorities for hardware, software, and staff necessary to respond to corporate strategy plans. The IS master plan specifies modifications and maintenance of legacy systems, creation and implementation of new systems, and diffusion of technology within the organization. It should provide a plan for regular updating and other maintenance. Finally, it should provide specifications for how staff should proceed in the creation of systems.
Figure 7.2. Ideal planning.

The DSS plan derives its priorities from the IS plan. Its goal is to coordinate future implementations in the broadest possible way to ensure that all decision making is supported in an appropriate way while planning for the reuse of code, flexibility for the future, and the greatest potential for growth. In particular, the DSS plan should help answer questions such as those posed by Sprague and Carlson (1982):
• How can current needs susceptible to DSS be recognized?
• How can the likely extent of their growth be assessed?
• What types of DSS are required to support the needs, now and in the future?
• What are the minimum startup capabilities required, both organizational and technical?
• What kind of plan can be developed to establish the long-term direction yet respond to unanticipated developments in managerial needs and technical capabilities?
The DSS master plan would provide direction in the selection of hardware and software and for integration with current systems. In addition, it could include a process for the creation of reusable libraries of code that future designers could embed into similarly operating DSS.
Designing a Specific DSS
Where DSS master plans exist, there is already some guidance in how to proceed. More often than not, however, such plans do not exist. Then, designers must judge for themselves how the DSS will fit into corporate plans and how it will interact with other systems. The methodology described in Figure 7.3 will help designers ensure they get the best fit. Note that it differs from the traditional systems development life cycle (SDLC) approach in that it puts much more emphasis on determining what information needs to be provided and in what fashion.

Figure 7.3. DSS design methodology. (The figure shows four stages. Initial analysis has the goals of identifying key decisions and key information needs, with concerns covering theoretical or conceptual needs, industry-based needs, corporation-based needs, and decision-specific parameters. Situation analysis has the goals of understanding the organizational setting, the task, and the user characteristics. System design has the goals of logical design, system construction, and system evaluation. Implementation has the goals of demonstration, training, and deployment.)
In the first stage, the designer learns the decision needs and environment. Designers must know the key decisions under consideration by the decision maker and the related information needs if the DSS is to be a tool that supports decisions. Then the designer can begin to examine the parameters needed for consideration. Sometimes these parameters will be easy to identify. For example, one key issue for investment executives is what investments will provide the best returns. Knowing that, they need to consider return, relative risk, tax advantages, term of return, and other fiscal parameters. On the other hand, a chief executive officer's key issue might be how to prevent a leveraged buyout or how to strategically acquire a new vertical market. In this situation, even knowledge of the key decision does not reveal information needs well.
Interviewing Techniques. Often designers learn decision makers' needs by interviewing them. There are many ways of conducting interviews, each of which provides different kinds of information. For example, consider the interviewing styles noted in Figure 7.4 and discussed below. Interviews can be structured, unstructured, or focused. They can follow case studies or protocol analysis. Finally, they can utilize tools such as card sorting and multidimensional scaling.
The benefit of interviews is that they provide access to information or a perspective on information that only the decision maker can provide. In both the structured interview and the focused interview, the designer is interacting with the decision maker to obtain information regarding a prescribed set of topics. This interaction might be in a face-to-face setting, over the telephone, via computer, or by a pen-and-paper questionnaire. Generally the richest information can be gleaned in a face-to-face setting in a neutral location (away from the interruptions of the decision maker's normal activities). Good results can be achieved with intelligent computer questionnaires (that move through the questions as a function of the answers already provided); unfortunately, it is generally too expensive to develop this software for a one-time use.
The degree of structure we build into the interview depends upon the specificity of the information we seek. A structured interview is one in which the questions and the order in which they will be asked are prescribed. The interviewer seeks short answers that provide specific information. A focused interview, on the other hand, is relatively unstructured. In this case, the interviewer also has a set of questions and an order for asking the questions. However, the questions are more general, allowing the respondent to drive the direction of the discussion. The interviewer must be prepared with probing questions that help the respondent to focus on salient points.
Figure 7.4. Interviewing techniques matrix. (The matrix maps each technique, that is, structured interviews, focused interviews, case study interviews, protocol analysis, card sorting, and multidimensional scaling, against the kinds of knowledge it elicits: concepts (raw concepts, concept definitions, concept structures), problems (problems and problem types), solutions (solutions and solution types), and problem-solving steps (general and specific steps).)
The more structured the interview, the greater the chance that the decision maker will provide precisely the information sought. However, the more structured the interview, the less likely the decision maker will provide insights the designer had not considered previously. Therefore, if the designer is relatively uninformed about the choice process or the decision maker's tendencies, the focused interview will allow for greater probing of new avenues and hence greater understanding of the relationships between tasks and concepts and why the procedures are sequenced in a particular fashion.
A protocol analysis is a different kind of interview because the interviewer does not set even the basis of the discussion. Instead, respondents complete their typical choice processes (including seeking information, generating alternatives, merging information, modeling, sensitivity analysis, and other tasks included in the process). In order to communicate what is happening and why it is happening, the decision maker verbalizes each task and subtask and how a decision is made to move to another task. Usually, the interviewer does not intervene but just records the descriptions provided. Protocol analysis is a valuable tool because it helps the designer understand what the decision makers actually do in the choice process, not what they perceive they do. This can be important because often the decision maker is not aware of the actual tasks and hence cannot communicate them; this can be a particular problem with very experienced decision makers, as discussed in Chapter 2.
Other Techniques. Both the card-sorting technique and multidimensional scaling require the decision maker to perform some task from which the designer infers the preferred information and models. "Card sorting" refers to any task (whether or not one actually uses cards, even if one uses a computer simulation, such as that shown in Figure 7.5) in which the decision maker iteratively sorts and combines things or concepts to determine a point of view. For example, if the choice situation involves loan applications, the decision maker would sort a set of loan applications into multiple piles (perhaps "acceptable," "borderline," and "unacceptable"). After the decision maker is comfortable with the similarity of the loan applications in each pile, the designer analyzes the applications, with the help of the decision maker, to determine the bases for the sorting. In other words, by noting the similarity and differences among the applications within piles and between piles, the designer can glean the set of criteria and standards for applying them. This helps the designer to understand how to provide information and models for the decision maker.

Figure 7.5. Card Sorts simulation.
Multidimensional scaling is a similar process in which decision makers are asked to rate items as being similar or dissimilar. It differs from card sorting in that it forces the decision maker to make choices among less complex alternatives. For example, rather than asking whether or not an entire application is acceptable, the designer would ask the decision maker to compare two candidates with particular incomes or particular debt ratios with regard to their risks as loan candidates. Designers pose a large number of combinations and analyze the data mathematically to determine the criteria being employed by the decision makers. Unfortunately, the factors driving the decision often are not obvious or they lack face validity. Hence the exercise can result in no useful information.
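To make this concrete, the short sketch below (not from the book) shows one way the pairwise judgments gathered for multidimensional scaling might be analyzed. It assumes Python with NumPy and scikit-learn; the applicants, dissimilarity ratings, and attribute values are hypothetical.

```python
# A minimal sketch of analyzing a decision maker's pairwise dissimilarity
# judgments with multidimensional scaling.  All data are hypothetical.
import numpy as np
from sklearn.manifold import MDS

applicants = ["A", "B", "C", "D"]

# Decision maker's pairwise dissimilarity ratings (0 = identical risk,
# 10 = completely different risk); symmetric with a zero diagonal.
dissimilarity = np.array([
    [0, 2, 7, 8],
    [2, 0, 6, 7],
    [7, 6, 0, 3],
    [8, 7, 3, 0],
], dtype=float)

# Embed the judgments in two dimensions.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)

# Known (hypothetical) attributes of each applicant, used to interpret the
# recovered dimensions: a dimension correlating strongly with debt ratio
# suggests the decision maker is weighting that criterion.
attributes = {
    "income":     np.array([90, 85, 40, 35]),
    "debt_ratio": np.array([0.10, 0.15, 0.45, 0.50]),
}
for name, values in attributes.items():
    for dim in range(2):
        r = np.corrcoef(values, coords[:, dim])[0, 1]
        print(f"{name} vs. dimension {dim}: r = {r:+.2f}")
```

Dimensions that correlate strongly with a known attribute, such as debt ratio, suggest criteria the decision maker may be weighting, which the designer can then probe in a follow-up interview.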
To identify more information needs, designers research the specific kind of decision under consideration. For example, they can identify some informational needs by studying the conceptual and theoretical bases for decisions, such as those covered in business school classes. From such an analysis, designers could identify that investment executives need to consider term of investment, relative risk, tax advantages, and other fiscal parameters in addition to the fundamental question of return on investment. This would provide a starting point for identifying additional needs. Alternatively, designers can gain insight by learning about the industry in general. For example, designers of a DSS for a pharmaceutical firm could gain insights by examining the creation, approval, marketing, and selling processes for drugs. Issues such as testing, purity, reliability, and statistical confidence levels would become evident. Such topics would likely have a home in any DSS in such a firm. Finally, designers could examine copies of reports, memos, transactions, and models to identify additional needs. This is comparable to an archeological analysis of the context from which inferences about needs can be drawn.
Influence Diagrams. It is important to be sure that all of the critical factors are represented in a DSS. Hence, designers often rely upon tools like influence diagrams to help them keep track of the range of information that is needed in a DSS. This popular decision analysis tool helps to identify and to clarify the variables that might be considered as well as the information needed to assess the variables. For example, suppose that a designer is developing a DSS to help investors. As a starting point, the designer knows that there are quantitative models that can be used to describe the financial market and to forecast changes in the market. Similarly, the designer knows that the decision makers will rely on some expert judgment about the financial market. These quantitative and qualitative factors will influence the decision about how to invest. Of course, even with the best forecasts and qualitative judgments, there may be sudden changes in the many factors that influence the market, including events that change assumptions or even news that appears to change those assumptions. In other words, the range of information and models that need to be included in the DSS for this relatively straightforward situation can be very complex. Designers, then, use influence diagrams to keep track of the factors that need to be included in a DSS.
Influence diagrams have few symbols and rules and so are easy to draw once the conditions are well understood. First one must consider the variables of the decision itself. There are decision variables that are controlled by the decision maker, outcome variables that represent the outcome of the decision, exogenous variables that influence the decision but are not under the control of the decision maker, and intermediate variables that are evaluated between the decision and the outcome. To map those into an influence diagram, consider Figure 7.6, which shows the symbols that can be used in an influence diagram. As shown in this figure, there are symbols corresponding to a decision (a rectangle), exogenous variables (an ellipse), intermediate variables (a rounded rectangle), the outcome variables (an elongated hexagon), and the influence (an arrow). Using these symbols, one can diagram the factors needing representation in the DSS. Consider again the example DSS in the previous paragraph. These relationships are shown in an influence diagram in Figure 7.7. See that the ultimate goal is (to maximize) profit (as shown by the hexagon). The decision that will impact profit is the investments shown in the center (rectangle). The decision maker comes to the decisions about investments after consideration of the quantitative models (the left rectangle) and expert judgment (the right rectangle). Of course, in making these choices, it is necessary to keep an eye on the events and relationship changes in the environment. This tells us the kinds of information needed in the DSS. Each of these decisions can be broken down into more detail to determine specific information, specific variables, and specific models that might be included.

Figure 7.6. Influence diagram symbols. (A rectangle denotes the decision; an ellipse denotes a chance variable that is out of the control of the decision maker; an elongated hexagon denotes the objective of the decision, the variable the decision maker is attempting to maximize or minimize; a rounded rectangle denotes a deterministic function of the quantities that depend on it or an intermediate variable; and an arrow denotes influence.)
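As an illustration of how such a diagram can be carried into the design itself, the sketch below (an assumption, not drawn from the book) represents the investment example as a small directed graph using Python and the networkx package; the node names and kinds are illustrative.

```python
# A minimal sketch of capturing an influence diagram in code so designers
# can enumerate the information the DSS must supply.  Node names and
# "kind" labels are illustrative, not taken from the book's figure.
import networkx as nx

diagram = nx.DiGraph()

# Node "kind" loosely mirrors the symbols in Figure 7.6.
diagram.add_node("Quantitative market models", kind="intermediate")
diagram.add_node("Expert financial judgment", kind="intermediate")
diagram.add_node("Market events", kind="chance")        # outside the decision maker's control
diagram.add_node("Investment choice", kind="decision")  # the decision itself
diagram.add_node("Profit", kind="objective")            # the variable to maximize

# Arrows represent influence.
diagram.add_edge("Market events", "Quantitative market models")
diagram.add_edge("Market events", "Expert financial judgment")
diagram.add_edge("Quantitative market models", "Investment choice")
diagram.add_edge("Expert financial judgment", "Investment choice")
diagram.add_edge("Investment choice", "Profit")

# Everything that directly or indirectly influences the decision is a
# candidate information or model requirement for the DSS.
needed = nx.ancestors(diagram, "Investment choice")
print("Information/model needs:", sorted(needed))
```

Traversing the graph backward from the decision node gives the designer a checklist of the information and models the DSS must make available.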
There are computer tools that can help designers build these diagrams and use them to create the system. For example, consider Lumina's Analytica, which builds influence diagrams easily, as shown in the top portion of Figure 7.8. These tools can then provide the backbone of analysis using the functionality built within Analytica, as shown in the bottom portion of Figure 7.8, or provide a blueprint for analysis with other modeling tools.
Figure 7.7. An influence diagram. (In the investment example, the nodes include mathematical analysis and forecasts, expert financial judgment, the investment decision, and profit as the objective.)

Situational Analysis. Once an initial analysis of the key decisions and related information needs has been completed, designers must complete a situation analysis to help identify some of the remaining needs. This includes an analysis of the task, the organizational setting, and the user characteristics and how each contributes to the informational requirements of the DSS.
Using either interviewing or other techniques, a designer completes a task analysis to identify the baseline information and model needs. The baseline needs represent the theoretical or conceptual information needs that everyone would need without consideration of the preferences of the decision maker or external needs imposed by the choice context. These needs are driven by the nature of the tasks, their relative structure, variability, length, and frequency to identify information needs and sources as well as constraints. Simon's stages of decision making can provide insight into these needs. If the decision maker's goal is “intelligence,” the system must monitor and scan data to identify indicators of problems and opportunities, such as trends, patterns, or exceptions to patterns. For example, a financial DSS might continually scan the stock and bond market for investment opportunities that have high potential payoff. On the other hand, if the goal is “design,” the system must be able to facilitate the identification and construction of alternate strategies. In this case, the DSS needs to provide opportunities for investment, such as tools for identifying mutual funds with characteristics that will meet the needs of the investor. Finally, if the goal is “choice,” the system must facilitate evaluating and testing of the alternatives for sensitivity to assumptions. In this case, the financial DSS might evaluate alternatives for past performance as well as the expected reaction of the financial opportunity to changes in resources, political climates, or other factors that could affect its desirability.
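As a concrete illustration of the "intelligence" support just described, the hypothetical sketch below (Python, with made-up funds and thresholds) scans recent returns and flags exceptions to the established pattern for the decision maker's attention.

```python
# A minimal, hypothetical sketch of intelligence-phase support: scanning
# data for exceptions to a pattern.  The funds and numbers are invented.
import statistics

def flag_exception(history, latest, z_threshold=2.0):
    """Flag a security whose latest return deviates sharply from its history."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return False
    return abs(latest - mean) / stdev > z_threshold

daily_returns = {
    "Fund A": ([0.1, 0.2, 0.0, 0.1, 0.2], 0.1),   # (recent history %, today %)
    "Fund B": ([0.1, 0.0, 0.2, 0.1, 0.1], 1.4),   # unusual jump -> worth a look
}

for name, (history, today) in daily_returns.items():
    if flag_exception(history, today):
        print(f"{name}: today's return of {today}% is an exception worth reviewing")
```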
Similarly, the task analysis determines whether there are limitations on the number or types of models appropriate or necessary in the analysis. For example, in the case of the investment executive, task analysis identifies the need to distinguish between deciding when to invest, how much to invest, and for how long to invest as well as the outcomes of liquidity, rate of return, and total profit. In addition, it reminds us to consider related factors such as the inflation rate, competition, and market stability.
Figure 7.8. An influence diagram in Analytica. Example screen shots were provided by and reprinted with the permission of Lumina Decision Systems.

Once we know the main independent, dependent, and interdependent aspects of the choice context, we can begin to understand what the decision maker needs to do, what the decision maker can control, and what constrains the decision maker's actions. Designers must learn what guides and limits the decision maker as well as what measures are appropriate for assessing the quality of the decision and/or outcome. Knowing the facets of the problem and the interdependencies among them will help identify information sources, authority constraints, and coordination necessary to provide decision makers what they will need.
The organizational setting analysis describes the forum in which the choice will take place. In this analysis, designers identify informal norms or other relevant practices for analyses as well as the climate in which the decision maker functions and relevant relationships among the decision makers within their organization. Each of these evaluations results in information and modeling needs for the DSS.
Finally, designers need to examine the user characteristics, such as the amount of experience and knowledge possessed by the decision makers and the extent of their skills. As we saw in Chapter 2, this is influenced by the users' experience with and background in the problem type under consideration, the models appropriate for that problem, and DSS or other computer-based tools. Further, this is influenced by information preferences, decision-making styles, and approaches to problem identification and evaluation. Knowledge of the user requirements along all of these dimensions will provide insight into the information needs (primarily background needs) as well as into the model management and other user interface requirements.
The entire situation analysis results in a deeper understanding of how the DSS will be used, including the kinds of information, models, support, and intervention the user is likely to employ. To achieve this understanding, designers develop a model of how decision makers will use information. They identify a basic understanding of the model through the identification of baseline needs. This model is refined by interviews and observation of the decision makers. Designers abstract important information from those interviews and compare the expressed needs to those predicted by the model. Differences between the expressed and predicted needs are used to refine the model. Often these steps are followed in an iterative fashion, with designers forming and refining models of the decision makers' needs between data collection steps, as shown in Figure 7.9.
Figure 7.9. Iterative nature of situation analysis. (The cycle: perform initial interviews, pick a basic model, perform model-directed interviews, perform data abstraction, check the fit of the model to the data, and repeat until the model fits.)

Designers should be able to understand how the decision maker conceptualizes, analyzes, and communicates problems. For example, at this stage, designers should be able to learn where and how decision makers will employ graphs, lists, charts, and other aids to understand the problem. In addition, designers should know how decision makers analyze and manipulate the information in different contexts. This includes understanding the representations (i.e., the lists, graphs, charts, etc.), operations (i.e., the means of analysis), and the linkages between representations and operations as well as the model management tools, note pads, or user interface components that facilitate those linkages. Finally, the situation analysis suggests frameworks for making the DSS useful to decision makers. In particular, it suggests characteristics of the user interface, its design, the necessary kinds of intelligent and context-specific assistance, and the relationships between modeling and database components. Said differently, it would tell the designers how to evaluate the four components of a DSS discussed in earlier chapters.
The end of the situation analysis begins the design phase. As shown in Figure 7.9, the design stage begins with a logical design of the system and ends with the construction of the system. In particular, this includes the identification and/or creation of (a) available databases and a database management system; (b) available models and a model base management system; (c) user interfaces, and (d) a mail management system. In this step, designers determine how the system will work and what hardware is appropriate. Further, they must identify what software or tools to utilize or create. Finally, they must identify an appropriate design approach. Advantages and disadvantages of different approaches are discussed in the next section.
After the construction of the system, designers must evaluate and then implement the DSS. This stage includes testing of the system and evaluation by the designers as well as the users. Implementation includes training, deployment, and demonstration. Of course, the final stage is maintenance and adaptation. Maintenance covers the correction of any defects in the system that appear after deployment. In adaptation, designers modify the system in response to changing demands upon the choice process resulting from new choices or information sources, or they make improvements in usage of the system.
As with the SDLC, there have been several attempts to provide methodologies that specify the various steps in the design of DSS. Among these are ESPRIT and KADS from the University of Amsterdam. Each of these follows the basic structure outlined in this chapter, but they provide additional details and specifications for completing the analysis phase.
Design Approaches
Table 7.1 outlines the three approaches to design and implementation. These three methods suggest that you either build the whole system from scratch (“one stage, complete system”) or use current technologies to facilitate the development (“quick-hit method” and the “evolutionary method”). In addition, they suggest that you either treat the DSS as a one-time development, with some maintenance over time (“one stage, complete system” and the “quick-hit method”), or you plan for the system to grow with the demands placed upon it (“evolutionary development”).
Table 7.1. Design Approaches
• One-stage, complete system
• Quick-hit method
• Evolutionary development

Systems Built from Scratch. The one-stage, complete-system approach assumes that nothing, including the models, the model base management system, the databases, the database management system, the user interface—or even their components—is available on which to build the desired DSS. As the name suggests, this approach requires the designers to build an entire system and deliver it in total to the decision maker. In these cases, designers use the DSS-adapted life-cycle approach, shown earlier in Figure 7.3, with significant emphasis on the design and construction phases. It means that designers code every aspect of the system from one or more languages without the benefit of available electronic tools or modules. Although all early DSS were built in this way, today, the one-stage, complete system is implemented only when building a large-scale, multiple-user, or unique system.
This approach is useful when the models are so specific to the problem that modeling software is not available. For example, suppose the purpose of the DSS is to facilitate decisions regarding battlefield logistics and strategy. Or suppose the DSS must simulate human tolerance of toxic wastes. The necessary models are sophisticated and unique, libraries of such models do not exist, and the models may be quite complex. Since the model is such an important part of the DSS, it may be easier to build the system around the specialized model than to incorporate it into preexisting tools and modules.
The best generalized example of the use of a one-stage, complete system today is the design of geographic information systems (GIS). These systems provide decision support for a particular class of problems, namely those requiring map-oriented analyses of data. For example, city planning agencies may need to track sewer development, electricity and gas hookups, movement of the population, and housing starts. For some analyses and decisions, it may be most meaningful to model the infrastructure to support housing starts with a map. In this case, the map and the associated analysis tools serve as the model and model management tools. In other words, the GIS is a DSS that uses a specialized set of models and model management capabilities, specialized database files, and a powerful user interface. Since these tools require a unique programming platform at this time, this form of DSS is designed as a one-stage, complete system today, especially because the tools are generally used in isolation. The specifics of a GIS and its applications will be highlighted in a later chapter.
When using the one-stage, complete-system method, designers often prototype as a means of determining system requirements. A prototype is a facsimile system that simulates the user interface as well as the data and modeling activities of the DSS. Designers develop prototypes using fourth-generation languages or other prototyping tools that allow rapid development and easy change of the system. After decision makers specify the basic needs and preferences, designers can quickly produce a prototype that they believe meets those needs and preferences. Users then operate the prototype as they intend to use the ultimate system. In this way, they experience the user interface, data management, model management, and mail management features and capabilities. Since users can demonstrate problems or less preferred options for the designer, they can also respond to specific features or constraints and express their concerns more precisely. Similarly, designers can ask questions in an unambiguous manner.
Designers armed with this feedback can adjust the prototype quickly to respond to the users' needs. Since the elapsed time between the expression of preferences and observation of the effects is so short and since specific system attributes are identified, users can focus on whether designers understood and implemented their concerns. This cycle is repeated until the user is satisfied with the design of the entire DSS.
The prototype enables designers and users to communicate concretely, reducing the chance of miscommunication. Such a tool is important because the designers and the decision makers have different mental representations of the problem which bias how they respond to new information provided to them about the system. The tangible nature of the prototype allows them to look for disconfirmatory data which identifies when the DSS is not performing adequately. While many seasoned designers prefer an intuitive approach, this empirical way to determine needs will generally provide a better analysis of decision-making needs.
After a satisfactory system has been created, the designer could translate the fourth-generation code into more efficient and easily maintained production code. This would have the benefit of providing a system that could be maintained over time and that could be used by multiple users without magnifying the strain on resources. However, it introduces a time delay in users' access to the real system. Further, since production code may not have the same capabilities as are available in the fourth-generation languages, some important features can be lost. More often, designers leave the DSS in the fourth-generation language and allow users immediate access to the technology. If there are few users, particularly if they do not use a system frequently or intensely, the advantage of improved efficiency in code is not worth the delay.
Although the one-stage, complete-system method was once the preferred approach to design, it is unusual to use this approach to DSS design today. The change is associated closely with the move from file processing applications toward database applications in most corporations and organizations. In earlier periods of data processing history, most applications had their own unique data that ran with the system, and hence the need to identify data and control it was associated with the DSS itself. In today's environment, there has been a move toward shared databases. Certainly most large computer users have shared data to both simplify control and access to the data and make results across applications consistent. Further, this shared view of data allows more types of data to be available for a greater number of applications and therefore makes possible richer decision making. Today increasing numbers of models are computerized and easily integrated within a DSS. Since these sophisticated databases, models, and their control mechanisms exist, it would be inefficient to design without them.
Access to a wide range of databases has made the DSS more useful. However, it has also led decision makers to make greater data demands on the systems. Fortunately, with the greater connectivity available through the Internet and the capability of “surfing” the Internet, decision makers can get access to broader data in shorter times.
Simultaneously, there has been a change in the capability of hardware and the efficiency of the software provided as DSS appliances. Early appliances were quite limited in the range of models and data they could reach, were relatively inefficient in their analyses, and provided user interfaces that would be considered archaic today. Today, designers can realize significant economies of scale from centralizing the development of such sophisticated tools, such as those provided with Cognos, shown in Figure 7.10. If such centralized tools are implemented properly, it can result in the development of a very efficient engine and a system that integrates well with other systems.
Furthermore, since tools are substantially more sophisticated today, building them from scratch is likely to be a long, tedious effort which most corporations cannot tolerate. The resulting system would be both late and technologically obsolete before use. In short, the resources available to DSS designers are substantially better today than they were in the past. Hence, where the resources exist, it makes sense to use them.
Figure 7.10. Example DSS appliance: Cognos. Screen shot provided by and reprinted courtesy of International Business Machines.

Using Technology to Form the Basis of the DSS. The other two approaches to designing a DSS differ from the first in that both rely upon the use of an existing base technology, called a DSS appliance. In the one-stage, complete-system approach, designers customize all the components by building them from a language. More often, designers use commercially available, leading-edge tools and technologies to construct the system more quickly. These tools and technologies are referred to as "appliances" for the DSS, or DSS appliances.
EVALUATING A DSS APPLIANCE. Of course, designing more quickly is only good if the appliance will, in fact, meet the needs of the application. Said differently, it will only be appropriate to use an appliance if it allows the uses and functionality that are anticipated for the DSS, such as those described in earlier chapters. These needs must be stated before purchase or lease. Certain issues must be considered for any appliance, such as those summarized in Tables 7.2-7.7.
The needs summarized in these tables correspond to the DSS needs discussed in earlier chapters. For example, Table 7.2 shows some features to consider regarding the database and data management component of the DSS. From a macroperspective, we need to ask whether the appliance will simplify or prohibit access to and manipulation of the necessary data. Unfortunately, it is not always easy to determine what data will be used or even how the demand for data will expand as the DSS is used. So, designers need to take the perspective of the decision maker when asking whether the appliance is adequate, flexible, and usable and provides sufficient security to meet the needs of the application.
In particular, the designer needs to consider whether the appliance (a) is consistent in providing data (both in raw and processed form) to users, regardless of the source of the data; (b) interfaces well with the corporatewide data management tools; and (c) allows data warehousing. In other words, adequacy reflects whether the appliance will provide users access to the necessary data in a seamless and friendly fashion. Further, the appliance must be flexible in its use of data to meet the varying needs of decision makers. For example, earlier chapters discussed the importance of allowing development and use of
Table 7.2. Data and Data Management Concerns in Selecting a DSS Generator
Adequacy
• Provides common user view of data
• Links well to corporate database management system
• Facilitates data warehousing

Flexibility
• Offers the creation of "personal" databases
• Supports a wide range of database formats (text, graphics, audio, video, etc.)
• Facilitates ad hoc query capability
• Provides flexible browsers for public databases
• Facilitates knowledge management

Usability
• Offers ease in data selection
• Has data dictionary
• Handles necessary amount of data
• Can handle sparse data

Security
• Provides data security features
• Offers multiple levels of security
• Controls number of users, with what kinds of access, in simultaneous use
• Creates audit trails
personal databases exclusively by the decision makers. The appliance must allow such development and provide full tool use on these data. Similarly, decision makers must have a tool that searches public databases using the full range of query development they use in the corporate databases. An ideal system would provide the same search engine for all databases, thereby making transition from one type to another transparent to the user. In addition, the appliance must allow for formats beyond simple text. Depending upon the application, decision makers are likely to need graphics, audio, video, and even access to virtual reality files. These alternate format files can only be effective support mechanisms, however, if they can be indexed, stored, and retrieved easily and merged with other data.
Usability refers to the system's ability to meet data and decision maker needs. On the one hand, the question of usability can refer to the size of databases, the size of resulting tables, or the number of queries that can be made at once. Since size and price are often highly correlated, we need to be sure of buying enough to meet foreseeable needs. On the other hand, the question of usability can refer to the decision maker's ability to find the necessary variables and to make the system understand those variables.
Finally, any corporation needs to provide security for certain data. Users expect the DSS will ease the problems of location of information and reports. With this ease, however, comes the requirement that the appliance prevents those not employed by the corporation from using the data. It is also true that some data are so important or controversial that only some members can have access to it on a “need-to-know basis.” Hence, the appliance must be able to provide multiple levels of security as well as necessary audit trails to determine who has gained access to what data.
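One simple way to pull such concerns together during appliance selection, though it is not prescribed by the text, is a weighted-criteria score. The Python sketch below uses hypothetical weights, criteria, and vendor ratings.

```python
# A hypothetical weighted-criteria evaluation of candidate DSS appliances.
# The criteria, weights, and ratings are illustrative only.
criteria_weights = {
    "data adequacy": 0.20,
    "data flexibility": 0.15,
    "usability": 0.15,
    "security": 0.15,
    "model management": 0.20,
    "user interface": 0.15,
}

# Ratings on a 1-5 scale gathered from designers and decision makers.
candidate_scores = {
    "Appliance X": {"data adequacy": 4, "data flexibility": 3, "usability": 5,
                    "security": 4, "model management": 3, "user interface": 5},
    "Appliance Y": {"data adequacy": 5, "data flexibility": 4, "usability": 3,
                    "security": 5, "model management": 4, "user interface": 3},
}

for name, scores in candidate_scores.items():
    total = sum(criteria_weights[c] * scores[c] for c in criteria_weights)
    print(f"{name}: weighted score = {total:.2f}")
```

Whatever scoring device is used, the point is to state the needs of the application before the purchase or lease, not to let the appliance define them afterward.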
Similarly, designers need to evaluate these issues with regard to the models and the model management system associated with the DSS appliance. The appliance needs to meet both the modeling and analysis capabilities of the decision makers, such as those described in Table 7.3. In this case, the modeling concerns address whether the appliance can handle the kind and size of models that are of interest to the decision maker. In particular, it examines the models that can be accessed, the ease of use, the flexibility (especially with regard to size), and the functionality made available in the system.

Table 7.3. Models and Model Management Concerns in Selecting a DSS Generator

Modeling
• Functionality
  (a) User-defined functions
  (b) Procedurability (ability to solve equations independent of their ordering, symbolic reference of data)
  (c) A wide range of functions
  (d) Nonprocedurality
  (e) Time as a possible dimension
• Flexibility
  (a) Size restrictions
  (b) Currency and date conversions
  (c) Ability to aggregate and disaggregate analyses
  (d) Ability to link sequential analyses
  (e) Multidimensionality
  (f) Links well to available modeling packages
• Appropriateness of included models
  (a) Symbolic modeling
  (b) Statistical ability (descriptive statistics, hypothesis testing, predictive statistics, regression)
  (c) Project management ability (PERT/CPM, multilevel work breakdown structure)
  (d) Operations research ability (mathematical programming, stochastic analysis)
  (e) Forecasting and econometrics ability (time series analysis, causal modeling, seasonalization, smoothing)
• Ease of use

Analysis capabilities
• Sensitivity analysis
• "What-if" analysis
• Impact analysis

Note: PERT is program evaluation and review technique; CPM is critical path method.
The analysis capabilities, on the other hand, question the appliance's ability to provide the decision makers with a rich modeling environment. The characteristic of a DSS that distinguishes it from (being simply) a modeling package is its ability to simplify both the use and the interpretation of the models. For example, the model management component needs to be able to use the output of one analysis as the input to a second analysis, if wanted. In addition, the appliance must allow and simplify appropriate sensitivity and "what-if" analyses associated with the models in its portfolio. Not only must it tolerate review of the assumptions and rerunning of models in light of changes in the assumptions, it also must encourage the user by providing an easy path to such analyses and a user-friendly interpretation of the output. Finally, the appliance needs to provide context-sensitive modeling assistance for the user. This does not mean the online version of the user manual as exists in many PC-based applications today. Rather, this is a level of assistance in how to run the model, including a statement of the assumptions and limitations of the model and even an intelligent intervention when modeling assumptions have been stretched or violated. While most appliances will not have such assistance built into the package, a good one will simplify the development of such tools.

Table 7.4. User Interface Concerns in Selecting a DSS Generator

User friendliness
• Novice and expert modes
• Menus and prompts
• Consistent, natural language commands
• Command abbreviations
• Context-sensitive help
• Clear, end-user-oriented error messages
• "Undo" command support
• Meaningful identifiers
• Documentation
• User-defined commands
• Context-sensitive warnings

Support of modeling and data needs
• Wide range of graphics support
• Windowing support
• Multitasking support
• Support for a variety of input and output devices
• Color and functional control over user interface
• Support for individual customization

Graphics
• Quality and resolution of output
• Multicolor support
• Range of output control
• Support for dynamic graphics, video, and audio enhancements
• Basic plots and charts
• Complex charts
• Format and layout control
• Spacing of graphs
• Compatibility with available graphics devices
• Preview ability
• Modification ability
• Ease of use

Reporting formats
• Supports a range of platforms
• Flexibility of reporting formats
• Standard formats
• Ease of customization
• Standard symbols and conventions
Consideration of the user interface capabilities is important as well. Table 7.4 refers to those needs outlined in Chapter 5. In particular, in order for the system to be helpful for the decision maker, it must be user friendly (whatever the level of user expertise and experience); support a wide range of output and input devices; provide graphical, video, and audio interpretation of the results; and provide a reporting format that can be customized for the specific application and/or user under consideration. From the designer's point of view, this means that the appliance must either provide such functionality itself or make it easy to design.

Table 7.5. Connectivity Concerns in Selecting a DSS Generator
• Compatibility with available electronic mail system
  • Document sharing
  • Data sharing
  • Communications
  • Mail-handling and priority-setting code
• Connectivity to Internet resources, including news services and Web pages
• Electronic searching devices for Internet resources
• Firewall availability

As with any software adoption, we need to be concerned that the system will work in our environment, will be affordable, and can be upgraded over time. Tables 7.6-7.8 summarize criteria to consider for ensuring that the selection of the appliance makes good business sense.
Table 7.6 illustrates the issues associated with basic compatibility issues. It is important to ensure that the appliance will work with the equipment available and with the operating systems and networking options available. In addition, it must be able to work with any additional resources acquired, including input, output, and storage devices.
Table 7.7 illustrates the cost issues associated with the use of the appliance. Today, software can be purchased or leased under a variety of options. Designers must examine these costs carefully to ensure they are in line with the usage patterns of decision makers. For example, it is not appropriate to deploy a product across many occasional users if the cost is based upon each installation of the developed product, especially if it is not possible to deploy those copies via a network. On the other hand, such a deployment would be appropriate if the cost were a function of the number of simultaneous users of the product.
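As a back-of-the-envelope illustration of that licensing trade-off, the short calculation below compares a hypothetical per-installation fee against a concurrent-user fee for a pool of occasional users. All prices and usage figures are invented for the example; they are not drawn from any vendor's actual terms.

# Hypothetical licensing terms and usage pattern (illustrative only).
per_installation_fee = 1_200      # cost per installed copy
concurrent_user_fee = 4_000       # cost per simultaneous-user "seat"

occasional_users = 150            # people who each need a copy installed
peak_concurrent_users = 12        # the most who are ever logged on at once

install_based_cost = per_installation_fee * occasional_users
concurrent_based_cost = concurrent_user_fee * peak_concurrent_users

print(f"Per-installation licensing: {install_based_cost:,}")    # 180,000
print(f"Concurrent-user licensing:  {concurrent_based_cost:,}")  # 48,000

With many occasional users, concurrent-user pricing is clearly cheaper in this scenario; the comparison can reverse when a small group uses the system intensively.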
Finally, Table 7.8 illustrates issues that should be considered to ensure that the vendor is reputable and is likely to provide the kinds and level of support needed for your application. Such support will be important not only during the development stage but also as users begin to find undocumented features needing explanation.
USING A DSS APPLIANCE. If a DSS appliance forms the basis of a DSS, a designer has two possibilities for development, the quick-hit approach or the evolutionary approach.
Table 7.6. Hardware and Software Concerns in Selecting a DSS Generator
Compatibility with available equipment
Compatibility with available operating system
Compatibility with available networking configuration
Printer and plotter support
Preferred hardware/operating system/networking configuration
Time-sharing option
Disk and other resource requirements
Table 7.7. Cost Concerns in Selecting a DSS Generator
Initial purchase/license cost
Per-capita fee
Maintenance costs
Documentation
Resource utilization
Conversion costs
Upgrade frequency and costs
The difference between the two is in the staging of development and the basic involvement of decision makers in the design process.
The goal of the quick-hit method is to design a system quickly in response to some well-understood and usually immediate need that is expected to have a high payoff. Furthermore, the system is likely to reside on a microcomputer and be used by either one person or a small group. The goals and procedures are clear, the data are available, the system can stand independently, and there is little need to address conflicting concerns. Hence, much of the analysis component of design can be done quickly. Further, since the system is discarded after the choice is made, it is not necessary to employ many of the procedures that ensure the long-term viability of a DSS.
We might use this approach to design a DSS for a problem such as a high-level personnel decision. In some industries, many of the criteria needing evaluation are well known. Furthermore, selecting the right person for the job can save corporations significant money and provide significant opportunities for growth. However, it is a decision that is not made often. Hence, a DSS to support a choice would be a good candidate for use of a quick-hit design process.
To achieve the goal of fast deployment, designers rely heavily upon already available tools and packages, existing data and model sources, and existing data, model, and mail management systems. Such systems work well in the short run because designers can rely upon tested components that use current technology. Over the long run, however, designers may be able to update, maintain, or enhance the system only when the vendor
Table 7.8. Vendor Concerns in Selecting a DSS Generator
Financial stability and viability
Length of time in business
Size of installed base
Growth in customer base
Quality and size of staff
Activity of R&D staff
Ongoing commitment to this product
Technical support personnel
Availability of support hot line
Availability of Internet-based support
Time horizon for support
Internet user discussion group
Organized user group
Product target market
User perceptions
provides updates to the appliance. In addition, the vendor dictates the kinds of enhancements provided in the system. Alternatively, if the system is composed of a makeshift combination of existing tools and systems, the processing efficiency may not be as good as it can be with more structured systems. Of course, in the long run, it may be difficult to bridge such systems to other existing systems or to systems introduced later.
The quick-hit process relies heavily upon the use of appliances and other tools so that the designers can focus their energies on the analysis and user interface components. Such a process is reasonable if the system can stand independently and if the data are already available. However, it becomes difficult if there is a long-term need for the system or a need to tie it to existing systems. The approach only works if users know what kinds of data and models to use and do not need significant levels of “support” in either the data selection or modeling phases. In fact, it works best if the need is so domain specific that a particular modeling package can be used as the core of the system.
The third approach, evolutionary development, is similar to the quick-hit approach in that it is dependent upon the use of DSS appliances, which allow for quick development and quick changes. Further, they allow the designer to focus on analysis of the needs rather than on construction of the software. Evolutionary development differs from the quick-hit approach in that designers expect the system design will mature as decision makers gain experience with the system and the information access.
Evolutionary development begins when the designer selects an important subproblem of the choice process. Through focus on this subproblem, the designer learns about the information, modeling needs, and user interface needs of the decision maker. This subproblem must be small enough to be unambiguous to both the designer and the decision maker but large enough to require computer support. In addition, the problem must be important to the decision makers so that they will participate closely in the development process and adopt the process after design.
The process of design is heavily dependent upon the use of prototyping, discussed earlier in this chapter. Designers begin by seeking user needs. From this information, they design a “quick-and-dirty” but working mockup of the system. Decision makers test and evaluate the prototype and refine their information needs. Designers then fine tune the system and provide it to decision makers again for testing and evaluation. This process is repeated until the evaluation calls for no substantive changes and an acceptable and stable product is available to the decision maker.
The key to this being different from the first process defined is twofold. First, the one-step, complete system builds all components from scratch, and so there is often a delay between the agreement on specifications and the provision of the product. The evolutionary approach, on the other hand, provides decision makers with a working system quickly. However, rather than providing the entire system at once, the evolutionary approach provides only a small component of the eventual DSS at the outset. This allows users to experience using the system built to the agreed-upon specifications and hence to change those specifications as the system matures.
In addition, when prototyping is used in the one-step, complete method, the prototype generally is not a working system but rather a mockup built with a shell tool, a limited database, and a stand-alone machine. Often, response is better with these prototypes (in terms of both quality of the response and response time) than the designers can later provide with the production system. As a result, users are often disappointed by the final system. In the case of evolutionary development, designers use appliances, not mockup shells, in development. Hence, what users see when interacting with the system early on is what they will see in the eventual system. Furthermore, since designers and decision makers
concentrate on one small part of the process in the prototyping effort, it is easier for both parties to focus on the implications of features and of changes to features. In addition, because evaluating the system and its changes requires less of the decision makers' time (because the component is smaller), they are able to provide better and more meaningful feedback to the designer, and the exercise tends to have better results. By focusing on a small but important component of the process, decision makers can better understand the implications of their suggestions. In the one-step process, designers and decision makers dilute their focus by looking at the entire system at once. Since there is so much to look at, decision makers may not consider how many of the functions will actually work in a production system, and they may not commit the time, energy, or attention needed to understand the entire system at once.
The problem with evolutionary development is, of course, where to start. Clearly, we need to begin with some component of the problem that is of importance to the decision maker. Once that decision is made, the designer still needs to determine what information should be included at the outset. Information, however, is not a unidimensional concept. Suppose decision makers state that their most important focus is on effectiveness. While “effectiveness” of the alternatives might seem like an unambiguous concept, it can really mean very different things to different people. To the designers, it might mean cost effectiveness. To the decision maker, it might mean the expected outcome of attracting new clients. Even if there is agreement on the measure “attracting new clients,” there might be disagreement about when relevant data are actually information. For example, designers might think of hard numbers of new clients and thus new sales. However, decision makers might think of an increase in customer satisfaction that will lead, in turn, to acceptance of the product.
Ultimately, all these views might be important to the decision maker. Nevertheless, designers need to know where to start. To define the needed information, designers must look at it from a variety of perspectives: (a) the content, (b) the representation, and (c) the attributes of the data themselves.
An understanding of the appropriate content means an understanding of what knowledge needs to be accumulated and maintained or what issues need to be addressed. Here, for example, the designer determines whether the most important issue is the attrition rate, the schedule needs, or the advertising expenditures. In addition, the designer must differentiate the relevant perspective on that content. For example, designers need to determine whether decision makers prefer the function or the merit associated with the relevant content. If one considers the topic of “advertising expenditures,” a “function” perspective would represent how the money was spent, where the money was spent, and so on, while a “merit” perspective would represent how the expenditure had an impact on the clientele. Similarly, designers need to determine the focus of the information: whether the data should be oriented toward how the alternative is structured (an internal focus) or toward the service that is provided by the alternative (an external focus). Third, designers need to understand what kinds of measures are most important to the decision makers. These might include cost data, activity data, performance data, or impact data. For example, the data needs are quite different if decision makers simply want to know on what kinds of ads money is spent than if they want to know what market segments are reached and are likely to be influenced by the information.
The representation of the data has traditionally included the format, or the presentation of what kinds of data in what order. This would include whether graphs or charts are provided, icons or text, and numbers or conclusions. Finally, the attributes of the information are those characteristics discussed in Chapter 3. These include whether the data are qualitative or quantitative, facts or judgments, specific information or global generalizations, past performance or expected performance data. In addition, they include a specification of who provides the data and what kinds of credibility go along with that presentation.
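One way to make this multidimensional view operational is to record each candidate information need along the three perspectives just described. The sketch below does so with a simple data structure; the field names and the “advertising expenditures” example values are illustrative assumptions, not a prescribed schema.

from dataclasses import dataclass
from typing import Literal

@dataclass
class InformationNeed:
    # (a) Content: what knowledge is accumulated and from which angle.
    topic: str
    perspective: Literal["function", "merit"]
    focus: Literal["internal", "external"]
    measure: Literal["cost", "activity", "performance", "impact"]
    # (b) Representation: how the data are presented.
    format: Literal["graph", "chart", "icon", "text", "number", "conclusion"]
    # (c) Attributes: the characteristics of the data themselves.
    qualitative: bool
    judgment: bool                              # judgment versus fact
    time_orientation: Literal["past", "expected"]
    source: str                                 # who provides the data (bears on credibility)

# Example: advertising expenditures viewed from a "merit" perspective,
# i.e., the impact of the spending on the clientele rather than where it went.
ad_impact = InformationNeed(
    topic="advertising expenditures",
    perspective="merit",
    focus="external",
    measure="impact",
    format="chart",
    qualitative=False,
    judgment=False,
    time_orientation="past",
    source="marketing analytics group",         # hypothetical provider
)
print(ad_impact)

Writing needs down in this way forces the designer and the decision maker to agree explicitly on perspective, focus, and measure before any screens are built.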
Once one can describe the content in this multidimensional manner, it is possible to provide guidance as to how to start the DSS design and how to let it evolve. Fortunately, the dimensions cluster together, making it easier to determine where information will be most useful. For example, the content and representation of the data required often are a function of the experience level of the decision makers. As decision makers gain greater experience with a particular type of decision, they move from seeking feasibility information to preferring information regarding the performance of alternatives under consideration; with increasing amounts of experience, they tend to move toward information regarding the efficiency of alternatives. A similar shift occurs with regard to the attributes of the information. Decision makers with little experience tend to seek quantitative, factual data that reflect future economic implications. As decision makers gain more experience, they seek more information regarding the past performance of alternatives, usually in terms of qualitative information and more speculative opinions. Finally, decision makers with significant amounts of experience tend to address process issues. They seek quantitative, factual data, reflecting the operations issues of the adoption of the alternative.
These preference patterns can be useful for guiding the evolution of systems. For example, if it is known that users are primarily inexperienced in a particular category of decisions, it would be wise to emphasize feasibility information with factual data reflecting the economics of the environment in the early stages of development.
In addition, decision makers' preferences for analytical methods evolve over time as a function of how the decision context changes. For example, decision makers are likely to employ compensatory models, such as optimization models, only when considering tactical decisions in a stable environment for which they have significant experience. Knowing this suggests including many exploratory and statistical tools in the early stages of DSS development and deemphasizing other kinds of tools until later stages of the system's evolution.
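These observed patterns can be encoded as simple staging rules that suggest what a first release should emphasize. The rules below merely restate the preferences described above; the experience categories and emphasis labels are illustrative simplifications, not a validated model.

# Rough staging rules derived from the preference patterns described above.
# Keys are decision-maker experience levels; values describe what an early
# release of the DSS might emphasize for that audience.
STAGING_RULES = {
    "novice": {
        "information": "feasibility of alternatives",
        "attributes": "quantitative, factual data on future economic implications",
        "tools": "exploratory and statistical tools",
    },
    "moderate": {
        "information": "performance of alternatives",
        "attributes": "past performance, qualitative information, and opinions",
        "tools": "comparative and trend-analysis tools",
    },
    "experienced": {
        "information": "efficiency and process issues of adopting an alternative",
        "attributes": "quantitative, factual operational data",
        "tools": "compensatory models such as optimization, where the context is stable",
    },
}

def first_release_emphasis(experience: str) -> dict:
    """Suggest what to emphasize in the first evolutionary release."""
    return STAGING_RULES.get(experience, STAGING_RULES["novice"])

print(first_release_emphasis("novice")["information"])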
The Design Team
Selecting the appropriate design approach and the appropriate technology clearly are important aspects of DSS design. A third concern is selecting the appropriate project team to meet the needs of the system. This is particularly important for the first DSS in a corporation or group and/or if the DSS is part of a strategic change to the corporation.
First, the team must include a champion (even if it is simply an ex officio position) from among the senior management of the group. Including such a person and keeping him or her updated regularly can help you get the necessary access to resources, data, and models. In addition, you need a team of developers with the appropriate skills. For most DSS applications today, this team needs to include people trained in graphical and object-oriented technologies who are open minded and imaginative. It is also important to include people who understand the issues associated with disaster recovery and security; planning for problems from the start makes it much easier to solve them. Finally, it is important to include end-user decision makers on the team to ensure that the DSS meets their decision-making needs.
Whether internal end users or external consultants, team members need to have certain characteristics, such as those outlined in Table 7.9. Notice that the primary team need is a sense of creativity and open-mindedness. If the DSS is to result in better decisions, the team must do something more than simply automate the current procedures. If team
Table 7.9. Characteristics of Good Team Members
Creativity and open-mindedness
Good communication skills
An understanding of the decision task and the organization, business, and marketplace
An understanding of and experience with DSS design and/or use
An understanding of possible technology
A willingness to work cooperatively
Good chemistry between the design team and the user team
members do not have the capacity to see potential opportunities for change, change will not occur.
A second need is good communication skills. Later in this chapter, we will discuss the problems associated with putting decision needs into words. In addition, it is difficult to communicate technical requirements or enabling technology. Without good communication, no creative change to the decision process can happen. A humorous example of the kind of miscommunication that can occur is found in the accompanying box.
Similarly, the team needs to have a good understanding of the decision task, how that task fits within the organization, and how it relates to the business and the marketplace. The goal of the exercise is to provide a value-added service through the DSS.
DSS DESIGN AND REENGINEERING
In today's business environment there is considerable discussion about business process reengineering (BPR). The term was coined by Hammer (1990) to mean the radical redesign of business processes to achieve dramatic performance improvements. The redesign typically uses modern information technology and changes the focus of decision making so that it crosses functional and departmental lines. BPR requires (a) the organization of activities around outcomes (not tasks), (b) decision making at the point of work performance, and (c) the development of adequate control processes. Finally, it requires that information
Computer people often are guilty of talking only in acronyms. This can be intimidating to the user, who may not understand the acronyms and hence cannot fully understand the problems or opportunities that are being presented. However, it can also be confusing when the end user has similar acronyms and does not understand how they are being used differently.
One of the best examples of this was observed when an external consulting team developed a DSS for a large, progressive hospital. Part of the development team met with a committee of the nurses and nursing supervisors to design one component of the system. During this discussion, designers kept referring to the I/O and how it would change. The nurses were obviously becoming more and more confused until one of them asked, “What do the patient's liquid inputs and liquid outputs have to do with how we can make better nursing decisions?” In other words, they were baffled because “I/O” had a meaning to them but did not have the same usage as that of the consulting team.
be captured only once, at its source. Much has been written about the reengineering process and how it is conducted, but it will not be repeated here since that is not the purpose of this text. However, since BPR has an impact on decision making and the use of technology, it is reasonable to question the relationship between the design of DSS and BPR. In particular, we will address three questions.
• Is DSS design BPR? • Does DSS design require BPR? • Can DSS design facilitate BPR?
Clearly, DSS design is not the same as BPR. Although technology and its rapid development are the enablers to achieving the goal in both cases, the goals of the analysis and the expected outcomes are quite different. Business process reengineering, by its very nature, focuses on the fundamental activities of a department or organization, the processes necessary for their completion and improvement, and the activities that would improve the flow of work in the organization. This might include an analysis of what information and what models are available to whom, but the more likely focus would be on who makes the decision, how decentralized the decision making becomes, and what controls are established to ensure that it happens well.
Instead DSS design focuses on the process by which decisions are made. It does not question whether or not the individual decision makers should be making the decision, does not focus on most of the employees of an organization, and does not necessarily result in a physical product or service being improved. Like BPR, DSS design does not have cost cutting as its goal. Rather, its goal is a better thought-out choice process that often has as a natural result a reduction in costs and losses. In addition, good DSS design, like good BPR, can have a side benefit of improvements in corporate performance, because decision making is improved. There clearly are parallels, but they are two substantially different activities.
Second, does DSS design require BPR? Not always. Sometimes, designers and decision makers intend for the DSS only to improve access to data and models, not to make a fundamental change in how operations are conducted. In these cases, reengineering is not an important component of the DSS design. However, at other times, the decision to move toward a DSS is part of a corporate strategic decision. In such cases, the existence of a DSS alone is unlikely to cause a substantial change in the way business is conducted. Just throwing the power of a computer at a problem will not produce the expected productivity gains. As Hammer has said (1990, p. 104), “turning the cowpaths of most business processes into superhighways using the plethora of computer hardware just doesn't work.” That is, if all the DSS does is automate the current decision processes, and the decision makers have only the same data, the same models, and the same charts as they have always had, it will not improve decision making.
Instead, the DSS design needs to be coupled with a reengineering of the decision process. The design process allows designers and decision makers alike to rethink the choice process by considering explicitly what decision makers need to know and how they need information presented. It allows an opportunity to take a holistic view of the process, the natural way of considering choices, the neglected opportunities for insight and possible integration strategies. The technological solution is not as significant as the way the technology is used to implement an organization's strategic vision.
Third, can a well-designed DSS facilitate the BPR? The answer is Yes! The DSS can be a resource that simplifies the reengineering effort. One of the major difficulties in reengineering is the absence of necessary data. It is impossible to plan for change or predict the impact of change without appropriate information regarding current operations and current environmental data. Unfortunately, such data are not readily available in most organizations. However, using a DSS can provide managers access to the data and means for understanding them. The DSS can help managers to challenge old procedures and create new ones through better alternative generation, more informed decision making, and better use of models. In addition, decision makers can view a given problem from a variety of perspectives and be better able to understand the problem, the assumptions, and the implications of the solutions. With group DSS technology, decision makers across functional areas can collaborate by sharing information, analyses, and models. The use of DSS technologies can actually help the reengineering effort be more effective and productive. (This topic will be discussed in more detail in Chapter 11.)
Although BPR and DSS design are two separate activities, they have similar aspects, and therefore there are some lessons from BPR that have parallels in DSS design. First and foremost, communication during the process is crucial. Carr and Johansson (1995) indicate that communication is crucial at the beginning of BPR to assess the cultural climate and the barriers to change, and in the later stages for obtaining acceptance of the changes so that the improved processes will not be sabotaged.1 Furthermore, communication can help us improve the overall design by drawing on the experience of many individuals through their comments and suggestions. This clearly is true of DSS design as well. Without active communication, the designer implicitly adopts assumptions about the design process such as:
• There is one best way to make decisions. • I can understand how your decisions are made easily and quickly. • Little about how you make decisions is worth saving. • You will make decisions in the manner that the designer specifies.2
While managers and other employees might not be as concerned about job loss as they would be during BPR, there are concerns about making the task “too hard” or the perception that managers were just not doing a good enough job. Forcing people into a new decision style may not be productive. Table 7.10 shows some tenets of “good communication” during BPR adapted for DSS design.
This leads to the second similarity between DSS design and BPR: There is likely to be resistance to change. Concerns about uncertainty and additional workload affect both DSS design and BPR. However, perhaps a bigger problem in DSS design (as compared to BPR) is the fear of criticism. Most decision makers consider a specific set of issues when they make choices. Some of those factors may use sophisticated models or grand
1 For example, Carr and Johansson (1995, p. 51) suggest that Motorola's success with total quality management (TQM) and BPR is due, in large measure, to its strong communication plan. The company holds “town hall meetings” to review concerns, changes, and the overall state of the business with its employees, and managers hold informal communication sessions with their employees. 2 This list is adapted from one developed for reengineering as described in T. Davenport, “Don't Forget the Workers,” Information Week, August 8, 1994, p. 70.
Table 7.10. Tenets of Effective Communication
It is impossible to use too much communication.
Simplify your message, no matter how complex the issue.
Anticipate the issues and communicate your position early.
Don't underestimate the technical requirements of a communications project.
Involve all levels of management where appropriate.
Honesty is the best policy. Tell the truth.
Identify and know your audiences.
Source: Adapted from D. K. Carr and H. J. Johansson, Best Practices in Reengineering, New York: McGraw-Hill, 1995, p. 55. This material is reproduced with the permission of the McGraw-Hill Companies.
database mergers. For others, decision rules might be quite simple, coming quite close to “gut feelings” or generalizations from past experiences. Decision makers may be concerned about sharing these procedures, regardless of their reliability, for fear of looking silly or less capable, or for fear they will need to learn new and harder methods of making decisions. They may be unwilling to share accurate information about choice processes or about information and modeling needs. Of course, effective communication is one approach to addressing this resistance to change. Another is the implementation of a planned environment for change.
Finally, the third similarity is that good DSS design, like good BPR, takes place incrementally over a period of time. BPR is best when it is limited to a process or a group of processes at the outset. DSS design works best when a particular focus or type of analysis is prototyped and built, then improved and expanded over time. Both require a multilayered process that must be repeated over time. Further, managers need to become accustomed to them before moving on to change another component of their organization.
DISCUSSION
When DSS have been designed well, they represent tools that add value to the process of making selections among alternatives. Improvements in hardware and design tools free designers to focus upon meeting the needs of the decision maker. Regrettably, there is no process whose use will assure that the resulting system is a value-added product. However, the use of prototypes to discuss specifications, an evolutionary development strategy, and good communication skills increase the chance of producing a useful and used system.
SUGGESTED READINGS
Arinze, B., “A Contingency Model of DSS Development Methodology,” Journal of Management Information Systems, Vol. 8, No. 1, June 1991, pp. 149-166.
Arnott, D., “Decision Support Systems Evolution: Framework, Case Study and Research Agenda,” European Journal of Information Systems, Vol. 13, No. 4, December 2004, pp. 247-259.
Arnott, D., and G. Dodson, “Decision Support Systems Failure,” in F. Burstein and C. W. Holsapple (Eds.), Handbook on Decision Support Systems, Vol. 1, Berlin: Springer-Verlag, 2008, pp. 763-790.
Bento, A., “Tools for End-User Systems Development: A Case Example,” Interface, Fall 1991, pp. 24-31.
Boehm, B., and L. G. Huang, “Value-Based Software Engineering: A Case Study,” Computer, Vol. 36, No. 3, 2003, pp. 33-41.
Bradshaw, G. F., P. W. Langley, and H. A. Simon, “Studying Scientific Discovery by Computer Simulation,” Science, Vol. 222, No. 4627, December 1983, pp. 971-975.
Bücher, T., M. Klesse, S. Kurpjuweit, and R. Winter, “Situational Method Engineering: On the Differentiation of ‘Context’ and ‘Project Type’,” in Situational Method Engineering: Fundamentals and Experiences, Proceedings of the IFIP WG 8.1 Working Conference, September 12-14, 2007, Geneva, Switzerland.
Burstein, F., and S. A. Carlsson, “Decision Support through Knowledge Management,” in F. Burstein and C. W. Holsapple (Eds.), Handbook on Decision Support Systems, Vol. 1, Berlin: Springer-Verlag, 2008, pp. 103-120.
Carr, D. K., and H. J. Johansson, Best Practices in Reengineering, New York: McGraw-Hill, 1995.
Clemen, R. T., Making Hard Decisions: An Introduction to Decision Analysis, 2nd ed., Belmont, CA: Duxbury, 1996.
Courtney, J. F., “Decision Making and Knowledge Management in Inquiring Organizations: Toward a New Decision-Making Paradigm for DSS,” Decision Support Systems, Vol. 31, No. 1, May 2001, pp. 17-38.
Curtis, B., H. Krasner, and N. Iscoe, “A Field Study of the Software Design Process for Large Systems,” Communications of the ACM, Vol. 31, No. 11, November 1988, pp. 1268-1287.
Eom, S. B., The Development of Decision Support Systems Research: A Bibliometrical Approach, Lewiston, NY: Edwin Mellen, 2007.
Eom, S. B., and Y. B. Kim, “A Survey of Decision Support System Applications (1995-2001),” Journal of the Operational Research Society, Vol. 57, No. 11, 2006, pp. 1264-1278.
Few, S., Show Me the Numbers: Designing Tables and Graphs to Enlighten, Oakland, CA: Analytics Press, 2004.
Few, S., Information Dashboard Design: The Effective Visual Communication of Data, Sebastopol, CA: O'Reilly, 2006.
Gachet, A., and P. Haettenschwiler, “Development Processes of Intelligent Decision-Making Support Systems: Review and Perspective,” in J. N. D. Gupta, G. A. Forgionne, and M. Mora (Eds.), Intelligent Decision-Making Support Systems, London: Springer, 2006.
Gachet, A., and R. Sprague, “A Context-Based Approach to the Development of Decision Support Systems,” in T. Bui and A. Gachet (Eds.), Proceedings of the Workshop on Context Modeling and Decision Support, Paris, France, July 5, 2005.
Hall, D., “Decision Makers and their Needs for Support,” in F. Burstein and C. W. Holsapple (Eds.), Handbook on Decision Support Systems, Vol. 1, Berlin: Springer-Verlag, 2008, pp. 83-102.
Hammer, M., “Reengineering Work: Don't Automate, Obliterate,” Harvard Business Review, July-August 1990, pp. 104-112.
Hammer, M., The Reengineering Revolution, New York: Harper Collins Publishers, 1995.
Hammer, M., Beyond Reengineering: How the Process-Centered Organization is Changing Our Work and Our Lives, New York: Harper Collins Publishers, 1997.
Hammer, M., and J. Champy, Reengineering the Corporation: A Manifesto for Business Revolution, New York: Harper Collins Publishers, 2003.
Holsapple, C., “DSS Architectures and Types,” in F. Burstein and C. W. Holsapple (Eds.), Handbook on Decision Support Systems, Vol. 1, Berlin: Springer-Verlag, 2008, pp. 163-190.
Howard, R. A., and J. E. Matheson, “Influence Diagrams,” Decision Analysis, Vol. 2, No. 3, 2005, pp. 127-143.
Howson, C., Successful Business Intelligence: Secrets to Making BI a Killer Application, New York: McGraw-Hill, 2008.
Kendall, K. E., and J. E. Kendall, “DSS Systems Analysis and Design: The Role of the Analyst as Change Agent,” in F. Burstein and C. W. Holsapple (Eds.), Handbook on Decision Support Systems, Vol. 2: Variations, Berlin: Springer-Verlag, 2008, pp. 293-312.
Klashner, R., and S. Sabet, “A DSS Design Model for Complex Problems: Lessons from Mission Critical Infrastructure,” Decision Support Systems, Vol. 43, No. 3, April 2007, pp. 990-1013.
Morgan, M. G., and M. Henrion, Uncertainty: A Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis, New York: Cambridge University Press, 1998.
Norman, D., The Design of Future Things, New York: Basic Books, 2007.
O'Leary, D. E., “Decision Support Evolution: Predicting, Facilitating, and Managing Knowledge Evolution,” in F. Burstein and C. W. Holsapple (Eds.), Handbook on Decision Support Systems, Vol. 2: Variations, Berlin: Springer-Verlag, 2008, pp. 345-370.
Pick, R. A., “Benefits of Decision Support Systems,” in F. Burstein and C. W. Holsapple (Eds.), Handbook on Decision Support Systems, Vol. 1, Berlin: Springer-Verlag, 2008, pp. 719-730.
Silver, M. S., “On the Design Features of Decision Support Systems: The Role of System Restrictiveness and Decisional Guidance,” in F. Burstein and C. W. Holsapple (Eds.), Handbook on Decision Support Systems, Vol. 2: Variations, Berlin: Springer-Verlag, 2008, pp. 261-292.
Sprague, R., and E. Carlson, Building Effective Decision Support Systems, Englewood Cliffs, NJ: Prentice Hall, 1982.
Thorp, J., The Information Paradox: Realizing the Business Benefits of Information Technology, Toronto, ON: McGraw-Hill Ryerson, 2003.
Tillquist, J., and W. Rodgers, “Using Asset Specificity and Asset Scope to Measure the Value of IT,” Communications of the ACM, Vol. 48, No. 1, 2005, pp. 75-80.
QUESTIONS
1. Suppose you were designing a DSS to help students make better career decisions. Identify three questions you might use during interviews to determine their decision support needs. How would you alter those questions if the person being interviewed were too talkative? If they were uncooperative?
2. Defend the use of the evolutionary development of DSS in a manner that you might for a boss or client of a consulting firm.
3. What kinds of documents would you request to begin the process of understanding users' needs for the development of a DSS for production planning?
4. Should users design their own DSS? Why or why not?
5. Discuss the advantages and disadvantages of using a DSS appliance and available tools in the design process.
6. Consider a DSS design project (perhaps a class project). How would this DSS develop if the evolutionary development process were used?
7. Discuss the potential design trade-offs involved in designing a specific decision support or expert system directly from tools, as compared with using a DSS appliance or expert system shell.
8. One of the steps in generally recognized methodologies is the testing of the system to ensure reliability and validity of a system. How would you test a DSS for reliability and validity? What kinds of tests would you run? What kinds of data would you need?
9. Critique the concept of using a standardized methodology to design DSS.
10. Suppose you were attempting to justify the development of a DSS for a corporation. Discuss how you would justify the expenditures.
11. Discuss the critical success factors associated with DSS design. How would designers evaluate these factors prior to beginning a project?
12. Consider a system that you use. Does it display Norman's design rules?
13. Why is it important to design error and warning messages carefully? What impact might it have on DSS use if they are not designed carefully?
14. Draw an influence diagram that conveys how decisions are made regarding what classes are offered each semester on your campus.
15. What characteristics of an organization does a DSS designer need to understand before beginning a project?
16. Find information about one or more DSS appliances. How might it make design of a DSS easier? What problems might it pose?
ON THE WEB
On the Web for this chapter provides additional information about how DSS enhance design concepts. The links provide access to case studies and success stories about the design process. In addition, links can provide access to information about methodologies for design, design standards, and reengineering hints. Additional discussion questions and new applications will also be added as they become available.
• Links give access to information about DSS appliances. The page provides links to corporations and marketing information about generators as well as reviews of products.
• Links give access to actual decision support systems. The pages will link you not only to the DSS but also to a “behind-the-scenes” look at the development process.
You can access material for this chapter from the general Web page for the book or directly at http://www.umsl.edu/~sauterv/DSS4BI/design.html.