Hsu, J. S.-C., Huang, H.-H., & Linden, L. P. (2011). Computer-mediated Counter-Arguments and Individual Learning.
Educational Technology & Society, 14 (4), 111–123.


                Computer-mediated Counter-Arguments and Individual Learning
Jack Shih-Chieh Hsu 1, Hsieh-Hong Huang 2*, and Lars P. Linden 3
1 Department of Information Management, National Sun Yat-sen University, Taiwan // 2 Department of Information Science and Management Systems, National Taitung University, Taiwan // 3 Department of Information Technology, Georgia Southern University, USA
jackshsu@mis.nsysu.edu.tw // kory@nttu.edu.tw // llinden@georgiasouthern.edu
*Corresponding author

ABSTRACT
This study explores a de-bias function for decision support systems (DSS) that is designed to help a user avoid confirmation bias by increasing the user's learning opportunities. Grounded upon the theory of mental models, the use of DSS is viewed as involving a learning process, whereby a user is directed to build mental models so as to reduce the tendency to consider only data that supports any preexisting mental models. The results of an experiment suggest that a de-bias function, called computer-mediated counter-argument (CMCA), facilitates individual learning and improves decision satisfaction.

Keywords
           Mental model, Individual learning, Computer-mediated counter-argument, De-bias, Decision support systems

Introduction
A decision support system (DSS) is a platform for learning in at least two ways. First, a DSS aids the user in
extending his or her knowledge about the subject matter of the decision domain. For example, a person using a stock-
selecting DSS may explore the relationships among interest rates and stock prices and, while exploring, learn
knowledge about the markets. Second, a DSS may present the user with an opportunity for adjusting his or her
knowledge about the decision-making process itself. In this case, the system aids a person in altering the decision-
making process in an attempt to improve the outcome. An example of this second way of learning is shown by a user
of a stock-selecting DSS who, having been warned of an error-producing cognitive bias known as gambler’s fallacy,
reacts to this warning by performing additional steps in the decision-making process, which in this case might
include the reflection upon possible market outcomes, if future prices are not dependent upon historical prices. This
second way of learning, an adjustment of the decision-making process, serves the decision maker by reconciling the
many interlocking and incompatible beliefs about the problem domain, some of which may be based upon cognitive
biases (Tversky & Kahneman, 1974). Increasingly, attention is being focused upon cognitive theories of learning
and how these theories can be understood and applied in the context of DSS, a trajectory of research that is distinct
from much of DSS research, which largely focuses upon the end result of decision making: decision quality
(Santana, 1995). Along these lines, our focus is upon the second type of learning and how a DSS might help to adjust
a user’s decision-making process for the purposes of overcoming a cognitive bias known as confirmation bias.

During DSS use, cognitive biases present a perilous downside. Users, enabled by DSS, may make poor decisions and
enact seemingly limited decision-making processes, often while confident that all is well. Several examples illustrate
cognitive biases arising during the use of DSS. A cognitive bias called the illusion of control occurs when a user of a
DSS performs a what-if analysis and displays increased confidence, yet achieves no significant performance gain
(Davis & Kottemann, 1994). The cognitive bias called confirmation bias occurs when a person who is gathering
information restricts attention to only data that supports a favored hypothesis. The cognitive bias known as the
illusion of knowledge occurs when a user is overconfident about having access to a greater amount of information, as
is easily the case with a DSS, but makes poor decisions, despite the additional information. This last cognitive bias
was detected when investors who had switched from a phone-based investing method to an online trading platform
became more confident, and yet recorded poor performance despite having access to more information (Barber &
Odean, 2002). Attempting to counter these cognitive biases is called de-biasing (Fischhoff, 1982). In general, de-
biasing techniques, such as educating users about biases, are believed to beneficially impact decision quality (Arnott,
2006) and de-biasing techniques applied to DSS have been successfully demonstrated (Bhandari, Deaves, &
Hassanein, 2008). To investigate the embedding of a de-biasing technique into the design of a DSS, we adopt mental-model theory.



ISSN 1436-4522 (online) and 1176-3647 (print). © International Forum of Educational Technology & Society (IFETS). The authors and the forum jointly retain the copyright of the articles. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear the full citation on the first page. Copyrights for components of this work owned by others than IFETS must be honoured. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from the editors at kinshuk@ieee.org.
The theory of mental models explains how people perform certain activities, for example, information processing.
When a decision maker confronts a large amount of information, a mental model aids in the filtering of that
information. The mental model fulfills a gatekeeper role for the mind by preventing unrelated and unimportant
information from being consciously considered. However, along with the beneficial effects that are attributed to
mental models, there are also error-producing effects. Inadequate mental models may block important information.
The ability of a decision maker to change mental models is viewable as a learning process, one that may be enabled
by DSS. That a person can change existing mental models is important, given that mental models may have a
negative impact during decision making (Vandenbosch & Higgins, 1994). Through the mental-model theory, we
seek to explain how an embedded de-bias function can induce a change in a person’s mental model, conceptualized
as a type of learning, and, as a result, overcome confirmation bias. Mental models provide a theoretical explanation
upon which to ground a DSS function for reducing confirmation bias.

The consideration of mental models is not new to DSS research and can be recognized as relating to a type of cognitive skill. The use of DSS contributes to the development of cognitive skills. Intelligent tutoring tools, for example, provide
the user with an opportunity to obtain cognitive skills, such as the ability to learn via self-explanation (Patel,
Kinshuk, & Russell, 2000). Technology-enriched learning includes meta-learning activities, such as monitoring and
sequencing one’s own learning (Sinitsa, 2000). The cognitive skill being considered in this research is the learning
effect of mental-model reforming. We seek to understand if the design of a de-biasing function embedded in a DSS
increases the cognitive skill of the type of learning associated with mental-model reforming.

The de-bias function being explored aims to eliminate cognitive bias through the introduction of counter-arguments.
The de-bias function that is embedded in a DSS is designed to present counter-arguments to the user so as to
eliminate bias through a learning process. We seek to understand whether users who have read counter-arguments report having built mental models and express a lower confidence level, as compared with users who have not.
Being aware of the need to lessen the impact of confounding variables from the environment, we conducted an
experiment to investigate this embedded de-bias function.

In the following sections, we discuss the concepts of mental-model theory, confirmation bias, and learning. We
propose a research model grounded upon mental-model theory and state our hypotheses. We discuss the research
design and the procedures of the experiment. We analyze the data and discuss the results in light of their implication
to DSS design and individual learning.


Theoretical background and hypotheses
Mental models and confirmation bias

While explaining the nature of thought, Craik (1943) describes the modeling capabilities of the human mind,
whereby “small-scale models” of external entities are manipulated by the mind to consider past events and to
anticipate future events. These mental models are psychological representations of the environment and consist of
cognitive constructs and their relationships (Holyoak, 1984). As such, mental models serve a range of purposes:
when designing, mental models help a person to describe the composition of entities; when observing a system,
mental models help a person to explain the current states of unseen variables; and when considering the future,
mental models help a person to predict events and suitable responses to these events (Rouse & Morris, 1986). Mental
models, as structured patterns, aid in classifying information and aid in simulating a set of outcomes based upon
varying conditions (Cannon-Bowers, Salas, & Converse, 1993). An activity such as information processing suggests
that mental models are relevant to DSS design.

One example illustrates the association of mental models with cognitive biases. In the stock market, a decision maker
needs to predict a future price in order to make the purchasing decision. The decision maker possesses a mental
model of the impacts of various factors upon price. This mental model helps to guide the search for information required for predicting the price and to set aside information that is not important to the decision. If abundant amounts of
information are available, the mental model serves as a filter that helps the decision maker forego consideration of
inconsequential information. A mental-model-based information search can reduce processing time and minimize
information overload. These benefits, however, which coexist with a mental model, may lead the decision maker to


neglect, or perceive as unimportant, information critical to the analysis. Mental models contribute to a cognitive bias
called confirmation bias.

Wason (1960) defined the phenomenon of confirmation bias as people’s tendency to seek evidence in support of
their assumptions instead of searching for evidence that challenges their assumptions. When a person contemplates
concepts or verifies assumptions, a variety of biased tendencies have been described. A person may tend to treat
evidence supporting existing beliefs more favorably than is objectively appropriate. A person confronted with a body
of information may “see” exactly the effect that the person set out to find (Russo, Medvec, & Meloy, 1996). A
person may tend to pass along information that is congruent to their beliefs (Heath, 1996). In short, the problem is
that “decision-makers seek confirmatory evidence and do not search for disconfirming information” (Arnott, 2006, p.
60). The consequence of overweighting some evidence and underweighting other evidence is reaching a wrong
conclusion. Confirmation bias is observed to be an exceptionally problematic aspect of human reasoning (Nickerson,
1998). How then might the design of a DSS compensate for this cognitive bias?


Mental models and learning

Based on mental models and learning theory (e.g., Gagne, 1977; Norman 1982; Piaget 1954), Vandenbosch and
Higgins (1994) argue that mental models are closely connected to learning. Mental models direct the information-
gathering process and limit it. Reflexively, the information gathered has the potential to confirm, enhance, reinforce, or change mental models. Vandenbosch and Higgins (1994) state that
the gathered information impacts mental models through two different processes: mental-model maintenance and
mental-model building. Mental-model maintenance occurs when new information fits easily into existing mental
models. Mental-model building occurs when new information from newly perceived situations and environments
provokes a person to change existing mental models.

The connection between mental models and learning is amplified by Vandenbosch and Higgins (1994) with their
comparison of mental models to theories found in the management discipline. First among these theories is Argyris
and Schön’s (1978) organizational learning theory of single-loop and double-loop learning. Single-loop learning
represents the learning that takes place when existing routines guide the problem-solving process. Double-loop
learning represents the challenging of existing routines and the need to restructure existing norms and assumptions.
These two ways of learning are similar to the learning that is explained by the use of mental models. In terms of
mental models, single-loop learning maps to mental-model maintenance, both of which pertain to the confirmation
and disconfirmation of new information when compared to an existing, stable theory. Double-loop learning maps to
mental-model building, both of which represent a potential change in foundations of how learning is achieved.

Vandenbosch and Higgins (1994) also draw parallels between mental models and another theory, March’s (1991)
theory of exploitation and exploration learning. Exploitation is the act of enhancing performance by improving
current practices. Exploration is the act of enhancing performance by introducing new practices. Exploitation
represents mental-model maintenance as one attempts to reinforce current mental models by adding new constructs
or building new links between existing constructs. Exploration represents the mental-model building process as one
attempts to reconstruct the model completely (March, 1991). These management theories, because they encompass both the learning of individuals and that of collectives (i.e., the organization as a whole), may not be fully applicable to the individual learning considered in this research. However, inasmuch as they are found to model
individual learning, these theories are useful because they are congruent with our understanding of the maintenance
and building of mental models.

Of the two approaches described, mental-model maintenance and mental-model building, the one more likely to
happen is mental-model maintenance (e.g., Quinn, 1980; Kiesler & Sproull, 1982; Grønhaug & Falkenberg, 1989).
Acceptance of mental-model maintenance as a form of learning is high because it can be routinely confirmed through
a person’s past experience. Vandenbosch and Higgins (1994) write how, in contrast, mental-model building implies
that decision makers face uncertainty. The theory of mental-model building has been tested empirically. People tend to make decisions and take action, in part or in whole, based on their mental models. The mental-model-building form of learning requires stimuli such as feedback from actions and information that contrasts with
current thinking. DSS could be designed to induce mental-model building by injecting a particular type of
information. Therefore, we attempt to identify the presence of mental-model building by introducing the stimulus of

computer-mediated counter-arguments (CMCA). A counter-argument is information contrasting with what the decision maker previously hypothesized, and a computer-mediated counter-argument is defined as opposing information provided by a DSS during the decision-making process. In addition, our study attempts to determine whether individuals using a system with counter-arguments report a higher level of satisfaction. The
research model (Figure 1) shows the relationship between these factors.
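
To make this mechanism concrete, the following sketch (our illustration, not the authors' implementation; all names are hypothetical) shows one way a DSS could represent news items and select an opposite-valence item as a counter-argument:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class NewsItem:
    headline: str   # condensed headline (10-20 words in this study)
    body: str       # condensed story body (150-200 words in this study)
    valence: str    # "positive" or "negative" toward the company
    company: str

def pick_counter_argument(just_read: NewsItem, pool: List[NewsItem]) -> Optional[NewsItem]:
    """Return an item about the same company with the opposite valence.

    This mirrors the paper's definition of a CMCA: information contrasting
    with what the decision maker has just read.
    """
    opposite = "negative" if just_read.valence == "positive" else "positive"
    candidates = [n for n in pool
                  if n.company == just_read.company and n.valence == opposite]
    return candidates[0] if candidates else None
```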



Figure 1. Research model: computer-mediated counter-argument influences learning (H1: mental-model building and mental-model maintenance) and satisfaction (H2: toward process and toward outcome).

From a cognitive perspective, existing mental models shape how people construct ideas and frame concepts. Due to cognitive biases, people tend to read information that fits their current mental model. The information-selection process allows them to gather evidence that supports their existing mindset. Although received information can either strengthen or weaken the current mental model (Vandenbosch & Higgins, 1996), theory indicates that once a belief is formed, it is not easy to change without additional stimulus (Hoch & Deighton, 1989). Since subjects in the group without computer-mediated counter-arguments tend to maintain their current mental model rather than build a new one, we hypothesize the following:

            H1a:    The score of mental-model maintenance for subjects who use DSS without
                    computer-mediated counter-argument is higher than for those who use DSS with
                    computer-mediated counter-argument.

On the other hand, people question their existing mindset when they receive contradictory evidence (Vandenbosch &
Higgins, 1996). Counter-arguments provided by the system give contradictory evidence that challenges decision
makers’ perspectives and provides a chance for them to reevaluate their current mental model. Under this setting,
people are more likely to abandon their current mental model or take appropriate steps to improve their mental
model. Therefore, subjects in the group with computer-mediated counter-argument tend to reform their mental
model, rather than maintain their mental model. We hypothesize the following:

            H1b:    The score of mental-model building for subjects who use DSS with computer-
                    mediated counter-argument is higher than for those who use DSS without computer-
                    mediated counter-argument.

Decision satisfaction can be separated into process satisfaction and outcome satisfaction (Green & Taber, 1980).
Process satisfaction refers to users’ perceived efficiency and fluency of using a decision support system to facilitate
decision making. On the other hand, outcome satisfaction refers to users’ perceived effectiveness and expected
performance toward the decision that has been made (Sanders & Courtney, 1985). For subjects in the group with
CMCA, the existence of counter-argument challenges their prior assumptions, and this process may lead to negative
emotional outcomes. Decision makers have to spend more time making a decision, and the fluency of the decision-making process is reduced. Negative moods and increased fatigue are caused by interruptions (Schonpflug & Battmann, 1988; Zohar, 1999). Therefore, decision makers tend to rate satisfaction with the decision-making process as low, even if the interruption is helpful (Speier, Vessey, & Valacich, 2003). However, a different result is expected for outcome satisfaction. Although the counter-arguments interrupt the decision-making process, they allow decision makers to consider both positive and negative opinions suggested by the system, so decision makers are predicted to report higher satisfaction toward their decision. Therefore, we hypothesize the following:

           H2a:     Subjects who use DSS with computer-mediated counter-argument will feel less
                    satisfied about the decision-making process.

           H2b:     Subjects who use DSS with computer-mediated counter-argument will feel more
                    satisfied about the decision-making outcome.


Research methods
An experiment was conducted to test the hypotheses. Participants were randomly assigned to one of two groups: with or without computer-mediated counter-arguments. They were asked to accomplish a stock-investment task using a DSS provided by the authors. A stock-investment task was selected because DSS are widely adopted to support stock-investment decision making, which involves high uncertainty and large quantities of data.

In a general stock-investment decision-making task, subjects are allowed to invest certain amounts of money in one or more investment targets. Real decision making is more complex, and the final portfolio may contain several combinations of investments. Since this study focuses on learning effects only, a simplified investment environment was constructed: participants were asked to allocate their money between two companies belonging to different industries rather than among all available companies in the open market.

The stock-investment task used in this paper was modified from Melone, McGuire, Hinson, and Yee (1993).
Subjects were asked to put a given amount of money into two different stocks. To maximize their return, subjects allocated that money between two options (a company in the financial-holding industry and one in the high-tech industry) in whatever split they deemed appropriate. A DSS with what-if analysis was provided as a decision aid. The system provided basic market information and current market status (in news format). Basic market information included the name of the company, its capital, business scope, expected ROI over the next five years, and stock price for the past five years. Current market status included positive and negative information about the two companies and the industries to which they belong. Information was summarized from articles published in
investing-related journals and magazines.

Although the investment information presented in this experiment was collected from real financial news, events,
and reports, the content was reviewed and modified by two business-school PhD candidates. We removed the names
and brand names that appeared in the original news report to avoid any inappropriate connection between the
experiment and real-world events. Further, we refined the wording of news titles and news content. The presented news headlines and story bodies were condensed to 10–20 words and 150–200 words, respectively.
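
As a small illustration, a hypothetical helper for enforcing these length constraints during content preparation might look like the following (the thresholds mirror the text; the function is ours, not part of the authors' materials):

```python
def within_limits(headline: str, body: str) -> bool:
    """Check a news item against the study's length constraints:
    headlines of 10-20 words and story bodies of 150-200 words."""
    return (10 <= len(headline.split()) <= 20
            and 150 <= len(body.split()) <= 200)
```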

The content validity was then assured by eight financial professionals, including professors, experienced investors,
senior managers in the high-tech industry, and accountants. News with ambiguous titles or content was dropped or
modified. The direction (positive or negative) and importance of each piece of news and its accompanying headline were determined based on the opinions of these eight financial professionals.


Measurement

User satisfaction includes two major dimensions: satisfaction toward the system and toward the decision making
(Sanders & Courtney, 1985). Since this study focused on the new function, namely the counter-argument provided by the DSS, several questions were used to understand users' attitudes toward the DSS. A total of three questions
adopted from Sanders and Courtney (1985) were used to measure subjects’ satisfaction with the DSS. On the other
hand, three questions adopted from Green and Taber (1980) were used to measure satisfaction toward the decision-
making process.



Learning in this study refers to the extent to which, during DSS use, participants used the information to challenge their assumptions or to maintain their initial beliefs. A total of six questions obtained from Vandenbosch and Higgins (1996) were used to measure learning effects: three for mental-model maintenance and three for mental-model building. The measurement uses a seven-point Likert scale ranging from 1 (strongly disagree) to 7 (strongly agree).


Manipulation check

Since we wanted to manipulate the presence or absence of counter-arguments, we asked subjects whether or not
they were aware of the pop-up window showing counter-argument information during the experiment. We found that
90 subjects in the group with CMCA were aware and only two subjects were not; and that 86 subjects in the group
without CMCA were not aware (or could not recall) and only nine subjects were. The result of subjects’ awareness is
shown in Table 1.

Table 1. Results of subjects' awareness

                                          Awareness
                              Yes    No    Could not recall    Total number
CMCA: without                   9    71          15                 95
CMCA: with                     90     1           1                 92
Total number                   99    72          16                187

According to reading-behavior studies using eye-tracking equipment, in a two-column layout, people tend to
focus on content that appears on the left-hand side first. Since the information for these two companies appeared in a
two-column format, we randomized the location of the messages to avoid the possible confounding effect. As shown
in Table 2, the location of the news and whether or not there was CMCA are independent.

Table 2. Results of subjects by information-display location

                                              Computer-mediated counter-argument
Location of displayed information                 w/o        w        Total number of subjects
IT-related news on the left-hand side              51        45                 96
IT-related news on the right-hand side             44        47                 91
Total number of subjects                           95        92                187
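
The paper does not report a formal test of this independence; a common way to check it, shown here as a sketch using the cell counts from Table 2, is a chi-square test of independence:

```python
from scipy.stats import chi2_contingency

# Counts from Table 2: rows = display location of IT-related news,
# columns = without vs. with CMCA.
table2 = [[51, 45],
          [44, 47]]

chi2, p, dof, expected = chi2_contingency(table2)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.3f}")
# A large p-value is consistent with the paper's claim that display
# location and the CMCA condition are independent.
```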

Table 3. DSS functions (Hung et al., 2007)

Menu                    Function
Market summary          Detailed information on the stock market, including stock quotes, change, day's range, volume, etc.
Industry quotes         Detailed information on the specific industries.
Company information     Provided information on six companies used for trading in the experiment, including basic information, historical prices, news, and streaming charts.
Technical analysis      Analyzed technical aspects, such as the moving average, relative strength index, and stochastic line indicators.
Recent news             Provided three categories of news: political, international, and financial.
Calculator              Calculated investment returns and more.


System

A GUI-based DSS was developed for the experiment using a fourth-generation (4GL) programming language and a relational database management system. The system supported Simon's decision-making process, allowing people to perform intelligence, design, and choice activities. The design followed Hung, Ku, Liang, and Lee's (2007) recommendations, and the system's functions are shown in Table 3. Furthermore, because the purpose of this study was to understand how users' information-reading behaviors affected their final decision, a total of 5 to 15 positive and 5 to 15 negative message titles about each company were displayed on the main page for the non-information-loading group. After users clicked on a message title, the system popped up an "always on top" window containing the detailed message content. To record the exact reading time for each message, this pop-up window could only be closed by clicking its "close" button. Finally, for the experimental group, after subjects clicked the "close" button, another window containing a message arguing in the opposite direction popped up.
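
The following sketch condenses this interaction flow: an "always on top" window, exact reading-time logging on close, and the extra opposite-direction pop-up for the experimental group. It is a hypothetical reconstruction, not the authors' 4GL code, and it reuses the pick_counter_argument helper from the earlier sketch.

```python
import time

class MessageViewer:
    """Sketch of the reading-time logging and CMCA pop-up flow.

    The GUI windows of the actual system are reduced here to method
    calls so that the control flow is visible.
    """

    def __init__(self, with_cmca: bool, pool):
        self.with_cmca = with_cmca
        self.pool = pool                 # list of NewsItem objects
        self.log = []                    # (headline, seconds_read) records
        self._current = None
        self._opened_at = None
        self._showing_counter = False

    def open_message(self, item, is_counter=False):
        # The "always on top" window showing the full story body.
        self._current, self._opened_at = item, time.monotonic()
        self._showing_counter = is_counter

    def close_message(self):
        # The window closes only through this handler, so reading time is exact.
        self.log.append((self._current.headline,
                         time.monotonic() - self._opened_at))
        if self.with_cmca and not self._showing_counter:
            counter = pick_counter_argument(self._current, self.pool)
            if counter is not None:
                # Experimental group: a second window pops up with the
                # opposite-valence message before control returns to the list.
                self.open_message(counter, is_counter=True)
```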




Figure 2. Screenshots of the experimental system: basic information, technical analysis, information listing, detailed information, computer-mediated counter-argument, and decision making.


Procedure

Before the experiment started, each participant read and signed a consent form describing the purpose of the
research. Subjects were told that any decision they made was independent of their academic performance and that the experiment process was anonymous. All sessions were conducted by the same facilitator to reduce the possibility of bias.

In the first stage, the experimenter gave a five-minute introduction of two investment options (a financial holding
company and a high-tech company). After the brief introduction, subjects were asked to make a quick initial decision. We treat the decision at this stage as their prior belief. Next, what-if analysis tools and the current status of each
market were shown on the screen. Positive and negative information was represented by a brief title only.
Participants had to click on the title to obtain detailed information. They could use what-if analysis tools to calculate
possible returns according to their understanding of the market and the provided information. In this stage, the
subjects’ information acquisition behaviors, including reading sequence and time, were recorded by the system
automatically. When participants felt comfortable making their final decision, they pressed a button on the screen,
and the system led them to the final decision screen. After participants made their final decision, they were asked to
provide their level of confidence toward their final decision and answer predefined questions. Finally, they were
thanked for their participation.


Pilot tests

The experiment was pilot-tested twice with master's students and senior undergraduates from the MIS department. After the first pilot
test, with eight volunteer students, we refined the process flow and divided the company information page into two
pages: one for company information and the other for the preference test. In addition, we invited three students
studying human-computer interfaces to review our experiment system and provide us with some functional and
interface-design comments. Based on these comments, we improved our system and made its operation smoother and
more user-friendly. The second pilot test was conducted with fifteen students. We found no revisions were needed
after the second pilot test, so this version was considered final.


Demographics

A total of 187 subjects were recruited from part-time MBA programs of seven different universities and one
academic institution. Since our predefined task was stock investment, we focused on subjects with stock or other
investing experience. Five out of every six subjects had experience with stock investment, while the remaining one in six had no stock-investment experience but had other investment experience. The average age was 33.1 years with a
standard deviation of 5.6. The eldest subject was 52 years old, and the youngest was 22 years old.

For those subjects in the with-CMCA group, one out of five had never invested in the stock market, and the average
age was 33.19 years old with a standard deviation of 4.843. The eldest subject was 48 years old, and the youngest
was 22 years old. For those subjects in the without-CMCA group, one out of five had never invested in stock, and the
average age was 33.01 years old with a standard deviation of 6.325. The eldest subject was 52 years old, and the
youngest was 22 years old.

Table 4. Validity and reliability

Variable (Cronbach's alpha)           Item                          Factor 1   Factor 2   Factor 3   Factor 4
Mental-model maintenance (0.74)       Support my viewpoint            0.86
                                      Support action taking           0.77
                                      Reinforce belief                0.70
Mental-model building (0.70)          Challenge my viewpoint                     0.82
                                      Criticize my cognition                     0.81
                                      Reinvestigate my assumption                0.73
Satisfaction with process (0.87)      Process efficiency                                    0.79
                                      Process satisfaction                                  0.69
                                      Process clarity                                       0.68
Satisfaction with decision (0.95)     Happy with the decision                                          0.83
                                      Pleased with the decision                                        0.82
                                      Satisfied with the decision                                      0.82

Data analysis
Item reliability, convergent validity, and discriminant validity tests are often used to test the robustness of a measurement. Individual item reliability can be examined through the factor loading of each item. Convergent validity should be assured when multiple indicators are used to measure one construct; it can be examined through bivariate correlation analyses and the reliability of the questions. Discriminant validity focuses on testing whether the measures of different constructs are distinct from one another; it can be assessed by testing whether the correlation between each pair of constructs is below the threshold value of 0.90. As shown in Table 4, all criteria were met, so the validity and reliability of our measurement are assured.
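
As an illustration of these checks, a minimal sketch (ours; variable names hypothetical) computes Cronbach's alpha from a subjects-by-items matrix and applies the 0.90 discriminant-validity threshold to construct-score correlations:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_subjects x n_items) matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def discriminant_ok(construct_scores: np.ndarray, threshold: float = 0.90) -> bool:
    """Discriminant-validity check described above: every pairwise correlation
    between construct scores must stay below the threshold."""
    corr = np.corrcoef(construct_scores, rowvar=False)
    off_diag = corr[~np.eye(corr.shape[0], dtype=bool)]
    return bool((np.abs(off_diag) < threshold).all())
```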


Hypotheses testing

Four hypotheses were tested, corresponding to four outcomes: mental-model maintenance, mental-model building, decision-process satisfaction, and outcome satisfaction. MANOVA was selected because the correlation matrix showed moderate to high correlations among the dependent variables and because it controls the experiment-wide error rate that would result from conducting multiple ANOVAs (Hair, Black, Babin, Anderson, & Tatham, 2006). Table 5 provides descriptive statistics and the correlations among variables for validating assumptions. "Mean" represents the average response toward each construct; "skewness" and "kurtosis" represent the extent to which our sample fits the normality assumption. The MANOVA results in Table 6 show that CMCA has an effect on each dependent variable except process satisfaction.
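
For readers who want to reproduce this kind of analysis, a minimal sketch using statsmodels is shown below; the data file and column names are hypothetical, not from the study:

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# One row per subject: the four construct scores plus a 0/1 indicator
# for the CMCA condition (file and column names are hypothetical).
df = pd.read_csv("experiment_scores.csv")

manova = MANOVA.from_formula(
    "mm_building + mm_maintenance + process_sat + outcome_sat ~ cmca",
    data=df)
print(manova.mv_test())  # Wilks' lambda etc. for the overall CMCA effect
```

Univariate follow-up tests, like the per-variable F statistics in Table 6, could then be run as separate ANOVAs on each dependent variable.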

Table 5. Descriptive analysis and correlation matrix

                               Mean   Std dev.   Skewness   Kurtosis    MMM     MMB    SATP    SATD
Mental-model maintenance       4.87     0.78      −0.03       1.00      1.00
Mental-model building          5.16     0.87      −0.09      −0.16     −0.09    1.00
Satisfaction toward process    4.97     0.93      −0.71       1.51      0.31    0.45    1.00
Satisfaction toward outcome    5.02     0.97      −0.33       0.51      0.32    0.46    0.78    1.00

Table 6. Effect of CMCA on learning and satisfaction

Source      Dependent variable           F         Sig.
Intercept   MM building              6109.007      .000
            MM maintenance           7370.924      .000
            Process satisfaction     5246.856      .000
            Outcome satisfaction     4992.449      .000
CMCA        MM building                 4.711      .031
            MM maintenance              7.912      .005
            Process satisfaction        2.388      .124
            Outcome satisfaction        5.868      .016

The hypothesis (H1a) that the score of mental-model maintenance for subjects who use DSS without CMCA will be higher
than for those who use DSS with CMCA is supported. As Table 7 indicates, for subjects in the group without
CMCA, the average-rated mental-model maintenance was 5.03 (out of 7). For subjects in the group with CMCA, the
average-rated mental-model maintenance score was 4.71 (out of 7). The significant result indicates that CMCA can
effectively reduce mental-model maintenance.

The hypothesis (H1b) that the score of mental-model building for subjects who use DSS with CMCA will be higher than
for those who use DSS without CMCA is supported. In addition, as Table 7 shows, for subjects in the group without
CMCA, the average-rated mental-model building was 4.96 (out of 7); for subjects in the group with CMCA, the
average-rated mental-model building was 5.36 (out of 7).
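
As a sanity check on such group differences, a univariate two-sample t test could be run per construct; the sketch below (ours, on synthetic data generated to match the means and standard deviations reported in Table 7, not the study's raw data) illustrates the comparison for mental-model building:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
# Synthetic stand-ins shaped like Table 7's statistics for mental-model
# building: w/o CMCA 4.96 (0.81), n=95; with CMCA 5.36 (0.87), n=92.
without_cmca = rng.normal(4.96, 0.81, size=95)
with_cmca = rng.normal(5.36, 0.87, size=92)

t, p = ttest_ind(with_cmca, without_cmca)
print(f"t = {t:.2f}, p = {p:.4f}")  # univariate follow-up to the MANOVA
```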

The MANOVA result shows that counter-argument has an effect on outcome satisfaction but not on process satisfaction. The hypothesis (H2a) that subjects who use DSS with CMCA will feel less satisfied with the decision-making process is not supported. However, a significant difference between the two groups was found for outcome satisfaction, and thus H2b is supported. The average scores and standard deviations of process and outcome satisfaction for each group are provided in Table 7.

Table 7. Means (standard deviations) by group for each dependent variable

                              w/o CMCA       with CMCA
Mental-model maintenance      5.03 (0.88)    4.71 (0.64)
Mental-model building         4.96 (0.81)    5.36 (0.87)
Process satisfaction          4.88 (0.90)    5.07 (0.96)
Outcome satisfaction          4.86 (0.89)    5.19 (1.02)


Discussion

One of our hypotheses (H2a) was not supported. Contrary to our expectation that subjects provided with CMCA would be interrupted by the pop-up counter-arguments during the decision-making process and would tend to feel less satisfied, subjects in the group with counter-arguments reported somewhat higher satisfaction than the group without, although the difference was not significant. A couple of reasons may explain this. First, decision-process satisfaction is measured by a subject's perceived efficiency, satisfaction, and clarity regarding the decision-making process. In general, decision makers do not read all the information provided by the system, and, aided by the experimenter's introduction, the decision-making process was easy to understand and follow. The measurement may therefore not capture the actual disruption caused by counter-arguments, which would explain the lack of a significant difference between the two groups. Another instrument may be needed to understand subjects' satisfaction with the decision-making process.

Second, we measured the subjects’ perceived process satisfaction and perceived outcome satisfaction together, after
the decision had been made. Although the questionnaire focused on the decision-making process, the decision
makers’ perceived satisfaction toward the process may have been affected by their perceived satisfaction toward the
decision outcome. Therefore, future research should reconsider the most appropriate time for collecting process
satisfaction.


Conclusion
Because human beings have bounded rationality, our decision-making process is largely affected by cognitive biases
(Tversky & Kahneman, 1974). This study focused on the introduction of a de-bias function, which may reduce the
negative consequences of confirmation bias in the computer-supported decision-making context. The relationships
between counter-arguments and decision outcomes were hypothesized. After collecting data from 187 experienced financial investors, three of the four proposed hypotheses were supported (the hypothesized relationship between CMCA and process satisfaction was not). We showed how the de-bias function can trigger individual learning by leading decision makers to challenge their current mental models. These results contribute to research and practice in several ways.


Implications for research

First, our results indicate that decision support systems with specific functions can stimulate learning by reforming decision makers' mental models. How users learn through the use of information systems has long been neglected by information systems research (Alavi & Leidner, 2001). In this study, we focused on two different approaches to learning: mental-model maintenance and mental-model building. Biases emerge when decision makers attempt to maintain their current mental models; this research shows that, with pre-designed functions, users can be led to challenge their initial beliefs and reform their mental models.

Second, there was no difference in process satisfaction with or without counter-arguments. This indicates that although a new function may be useful for minimizing overconfidence and guiding the decision maker toward a balanced decision, people may resist the de-bias function during the decision-making process. The existence of counter-arguments may interrupt decision makers' information reading and reduce their processing fluency. This implies
that, although counter-argument allows decision makers to challenge their assumptions, an unwanted emotional side
effect may also appear. Future research may explore other emotional consequences of using de-biasing functions
embedded in computer-supported decision-making tools.


Implications for practice

Learning takes place when learners enhance their current mental models or build new ones. It is vital to build an environment that helps individuals avoid cognitive bias and supports the building and reforming of mental models. Computer-supported learning is popular and broadly adopted by the contemporary education system. In addition to providing various types of knowledge more efficiently or effectively through the support of information technology, approaches such as counter-thinking, which provide stimuli for learners to challenge their existing mental models, should be adopted by educators or embedded into learning systems.

For a decision-support-system designer, de-bias functions should be included in the system design. Our study shows that systems should provide functions that prevent bias or support learning. However, designers should also note that de-bias functions may cause unwanted effects. For example, in our study, we provided one counter-argument each time subjects read a piece of news. The pop-up counter-argument may lead to negative emotions, such as annoyance or confusion, and too many counter-arguments or too much information may increase information overload. Therefore, the frequency and timing of counter-arguments are important, and system designers should also consider the type of task and other factors.


Limitations and future research

This study is not without limitations. First, a between-group comparison was conducted, and a subjective evaluation of perceived mental-model building was used to evaluate the learning effect. Some may argue that although this approach captures part of the learning effect, a pre- and post-experiment comparison should be made to truly reflect how counter-arguments change mental models. Further research should therefore extend this study by employing a within-subjects experiment and examining the change in one's mental model objectively. Second, this was a cross-sectional study, and the appearance of counter-arguments was unexpected by the subjects. People may get used to the appearance of counter-arguments and learn to ignore them (e.g., Handzic & Tolhurst, 2002). Future research may extend this study with a longitudinal design to examine this possibility. Third, only a financial decision-making task was used. Since confirmation bias applies to various decision-making tasks and counter-arguments may serve as a useful tool for reducing it, future research should extend the proposed concept to other decision-making areas to test the effect of counter-arguments on decision making. Fourth, only one de-biasing function was included in this study. A given bias may be reduced through different approaches under different settings; future research should include other de-biasing functions and compare their effects under different settings. Lastly, although we have illustrated the effect of confirmation bias and a method to counter it, we did so with a simple experimental design. Future studies may examine this issue in more complicated circumstances, such as group or social decision-making processes.


References
Alavi, M., & Leidner, D. E. (2001). Research commentary: Technology-mediated learning: A call for greater depth and breadth of
research. Information Systems Research, 12(1), 1–10.
Argyris, C., & Schön, D. A. (1978). Organizational learning: A theory of action perspective. Reading, MA: Addison-Wesley.
Arnott, D. (2006). Cognitive biases and decision support systems development: A design science approach. Information Systems
Journal, 16(1), 55–78.
Barber, B. M., & Odean, T. (2002). Online investors: Do the slow die first? The Review of Financial Studies, 15(S2), 455–487.
Bhandari, G., Deaves, R., & Hassanein, K. (2008). Debiasing investors with decision support systems: An experimental
investigation. Decision Support Systems, 46(1), 399–410.
Cannon-Bowers, J. A., Salas, E., & Converse, S. (1993). Shared mental models in expert team decision making. In N. J. Castellan, Jr. (Ed.), Individual and group decision making. Hillsdale, NJ: Lawrence Erlbaum Associates, 221–246.
Craik, K. J. W. (1943). The Nature of Explanation. Cambridge, UK: Cambridge University Press.
Davis, F. D., & Kottemann, J. E. (1994). User perceptions of decision support effectiveness: Two production planning
experiments. Decision Sciences, 25(1), 57–76.
Festinger, L. (1957). A Theory of Cognitive Dissonance. Stanford, CA: Stanford University Press.
Fischhoff, B. (1982). Debiasing. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases. Cambridge, UK: Cambridge University Press, 422–444.
Gagne, R. M. (1977). The conditions of learning. New York: Holt, Rinehart and Winston.
Green, S. G., & Taber, T. D. (1980). The effects of three social decision schemes on decision group process. Organizational
Behavior and Human Performance, 25(1), 97–106.
Grønhaug, K., & Falkenberg, J. S. (1989). Exploring strategy perceptions in changing environments. Journal of Management
Studies, 26(4), 349–359.
Hair, J. F., Black, W. C., Babin, B. J., Anderson, R. E., & Tatham, R. L. (2006). Multivariate data analysis, 6th ed. Englewood Cliffs, NJ: Prentice Hall.
Handzic, M., & Tolhurst, D. (2002). Evaluating an interactive learning environment in management education. Educational
Technology & Society, 5(3), 113–122.
Heath, C. (1996). Do People Prefer to Pass Along Good or Bad News? Valence and Relevance of News as Predictors of
Transmission Propensity. Organizational Behavior and Human Decision Processes, 68(2), 79–94.
Hoch, S. J., & Deighton, J. (1989). Managing What Consumers Learn from Experience. Journal of Marketing, 53(2), 1–20.
Holyoak, K. J. (1984). Analogical thinking and human intelligence. In R. J. Sternberg (Ed.), Advances in the psychology of human intelligence. Hillsdale, NJ: Erlbaum, 199–223.
Hung, S. Y., Ku, Y. C., Liang, T. P., & Lee, C. J. (2007). Regret avoidance as a measure of DSS success: An exploratory study.
Decision Support Systems, 42(4), 2093–2106.
Kiesler, S. & Sproull, L. (1982). Managerial response to changing environments: Perspectives on problem sensing from social
cognition. Administrative Science Quarterly, 27(4), 548–570.
March, J. G. (1991). Exploration and exploitation in organizational learning. Organization Science, 2(1), 71–87.
Melone, N. P., McGuire, T. W., Hinson, G. B., & Yee, K. Y. (1993). The effect of decision support systems on managerial
performance and decision confidence. Proceedings of the 26th Hawaii International Conference on System Sciences, 482–489.
Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175–
220.
Norman, D. A. (1982). Learning and memory. San Francisco: W. H. Freeman and Company.
Patel, A., Kinshuk, & Russell, D. (2000). Intelligent tutoring tools for cognitive skill acquisition in life long learning. Educational
Technology & Society, 3(1), 32–40.
Piaget, J. (1954). The construction of reality in the child, New York: Basic Books.
Quinn, J. B. (1980). Strategies for change: Logical incrementalism, Homewood, IL: Richard D Irwin.
Rouse, W. B., & Morris, N. M. (1986). On looking into the black box: Prospects and limits in the search for mental models. Psychological Bulletin, 100(3), 349–363.
Russo, J. E., Medvec, V. H., & Meloy, M. G. (1996). The distortion of information during decisions. Organizational Behavior
and Human Decision Processes, 66(1), 102–110.
Sanders, G. L., & Courtney, J. F. (1985). A field study of organizational factors influencing DSS success. MIS Quarterly, 9(1),
77–93.
Santana, M. (1995). Managerial learning: a neglected dimension in decision support systems. Proceedings of the 28th Annual
Hawaii International Conference on System Sciences, 82–91.
Schonpflug, W., & Battmann, W. (1988). The costs and benefits of coping. In S. Fisher & J. Reason (Eds.), Handbook of life
stress, cognition, and health, New York: Wiley, 699–713.

Sinitsa, K. (2000). Learning individually: A life-long perspective. Educational Technology & Society, 3(1), 17–23.
Speier, C., Vessey, I., & Valacich, J. S. (2003). The effects of interruptions, task complexity and information presentation on
computer-supported decision-making performance. Decision Sciences, 34(4), 771–797.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131.
Vandenbosch, B., & Higgins, C. (1994). The measurement of learning from executive information systems. Proceedings of the Twenty-Seventh Hawaii International Conference on System Sciences, Hawaii, 572–585.
Vandenbosch, B., & Higgins, C. A. (1996). Information acquisition and mental models: An investigation into the relationship between behaviour and learning. Information Systems Research, 7(2), 198–214.
Wason, P. C. (1960). On the failure to eliminate hypotheses in a conceptual task. The Quarterly Journal of Experimental
Psychology, 12(3), 129–140.
Zohar, D. (1999). When things go wrong: The effect of daily work hassles on effort, exertion, and negative mood. Journal of
Occupational and Organizational Psychology, 72(3), 265–283.





 
JISC RSC London Workshop - Learner analytics
JISC RSC London Workshop - Learner analyticsJISC RSC London Workshop - Learner analytics
JISC RSC London Workshop - Learner analyticsJames Ballard
 
MODULARISM: THE MODES OF THE PROVISIONING OF SPECIFICITY-SURVEYS IN A COMPAR...
MODULARISM:  THE MODES OF THE PROVISIONING OF SPECIFICITY-SURVEYS IN A COMPAR...MODULARISM:  THE MODES OF THE PROVISIONING OF SPECIFICITY-SURVEYS IN A COMPAR...
MODULARISM: THE MODES OF THE PROVISIONING OF SPECIFICITY-SURVEYS IN A COMPAR...Terence Morris
 
Running Head Data Mining in The Cloud .docx
Running Head Data Mining in The Cloud                            .docxRunning Head Data Mining in The Cloud                            .docx
Running Head Data Mining in The Cloud .docxhealdkathaleen
 
TECHNOLOGY ACCEPTANCE MODELS & FRAMEWORKS
TECHNOLOGY ACCEPTANCE MODELS & FRAMEWORKSTECHNOLOGY ACCEPTANCE MODELS & FRAMEWORKS
TECHNOLOGY ACCEPTANCE MODELS & FRAMEWORKSHamed Taherdoost
 
INVESTIGATE THE ROLE OF IMPULSIVITY IN DECISIONS MAKING DURING GAMBLING TASK:...
INVESTIGATE THE ROLE OF IMPULSIVITY IN DECISIONS MAKING DURING GAMBLING TASK:...INVESTIGATE THE ROLE OF IMPULSIVITY IN DECISIONS MAKING DURING GAMBLING TASK:...
INVESTIGATE THE ROLE OF IMPULSIVITY IN DECISIONS MAKING DURING GAMBLING TASK:...Gyan Prakash
 

Similaire à 11 (19)

System Dynamics Modeling for IntellectualDisability Services.docx
System Dynamics Modeling for IntellectualDisability Services.docxSystem Dynamics Modeling for IntellectualDisability Services.docx
System Dynamics Modeling for IntellectualDisability Services.docx
 
The Development of the Heuristics and Biases Scale.pdf
The Development of the Heuristics and Biases Scale.pdfThe Development of the Heuristics and Biases Scale.pdf
The Development of the Heuristics and Biases Scale.pdf
 
Research Methodology For A Researcher
Research Methodology For A ResearcherResearch Methodology For A Researcher
Research Methodology For A Researcher
 
Intelligent decision support systems a framework
Intelligent decision support systems  a frameworkIntelligent decision support systems  a framework
Intelligent decision support systems a framework
 
The Role of Families and the Community Proposal Template (N.docx
The Role of Families and the Community Proposal Template  (N.docxThe Role of Families and the Community Proposal Template  (N.docx
The Role of Families and the Community Proposal Template (N.docx
 
Intelligent tutoring systems
Intelligent  tutoring  systemsIntelligent  tutoring  systems
Intelligent tutoring systems
 
The World Testifies Of Data And Our Understanding Of It Essay
The World Testifies Of Data And Our Understanding Of It EssayThe World Testifies Of Data And Our Understanding Of It Essay
The World Testifies Of Data And Our Understanding Of It Essay
 
Overview of decision making process in psychology and its types
Overview of decision making process in psychology and its typesOverview of decision making process in psychology and its types
Overview of decision making process in psychology and its types
 
Ms 07 information systems for managers
Ms   07 information systems for managersMs   07 information systems for managers
Ms 07 information systems for managers
 
Importance Of Road Mapping For Students
Importance Of Road Mapping For StudentsImportance Of Road Mapping For Students
Importance Of Road Mapping For Students
 
Km knowledge application.11
Km  knowledge application.11Km  knowledge application.11
Km knowledge application.11
 
Implementation of different tutoring system to enhance student learning
Implementation of different tutoring system to enhance student learningImplementation of different tutoring system to enhance student learning
Implementation of different tutoring system to enhance student learning
 
Decision support systems and its impact on organization empowerment field st...
Decision support systems and its impact on organization empowerment  field st...Decision support systems and its impact on organization empowerment  field st...
Decision support systems and its impact on organization empowerment field st...
 
JISC RSC London Workshop - Learner analytics
JISC RSC London Workshop - Learner analyticsJISC RSC London Workshop - Learner analytics
JISC RSC London Workshop - Learner analytics
 
MODULARISM: THE MODES OF THE PROVISIONING OF SPECIFICITY-SURVEYS IN A COMPAR...
MODULARISM:  THE MODES OF THE PROVISIONING OF SPECIFICITY-SURVEYS IN A COMPAR...MODULARISM:  THE MODES OF THE PROVISIONING OF SPECIFICITY-SURVEYS IN A COMPAR...
MODULARISM: THE MODES OF THE PROVISIONING OF SPECIFICITY-SURVEYS IN A COMPAR...
 
Hots
HotsHots
Hots
 
Running Head Data Mining in The Cloud .docx
Running Head Data Mining in The Cloud                            .docxRunning Head Data Mining in The Cloud                            .docx
Running Head Data Mining in The Cloud .docx
 
TECHNOLOGY ACCEPTANCE MODELS & FRAMEWORKS
TECHNOLOGY ACCEPTANCE MODELS & FRAMEWORKSTECHNOLOGY ACCEPTANCE MODELS & FRAMEWORKS
TECHNOLOGY ACCEPTANCE MODELS & FRAMEWORKS
 
INVESTIGATE THE ROLE OF IMPULSIVITY IN DECISIONS MAKING DURING GAMBLING TASK:...
INVESTIGATE THE ROLE OF IMPULSIVITY IN DECISIONS MAKING DURING GAMBLING TASK:...INVESTIGATE THE ROLE OF IMPULSIVITY IN DECISIONS MAKING DURING GAMBLING TASK:...
INVESTIGATE THE ROLE OF IMPULSIVITY IN DECISIONS MAKING DURING GAMBLING TASK:...
 

Plus de Valentino Selayan

Plus de Valentino Selayan (20)

pert3_4-fuzzy.pptx
pert3_4-fuzzy.pptxpert3_4-fuzzy.pptx
pert3_4-fuzzy.pptx
 
pert7_8-fis-e28093-metode-sugeno.pptx
pert7_8-fis-e28093-metode-sugeno.pptxpert7_8-fis-e28093-metode-sugeno.pptx
pert7_8-fis-e28093-metode-sugeno.pptx
 
tutorial-fuzzy-inferensi-system-dengan-matlab.docx
tutorial-fuzzy-inferensi-system-dengan-matlab.docxtutorial-fuzzy-inferensi-system-dengan-matlab.docx
tutorial-fuzzy-inferensi-system-dengan-matlab.docx
 
Materi 8 Introduction to Fuzzy Logic.pdf
Materi 8 Introduction to Fuzzy Logic.pdfMateri 8 Introduction to Fuzzy Logic.pdf
Materi 8 Introduction to Fuzzy Logic.pdf
 
LOGIKA FUZZY MATERI KULIAH-1.pdf
LOGIKA FUZZY MATERI KULIAH-1.pdfLOGIKA FUZZY MATERI KULIAH-1.pdf
LOGIKA FUZZY MATERI KULIAH-1.pdf
 
LOGIKA FUZZY MATERI KULIAH.pdf
LOGIKA FUZZY MATERI KULIAH.pdfLOGIKA FUZZY MATERI KULIAH.pdf
LOGIKA FUZZY MATERI KULIAH.pdf
 
pert_fuzzy-database.pptx
pert_fuzzy-database.pptxpert_fuzzy-database.pptx
pert_fuzzy-database.pptx
 
pert5_fis.pptx
pert5_fis.pptxpert5_fis.pptx
pert5_fis.pptx
 
pert1_2fuzzy.pptx
pert1_2fuzzy.pptxpert1_2fuzzy.pptx
pert1_2fuzzy.pptx
 
pert6_-fis_mamdani.pptx
pert6_-fis_mamdani.pptxpert6_-fis_mamdani.pptx
pert6_-fis_mamdani.pptx
 
Modul2-Tipe-data-identifier-dan-operator-02.pdf
Modul2-Tipe-data-identifier-dan-operator-02.pdfModul2-Tipe-data-identifier-dan-operator-02.pdf
Modul2-Tipe-data-identifier-dan-operator-02.pdf
 
Modul_Praktikum_Algoritma_and_Pemrograma.pdf
Modul_Praktikum_Algoritma_and_Pemrograma.pdfModul_Praktikum_Algoritma_and_Pemrograma.pdf
Modul_Praktikum_Algoritma_and_Pemrograma.pdf
 
1 3
1 31 3
1 3
 
1 1
1 11 1
1 1
 
1 sist. komputer_00
1 sist. komputer_001 sist. komputer_00
1 sist. komputer_00
 
1 sist. komputer_00
1 sist. komputer_001 sist. komputer_00
1 sist. komputer_00
 
1 sist. komputer_
1 sist. komputer_1 sist. komputer_
1 sist. komputer_
 
1 installasi-xampp-dan-mysql (1)
1 installasi-xampp-dan-mysql (1)1 installasi-xampp-dan-mysql (1)
1 installasi-xampp-dan-mysql (1)
 
Chapter 7 transformers
Chapter 7 transformersChapter 7 transformers
Chapter 7 transformers
 
Lecture 28 360 chapter 9_ power electronics inverters
Lecture 28 360 chapter 9_  power electronics invertersLecture 28 360 chapter 9_  power electronics inverters
Lecture 28 360 chapter 9_ power electronics inverters
 

11

  • 1. Hsu, J. S.-C., Huang, H.-H., & Linden, L. P. (2011). Computer-mediated Counter-Arguments and Individual Learning. Educational Technology & Society, 14 (4), 111–123. Computer-mediated Counter-Arguments and Individual Learning Jack Shih-Chieh Hsu1, Hsieh-Hong Huang2* and Lars P. Linden3 1 Department of Information Management, National Sun Yat-sen University, Taiwan // 2Department of Information Science and Management Systems, National Taitung University, Taiwan // 3Department of Information Technology, Georgia Southern University, USA // jackshsu@mis.nsysu.edu.tw // kory@nttu.edu.tw // llinden@georgiasouthern.edu *Corresponding author ABSTRACT This study explores a de-bias function for a decision support systems (DSS) that is designed to help a user avoid confirmation bias by increasing the user’s learning opportunities. Grounded upon the theory of mental models, the use of DSS is viewed as involving a learning process, whereby a user is directed to build mental models so as to reduce the tendency to consider only data that supports any preexisting mental models. The results of an experiment suggest that a de-bias function, called computer-mediated counter-argument (CMCA), facilitates individual learning and improves decision satisfaction. Keywords Mental model, Individual learning, Computer-mediated counter-argument, De-bias, Decision support systems Introduction A decision support system (DSS) is a platform for learning in at least two ways. First, a DSS aids the user in extending his or her knowledge about the subject matter of the decision domain. For example, a person using a stock- selecting DSS may explore the relationships among interest rates and stock prices and, while exploring, learn knowledge about the markets. Second, a DSS may present the user with an opportunity for adjusting his or her knowledge about the decision-making process itself. In this case, the system aids a person in altering the decision- making process in an attempt to improve the outcome. An example of this second way of learning is shown by a user of a stock-selecting DSS who, having been warned of an error-producing cognitive bias known as gambler’s fallacy, reacts to this warning by performing additional steps in the decision-making process, which in this case might include the reflection upon possible market outcomes, if future prices are not dependent upon historical prices. This second way of learning, an adjustment of the decision-making process, serves the decision maker by reconciling the many interlocking and incompatible beliefs about the problem domain, some of which may be based upon cognitive biases (Tverskey & Kahneman, 1974). Increasingly, attention is being focused upon cognitive theories of learning and how these theories can be understood and applied in the context of DSS, a trajectory of research that is distinct from much of DSS research, which largely focuses upon the end result of decision making: decision quality (Santana, 1995). Along these lines, our focus is upon the second type of learning and how a DSS might help to adjust a user’s decision-making process for the purposes of overcoming a cognitive bias known as confirmation bias. During DSS use, cognitive biases present a perilous downside. Users, enabled by DSS, may make poor decisions and enact seemingly limited decision-making processes, often while confident that all is well. Several examples illustrate cognitive biases arising during the use of DSS. 
A cognitive bias called the illusion of control occurs when a user of a DSS performs a what-if analysis and displays increased confidence, yet achieves no significant performance gain (Davis & Kottemann, 1994). The cognitive bias called confirmation bias occurs when a person who is gathering information restricts attention to only data that supports a favored hypothesis. The cognitive bias known as the illusion of knowledge occurs when a user is overconfident about having access to a greater amount of information, as is easily the case with a DSS, but makes poor decisions, despite the additional information. This last cognitive bias was detected when investors who had switched from a phone-based investing method to an online trading platform became more confident, and yet recorded poor performance despite having access to more information (Barber & Odean, 2002). Attempting to counter these cognitive biases is called de-biasing (Fischhoff, 1982). In general, de- biasing techniques, such as educating users about biases, are believed to beneficially impact decision quality (Arnott, 2006) and de-biasing techniques applied to DSS have been successfully demonstrated (Bhandari, Deaves, & Hassanein, 2008). To investigate the embedding of a de-biasing technique into the design of a DSS we adopt mental- model theory. ISSN 1436-4522 (online) and 1176-3647 (print). © International Forum of Educational Technology & Society (IFETS). The authors and the forum jointly retain the 111 copyright of the articles. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear the full citation on the first page. Copyrights for components of this work owned by others than IFETS must be honoured. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from the editors at kinshuk@ieee.org.
  • 2. The theory of mental models explains how people perform certain activities, for example, information processing. When a decision maker confronts a large amount of information, a mental model aids in the filtering of that information. The mental model fulfills a gatekeeper role for the mind by preventing unrelated and unimportant information from being consciously considered. However, along with the beneficial effects that are attributed to mental models, there are also error-producing effects. Inadequate mental models may block important information. The ability of a decision maker to change mental models is viewable as a learning process, one that may be enabled by DSS. That a person can change existing mental models is important, given that mental models may have a negative impact during decision making (Vandenbosch & Higgins, 1994). Through the mental-model theory, we seek to explain how an embedded de-bias function can induce a change in a person’s mental model, conceptualized as a type of learning, and, as a result, overcome confirmation bias. Mental models provide a theoretical explanation upon which to ground a DSS function for reducing confirmation bias. The consideration of mental models is not new to DSS research and can be recognized as a type of cognitive skill. The use of DSS is attributed to the development of cognitive skills. Intelligent tutoring tools, for example, provide the user with an opportunity to obtain cognitive skills, such as the ability to learn via self-explanation (Patel, Kinshuk, & Russell, 2000). Technology-enriched learning includes meta-learning activities, such as monitoring and sequencing one’s own learning (Sinitsa, 2000). The cognitive skill being considered in this research is the learning effect of mental-model reforming. We seek to understand if the design of a de-biasing function embedded in a DSS increases the cognitive skill of the type of learning associated with mental-model reforming. The de-bias function being explored aims to eliminate cognitive bias through the introduction of counter-arguments. The de-bias function that is embedded in a DSS is designed to present counter-arguments to the user so as to eliminate bias through a learning process. We seek to understand if users, having read counter-arguments, report to have built mental models and express a lower confidence level, as compared with users without counter-arguments. Being aware of the need to lessen the impact of confounding variables from the environment, we conducted an experiment to investigate this embedded de-bias function. In the following sections, we discuss the concepts of mental-model theory, confirmation bias, and learning. We propose a research model grounded upon mental-model theory and state our hypotheses. We discuss the research design and the procedures of the experiment. We analyze the data and discuss the results in light of their implication to DSS design and individual learning. Theoretical background and hypotheses Mental models and confirmation bias While explaining the nature of thought, Craik (1943) describes the modeling capabilities of the human mind, whereby “small-scale models” of external entities are manipulated by the mind to consider past events and to anticipate future events. These mental models are psychological representations of the environment and consist of cognitive constructs and their relationships (Holyoak, 1984). 
As such, mental models serve a range of purposes: when designing, mental models help a person to describe the composition of entities; when observing a system, mental models help a person to explain the current states of unseen variables; and when considering the future, mental models help a person to predict events and suitable responses to these events (Rouse & Morris, 1986). Mental models, as structured patterns, aid in classifying information and aid in simulating a set of outcomes based upon varying conditions (Cannon-Bowers, Salas, & Converse, 1993). An activity such as information processing suggests that mental models are relevant to DSS design. One example illustrates the association of mental models with cognitive biases. In the stock market, a decision maker needs to predict a future price in order to make the purchasing decision. The decision maker possesses a mental model of the impacts of various factors upon price. This mental model helps to guide the search for information required for predicting price and abandon information that is not important to the decision. If abundant amounts of information are available, the mental model serves as a filter that helps the decision maker forego consideration of inconsequential information. A mental-model-based information search can reduce processing time and minimize information overload. These benefits, however, which coexist with a mental model, may lead the decision maker to 112
  • 3. neglect, or perceive as unimportant, information critical to the analysis. Mental models contribute to a cognitive bias called confirmation bias. Wason (1960) defined the phenomenon of confirmation bias as people’s tendency to seek evidence in support of their assumptions instead of searching for evidence that challenges their assumptions. When a person contemplates concepts or verifies assumptions, a variety of biased tendencies have been described. A person may tend to treat evidence supporting existing beliefs more favorably than is objectively appropriate. A person confronted with a body of information may “see” exactly the effect that the person set out to find (Russo, Medvec, & Meloy, 1996). A person may tend to pass along information that is congruent to their beliefs (Heath, 1996). In short, the problem is that “decision-makers seek confirmatory evidence and do not search for disconfirming information” (Arnott, 2006, p. 60). The consequence of overweighting some evidence and underweighting other evidence is reaching a wrong conclusion. Confirmation bias is observed to be an exceptionally problematic aspect of human reasoning (Nickerson, 1998). How then might the design of a DSS compensate for this cognitive bias? Mental models and learning Based on mental models and learning theory (e.g., Gagne, 1977; Norman 1982; Piaget 1954), Vandenbosch and Higgins (1994) argue that mental models are closely connected to learning. Mental models direct the information- gathering process and limit the process. Reflexively, the information gathered has the potential to change mental models, confirm them, enhance them, change them, and reinforce them. Vandenbosch and Higgins (1994) state that the gathered information impacts mental models through two different processes: mental-model maintenance and mental-model building. Mental-model maintenance occurs when new information fits easily into existing mental models. Mental-model building occurs when new information from newly perceived situations and environments provokes a person to change existing mental models. The connection between mental models and learning is amplified by Vandebosch and Higgins (1994) with their comparison of mental models to theories found in the management discipline. First among these theories is Argyris and Schön’s (1978) organizational learning theory of single-loop and double-loop learning. Single-loop learning represents the learning that takes place when existing routines guide the problem-solving process. Double-loop learning represents the challenging of existing routines and the need to restructure existing norms and assumptions. These two ways of learning are similar to the learning that is explained by the use of mental models. In terms of mental models, single-loop learning maps to mental-model maintenance, both of which pertain to the confirmation and disconfirmation of new information when compared to an existing, stable theory. Double-loop learning maps to mental-model building, both of which represent a potential change in foundations of how learning is achieved. Vandenbosch and Higgins (1994) also draw parallels between mental models and another theory, March’s (1991) theory of exploitation and exploration learning. Exploitation is the act of enhancing performance by improving current practices. Exploration is the act of enhancing performance by introducing new practices. 
Exploitation represents mental-model maintenance as one attempts to reinforce current mental models by adding new constructs or building new links between existing constructs. Exploration represents the mental-model building process as one attempts to reconstruct the model completely (March, 1991). These management theories, because they include both the learning of individuals and the collective of individuals (i.e., the organization as a whole) may not be fully applicable to the individual learning considered in this research. However, in as much as they are found to model individual learning, these theories are useful because they are congruent with our understanding of the maintenance and building of mental models. Of the two approaches described, mental-model maintenance and mental-model building, the one more likely to happen is mental-model maintenance (e.g., Quinn, 1980; Kiesler & Sproull, 1982; Grønhaug & Falkenberg, 1989). Acceptance of mental-model maintenance as a form of learning is high because it can be routinely confirmed through a person’s past experience. Vandenbosch and Higgins (1994) write how, in contrast, mental-model building implies that decision makers face uncertainty. The theory of mental-model building has been tested through empirical evidence. People tend to make decisions or take in part or all action based on their mental models. The mental- model-building form of learning requires stimulus such as feedback from actions and information that contrasts with current thinking. DSS could be designed to induce mental-model building by injecting a particular type of information. Therefore, we attempt to identify the presence of mental-model building by introducing the stimulus of 113
  • 4. computer-mediated counter-arguments (CMCA). A counter-argument is information contrasting with what the decision maker previously hypothesized, and computer-mediated counter-argument is defined as opposite information provided by a DSS during the decision-making process. In addition, in our study, we also attempt to determine whether individuals using a system with counter-argument report a higher level of satisfaction. The research model (Figure 1) shows the relationship between these factors. Learning  mental-model building H1  mental-model maintenance Computer-mediated counter-argument H2 Satisfaction  toward process  toward outcome Figure 1. Research model From a cognitive perspective, existing mental models determine the ideal construction and concept framing process. Due to cognitive-based biases, people tend to read information that fits into their current mental model. The information-selection process allows them to gather evidence that can support their existing mindset. Although the received information either strengthens or weakens the current mental model (Vandenbosch & Higgins, 1996), theory indicates that once one’s belief is formed, it is not easy to change if there is no additional stimulus (Hoch & Deighton, 1989). Since subjects in the group without computer-mediated counter-argument tend to maintain their current mental model rather than building a new mental model, therefore, we hypothesize the following: H1a: The score of mental-model maintenance for subjects who use DSS without computer-mediated counter-argument is higher than for those who use DSS with computer-mediated counter-argument. On the other hand, people question their existing mindset when they receive contradictory evidence (Vandenbosch & Higgins, 1996). Counter-arguments provided by the system give contradictory evidence that challenges decision makers’ perspectives and provides a chance for them to reevaluate their current mental model. Under this setting, people are more likely to abandon their current mental model or take appropriate steps to improve their mental model. Therefore, subjects in the group with computer-mediated counter-argument tend to reform their mental model, rather than maintain their mental model. We hypothesize the following: H1b: The score of mental-model building for subjects who use DSS with computer- mediated counter-argument is higher than for those who use DSS without computer- mediated counter-argument. Decision satisfaction can be separated into process satisfaction and outcome satisfaction (Green & Taber, 1980). Process satisfaction refers to users’ perceived efficiency and fluency of using a decision support system to facilitate decision making. On the other hand, outcome satisfaction refers to users’ perceived effectiveness and expected performance toward the decision that has been made (Sanders & Courtney, 1985). For subjects in the group with CMCA, the existence of counter-argument challenges their prior assumptions, and this process may lead to negative emotional outcomes. Decision makers have to spend more time on making a decision and the fluency of the decision- making process is low. Negative moods and increased fatigue are caused by interruptions (Schonpflug & Battman, 1988; Zohar, 1999). Therefore, decision makers tend to rate satisfaction with the decision-making outcome as low, even if the interruption is helpful (Speier, Vessey, & Valacich, 2003). However, different results are expected toward 114
  • 5. the outcome satisfaction. Although the counter-arguments interrupt the decision-making process by allowing decision makers to consider both positive and negative opinions suggested by the system, the decision maker is predicted to report satisfaction toward their decision. Therefore, we hypothesize the following: H2a: Subjects who use DSS with computer-mediated counter-argument will feel less satisfied about the decision-making process. H2b: Subjects who use DSS with computer-mediated counter-argument will feel more satisfied about the decision-making outcome. Research methods An experiment was conducted to test the listed hypotheses. Participants were assigned randomly into two groups, with/without computer-mediated counter-argument. They were asked to accomplish a stock-investment task using the DSS provided by the authors. A stock-investment task was selected because DSS are intensively adopted to support stock investment decision making to cope with high uncertainty and process data in large quantities. In general stock-investment decision-making tasks, subjects are allowed to invest certain amounts of money in one or more selected investing targets. Real decision making is more complex, and the final investment portfolio may contain several combinations of investments. Since this study focuses on understanding learning effects only, a simplified investment environment was constructed. That is, people were requested to invest their money in two companies belonging to different industries instead of among all available companies in the open market. The stock-investment task used in this paper was modified from Melone, McGuire, Hinson, and Yee (1993). Subjects were requested to put the given amount of money into two different stocks. To maximize their return, subjects allocated that money into two different options (a company in the financial holding industry or high-tech industry) based on whatever subset they deemed appropriate. DSS with “what if” analysis was provided as a decision aid. The system provided basic market information and current market status (with news format). Basic market information included the name of company, capital, business scope, expected ROI in the next five years, and stock price for the past five years. Current market status included positive and negative information about these two companies and the industries in which they belong. Information was summarized from articles published in investing-related journals and magazines. Although the investment information presented in this experiment was collected from real financial news, events, and reports, the content was reviewed and modified by two business-school PhD candidates. We removed the names and brand names that appeared in the original news report to avoid any inappropriate connection between the experiment and real-world events. Further, we refined the wording of news titles and news content. The length of the presented news headlines and story bodies were condensed to 10–20 words and 150–200 words, respectively. The content validity was then assured by eight financial professionals, including professors, experienced investors, senior managers in the high-tech industry, and accountants. News with ambiguous titles or content was dropped or modified. The direction (positive or negative) and importance of each piece of news and the accompanying headline was determined based on opinions provided by the above eight financial professionals. 
Measurement User satisfaction includes two major dimensions: satisfaction toward the system and toward the decision making (Sanders and Courtney, 1985). Since this study focused on the new function, namely counter-argument provided by the DSS, several questions were used to understand users’ attitude toward the DSS. A total of three questions adopted from Sanders and Courtney (1985) were used to measure subjects’ satisfaction with the DSS. On the other hand, three questions adopted from Green and Taber (1980) were used to measure satisfaction toward the decision- making process. 115
  • 6. Learning in this study refers to, with regard to DSS use, participants’ using the information to challenge their assumptions or using the information to maintain their initial beliefs. A total of six questions obtained from (Vandenbosch & Higgins, 1996) were used to measure learning effects: three for mental-model maintenance and three for mental-model building. The measurement uses a seven-point Likert scale ranging from 1 (strong disagree) to 7 (strong agree). Manipulation check Since we wanted to manipulate the effect of with and without counter-argument, we asked subjects whether or not they were aware of the pop-up window showing counter-argument information during the experiment. We found that 90 subjects in the group with CMCA were aware and only two subjects were not; and that 86 subjects in the group without CMCA were not aware (or could not recall) and only nine subjects were. The result of subjects’ awareness is shown in Table 1. Table 1. Results of subjects’ awareness Awareness Total number Yes No Could not recall Computer-mediated w/o 9 71 15 95 Counter-argument w 90 1 1 92 Total number 99 72 16 187 According to a reading-behavior study by using eye-tracking machines, in a two-column condition, people tend to focus on content that appears on the left-hand side first. Since the information for these two companies appeared in a two-column format, we randomized the location of the messages to avoid the possible confounding effect. As shown in Table 2, the location of the news and whether or not there was CMCA are independent. Table 2. Results of subjects in different information displayed location Computer-mediated counter-argument Location of displayed information Total number of subjects w/o w IT-related news on the left-hand side 51 45 96 IT-related news on the right-hand side 44 47 91 Total number of subjects 95 92 187 Table 3. DSS functions (Hung et al., 2007) Menu Function Market summary Detailed information on the stock market, including stock quotes, change, day’s range, volume, etc. Industry quotes Detailed information on the specific industries. Company information Provided information on six companies used for trading in the experiment, including basic information, historical prices, news, and streaming charts. Technical analysis Analyzed technical aspects, such as the moving average, relative strength index, and stochastic line indicators. Recent news Provided three categories of news: political, international, and financial. Calculator Calculated investment returns and more. System A GUI-based DSS, with 4GL programming language and a relational database management system, was developed for the experiment. The system was able to support Simon’s decision processes, which allow people to perform intelligence, design, and choice activities. Actual design activities followed Hung, Ku, Liang, and Lee’s recommendations (2007), and the detail system tools are shown in Table 3. Furthermore, the purpose of this study was to understand how users’ information-reading behaviors affected their final decision. To support this, a total of 5 to 15 positive and 5 to 15 negative message titles about each company were displayed on the main page for the non- information-loading group. After users clicked on the message title, the system popped up an “always on top” screen, 116
  • 7. which contained detailed message content. To record the exact reading time for each message, this pop-up “always on top” window could only be closed by clicking the “close” function. Finally, for experiment group, after they clicked the “close” button, another window containing a reversed direction message popped up. Basic information Technical analysis Information listing Detailed information Computer-mediated counter-argument Decision making Figure 2. Screenshots of the experimental system Procedure Before the experiment started, each participant read and signed a consent form describing the purpose of the research. Subjects were told that any decision they made was independent from their academic performance and that 117
  • 8. the experiment process was anonymous. All experiments were held by the same facilitator to reduce possibilities of bias. In the first stage, the experimenter gave a five-minute introduction of two investment options (a financial holding company and a high-tech company). After the brief introduction, subjects were asked to make a quick decision first. We consider the decision at this stage as prior belief. Next, what-if analysis tools and the current status of each market were shown on the screen. Positive and negative information was represented by a brief title only. Participants had to click on the title to obtain detailed information. They could use what-if analysis tools to calculate possible returns according to their understanding of the market and the provided information. In this stage, the subjects’ information acquisition behaviors, including reading sequence and time, were recorded by the system automatically. When participants felt comfortable making their final decision, they pressed a button on the screen, and the system led them to the final decision screen. After participants made their final decision, they were asked to provide their level of confidence toward their final decision and answer predefined questions. Finally, they were thanked for their participation. Pilot tests The experiment was pilot-tested twice with master and senior students from the MIS department. After the first pilot test, with eight volunteer students, we refined the process flow and divided the company information page into two pages: one for company information and the other for the preference test. In addition, we invited three students studying human-computer interfaces to review our experiment system and provide us with some functional and interface-design comments. Based on these comments, we improved our system and made its operation smoother and more user-friendly. The second pilot test was conducted with fifteen students. We found no revisions were needed after the second pilot test, so this version was considered final. Demographics A total of 187 subjects were recruited from part-time MBA programs of seven different universities and one academic institution. Since our predefined task was stock investment, we focused on subjects with stock or other investing experience. Five of the six subjects had experience with stock investment, while one of the six had no experience with stock investment but had other investment experience. The average age was 33.1 years old with a standard deviation of 5.6. The eldest subject was 52 years old, and the youngest was 22 years old. For those subjects in the with-CMCA group, one out of five had never invested in the stock market, and the average age was 33.19 years old with a standard deviation of 4.843. The eldest subject was 48 years old, and the youngest was 22 years old. For those subjects in the without-CMCA group, one out of five had never invested in stock, and the average age was 33.01 years old with a standard deviation of 6.325. The eldest subject was 52 years old, and the youngest was 22 years old. Table 4. 
Validity and reliability Factor loading Variables Items Factor 1 Factor 2 Factor 3 Factor4 Support my viewpoint 0.86 Mental-model maintenance Support action taking 0.77 (Cronbach’s alpha = 0.74) Reinforce belief 0.70 Challenge my viewpoint 0.82 Mental-model building Criticize my cognition 0.81 (Cronbach’s alpha = 0.70) Reinvestigate my assumption 0.73 Process efficiency 0.79 Satisfaction process Process satisfaction 0.69 (Cronbach’s alpha = 0.87) Process clarity 0.68 Happy with the decision 0.83 Satisfaction decision Pleased with the decision 0.82 (Cronbach’s alpha = 0.95) Satisfied with the decision 0.82 118
  • 9. Data analysis Item reliability, convergent validity, and discriminant validity tests are often used to test the robustness of measurement. Individual item reliability can be examined by observing the factor loading of each item. Convergent validity should be assured when multiple indicators are used to measure one construct. Convergent validity can be examined by bivariate correlation analyses and reliability of questions. Discriminant validity focuses on testing whether the measures of constructs are different from one another. It can be assessed by testing whether the correlation between pairs of construct are below the threshold value of 0.90. As shown in Table 4, given that all criteria were met, validity and reliability of our measurement are assured in our study. Hypotheses testing There are four hypotheses in this category, based on different decision outcomes: mental-model maintenance, mental-model building, decision-process satisfaction, and outcome satisfaction. In this study, MANOVA was selected because the correlation matrix showed a moderate- to high-level correlation among variables and to control an experiment-wide error rate resulting from conducting multiple ANOVA (Hair, Black, Babin, Anderson & Tatham, 2006). In Table 5, descriptive analysis and the correlation among variables were also provided for the validation of assumption. “Mean” represents the averaged response toward each construct. “Skewness” and “kurtosis” represent the extent to which our sample fit into the normality assumption. The MANOVA test result in Table 6 shows that, except for process satisfaction, CMCA has effects on each dependent variable when confirmation bias is controlled. Table 5. Descriptive analysis and correlation matrix Descriptive analysis Correlation matrix Mean Std dev. Skewness Kurtosis MMM MMB SATP SATD Mental-model maintenance 4.87 0.78 −0.03 1.00 1.00 Mental-model building 5.16 0.87 −0.09 −0.16 −0.09 1.00 Satisfaction toward process 4.97 0.93 −0.71 1.51 0.31 0.45 1.00 Satisfaction toward outcome 5.02 0.97 −0.33 0.51 0.32 0.46 0.78 1.00 Table 6. Effect of CMCA on learning and satisfaction Source Dependent variable F Sig. MM building 6109.007 .000 MM maintenance 7370.924 .000 Intercept Process satisfaction 5246.856 .000 Outcome satisfaction 4992.449 .000 MM building 4.711 .031 MM maintenance 7.912 .005 CMCA Process satisfaction 2.388 .124 Outcome satisfaction 5.868 .016 The hypothesis that the score of mental-model maintenance for subjects who use DSS without CMCA will be higher than for those who use DSS with CMCA is supported. As Table 7 indicates, for subjects in the group without CMCA, the average-rated mental-model maintenance was 5.03 (out of 7). For subjects in the group with CMCA, the average-rated mental-model maintenance score was 4.71 (out of 7). The significant result indicates that CMCA can effectively reduce mental-model maintenance. The hypothesis that the score of mental-model building for subjects who use DSS with CMCA will be higher than for those who use DSS without CMCA is supported. In addition, as Table 7 shows, for subjects in the group without CMCA, the average-rated mental-model building was 4.96 (out of 7); for subjects in the group with CMCA, the average-rated mental-model building was 5.36 (out of 7). The MONOVA result shows that counter-argument has an effect on outcome satisfaction but not on process satisfaction. The hypothesis that subjects who use DSS with CMCA will feel more satisfied toward the system is not 119
  • 10. supported. However, significant difference can be found between these two groups and H2, and thus, the satisfaction toward decision-making outcome is supported. The average scores and standard deviation for process and outcome satisfaction for each group are provided in Table 7. Table 7. Analytical results of mental-model maintenance Computer-mediated counter-argument w/o w Mental-model maintenance 5.03 (0.88) 4.71 (0.64) Mental-model building 4.96 (0.81) 5.36 (0.87) Process satisfaction 4.88 (0.90) 5.07 (0.96) Outcome satisfaction 4.86 (0.89) 5.19 (1.02) Discussion One of our hypotheses (H2) was not supported. Contrary to our expectation that subjects provided with CMCA would be interrupted by the popping-up of counter-argument during the decision-making process and would tend to feel less satisfied, subjects in the group with counter-argument felt even more satisfied than the group without counter-argument. A couple of reasons may have caused this to happen. First, decision-process satisfaction is measured by a subject’s perceived efficiency, satisfaction, and clearness toward the decision-making process. In general, decision makers do not read all the information provided by the system. Accompanied by the experimenter’s introduction, the decision-making process is very easy to understand and follow. The measurement may not exactly reflect the actual problem with counter-argument. Therefore, there is no significant difference between these two groups. There is a need to develop another instrument to understand subject’s satisfaction toward the decision- making process. Second, we measured the subjects’ perceived process satisfaction and perceived outcome satisfaction together, after the decision had been made. Although the questionnaire focused on the decision-making process, the decision makers’ perceived satisfaction toward the process may have been affected by their perceived satisfaction toward the decision outcome. Therefore, future research should reconsider the most appropriate time for collecting process satisfaction. Conclusion Because human beings have bounded rationality, our decision-making process is largely affected by cognitive biases (Tversky & Kahneman, 1974). This study focused on the introduction of a de-bias function, which may reduce the negative consequences of confirmation bias in the computer-supported decision-making context. The relationships between counter-argument and decision outcomes were hypothesized. After collecting data from 187 experienced financial investors, three out of four proposed hypotheses were supported (the relationship between the providing of CMCA and process satisfaction was not supported). We successfully showed how the de-bias function can trigger individual learning by leading decision makers to challenge their current mental models. This result contributes to academics and practitioners in several ways. Implications for research First, our results indicate that decision support systems with specific functions can stimulate learning by reforming decision makers’ mental model. How users can learn through the use of information systems has long been ignored by information system research (Alavi & Leidner, 2001). In this study, we focused on two different approaches to learning: mental-model maintenance and mental-model building. However, biases emerged while decision makers attempt to maintain current mental models. 
This research shows that with pre-design functions, users may challenge their initial beliefs and reform their mental models. Second, there was no difference with regard to process satisfaction with or without counter-argument. This indicates that although a new function may be useful to minimize overconfidence and to guide the decision maker to balance 120
  • 11. their decision, people may be against the de-bias function in the decision-making process. The existence of counter- argument may interrupt decision makers’ information reading and downgrade their processing fluency. This implies that, although counter-argument allows decision makers to challenge their assumptions, an unwanted emotional side effect may also appear. Future research may explore other emotional consequences of using de-biasing functions embedded in computer-supported decision-making tools. Implications for practice Learning takes place when learners enhance their current mental models or build new mental models. It is vital to build an environment to help individuals to avoid cognitive bias as well as to support the building and reforming of mental models. Computer-supported learning is popular and broadly adopted by the contemporary education system. In addition to providing various types of knowledge in a more efficient or effective manner through the support of information technology, approaches such as counter thinking, which provide stimulus for learners to challenge their existing mental model, should be adopted by educators or embedded into the learning system. For a decision-support-systems designer, de-bias functions should be included in the system design. Our study shows that systems should provide certain functions to prevent bias or support learning. However, designers should also note that some unwanted effects may be caused by those de-bias functions. For example, in our study, we provided one piece of counter-argument when subjects read a piece of news. The pop-up counter-argument may lead to negative emotions, for example, annoyance or confusion. Too many counter-arguments or too much information may increase the effect of information overload. Therefore, the frequency and timing used to provide counter-argument is important. System designers should also consider the type of task and other factors. Limitations and future research This study is not without limitations. First, a between-group comparison was conducted and subjective evaluation of perceived mental-model building was used to evaluate the learning effect. Some may argue that although this approach may capture part of the learning effect, a pre- and post-experiment comparison should be made in order to truly reflect how counter-argument changes the mental model. Therefore, further research should extend this study by employing within-individual experiment and examining the change of one’s mental model objectively. Second, this was a cross-sectional study and the existence of counter-argument was unexpected by the subjects. People may get used to the appearance of counter-argument and ignore it automatically (e.g., Handzic & Tolhurst, 2002). Future research may extend this study by conducting a longitudinal study to examine the above assumption. Third, only a financial decision-making task was used. Since confirmation bias can be applied to various decision-making tasks and counter-argument may serve as a useful tool for eliminating it, future research should extend the proposed concept to other decision-making areas to test the effect of counter-argument on decision-making. Fourth, only one de-biasing function was included in this study. One bias may be eliminated through different approaches under different settings. Future research should include other de-biasing functions and compare the effect of different de- biasing functions under different settings. 
Lastly, although we have illustrated the effect of confirmation bias and methods to prevent it, we did so with a simple experiment design. Future studies may examine this issue in more complicated circumstances, such as a group decision-making or social decision-making process. References Alavi, M., & Leidner, D. E. (2001). Research commentary: Technology-mediated learning: A call for greater depth and breadth of research. Information System Research, 12(1), 1–10. Argyris, C., & Schön, D. (1978). Organizational Learning: A Theory of Action Approach. Boston, MA: Addison-Wesley. Arnott, D. (2006). Cognitive biases and decision support systems development: A design science approach. Information Systems Journal, 16(1), 55–78. Barber, B. M., & Odean, T. (2002). Online investors: Do the slow die first? The Review of Financial Studies, 15(S2), 455–487. Bhandari, G., Deaves, R., & Hassanein, K. (2008) Debiasing investors with decision support systems: An experimental investigation. Decision Support Systems, 46(1), 399–410. 121
  • 12. Cannon-Bowers, J. A., Salas, E., & Converse, S. (1993). Shared mental models in expert team decision making. In J. N. J. Castellan (ed.), Individual and group decision making. Hillsdale, NJ: Lawrence Erlbaum Associates, 221–246. Craik, K. J. W. (1943). The Nature of Explanation. Cambridge, UK: Cambridge University Press. Davis, F. D., & Kottemann, J. E. (1994). User perceptions of decision support effectiveness: Two production planning experiments. Decision Sciences, 25(1), 57–76. Festinger, L. A. (1957). Theory of Cognitive Dissonance. Stanford, CA: Stanford University Press. Fischhoff, B. (1982) Debiasing. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases. Cambridge, Cambridge University Press, 422–444. Gagne, R. M. 1977. The condition of learning, New York: Holt, Rinehart and Winston. Green, S. G., & Taber, T. D. (1980). The effects of three social decision schemes on decision group process. Organizational Behavior and Human Performance, 25(1), 97–106. Grønhaug, K., & Falkenberg, J. S. (1989). Exploring strategy perceptions in changing environments. Journal of Management Studies, 26(4), 349–359. Hair, J. F., Black, B., Anderson, R. E., Tatham, R. L., & Black, W. C. (2006). Multivariate data analysis, 6th ed. Englewood Cliffs, NJ: Prentice Hall. Handzic, M., & Tolhurst, D. (2002). Evaluating an interactive learning environment in management education. Educational Technology & Society, 5(3), 113–122. Heath, C. (1996). Do People Prefer to Pass Along Good or Bad News? Valence and Relevance of News as Predictors of Transmission Propensity. Organizational Behavior and Human Decision Processes, 68(2), 79–94. Hoch, S. J., & Deighton, J. (1989). Managing What Consumers Learn from Experience. Journal of Marketing, 53(2), 1–20. Holyoak, K. J. (1984). Analogical thinking and human intelligence. In Advances in the psychology of human intelligence, R. J. Sternberg (Eds.), Hillsdale, NJ: Erlbaum, 199–223. Hung, S. Y., Ku, Y. C., Liang, T. P., & Lee, C. J. (2007). Regret avoidance as a measure of DSS success: An exploratory study. Decision Support Systems, 42(4), 2093–2106. Kiesler, S. & Sproull, L. (1982). Managerial response to changing environments: Perspectives on problem sensing from social cognition. Administrative Science Quarterly, 27(4), 548–570. March, P. (1991). How to people with a mild/moderate mental handicap conceptualise physical illness and its cause. British Journal of Mental Subnormality, 37(73), 80–91. Melone, N. P., McGuire, T. W., Hinson, G. B., & Yee, K. Y. (1993). The effect of decision support systems on managerial performance and decision confidence. Proceedings of the 26th Hawaii International Conference on System Sciences, 482–489. Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175– 220. Norman, D. A. (1982). Learning and memory. San Francisco: W. H. Freeman and Company. Patel, A., Kinshuk, & Russell, D. (2000). Intelligent tutoring tools for cognitive skill acquisition in life long learning. Educational Technology & Society, 3(1), 32–40. Piaget, J. (1954). The construction of reality in the child, New York: Basic Books. Quinn, J. B. (1980). Strategies for change: Logical incrementalism, Homewood, IL: Richard D Irwin. Rouse, W. & Morris, N. (1986). On looking into the black box: Prospects and limits in the search for mental models. Psychological bulletin, 100(3), 349–363. Russo, J. E., Medvec, V. H., & Meloy, M. G. (1996). 
The distortion of information during decisions. Organizational Behavior and Human Decision Processes, 66(1), 102–110. Sanders, G. L., & Courtney, J. F. (1985). A field study of organizational factors influencing DSS success. MIS Quarterly, 9(1), 77–93. Santana, M. (1995). Managerial learning: a neglected dimension in decision support systems. Proceedings of the 28th Annual Hawaii International Conference on System Sciences, 82–91. Schonpflug, W., & Battmann, W. (1988). The costs and benefits of coping. In S. Fisher & J. Reason (Eds.), Handbook of life stress, cognition, and health, New York: Wiley, 699–713. 122
  • 13. Sinitsa, K. (2000). Learning individually: A life-long perspective. Educational Technology & Society, 3(1), 17–23. Speier, C., Vessey, I., & Valacich, J. S. (2003). The effects of interruptions, task complexity and information presentation on computer-supported decision-making performance. Decision Sciences, 34(4), 771–797. Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131. Vandenbosch, B., & Higgins, C. (1994). The measurement of learning from executive information systems. Proceedings of the Twenty-Seventh Hawaii International Conference on System Science, Hawaii, 572–585. Wason, P. C. (1960). On the failure to eliminate hypotheses in a conceptual task. The Quarterly Journal of Experimental Psychology, 12(3), 129–140. Zohar, D. (1999). When things go wrong: The effect of daily work hassles on effort, exertion, and negative mood. Journal of Occupational and Organizational Psychology, 72(3), 265–283. 123