Certain interesting features regarding the study of heuristics and biases in the writings of Tversky and Kahneman
Munish Alagh
Many decisions are based on beliefs concerning the likelihood of uncertain events. Occasionally, beliefs concerning uncertain events are expressed in numerical form as odds or subjective probabilities. The subjective assessment of probability involves judgements based on data of limited validity, which are processed according to heuristic rules. Reliance on these rules, however, leads to systematic errors, and such biases are also found in the intuitive judgement of probability. Kahneman and Tversky[1] describe three heuristics that are employed to assess probabilities and to predict values, enumerate the biases to which these heuristics lead, and discuss the applied and theoretical implications of these observations. The discussion below is based broadly on the writings of Kahneman and Tversky; the following heuristics, and the biases they lead to, are discussed:
- Representativeness
- Adjustment and Anchoring
Representativeness:
Judging probability by representativeness has important virtues: the intuitive impressions that it produces are often (indeed, usually) more accurate than chance guesses would be.[2] However, this approach to the judgement of probability leads to serious errors, because similarity, or representativeness, is not influenced by several factors that should affect judgments of probability.
Certain interesting features regarding the errors
which result from representativeness are:
Insensitivity to prior probability of
outcomes:
It is noticed that subjects use prior probabilities correctly when they have no other information. However, prior probabilities are effectively ignored when a description is introduced, even when this description is totally uninformative. Evidently, people respond differently when given no evidence and when given worthless evidence: when no specific evidence is given, prior probabilities are properly utilized; when worthless evidence is given, prior probabilities are ignored.[3]
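The normative point can be made concrete with Bayes' rule. The sketch below is not from the source; the numbers echo the classic 70-engineer / 30-lawyer setup, and the point is that a worthless description (likelihood ratio 1) should leave the prior untouched, whereas subjects report roughly 0.5 in both base-rate conditions.

```python
def posterior(prior, likelihood_ratio):
    """Posterior probability via Bayes' rule in odds form.

    prior: base-rate probability of the hypothesis (e.g. P(engineer)).
    likelihood_ratio: P(evidence | hypothesis) / P(evidence | alternative).
    """
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# A totally uninformative description has likelihood ratio 1:
# the posterior should simply equal the prior.
print(round(posterior(0.7, 1.0), 3))  # 0.7 in the high-base-rate group
print(round(posterior(0.3, 1.0), 3))  # 0.3 in the low-base-rate group
```

With an uninformative description the normatively correct answers differ between the two groups exactly as the priors do; ignoring the base rates and answering 0.5 in both is the bias.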
Insensitivity
to sample size:
Subjects failed to appreciate the role of sample size even when it was emphasized in the formulation of the problem. A similar insensitivity to sample size has been reported in judgments of posterior probability, that is, of the probability that a sample has been drawn from one population rather than from another. Here again, intuitive judgments are dominated by the sample proportion and are essentially unaffected by the size of the sample, which plays a crucial role in the determination of the actual posterior odds.[4]
In addition, intuitive estimates of posterior odds are far less extreme than
the correct values. The underestimation of the impact of evidence has been
observed repeatedly in problems of this type.[5] It has been labeled "conservatism."
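A short calculation shows why sample size, not just sample proportion, drives the posterior odds. This sketch is not from the source; it uses the classic two-urn setup (one urn 2/3 red, the mirror urn 2/3 white, equal priors), where the posterior odds reduce to (p/(1-p)) raised to the difference between successes and failures.

```python
from fractions import Fraction

def posterior_odds(successes, failures, p=Fraction(2, 3)):
    """Odds that a Bernoulli sample came from the p-majority urn rather
    than its mirror image, assuming equal priors.

    With likelihoods p**s * (1-p)**f versus (1-p)**s * p**f, the ratio
    collapses to (p / (1 - p)) ** (s - f).
    """
    return (p / (1 - p)) ** (successes - failures)

# 4-of-5 looks like stronger evidence than 12-of-20 (80% vs 60%),
# but the larger sample actually yields the more extreme odds:
print(posterior_odds(4, 1))    # 8:1
print(posterior_odds(12, 8))   # 16:1
```

Intuition tracks the proportions (4/5 versus 12/20) and so ranks the samples backwards; the correct odds depend on the difference in counts, which the larger sample wins.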
Misconceptions of chance:
Misconceptions of chance are
not limited to naive subjects. A study of the statistical intuitions of
experienced research psychologists[6]
revealed a lingering belief in what may be called the "law of small
numbers," according to which even small samples are highly representative
of the populations from which they are drawn.
The illusion of validity:
The internal consistency of a pattern of inputs is a major determinant of one's confidence in predictions based on these inputs. Highly consistent patterns are most often observed when the input variables are highly redundant or correlated. Hence, people tend to have great confidence in predictions based on redundant input variables. However, an elementary result in the statistics of correlation asserts that, given input variables of stated validity, a prediction based on several such inputs can achieve higher accuracy when they are independent of each other than when they are redundant or correlated. Thus, redundancy among inputs decreases accuracy even as it increases confidence, and people are often confident in predictions that are quite likely to be off the mark.[7]
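The statistical result can be stated in one formula. The sketch below is an illustration I am adding, not the source's: for n unbiased predictors with equal variance sigma^2 and pairwise correlation rho, the error variance of their average is sigma^2 * (1 + (n - 1) * rho) / n, so independent inputs shrink the error while fully redundant ones add nothing.

```python
def variance_of_average(sigma2, rho, n=2):
    """Variance of the mean of n equal-variance predictors with
    common pairwise correlation rho:

        Var(mean) = sigma2 * (1 + (n - 1) * rho) / n
    """
    return sigma2 * (1 + (n - 1) * rho) / n

print(variance_of_average(1.0, 0.0))  # 0.5: two independent inputs halve the error variance
print(variance_of_average(1.0, 1.0))  # 1.0: fully redundant inputs gain nothing
```

Redundant (highly correlated) inputs are exactly the ones that look consistent and inspire confidence, yet they are the ones that fail to reduce the error.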
Regression to the mean:
Regression to the mean involves a later observation falling closer to the average than the earlier value of the variable observed. Regression to the mean has an explanation, but it does not have a cause.[8] Regression effects are ubiquitous, and so are misguided causal stories to explain them. The point to remember is that the change from the first to the second occurrence does not need a causal explanation: it is a mathematically inevitable consequence of the fact that luck played a role in the outcome of the first occurrence.
Regression
inevitably occurs when the correlation between two measures is less than
perfect.
The
correlation coefficient between two measures, which varies between 0 and 1, is
a measure of the relative weight of the factors they share.
Correlation and regression are not two concepts; they are different perspectives on the same concept. The general rule is straightforward but has surprising consequences:
whenever the correlation between two scores is imperfect, there will be regression
to the mean.
Our mind is
strongly biased toward causal explanations and does not deal well with “mere
statistics.” When our attention is called to an event, associative memory will
look for its cause; more precisely, activation will automatically spread to any
cause that is already stored in memory. Causal explanations will be evoked when
regression is detected, but they will be wrong because the truth is that
regression to the mean has an explanation but does not have a cause.
Regression
effects are a common source of trouble in research, and experienced scientists
develop a healthy fear of the trap of unwarranted causal inference.
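The "luck plays a role" argument can be simulated directly. This is a sketch of my own, not an example from the source: each score is a shared skill factor plus independent luck, so the two scores correlate imperfectly, and the group with the most extreme first scores regresses toward the mean on the second occasion with no cause at work.

```python
import random

random.seed(0)

# score = skill (shared across both occasions) + luck (fresh each time).
# skill and luck have equal variance, so the two scores correlate at 0.5.
n = 100_000
pairs = []
for _ in range(n):
    skill = random.gauss(0, 1)
    first = skill + random.gauss(0, 1)
    second = skill + random.gauss(0, 1)
    pairs.append((first, second))

# Select occasions whose FIRST outcome was extreme (partly good luck)...
top_second = [s2 for s1, s2 in pairs if s1 > 2.0]

# ...and their SECOND outcome regresses toward the mean, on average.
mean_second = sum(top_second) / len(top_second)
print(round(mean_second, 2))  # well below 2.0, yet nothing "caused" the decline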
Adjustment and
Anchoring:
Biases in the evaluation of compound events are particularly significant
in the context of planning. The successful completion of an undertaking, such
as the development of a new product, typically has a conjunctive character: for
the undertaking to succeed, each of a series of events must occur. Even when
each of these events is very likely, the overall probability of success can be
quite low if the number of events is large. The general tendency to
overestimate the probability of conjunctive events leads to unwarranted
optimism in the evaluation of the likelihood that a plan will succeed or that a
project will be completed on time. Conversely, disjunctive structures are
typically encountered in the evaluation of risks. A complex system, such as a
nuclear reactor or a human body, will malfunction if any of its essential
components fails. Even when the likelihood of failure in each component is
slight, the probability of an overall failure can be high if many components
are involved. Because of anchoring, people will tend to underestimate the
probabilities of failure in complex systems.
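The conjunctive and disjunctive arithmetic is worth seeing in numbers. This is an illustrative sketch of my own (the probabilities are invented, the structure is the source's): success of a plan requires every step to succeed, while failure of a system requires only one component to fail.

```python
# Conjunctive: a project succeeds only if all n steps succeed.
def p_all_succeed(p_step, n):
    return p_step ** n

# Disjunctive: a system fails if any of its n components fails.
def p_any_fails(p_component_fail, n):
    return 1 - (1 - p_component_fail) ** n

# Ten steps, each 95% likely: the plan is barely better than a coin flip.
print(round(p_all_succeed(0.95, 10), 3))   # 0.599

# A thousand components, each with a 0.1% failure rate: overall failure is likely.
print(round(p_any_fails(0.001, 1000), 3))  # 0.632
```

Anchoring on the per-step or per-component probability pulls estimates toward that single number, producing the unwarranted optimism about plans and the underestimation of failure in complex systems described above.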
Anchoring in the assessment of subjective probability distributions:
The subjects state overly narrow confidence intervals which reflect more certainty than is justified by their knowledge about the assessed quantities.
It is natural to begin by thinking about one's best estimate of the parameter and to adjust this value upward. If this adjustment, like most others, is
insufficient, then the upper value of the distribution will not be sufficiently
extreme. A similar anchoring effect will occur in the selection of the lower
value of the distribution, which is presumably obtained by adjusting one's best
estimate downward. Consequently, the confidence interval between the lower and
upper values of the distribution will be too narrow, and the assessed
probability distribution will be too tight.
Discussion:
Statistical principles are not learned from everyday experience because the relevant instances are not coded appropriately.
The lack of an appropriate code also explains why people usually do not
detect the biases in their judgments of probability.
The inherently subjective nature of probability has led many students to
the belief that coherence, or internal consistency, is the only valid criterion
by which judged probabilities should be evaluated. From the standpoint of the
formal theory of subjective probability, any set of internally consistent
probability judgments is as good as any other. This criterion is not entirely
satisfactory, because an internally consistent set of subjective probabilities
can be incompatible with other beliefs held by the individual. Consider a
person whose subjective probabilities for all possible outcomes of a
coin-tossing game reflect the gambler's fallacy. That is, his estimate of the
probability of tails on a particular toss increases with the number of
consecutive heads that preceded that toss. The judgments of such a person could
be internally consistent and therefore acceptable as adequate subjective
probabilities according to the criterion of the formal theory. These
probabilities, however, are incompatible with the generally held belief that a
coin has no memory and is therefore incapable of generating sequential dependencies.
For judged probabilities to be considered adequate, or rational, internal consistency is not enough. The judgments must be compatible with the entire web
of beliefs held by the individual. Unfortunately, there can be no simple formal
procedure for assessing the compatibility of a set of probability judgments
with the judge's total system of beliefs.
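The incompatibility in the gambler's-fallacy example can be checked empirically. This simulation is my own addition, not the source's: if the coin has no memory, the frequency of tails immediately after a run of three heads should be about one half, not elevated.

```python
import random

random.seed(1)

# One long sequence of fair coin tosses.
tosses = [random.choice("HT") for _ in range(200_000)]

# Collect the outcome that follows every run of three consecutive heads.
after_three_heads = [
    tosses[i + 3]
    for i in range(len(tosses) - 3)
    if tosses[i:i + 3] == ["H", "H", "H"]
]

p_tails = after_three_heads.count("T") / len(after_three_heads)
print(round(p_tails, 2))  # close to 0.5: the coin generates no sequential dependencies
```

A judge whose subjective probability of tails rises with the preceding run of heads can be perfectly coherent internally, yet this belief contradicts the memoryless behavior the simulation exhibits, which is exactly the point about compatibility with one's wider web of beliefs.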
[1] Amos Tversky and Daniel Kahneman, "Judgment under Uncertainty: Heuristics and Biases," Science 185 (1974): 1124-31.
[2] Daniel Kahneman, Thinking, Fast and Slow (2011).
[3] Daniel Kahneman and Amos Tversky, "On the Psychology of Prediction," Psychological Review 80 (1973): 237-51.
[4] Daniel Kahneman and Amos Tversky, "Subjective Probability: A Judgment of Representativeness," Cognitive Psychology 3 (1972): 430-54.
[5] Ward Edwards, "Conservatism in Human Information Processing" (1968).
[6] Kahneman and Tversky (1972).
[7] Ibid.
[8] Kahneman (2011), chapter 17.