Heuristics and Biases
Many decisions are based on beliefs concerning the likelihood of uncertain events.
Occasionally, these beliefs are expressed in numerical form as odds or subjective
probabilities. The subjective assessment of probability involves judgements based
on data of limited validity, which are processed according to heuristic rules.
Reliance on these rules, however, leads to systematic errors, and such biases are
found even in the intuitive judgement of probability. Tversky and Kahneman [1]
describe heuristics that are employed to assess probabilities and to predict
values, enumerate the biases to which these heuristics lead, and discuss the
applied and theoretical implications of these observations.
·
Law of Small Numbers.
·
Anchors.
·
Availability.
·
Affect Heuristic.
·
Representativeness.
·
Conjunction fallacy.
·
Stereotyping.
·
Regression to the mean.
·
Substitution.
Kahneman [2]
starts with the notion that our minds contain two interactive modes of
thinking:
One
part of our mind (which he calls System 1) operates automatically and quickly,
with little or no effort and no sense of voluntary control.
The
other part of our mind (which he calls System 2) allocates attention to the
effortful mental activities that demand it, including complex computations. The
operations of System 2 are often associated with the subjective experience of
agency, choice, and concentration.[3]
In
other words, System 1 is unconscious, intuitive thought (automatic pilot), while
slower System 2 is conscious, rational thinking (effortful system).
When
we are awake, most of our actions are controlled automatically by System 1. The
mind
cannot consciously perform the thousands of complex tasks per day that human
functioning requires. System 2 is normally in a low-effort mode. System 2
activates when System 1 cannot deal with a task–when more detailed processing
is needed; only System 2 can construct thoughts in a step-by-step fashion. In
addition, it continuously monitors human behavior. The interactions of Systems
1 and 2 are usually highly efficient. However, System 1 is prone to biases and
errors, and System 2 is often lazy.
System 2 requires effort and acts of self-control in which the intuitions and impulses
of System 1 are overcome. Attention and effort are required for the lazy System 2 to
act.[4]
System
2 is by its nature lazy.[5]
System
1 works in a process called associative
activation: ideas that have been evoked trigger connected coherent ideas.[6]
System 2 is required when cognitive strain arises because the demands of the situation
are not being met and more focus is needed.[7]
Our System 1 develops our image of what is normal; associative ideas are formed which
represent the structure of events in our life, our interpretation of the present, and
our expectation of the future.[8]
System 1 is a kind of machine that jumps to conclusions, which can lead to intuitive
errors; these may be prevented by a deliberate intervention of System 2.[9]
System
1 forms basic assessments by continuously monitoring what is going on inside and
outside the mind, and continuously generating assessments of various aspects of
the situation without specific intention and with little or no effort. These
basic assessments are easily substituted for more difficult questions.[10]
We first define Heuristics–“a simple
procedure that helps find adequate, though
often imperfect, answers to difficult
questions.”[11]
Heuristics allow humans to act fast, but they can also lead to wrong conclusions (biases)
because they sometimes substitute an easier question for the one asked. A type
of heuristic is the halo effect–“the tendency to like (or dislike) everything
about a person–including things you have not observed.”[12]
A
simple example is rating a baseball player as good at pitching because he is
handsome and athletic.
We first discuss the law of small numbers, which basically states that researchers
who pick too small a sample leave themselves at the mercy of sampling luck.[13]
Random events by definition do not behave in a systematic fashion, but collections of
random events do behave in a highly regular fashion. This can lead to an illusion of
causation.
Predicting results rests on the following facts. Results from large samples deserve
more trust than results from smaller samples; we know this as the law of large
numbers. More significantly, the following two statements mean exactly the same thing
(a small simulation after this list illustrates the point):
·
Large samples are more precise than small samples.
·
Small samples yield extreme results more often than large samples do.
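As a concrete check on the second statement, here is a minimal Python simulation; it is not from the book, and the coin-flip setup and the 60/40 cut-off for an “extreme” result are illustrative choices. It draws repeated samples from a fair coin and counts how often the observed proportion of heads strays at least ten percentage points from 50%.

```python
import random

def extreme_share(sample_size, trials=10_000):
    """Fraction of samples of a fair coin whose observed proportion of heads
    is at least 60% or at most 40% (an arbitrary, illustrative cut-off)."""
    extreme = 0
    for _ in range(trials):
        heads = sum(random.random() < 0.5 for _ in range(sample_size))
        share = heads / sample_size
        if share >= 0.6 or share <= 0.4:
            extreme += 1
    return extreme / trials

for n in (10, 100, 1000):
    print(f"n={n:4d}: extreme result in {extreme_share(n):.1%} of samples")
```

With ten flips, an extreme proportion turns up in roughly three quarters of the samples; with a hundred flips it happens a few percent of the time; with a thousand it essentially never does.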
Let
us repeat the following result: “researchers who pick too small a sample leave
themselves at the mercy of sampling luck”. Traditionally, psychologists do not
use calculations to decide on sample
size. They use their judgement, which is commonly flawed.
People
are not adequately sensitive to sample-size. The automatic part of our mind is
not prone to doubt. It suppresses ambiguity and spontaneously constructs
stories that are as coherent as possible. The effortful part of our mind is
capable of doubt, because it can maintain incompatible possibilities at the
same time.
The
strong bias toward believing that small samples closely resemble the population
from which they are drawn is also part of a larger story: we are prone to
exaggerate the consistency and coherence of what we see.
Our
predilection for causal thinking exposes us to serious mistakes in evaluating
the randomness of truly random events. We are pattern seekers, believers in a
coherent world, in which regularities appear not by accident but as a result of
mechanical causality or of someone’s intention. We do not expect to see
regularity produced by a random process, and when we detect what appears to be
a rule, we quickly reject the idea that the process is truly random. Random
processes produce many sequences that convince people that the process is not random
after all.
The
law of small numbers is part of two larger stories about the workings of the
mind.
·
The
exaggerated faith in small samples is only one example of a more general illusion-we
pay more attention to the content of messages than to information about their
reliability.
·
Statistics
produce many observations that appear to beg for causal explanations but do not
lend themselves to such explanations. Many facts of the world are due to
chance, including accidents of sampling. Causal explanations of chance events
are inevitably wrong.
Another example of a heuristic bias occurs when judgements are influenced by an
uninformative number (an anchor), which results from an associative activation
in System 1. People are influenced when they consider a particular value for an
unknown quantity before estimating it; the estimate then stays close to the
anchor. For example, two groups estimated Gandhi’s age when he died. The first
group was initially asked whether he was more than 114 years old when he died; the
second group was asked whether he was older than 35. The first group then
estimated a higher age at death than the second.[14]
Two
different mechanisms produce anchoring effects-one for each system. There is a
form of anchoring that occurs in a deliberate process of adjustment, an
operation of System 2. And there is anchoring that occurs by a priming effect,
an automatic manifestation of System 1.
Insufficient
adjustment neatly explains why you are likely to drive too fast when you come
off the highway into city streets-especially if you are talking with someone as
you drive.
Adjustment
is a deliberate attempt to find reasons to move away from the anchor: people
who are instructed to shake their head when they hear the anchor, as if they
rejected it, move farther from the anchor, and people who nod their head show
enhanced anchoring.
Adjustment
is an effortful operation. People adjust less (stay closer to the anchor) when
their mental resources are depleted, either because their memory is loaded with
digits or because they are slightly drunk. Insufficient adjustment is a failure
of a weak or lazy System 2.
Suggestion
is a priming effect, which selectively evokes compatible evidence. System 1
understands sentences by trying to make them true, and the selective activation
of compatible thoughts produces a family of systematic errors that make us
gullible and prone to believe too strongly whatever we believe.
A
process that resembles suggestion is indeed at work in many situations: System
1 tries its best to construct a world in which the anchor is the true number.
Suggestion
and anchoring are both explained by the same automatic operation of System 1.
A
key finding of anchoring research is that anchors that are obviously random can
be just as effective as potentially informative anchors. Anchors clearly do not
have their effects because people believe they are informative.
Anchoring
effects-sometimes due to priming, sometimes to insufficient adjustment-are
everywhere. The psychological mechanisms that produce anchoring make us far
more suggestible than most of us would want to be. And of course there are
quite a few people who are willing and able to exploit our gullibility.
A
strategy of deliberately “thinking the opposite” may be a good defense against
anchoring effects, because it negates the biased recruitment of thoughts that
produces these effects.
System
2 is susceptible to the biasing influence of anchors that make some information
easier to retrieve.
A
message, unless it is immediately rejected as a lie, will have the same effect
on the associative system regardless of its reliability. The gist of the
message is the story, which is based on whatever information is available, even
if the quantity of the information is slight and its quality is poor.
Anchoring
results from associative activation. Whether the story is true, or believable,
matters little, if at all. The powerful effect of random anchors is an extreme
case of this phenomenon, because a random anchor obviously provides no
information at all.
The
main moral of priming research is that our thoughts and our behaviour are
influenced, much more than we know or want, by the environment of the moment.
The
concept of availability is the
process of judging frequency by “the ease with which instances come to mind.” This
heuristic is known to be both a deliberate problem-solving strategy and an
automatic operation.[15]
A
question considered early was how many instances must be retrieved to get an
impression of the ease with which they come to mind. We now know the answer:
none.
The
availability heuristic, like other heuristics of judgement, substitutes one
question for another: you wish to estimate the size of a category or the
frequency of an event, but you report an impression of the ease with which
instances come to mind. Substitution of questions inevitably produces
systematic errors:
·
A salient
event that attracts your attention will be easily retrieved from memory.
·
A dramatic
event temporarily increases the availability of its category.
·
Personal
experiences, pictures, and vivid examples are more available than incidents
that happened to others, or mere words, or statistics.
Resisting this large collection of potential availability
biases is possible, but tiresome.
One of the best-known studies of availability suggests that
awareness of your own biases can contribute to peace in marriages, and probably
in other joint projects.
The ease with which instances come to mind is a System 1
heuristic, which is replaced by a focus on content when System 2 is more
engaged.
People who let themselves be guided by System 1 are more
strongly susceptible to availability biases than others who are in a higher
state of vigilance. The following are some conditions in which people “go with
the flow” and are affected more strongly by ease of retrieval than by the
content they retrieved:
·
When they
are engaged in another effortful task.
·
When they
are in a good mood.
·
If they
are depressed.
·
If they
are knowledgeable novices.
·
If they place high faith in intuition.
·
If they are, or are made to feel, powerful.
The affect heuristic is one in which people make judgements and decisions by
consulting their emotions. A particularly important related concept is the
availability cascade: the importance of an idea is often judged by the fluency
(and emotional charge) with which that idea comes to mind. This has consequences
for public policy, particularly with reference to the effect of the media.[16]
Availability
effects help explain the pattern of insurance purchases and protective action
after disasters. Victims and near victims are very concerned after a disaster.
However, the memories of the disaster dim over time, and so do worry and
diligence.
Protective
actions, whether by individuals or governments, are usually designed to be
adequate to the worst disaster actually experienced.
Estimates
of causes of death are warped by media coverage. The coverage is itself biased
towards novelty and poignancy. The media do not just shape what the public is
interested in, but are also shaped by it.
The notion
of an affect heuristic was developed, in which people make judgements and
decisions by consulting their emotions: Do I like it? Do I hate it? How
strongly do I feel about it?
“The
emotional tail wags the rational dog.” The affect heuristic simplifies our
lives by creating a world that is much tidier than reality. In the real world,
of course, we often face painful trade-offs between benefits and costs.
Availability
cascades are real and they undoubtedly distort priorities in the allocation of
public resources. One perspective is offered by Cass Sunstein who would seek
mechanisms that insulate decision makers from public pressures, letting the
allocation of resources be determined by impartial experts who have a broad
view of all risks and of the resources available to reduce them. Paul Slovic on
the other hand trusts the experts much less and the public somewhat more than
Sunstein does, and he points out that insulating the experts from the emotions
of the public produces policies that the public will reject-an impossible
situation in a democracy.
Representativeness is the heuristic of judging the probability that a case belongs
to a category by the degree to which it resembles the stereotype of that category.[17]
In
the absence of specific information about a subject, you will go by the base
rates.
Activation of association with a stereotype is an automatic activity of System 1.
Representativeness
involves ignoring both the base rates and the doubts about the veracity of the
description. This is a serious mistake, because judgements of similarity and
probability are not constrained by the same logical rules. It is entirely
acceptable for judgements of similarity to be unaffected by the base rates and
also by the possibility that the description was inaccurate, but anyone who
ignores base rates and the quality of evidence in probability assessments will
certainly make mistakes.
Logicians
and statisticians have developed competing definitions of probability, all very
precise.
In contrast, people who are asked to assess probability are not stumped, because
they do not try to judge probability as statisticians and philosophers use the
word. A question about probability or likelihood activates a mental shotgun,
evoking answers to easier questions.
Although
it is common, prediction by representativeness is not statistically optimal.
Judging
probability by representativeness has important virtues: the intuitive
impressions that it produces are often-indeed, usually-more accurate than
chance guesses would be.
In
other situations, the stereotypes are false and the representativeness
heuristic will mislead, especially if it causes people to neglect base-rate
information that points in another direction.
One
sin of representativeness is an excessive willingness to predict the occurrence
of unlikely (low base-rate) events.
People
without training in statistics are quite capable of using base rates in
predictions under some conditions.
Instructing
people to “think like a statistician” enhanced the use of base rate
information, while the instruction to “think like a clinician” had the opposite
effect.
Some
people ignore base rates because they believe them to be irrelevant in the
presence of individual information. Others make the same mistake because they
are not focussed on the task.
The
second sin of representativeness is insensitivity to the quality of evidence.
To
be useful your beliefs should be constrained by the logic of probability.
The
relevant “rules” for such cases are provided by Bayesian Statistics: the logic
of how people should change their mind in the light of evidence.
There
are two ideas to keep in mind about Bayesian reasoning and how we tend to mess
it up. The first is that base rates matter, even in the presence of evidence
about the case at hand. This is often not intuitively obvious. The second is
that intuitive impressions of the diagnosticity of evidence are often exaggerated.
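A minimal sketch of these two ideas, using Bayes’ rule in odds form; the 3% base rate and the likelihood ratio of 4 are made-up numbers, not figures from the book.

```python
def bayes_posterior(base_rate, likelihood_ratio):
    """Posterior probability from Bayes' rule in odds form:
    posterior odds = prior odds * likelihood ratio."""
    prior_odds = base_rate / (1 - base_rate)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Made-up numbers: a 3% base rate, and evidence four times more likely if the
# hypothesis is true than if it is false.
print(round(bayes_posterior(0.03, 4), 2))  # 0.11: the low base rate still dominates
```

Evidence that feels strongly diagnostic moves the probability far less than intuition suggests when the base rate is low.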
A conjunction fallacy is committed when people judge a conjunction of two events to
be more probable than one of the events in a direct comparison.[18]
When
you specify a possible event in greater detail you can only lower its
probability. So, there is a conflict between the intuition of representativeness
and the logic of probability.
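The arithmetic behind this is just the multiplication rule; the probabilities below are invented purely for illustration.

```python
# Invented numbers: event A, and the more detailed event "A and B".
p_a = 0.05                       # P(A)
p_b_given_a = 0.30               # P(B | A)
p_a_and_b = p_a * p_b_given_a    # multiplication rule: 0.015

# Adding detail multiplies in a factor <= 1, so the conjunction can never be
# more probable than either of its parts.
assert p_a_and_b <= p_a
print(p_a, p_a_and_b)            # 0.05 vs 0.015
```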
The word fallacy is used, in general, when people fail to apply a logical rule that
is obviously relevant. Kahneman and Tversky introduced the term conjunction fallacy
for this error of judging a conjunction of two events to be more probable than one
of its constituent events in a direct comparison.
The
fallacy remains attractive even when you recognise it for what it is.
The
uncritical substitution of plausibility for probability has pernicious effects
on judgements when scenarios are used as tools of forecasting.
Adding
detail to scenarios makes them more persuasive, but less likely to come true.
Less is more, sometimes even in joint evaluation: the scenario that is judged more
probable is unquestionably more plausible, a more coherent fit with all that is
known.
A
reference to a number of individuals brings a spatial representation to mind.
The
frequency representation, as it is known, makes it easy to appreciate that one
group is wholly included in the other. The solution to the puzzle appears to be
that a question phrased as “how many?” makes you think of individuals, but the same
question phrased as “what percentage?” does not.
The
laziness of System 2 is an important fact of life, and the observation that
representativeness can block the application of an obvious logical rule is also
of some interest.
Intuition
governs judgments in the between-subjects condition; logic rules in joint
evaluation. In other problems, in contrast, intuition often overcame logic even
in joint evaluation, although we identified some conditions in which logic
prevails.
The
blatant violations of the logic of probability that we had observed in
transparent problems were interesting.
Causes
trump statistics, in the sense that statistical base rates are generally
underweighted and causal base rates are considered as information about the
individual.[19]
Kahneman then considers a standard problem of Bayesian inference (the cab problem).
There are two items
of information: a base rate and the imperfectly reliable testimony of a
witness.
You
can probably guess what people do when faced with this problem: they ignore the
base rate and go with the witness.
Now
consider a variation of the same story, in which only the presentation of the
base rate has been altered.
The
two versions of the problem are mathematically indistinguishable, but they are
psychologically quite different. People who read the first version do not know
how to use the base rate and often ignore it. In contrast, people who see the
second version give considerable weight to the base rate, and their average
judgment is not too far from the Bayesian solution. Why?
In
the first version, the base rate is a statistical fact. A mind that is hungry
for causal stories finds nothing to chew on.
In
the second version, in contrast, you formed a stereotype, which you apply to
unknown individual observations. The stereotype is easily fitted into a causal
story. In this version, there are two causal stories that need to be combined
or reconciled. The inferences from the two stories are contradictory and
approximately cancel each other. The Bayesian estimate is 41%, reflecting the
fact that the base rate is a little more extreme than the reliability of the
witness.
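The figures usually quoted for the cab problem are assumed below (85% of the city’s cabs belong to one company, 15% to the other, and the witness is correct 80% of the time); they are not stated in the summary above, but they reproduce the 41% Bayesian estimate it mentions.

```python
# Assumed figures for the cab problem (not given in the summary above).
p_blue = 0.15                 # base rate of Blue cabs
p_correct_witness = 0.80      # witness reliability

# P(cab is Blue | witness says Blue), by Bayes' rule
numerator = p_blue * p_correct_witness
denominator = numerator + (1 - p_blue) * (1 - p_correct_witness)
print(round(numerator / denominator, 2))   # 0.41, the estimate quoted above
```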
The
example illustrates two types of base rates. Statistical base rates are facts
about a population to which a case belongs, but they are not relevant to the
individual case. Causal base rates change your view of how the individual case
came to be. The two types of base-rate information are treated differently:
·
Statistical base rates are generally underweighted, and sometimes neglected
altogether, when specific information about the case at hand is available.
·
Causal base rates are treated as information about the individual case and are
easily combined with other case-specific information.
The
causal version of the cab problem had the form of a stereotype: Stereotypes are
statements about the group that are (at least tentatively) accepted as facts
about every member.
These
statements are readily interpreted as setting up a propensity in individual
members of the group, and they fit in a causal story.
Stereotyping
is a bad word in our culture, but in the author's usage it is neutral. One of
the basic characteristics of System 1 is that it represents categories as norms
and prototypical exemplars; we hold in memory a representation of one or more
“normal” members of each of these categories. When the categories are social,
these representations are called stereotypes. Some stereotypes are perniciously
wrong, and hostile stereotyping can have dreadful consequences, but the
psychological facts cannot be avoided: stereotypes, both correct and false, are
how we think of categories.
You
may note the irony. In the context of the
problem, the neglect of base-rate information is a cognitive flaw, a
failure of Bayesian reasoning, and the reliance on causal base rates is
desirable. Stereotyping improves the accuracy of judgement. In other contexts,
however, there is a strong social norm against stereotyping, which is also
embedded in the law.
The
social norm against stereotyping, including the opposition to profiling, has
been highly beneficial in creating a more civilised and more equal
society. It is useful to remember, however, that neglecting valid stereotypes
inevitably results in suboptimal judgements.
The
explicitly stated base rates had some effects on judgment, but they had much
less impact than the statistically equivalent causal base rates. System 1 can
deal with stories in which the elements are causally linked, but it is weak in
statistical reasoning. For a Bayesian thinker, of course, the versions are
equivalent. It is tempting to conclude that we have reached a satisfactory
conclusion: causal base rates are used; merely statistical facts are more or
less neglected. The next study, however, shows that the situation is rather
more complex.
Individuals
feel relieved of responsibility when they know that others can take
responsibility.
Even
normal, decent people do not rush to help when they expect others to take on
the unpleasantness of dealing with a seizure.
Respondents
“quietly exempt themselves” (and their friends and acquaintances) from the
conclusions of experiments that surprise them.
To
teach students any psychology they did not know before, you must surprise them.
But which surprise will do? When respondents were presented with a surprising
statistical fact they managed to learn nothing at all. But when the students
were surprised by individual cases-two nice people who had not helped-they
immediately made the generalisation and inferred that helping is more difficult
than they had thought.
This
is a profoundly important conclusion. People who are taught surprising
statistical facts about human behaviour may be impressed to the point of
telling their friends about what they have heard, but this does not mean that
their understanding of the world has really changed. The test of learning
psychology is whether your understanding of situations you encounter has
changed, not whether you have learned a new fact. There is a deep gap between
our thinking about statistics and our thinking about individual cases.
Statistical results with a causal interpretation have a stronger effect on our
thinking than noncausal information. But even compelling causal statistics will
not change long-held beliefs or beliefs rooted in personal experience. On the
other hand, surprising individual cases have a powerful impact and are a more
effective tool for teaching psychology because the incongruity must be resolved
and embedded in a causal story.
Regression to the mean means that a later observation of a variable tends to lie
closer to the average than an earlier, extreme observation. Regression to the mean
has an explanation, but it does not have a cause.[20]
An
important principle of skill training: rewards for improved performance work
better than punishment of mistakes. This proposition is supported by much
evidence from research.
Because of regression
to the mean, poor performance is typically followed by
improvement and good performance by deterioration, without any help from either
praise or punishment.
The
feedback to which life exposes us is perverse. Because we tend to be nice to
other people when they please us and nasty when they do not, we are
statistically punished for being nice and rewarded for being nasty.
Regression
does not have a causal explanation. Regression effects are ubiquitous, and so
are misguided causal stories to explain them. The point to remember is that the
change from the first to the second occurrence does not need a causal
explanation. It is a mathematically inevitable consequence of the fact that
luck played a role in the outcome of the first occurrence.
Regression
inevitably occurs when the correlation between two measures is less than
perfect.
The
correlation coefficient between two measures, which varies between 0 and 1, is
a measure of the relative weight of the factors they share.
Correlation
and regression are not two concepts-they are different perspectives on the same
concept. The general rule is straightforward but has surprising consequences:
whenever the correlation between two scores is imperfect, there will be
regression to the mean.
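A small simulation makes the point without any causal story; the “skill plus luck” model of two test scores is an assumption, chosen so that the correlation between the scores is about 0.5.

```python
import random

random.seed(1)
N = 100_000
# Assumed model: each test score = a shared skill component + independent luck,
# giving two imperfectly correlated scores (correlation about 0.5).
skill = [random.gauss(0, 1) for _ in range(N)]
test1 = [s + random.gauss(0, 1) for s in skill]
test2 = [s + random.gauss(0, 1) for s in skill]

# Average scores of the people who finished in the top 1% on the first test.
top = sorted(range(N), key=lambda i: test1[i], reverse=True)[: N // 100]
avg1 = sum(test1[i] for i in top) / len(top)
avg2 = sum(test2[i] for i in top) / len(top)
print(round(avg1, 2), round(avg2, 2))   # roughly 3.8 and 1.9: the second score
                                        # falls about halfway back toward the mean
```

Nothing caused the drop from the first score to the second; the luck that helped produce the extreme first score simply did not repeat itself.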
Our
mind is strongly biased toward causal explanations and does not deal well with
“mere statistics.” When our attention is called to an event, associative memory
will look for its cause-more precisely, activation will automatically spread to
any cause that is already stored in memory. Causal explanations will be evoked
when regression is detected, but they will be wrong because the truth is that
regression to the mean has an explanation but does not have a cause.
System 2 finds regression difficult to understand and learn. This is due in part to
the insistent demand for causal interpretations, which is a feature of System 1.
Regression
effects are a common source of trouble in research, and experienced scientists
develop a healthy fear of the trap of unwarranted causal inference.
Intuitive predictions need to be corrected because they do not allow for regression
to the mean and are therefore biased. Correcting intuitive predictions is a task for
System 2.[21]
Life
presents us with many occasions to forecast. Some predictive judgments rely
largely on precise calculations. Others involve intuition and System 1 in two main varieties. Some
intuitions draw primarily on skill and expertise acquired by repeated
experience.
Other
intuitions, which are sometimes subjectively indistinguishable from the first,
arise from the operation of heuristics that often substitute an easy question
for the harder one that was asked. Of course, many judgements, especially in
the professional domain, are influenced by a combination of analysis and
intuition.
We
are capable of rejecting information as irrelevant or false, but adjusting for smaller
weaknesses in the evidence is not something that System 1 can do. As a result,
intuitive predictions are almost completely insensitive to the actual
predictive quality of the evidence. When a link is found, the principle that “what
you see is all there is” applies: your associative memory quickly and automatically constructs
the best possible story from the information available.
Next
the evidence is evaluated in relation to a relevant norm.
The
next step involves substitution and intensity matching.
The final step is a translation from an impression of the candidate's relative
standing into a number on the outcome scale.
Intensity
matching yields predictions that are as extreme as the evidence on which they
are based. By now you should realise that all these operations are features of
System 1. You should imagine a process of spreading activation that is
initially prompted by the evidence and the question, feeds back upon itself,
and eventually settles on the most coherent solution possible.
The
prediction of the future is not distinguished from an evaluation of current
evidence-prediction matches evaluation.
This
is perhaps the best evidence we have for the role of substitution. People are
asked for a prediction but they substitute an evaluation of the evidence,
without noticing that the question they answer is not the one they were asked.
This process is guaranteed to generate predictions that are systematically
biased; they completely ignore regression to the mean.
Intuitive
predictions need to be corrected because they are not regressive and are
therefore biased.
The
corrected intuitive predictions eliminate these biases, so that predictions
(both high and low) are about equally likely to overestimate and to
underestimate the true value. You will still make errors when your predictions
are unbiased, but the errors are smaller and do not favour either high or low
outcomes.
Correcting
your intuitive predictions is a task for System 2. Significant effort is
required to find the relevant reference category, estimate the baseline
prediction, and evaluate the quality of the evidence. The effort is justified
only when the stakes are high and when you are particularly keen not to make
mistakes.
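A minimal sketch of that kind of correction, with illustrative numbers; the 3.0 baseline, the 3.8 intuitive guess, and the 0.3 correlation are assumptions, not figures from the text. The idea is to move from the baseline toward the intuitive prediction only in proportion to the estimated correlation between the evidence and the outcome.

```python
def corrected_prediction(baseline, intuitive, correlation):
    """Regress an intuitive prediction toward the baseline in proportion to the
    estimated correlation between the evidence and the outcome."""
    return baseline + correlation * (intuitive - baseline)

# Assumed numbers: a class-average outcome of 3.0, an intuitive prediction of 3.8,
# and evidence whose correlation with the outcome is judged to be about 0.3.
print(corrected_prediction(baseline=3.0, intuitive=3.8, correlation=0.3))   # 3.24
```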
The
objections to the principle of moderating intuitive predictions must be taken
seriously, because absence of bias is not always what matters most. A
preference for unbiased predictions is justified if all errors of prediction
are treated alike, regardless of their direction. But there are situations in
which one type of error is much worse than another.
For
a rational person, predictions that are unbiased and moderate should not
present a problem.
Extreme
predictions and a willingness to predict rare events from weak evidence are
both manifestations of System 1. It is natural for the associative machinery to
match the extremeness of predictions to the perceived extremeness of evidence
on which it is based-this is how substitution works. And it is natural for
System 1 to generate overconfident judgements, because confidence, as we have
seen, is determined by the coherence of the best story you can tell from the
evidence at hand. Be warned: your intuitions will deliver predictions that are
too extreme and you will be inclined to put far too much faith in them.
Regression
is also a problem for System 2. The very idea of regression is alien and
difficult to communicate and comprehend.
Matching
predictions to the evidence is not only something we do intuitively; it also
seems a reasonable thing to do. We will not learn to understand regression from
experience. Even when a regression is identified, it will be given a causal
interpretation that is almost always wrong.
[1] Amos Tversky and Daniel Kahneman, “Judgment under Uncertainty: Heuristics and Biases,” Science, 1974.
[2] Daniel Kahneman, Thinking, Fast and Slow, 2011.
[3] Ibid., p. 21.
[4] Ibid., chapter 2.
[5] Ibid., chapter 3.
[6] Ibid., chapter 4.
[7] Ibid., chapter 5.
[8] Ibid., chapter 6.
[9] Ibid., chapter 7.
[10] Ibid., chapter 8.
[11] Ibid., p. 98.
[12] Ibid., p. 82.
[13] Ibid., chapter 10.
[14] Ibid., chapter 11.
[15] Ibid., chapter 12.
[16] Ibid., chapter 13.
[17] Ibid., chapter 14.
[18] Ibid., chapter 15.
[19] Ibid., chapter 16.
[20] Ibid., chapter 17.
[21] Ibid., chapter 18.