Abstract
Several studies have shown that when people are asked to estimate the probabilities for a set of mutually exclusive and exhaustive events, they often produce probabilities that add up to more than 1, or 100% (Robinson & Hastie, 1985; Teigen, 1974, 1983), thus violating the additivity principle of formal probability theory. The present dissertation aims to further investigate the factors that determine this additivity neglect and the cognitive processes underlying it.
Paper I investigated the notion that this bias is related to a lack of mathematical skills, by giving participants a numeracy test. Further, several pilot studies had suggested that answering format affected the additivity of people's estimates: a written format (writing estimates in an empty slot next to each outcome in the set) seemed to prompt more additive responses than a scale format (circling numbers on horizontal 0–100% rating scales for each outcome in the set).
The overall results showed that numeracy (Experiments 1 and 3) was positively related to additive responses. In addition, varying the placement of the numeracy test (before vs. after the estimation tasks) revealed that answering the numeracy test prior to the probability tasks "primed" participants (mainly those with high numeracy) to answer according to mathematical principles. It is thus not sufficient to have high numeracy; one must also be reminded that mathematical rules apply. We also found a clear tendency for estimates given in the written format to be lower than estimates given in the scale format (Experiments 2, 3, and 4). In the single-outcome conditions, the written format also yielded more additive responses.
Paper II further investigated the answering-format difference found in Paper I by using a process measure in the form of eye-tracking. The results showed that participants in the Self-generated condition had more fixations and, on average, almost twice as many revisits between the alternatives as participants in the Scale condition, indicating more comparisons between the alternatives in the set in the Self-generated condition. We had also expected participants in the Self-generated condition to have longer fixation durations than participants in the Scale condition; however, this was not the case. Overall, the results from Paper II indicate that the Scale format might prompt a selective evaluation of the alternatives and thereby discourage comparisons between them, whereas the Self-generated format seems to facilitate a class-based approach and make people engage in more comprehensive comparisons.
Paper III compared additivity neglect to another type of bias, the nonselective superiority bias (NSSB): the consistent evaluation of individual members of a positive set of items (e.g., five good movies) as superior to most other members of the set (Giladi & Klar, 2002). Both biases violate basic formal constraints: a set of attractive candidates cannot all be rated as better than the group mean (NSSB), and the probabilities of a set of mutually exclusive and exhaustive events cannot add up to more than 100% (additivity neglect).
Participants in three experiments were asked to give both probability estimates and comparative judgments in separate tasks. The results from all experiments indicated several similarities between the nonselective superiority bias and additivity neglect. The two biases seem to be about equally widespread: a majority of participants' probability sums far exceeded 100%, and mean ratings were significantly greater than zero (the normative mean for items in a set), even when all alternatives were presented together. The two biases were also related, as participants who gave additive probability estimates gave more balanced distributions of ratings in the NSSB tasks. However, the NSSB seems to be more robust, in the sense that the degree of bias could not be reduced by changing the answering format to a Self-generated format.